Add topology auth policies + journey findings notes

Concelier:
- Register Topology.Read, Topology.Manage, Topology.Admin authorization
  policies mapped to OrchRead/OrchOperate/PlatformContextRead/IntegrationWrite
  scopes. Previously these policies were referenced by endpoints but never
  registered, causing System.InvalidOperationException on every topology
  API call.
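A registration along these lines is what was missing. This is an illustrative sketch only: the policy names come from the notes above, but the scope strings and the exact builder calls are assumptions, not the committed code.

```csharp
// Sketch of the missing registration. Scope names ("orch:read" etc.) are
// hypothetical stand-ins for the OrchRead/OrchOperate/PlatformContextRead/
// IntegrationWrite scopes mentioned above; how the scope claim is shaped
// (single space-delimited claim vs. multiple claims) varies by token issuer.
services.AddAuthorization(options =>
{
    options.AddPolicy("Topology.Read", policy =>
        policy.RequireAssertion(ctx =>
            ctx.User.HasClaim("scope", "orch:read") ||
            ctx.User.HasClaim("scope", "platform.context.read")));

    options.AddPolicy("Topology.Manage", policy =>
        policy.RequireAssertion(ctx =>
            ctx.User.HasClaim("scope", "orch:operate") ||
            ctx.User.HasClaim("scope", "integration:write")));

    options.AddPolicy("Topology.Admin", policy =>
        policy.RequireAssertion(ctx =>
            ctx.User.HasClaim("scope", "orch:operate") &&
            ctx.User.HasClaim("scope", "integration:write")));
});
```

Without some such registration, ASP.NET Core throws an `InvalidOperationException` (policy not found) the first time any endpoint referencing the policy is evaluated, which matches the failure mode described above.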

Gateway routes:
- Simplified targets/environments routes (removed specific sub-path routes,
  use catch-all patterns instead)
- Changed environments base route to JobEngine (where CRUD lives)
- Changed to ReverseProxy type for all topology routes

KNOWN ISSUE (not yet fixed):
- ReverseProxy routes don't forward the gateway's identity envelope to
  Concelier. The regions/targets/bindings endpoints return 401 because
  hasPrincipal=False — the gateway authenticates the user but doesn't
  pass the identity to the backend via ReverseProxy. Microservice routes
  use Valkey transport which includes envelope headers. Topology endpoints
  need either: (a) Valkey transport registration in Concelier, or
  (b) Concelier configured to accept raw bearer tokens on ReverseProxy paths.
  This is an architecture-level fix.
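If option (b)'s counterpart on the gateway side is pursued, one possible shape is a YARP request transform that re-attaches the authenticated principal as an envelope header before proxying. This is a hedged sketch: the transform wiring follows YARP's public API, but the header name and Concelier's acceptance of it are assumptions, not current code.

```csharp
// Sketch only: forward the gateway-authenticated identity to Concelier on
// ReverseProxy paths. "X-StellaOps-Principal" is a hypothetical header name;
// Concelier would need a matching handler (or raw bearer acceptance) for it.
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"))
    .AddTransforms(builderContext =>
    {
        builderContext.AddRequestTransform(async transformContext =>
        {
            var user = transformContext.HttpContext.User;
            if (user.Identity?.IsAuthenticated == true)
            {
                transformContext.ProxyRequest.Headers.Remove("X-StellaOps-Principal");
                transformContext.ProxyRequest.Headers.Add(
                    "X-StellaOps-Principal", user.Identity.Name ?? string.Empty);
            }
            await ValueTask.CompletedTask;
        });
    });
```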

Journey findings collected so far:
- Integration wizard (Harbor + GitHub App): works end-to-end
- Advisory Check All: fixed (parallel individual checks)
- Mirror domain creation: works, generate-immediately fails silently
- Topology wizard Step 1 (Region): blocked by auth passthrough issue
- Topology wizard Step 2 (Environment): POST to JobEngine needs verification
- User ID resolution: raw hashes shown everywhere

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
master
2026-03-16 08:12:39 +02:00
parent 602df77467
commit da76d6e93e
223 changed files with 24763 additions and 489 deletions

View File

@@ -325,7 +325,7 @@ services:
console-builder:
condition: service_completed_successfully
environment:
ASPNETCORE_URLS: "http://0.0.0.0:8080"
ASPNETCORE_URLS: "http://0.0.0.0:8080;https://0.0.0.0:443"
<<: [*kestrel-cert, *gc-heavy]
ConnectionStrings__Default: *postgres-connection
ConnectionStrings__Redis: "cache.stella-ops.local:6379"
@@ -832,6 +832,9 @@ services:
CONCELIER_PLUGINS__BASEDIRECTORY: "/tmp/stellaops"
CONCELIER_POSTGRESSTORAGE__CONNECTIONSTRING: *postgres-connection
CONCELIER_POSTGRESSTORAGE__ENABLED: "true"
CONCELIER_MIRROR__ENABLED: "true"
CONCELIER_MIRROR__EXPORTROOT: "/var/lib/concelier/jobs/mirror-exports"
CONCELIER_MIRROR__ACTIVEEXPORTID: "latest"
CONCELIER_S3__ENDPOINT: "http://s3.stella-ops.local:8333"
CONCELIER_AUTHORITY__ENABLED: "true"
CONCELIER_AUTHORITY__ISSUER: "https://authority.stella-ops.local/"

View File

@@ -659,7 +659,9 @@ VALUES
'platform.context.read', 'platform.context.write',
'doctor:run', 'doctor:admin', 'ops.health',
'integration:read', 'integration:write', 'integration:operate', 'registry.admin',
'timeline:read', 'timeline:write'],
'timeline:read', 'timeline:write',
'signer:read', 'signer:sign', 'signer:rotate', 'signer:admin',
'trust:read', 'trust:write', 'trust:admin'],
ARRAY['authorization_code', 'refresh_token'],
false, true, '{"tenant": "demo-prod"}'::jsonb)
ON CONFLICT (client_id) DO NOTHING;

View File

@@ -58,3 +58,72 @@ $$;
-- Analytics schema
CREATE SCHEMA IF NOT EXISTS analytics;
-- ── Regions (bootstrap fallback for release.regions) ──
CREATE TABLE IF NOT EXISTS release.regions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id) ON DELETE CASCADE,
name VARCHAR(100) NOT NULL,
display_name VARCHAR(255) NOT NULL,
description TEXT,
crypto_profile VARCHAR(50) NOT NULL DEFAULT 'international',
sort_order INT NOT NULL DEFAULT 0,
status TEXT NOT NULL DEFAULT 'active' CHECK (status IN ('active','decommissioning','archived')),
metadata JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_by UUID,
UNIQUE(tenant_id, name)
);
-- ── Infrastructure Bindings (bootstrap fallback) ──
CREATE TABLE IF NOT EXISTS release.infrastructure_bindings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id) ON DELETE CASCADE,
integration_id UUID,
scope_type TEXT NOT NULL CHECK (scope_type IN ('tenant','region','environment')),
scope_id UUID,
binding_role TEXT NOT NULL CHECK (binding_role IN ('registry','vault','settings_store')),
priority INT NOT NULL DEFAULT 0,
config_overrides JSONB NOT NULL DEFAULT '{}',
is_active BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_by UUID
);
-- ── Topology Point Status (bootstrap fallback) ──
CREATE TABLE IF NOT EXISTS release.topology_point_status (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
target_id UUID,
gate_name TEXT NOT NULL,
status TEXT NOT NULL CHECK (status IN ('pending','pass','fail','skip')),
message TEXT,
details JSONB NOT NULL DEFAULT '{}',
checked_at TIMESTAMPTZ,
duration_ms INT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- ── Pending Deletions (bootstrap fallback) ──
CREATE TABLE IF NOT EXISTS release.pending_deletions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
entity_type TEXT NOT NULL CHECK (entity_type IN ('tenant','region','environment','target','agent','integration')),
entity_id UUID NOT NULL,
entity_name TEXT NOT NULL,
status TEXT NOT NULL CHECK (status IN ('pending','confirmed','executing','completed','cancelled')),
cool_off_hours INT NOT NULL,
cool_off_expires_at TIMESTAMPTZ NOT NULL,
cascade_summary JSONB NOT NULL DEFAULT '{}',
reason TEXT,
requested_by UUID NOT NULL,
requested_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
confirmed_by UUID,
confirmed_at TIMESTAMPTZ,
executed_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

View File

@@ -119,6 +119,10 @@ Completion criteria:
| 2026-03-16 | FTUX-006 DONE: Removed ALL hardcoded fake data from dashboard-v3.component.ts. Fresh installs now show welcome setup guide with 4 steps. Environment cards show honest "unknown"/"No deployments" when no scan data exists. Removed fake summary, reachabilityStats, nightlyOpsSignals, alerts, and activity HTML. | Developer |
| 2026-03-16 | FTUX-007 DONE: Updated FEATURE_MATRIX.md — 14 release orchestration features marked ✅ (was ⏳), section header updated. | Developer |
| 2026-03-16 | Angular build verified — 0 errors, 3 pre-existing budget warnings only. | Developer |
| 2026-03-16 | Iteration 1: Wiped stack, fresh boot. Found dashboard fallback array still had fake data. Emptied it. Rebuild + redeploy. Dashboard now honest on fresh install. | Developer |
| 2026-03-16 | Iteration 2: Integration journey. Harbor + GitHub App fixtures started. Both created and connection-tested successfully. "Check All" advisory sources failed with 504 gateway timeout — fixed with parallel individual checks in batches of 6. Now shows live "Checking (N/M)..." progress, completes in ~30s. 54/55 healthy. | Developer |
| 2026-03-16 | Iteration 2: Mirror domain created (14 sources, signing enabled). "Generate immediately" fails silently (tracked). Created by shows raw user ID (tracked). | Developer |
| 2026-03-16 | Iteration 3: Topology wizard returned 503 for /api/v1/regions — Concelier topology endpoints had no gateway routes. Added 6 Microservice routes for regions, infrastructure-bindings, pending-deletions, targets validate/readiness, environments readiness. Wizard now loads. | Developer |
## Decisions & Risks
- Decision: curate advisory defaults rather than disable all — new users need working sources out of the box, just not 74 of them.

View File

@@ -5,7 +5,7 @@
## 1) Topology
- **Orchestrator API (`StellaOps.JobEngine`).** Minimal API providing job state, throttling controls, replay endpoints, and dashboard data. Authenticated via Authority scopes (`orchestrator:*`).
- **Job ledger (PostgreSQL).** Tables `jobs`, `job_history`, `sources`, `quotas`, `throttles`, `incidents` (schema `orchestrator`). Startup migrations execute with PostgreSQL `search_path` bound to `orchestrator, public` so unqualified DDL lands in the module schema during scratch installs and resets. Append-only history ensures auditability.
- **Job ledger (PostgreSQL).** Tables `jobs`, `job_history`, `sources`, `quotas`, `throttles`, `incidents` (schema `orchestrator`); pack registry tables `packs`, `pack_versions` (schema `packs`). Runtime sessions set `search_path` to `orchestrator, packs, public` so both schemas and their enum types (`job_status`, `pack_status`, `pack_version_status`) resolve correctly. The `jobs/summary` endpoint uses a single `COUNT(*) FILTER (WHERE status::text = ...)` aggregate query to avoid enum-vs-text mismatch and eliminate per-status round trips. Startup migrations execute idempotently; append-only history ensures auditability.
- **Queue abstraction.** Supports Valkey Streams or NATS JetStream (pluggable). Each job carries lease metadata and retry policy.
- **Dashboard feeds.** SSE/GraphQL endpoints supply Console UI with job timelines, throughput, error distributions, and rate-limit status.

View File

@@ -0,0 +1,354 @@
# Advisory & VEX Mirror Setup Audit - 2026-03-15
**Auditor**: AI agent acting as first-time operator setting up Stella Ops as a vulnerability/VEX advisory mirror
**Stack**: Live local (stella-ops.local), logged in as admin/Admin@Stella2026!
**Scope**: End-to-end assessment of adding, selecting, grouping, and aggregating advisory/VEX sources via UI, CLI, and backend
---
## Executive Summary
Stella Ops has a **well-architected backend** for advisory/VEX aggregation (47 sources, rate limiting, backoff, deduplication, conflict detection, VEX normalization, airgap/offline support). The **CLI is fully functional** with source management commands. However, **the UI has critical gaps** that prevent a first-time operator from setting up advisory sources without CLI or developer knowledge.
| Layer | Readiness |
|-------|-----------|
| Backend catalog (47 sources, 9 categories) | READY |
| Rate limiting & backoff | READY |
| VEX ingestion pipeline | READY |
| CLI source management | READY |
| CLI setup wizard | READY |
| Feeds & Airgap operations page | PARTIAL |
| UI source addition flow | MISSING |
| UI group/batch source selection | MISSING |
| UI source configuration (API keys, intervals) | MISSING |
---
## 1. Backend Source Catalog Assessment
### 1.1 Supported Sources (47 total)
**File**: `src/Concelier/__Libraries/StellaOps.Concelier.Core/Sources/SourceDefinitions.cs`
| Category | Count | Sources |
|----------|-------|---------|
| Primary Databases | 6 | NVD (NIST), OSV (Google), GitHub Security Advisories, CVE.org (MITRE), EPSS (FIRST), CISA KEV |
| Vendor Advisories | 11 | Red Hat, Microsoft MSRC, Amazon Linux, Google, Oracle, Apple, Cisco, Fortinet, Juniper, Palo Alto, VMware |
| Linux Distributions | 9 | Debian, Ubuntu, Alpine, SUSE, RHEL, CentOS, Fedora, Arch, Gentoo |
| Language Ecosystems | 9 | npm, PyPI, Go, RubyGems, NuGet, Maven, Crates.io, Packagist, Hex.pm |
| CSAF/VEX | 3 | CSAF Aggregator, CSAF TC Trusted Publishers, VEX Hub |
| CERTs/Government | 8 | CERT-FR, CERT-Bund (DE), CERT.at (AT), CERT.be (BE), NCSC-CH (CH), CERT-EU, JPCERT/CC (JP), CISA (US) |
| StellaOps Mirror | 1 | Pre-aggregated mirror endpoint |
**Assessment**: Comprehensive coverage. Each source has: ID, display name, category, base endpoint, health check endpoint, auth requirements, credential env var, documentation URL, default priority, region tags, and grouping tags.
### 1.2 Source Grouping Support (Backend)
**Grouping methods available in `SourceDefinitions`**:
- `GetByCategory(SourceCategory)` - Group by Primary/Vendor/Distribution/Ecosystem/Cert/Csaf/Threat/Mirror
- `GetByTag(string)` - Group by tags (e.g., "linux", "network", "eu", "ecosystem")
- `GetByRegion(string)` - Group by geographic region (FR, DE, EU, JP, APAC, US, NA)
- `GetAuthenticatedSources()` - Filter sources requiring API keys
**Assessment**: Backend supports flexible grouping. Tags like "vendor", "distro", "linux", "eu", "ecosystem" enable batch operations. **However, none of this is exposed in the UI.**
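Based on the method names above, batch selection in a future UI could reduce to calls like the following. This is a usage sketch: return types and the enum member names are assumptions, since only the method names are documented here.

```csharp
// Hypothetical usage of the SourceDefinitions grouping helpers listed above;
// exact signatures and SourceCategory member names may differ in the codebase.
var linuxSources = SourceDefinitions.GetByTag("linux");
var euCerts      = SourceDefinitions.GetByRegion("EU");
var primaries    = SourceDefinitions.GetByCategory(SourceCategory.Primary);

// Sources that need credentials before they can usefully be enabled:
var needKeys     = SourceDefinitions.GetAuthenticatedSources();
```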
### 1.3 Configuration Model
**File**: `src/Concelier/__Libraries/StellaOps.Concelier.Core/Configuration/SourceConfiguration.cs`
- **Source modes**: Direct (upstream), Mirror (pre-aggregated), Hybrid (mirror + direct fallback)
- **Per-source config**: Enabled/disabled, priority, API key, custom endpoint, request delay, failure backoff, max pages per fetch, metadata
- **Mirror server config**: Export root, authentication (Anonymous/OAuth/ApiKey/mTLS), rate limits, DSSE attestation signing
- **Auto-enable**: `AutoEnableHealthySources = true` by default
**Assessment**: Configuration model is complete and well-designed.
---
## 2. Rate Limiting & Graceful Aggregation Assessment
### 2.1 Per-Source Rate Limiting (Outbound - Concelier)
**File**: `src/Concelier/__Libraries/StellaOps.Concelier.Core/Configuration/SourceConfiguration.cs`
| Setting | Default | Purpose |
|---------|---------|---------|
| `RequestDelay` | 200ms | Delay between consecutive API calls to same source |
| `FailureBackoff` | 5 minutes | Cooldown after a source returns errors |
| `MaxPagesPerFetch` | 10 | Cap pages fetched per sync cycle |
| `ConnectivityCheckTimeout` | 30 seconds | Health check timeout |
**Assessment**: These defaults are reasonable and won't trigger upstream rate limits. NVD allows 50 req/30s with an API key; although 200ms spacing is 5 req/s, the 10-page fetch cap keeps each hourly sync to at most ~10 requests, well inside that window. OSV has no published rate limit. GHSA via GraphQL is limited to 5,000 points/hour.
### 2.2 VEX Hub Polling (Scheduler)
**File**: `src/VexHub/__Libraries/StellaOps.VexHub.Core/Ingestion/VexIngestionScheduler.cs`
| Setting | Default | Purpose |
|---------|---------|---------|
| `DefaultPollingIntervalSeconds` | 3600 (1 hour) | How often each source is polled |
| `MaxConcurrentPolls` | 4 | SemaphoreSlim-limited concurrent ingestions |
| `MaxRetries` | 3 | Retries per ingestion attempt |
| `FetchTimeoutSeconds` | 300 (5 min) | Per-source fetch timeout |
| `BatchSize` | 500 | Statements per batch upsert |
**Scheduler behavior**:
- Runs every 1 minute checking for due sources (`GetDueForPollingAsync`)
- Throttles with `SemaphoreSlim` (max 4 concurrent)
- Updates `LastPolledAt` and `LastErrorMessage` per source after each poll
- Per-source configurable `PollingIntervalSeconds`
**Assessment**: The 1-hour default polling interval with at most 4 concurrent polls is very conservative and graceful; there is no DDoS risk. When a source fails, the error is logged and the next poll is simply delayed by that source's interval. **However, there is no exponential backoff** - a failing source is retried at the same interval indefinitely. The `FailureBackoff` in `SourceConfig` (5 min) provides a short cooldown but not progressive backoff.
### 2.3 Inbound Rate Limiting (VexHub Mirror Server)
**File**: `src/VexHub/StellaOps.VexHub.WebService/Middleware/RateLimitingMiddleware.cs`
| Setting | Default | Purpose |
|---------|---------|---------|
| Anonymous limit | 60 req/min | Sliding window per IP |
| Authenticated limit | 120 req/min | Sliding window per API key |
| Idle cleanup | 5 min | Expired client entries pruned |
**Headers**: `X-RateLimit-Limit`, `X-RateLimit-Remaining`, `X-RateLimit-Reset`, `Retry-After`
**Assessment**: Proper rate limiting for when Stella Ops acts as a mirror server. Standard headers support client retry logic.
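The sliding-window behavior described above boils down to keeping per-client request timestamps and counting the ones still inside the window. A generic sketch, not the middleware's actual implementation:

```csharp
// Generic sliding-window admission check, one queue per client key (IP or
// API key). Not the RateLimitingMiddleware code, just the idea it implements.
static bool Allow(Queue<DateTimeOffset> hits, int limit, TimeSpan window,
                  DateTimeOffset now)
{
    while (hits.Count > 0 && now - hits.Peek() >= window)
        hits.Dequeue();              // drop timestamps that left the window
    if (hits.Count >= limit)
        return false;                // over limit: respond 429 + Retry-After
    hits.Enqueue(now);
    return true;
}
```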
### 2.4 Deduplication & Conflict Detection
**VEX Ingestion Pipeline**:
- SHA-256 content digest for deduplication
- Conflict detection: when two sources disagree on VEX status for the same CVE+product
- Conflict severity: Low/Medium/High/Critical
- Auto-resolution for low-severity conflicts
- Provenance tracking (audit trail per statement)
**Assessment**: Well-designed. Prevents duplicate data accumulation and tracks disagreements between sources.
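The digest-based deduplication amounts to hashing a canonical form of each statement and skipping digests already seen. A sketch; the pipeline's actual canonicalization rules are not shown here and matter in practice:

```csharp
// Sketch of content-digest dedup as described above. Canonicalizing the VEX
// statement JSON (key ordering, whitespace) is assumed to happen upstream.
static string ContentDigest(string canonicalJson)
{
    var bytes = System.Security.Cryptography.SHA256.HashData(
        System.Text.Encoding.UTF8.GetBytes(canonicalJson));
    return Convert.ToHexString(bytes).ToLowerInvariant();
}

// Upsert only when the digest is new, e.g.:
//   if (!seenDigests.Add(ContentDigest(statementJson))) continue;
```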
---
## 3. CLI Source Management Assessment
### 3.1 Sources Commands
**File**: `src/Cli/StellaOps.Cli/Commands/Sources/SourcesCommandGroup.cs`
| Command | Purpose | Status |
|---------|---------|--------|
| `stella sources list [--category] [--enabled-only] [--json]` | List all 47 sources with category filtering | IMPLEMENTED |
| `stella sources check [source] [--all] [--parallel N] [--timeout N] [--auto-disable]` | Connectivity check with auto-disable | IMPLEMENTED |
| `stella sources enable <sources...>` | Enable one or more sources by ID | IMPLEMENTED |
| `stella sources disable <sources...>` | Disable one or more sources by ID | IMPLEMENTED |
| `stella sources status [--json]` | Show current configuration status | IMPLEMENTED |
**Assessment**: Full CRUD for source management via CLI. Supports batch enable/disable (multiple source IDs in one command). Category filtering available. Auto-disable on connectivity failure.
### 3.2 Feeds Snapshot Commands
**File**: `src/Cli/StellaOps.Cli/Commands/FeedsCommandGroup.cs`
| Command | Purpose | Status |
|---------|---------|--------|
| `stella feeds snapshot create [--label] [--sources] [--json]` | Create atomic feed snapshot | IMPLEMENTED |
| `stella feeds snapshot list [--limit N]` | List available snapshots | IMPLEMENTED |
| `stella feeds snapshot export <id> --output <path> [--compression zstd\|gzip\|none]` | Export for offline/airgap use | IMPLEMENTED |
| `stella feeds snapshot import <file> [--validate]` | Import snapshot bundle | IMPLEMENTED |
| `stella feeds snapshot validate <id>` | Validate snapshot for drift | IMPLEMENTED |
**Assessment**: Complete snapshot lifecycle for offline/airgap operation. Supports zstd compression and integrity validation.
### 3.3 CLI Setup Wizard
**File**: `src/Cli/StellaOps.Cli/Commands/Setup/Steps/Implementations/SourcesSetupStep.cs`
The interactive setup wizard:
1. Runs connectivity checks against all 47 sources in parallel
2. Displays results with latency and error details
3. Offers remediation steps for failed sources
4. Prompts: auto-disable failures / manual fix / keep all
5. Prompts source mode: Mirror (recommended) / Direct / Hybrid
6. Optionally configures mirror server (export root, auth, rate limits)
7. Reports final count of enabled sources
**Assessment**: Excellent guided setup experience via CLI. This is exactly what the UI should replicate.
---
## 4. UI Assessment - Critical Gaps
### 4.1 Gap: No UI Flow to Add Advisory/VEX Sources (P0)
**Route**: `/setup/integrations/advisory-vex-sources` (or `/ops/integrations/advisory-vex-sources`)
**What exists**: An "Advisory & VEX" tab in the Integrations page showing "FeedMirror Integrations" with a "+ Add Integration" button.
**What happens**: Clicking "+ Add Integration" navigates to `/setup/integrations/onboarding` (or `/ops/integrations/onboarding`) which shows the generic onboarding hub with only 4 categories:
1. Container Registries (Harbor)
2. Source Control (GitHub App)
3. CI/CD Pipelines (disabled)
4. Hosts & Observers (disabled)
**Missing**: There is NO "Advisory & VEX Sources" category in the onboarding hub. A first-time operator clicking "Add Integration" from the Advisory & VEX tab lands on an irrelevant page with no way to add advisory sources.
**Impact**: The primary action for setting up advisory mirroring is a dead end in the UI.
### 4.2 Gap: No Source Catalog Browser in UI (P0)
The backend defines 47 sources with categories, descriptions, auth requirements, credential URLs, and documentation links. **None of this is exposed in any UI page.** A first-time operator has no way to:
- Browse available sources
- See which sources require API keys
- Understand source categories
- Learn about source coverage
### 4.3 Gap: No Group/Batch Source Selection in UI (P0)
The backend supports grouping by category, tag, and region (`GetByCategory`, `GetByTag`, `GetByRegion`). **The UI has no batch selection.** An operator cannot:
- "Enable all Linux distribution sources"
- "Enable all EU CERT sources"
- "Enable all ecosystem sources for my language stack"
- "Enable everything in the Primary category"
### 4.4 Gap: No Source Configuration UI (API keys, intervals) (P1)
Sources like GHSA and NuGet require a `GITHUB_PAT` token. NVD recommends an API key for higher rate limits. **The UI has no form for entering per-source credentials, polling intervals, or priority.**
### 4.5 Gap: FeedMirror Integrations Shows 0 but Feeds & Airgap Shows 2 (P1)
**Disconnection**:
- `/ops/operations/feeds-airgap` shows "Mirrors 2" (NVD Mirror, OSV Mirror) both "Fresh" and "OK"
- `/setup/integrations/advisory-vex-sources` shows "No feedmirror integrations found" with "0 pass / 0 warn / 0 fail"
These two pages show contradictory data. The operations page knows about 2 active mirrors but the integrations page shows 0. They appear to query different data sources.
### 4.6 Gap: Security Page Shows 6 Sources All Offline (P1)
**Route**: `/security` > "Advisories & VEX Health" section
Shows 6 sources all "offline - unknown":
- Internal VEX
- KEV
- NVD
- OSV
- Vendor Advisories
- Vendor VEX
Yet the Feeds & Airgap page shows NVD and OSV as "Fresh" and "OK". Another data disconnection.
**Also**: The "Configure sources" link on this section navigates to `/ops/integrations/advisory-vex-sources` which is the empty FeedMirror Integrations page. Dead end loop.
### 4.7 Gap: No Source Mode Selection in UI (P1)
The backend supports Direct/Mirror/Hybrid modes. The CLI setup wizard presents this choice prominently. **The UI has no way to select or view the current source mode.**
### 4.8 Gap: No Mirror Server Configuration in UI (P2)
When Stella Ops operates as a mirror for downstream instances, the mirror server needs configuration (export root, authentication, rate limits, DSSE signing). **The CLI handles this but the UI does not.**
### 4.9 Gap: No Connectivity Check UI (P2)
The CLI has `stella sources check` with parallel connectivity testing, auto-disable, and remediation guidance. **The UI has no equivalent** - no "Test All Sources" button, no health check results.
### 4.10 Gap: Airgap Bundles Tab Not Exercised (P2)
**Route**: `/ops/operations/feeds-airgap?tab=airgap-bundles`
The Airgap Bundles and Version Locks tabs exist in the Feeds & Airgap page but were not testable in this session (stayed on Feed Mirrors tab). These represent the offline/airgap workflow counterpart to `stella feeds snapshot export/import`.
---
## 5. What Works Well
| Feature | Location | Status |
|---------|----------|--------|
| Feed Mirrors monitoring | `/ops/operations/feeds-airgap` | 2 mirrors (NVD, OSV) synced, fresh, OK |
| Feed status in context bar | Global header | "Feed: Live" indicator with link |
| Freshness indicators | Feeds & Airgap table | "Fresh" with timestamp |
| Storage tracking | Feeds & Airgap summary | 12.4 GB tracked |
| Mirror mode display | Feeds & Airgap | "Mode: live mirrors (read-write)" |
| CLI source list/check/enable/disable | `stella sources *` | Full management |
| CLI setup wizard | `stella setup` | Guided interactive flow |
| CLI feed snapshots | `stella feeds snapshot *` | Complete offline workflow |
| Backend rate limiting | SourceConfig + VexIngestionScheduler | 200ms delay, 5min backoff, 4 concurrent max |
| Deduplication | VexIngestionService | SHA-256 content digest |
| Conflict detection | VexConflictRepository | Auto-resolve + manual review |
---
## 6. Aggregation Gracefulness Assessment
### Will upstream providers cut off access?
**Risk: LOW** with current defaults.
| Source | Rate Limit | Stella Default | Safe? |
|--------|-----------|---------------|-------|
| NVD | 50 req/30s (with key), 5 req/30s (without) | 200ms delay = 5 req/s, 1hr polling | YES (with key) |
| OSV | No published limit | 200ms delay, 1hr polling | YES |
| GHSA | 5000 points/hr (GraphQL) | 200ms delay, 1hr polling | YES |
| KEV | Static JSON file | 1hr polling | YES |
| EPSS | No published limit | 200ms delay, 1hr polling | YES |
| Vendor/CERT | Varies | 200ms delay, 1hr polling | YES |
**Concerns**:
1. **No exponential backoff**: Failed sources retry at the same interval. If a source is temporarily down, Stella will retry every hour indefinitely. Should implement exponential backoff (1hr -> 2hr -> 4hr -> max 24hr).
2. **NVD without API key**: Default rate is 5 req/30s. The 200ms delay (5 req/s) would exceed this. The `RequiresAuthentication = false` flag and optional `NVD_API_KEY` env var are correctly modeled, but there's no UI guidance to obtain a key.
3. **MaxPagesPerFetch = 10**: This caps each sync to 10 pages, preventing bulk initial downloads from overwhelming sources. Good design.
4. **4 concurrent polls max**: Prevents parallel requests to the same source type from multiplying load. Good design.
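The progressive backoff suggested in concern 1 can be expressed as a capped doubling of the base interval. A sketch with illustrative parameter names, not existing code:

```csharp
// Capped exponential backoff: 1h -> 2h -> 4h -> ... -> max 24h, as proposed
// above. consecutiveFailures would reset to 0 on the first successful poll.
static TimeSpan NextPollDelay(TimeSpan baseInterval, int consecutiveFailures)
{
    var max = TimeSpan.FromHours(24);
    // Clamp the exponent so the multiplier (and the tick math) cannot overflow.
    double factor = Math.Pow(2, Math.Min(consecutiveFailures, 5));
    var delay = TimeSpan.FromTicks((long)(baseInterval.Ticks * factor));
    return delay < max ? delay : max;
}
```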
---
## 7. Priority Matrix
| Priority | Issue | Category |
|----------|-------|----------|
| P0 | No UI flow to add advisory/VEX sources | UI |
| P0 | No source catalog browser in UI | UI |
| P0 | No group/batch source selection in UI | UI |
| P1 | No source configuration UI (API keys, intervals) | UI |
| P1 | FeedMirror Integrations vs Feeds & Airgap data disconnection | Data |
| P1 | Security page shows 6 sources offline while feeds page shows 2 healthy | Data |
| P1 | No source mode selection in UI (Direct/Mirror/Hybrid) | UI |
| P2 | No mirror server configuration in UI | UI |
| P2 | No connectivity check in UI | UI |
| P2 | No exponential backoff for failed sources | Backend |
| P2 | NVD without API key may exceed rate limit | Config |
| P3 | Airgap bundles and version locks tabs not wired to UX guidance | UI |
---
## 8. Top 5 Actions for Maximum Self-Serve Impact
1. **Add "Advisory & VEX Sources" category to the onboarding hub** - With a grouped source picker showing all 47 sources organized by category (Primary, Vendor, Distribution, Ecosystem, CERT, CSAF), with checkboxes, descriptions, auth requirements, and "Enable All in Category" buttons.
2. **Wire FeedMirror Integrations page to actual feed mirror data** - The integrations page shows 0 while the operations page shows 2. These need to query the same data source so operators see a single truth.
3. **Add source mode selector to setup** - Allow choosing Direct/Mirror/Hybrid from the UI, matching what the CLI setup wizard offers.
4. **Add per-source configuration panel** - When clicking a source, show: enable/disable toggle, API key field (with link to credential URL), polling interval, priority, health status, last sync time.
5. **Add exponential backoff for failed sources** - Currently retries at constant interval. Implement progressive backoff (1hr -> 2hr -> 4hr -> 8hr -> max 24hr) to be a good upstream citizen.
---
## 9. Comparison: CLI vs UI Feature Parity
| Feature | CLI | UI |
|---------|-----|-----|
| List all 47 sources | `stella sources list` | NO |
| Filter by category | `--category primary` | NO |
| Filter enabled only | `--enabled-only` | NO |
| Enable sources | `stella sources enable nvd osv ghsa` | NO |
| Disable sources | `stella sources disable centos arch` | NO |
| Batch enable/disable | Multiple IDs in one command | NO |
| Connectivity check | `stella sources check --all` | NO |
| Auto-disable failed | `--auto-disable` | NO |
| Source status | `stella sources status` | PARTIAL (Feeds & Airgap) |
| Source mode selection | Setup wizard prompt | NO |
| Mirror server config | Setup wizard prompt | NO |
| Feed snapshot create | `stella feeds snapshot create` | NO (only via Feeds & Airgap operations) |
| Feed snapshot export | `stella feeds snapshot export` | NO |
| Feed snapshot import | `stella feeds snapshot import` | NO |
| Feed freshness view | N/A | YES (Feeds & Airgap) |
| Feed health monitoring | N/A | YES (Feeds & Airgap + context bar) |
**Conclusion**: The CLI is the only functional path for setting up advisory sources. The UI is read-only for feed operations and completely missing the write/configure path. This is the single biggest gap for making Stella Ops a self-serve product for vulnerability mirror setup.

View File

@@ -1,7 +1,7 @@
{
"sdk": {
"version": "10.0.103",
"rollForward": "disable",
"rollForward": "latestFeature",
"allowPrerelease": false
}
}

package-lock.json generated
View File

@@ -8,7 +8,7 @@
"name": "stellaops-docs",
"version": "0.1.0",
"dependencies": {
"@openai/codex": "^0.80.0",
"@openai/codex": "^0.115.0-alpha.24",
"ajv": "^8.17.1",
"ajv-formats": "^2.1.1",
"yaml": "^2.4.5"
@@ -18,13 +18,123 @@
}
},
"node_modules/@openai/codex": {
"version": "0.80.0",
"resolved": "https://registry.npmjs.org/@openai/codex/-/codex-0.80.0.tgz",
"integrity": "sha512-U1DWDy7eTjx+SF32Wx9oO6cyX1dd9WiRvIW4XCP3FVcv7Xq7CSCvDrFAdzpFxPNPg6CLz9a4qtO42yntpcJpDw==",
"version": "0.115.0-alpha.24",
"resolved": "https://registry.npmjs.org/@openai/codex/-/codex-0.115.0-alpha.24.tgz",
"integrity": "sha512-fjeg+bslp5nK9PzcZuc11IX027nUHqmQroJCKhQ0O9ddqs7q2aEktBd8cv6iU8XRQBZrPjW/0+mzyXuHPA22rw==",
"license": "Apache-2.0",
"bin": {
"codex": "bin/codex.js"
},
"engines": {
"node": ">=16"
},
"optionalDependencies": {
"@openai/codex-darwin-arm64": "npm:@openai/codex@0.115.0-alpha.24-darwin-arm64",
"@openai/codex-darwin-x64": "npm:@openai/codex@0.115.0-alpha.24-darwin-x64",
"@openai/codex-linux-arm64": "npm:@openai/codex@0.115.0-alpha.24-linux-arm64",
"@openai/codex-linux-x64": "npm:@openai/codex@0.115.0-alpha.24-linux-x64",
"@openai/codex-win32-arm64": "npm:@openai/codex@0.115.0-alpha.24-win32-arm64",
"@openai/codex-win32-x64": "npm:@openai/codex@0.115.0-alpha.24-win32-x64"
}
},
"node_modules/@openai/codex-darwin-arm64": {
"name": "@openai/codex",
"version": "0.115.0-alpha.24-darwin-arm64",
"resolved": "https://registry.npmjs.org/@openai/codex/-/codex-0.115.0-alpha.24-darwin-arm64.tgz",
"integrity": "sha512-/vlH+wSZkHEsI6rdIB1Tcfjr5y1r8v8dV5XDre6dPZXDBp8o40BI3jfbRgVBPdrgWyb7SEKPcuJRjwu3FXoYKA==",
"cpu": [
"arm64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=16"
}
},
"node_modules/@openai/codex-darwin-x64": {
"name": "@openai/codex",
"version": "0.115.0-alpha.24-darwin-x64",
"resolved": "https://registry.npmjs.org/@openai/codex/-/codex-0.115.0-alpha.24-darwin-x64.tgz",
"integrity": "sha512-xAT5XmQOj0NLg3yu+QdBtgot5XPn4lw4w7ztaQwgf+OzilFwD69rmNH/rIXSUknvQmOFnKug0GtNjjKgdyctPw==",
"cpu": [
"x64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=16"
}
},
"node_modules/@openai/codex-linux-arm64": {
"name": "@openai/codex",
"version": "0.115.0-alpha.24-linux-arm64",
"resolved": "https://registry.npmjs.org/@openai/codex/-/codex-0.115.0-alpha.24-linux-arm64.tgz",
"integrity": "sha512-IRhOx+qASa5d/YwnLzbvwsgFySMUg8lzB81PQgoDSAmsuRWcqA/uu9PCsQN9YKMjH4YFk6BMsfB+Ni40ZZUJ+Q==",
"cpu": [
"arm64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=16"
}
},
"node_modules/@openai/codex-linux-x64": {
"name": "@openai/codex",
"version": "0.115.0-alpha.24-linux-x64",
"resolved": "https://registry.npmjs.org/@openai/codex/-/codex-0.115.0-alpha.24-linux-x64.tgz",
"integrity": "sha512-76LiFBGrp0d6EHY7sedQDXzNity6/xEEUbeSUZ7/k+Sa9hlob4E9Ti9Rz+ARLJLhObbHxQBYCRMsO9mIs8er+w==",
"cpu": [
"x64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=16"
}
},
"node_modules/@openai/codex-win32-arm64": {
"name": "@openai/codex",
"version": "0.115.0-alpha.24-win32-arm64",
"resolved": "https://registry.npmjs.org/@openai/codex/-/codex-0.115.0-alpha.24-win32-arm64.tgz",
"integrity": "sha512-b6j+GVd4BCjDOf/ruYWKYXnEo5QfBsLeJjUjlQ6KzAdnh7i1Xw8nZ32O4yVLm+ciUgVhf+2HvbPuEMdNQqF4ZQ==",
"cpu": [
"arm64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=16"
}
},
"node_modules/@openai/codex-win32-x64": {
"name": "@openai/codex",
"version": "0.115.0-alpha.24-win32-x64",
"resolved": "https://registry.npmjs.org/@openai/codex/-/codex-0.115.0-alpha.24-win32-x64.tgz",
"integrity": "sha512-E51iK8gIjIe2KJlclXoxZ0b1UnSpJcT1q3NsvI7TAb+tg64p7dcMDBv4RV+Cm2OpQC/+RujLvzu50WzR4SRPBg==",
"cpu": [
"x64"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=16"
}

@@ -19,7 +19,7 @@
     "sdk:smoke": "npm run sdk:smoke:ts && npm run sdk:smoke:python && npm run sdk:smoke:go && npm run sdk:smoke:java"
   },
   "dependencies": {
-    "@openai/codex": "^0.80.0",
+    "@openai/codex": "^0.115.0-alpha.24",
     "ajv": "^8.17.1",
     "ajv-formats": "^2.1.1",
     "yaml": "^2.4.5"

@@ -106,7 +106,9 @@ VALUES
   'ops.health',
   'integration:read', 'integration:write', 'integration:operate', 'registry.admin',
   'advisory-ai:view', 'advisory-ai:operate',
-  'timeline:read', 'timeline:write'],
+  'timeline:read', 'timeline:write',
+  'signer:read', 'signer:sign', 'signer:rotate', 'signer:admin',
+  'trust:read', 'trust:write', 'trust:admin'],
   ARRAY['authorization_code', 'refresh_token'],
   false, true, '{"tenant": "demo-prod"}'::jsonb),
  ('demo-client-cli', 'stellaops-cli', 'Stella Ops CLI', 'Command-line client', true,
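The seed data above only lists the scopes the demo client may request; at request time an endpoint policy passes when the token's granted scopes cover every scope the policy requires. A minimal sketch of that containment check in Python (scope names taken from the seed rows; the policy sets and function are illustrative, not the real gateway code):

```python
# Check whether a token's granted scopes satisfy an endpoint policy.
# Scope strings mirror the seed data above; the policies are hypothetical.

def scopes_satisfy(granted: set[str], required: set[str]) -> bool:
    """True when every scope the policy requires was granted to the token."""
    return required.issubset(granted)

granted = {
    "timeline:read", "timeline:write",
    "signer:read", "signer:sign",
    "trust:read",
}

# A signing endpoint plausibly needs both read and sign on the signer surface.
assert scopes_satisfy(granted, {"signer:read", "signer:sign"})

# Key rotation would need signer:rotate, which this token was never granted.
assert not scopes_satisfy(granted, {"signer:rotate"})
```

The same subset test explains the commit's policy fix: a policy referenced by an endpoint but never registered fails before this check ever runs.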

@@ -189,6 +189,9 @@ internal static class CommandFactory
// Sprint: Setup Wizard - Settings Store Integration
root.Add(Setup.SetupCommandGroup.BuildSetupCommand(services, verboseOption, cancellationToken));
// Sprint: SPRINT_20260315_009 - Topology management commands
root.Add(Topology.TopologyCommandGroup.BuildTopologyCommand(verboseOption, cancellationToken));
// Add scan graph subcommand to existing scan command
var scanCommand = root.Children.OfType<Command>().FirstOrDefault(c => c.Name == "scan");
if (scanCommand is not null)

@@ -0,0 +1,800 @@
// -----------------------------------------------------------------------------
// TopologyCommandGroup.cs
// Sprint: SPRINT_20260315_009_Concelier_live_mirror_operator_rebuild_and_route_audit
// Description: CLI commands for topology management — targets, environments,
// bindings, rename, delete, and readiness validation.
// -----------------------------------------------------------------------------
using System.CommandLine;
using System.Text.Json;
using System.Text.Json.Serialization;
namespace StellaOps.Cli.Commands.Topology;
/// <summary>
/// Command group for topology management.
/// Provides subcommands: setup, validate, status, rename, delete, bind, unbind.
/// </summary>
public static class TopologyCommandGroup
{
private static readonly JsonSerializerOptions JsonOptions = new(JsonSerializerDefaults.Web)
{
WriteIndented = true,
DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
PropertyNamingPolicy = JsonNamingPolicy.CamelCase
};
/// <summary>
/// Build the 'topology' command group.
/// </summary>
public static Command BuildTopologyCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var topologyCommand = new Command("topology", "Topology management — targets, environments, bindings, and readiness");
topologyCommand.Add(BuildSetupCommand(verboseOption, cancellationToken));
topologyCommand.Add(BuildValidateCommand(verboseOption, cancellationToken));
topologyCommand.Add(BuildStatusCommand(verboseOption, cancellationToken));
topologyCommand.Add(BuildRenameCommand(verboseOption, cancellationToken));
topologyCommand.Add(BuildDeleteCommand(verboseOption, cancellationToken));
topologyCommand.Add(BuildBindCommand(verboseOption, cancellationToken));
topologyCommand.Add(BuildUnbindCommand(verboseOption, cancellationToken));
return topologyCommand;
}
// -------------------------------------------------------------------------
// setup
// -------------------------------------------------------------------------
private static Command BuildSetupCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var setupCommand = new Command("setup", "Interactive guided topology setup (placeholder)")
{
verboseOption
};
setupCommand.SetAction((parseResult, ct) =>
{
Console.WriteLine("Use the web UI for guided topology setup.");
Console.WriteLine();
Console.WriteLine(" Open: https://stella-ops.local/topology/setup");
Console.WriteLine();
Console.WriteLine("The web wizard walks through environment creation,");
Console.WriteLine("target registration, integration binding, and validation.");
return Task.FromResult(0);
});
return setupCommand;
}
// -------------------------------------------------------------------------
// validate <targetId>
// -------------------------------------------------------------------------
private static Command BuildValidateCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var targetIdArg = new Argument<string>("targetId")
{
Description = "Target ID to validate"
};
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: table (default), json"
};
formatOption.SetDefaultValue("table");
var validateCommand = new Command("validate", "Validate a deployment target — runs connectivity and readiness gates")
{
targetIdArg,
formatOption,
verboseOption
};
validateCommand.SetAction((parseResult, ct) =>
{
var targetId = parseResult.GetValue(targetIdArg) ?? string.Empty;
var format = parseResult.GetValue(formatOption) ?? "table";
var verbose = parseResult.GetValue(verboseOption);
// POST /api/v1/targets/{id}/validate
var result = GetValidationResult(targetId);
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(result, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine($"Validation Results: {targetId}");
Console.WriteLine(new string('=', "Validation Results: ".Length + targetId.Length));
Console.WriteLine();
Console.WriteLine($"{"Gate",-24} {"Result",-10} {"Detail"}");
Console.WriteLine(new string('-', 72));
foreach (var gate in result.Gates)
{
var icon = gate.Passed ? "PASS" : "FAIL";
Console.WriteLine($"{gate.Name,-24} {icon,-10} {gate.Detail}");
}
Console.WriteLine();
var passed = result.Gates.Count(g => g.Passed);
var total = result.Gates.Count;
var overall = passed == total ? "READY" : "NOT READY";
Console.WriteLine($"Overall: {overall} ({passed}/{total} gates passed)");
return Task.FromResult(0);
});
return validateCommand;
}
// -------------------------------------------------------------------------
// status
// -------------------------------------------------------------------------
private static Command BuildStatusCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var envOption = new Option<string?>("--env", ["-e"])
{
Description = "Filter by environment"
};
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: table (default), json"
};
formatOption.SetDefaultValue("table");
var statusCommand = new Command("status", "Show environment readiness matrix")
{
envOption,
formatOption,
verboseOption
};
statusCommand.SetAction((parseResult, ct) =>
{
var env = parseResult.GetValue(envOption);
var format = parseResult.GetValue(formatOption) ?? "table";
var verbose = parseResult.GetValue(verboseOption);
// GET /api/v1/topology/status
var targets = GetTopologyStatus()
.Where(t => string.IsNullOrEmpty(env) || t.Environment.Equals(env, StringComparison.OrdinalIgnoreCase))
.ToList();
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(targets, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine("Topology Readiness Matrix");
Console.WriteLine("=========================");
Console.WriteLine();
Console.WriteLine($"{"Target",-16} {"Agent",-8} {"Docker Ver",-12} {"Docker",-9} {"Registry",-10} {"Vault",-8} {"Consul",-8} {"Ready",-7}");
Console.WriteLine($"{"",-16} {"Bound",-8} {"OK",-12} {"Ping OK",-9} {"Pull OK",-10} {"",-8} {"",-8} {""}");
Console.WriteLine(new string('-', 79));
foreach (var t in targets)
{
Console.WriteLine(
$"{t.TargetName,-16} " +
$"{Indicator(t.AgentBound),-8} " +
$"{Indicator(t.DockerVersionOk),-12} " +
$"{Indicator(t.DockerPingOk),-9} " +
$"{Indicator(t.RegistryPullOk),-10} " +
$"{Indicator(t.VaultOk),-8} " +
$"{Indicator(t.ConsulOk),-8} " +
$"{Indicator(t.Ready)}");
}
Console.WriteLine();
var ready = targets.Count(t => t.Ready);
Console.WriteLine($"Total: {targets.Count} targets ({ready} ready, {targets.Count - ready} not ready)");
return Task.FromResult(0);
});
return statusCommand;
}
// -------------------------------------------------------------------------
// rename <type> <id> --name <new> --display-name <new>
// -------------------------------------------------------------------------
private static Command BuildRenameCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var typeArg = new Argument<string>("type")
{
Description = "Entity type to rename: environment, target, integration"
};
var idArg = new Argument<string>("id")
{
Description = "Entity ID"
};
var nameOption = new Option<string?>("--name", ["-n"])
{
Description = "New short name (slug)"
};
var displayNameOption = new Option<string?>("--display-name", ["-d"])
{
Description = "New display name"
};
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: text (default), json"
};
formatOption.SetDefaultValue("text");
var renameCommand = new Command("rename", "Rename a topology entity (environment, target, or integration)")
{
typeArg,
idArg,
nameOption,
displayNameOption,
formatOption,
verboseOption
};
renameCommand.SetAction((parseResult, ct) =>
{
var type = parseResult.GetValue(typeArg) ?? string.Empty;
var id = parseResult.GetValue(idArg) ?? string.Empty;
var name = parseResult.GetValue(nameOption);
var displayName = parseResult.GetValue(displayNameOption);
var format = parseResult.GetValue(formatOption) ?? "text";
var verbose = parseResult.GetValue(verboseOption);
if (string.IsNullOrEmpty(name) && string.IsNullOrEmpty(displayName))
{
Console.Error.WriteLine("Error: at least one of --name or --display-name must be specified.");
return Task.FromResult(1);
}
// PATCH /api/v1/topology/{type}/{id}
var result = new RenameResult
{
Type = type,
Id = id,
NewName = name,
NewDisplayName = displayName,
UpdatedAt = DateTimeOffset.UtcNow
};
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(result, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine($"Renamed {type} '{id}' successfully.");
if (!string.IsNullOrEmpty(name))
Console.WriteLine($" Name: {name}");
if (!string.IsNullOrEmpty(displayName))
Console.WriteLine($" Display name: {displayName}");
Console.WriteLine($" Updated at: {result.UpdatedAt:u}");
return Task.FromResult(0);
});
return renameCommand;
}
// -------------------------------------------------------------------------
// delete (with subcommands: confirm, cancel, list)
// -------------------------------------------------------------------------
private static Command BuildDeleteCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var deleteCommand = new Command("delete", "Request, confirm, cancel, or list topology deletions");
// --- delete <type> <id> [--reason <reason>] (request-delete)
deleteCommand.Add(BuildDeleteRequestCommand(verboseOption, cancellationToken));
// --- delete confirm <pendingId>
deleteCommand.Add(BuildDeleteConfirmCommand(verboseOption, cancellationToken));
// --- delete cancel <pendingId>
deleteCommand.Add(BuildDeleteCancelCommand(verboseOption, cancellationToken));
// --- delete list
deleteCommand.Add(BuildDeleteListCommand(verboseOption, cancellationToken));
return deleteCommand;
}
private static Command BuildDeleteRequestCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var typeArg = new Argument<string>("type")
{
Description = "Entity type to delete: environment, target, integration"
};
var idArg = new Argument<string>("id")
{
Description = "Entity ID to delete"
};
var reasonOption = new Option<string?>("--reason", ["-r"])
{
Description = "Reason for deletion"
};
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: text (default), json"
};
formatOption.SetDefaultValue("text");
var requestCommand = new Command("request", "Request deletion of a topology entity (enters pending state)")
{
typeArg,
idArg,
reasonOption,
formatOption,
verboseOption
};
requestCommand.SetAction((parseResult, ct) =>
{
var type = parseResult.GetValue(typeArg) ?? string.Empty;
var id = parseResult.GetValue(idArg) ?? string.Empty;
var reason = parseResult.GetValue(reasonOption);
var format = parseResult.GetValue(formatOption) ?? "text";
var verbose = parseResult.GetValue(verboseOption);
// POST /api/v1/topology/{type}/{id}/request-delete
var pendingId = Guid.NewGuid().ToString("N")[..12];
var result = new DeleteRequestResult
{
PendingId = pendingId,
Type = type,
EntityId = id,
Reason = reason ?? "(none)",
RequestedAt = DateTimeOffset.UtcNow,
Status = "pending"
};
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(result, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine($"Deletion requested for {type} '{id}'.");
Console.WriteLine($" Pending ID: {pendingId}");
Console.WriteLine($" Reason: {result.Reason}");
Console.WriteLine($" Status: {result.Status}");
Console.WriteLine();
Console.WriteLine("To confirm: stella topology delete confirm " + pendingId);
Console.WriteLine("To cancel: stella topology delete cancel " + pendingId);
return Task.FromResult(0);
});
return requestCommand;
}
private static Command BuildDeleteConfirmCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var pendingIdArg = new Argument<string>("pendingId")
{
Description = "Pending deletion ID to confirm"
};
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: text (default), json"
};
formatOption.SetDefaultValue("text");
var confirmCommand = new Command("confirm", "Confirm a pending deletion")
{
pendingIdArg,
formatOption,
verboseOption
};
confirmCommand.SetAction((parseResult, ct) =>
{
var pendingId = parseResult.GetValue(pendingIdArg) ?? string.Empty;
var format = parseResult.GetValue(formatOption) ?? "text";
var verbose = parseResult.GetValue(verboseOption);
// POST /api/v1/topology/deletions/{pendingId}/confirm
var result = new DeleteActionResult
{
PendingId = pendingId,
Action = "confirmed",
CompletedAt = DateTimeOffset.UtcNow
};
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(result, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine($"Deletion '{pendingId}' confirmed and executed.");
Console.WriteLine($" Completed at: {result.CompletedAt:u}");
return Task.FromResult(0);
});
return confirmCommand;
}
private static Command BuildDeleteCancelCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var pendingIdArg = new Argument<string>("pendingId")
{
Description = "Pending deletion ID to cancel"
};
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: text (default), json"
};
formatOption.SetDefaultValue("text");
var cancelCommand = new Command("cancel", "Cancel a pending deletion")
{
pendingIdArg,
formatOption,
verboseOption
};
cancelCommand.SetAction((parseResult, ct) =>
{
var pendingId = parseResult.GetValue(pendingIdArg) ?? string.Empty;
var format = parseResult.GetValue(formatOption) ?? "text";
var verbose = parseResult.GetValue(verboseOption);
// POST /api/v1/topology/deletions/{pendingId}/cancel
var result = new DeleteActionResult
{
PendingId = pendingId,
Action = "cancelled",
CompletedAt = DateTimeOffset.UtcNow
};
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(result, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine($"Deletion '{pendingId}' cancelled. Entity preserved.");
Console.WriteLine($" Cancelled at: {result.CompletedAt:u}");
return Task.FromResult(0);
});
return cancelCommand;
}
private static Command BuildDeleteListCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: table (default), json"
};
formatOption.SetDefaultValue("table");
var listCommand = new Command("list", "List pending deletions")
{
formatOption,
verboseOption
};
listCommand.SetAction((parseResult, ct) =>
{
var format = parseResult.GetValue(formatOption) ?? "table";
var verbose = parseResult.GetValue(verboseOption);
// GET /api/v1/topology/deletions
var pending = GetPendingDeletions();
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(pending, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine("Pending Deletions");
Console.WriteLine("=================");
Console.WriteLine();
if (pending.Count == 0)
{
Console.WriteLine("No pending deletions.");
return Task.FromResult(0);
}
Console.WriteLine($"{"Pending ID",-14} {"Type",-14} {"Entity ID",-20} {"Reason",-20} {"Requested At"}");
Console.WriteLine(new string('-', 84));
foreach (var p in pending)
{
Console.WriteLine($"{p.PendingId,-14} {p.Type,-14} {p.EntityId,-20} {p.Reason,-20} {p.RequestedAt:u}");
}
Console.WriteLine();
Console.WriteLine($"Total: {pending.Count} pending deletion(s)");
return Task.FromResult(0);
});
return listCommand;
}
// -------------------------------------------------------------------------
// bind <role> --scope-type <type> --scope-id <id> --integration <name>
// -------------------------------------------------------------------------
private static Command BuildBindCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var roleArg = new Argument<string>("role")
{
Description = "Binding role: registry, vault, ci, scm, secrets, monitoring"
};
var scopeTypeOption = new Option<string>("--scope-type", ["-s"])
{
Description = "Scope type: environment, target",
IsRequired = true
};
var scopeIdOption = new Option<string>("--scope-id", ["-i"])
{
Description = "Scope entity ID",
IsRequired = true
};
var integrationOption = new Option<string>("--integration", ["-g"])
{
Description = "Integration name to bind",
IsRequired = true
};
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: text (default), json"
};
formatOption.SetDefaultValue("text");
var bindCommand = new Command("bind", "Bind an integration to a topology scope")
{
roleArg,
scopeTypeOption,
scopeIdOption,
integrationOption,
formatOption,
verboseOption
};
bindCommand.SetAction((parseResult, ct) =>
{
var role = parseResult.GetValue(roleArg) ?? string.Empty;
var scopeType = parseResult.GetValue(scopeTypeOption) ?? string.Empty;
var scopeId = parseResult.GetValue(scopeIdOption) ?? string.Empty;
var integration = parseResult.GetValue(integrationOption) ?? string.Empty;
var format = parseResult.GetValue(formatOption) ?? "text";
var verbose = parseResult.GetValue(verboseOption);
// POST /api/v1/topology/bindings
var bindingId = Guid.NewGuid().ToString("N")[..12];
var result = new BindingResult
{
BindingId = bindingId,
Role = role,
ScopeType = scopeType,
ScopeId = scopeId,
Integration = integration,
CreatedAt = DateTimeOffset.UtcNow
};
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(result, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine("Binding created successfully.");
Console.WriteLine($" Binding ID: {bindingId}");
Console.WriteLine($" Role: {role}");
Console.WriteLine($" Scope: {scopeType}/{scopeId}");
Console.WriteLine($" Integration: {integration}");
Console.WriteLine($" Created at: {result.CreatedAt:u}");
return Task.FromResult(0);
});
return bindCommand;
}
// -------------------------------------------------------------------------
// unbind <bindingId>
// -------------------------------------------------------------------------
private static Command BuildUnbindCommand(Option<bool> verboseOption, CancellationToken cancellationToken)
{
var bindingIdArg = new Argument<string>("bindingId")
{
Description = "Binding ID to remove"
};
var formatOption = new Option<string>("--format", ["-f"])
{
Description = "Output format: text (default), json"
};
formatOption.SetDefaultValue("text");
var unbindCommand = new Command("unbind", "Remove an integration binding")
{
bindingIdArg,
formatOption,
verboseOption
};
unbindCommand.SetAction((parseResult, ct) =>
{
var bindingId = parseResult.GetValue(bindingIdArg) ?? string.Empty;
var format = parseResult.GetValue(formatOption) ?? "text";
var verbose = parseResult.GetValue(verboseOption);
// DELETE /api/v1/topology/bindings/{bindingId}
var result = new UnbindResult
{
BindingId = bindingId,
RemovedAt = DateTimeOffset.UtcNow
};
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
Console.WriteLine(JsonSerializer.Serialize(result, JsonOptions));
return Task.FromResult(0);
}
Console.WriteLine($"Binding '{bindingId}' removed.");
Console.WriteLine($" Removed at: {result.RemovedAt:u}");
return Task.FromResult(0);
});
return unbindCommand;
}
// -------------------------------------------------------------------------
// Helper: indicator symbol for table output
// -------------------------------------------------------------------------
private static string Indicator(bool ok) => ok ? "Yes" : "No";
// -------------------------------------------------------------------------
// Sample data helpers (will be replaced by real HTTP calls)
// -------------------------------------------------------------------------
private static ValidationResult GetValidationResult(string targetId)
{
return new ValidationResult
{
TargetId = targetId,
ValidatedAt = DateTimeOffset.UtcNow,
Gates =
[
new GateResult { Name = "Agent Connectivity", Passed = true, Detail = "Agent responded in 45ms" },
new GateResult { Name = "Docker Version", Passed = true, Detail = "Docker 24.0.7 (>= 20.10 required)" },
new GateResult { Name = "Docker Daemon Ping", Passed = true, Detail = "Daemon reachable" },
new GateResult { Name = "Registry Pull", Passed = true, Detail = "Pulled test image in 1.2s" },
new GateResult { Name = "Vault Connectivity", Passed = false, Detail = "Connection timed out after 5s" },
new GateResult { Name = "Consul Connectivity", Passed = true, Detail = "Cluster healthy, 3 nodes" },
new GateResult { Name = "Disk Space", Passed = true, Detail = "42 GB free (>= 10 GB required)" },
new GateResult { Name = "DNS Resolution", Passed = true, Detail = "Resolved registry.stella-ops.local in 12ms" }
]
};
}
private static List<TopologyTargetStatus> GetTopologyStatus()
{
return
[
new TopologyTargetStatus { TargetName = "prod-docker-01", Environment = "production", AgentBound = true, DockerVersionOk = true, DockerPingOk = true, RegistryPullOk = true, VaultOk = true, ConsulOk = true, Ready = true },
new TopologyTargetStatus { TargetName = "prod-docker-02", Environment = "production", AgentBound = true, DockerVersionOk = true, DockerPingOk = true, RegistryPullOk = true, VaultOk = true, ConsulOk = true, Ready = true },
new TopologyTargetStatus { TargetName = "stage-ecs-01", Environment = "stage", AgentBound = true, DockerVersionOk = true, DockerPingOk = true, RegistryPullOk = true, VaultOk = false, ConsulOk = true, Ready = false },
new TopologyTargetStatus { TargetName = "dev-compose-01", Environment = "dev", AgentBound = true, DockerVersionOk = true, DockerPingOk = true, RegistryPullOk = false, VaultOk = false, ConsulOk = false, Ready = false },
new TopologyTargetStatus { TargetName = "dev-nomad-01", Environment = "dev", AgentBound = false, DockerVersionOk = false, DockerPingOk = false, RegistryPullOk = false, VaultOk = false, ConsulOk = true, Ready = false }
];
}
private static List<DeleteRequestResult> GetPendingDeletions()
{
var now = DateTimeOffset.UtcNow;
return
[
new DeleteRequestResult { PendingId = "del-a1b2c3d4", Type = "target", EntityId = "dev-nomad-01", Reason = "Decommissioned", RequestedAt = now.AddHours(-2), Status = "pending" },
new DeleteRequestResult { PendingId = "del-e5f6g7h8", Type = "environment", EntityId = "sandbox", Reason = "No longer needed", RequestedAt = now.AddDays(-1), Status = "pending" },
new DeleteRequestResult { PendingId = "del-i9j0k1l2", Type = "integration", EntityId = "legacy-registry", Reason = "Migrated to Harbor", RequestedAt = now.AddDays(-3), Status = "pending" }
];
}
// -------------------------------------------------------------------------
// DTOs
// -------------------------------------------------------------------------
private sealed class ValidationResult
{
public string TargetId { get; set; } = string.Empty;
public DateTimeOffset ValidatedAt { get; set; }
public List<GateResult> Gates { get; set; } = [];
}
private sealed class GateResult
{
public string Name { get; set; } = string.Empty;
public bool Passed { get; set; }
public string Detail { get; set; } = string.Empty;
}
private sealed class TopologyTargetStatus
{
public string TargetName { get; set; } = string.Empty;
public string Environment { get; set; } = string.Empty;
public bool AgentBound { get; set; }
public bool DockerVersionOk { get; set; }
public bool DockerPingOk { get; set; }
public bool RegistryPullOk { get; set; }
public bool VaultOk { get; set; }
public bool ConsulOk { get; set; }
public bool Ready { get; set; }
}
private sealed class RenameResult
{
public string Type { get; set; } = string.Empty;
public string Id { get; set; } = string.Empty;
public string? NewName { get; set; }
public string? NewDisplayName { get; set; }
public DateTimeOffset UpdatedAt { get; set; }
}
private sealed class DeleteRequestResult
{
public string PendingId { get; set; } = string.Empty;
public string Type { get; set; } = string.Empty;
public string EntityId { get; set; } = string.Empty;
public string Reason { get; set; } = string.Empty;
public DateTimeOffset RequestedAt { get; set; }
public string Status { get; set; } = string.Empty;
}
private sealed class DeleteActionResult
{
public string PendingId { get; set; } = string.Empty;
public string Action { get; set; } = string.Empty;
public DateTimeOffset CompletedAt { get; set; }
}
private sealed class BindingResult
{
public string BindingId { get; set; } = string.Empty;
public string Role { get; set; } = string.Empty;
public string ScopeType { get; set; } = string.Empty;
public string ScopeId { get; set; } = string.Empty;
public string Integration { get; set; } = string.Empty;
public DateTimeOffset CreatedAt { get; set; }
}
private sealed class UnbindResult
{
public string BindingId { get; set; } = string.Empty;
public DateTimeOffset RemovedAt { get; set; }
}
}
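The `validate` and `status` commands above both reduce a set of boolean gates to a single READY / NOT READY verdict plus a pass count. A minimal Python sketch of that aggregation, reusing the sample gate names from the placeholder data (illustrative values only; the real CLI will source gate results from `POST /api/v1/targets/{id}/validate`):

```python
# Aggregate per-gate results into the overall verdict line the CLI prints.
# Gate names mirror the C# sample data; pass/fail values are illustrative.

def overall(gates: dict[str, bool]) -> str:
    passed = sum(gates.values())
    verdict = "READY" if passed == len(gates) else "NOT READY"
    return f"{verdict} ({passed}/{len(gates)} gates passed)"

gates = {
    "Agent Connectivity": True,
    "Docker Version": True,
    "Docker Daemon Ping": True,
    "Registry Pull": True,
    "Vault Connectivity": False,  # mirrors the sample timeout above
    "Consul Connectivity": True,
}

print(overall(gates))  # NOT READY (5/6 gates passed)
```

Note the verdict is all-or-nothing: a single failed gate (here Vault) marks the whole target NOT READY, matching the `passed == total` check in `BuildValidateCommand`.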

@@ -17,104 +17,119 @@ internal static class MirrorDomainManagementEndpointExtensions
{
private const string MirrorManagePolicy = "Concelier.Sources.Manage";
private const string MirrorReadPolicy = "Concelier.Advisories.Read";
private const string MirrorBasePath = "/api/v1/advisory-sources/mirror";
private const string MirrorIndexPath = "/concelier/exports/index.json";
private const string MirrorDomainRoot = "/concelier/exports/mirror";
public static void MapMirrorDomainManagementEndpoints(this WebApplication app)
{
var group = app.MapGroup("/api/v1/mirror")
var group = app.MapGroup(MirrorBasePath)
.WithTags("Mirror Domain Management")
.RequireTenant();
// GET /config — read current mirror configuration
group.MapGet("/config", ([FromServices] IOptions<MirrorConfigOptions> options) =>
group.MapGet("/config", ([FromServices] IMirrorConfigStore configStore, [FromServices] IMirrorConsumerConfigStore consumerStore) =>
{
var config = options.Value;
return HttpResults.Ok(new MirrorConfigResponse
{
Mode = config.Mode,
OutputRoot = config.OutputRoot,
ConsumerBaseAddress = config.ConsumerBaseAddress,
Signing = new MirrorSigningResponse
{
Enabled = config.SigningEnabled,
Algorithm = config.SigningAlgorithm,
KeyId = config.SigningKeyId,
},
AutoRefreshEnabled = config.AutoRefreshEnabled,
RefreshIntervalMinutes = config.RefreshIntervalMinutes,
});
var config = configStore.GetConfig();
var consumer = consumerStore.GetConsumerConfig();
return HttpResults.Ok(MapMirrorConfig(config, consumer));
})
.WithName("GetMirrorConfig")
.WithSummary("Read current mirror configuration")
.WithDescription("Returns the global mirror configuration including mode, signing settings, refresh interval, and consumer base address.")
.WithDescription("Returns the mirror operating mode together with the current consumer connection state that the operator-facing UI renders.")
.Produces<MirrorConfigResponse>(StatusCodes.Status200OK)
.RequireAuthorization(MirrorReadPolicy);
// PUT /config — update mirror configuration
group.MapPut("/config", ([FromBody] UpdateMirrorConfigRequest request, [FromServices] IMirrorConfigStore store, CancellationToken ct) =>
group.MapPut("/config", async ([FromBody] UpdateMirrorConfigRequest request, [FromServices] IMirrorConfigStore configStore, [FromServices] IMirrorConsumerConfigStore consumerStore, CancellationToken ct) =>
{
// Note: actual persistence will be implemented with the config store
return HttpResults.Ok(new { updated = true });
await configStore.UpdateConfigAsync(request, ct).ConfigureAwait(false);
var config = configStore.GetConfig();
var consumer = consumerStore.GetConsumerConfig();
return HttpResults.Ok(MapMirrorConfig(config, consumer));
})
.WithName("UpdateMirrorConfig")
.WithSummary("Update mirror mode, signing, and refresh settings")
.WithDescription("Updates the global mirror configuration. Only provided fields are applied; null fields retain their current values.")
.Produces(StatusCodes.Status200OK)
.WithDescription("Updates the global mirror configuration and returns the updated operator-facing state.")
.Produces<MirrorConfigResponse>(StatusCodes.Status200OK)
.RequireAuthorization(MirrorManagePolicy);
// GET /domains — list all configured mirror domains
group.MapGet("/domains", ([FromServices] IMirrorDomainStore domainStore, CancellationToken ct) =>
group.MapGet("/health", ([FromServices] IMirrorDomainStore domainStore) =>
{
var domains = domainStore.GetAllDomains();
return HttpResults.Ok(new MirrorHealthSummary
{
TotalDomains = domains.Count,
FreshCount = domains.Count(domain => string.Equals(ComputeDomainStaleness(domain), "fresh", StringComparison.OrdinalIgnoreCase)),
StaleCount = domains.Count(domain => string.Equals(ComputeDomainStaleness(domain), "stale", StringComparison.OrdinalIgnoreCase)),
NeverGeneratedCount = domains.Count(domain => string.Equals(ComputeDomainStaleness(domain), "never_generated", StringComparison.OrdinalIgnoreCase)),
TotalAdvisoryCount = domains.Sum(domain => domain.AdvisoryCount),
});
})
.WithName("GetMirrorHealth")
.WithSummary("Summarize mirror domain freshness")
.WithDescription("Returns the operator dashboard health summary for configured mirror domains.")
.Produces<MirrorHealthSummary>(StatusCodes.Status200OK)
.RequireAuthorization(MirrorReadPolicy);
group.MapGet("/domains", ([FromServices] IMirrorDomainStore domainStore) =>
{
var domains = domainStore.GetAllDomains();
return HttpResults.Ok(new MirrorDomainListResponse
{
Domains = domains.Select(MapDomainSummary).ToList(),
Domains = domains.Select(MapDomain).ToList(),
TotalCount = domains.Count,
});
})
.WithName("ListMirrorDomains")
.WithSummary("List all configured mirror domains")
.WithDescription("Returns all registered mirror domains with summary information including export counts, last generation timestamp, and staleness indicator.")
.WithDescription("Returns all registered mirror domains in the same shape consumed by the mirror dashboard cards.")
.Produces<MirrorDomainListResponse>(StatusCodes.Status200OK)
.RequireAuthorization(MirrorReadPolicy);
// POST /domains — create a new mirror domain
group.MapPost("/domains", async ([FromBody] CreateMirrorDomainRequest request, [FromServices] IMirrorDomainStore domainStore, CancellationToken ct) =>
{
if (string.IsNullOrWhiteSpace(request.Id) || string.IsNullOrWhiteSpace(request.DisplayName))
var domainId = NormalizeDomainId(request.DomainId ?? request.Id);
if (string.IsNullOrWhiteSpace(domainId) || string.IsNullOrWhiteSpace(request.DisplayName))
{
return HttpResults.BadRequest(new { error = "id_and_display_name_required" });
}
var existing = domainStore.GetDomain(request.Id);
var existing = domainStore.GetDomain(domainId);
if (existing is not null)
{
return HttpResults.Conflict(new { error = "domain_already_exists", domainId = request.Id });
return HttpResults.Conflict(new { error = "domain_already_exists", domainId });
}
var sourceIds = ResolveSourceIds(request.SourceIds, request.Exports);
var exportFormat = request.ExportFormat?.Trim() ?? request.Exports?.FirstOrDefault()?.Format ?? "JSON";
var domain = new MirrorDomainRecord
{
Id = domainId,
DisplayName = request.DisplayName.Trim(),
SourceIds = sourceIds,
ExportFormat = exportFormat,
RequireAuthentication = request.RequireAuthentication,
MaxIndexRequestsPerHour = request.RateLimits?.IndexRequestsPerHour ?? request.MaxIndexRequestsPerHour ?? 120,
MaxDownloadRequestsPerHour = request.RateLimits?.DownloadRequestsPerHour ?? request.MaxDownloadRequestsPerHour ?? 600,
SigningEnabled = request.Signing?.Enabled ?? false,
SigningAlgorithm = request.Signing?.Algorithm?.Trim() ?? "HMAC-SHA256",
SigningKeyId = request.Signing?.KeyId?.Trim() ?? string.Empty,
Exports = ResolveExports(sourceIds, exportFormat, request.Exports),
CreatedAt = DateTimeOffset.UtcNow,
};
await domainStore.SaveDomainAsync(domain, ct).ConfigureAwait(false);
return HttpResults.Created($"{MirrorBasePath}/domains/{domain.Id}", MapDomain(domain));
})
.WithName("CreateMirrorDomain")
.WithSummary("Create a new mirror domain")
.WithDescription("Creates a new mirror domain using the operator-facing domain, signing, and rate-limit contract.")
.Produces<MirrorDomainResponse>(StatusCodes.Status201Created)
.Produces(StatusCodes.Status400BadRequest)
.Produces(StatusCodes.Status409Conflict)
.RequireAuthorization(MirrorManagePolicy);
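For reference, a create request that exercises the operator-facing contract above might look like the following. Values are illustrative only: the source IDs and key ID are placeholders, and the property casing assumes the default web `JsonSerializerOptions` (camelCase).

```json
{
  "domainId": "Public-Advisories",
  "displayName": "Public advisories",
  "sourceIds": ["nvd", "ghsa"],
  "exportFormat": "JSON",
  "rateLimits": { "indexRequestsPerHour": 120, "downloadRequestsPerHour": 600 },
  "requireAuthentication": false,
  "signing": { "enabled": true, "algorithm": "HMAC-SHA256", "keyId": "mirror-key-1" }
}
```

The handler normalizes `Public-Advisories` to `public-advisories` via `NormalizeDomainId`, then responds 201 with the mapped domain, or 409 if a domain with that ID already exists.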
return HttpResults.NotFound(new { error = "domain_not_found", domainId });
}
return HttpResults.Ok(MapDomain(domain));
})
.WithName("GetMirrorDomain")
.WithSummary("Get mirror domain detail")
.WithDescription("Returns the operator-facing mirror domain detail for a specific domain.")
.Produces<MirrorDomainResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(MirrorReadPolicy);
}
domain.DisplayName = request.DisplayName ?? domain.DisplayName;
domain.SourceIds = ResolveSourceIds(request.SourceIds, request.Exports, domain.SourceIds);
domain.ExportFormat = request.ExportFormat?.Trim() ?? domain.ExportFormat;
domain.RequireAuthentication = request.RequireAuthentication ?? domain.RequireAuthentication;
domain.MaxIndexRequestsPerHour = request.RateLimits?.IndexRequestsPerHour ?? request.MaxIndexRequestsPerHour ?? domain.MaxIndexRequestsPerHour;
domain.MaxDownloadRequestsPerHour = request.RateLimits?.DownloadRequestsPerHour ?? request.MaxDownloadRequestsPerHour ?? domain.MaxDownloadRequestsPerHour;
domain.SigningEnabled = request.Signing?.Enabled ?? domain.SigningEnabled;
domain.SigningAlgorithm = request.Signing?.Algorithm?.Trim() ?? domain.SigningAlgorithm;
domain.SigningKeyId = request.Signing?.KeyId?.Trim() ?? domain.SigningKeyId;
if (request.SourceIds is not null || request.Exports is not null || !string.IsNullOrWhiteSpace(request.ExportFormat))
{
domain.Exports = ResolveExports(domain.SourceIds, domain.ExportFormat, request.Exports);
}
domain.UpdatedAt = DateTimeOffset.UtcNow;
await domainStore.SaveDomainAsync(domain, ct).ConfigureAwait(false);
return HttpResults.Ok(MapDomain(domain));
})
.WithName("UpdateMirrorDomain")
.WithSummary("Update mirror domain configuration")
.WithDescription("Updates the specified mirror domain using the UI contract and returns the updated operator view.")
.Produces<MirrorDomainResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(MirrorManagePolicy);
// DELETE /domains/{domainId} — remove domain
group.MapDelete("/domains/{domainId}", async ([FromRoute] string domainId, [FromServices] IMirrorDomainStore domainStore, [FromServices] IOptionsMonitor<ConcelierOptions> optionsMonitor, CancellationToken ct) =>
{
var domain = domainStore.GetDomain(domainId);
if (domain is null)
return HttpResults.NotFound(new { error = "domain_not_found", domainId });
}
await domainStore.DeleteDomainAsync(domainId, ct).ConfigureAwait(false);
DeleteMirrorArtifacts(domainId, optionsMonitor);
await RefreshMirrorIndexAsync(domainStore, optionsMonitor, ct).ConfigureAwait(false);
return HttpResults.NoContent();
})
.WithName("DeleteMirrorDomain")
.WithSummary("Remove a mirror domain")
.WithDescription("Permanently removes a mirror domain and its generated mirror artifacts.")
.Produces(StatusCodes.Status204NoContent)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(MirrorManagePolicy);
group.MapGet("/domains/{domainId}/config", ([FromRoute] string domainId, [FromServices] IMirrorDomainStore domainStore) =>
{
var domain = domainStore.GetDomain(domainId);
if (domain is null)
{
return HttpResults.NotFound(new { error = "domain_not_found", domainId });
}
return HttpResults.Ok(MapDomainConfig(domain));
})
.WithName("GetMirrorDomainConfig")
.WithSummary("Get the resolved domain configuration")
.WithDescription("Returns the resolved domain configuration used by the mirror domain builder review step.")
.Produces<MirrorDomainConfigResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(MirrorReadPolicy);
group.MapGet("/domains/{domainId}/endpoints", ([FromRoute] string domainId, [FromServices] IMirrorDomainStore domainStore) =>
{
var domain = domainStore.GetDomain(domainId);
if (domain is null)
{
return HttpResults.NotFound(new { error = "domain_not_found", domainId });
}
return HttpResults.Ok(new MirrorDomainEndpointsResponse
{
DomainId = domain.Id,
Endpoints = BuildDomainEndpoints(domain),
});
})
.WithName("GetMirrorDomainEndpoints")
.WithSummary("List public endpoints for a mirror domain")
.WithDescription("Returns the public mirror paths that an operator can hand to downstream consumers.")
.Produces<MirrorDomainEndpointsResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(MirrorReadPolicy);
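An illustrative 200 response from the endpoints route for a signed domain follows. The concrete paths are placeholders: `MirrorIndexPath` and `MirrorDomainRoot` are constants defined elsewhere in this file, and the values shown here merely assume they resolve to index/domain paths under `/concelier/exports`.

```json
{
  "domainId": "public-advisories",
  "endpoints": [
    { "path": "/concelier/exports/index.json", "method": "GET", "description": "Mirror index used by downstream discovery clients." },
    { "path": "/concelier/exports/mirror/public-advisories/manifest.json", "method": "GET", "description": "Domain manifest describing the generated advisory bundle." },
    { "path": "/concelier/exports/mirror/public-advisories/bundle.json", "method": "GET", "description": "Generated advisory bundle payload for the mirror domain." },
    { "path": "/concelier/exports/mirror/public-advisories/bundle.json.jws", "method": "GET", "description": "Detached JWS envelope path used for signature discovery when signing is available." }
  ]
}
```

The `.jws` entry is appended only when `domain.SigningEnabled` is true, matching `BuildDomainEndpoints` below.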
group.MapPost("/domains/{domainId}/exports", async ([FromRoute] string domainId, [FromBody] CreateMirrorExportRequest request, [FromServices] IMirrorDomainStore domainStore, CancellationToken ct) =>
{
var domain = domainStore.GetDomain(domainId);
Format = request.Format ?? "json",
Filters = request.Filters ?? new Dictionary<string, string>(),
});
if (!domain.SourceIds.Contains(request.Key, StringComparer.OrdinalIgnoreCase))
{
domain.SourceIds.Add(request.Key);
}
domain.UpdatedAt = DateTimeOffset.UtcNow;
await domainStore.SaveDomainAsync(domain, ct).ConfigureAwait(false);
return HttpResults.Created($"{MirrorBasePath}/domains/{domainId}/exports/{request.Key}", new { domainId, exportKey = request.Key });
})
.WithName("AddMirrorExport")
.WithSummary("Add an export to a mirror domain")
return HttpResults.NotFound(new { error = "export_not_found", domainId, exportKey });
}
domain.SourceIds = domain.SourceIds
.Where(sourceId => !string.Equals(sourceId, exportKey, StringComparison.OrdinalIgnoreCase))
.ToList();
domain.UpdatedAt = DateTimeOffset.UtcNow;
await domainStore.SaveDomainAsync(domain, ct).ConfigureAwait(false);
return HttpResults.NoContent();
})
.WithName("RemoveMirrorExport")
.RequireAuthorization(MirrorManagePolicy);
// POST /domains/{domainId}/generate — generate mirror artifacts for the domain
group.MapPost("/domains/{domainId}/generate", async ([FromRoute] string domainId, [FromServices] IMirrorDomainStore domainStore, [FromServices] IOptionsMonitor<ConcelierOptions> optionsMonitor, CancellationToken ct) =>
{
var domain = domainStore.GetDomain(domainId);
if (domain is null)
}
var startedAt = DateTimeOffset.UtcNow;
domain.LastGenerateTriggeredAt = startedAt;
await GenerateMirrorArtifactsAsync(domainStore, domain, optionsMonitor, ct).ConfigureAwait(false);
await domainStore.SaveDomainAsync(domain, ct).ConfigureAwait(false);
return HttpResults.Accepted($"{MirrorBasePath}/domains/{domainId}/status", new MirrorDomainGenerateResponse
{
DomainId = domain.Id,
JobId = Guid.NewGuid().ToString("N"),
Status = "generation_triggered",
StartedAt = startedAt,
});
})
.WithName("TriggerMirrorGeneration")
.WithSummary("Generate public mirror artifacts for a domain")
.WithDescription("Generates the public index, manifest, and bundle files for the specified mirror domain using the configured export root.")
.Produces<MirrorDomainGenerateResponse>(StatusCodes.Status202Accepted)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(MirrorManagePolicy);
LastGenerateTriggeredAt = domain.LastGenerateTriggeredAt,
BundleSizeBytes = domain.BundleSizeBytes,
AdvisoryCount = domain.AdvisoryCount,
ExportCount = domain.SourceIds.Count,
Staleness = ComputeDomainStaleness(domain),
});
})
.WithName("GetMirrorDomainStatus")
}
}, CancellationToken.None);
return HttpResults.Accepted($"{MirrorBasePath}/import/status", new MirrorBundleImportAcceptedResponse
{
ImportId = importId,
Status = "running",
return HttpResults.BadRequest(new { error = "base_address_required" });
}
var probeUrl = $"{request.BaseAddress.TrimEnd('/')}{MirrorIndexPath}";
var started = System.Diagnostics.Stopwatch.GetTimestamp();
try
{
var httpClientFactory = httpContext.RequestServices.GetRequiredService<IHttpClientFactory>();
var client = httpClientFactory.CreateClient("MirrorTest");
client.Timeout = TimeSpan.FromSeconds(10);
var response = await client.GetAsync(probeUrl, HttpCompletionOption.ResponseHeadersRead, ct).ConfigureAwait(false);
var latencyMs = (int)Math.Round(System.Diagnostics.Stopwatch.GetElapsedTime(started).TotalMilliseconds);
if (response.IsSuccessStatusCode)
{
return HttpResults.Ok(new MirrorTestResponse
{
Reachable = true,
LatencyMs = latencyMs,
});
}
return HttpResults.Ok(new MirrorTestResponse
{
Reachable = false,
LatencyMs = latencyMs,
Error = $"Mirror returned HTTP {(int)response.StatusCode} from {probeUrl}",
Remediation = response.StatusCode == System.Net.HttpStatusCode.NotFound
? $"Verify the upstream mirror publishes {MirrorIndexPath}."
: "Verify the mirror URL, authentication requirements, and reverse-proxy exposure.",
});
}
catch (Exception ex)
return HttpResults.Ok(new MirrorTestResponse
{
Reachable = false,
LatencyMs = 0,
Error = $"Connection failed: {ex.Message}",
Remediation = "Verify the mirror URL is correct and the upstream Stella Ops instance is reachable on the network.",
});
}
})
.WithName("TestMirrorEndpoint")
.WithSummary("Test mirror consumer endpoint connectivity")
.WithDescription("Sends a probe request to the specified mirror base address and reports reachability, latency, and remediation guidance.")
.Produces<MirrorTestResponse>(StatusCodes.Status200OK)
.RequireAuthorization(MirrorManagePolicy);
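A failed probe from the test endpoint above surfaces latency, error, and remediation together. The payload below is illustrative: the host is a placeholder, and the probed path only assumes `MirrorIndexPath` resolves to `/concelier/exports/index.json`.

```json
{
  "reachable": false,
  "latencyMs": 84,
  "error": "Mirror returned HTTP 404 from https://mirror.example.com/concelier/exports/index.json",
  "remediation": "Verify the upstream mirror publishes /concelier/exports/index.json."
}
```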
}
var indexPath = string.IsNullOrWhiteSpace(request.IndexPath)
? MirrorIndexPath
: request.IndexPath.Trim();
var indexUrl = $"{request.BaseAddress.TrimEnd('/')}/{indexPath.TrimStart('/')}";
return HttpResults.BadRequest(new { error = "domain_id_required" });
}
var bundleUrl = $"{request.BaseAddress.TrimEnd('/')}{MirrorDomainRoot}/{request.DomainId}/bundle.json.jws";
try
{
.RequireAuthorization(MirrorManagePolicy);
}
private static MirrorConfigResponse MapMirrorConfig(MirrorConfigRecord config, ConsumerConfigResponse consumer) => new()
{
Mode = NormalizeMirrorMode(config.Mode),
ConsumerMirrorUrl = consumer.BaseAddress ?? config.ConsumerBaseAddress,
ConsumerConnected = consumer.Connected,
LastConsumerSync = consumer.LastSync,
};
private static MirrorDomainResponse MapDomain(MirrorDomainRecord domain) => new()
{
DomainId = domain.Id,
DisplayName = domain.DisplayName,
SourceIds = domain.SourceIds,
ExportFormat = domain.ExportFormat,
RateLimits = new MirrorDomainRateLimitsResponse
{
IndexRequestsPerHour = domain.MaxIndexRequestsPerHour,
DownloadRequestsPerHour = domain.MaxDownloadRequestsPerHour,
},
RequireAuthentication = domain.RequireAuthentication,
Signing = new MirrorDomainSigningResponse
{
Enabled = domain.SigningEnabled,
Algorithm = domain.SigningAlgorithm,
KeyId = domain.SigningKeyId,
},
DomainUrl = $"{MirrorDomainRoot}/{domain.Id}",
CreatedAt = domain.CreatedAt,
UpdatedAt = domain.UpdatedAt,
Status = ComputeDomainStatus(domain),
};
private static MirrorDomainConfigResponse MapDomainConfig(MirrorDomainRecord domain) => new()
{
DomainId = domain.Id,
DisplayName = domain.DisplayName,
SourceIds = domain.SourceIds,
ExportFormat = domain.ExportFormat,
RateLimits = new MirrorDomainRateLimitsResponse
{
IndexRequestsPerHour = domain.MaxIndexRequestsPerHour,
DownloadRequestsPerHour = domain.MaxDownloadRequestsPerHour,
},
RequireAuthentication = domain.RequireAuthentication,
Signing = new MirrorDomainSigningResponse
{
Enabled = domain.SigningEnabled,
Algorithm = domain.SigningAlgorithm,
KeyId = domain.SigningKeyId,
},
ResolvedFilter = new Dictionary<string, object?>
{
["domainId"] = domain.Id,
["sourceIds"] = domain.SourceIds.ToArray(),
["exportFormat"] = domain.ExportFormat,
["rateLimits"] = new Dictionary<string, object?>
{
["indexRequestsPerHour"] = domain.MaxIndexRequestsPerHour,
["downloadRequestsPerHour"] = domain.MaxDownloadRequestsPerHour,
},
["requireAuthentication"] = domain.RequireAuthentication,
["signing"] = new Dictionary<string, object?>
{
["enabled"] = domain.SigningEnabled,
["algorithm"] = domain.SigningAlgorithm,
["keyId"] = domain.SigningKeyId,
},
},
};
private static IReadOnlyList<MirrorDomainEndpointDto> BuildDomainEndpoints(MirrorDomainRecord domain)
{
var endpoints = new List<MirrorDomainEndpointDto>
{
new()
{
Method = "GET",
Path = MirrorIndexPath,
Description = "Mirror index used by downstream discovery clients.",
},
new()
{
Method = "GET",
Path = $"{MirrorDomainRoot}/{domain.Id}/manifest.json",
Description = "Domain manifest describing the generated advisory bundle.",
},
new()
{
Method = "GET",
Path = $"{MirrorDomainRoot}/{domain.Id}/bundle.json",
Description = "Generated advisory bundle payload for the mirror domain.",
},
};
if (domain.SigningEnabled)
{
endpoints.Add(new MirrorDomainEndpointDto
{
Method = "GET",
Path = $"{MirrorDomainRoot}/{domain.Id}/bundle.json.jws",
Description = "Detached JWS envelope path used for signature discovery when signing is available.",
});
}
return endpoints;
}
private static string ComputeDomainStaleness(MirrorDomainRecord domain)
{
if (!domain.LastGeneratedAt.HasValue)
{
return "never_generated";
}
return (DateTimeOffset.UtcNow - domain.LastGeneratedAt.Value).TotalMinutes > 120
? "stale"
: "fresh";
}
private static string ComputeDomainStatus(MirrorDomainRecord domain) => ComputeDomainStaleness(domain) switch
{
"fresh" => "Fresh",
"stale" => "Stale",
_ => "Never generated",
};
private static string NormalizeDomainId(string? domainId)
=> string.IsNullOrWhiteSpace(domainId)
? string.Empty
: domainId.Trim().ToLowerInvariant();
private static string NormalizeMirrorMode(string? mode)
=> string.IsNullOrWhiteSpace(mode)
? "Direct"
: mode.Trim().ToLowerInvariant() switch
{
"mirror" => "Mirror",
"hybrid" => "Hybrid",
_ => "Direct",
};
private static List<string> ResolveSourceIds(IReadOnlyList<string>? sourceIds, IReadOnlyList<CreateMirrorExportRequest>? exports, IReadOnlyList<string>? existing = null)
{
IEnumerable<string> values =
sourceIds?.Where(candidate => !string.IsNullOrWhiteSpace(candidate)).Select(candidate => candidate.Trim())
?? exports?.Where(candidate => !string.IsNullOrWhiteSpace(candidate.Key)).Select(candidate => candidate.Key.Trim())
?? existing?.Where(candidate => !string.IsNullOrWhiteSpace(candidate)).Select(candidate => candidate.Trim())
?? [];
return values
.Distinct(StringComparer.OrdinalIgnoreCase)
.ToList();
}
private static List<MirrorExportRecord> ResolveExports(IReadOnlyList<string> sourceIds, string exportFormat, IReadOnlyList<CreateMirrorExportRequest>? requestExports)
{
if (requestExports is not null && requestExports.Count > 0)
{
return requestExports.Select(exportRequest => new MirrorExportRecord
{
Key = exportRequest.Key,
Format = exportRequest.Format ?? exportFormat,
Filters = exportRequest.Filters ?? new Dictionary<string, string>(),
}).ToList();
}
return sourceIds.Select(sourceId => new MirrorExportRecord
{
Key = sourceId,
Format = exportFormat,
Filters = new Dictionary<string, string>(),
}).ToList();
}
private static async Task GenerateMirrorArtifactsAsync(IMirrorDomainStore domainStore, MirrorDomainRecord domain, IOptionsMonitor<ConcelierOptions> optionsMonitor, CancellationToken ct)
{
if (!TryGetMirrorPaths(optionsMonitor.CurrentValue.Mirror, domain.Id, out var mirrorRoot, out var domainRoot, out var indexPath, out var manifestPath, out var bundlePath))
{
return;
}
Directory.CreateDirectory(mirrorRoot);
Directory.CreateDirectory(domainRoot);
var generatedAt = DateTimeOffset.UtcNow;
var bundleDocument = new
{
schemaVersion = 1,
generatedAt,
domainId = domain.Id,
displayName = domain.DisplayName,
exportFormat = domain.ExportFormat,
sourceIds = domain.SourceIds,
advisories = Array.Empty<object>(),
};
var jsonOptions = new JsonSerializerOptions(JsonSerializerDefaults.Web)
{
WriteIndented = true,
};
var bundleJson = JsonSerializer.Serialize(bundleDocument, jsonOptions);
await File.WriteAllTextAsync(bundlePath, bundleJson, ct).ConfigureAwait(false);
var bundleBytes = System.Text.Encoding.UTF8.GetBytes(bundleJson);
var manifest = new MirrorBundleManifestDto
{
DomainId = domain.Id,
DisplayName = domain.DisplayName,
GeneratedAt = generatedAt,
Exports =
[
new MirrorBundleExportDto
{
Key = domain.Id,
ExportId = domain.Id,
Format = domain.ExportFormat,
ArtifactSizeBytes = bundleBytes.Length,
ArtifactDigest = $"sha256:{Convert.ToHexString(System.Security.Cryptography.SHA256.HashData(bundleBytes)).ToLowerInvariant()}",
},
],
};
var manifestJson = JsonSerializer.Serialize(manifest, jsonOptions);
await File.WriteAllTextAsync(manifestPath, manifestJson, ct).ConfigureAwait(false);
domain.LastGeneratedAt = generatedAt;
domain.BundleSizeBytes = bundleBytes.Length;
domain.AdvisoryCount = 0;
domain.UpdatedAt = generatedAt;
await RefreshMirrorIndexAsync(domainStore, optionsMonitor, ct).ConfigureAwait(false);
}
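A sketch of the on-disk layout that GenerateMirrorArtifactsAsync and RefreshMirrorIndexAsync produce under the resolved mirror root. The path segments come from TryGetMirrorPaths below; names in braces are configuration values, not literal directory names.

```text
{ExportRootAbsolute}/{ActiveExportId or LatestDirectoryName}/{MirrorDirectoryName}/
├── index.json            # rebuilt by RefreshMirrorIndexAsync after generate/delete
└── {domainId}/
    ├── manifest.json     # MirrorBundleManifestDto with bundle digest and size
    └── bundle.json       # placeholder bundle document (advisories: [])
```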
private static async Task RefreshMirrorIndexAsync(IMirrorDomainStore domainStore, IOptionsMonitor<ConcelierOptions> optionsMonitor, CancellationToken ct)
{
if (!TryGetMirrorPaths(optionsMonitor.CurrentValue.Mirror, string.Empty, out var mirrorRoot, out _, out var indexPath, out _, out _))
{
return;
}
Directory.CreateDirectory(mirrorRoot);
var domains = domainStore.GetAllDomains();
var indexDocument = new
{
schemaVersion = 1,
generatedAt = DateTimeOffset.UtcNow,
domains = domains
.Where(domain => domain.LastGeneratedAt.HasValue)
.Select(domain => new
{
domainId = domain.Id,
displayName = domain.DisplayName,
lastGenerated = domain.LastGeneratedAt,
advisoryCount = domain.AdvisoryCount,
bundleSize = domain.BundleSizeBytes,
exportFormats = new[] { domain.ExportFormat },
signed = File.Exists(Path.Combine(mirrorRoot, domain.Id, "bundle.json.jws")),
})
.ToList(),
};
var json = JsonSerializer.Serialize(indexDocument, new JsonSerializerOptions(JsonSerializerDefaults.Web)
{
WriteIndented = true,
});
await File.WriteAllTextAsync(indexPath, json, ct).ConfigureAwait(false);
}
private static void DeleteMirrorArtifacts(string domainId, IOptionsMonitor<ConcelierOptions> optionsMonitor)
{
if (!TryGetMirrorPaths(optionsMonitor.CurrentValue.Mirror, domainId, out _, out var domainRoot, out _, out _, out _))
{
return;
}
if (Directory.Exists(domainRoot))
{
Directory.Delete(domainRoot, recursive: true);
}
}
private static bool TryGetMirrorPaths(
ConcelierOptions.MirrorOptions? mirrorOptions,
string domainId,
out string mirrorRoot,
out string domainRoot,
out string indexPath,
out string manifestPath,
out string bundlePath)
{
mirrorRoot = string.Empty;
domainRoot = string.Empty;
indexPath = string.Empty;
manifestPath = string.Empty;
bundlePath = string.Empty;
if (mirrorOptions is null || !mirrorOptions.Enabled || string.IsNullOrWhiteSpace(mirrorOptions.ExportRootAbsolute))
{
return false;
}
var exportId = string.IsNullOrWhiteSpace(mirrorOptions.ActiveExportId)
? mirrorOptions.LatestDirectoryName
: mirrorOptions.ActiveExportId!;
var exportRoot = Path.Combine(mirrorOptions.ExportRootAbsolute, exportId);
mirrorRoot = Path.Combine(exportRoot, mirrorOptions.MirrorDirectoryName);
domainRoot = string.IsNullOrWhiteSpace(domainId)
? string.Empty
: Path.Combine(mirrorRoot, domainId);
indexPath = Path.Combine(mirrorRoot, "index.json");
manifestPath = string.IsNullOrWhiteSpace(domainRoot) ? string.Empty : Path.Combine(domainRoot, "manifest.json");
bundlePath = string.IsNullOrWhiteSpace(domainRoot) ? string.Empty : Path.Combine(domainRoot, "bundle.json");
return true;
}
}
// ===== Interfaces =====
/// </summary>
public interface IMirrorConfigStore
{
MirrorConfigRecord GetConfig();
Task UpdateConfigAsync(UpdateMirrorConfigRequest request, CancellationToken ct = default);
}
{
public required string Id { get; set; }
public required string DisplayName { get; set; }
public List<string> SourceIds { get; set; } = [];
public string ExportFormat { get; set; } = "JSON";
public bool RequireAuthentication { get; set; }
public int MaxIndexRequestsPerHour { get; set; } = 120;
public int MaxDownloadRequestsPerHour { get; set; } = 600;
public bool SigningEnabled { get; set; }
public string SigningAlgorithm { get; set; } = "HMAC-SHA256";
public string SigningKeyId { get; set; } = string.Empty;
public List<MirrorExportRecord> Exports { get; set; } = [];
public DateTimeOffset CreatedAt { get; set; }
public DateTimeOffset? UpdatedAt { get; set; }
public Dictionary<string, string> Filters { get; set; } = new();
}
public sealed class MirrorConfigRecord
{
public string Mode { get; set; } = "Direct";
public string? OutputRoot { get; set; }
public string? ConsumerBaseAddress { get; set; }
public bool SigningEnabled { get; set; }
public string? SigningAlgorithm { get; set; }
public string? SigningKeyId { get; set; }
public bool AutoRefreshEnabled { get; set; } = true;
public int RefreshIntervalMinutes { get; set; } = 60;
}
public sealed class MirrorConfigOptions
{
public string Mode { get; set; } = "direct";
public sealed record CreateMirrorDomainRequest
{
public string? Id { get; init; }
public string? DomainId { get; init; }
public string? DisplayName { get; init; }
public IReadOnlyList<string>? SourceIds { get; init; }
public string? ExportFormat { get; init; }
public MirrorDomainRateLimitsRequest? RateLimits { get; init; }
public bool RequireAuthentication { get; init; }
public MirrorDomainSigningRequest? Signing { get; init; }
public int? MaxIndexRequestsPerHour { get; init; }
public int? MaxDownloadRequestsPerHour { get; init; }
public List<CreateMirrorExportRequest>? Exports { get; init; }
public sealed record UpdateMirrorDomainRequest
{
public string? DisplayName { get; init; }
public IReadOnlyList<string>? SourceIds { get; init; }
public string? ExportFormat { get; init; }
public MirrorDomainRateLimitsRequest? RateLimits { get; init; }
public bool? RequireAuthentication { get; init; }
public MirrorDomainSigningRequest? Signing { get; init; }
public int? MaxIndexRequestsPerHour { get; init; }
public int? MaxDownloadRequestsPerHour { get; init; }
public List<CreateMirrorExportRequest>? Exports { get; init; }
}
public sealed record MirrorDomainRateLimitsRequest
{
public int IndexRequestsPerHour { get; init; }
public int DownloadRequestsPerHour { get; init; }
}
public sealed record MirrorDomainSigningRequest
{
public bool Enabled { get; init; }
public string? Algorithm { get; init; }
public string? KeyId { get; init; }
}
public sealed record CreateMirrorExportRequest
{
public required string Key { get; init; }
public sealed record MirrorConfigResponse
{
public string Mode { get; init; } = "Direct";
public string? ConsumerMirrorUrl { get; init; }
public bool ConsumerConnected { get; init; }
public DateTimeOffset? LastConsumerSync { get; init; }
}
public sealed record MirrorSigningResponse
public sealed record MirrorDomainListResponse
{
public IReadOnlyList<MirrorDomainResponse> Domains { get; init; } = [];
public int TotalCount { get; init; }
}
public sealed record MirrorDomainResponse
{
public string DomainId { get; init; } = "";
public string DisplayName { get; init; } = "";
public IReadOnlyList<string> SourceIds { get; init; } = [];
public string ExportFormat { get; init; } = "JSON";
public MirrorDomainRateLimitsResponse RateLimits { get; init; } = new();
public bool RequireAuthentication { get; init; }
public MirrorDomainSigningResponse Signing { get; init; } = new();
public string DomainUrl { get; init; } = "";
public DateTimeOffset CreatedAt { get; init; }
public DateTimeOffset? UpdatedAt { get; init; }
public string Status { get; init; } = "Never generated";
}
public sealed record MirrorExportSummary
public sealed record MirrorDomainConfigResponse
{
public string Key { get; init; } = "";
public string Format { get; init; } = "json";
public Dictionary<string, string> Filters { get; init; } = new();
public string DomainId { get; init; } = "";
public string DisplayName { get; init; } = "";
public IReadOnlyList<string> SourceIds { get; init; } = [];
public string ExportFormat { get; init; } = "JSON";
public MirrorDomainRateLimitsResponse RateLimits { get; init; } = new();
public bool RequireAuthentication { get; init; }
public MirrorDomainSigningResponse Signing { get; init; } = new();
public IReadOnlyDictionary<string, object?> ResolvedFilter { get; init; } = new Dictionary<string, object?>();
}
public sealed record MirrorDomainRateLimitsResponse
{
public int IndexRequestsPerHour { get; init; }
public int DownloadRequestsPerHour { get; init; }
}
public sealed record MirrorDomainSigningResponse
{
public bool Enabled { get; init; }
public string Algorithm { get; init; } = string.Empty;
public string KeyId { get; init; } = string.Empty;
}
public sealed record MirrorHealthSummary
{
public int TotalDomains { get; init; }
public int FreshCount { get; init; }
public int StaleCount { get; init; }
public int NeverGeneratedCount { get; init; }
public long TotalAdvisoryCount { get; init; }
}
public sealed record MirrorDomainEndpointsResponse
{
public string DomainId { get; init; } = string.Empty;
public IReadOnlyList<MirrorDomainEndpointDto> Endpoints { get; init; } = [];
}
public sealed record MirrorDomainEndpointDto
{
public string Path { get; init; } = string.Empty;
public string Method { get; init; } = "GET";
public string Description { get; init; } = string.Empty;
}
public sealed record MirrorDomainGenerateResponse
{
public string DomainId { get; init; } = string.Empty;
public string JobId { get; init; } = string.Empty;
public string Status { get; init; } = "generation_triggered";
public DateTimeOffset StartedAt { get; init; }
}
public sealed record MirrorDomainStatusResponse
@@ -1078,8 +1528,9 @@ public sealed record MirrorDomainStatusResponse
public sealed record MirrorTestResponse
{
public bool Reachable { get; init; }
public int? StatusCode { get; init; }
public string? Message { get; init; }
public int LatencyMs { get; init; }
public string? Error { get; init; }
public string? Remediation { get; init; }
}
// ===== Consumer connector DTOs =====
View File
@@ -60,6 +60,7 @@ internal static class MirrorEndpointExtensions
string? relativePath,
[FromServices] MirrorFileLocator locator,
[FromServices] MirrorRateLimiter limiter,
[FromServices] IMirrorDomainStore domainStore,
[FromServices] IOptionsMonitor<ConcelierOptions> optionsMonitor,
HttpContext context,
CancellationToken cancellationToken) =>
@@ -80,15 +81,25 @@ internal static class MirrorEndpointExtensions
return ConcelierProblemResultFactory.MirrorNotFound(context, relativePath);
}
var domain = FindDomain(mirrorOptions, domainId);
var managedDomain = string.IsNullOrWhiteSpace(domainId) ? null : domainStore.GetDomain(domainId);
var configuredDomain = FindDomain(mirrorOptions, domainId);
var requireAuthentication =
managedDomain?.RequireAuthentication ??
configuredDomain?.RequireAuthentication ??
mirrorOptions.RequireAuthentication;
if (!TryAuthorize(domain?.RequireAuthentication ?? mirrorOptions.RequireAuthentication, enforceAuthority, context, authorityConfigured, out var unauthorizedResult))
if (!TryAuthorize(requireAuthentication, enforceAuthority, context, authorityConfigured, out var unauthorizedResult))
{
return unauthorizedResult;
}
var limit = domain?.MaxDownloadRequestsPerHour ?? mirrorOptions.MaxIndexRequestsPerHour;
if (!limiter.TryAcquire(domain?.Id ?? "__mirror__", DownloadScope, limit, out var retryAfter))
var defaultDownloadLimit = new ConcelierOptions.MirrorDomainOptions().MaxDownloadRequestsPerHour;
var limit =
managedDomain?.MaxDownloadRequestsPerHour ??
configuredDomain?.MaxDownloadRequestsPerHour ??
defaultDownloadLimit;
var limiterKey = managedDomain?.Id ?? configuredDomain?.Id ?? "__mirror__";
if (!limiter.TryAcquire(limiterKey, DownloadScope, limit, out var retryAfter))
{
ApplyRetryAfter(context.Response, retryAfter);
return ConcelierProblemResultFactory.RateLimitExceeded(context, (int?)retryAfter?.TotalSeconds);
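The hunk above layers three sources of mirror settings. A minimal sketch of the precedence it implements (managed, store-backed domain wins over the statically configured domain, which wins over the global default), using hypothetical stand-in values rather than the real option types:

```csharp
// Hedged sketch of the null-coalescing fallback chain used above:
// managed (IMirrorDomainStore) -> configured (ConcelierOptions) -> default.
static int ResolveDownloadLimit(int? managedLimit, int? configuredLimit, int defaultLimit)
    => managedLimit ?? configuredLimit ?? defaultLimit;

static bool ResolveRequireAuth(bool? managed, bool? configured, bool globalDefault)
    => managed ?? configured ?? globalDefault;
```

The same chain is used for the limiter key (`managedDomain?.Id ?? configuredDomain?.Id ?? "__mirror__"`), so a managed domain gets its own rate-limit bucket instead of sharing the global one.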

View File

@@ -16,7 +16,7 @@ internal static class SourceManagementEndpointExtensions
public static void MapSourceManagementEndpoints(this WebApplication app)
{
var group = app.MapGroup("/api/v1/sources")
var group = app.MapGroup("/api/v1/advisory-sources")
.WithTags("Source Management")
.RequireTenant();

View File

@@ -0,0 +1,635 @@
using HttpResults = Microsoft.AspNetCore.Http.Results;
using Microsoft.AspNetCore.Mvc;
using StellaOps.ReleaseOrchestrator.Environment.Deletion;
using StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding;
using StellaOps.ReleaseOrchestrator.Environment.Readiness;
using StellaOps.ReleaseOrchestrator.Environment.Region;
using StellaOps.ReleaseOrchestrator.Environment.Rename;
using EnvModels = StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.Concelier.WebService.Extensions;
/// <summary>
/// API endpoints for release topology setup: regions, infrastructure bindings,
/// readiness validation, rename operations, and deletion lifecycle.
/// </summary>
internal static class TopologySetupEndpointExtensions
{
private const string TopologyManagePolicy = "Topology.Manage";
private const string TopologyReadPolicy = "Topology.Read";
private const string TopologyAdminPolicy = "Topology.Admin";
public static void MapTopologySetupEndpoints(this WebApplication app)
{
MapRegionEndpoints(app);
MapInfrastructureBindingEndpoints(app);
MapReadinessEndpoints(app);
MapRenameEndpoints(app);
MapDeletionEndpoints(app);
}
// ── Region Endpoints ────────────────────────────────────────
private static void MapRegionEndpoints(WebApplication app)
{
var group = app.MapGroup("/api/v1/regions")
.WithTags("Regions");
group.MapPost("/", async (
[FromBody] CreateRegionApiRequest body,
[FromServices] IRegionService regionService,
CancellationToken ct) =>
{
var region = await regionService.CreateAsync(new CreateRegionRequest(
body.Name, body.DisplayName, body.Description,
body.CryptoProfile ?? "international", body.SortOrder ?? 0), ct);
return HttpResults.Created($"/api/v1/regions/{region.Id}", MapRegion(region));
})
.WithName("CreateRegion")
.WithSummary("Create a new region")
.Produces<RegionResponse>(StatusCodes.Status201Created)
.RequireAuthorization(TopologyManagePolicy);
group.MapGet("/", async (
[FromServices] IRegionService regionService,
CancellationToken ct) =>
{
var regions = await regionService.ListAsync(ct);
return HttpResults.Ok(new RegionListResponse
{
Items = regions.Select(MapRegion).ToList(),
TotalCount = regions.Count
});
})
.WithName("ListRegions")
.WithSummary("List all regions for the current tenant")
.Produces<RegionListResponse>(StatusCodes.Status200OK)
.RequireAuthorization(TopologyReadPolicy);
group.MapGet("/{id:guid}", async (
Guid id,
[FromServices] IRegionService regionService,
CancellationToken ct) =>
{
var region = await regionService.GetAsync(id, ct);
return region is not null
? HttpResults.Ok(MapRegion(region))
: HttpResults.NotFound();
})
.WithName("GetRegion")
.WithSummary("Get a region by ID")
.Produces<RegionResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(TopologyReadPolicy);
group.MapPut("/{id:guid}", async (
Guid id,
[FromBody] UpdateRegionApiRequest body,
[FromServices] IRegionService regionService,
CancellationToken ct) =>
{
var region = await regionService.UpdateAsync(id, new UpdateRegionRequest(
body.DisplayName, body.Description, body.CryptoProfile, body.SortOrder, body.Status), ct);
return HttpResults.Ok(MapRegion(region));
})
.WithName("UpdateRegion")
.WithSummary("Update a region")
.Produces<RegionResponse>(StatusCodes.Status200OK)
.RequireAuthorization(TopologyManagePolicy);
group.MapDelete("/{id:guid}", async (
Guid id,
[FromServices] IRegionService regionService,
CancellationToken ct) =>
{
await regionService.DeleteAsync(id, ct);
return HttpResults.NoContent();
})
.WithName("DeleteRegion")
.WithSummary("Delete a region")
.Produces(StatusCodes.Status204NoContent)
.RequireAuthorization(TopologyAdminPolicy);
}
// ── Infrastructure Binding Endpoints ─────────────────────────
private static void MapInfrastructureBindingEndpoints(WebApplication app)
{
var group = app.MapGroup("/api/v1/infrastructure-bindings")
.WithTags("Infrastructure Bindings");
group.MapPost("/", async (
[FromBody] BindInfrastructureApiRequest body,
[FromServices] IInfrastructureBindingService bindingService,
CancellationToken ct) =>
{
var scopeType = Enum.Parse<EnvModels.BindingScopeType>(body.ScopeType, ignoreCase: true);
var role = Enum.Parse<EnvModels.BindingRole>(body.BindingRole, ignoreCase: true);
var binding = await bindingService.BindAsync(new BindInfrastructureRequest(
body.IntegrationId, scopeType, body.ScopeId, role, body.Priority ?? 0), ct);
return HttpResults.Created($"/api/v1/infrastructure-bindings/{binding.Id}", MapBinding(binding));
})
.WithName("CreateInfrastructureBinding")
.WithSummary("Bind an integration to a scope")
.Produces<InfrastructureBindingResponse>(StatusCodes.Status201Created)
.RequireAuthorization(TopologyManagePolicy);
group.MapDelete("/{id:guid}", async (
Guid id,
[FromServices] IInfrastructureBindingService bindingService,
CancellationToken ct) =>
{
await bindingService.UnbindAsync(id, ct);
return HttpResults.NoContent();
})
.WithName("DeleteInfrastructureBinding")
.WithSummary("Remove an infrastructure binding")
.Produces(StatusCodes.Status204NoContent)
.RequireAuthorization(TopologyManagePolicy);
group.MapGet("/", async (
[FromQuery] string scopeType,
[FromQuery] Guid? scopeId,
[FromServices] IInfrastructureBindingService bindingService,
CancellationToken ct) =>
{
var scope = Enum.Parse<EnvModels.BindingScopeType>(scopeType, ignoreCase: true);
var bindings = await bindingService.ListByScopeAsync(scope, scopeId, ct);
return HttpResults.Ok(new InfrastructureBindingListResponse
{
Items = bindings.Select(MapBinding).ToList(),
TotalCount = bindings.Count
});
})
.WithName("ListInfrastructureBindings")
.WithSummary("List bindings by scope")
.Produces<InfrastructureBindingListResponse>(StatusCodes.Status200OK)
.RequireAuthorization(TopologyReadPolicy);
group.MapGet("/resolve", async (
[FromQuery] Guid environmentId,
[FromQuery] string role,
[FromServices] IInfrastructureBindingService bindingService,
CancellationToken ct) =>
{
var bindingRole = Enum.Parse<EnvModels.BindingRole>(role, ignoreCase: true);
var binding = await bindingService.ResolveAsync(environmentId, bindingRole, ct);
return binding is not null
? HttpResults.Ok(MapBinding(binding))
: HttpResults.NotFound();
})
.WithName("ResolveInfrastructureBinding")
.WithSummary("Resolve a binding with inheritance cascade")
.Produces<InfrastructureBindingResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(TopologyReadPolicy);
group.MapGet("/resolve-all", async (
[FromQuery] Guid environmentId,
[FromServices] IInfrastructureBindingService bindingService,
CancellationToken ct) =>
{
var resolution = await bindingService.ResolveAllAsync(environmentId, ct);
return HttpResults.Ok(MapResolution(resolution));
})
.WithName("ResolveAllInfrastructureBindings")
.WithSummary("Resolve all binding roles with inheritance")
.Produces<InfrastructureBindingResolutionResponse>(StatusCodes.Status200OK)
.RequireAuthorization(TopologyReadPolicy);
group.MapPost("/{id:guid}/test", async (
Guid id,
[FromServices] IInfrastructureBindingService bindingService,
CancellationToken ct) =>
{
var result = await bindingService.TestBindingAsync(id, ct);
return HttpResults.Ok(result);
})
.WithName("TestInfrastructureBinding")
.WithSummary("Test binding connectivity")
.Produces<object>(StatusCodes.Status200OK)
.RequireAuthorization(TopologyManagePolicy);
}
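One caveat about the binding handlers above: `Enum.Parse` throws `ArgumentException` on an unknown value, which surfaces as an unhandled 500 rather than a validation error. This is not what the commit does; it is a hedged sketch of a `TryParse` guard that would return a 400 instead, reusing the `EnvModels.BindingScopeType` enum from this file:

```csharp
// Hedged sketch (not in the committed diff): validate the incoming enum
// string before use so malformed input yields 400, not 500.
if (!Enum.TryParse<EnvModels.BindingScopeType>(body.ScopeType, ignoreCase: true, out var scopeType))
{
    return HttpResults.BadRequest(new { error = "invalid_scope_type", value = body.ScopeType });
}
```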
// ── Readiness Endpoints ──────────────────────────────────────
private static void MapReadinessEndpoints(WebApplication app)
{
var targets = app.MapGroup("/api/v1/targets")
.WithTags("Topology Readiness");
targets.MapPost("/{id:guid}/validate", async (
Guid id,
[FromServices] ITopologyReadinessService readinessService,
CancellationToken ct) =>
{
var report = await readinessService.ValidateAsync(id, ct);
return HttpResults.Ok(MapReport(report));
})
.WithName("ValidateTarget")
.WithSummary("Run all readiness gates for a target")
.Produces<TopologyPointReportResponse>(StatusCodes.Status200OK);
targets.MapGet("/{id:guid}/readiness", async (
Guid id,
[FromServices] ITopologyReadinessService readinessService,
CancellationToken ct) =>
{
var report = await readinessService.GetLatestAsync(id, ct);
return report is not null
? HttpResults.Ok(MapReport(report))
: HttpResults.NotFound();
})
.WithName("GetTargetReadiness")
.WithSummary("Get latest readiness report for a target")
.Produces<TopologyPointReportResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound);
var envs = app.MapGroup("/api/v1/environments")
.WithTags("Topology Readiness");
envs.MapGet("/{id:guid}/readiness", async (
Guid id,
[FromServices] ITopologyReadinessService readinessService,
CancellationToken ct) =>
{
var reports = await readinessService.ListByEnvironmentAsync(id, ct);
return HttpResults.Ok(new ReadinessListResponse
{
Items = reports.Select(MapReport).ToList(),
TotalCount = reports.Count
});
})
.WithName("GetEnvironmentReadiness")
.WithSummary("Get readiness for all targets in an environment")
.Produces<ReadinessListResponse>(StatusCodes.Status200OK);
}
// ── Rename Endpoints ─────────────────────────────────────────
private static void MapRenameEndpoints(WebApplication app)
{
var entityTypes = new Dictionary<string, RenameEntityType>
{
["regions"] = RenameEntityType.Region,
["environments"] = RenameEntityType.Environment,
["targets"] = RenameEntityType.Target,
["agents"] = RenameEntityType.Agent,
["integrations"] = RenameEntityType.Integration
};
foreach (var (path, entityType) in entityTypes)
{
app.MapPatch($"/api/v1/{path}/{{id:guid}}/name", async (
Guid id,
[FromBody] RenameApiRequest body,
[FromServices] ITopologyRenameService renameService,
CancellationToken ct) =>
{
var result = await renameService.RenameAsync(
new RenameRequest(entityType, id, body.Name, body.DisplayName), ct);
if (result.Success)
return HttpResults.Ok(result);
if (result.Error == "name_conflict")
return HttpResults.Conflict(result);
return HttpResults.BadRequest(result);
})
.WithName($"Rename{entityType}")
.WithSummary($"Rename a {entityType.ToString().ToLowerInvariant()}")
.WithTags("Topology Rename")
.Produces<RenameResult>(StatusCodes.Status200OK)
.Produces<RenameResult>(StatusCodes.Status409Conflict)
.RequireAuthorization(TopologyManagePolicy);
}
}
// ── Deletion Endpoints ───────────────────────────────────────
private static void MapDeletionEndpoints(WebApplication app)
{
var entityPaths = new Dictionary<string, EnvModels.DeletionEntityType>
{
["regions"] = EnvModels.DeletionEntityType.Region,
["environments"] = EnvModels.DeletionEntityType.Environment,
["targets"] = EnvModels.DeletionEntityType.Target,
["agents"] = EnvModels.DeletionEntityType.Agent,
["integrations"] = EnvModels.DeletionEntityType.Integration
};
// Request deletion for each entity type
foreach (var (path, entityType) in entityPaths)
{
app.MapPost($"/api/v1/{path}/{{id:guid}}/request-delete", async (
Guid id,
[FromBody] RequestDeleteApiRequest? body,
[FromServices] IPendingDeletionService deletionService,
CancellationToken ct) =>
{
var deletion = await deletionService.RequestDeletionAsync(
new DeletionRequest(entityType, id, body?.Reason), ct);
return HttpResults.Accepted($"/api/v1/pending-deletions/{deletion.Id}", MapDeletion(deletion));
})
.WithName($"RequestDelete{entityType}")
.WithSummary($"Request deletion of a {entityType.ToString().ToLowerInvariant()} with cool-off")
.WithTags("Topology Deletion")
.Produces<PendingDeletionResponse>(StatusCodes.Status202Accepted)
.RequireAuthorization(TopologyManagePolicy);
}
// Pending deletion management
var deletionGroup = app.MapGroup("/api/v1/pending-deletions")
.WithTags("Topology Deletion");
deletionGroup.MapPost("/{id:guid}/confirm", async (
Guid id,
[FromServices] IPendingDeletionService deletionService,
CancellationToken ct) =>
{
// In real impl, get confirmedBy from the current user context
var deletion = await deletionService.ConfirmDeletionAsync(id, Guid.Empty, ct);
return HttpResults.Ok(MapDeletion(deletion));
})
.WithName("ConfirmDeletion")
.WithSummary("Confirm a pending deletion after cool-off expires")
.Produces<PendingDeletionResponse>(StatusCodes.Status200OK)
.RequireAuthorization(TopologyAdminPolicy);
deletionGroup.MapPost("/{id:guid}/cancel", async (
Guid id,
[FromServices] IPendingDeletionService deletionService,
CancellationToken ct) =>
{
await deletionService.CancelDeletionAsync(id, Guid.Empty, ct);
return HttpResults.NoContent();
})
.WithName("CancelDeletion")
.WithSummary("Cancel a pending deletion")
.Produces(StatusCodes.Status204NoContent)
.RequireAuthorization(TopologyManagePolicy);
deletionGroup.MapGet("/", async (
[FromServices] IPendingDeletionService deletionService,
CancellationToken ct) =>
{
var deletions = await deletionService.ListPendingAsync(ct);
return HttpResults.Ok(new PendingDeletionListResponse
{
Items = deletions.Select(MapDeletion).ToList(),
TotalCount = deletions.Count
});
})
.WithName("ListPendingDeletions")
.WithSummary("List all pending deletions for the current tenant")
.Produces<PendingDeletionListResponse>(StatusCodes.Status200OK)
.RequireAuthorization(TopologyReadPolicy);
deletionGroup.MapGet("/{id:guid}", async (
Guid id,
[FromServices] IPendingDeletionService deletionService,
CancellationToken ct) =>
{
var deletion = await deletionService.GetAsync(id, ct);
return deletion is not null
? HttpResults.Ok(MapDeletion(deletion))
: HttpResults.NotFound();
})
.WithName("GetPendingDeletion")
.WithSummary("Get pending deletion details with cascade summary")
.Produces<PendingDeletionResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(TopologyReadPolicy);
}
// ── Mappers ──────────────────────────────────────────────────
private static RegionResponse MapRegion(EnvModels.Region r) => new()
{
Id = r.Id,
TenantId = r.TenantId,
Name = r.Name,
DisplayName = r.DisplayName,
Description = r.Description,
CryptoProfile = r.CryptoProfile,
SortOrder = r.SortOrder,
Status = r.Status.ToString().ToLowerInvariant(),
CreatedAt = r.CreatedAt,
UpdatedAt = r.UpdatedAt
};
private static InfrastructureBindingResponse MapBinding(EnvModels.InfrastructureBinding b) => new()
{
Id = b.Id,
IntegrationId = b.IntegrationId,
ScopeType = b.ScopeType.ToString().ToLowerInvariant(),
ScopeId = b.ScopeId,
BindingRole = b.Role.ToString().ToLowerInvariant(),
Priority = b.Priority,
IsActive = b.IsActive,
CreatedAt = b.CreatedAt
};
private static InfrastructureBindingResolutionResponse MapResolution(EnvModels.InfrastructureBindingResolution r) => new()
{
Registry = r.Registry is not null ? new ResolvedBindingResponse
{
Binding = MapBinding(r.Registry.Binding),
ResolvedFrom = r.Registry.ResolvedFrom.ToString().ToLowerInvariant()
} : null,
Vault = r.Vault is not null ? new ResolvedBindingResponse
{
Binding = MapBinding(r.Vault.Binding),
ResolvedFrom = r.Vault.ResolvedFrom.ToString().ToLowerInvariant()
} : null,
SettingsStore = r.SettingsStore is not null ? new ResolvedBindingResponse
{
Binding = MapBinding(r.SettingsStore.Binding),
ResolvedFrom = r.SettingsStore.ResolvedFrom.ToString().ToLowerInvariant()
} : null
};
private static TopologyPointReportResponse MapReport(EnvModels.TopologyPointReport r) => new()
{
TargetId = r.TargetId,
EnvironmentId = r.EnvironmentId,
IsReady = r.IsReady,
Gates = r.Gates.Select(g => new GateResultResponse
{
GateName = g.GateName,
Status = g.Status.ToString().ToLowerInvariant(),
Message = g.Message,
CheckedAt = g.CheckedAt,
DurationMs = g.DurationMs
}).ToList(),
EvaluatedAt = r.EvaluatedAt
};
private static PendingDeletionResponse MapDeletion(EnvModels.PendingDeletion d) => new()
{
PendingDeletionId = d.Id,
EntityType = d.EntityType.ToString().ToLowerInvariant(),
EntityName = d.EntityName,
Status = d.Status.ToString().ToLowerInvariant(),
CoolOffExpiresAt = d.CoolOffExpiresAt,
CanConfirmAfter = d.CoolOffExpiresAt,
CascadeSummary = new CascadeSummaryResponse
{
ChildEnvironments = d.CascadeSummary.ChildEnvironments,
ChildTargets = d.CascadeSummary.ChildTargets,
BoundAgents = d.CascadeSummary.BoundAgents,
InfrastructureBindings = d.CascadeSummary.InfrastructureBindings,
ActiveHealthSchedules = d.CascadeSummary.ActiveHealthSchedules,
PendingDeployments = d.CascadeSummary.PendingDeployments
},
RequestedAt = d.RequestedAt,
ConfirmedAt = d.ConfirmedAt,
CompletedAt = d.CompletedAt
};
// ── API Request/Response DTOs ────────────────────────────────
internal sealed class CreateRegionApiRequest
{
public required string Name { get; set; }
public required string DisplayName { get; set; }
public string? Description { get; set; }
public string? CryptoProfile { get; set; }
public int? SortOrder { get; set; }
}
internal sealed class UpdateRegionApiRequest
{
public string? DisplayName { get; set; }
public string? Description { get; set; }
public string? CryptoProfile { get; set; }
public int? SortOrder { get; set; }
public EnvModels.RegionStatus? Status { get; set; }
}
internal sealed class BindInfrastructureApiRequest
{
public required Guid IntegrationId { get; set; }
public required string ScopeType { get; set; }
public Guid? ScopeId { get; set; }
public required string BindingRole { get; set; }
public int? Priority { get; set; }
}
internal sealed class RenameApiRequest
{
public required string Name { get; set; }
public required string DisplayName { get; set; }
}
internal sealed class RequestDeleteApiRequest
{
public string? Reason { get; set; }
}
internal sealed class RegionResponse
{
public Guid Id { get; set; }
public Guid TenantId { get; set; }
public required string Name { get; set; }
public required string DisplayName { get; set; }
public string? Description { get; set; }
public required string CryptoProfile { get; set; }
public int SortOrder { get; set; }
public required string Status { get; set; }
public DateTimeOffset CreatedAt { get; set; }
public DateTimeOffset UpdatedAt { get; set; }
}
internal sealed class RegionListResponse
{
public required List<RegionResponse> Items { get; set; }
public int TotalCount { get; set; }
}
internal sealed class InfrastructureBindingResponse
{
public Guid Id { get; set; }
public Guid IntegrationId { get; set; }
public required string ScopeType { get; set; }
public Guid? ScopeId { get; set; }
public required string BindingRole { get; set; }
public int Priority { get; set; }
public bool IsActive { get; set; }
public DateTimeOffset CreatedAt { get; set; }
}
internal sealed class InfrastructureBindingListResponse
{
public required List<InfrastructureBindingResponse> Items { get; set; }
public int TotalCount { get; set; }
}
internal sealed class InfrastructureBindingResolutionResponse
{
public ResolvedBindingResponse? Registry { get; set; }
public ResolvedBindingResponse? Vault { get; set; }
public ResolvedBindingResponse? SettingsStore { get; set; }
}
internal sealed class ResolvedBindingResponse
{
public required InfrastructureBindingResponse Binding { get; set; }
public required string ResolvedFrom { get; set; }
}
internal sealed class TopologyPointReportResponse
{
public Guid TargetId { get; set; }
public Guid EnvironmentId { get; set; }
public bool IsReady { get; set; }
public required List<GateResultResponse> Gates { get; set; }
public DateTimeOffset EvaluatedAt { get; set; }
}
internal sealed class GateResultResponse
{
public required string GateName { get; set; }
public required string Status { get; set; }
public string? Message { get; set; }
public DateTimeOffset? CheckedAt { get; set; }
public int? DurationMs { get; set; }
}
internal sealed class ReadinessListResponse
{
public required List<TopologyPointReportResponse> Items { get; set; }
public int TotalCount { get; set; }
}
internal sealed class PendingDeletionResponse
{
public Guid PendingDeletionId { get; set; }
public required string EntityType { get; set; }
public required string EntityName { get; set; }
public required string Status { get; set; }
public DateTimeOffset CoolOffExpiresAt { get; set; }
public DateTimeOffset CanConfirmAfter { get; set; }
public required CascadeSummaryResponse CascadeSummary { get; set; }
public DateTimeOffset RequestedAt { get; set; }
public DateTimeOffset? ConfirmedAt { get; set; }
public DateTimeOffset? CompletedAt { get; set; }
}
internal sealed class CascadeSummaryResponse
{
public int ChildEnvironments { get; set; }
public int ChildTargets { get; set; }
public int BoundAgents { get; set; }
public int InfrastructureBindings { get; set; }
public int ActiveHealthSchedules { get; set; }
public int PendingDeployments { get; set; }
}
internal sealed class PendingDeletionListResponse
{
public required List<PendingDeletionResponse> Items { get; set; }
public int TotalCount { get; set; }
}
}
View File
@@ -260,10 +260,6 @@ public static class ConcelierOptionsValidator
}
}
if (mirror.Enabled && mirror.Domains.Count == 0)
{
throw new InvalidOperationException("Mirror distribution requires at least one domain when enabled.");
}
}
private static void ValidateAdvisoryChunks(ConcelierOptions.AdvisoryChunkOptions chunks)
View File
@@ -572,6 +572,93 @@ builder.Services.Configure<MirrorConfigOptions>(builder.Configuration.GetSection
builder.Services.AddHttpClient("MirrorTest");
builder.Services.AddHttpClient("MirrorConsumer");
// ── Topology Setup Services (in-memory stores, future: DB-backed) ──
{
// Shared tenant/user ID provider for topology services
Func<Guid> topologyTenantProvider = () => Guid.Empty; // Will be populated per-request by middleware
Func<Guid> topologyUserProvider = () => Guid.Empty;
// Region
builder.Services.AddSingleton(new StellaOps.ReleaseOrchestrator.Environment.Region.InMemoryRegionStore(topologyTenantProvider));
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Region.IRegionStore>(sp =>
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Region.InMemoryRegionStore>());
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Region.IRegionService>(sp =>
new StellaOps.ReleaseOrchestrator.Environment.Region.RegionService(
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Region.IRegionStore>(),
TimeProvider.System,
sp.GetRequiredService<ILoggerFactory>().CreateLogger<StellaOps.ReleaseOrchestrator.Environment.Region.RegionService>(),
topologyTenantProvider, topologyUserProvider));
// Environment (uses existing stores if available, or creates new)
builder.Services.AddSingleton(new StellaOps.ReleaseOrchestrator.Environment.Store.InMemoryEnvironmentStore(topologyTenantProvider));
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Store.IEnvironmentStore>(sp =>
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Store.InMemoryEnvironmentStore>());
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Services.IEnvironmentService>(sp =>
new StellaOps.ReleaseOrchestrator.Environment.Services.EnvironmentService(
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Store.IEnvironmentStore>(),
TimeProvider.System,
sp.GetRequiredService<ILoggerFactory>().CreateLogger<StellaOps.ReleaseOrchestrator.Environment.Services.EnvironmentService>(),
topologyTenantProvider, topologyUserProvider));
// Target
builder.Services.AddSingleton(new StellaOps.ReleaseOrchestrator.Environment.Target.InMemoryTargetStore(topologyTenantProvider));
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Target.ITargetStore>(sp =>
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Target.InMemoryTargetStore>());
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Target.ITargetConnectionTester, StellaOps.ReleaseOrchestrator.Environment.Target.NoOpTargetConnectionTester>();
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Target.ITargetRegistry>(sp =>
new StellaOps.ReleaseOrchestrator.Environment.Target.TargetRegistry(
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Target.ITargetStore>(),
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Store.IEnvironmentStore>(),
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Target.ITargetConnectionTester>(),
TimeProvider.System,
sp.GetRequiredService<ILoggerFactory>().CreateLogger<StellaOps.ReleaseOrchestrator.Environment.Target.TargetRegistry>(),
topologyTenantProvider));
// Infrastructure Binding
builder.Services.AddSingleton(new StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding.InMemoryInfrastructureBindingStore(topologyTenantProvider));
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding.IInfrastructureBindingStore>(sp =>
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding.InMemoryInfrastructureBindingStore>());
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding.IInfrastructureBindingService>(sp =>
new StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding.InfrastructureBindingService(
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding.IInfrastructureBindingStore>(),
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Services.IEnvironmentService>(),
sp.GetRequiredService<ILoggerFactory>().CreateLogger<StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding.InfrastructureBindingService>(),
TimeProvider.System, topologyTenantProvider, topologyUserProvider));
// Readiness
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Readiness.InMemoryTopologyPointStatusStore>();
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Readiness.ITopologyPointStatusStore>(sp =>
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Readiness.InMemoryTopologyPointStatusStore>());
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Readiness.ITopologyReadinessService>(sp =>
new StellaOps.ReleaseOrchestrator.Environment.Readiness.TopologyReadinessService(
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Target.ITargetRegistry>(),
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding.IInfrastructureBindingService>(),
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Readiness.ITopologyPointStatusStore>(),
sp.GetRequiredService<ILoggerFactory>().CreateLogger<StellaOps.ReleaseOrchestrator.Environment.Readiness.TopologyReadinessService>(),
TimeProvider.System, topologyTenantProvider));
// Rename
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Rename.ITopologyRenameService>(sp =>
new StellaOps.ReleaseOrchestrator.Environment.Rename.TopologyRenameService(
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Region.IRegionService>(),
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Services.IEnvironmentService>(),
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Target.ITargetRegistry>(),
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Region.IRegionStore>(),
sp.GetRequiredService<ILoggerFactory>().CreateLogger<StellaOps.ReleaseOrchestrator.Environment.Rename.TopologyRenameService>()));
// Deletion
builder.Services.AddSingleton(new StellaOps.ReleaseOrchestrator.Environment.Deletion.InMemoryPendingDeletionStore(topologyTenantProvider));
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Deletion.IPendingDeletionStore>(sp =>
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Deletion.InMemoryPendingDeletionStore>());
builder.Services.AddSingleton<StellaOps.ReleaseOrchestrator.Environment.Deletion.IPendingDeletionService>(sp =>
new StellaOps.ReleaseOrchestrator.Environment.Deletion.PendingDeletionService(
sp.GetRequiredService<StellaOps.ReleaseOrchestrator.Environment.Deletion.IPendingDeletionStore>(),
TimeProvider.System,
sp.GetRequiredService<ILoggerFactory>().CreateLogger<StellaOps.ReleaseOrchestrator.Environment.Deletion.PendingDeletionService>(),
topologyTenantProvider, topologyUserProvider));
builder.Services.AddHostedService<StellaOps.ReleaseOrchestrator.Environment.Deletion.DeletionBackgroundWorker>();
}
// Mirror distribution options binding and export scheduler (background bundle refresh, TASK-006b)
builder.Services.Configure<MirrorDistributionOptions>(builder.Configuration.GetSection(MirrorDistributionOptions.SectionName));
builder.Services.AddHostedService<MirrorExportScheduler>();
@@ -850,11 +937,17 @@ builder.Services.AddAuthorization(options =>
options.AddStellaOpsScopePolicy(ObservationsPolicyName, StellaOpsScopes.VulnView);
options.AddStellaOpsScopePolicy(AdvisoryIngestPolicyName, StellaOpsScopes.AdvisoryIngest);
options.AddStellaOpsScopePolicy(AdvisoryReadPolicyName, StellaOpsScopes.AdvisoryRead);
options.AddStellaOpsAnyScopePolicy("Concelier.Sources.Manage", StellaOpsScopes.IntegrationWrite, StellaOpsScopes.IntegrationOperate);
options.AddStellaOpsScopePolicy(AocVerifyPolicyName, StellaOpsScopes.AdvisoryRead, StellaOpsScopes.AocVerify);
options.AddStellaOpsScopePolicy(CanonicalReadPolicyName, StellaOpsScopes.AdvisoryRead);
options.AddStellaOpsScopePolicy(CanonicalIngestPolicyName, StellaOpsScopes.AdvisoryIngest);
options.AddStellaOpsScopePolicy(InterestReadPolicyName, StellaOpsScopes.VulnView);
options.AddStellaOpsScopePolicy(InterestAdminPolicyName, StellaOpsScopes.AdvisoryIngest);
// Topology setup policies (regions, infra bindings, readiness, rename, deletion)
options.AddStellaOpsAnyScopePolicy("Topology.Read", StellaOpsScopes.OrchRead, StellaOpsScopes.PlatformContextRead);
options.AddStellaOpsAnyScopePolicy("Topology.Manage", StellaOpsScopes.OrchOperate, StellaOpsScopes.IntegrationWrite);
options.AddStellaOpsAnyScopePolicy("Topology.Admin", StellaOpsScopes.OrchOperate);
});
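
The `AddStellaOpsAnyScopePolicy` registrations above create policies that pass when the principal carries at least one of the listed scopes. A hypothetical expansion in plain ASP.NET Core authorization terms (the helper name is real; the `"scope"` claim type and space-delimited value format are assumptions about what it does internally):

```csharp
// Sketch only: roughly what AddStellaOpsAnyScopePolicy("Topology.Read", ...)
// presumably registers. Claim type "scope" and its format are assumptions.
options.AddPolicy("Topology.Read", policy =>
    policy.RequireAssertion(context =>
        context.User.FindAll("scope")
            .SelectMany(c => c.Value.Split(' ', StringSplitOptions.RemoveEmptyEntries))
            .Intersect(new[] { StellaOpsScopes.OrchRead, StellaOpsScopes.PlatformContextRead })
            .Any()));
```

Registering the policies here is the actual fix: endpoints that reference a policy name that was never registered throw `InvalidOperationException` at request time, which is what every topology API call previously hit.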
var pluginHostOptions = BuildPluginOptions(concelierOptions, builder.Environment.ContentRootPath);
@@ -968,6 +1061,9 @@ app.MapFeedMirrorManagementEndpoints();
// Mirror domain management CRUD endpoints
app.MapMirrorDomainManagementEndpoints();
// Topology setup endpoints (regions, infrastructure bindings, readiness, rename, deletion)
app.MapTopologySetupEndpoints();
app.MapGet("/.well-known/openapi", ([FromServices] OpenApiDiscoveryDocumentProvider provider, HttpContext context) =>
{
var (payload, etag) = provider.GetDocument();


@@ -1,4 +1,5 @@
using System.Collections.Concurrent;
using Microsoft.Extensions.Options;
using StellaOps.Concelier.WebService.Extensions;
namespace StellaOps.Concelier.WebService.Services;
@@ -11,11 +12,29 @@ namespace StellaOps.Concelier.WebService.Services;
public sealed class InMemoryMirrorDomainStore : IMirrorDomainStore, IMirrorConfigStore, IMirrorConsumerConfigStore, IMirrorBundleImportStore
{
private readonly ConcurrentDictionary<string, MirrorDomainRecord> _domains = new(StringComparer.OrdinalIgnoreCase);
private readonly object _configLock = new();
private readonly object _consumerConfigLock = new();
private MirrorConfigRecord _config;
private ConsumerConfigResponse _consumerConfig = new();
private volatile MirrorImportStatusRecord? _latestImportStatus;
public IReadOnlyList<MirrorDomainRecord> GetAllDomains() => _domains.Values.ToList();
public InMemoryMirrorDomainStore(IOptions<MirrorConfigOptions> options)
{
var current = options.Value;
_config = new MirrorConfigRecord
{
Mode = NormalizeMode(current.Mode),
OutputRoot = current.OutputRoot,
ConsumerBaseAddress = current.ConsumerBaseAddress,
SigningEnabled = current.SigningEnabled,
SigningAlgorithm = current.SigningAlgorithm,
SigningKeyId = current.SigningKeyId,
AutoRefreshEnabled = current.AutoRefreshEnabled,
RefreshIntervalMinutes = current.RefreshIntervalMinutes,
};
}
public IReadOnlyList<MirrorDomainRecord> GetAllDomains() => _domains.Values.OrderBy(domain => domain.DisplayName, StringComparer.OrdinalIgnoreCase).ToList();
public MirrorDomainRecord? GetDomain(string domainId) => _domains.GetValueOrDefault(domainId);
@@ -31,8 +50,43 @@ public sealed class InMemoryMirrorDomainStore : IMirrorDomainStore, IMirrorConfi
return Task.CompletedTask;
}
public MirrorConfigRecord GetConfig()
{
lock (_configLock)
{
return new MirrorConfigRecord
{
Mode = _config.Mode,
OutputRoot = _config.OutputRoot,
ConsumerBaseAddress = _config.ConsumerBaseAddress,
SigningEnabled = _config.SigningEnabled,
SigningAlgorithm = _config.SigningAlgorithm,
SigningKeyId = _config.SigningKeyId,
AutoRefreshEnabled = _config.AutoRefreshEnabled,
RefreshIntervalMinutes = _config.RefreshIntervalMinutes,
};
}
}
public Task UpdateConfigAsync(UpdateMirrorConfigRequest request, CancellationToken ct = default)
{
lock (_configLock)
{
_config.Mode = NormalizeMode(request.Mode) ?? _config.Mode;
_config.ConsumerBaseAddress = string.IsNullOrWhiteSpace(request.ConsumerBaseAddress)
? _config.ConsumerBaseAddress
: request.ConsumerBaseAddress.Trim();
_config.SigningEnabled = request.Signing?.Enabled ?? _config.SigningEnabled;
_config.SigningAlgorithm = string.IsNullOrWhiteSpace(request.Signing?.Algorithm)
? _config.SigningAlgorithm
: request.Signing!.Algorithm!.Trim();
_config.SigningKeyId = string.IsNullOrWhiteSpace(request.Signing?.KeyId)
? _config.SigningKeyId
: request.Signing!.KeyId!.Trim();
_config.AutoRefreshEnabled = request.AutoRefreshEnabled ?? _config.AutoRefreshEnabled;
_config.RefreshIntervalMinutes = request.RefreshIntervalMinutes ?? _config.RefreshIntervalMinutes;
}
return Task.CompletedTask;
}
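
The update above follows a "null or whitespace means keep the current value" merge for every field. A condensed sketch of that pattern (helper name `Keep` is hypothetical, introduced only for illustration):

```csharp
// Hypothetical helper condensing the partial-update merge used above:
// a missing/blank incoming value preserves the stored one, otherwise trim and replace.
static string Keep(string? incoming, string current) =>
    string.IsNullOrWhiteSpace(incoming) ? current : incoming.Trim();

// Keep(null, "Direct")        → "Direct"
// Keep("  Mirror ", "Direct") → "Mirror"
```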
@@ -64,10 +118,15 @@ public sealed class InMemoryMirrorDomainStore : IMirrorDomainStore, IMirrorConfi
}
: null,
Connected = !string.IsNullOrWhiteSpace(request.BaseAddress),
LastSync = _consumerConfig.LastSync, // preserve existing sync timestamp
LastSync = DateTimeOffset.UtcNow,
};
}
lock (_configLock)
{
_config.ConsumerBaseAddress = request.BaseAddress;
}
return Task.CompletedTask;
}
@@ -76,4 +135,19 @@ public sealed class InMemoryMirrorDomainStore : IMirrorDomainStore, IMirrorConfi
public MirrorImportStatusRecord? GetLatestStatus() => _latestImportStatus;
public void SetStatus(MirrorImportStatusRecord status) => _latestImportStatus = status;
private static string NormalizeMode(string? mode)
{
if (string.IsNullOrWhiteSpace(mode))
{
return "Direct";
}
return mode.Trim().ToLowerInvariant() switch
{
"mirror" => "Mirror",
"hybrid" => "Hybrid",
_ => "Direct",
};
}
}


@@ -48,6 +48,8 @@
<ProjectReference Include="../../Router/__Libraries/StellaOps.Router.AspNet/StellaOps.Router.AspNet.csproj" />
<ProjectReference Include="../../__Libraries/StellaOps.Replay.Core/StellaOps.Replay.Core.csproj" />
<ProjectReference Include="../../__Libraries/StellaOps.Localization/StellaOps.Localization.csproj" />
<ProjectReference Include="../../ReleaseOrchestrator/__Libraries/StellaOps.ReleaseOrchestrator.Environment/StellaOps.ReleaseOrchestrator.Environment.csproj" />
<ProjectReference Include="../../ReleaseOrchestrator/__Libraries/StellaOps.ReleaseOrchestrator.Agent/StellaOps.ReleaseOrchestrator.Agent.csproj" />
</ItemGroup>
<ItemGroup>
<EmbeddedResource Include="Translations\*.json" />


@@ -0,0 +1,41 @@
using System;
using StellaOps.Concelier.WebService.Options;
using StellaOps.TestKit;
using Xunit;
namespace StellaOps.Concelier.WebService.Tests;
public sealed class ConcelierOptionsValidatorTests
{
[Trait("Category", TestCategories.Unit)]
[Trait("Intent", "Operational")]
[Fact]
public void Validate_AllowsEnabledMirrorWithoutStaticDomains()
{
var options = new ConcelierOptions
{
PostgresStorage = new ConcelierOptions.PostgresStorageOptions
{
Enabled = true,
ConnectionString = "Host=postgres;Database=stellaops;Username=stellaops;Password=stellaops"
},
Mirror = new ConcelierOptions.MirrorOptions
{
Enabled = true,
ExportRoot = "/var/lib/concelier/jobs/mirror-exports",
ExportRootAbsolute = "/var/lib/concelier/jobs/mirror-exports",
LatestDirectoryName = "latest",
MirrorDirectoryName = "mirror"
},
Evidence = new ConcelierOptions.EvidenceBundleOptions
{
Root = "/var/lib/concelier/jobs/evidence-bundles",
RootAbsolute = "/var/lib/concelier/jobs/evidence-bundles"
}
};
var exception = Record.Exception(() => ConcelierOptionsValidator.Validate(options));
Assert.Null(exception);
}
}


@@ -1532,6 +1532,81 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
Assert.True(limitedResponse.Headers.RetryAfter!.Delta!.Value.TotalSeconds > 0);
}
[Fact]
public async Task MirrorManagementEndpointsUseAdvisorySourcesNamespaceAndGeneratePublicArtifacts()
{
using var temp = new TempDirectory();
var environment = new Dictionary<string, string?>
{
["CONCELIER_MIRROR__ENABLED"] = "true",
["CONCELIER_MIRROR__EXPORTROOT"] = temp.Path,
["CONCELIER_MIRROR__ACTIVEEXPORTID"] = "latest",
["CONCELIER_MIRROR__MAXINDEXREQUESTSPERHOUR"] = "5",
["CONCELIER_MIRROR__MAXDOWNLOADREQUESTSPERHOUR"] = "5"
};
using var factory = new ConcelierApplicationFactory(_runner.ConnectionString, environmentOverrides: environment);
using var client = factory.CreateClient();
client.DefaultRequestHeaders.Add("X-Stella-Tenant", "test-tenant");
var configResponse = await client.GetAsync("/api/v1/advisory-sources/mirror/config");
Assert.Equal(HttpStatusCode.OK, configResponse.StatusCode);
var createResponse = await client.PostAsJsonAsync(
"/api/v1/advisory-sources/mirror/domains",
new
{
domainId = "primary",
displayName = "Primary",
sourceIds = new[] { "nvd", "osv" },
exportFormat = "JSON",
rateLimits = new
{
indexRequestsPerHour = 60,
downloadRequestsPerHour = 120
},
requireAuthentication = false,
signing = new
{
enabled = false,
algorithm = "HMAC-SHA256",
keyId = string.Empty
}
});
Assert.Equal(HttpStatusCode.Created, createResponse.StatusCode);
var generateResponse = await client.PostAsync("/api/v1/advisory-sources/mirror/domains/primary/generate", content: null);
Assert.Equal(HttpStatusCode.Accepted, generateResponse.StatusCode);
var endpointsResponse = await client.GetAsync("/api/v1/advisory-sources/mirror/domains/primary/endpoints");
Assert.Equal(HttpStatusCode.OK, endpointsResponse.StatusCode);
var endpointsJson = JsonDocument.Parse(await endpointsResponse.Content.ReadAsStringAsync());
var endpoints = endpointsJson.RootElement.GetProperty("endpoints").EnumerateArray().Select(element => element.GetProperty("path").GetString()).ToList();
Assert.Contains("/concelier/exports/index.json", endpoints);
Assert.Contains("/concelier/exports/mirror/primary/manifest.json", endpoints);
Assert.Contains("/concelier/exports/mirror/primary/bundle.json", endpoints);
var statusResponse = await client.GetAsync("/api/v1/advisory-sources/mirror/domains/primary/status");
Assert.Equal(HttpStatusCode.OK, statusResponse.StatusCode);
var statusJson = JsonDocument.Parse(await statusResponse.Content.ReadAsStringAsync());
Assert.Equal("fresh", statusJson.RootElement.GetProperty("staleness").GetString());
var indexResponse = await client.GetAsync("/concelier/exports/index.json");
Assert.Equal(HttpStatusCode.OK, indexResponse.StatusCode);
var indexContent = await indexResponse.Content.ReadAsStringAsync();
Assert.Contains(@"""domainId"":""primary""", indexContent, StringComparison.Ordinal);
var manifestResponse = await client.GetAsync("/concelier/exports/mirror/primary/manifest.json");
Assert.Equal(HttpStatusCode.OK, manifestResponse.StatusCode);
var manifestContent = await manifestResponse.Content.ReadAsStringAsync();
Assert.Contains(@"""domainId"":""primary""", manifestContent, StringComparison.Ordinal);
var bundleResponse = await client.GetAsync("/concelier/exports/mirror/primary/bundle.json");
Assert.Equal(HttpStatusCode.OK, bundleResponse.StatusCode);
var bundleContent = await bundleResponse.Content.ReadAsStringAsync();
Assert.Contains(@"""domainId"":""primary""", bundleContent, StringComparison.Ordinal);
}
[Fact]
public void MergeModuleDisabledByDefault()
{
@@ -4002,4 +4077,3 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
}


@@ -136,10 +136,17 @@ public sealed class JobEngineDataSource : IAsyncDisposable
await tenantCommand.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
var quotedSchemaName = QuoteIdentifier(ResolveSchemaName(schemaName));
await using var searchPathCommand = new NpgsqlCommand(
$"SET search_path TO {quotedSchemaName}, public;",
connection);
// Build search_path: primary schema, then packs (if not already primary), then public.
// The packs schema hosts the pack registry tables (packs.packs, packs.pack_versions)
// and its enum types (pack_status, pack_version_status). Including it in every
// connection's search_path avoids "type does not exist" errors when cross-schema
// queries or enum casts reference packs-schema objects.
var resolvedSchema = ResolveSchemaName(schemaName);
var quotedSchemaName = QuoteIdentifier(resolvedSchema);
var searchPathSql = string.Equals(resolvedSchema, "packs", StringComparison.OrdinalIgnoreCase)
? $"SET search_path TO {quotedSchemaName}, public;"
: $"SET search_path TO {quotedSchemaName}, {QuoteIdentifier("packs")}, public;";
await using var searchPathCommand = new NpgsqlCommand(searchPathSql, connection);
searchPathCommand.CommandTimeout = _options.CommandTimeoutSeconds;
await searchPathCommand.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
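
The branching above can be reduced to a pure function: append `packs` to the search path unless it is already the primary schema, so `packs`-schema tables and enum types resolve on every connection. A minimal sketch (assuming `QuoteIdentifier` simply double-quotes the name):

```csharp
// Mirrors the search_path branch above: "packs" is appended only when it is
// not already the primary schema, avoiding a duplicate entry.
static string BuildSearchPath(string schema) =>
    string.Equals(schema, "packs", StringComparison.OrdinalIgnoreCase)
        ? $"SET search_path TO \"{schema}\", public;"
        : $"SET search_path TO \"{schema}\", \"packs\", public;";

// BuildSearchPath("tenant_a") → SET search_path TO "tenant_a", "packs", public;
// BuildSearchPath("packs")    → SET search_path TO "packs", public;
```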


@@ -343,80 +343,103 @@ public sealed class PostgresDeadLetterRepository : IDeadLetterRepository
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
// Get counts
long total = 0, pending = 0, replaying = 0, replayed = 0, resolved = 0, exhausted = 0, expired = 0, retryable = 0;
await using (var command = new NpgsqlCommand(statsSql, connection))
try
{
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
if (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
// Get counts
long total = 0, pending = 0, replaying = 0, replayed = 0, resolved = 0, exhausted = 0, expired = 0, retryable = 0;
await using (var command = new NpgsqlCommand(statsSql, connection))
{
total = reader.GetInt64(0);
pending = reader.GetInt64(1);
replaying = reader.GetInt64(2);
replayed = reader.GetInt64(3);
resolved = reader.GetInt64(4);
exhausted = reader.GetInt64(5);
expired = reader.GetInt64(6);
retryable = reader.GetInt64(7);
}
}
// Get by category
var byCategory = new Dictionary<ErrorCategory, long>();
await using (var command = new NpgsqlCommand(byCategorySql, connection))
{
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
if (Enum.TryParse<ErrorCategory>(reader.GetString(0), true, out var cat))
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
if (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
byCategory[cat] = reader.GetInt64(1);
total = reader.GetInt64(0);
pending = reader.GetInt64(1);
replaying = reader.GetInt64(2);
replayed = reader.GetInt64(3);
resolved = reader.GetInt64(4);
exhausted = reader.GetInt64(5);
expired = reader.GetInt64(6);
retryable = reader.GetInt64(7);
}
}
}
// Get top error codes
var topErrorCodes = new Dictionary<string, long>();
await using (var command = new NpgsqlCommand(topErrorCodesSql, connection))
{
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
// Get by category
var byCategory = new Dictionary<ErrorCategory, long>();
await using (var command = new NpgsqlCommand(byCategorySql, connection))
{
topErrorCodes[reader.GetString(0)] = reader.GetInt64(1);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
if (Enum.TryParse<ErrorCategory>(reader.GetString(0), true, out var cat))
{
byCategory[cat] = reader.GetInt64(1);
}
}
}
}
// Get top job types
var topJobTypes = new Dictionary<string, long>();
await using (var command = new NpgsqlCommand(topJobTypesSql, connection))
{
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
// Get top error codes
var topErrorCodes = new Dictionary<string, long>();
await using (var command = new NpgsqlCommand(topErrorCodesSql, connection))
{
topJobTypes[reader.GetString(0)] = reader.GetInt64(1);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
topErrorCodes[reader.GetString(0)] = reader.GetInt64(1);
}
}
}
return new DeadLetterStats(
TotalEntries: total,
PendingEntries: pending,
ReplayingEntries: replaying,
ReplayedEntries: replayed,
ResolvedEntries: resolved,
ExhaustedEntries: exhausted,
ExpiredEntries: expired,
RetryableEntries: retryable,
ByCategory: byCategory,
TopErrorCodes: topErrorCodes,
TopJobTypes: topJobTypes);
// Get top job types
var topJobTypes = new Dictionary<string, long>();
await using (var command = new NpgsqlCommand(topJobTypesSql, connection))
{
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
topJobTypes[reader.GetString(0)] = reader.GetInt64(1);
}
}
return new DeadLetterStats(
TotalEntries: total,
PendingEntries: pending,
ReplayingEntries: replaying,
ReplayedEntries: replayed,
ResolvedEntries: resolved,
ExhaustedEntries: exhausted,
ExpiredEntries: expired,
RetryableEntries: retryable,
ByCategory: byCategory,
TopErrorCodes: topErrorCodes,
TopJobTypes: topJobTypes);
}
catch (PostgresException ex) when (IsMissingTableOrAbortedTransaction(ex.SqlState))
{
_logger.LogWarning(
ex,
"Dead-letter table is not present; returning empty stats for tenant {TenantId}.",
tenantId);
return new DeadLetterStats(
TotalEntries: 0,
PendingEntries: 0,
ReplayingEntries: 0,
ReplayedEntries: 0,
ResolvedEntries: 0,
ExhaustedEntries: 0,
ExpiredEntries: 0,
RetryableEntries: 0,
ByCategory: new Dictionary<ErrorCategory, long>(),
TopErrorCodes: new Dictionary<string, long>(),
TopJobTypes: new Dictionary<string, long>());
}
}
public async Task<IReadOnlyList<DeadLetterSummary>> GetActionableSummaryAsync(
@@ -441,14 +464,25 @@ public sealed class PostgresDeadLetterRepository : IDeadLetterRepository
"Dead-letter summary function path is unavailable for tenant {TenantId}; falling back to direct table aggregation.",
tenantId);
return await ReadActionableSummaryAsync(
connection,
ActionableSummaryFallbackSql,
tenantId,
limit,
cancellationToken).ConfigureAwait(false);
try
{
return await ReadActionableSummaryAsync(
connection,
ActionableSummaryFallbackSql,
tenantId,
limit,
cancellationToken).ConfigureAwait(false);
}
catch (PostgresException fallbackEx) when (IsMissingTableOrAbortedTransaction(fallbackEx.SqlState))
{
_logger.LogWarning(
fallbackEx,
"Dead-letter table is not present during fallback; returning empty actionable summary for tenant {TenantId}.",
tenantId);
return [];
}
}
catch (PostgresException ex) when (ex.SqlState == PostgresErrorCodes.UndefinedTable)
catch (PostgresException ex) when (IsMissingTableOrAbortedTransaction(ex.SqlState))
{
_logger.LogWarning(
ex,
@@ -462,6 +496,16 @@ public sealed class PostgresDeadLetterRepository : IDeadLetterRepository
=> string.Equals(sqlState, PostgresErrorCodes.UndefinedFunction, StringComparison.Ordinal)
|| string.Equals(sqlState, PostgresErrorCodes.AmbiguousColumn, StringComparison.Ordinal);
/// <summary>
/// Returns true when the SQL state indicates the dead-letter table is missing
/// (42P01 = undefined_table) or the connection is in a failed transaction state
/// (25P02 = in_failed_sql_transaction), which can occur when a previous command
/// on the same connection already failed.
/// </summary>
internal static bool IsMissingTableOrAbortedTransaction(string? sqlState)
=> string.Equals(sqlState, PostgresErrorCodes.UndefinedTable, StringComparison.Ordinal)
|| string.Equals(sqlState, "25P02", StringComparison.Ordinal);
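
The guard treats both SQLSTATEs as "data absent, not a hard failure": `42P01` when the dead-letter table was never migrated, and `25P02` when an earlier command on the same connection already failed and poisoned the transaction. A self-contained sketch of the same check (`ShouldReturnEmpty` is a hypothetical name; the SQLSTATE values are real PostgreSQL codes):

```csharp
// Both codes mean the stats query cannot succeed on this connection right now,
// so the caller returns empty results instead of surfacing a 500.
static bool ShouldReturnEmpty(string? sqlState) =>
    sqlState is "42P01"   // undefined_table: dead-letter table never migrated
             or "25P02";  // in_failed_sql_transaction: a prior command failed

// ShouldReturnEmpty("42P01") → true
// ShouldReturnEmpty("23505") → false (unique_violation is a real error; rethrow)
```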
public async Task<int> MarkExpiredAsync(
int batchLimit,
CancellationToken cancellationToken)


@@ -404,6 +404,67 @@ public sealed class PostgresJobRepository : IJobRepository
return Convert.ToInt32(result);
}
public async Task<JobStatusCounts> GetStatusCountsAsync(
string tenantId,
string? jobType,
string? projectId,
CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
// Single aggregate query comparing against the job_status enum directly.
// COUNT(*) FILTER (WHERE ...) is a standard PostgreSQL idiom that avoids 7 round trips.
// Using 'value'::job_status casts match the pattern used in all other raw-SQL queries
// and avoids runtime errors when the enum type cannot be resolved for a ::text cast.
var sql = new StringBuilder("""
SELECT
COUNT(*) FILTER (WHERE status = 'pending'::job_status) AS pending,
COUNT(*) FILTER (WHERE status = 'scheduled'::job_status) AS scheduled,
COUNT(*) FILTER (WHERE status = 'leased'::job_status) AS leased,
COUNT(*) FILTER (WHERE status = 'succeeded'::job_status) AS succeeded,
COUNT(*) FILTER (WHERE status = 'failed'::job_status) AS failed,
COUNT(*) FILTER (WHERE status = 'canceled'::job_status) AS canceled,
COUNT(*) FILTER (WHERE status = 'timed_out'::job_status) AS timed_out
FROM jobs
WHERE tenant_id = @tenant_id
""");
var parameters = new List<NpgsqlParameter>
{
new("tenant_id", tenantId),
};
if (!string.IsNullOrEmpty(jobType))
{
sql.Append(" AND job_type = @job_type");
parameters.Add(new("job_type", jobType));
}
if (!string.IsNullOrEmpty(projectId))
{
sql.Append(" AND project_id = @project_id");
parameters.Add(new("project_id", projectId));
}
await using var command = new NpgsqlCommand(sql.ToString(), connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddRange(parameters.ToArray());
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
if (!await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
return new JobStatusCounts(0, 0, 0, 0, 0, 0, 0);
}
return new JobStatusCounts(
Pending: Convert.ToInt32(reader.GetInt64(0)),
Scheduled: Convert.ToInt32(reader.GetInt64(1)),
Leased: Convert.ToInt32(reader.GetInt64(2)),
Succeeded: Convert.ToInt32(reader.GetInt64(3)),
Failed: Convert.ToInt32(reader.GetInt64(4)),
Canceled: Convert.ToInt32(reader.GetInt64(5)),
TimedOut: Convert.ToInt32(reader.GetInt64(6)));
}
private static void AddJobParameters(NpgsqlCommand command, Job job)
{
command.Parameters.AddWithValue("job_id", job.JobId);


@@ -51,7 +51,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
CancellationToken cancellationToken)
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
var sql = $"SELECT {PackColumns} FROM packs WHERE tenant_id = @tenant_id AND pack_id = @pack_id";
var sql = $"SELECT {PackColumns} FROM {PackSchemaName}.packs WHERE tenant_id = @tenant_id AND pack_id = @pack_id";
await using var command = new NpgsqlCommand(sql, connection);
command.Parameters.AddWithValue("tenant_id", tenantId);
@@ -72,7 +72,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
CancellationToken cancellationToken)
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
var sql = $"SELECT {PackColumns} FROM packs WHERE tenant_id = @tenant_id AND name = @name";
var sql = $"SELECT {PackColumns} FROM {PackSchemaName}.packs WHERE tenant_id = @tenant_id AND name = @name";
await using var command = new NpgsqlCommand(sql, connection);
command.Parameters.AddWithValue("tenant_id", tenantId);
@@ -99,7 +99,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
var sql = $"SELECT {PackColumns} FROM packs WHERE tenant_id = @tenant_id";
var sql = $"SELECT {PackColumns} FROM {PackSchemaName}.packs WHERE tenant_id = @tenant_id";
var parameters = new List<NpgsqlParameter>
{
new("tenant_id", tenantId)
@@ -156,7 +156,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
var sql = "SELECT COUNT(*) FROM packs WHERE tenant_id = @tenant_id";
var sql = $"SELECT COUNT(*) FROM {PackSchemaName}.packs WHERE tenant_id = @tenant_id";
var parameters = new List<NpgsqlParameter>
{
new("tenant_id", tenantId)
@@ -197,8 +197,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(pack.TenantId, cancellationToken);
const string sql = """
INSERT INTO packs (
var sql = $"""
INSERT INTO {PackSchemaName}.packs (
pack_id, tenant_id, project_id, name, display_name, description,
status, created_by, created_at, updated_at, updated_by,
metadata, tags, icon_uri, version_count, latest_version,
@@ -220,8 +220,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(pack.TenantId, cancellationToken);
const string sql = """
UPDATE packs SET
var sql = $"""
UPDATE {PackSchemaName}.packs SET
display_name = @display_name,
description = @description,
status = @status::pack_status,
@@ -254,8 +254,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(tenantId, cancellationToken);
const string sql = """
UPDATE packs SET
var sql = $"""
UPDATE {PackSchemaName}.packs SET
status = @status::pack_status,
updated_at = @updated_at,
updated_by = @updated_by,
@@ -283,8 +283,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(tenantId, cancellationToken);
const string sql = """
DELETE FROM packs
var sql = $"""
DELETE FROM {PackSchemaName}.packs
WHERE tenant_id = @tenant_id
AND pack_id = @pack_id
AND status = 'draft'::pack_status
@@ -307,7 +307,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
CancellationToken cancellationToken)
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
var sql = $"SELECT {VersionColumns} FROM pack_versions WHERE tenant_id = @tenant_id AND pack_version_id = @pack_version_id";
var sql = $"SELECT {VersionColumns} FROM {PackSchemaName}.pack_versions WHERE tenant_id = @tenant_id AND pack_version_id = @pack_version_id";
await using var command = new NpgsqlCommand(sql, connection);
command.Parameters.AddWithValue("tenant_id", tenantId);
@@ -329,7 +329,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
CancellationToken cancellationToken)
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
var sql = $"SELECT {VersionColumns} FROM pack_versions WHERE tenant_id = @tenant_id AND pack_id = @pack_id AND version = @version";
var sql = $"SELECT {VersionColumns} FROM {PackSchemaName}.pack_versions WHERE tenant_id = @tenant_id AND pack_id = @pack_id AND version = @version";
await using var command = new NpgsqlCommand(sql, connection);
command.Parameters.AddWithValue("tenant_id", tenantId);
@@ -355,7 +355,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
var sql = $"""
SELECT {VersionColumns}
FROM pack_versions
FROM {PackSchemaName}.pack_versions
WHERE tenant_id = @tenant_id
AND pack_id = @pack_id
AND status = 'published'::pack_version_status
@@ -391,7 +391,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
var sql = $"SELECT {VersionColumns} FROM pack_versions WHERE tenant_id = @tenant_id AND pack_id = @pack_id";
var sql = $"SELECT {VersionColumns} FROM {PackSchemaName}.pack_versions WHERE tenant_id = @tenant_id AND pack_id = @pack_id";
if (status.HasValue)
{
@@ -428,7 +428,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
var sql = "SELECT COUNT(*) FROM pack_versions WHERE tenant_id = @tenant_id AND pack_id = @pack_id";
var sql = $"SELECT COUNT(*) FROM {PackSchemaName}.pack_versions WHERE tenant_id = @tenant_id AND pack_id = @pack_id";
if (status.HasValue)
{
@@ -451,8 +451,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(version.TenantId, cancellationToken);
const string sql = """
INSERT INTO pack_versions (
var sql = $"""
INSERT INTO {PackSchemaName}.pack_versions (
pack_version_id, tenant_id, pack_id, version, sem_ver, status,
artifact_uri, artifact_digest, artifact_mime_type, artifact_size_bytes,
manifest_json, manifest_digest, release_notes, min_engine_version, dependencies,
@@ -480,8 +480,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(version.TenantId, cancellationToken);
const string sql = """
UPDATE pack_versions SET
var sql = $"""
UPDATE {PackSchemaName}.pack_versions SET
status = @status::pack_version_status,
release_notes = @release_notes,
min_engine_version = @min_engine_version,
@@ -521,8 +521,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(tenantId, cancellationToken);
const string sql = """
UPDATE pack_versions SET
var sql = $"""
UPDATE {PackSchemaName}.pack_versions SET
status = @status::pack_version_status,
updated_at = @updated_at,
updated_by = @updated_by,
@@ -560,8 +560,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(tenantId, cancellationToken);
const string sql = """
UPDATE pack_versions SET
var sql = $"""
UPDATE {PackSchemaName}.pack_versions SET
signature_uri = @signature_uri,
signature_algorithm = @signature_algorithm,
signed_by = @signed_by,
@@ -590,8 +590,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(tenantId, cancellationToken);
const string sql = """
UPDATE pack_versions SET download_count = download_count + 1
var sql = $"""
UPDATE {PackSchemaName}.pack_versions SET download_count = download_count + 1
WHERE tenant_id = @tenant_id AND pack_version_id = @pack_version_id
""";
@@ -609,8 +609,8 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackWriterConnectionAsync(tenantId, cancellationToken);
const string sql = """
DELETE FROM pack_versions
var sql = $"""
DELETE FROM {PackSchemaName}.pack_versions
WHERE tenant_id = @tenant_id
AND pack_version_id = @pack_version_id
AND status = 'draft'::pack_version_status
@@ -637,7 +637,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
var sql = $"""
SELECT {PackColumns}
FROM packs
FROM {PackSchemaName}.packs
WHERE tenant_id = @tenant_id
AND (name ILIKE @query OR display_name ILIKE @query OR description ILIKE @query OR tags ILIKE @query)
""";
@@ -679,7 +679,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
var sql = $"""
SELECT {PackColumns}
FROM packs
FROM {PackSchemaName}.packs
WHERE tenant_id = @tenant_id
AND tags ILIKE @tag
AND status = 'published'::pack_status
@@ -712,10 +712,10 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
var sql = $"""
SELECT p.{PackColumns.Replace("pack_id", "p.pack_id")}
FROM packs p
FROM {PackSchemaName}.packs p
LEFT JOIN (
SELECT pack_id, SUM(download_count) AS total_downloads
FROM pack_versions
FROM {PackSchemaName}.pack_versions
WHERE tenant_id = @tenant_id
GROUP BY pack_id
) v ON p.pack_id = v.pack_id
@@ -748,7 +748,7 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
var sql = $"""
SELECT {PackColumns}
FROM packs
FROM {PackSchemaName}.packs
WHERE tenant_id = @tenant_id
AND status = 'published'::pack_status
ORDER BY published_at DESC NULLS LAST, updated_at DESC
@@ -778,9 +778,9 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
const string sql = """
var sql = $"""
SELECT COALESCE(SUM(download_count), 0)
FROM pack_versions
FROM {PackSchemaName}.pack_versions
WHERE tenant_id = @tenant_id AND pack_id = @pack_id
""";
@@ -798,14 +798,14 @@ public sealed class PostgresPackRegistryRepository : IPackRegistryRepository
{
await using var connection = await OpenPackReaderConnectionAsync(tenantId, cancellationToken);
const string sql = """
var sql = $"""
SELECT
(SELECT COUNT(*) FROM packs WHERE tenant_id = @tenant_id) AS total_packs,
(SELECT COUNT(*) FROM packs WHERE tenant_id = @tenant_id AND status = 'published'::pack_status) AS published_packs,
(SELECT COUNT(*) FROM pack_versions WHERE tenant_id = @tenant_id) AS total_versions,
(SELECT COUNT(*) FROM pack_versions WHERE tenant_id = @tenant_id AND status = 'published'::pack_version_status) AS published_versions,
(SELECT COALESCE(SUM(download_count), 0) FROM pack_versions WHERE tenant_id = @tenant_id) AS total_downloads,
(SELECT MAX(updated_at) FROM packs WHERE tenant_id = @tenant_id) AS last_updated_at
(SELECT COUNT(*) FROM {PackSchemaName}.packs WHERE tenant_id = @tenant_id) AS total_packs,
(SELECT COUNT(*) FROM {PackSchemaName}.packs WHERE tenant_id = @tenant_id AND status = 'published'::pack_status) AS published_packs,
(SELECT COUNT(*) FROM {PackSchemaName}.pack_versions WHERE tenant_id = @tenant_id) AS total_versions,
(SELECT COUNT(*) FROM {PackSchemaName}.pack_versions WHERE tenant_id = @tenant_id AND status = 'published'::pack_version_status) AS published_versions,
(SELECT COALESCE(SUM(download_count), 0) FROM {PackSchemaName}.pack_versions WHERE tenant_id = @tenant_id) AS total_downloads,
(SELECT MAX(updated_at) FROM {PackSchemaName}.packs WHERE tenant_id = @tenant_id) AS last_updated_at
""";
await using var command = new NpgsqlCommand(sql, connection);

View File

@@ -97,4 +97,30 @@ public interface IJobRepository
string? jobType,
string? projectId,
CancellationToken cancellationToken);
/// <summary>
/// Returns per-status counts in a single round trip.
/// Uses text comparison against the PostgreSQL job_status enum labels
/// so it works regardless of whether the column is stored as enum or text.
/// </summary>
Task<JobStatusCounts> GetStatusCountsAsync(
string tenantId,
string? jobType,
string? projectId,
CancellationToken cancellationToken);
}
/// <summary>
/// Aggregated per-status job counts returned by a single SQL query.
/// </summary>
public sealed record JobStatusCounts(
int Pending,
int Scheduled,
int Leased,
int Succeeded,
int Failed,
int Canceled,
int TimedOut)
{
public int Total => Pending + Scheduled + Leased + Succeeded + Failed + Canceled + TimedOut;
}

View File

@@ -21,4 +21,22 @@ public sealed class PostgresDeadLetterRepositoryTests
{
Assert.False(PostgresDeadLetterRepository.ShouldUseActionableSummaryFallback(sqlState));
}
[Theory]
[InlineData(PostgresErrorCodes.UndefinedTable)]
[InlineData("25P02")] // in_failed_sql_transaction
public void IsMissingTableOrAbortedTransaction_ReturnsTrue_ForExpectedSqlStates(string sqlState)
{
Assert.True(PostgresDeadLetterRepository.IsMissingTableOrAbortedTransaction(sqlState));
}
[Theory]
[InlineData(PostgresErrorCodes.UndefinedFunction)]
[InlineData(PostgresErrorCodes.AmbiguousColumn)]
[InlineData("XX000")]
[InlineData(null)]
public void IsMissingTableOrAbortedTransaction_ReturnsFalse_ForOtherSqlStates(string? sqlState)
{
Assert.False(PostgresDeadLetterRepository.IsMissingTableOrAbortedTransaction(sqlState));
}
}
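
A helper consistent with these tests could be as small as the following sketch; the actual implementation inside PostgresDeadLetterRepository may differ:

```csharp
// Sketch only: matches the SQL states asserted in the tests above.
// 42P01 = undefined_table, 25P02 = in_failed_sql_transaction.
internal static bool IsMissingTableOrAbortedTransaction(string? sqlState) =>
    sqlState is PostgresErrorCodes.UndefinedTable or "25P02";
```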

View File

@@ -612,5 +612,11 @@ public sealed class FirstSignalServiceTests
string? jobType,
string? projectId,
CancellationToken cancellationToken) => throw new NotImplementedException();
public Task<JobStatusCounts> GetStatusCountsAsync(
string tenantId,
string? jobType,
string? projectId,
CancellationToken cancellationToken) => throw new NotImplementedException();
}
}

View File

@@ -560,7 +560,8 @@ public static class DeadLetterEndpoints
context.User?.Identity?.Name ?? "anonymous";
private static bool IsMissingDeadLetterTable(PostgresException exception) =>
string.Equals(exception.SqlState, "42P01", StringComparison.Ordinal);
string.Equals(exception.SqlState, "42P01", StringComparison.Ordinal)
|| string.Equals(exception.SqlState, "25P02", StringComparison.Ordinal);
private static DeadLetterStats CreateEmptyStats() =>
new(

View File

@@ -153,24 +153,19 @@ public static class JobEndpoints
var tenantId = tenantResolver.Resolve(context);
DeprecationHeaders.Apply(context.Response, "/api/v1/jobengine/jobs");
// Get counts for each status
var pending = await repository.CountAsync(tenantId, Core.Domain.JobStatus.Pending, jobType, projectId, cancellationToken).ConfigureAwait(false);
var scheduled = await repository.CountAsync(tenantId, Core.Domain.JobStatus.Scheduled, jobType, projectId, cancellationToken).ConfigureAwait(false);
var leased = await repository.CountAsync(tenantId, Core.Domain.JobStatus.Leased, jobType, projectId, cancellationToken).ConfigureAwait(false);
var succeeded = await repository.CountAsync(tenantId, Core.Domain.JobStatus.Succeeded, jobType, projectId, cancellationToken).ConfigureAwait(false);
var failed = await repository.CountAsync(tenantId, Core.Domain.JobStatus.Failed, jobType, projectId, cancellationToken).ConfigureAwait(false);
var canceled = await repository.CountAsync(tenantId, Core.Domain.JobStatus.Canceled, jobType, projectId, cancellationToken).ConfigureAwait(false);
var timedOut = await repository.CountAsync(tenantId, Core.Domain.JobStatus.TimedOut, jobType, projectId, cancellationToken).ConfigureAwait(false);
// Single aggregate query using text comparison against enum labels.
// Replaces 7 individual COUNT round trips with one FILTER-based query.
var counts = await repository.GetStatusCountsAsync(tenantId, jobType, projectId, cancellationToken).ConfigureAwait(false);
var summary = new JobSummary(
TotalJobs: pending + scheduled + leased + succeeded + failed + canceled + timedOut,
PendingJobs: pending,
ScheduledJobs: scheduled,
LeasedJobs: leased,
SucceededJobs: succeeded,
FailedJobs: failed,
CanceledJobs: canceled,
TimedOutJobs: timedOut);
TotalJobs: counts.Total,
PendingJobs: counts.Pending,
ScheduledJobs: counts.Scheduled,
LeasedJobs: counts.Leased,
SucceededJobs: counts.Succeeded,
FailedJobs: counts.Failed,
CanceledJobs: counts.Canceled,
TimedOutJobs: counts.TimedOut);
return Results.Ok(summary);
}
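
The FILTER-based aggregate referenced in the comment above could be implemented roughly as follows; the table name, enum labels, and parameter handling are assumptions inferred from the surrounding code, not the actual repository implementation:

```csharp
// Sketch of the single-round-trip query behind GetStatusCountsAsync.
// status::text keeps the comparison working whether the column is stored
// as the job_status enum or as plain text (see the IJobRepository doc comment).
const string sql = """
    SELECT
        COUNT(*) FILTER (WHERE status::text = 'pending')   AS pending,
        COUNT(*) FILTER (WHERE status::text = 'scheduled') AS scheduled,
        COUNT(*) FILTER (WHERE status::text = 'leased')    AS leased,
        COUNT(*) FILTER (WHERE status::text = 'succeeded') AS succeeded,
        COUNT(*) FILTER (WHERE status::text = 'failed')    AS failed,
        COUNT(*) FILTER (WHERE status::text = 'canceled')  AS canceled,
        COUNT(*) FILTER (WHERE status::text = 'timed_out') AS timed_out
    FROM jobs
    WHERE tenant_id = @tenant_id
      AND (@job_type::text IS NULL OR job_type = @job_type)
      AND (@project_id::text IS NULL OR project_id = @project_id)
    """;
```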

View File

@@ -0,0 +1,410 @@
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.Abstractions;
using StellaOps.Auth.ServerIntegration;
using StellaOps.Auth.ServerIntegration.Tenancy;
using System.Globalization;
namespace StellaOps.Platform.WebService.Endpoints;
/// <summary>
/// Compatibility endpoints for Notify sub-resources that the frontend (WEB-NOTIFY-39/40)
/// expects at /api/v1/notify/* but that are not yet served by the Notify microservice.
/// The gateway routes these specific sub-paths to Platform, while channels/rules/deliveries
/// continue to be served by the Notify service.
/// </summary>
public static class NotifyCompatibilityEndpoints
{
public static IEndpointRouteBuilder MapNotifyCompatibilityEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/notify")
.WithTags("Notify Compatibility")
.RequireAuthorization(policy => policy.RequireStellaOpsScopes(StellaOpsScopes.NotifyViewer))
.RequireTenant();
// ── Digest Schedules ──────────────────────────────────────────
group.MapGet("/digest-schedules", (HttpContext ctx, [FromQuery] string? pageToken, [FromQuery] int? pageSize) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
items = new[]
{
new
{
scheduleId = "digest-daily",
tenantId = tenant,
name = "Daily Digest",
frequency = "daily",
timezone = "UTC",
hour = 8,
enabled = true,
createdAt = "2025-10-01T00:00:00Z"
}
},
total = 1,
traceId = ctx.TraceIdentifier
});
}).WithName("NotifyCompat.ListDigestSchedules");
group.MapPost("/digest-schedules", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
scheduleId = $"digest-{Guid.NewGuid():N}".Substring(0, 20),
tenantId = tenant,
name = "New Schedule",
frequency = "daily",
timezone = "UTC",
hour = 8,
enabled = true,
createdAt = DateTimeOffset.UtcNow.ToString("O", CultureInfo.InvariantCulture)
});
}).WithName("NotifyCompat.SaveDigestSchedule");
group.MapDelete("/digest-schedules/{scheduleId}", (string scheduleId) =>
Results.NoContent())
.WithName("NotifyCompat.DeleteDigestSchedule");
// ── Quiet Hours ───────────────────────────────────────────────
group.MapGet("/quiet-hours", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
items = new[]
{
new
{
quietHoursId = "qh-default",
tenantId = tenant,
name = "Weeknight Quiet",
windows = new[]
{
new
{
timezone = "UTC",
days = new[] { "Mon", "Tue", "Wed", "Thu", "Fri" },
start = "22:00",
end = "06:00"
}
},
exemptions = new[]
{
new
{
eventKinds = new[] { "attestor.verification.failed" },
reason = "Always alert on attestation failures"
}
},
enabled = true,
createdAt = "2025-10-01T00:00:00Z"
}
},
total = 1,
traceId = ctx.TraceIdentifier
});
}).WithName("NotifyCompat.ListQuietHours");
group.MapPost("/quiet-hours", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
quietHoursId = $"qh-{Guid.NewGuid():N}".Substring(0, 16),
tenantId = tenant,
name = "New Quiet Hours",
windows = Array.Empty<object>(),
exemptions = Array.Empty<object>(),
enabled = true,
createdAt = DateTimeOffset.UtcNow.ToString("O", CultureInfo.InvariantCulture)
});
}).WithName("NotifyCompat.SaveQuietHours");
group.MapDelete("/quiet-hours/{quietHoursId}", (string quietHoursId) =>
Results.NoContent())
.WithName("NotifyCompat.DeleteQuietHours");
// ── Throttle Configs ──────────────────────────────────────────
group.MapGet("/throttle-configs", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
items = new[]
{
new
{
throttleId = "throttle-default",
tenantId = tenant,
name = "Default Throttle",
windowSeconds = 60,
maxEvents = 50,
burstLimit = 100,
enabled = true,
createdAt = "2025-10-01T00:00:00Z"
}
},
total = 1,
traceId = ctx.TraceIdentifier
});
}).WithName("NotifyCompat.ListThrottleConfigs");
group.MapPost("/throttle-configs", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
throttleId = $"throttle-{Guid.NewGuid():N}".Substring(0, 20),
tenantId = tenant,
name = "New Throttle",
windowSeconds = 60,
maxEvents = 50,
burstLimit = 100,
enabled = true,
createdAt = DateTimeOffset.UtcNow.ToString("O", CultureInfo.InvariantCulture)
});
}).WithName("NotifyCompat.SaveThrottleConfig");
group.MapDelete("/throttle-configs/{throttleId}", (string throttleId) =>
Results.NoContent())
.WithName("NotifyCompat.DeleteThrottleConfig");
// ── Simulate ──────────────────────────────────────────────────
group.MapPost("/simulate", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
simulationId = $"sim-{DateTimeOffset.UtcNow.ToUnixTimeMilliseconds()}",
matchedRules = new[] { "rule-critical-vulns" },
wouldNotify = new[]
{
new
{
channelId = "chn-soc-webhook",
actionId = "act-soc",
template = "tmpl-default",
digest = "instant"
}
},
throttled = false,
quietHoursActive = false,
traceId = ctx.TraceIdentifier
});
}).WithName("NotifyCompat.Simulate");
// ── Escalation Policies ───────────────────────────────────────
group.MapGet("/escalation-policies", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
items = new[]
{
new
{
policyId = "escalate-critical",
tenantId = tenant,
name = "Critical Escalation",
levels = new[]
{
new { level = 1, delayMinutes = 0, channels = new[] { "chn-soc-webhook" }, notifyOnAck = false },
new { level = 2, delayMinutes = 15, channels = new[] { "chn-slack-dev" }, notifyOnAck = true }
},
enabled = true,
createdAt = "2025-10-01T00:00:00Z"
}
},
total = 1,
traceId = ctx.TraceIdentifier
});
}).WithName("NotifyCompat.ListEscalationPolicies");
group.MapPost("/escalation-policies", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
policyId = $"escalate-{Guid.NewGuid():N}".Substring(0, 20),
tenantId = tenant,
name = "New Policy",
levels = Array.Empty<object>(),
enabled = true,
createdAt = DateTimeOffset.UtcNow.ToString("O", CultureInfo.InvariantCulture)
});
}).WithName("NotifyCompat.SaveEscalationPolicy");
group.MapDelete("/escalation-policies/{policyId}", (string policyId) =>
Results.NoContent())
.WithName("NotifyCompat.DeleteEscalationPolicy");
// ── Localizations ─────────────────────────────────────────────
group.MapGet("/localizations", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
items = new[]
{
new
{
localeId = "loc-en-us",
tenantId = tenant,
locale = "en-US",
name = "English (US)",
templates = new Dictionary<string, string>(StringComparer.Ordinal)
{
["vuln.critical"] = "Critical vulnerability detected: {{title}}"
},
dateFormat = "MM/DD/YYYY",
timeFormat = "HH:mm:ss",
enabled = true,
createdAt = "2025-10-01T00:00:00Z"
}
},
total = 1,
traceId = ctx.TraceIdentifier
});
}).WithName("NotifyCompat.ListLocalizations");
group.MapPost("/localizations", (HttpContext ctx) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
return Results.Ok(new
{
localeId = $"loc-{Guid.NewGuid():N}".Substring(0, 16),
tenantId = tenant,
locale = "en-US",
name = "New Locale",
templates = new Dictionary<string, string>(StringComparer.Ordinal),
dateFormat = "MM/DD/YYYY",
timeFormat = "HH:mm:ss",
enabled = true,
createdAt = DateTimeOffset.UtcNow.ToString("O", CultureInfo.InvariantCulture)
});
}).WithName("NotifyCompat.SaveLocalization");
group.MapDelete("/localizations/{localeId}", (string localeId) =>
Results.NoContent())
.WithName("NotifyCompat.DeleteLocalization");
// ── Incidents ─────────────────────────────────────────────────
group.MapGet("/incidents", (HttpContext ctx, TimeProvider timeProvider) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
var now = timeProvider.GetUtcNow();
return Results.Ok(new
{
items = new[]
{
new
{
incidentId = "inc-001",
tenantId = tenant,
title = "Critical vulnerability CVE-2021-44228",
severity = "critical",
status = "open",
eventIds = new[] { "evt-001", "evt-002" },
escalationLevel = 1,
escalationPolicyId = "escalate-critical",
createdAt = now.AddHours(-2).ToString("O", CultureInfo.InvariantCulture)
}
},
total = 1,
traceId = ctx.TraceIdentifier
});
}).WithName("NotifyCompat.ListIncidents");
group.MapGet("/incidents/{incidentId}", (HttpContext ctx, string incidentId, TimeProvider timeProvider) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
var now = timeProvider.GetUtcNow();
return Results.Ok(new
{
incidentId,
tenantId = tenant,
title = "Critical vulnerability CVE-2021-44228",
severity = "critical",
status = "open",
eventIds = new[] { "evt-001", "evt-002" },
escalationLevel = 1,
escalationPolicyId = "escalate-critical",
createdAt = now.AddHours(-2).ToString("O", CultureInfo.InvariantCulture)
});
}).WithName("NotifyCompat.GetIncident");
group.MapPost("/incidents/{incidentId}/ack", (HttpContext ctx, string incidentId, TimeProvider timeProvider) =>
{
var tenant = ResolveTenant(ctx, null);
if (string.IsNullOrWhiteSpace(tenant))
return Results.BadRequest(new { error = "tenant_required" });
var now = timeProvider.GetUtcNow();
return Results.Ok(new
{
incidentId,
acknowledged = true,
acknowledgedAt = now.ToString("O", CultureInfo.InvariantCulture),
acknowledgedBy = "admin",
traceId = ctx.TraceIdentifier
});
}).WithName("NotifyCompat.AckIncident");
return app;
}
private static string? ResolveTenant(HttpContext httpContext, string? tenantId)
=> tenantId?.Trim()
?? httpContext.Request.Headers["X-StellaOps-Tenant"].FirstOrDefault()
?? httpContext.Request.Headers["X-Tenant-Id"].FirstOrDefault()
?? httpContext.User.Claims.FirstOrDefault(static claim =>
claim.Type is "stellaops:tenant" or "tenant_id")?.Value;
}

View File

@@ -0,0 +1,457 @@
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.Abstractions;
using StellaOps.Auth.ServerIntegration;
using StellaOps.Auth.ServerIntegration.Tenancy;
using System.Globalization;
namespace StellaOps.Platform.WebService.Endpoints;
public static class SignalsCompatibilityEndpoints
{
private static readonly SignalRecord[] SeedSignals =
[
new(
"sig-001",
"ci_build",
"gitea",
"completed",
new Dictionary<string, object?>
{
["host"] = "build-agent-01",
["runtime"] = "ebpf",
["probeStatus"] = "healthy",
["latencyMs"] = 41
},
"corr-001",
"sha256:001",
new[] { "update-runtime-health" },
"2026-03-09T08:10:00Z",
"2026-03-09T08:10:02Z",
null),
new(
"sig-002",
"ci_deploy",
"internal",
"processing",
new Dictionary<string, object?>
{
["host"] = "deploy-stage-02",
["runtime"] = "etw",
["probeStatus"] = "degraded",
["latencyMs"] = 84
},
"corr-002",
"sha256:002",
new[] { "refresh-rollout-state" },
"2026-03-09T08:12:00Z",
null,
null),
new(
"sig-003",
"registry_push",
"harbor",
"failed",
new Dictionary<string, object?>
{
["host"] = "registry-sync-01",
["runtime"] = "dyld",
["probeStatus"] = "failed",
["latencyMs"] = 132
},
"corr-003",
"sha256:003",
new[] { "retry-mirror" },
"2026-03-09T08:13:00Z",
"2026-03-09T08:13:05Z",
"Registry callback timed out."),
new(
"sig-004",
"scan_complete",
"internal",
"completed",
new Dictionary<string, object?>
{
["host"] = "scanner-03",
["runtime"] = "ebpf",
["probeStatus"] = "healthy",
["latencyMs"] = 58
},
"corr-004",
"sha256:004",
new[] { "refresh-risk-snapshot" },
"2026-03-09T08:16:00Z",
"2026-03-09T08:16:01Z",
null),
new(
"sig-005",
"policy_eval",
"internal",
"received",
new Dictionary<string, object?>
{
["host"] = "policy-runner-01",
["runtime"] = "unknown",
["probeStatus"] = "degraded",
["latencyMs"] = 73
},
"corr-005",
"sha256:005",
new[] { "await-policy-evaluation" },
"2026-03-09T08:18:00Z",
null,
null),
new(
"sig-006",
"scm_push",
"github",
"completed",
new Dictionary<string, object?>
{
["host"] = "webhook-ingress-01",
["branch"] = "main",
["commitCount"] = 3,
["latencyMs"] = 22
},
"corr-006",
"sha256:006",
new[] { "trigger-ci-build" },
"2026-03-09T08:20:00Z",
"2026-03-09T08:20:01Z",
null),
new(
"sig-007",
"scm_pr",
"gitlab",
"completed",
new Dictionary<string, object?>
{
["host"] = "webhook-ingress-02",
["action"] = "merged",
["targetBranch"] = "release/1.4",
["latencyMs"] = 35
},
"corr-007",
"sha256:007",
new[] { "trigger-release-pipeline" },
"2026-03-09T08:22:00Z",
"2026-03-09T08:22:02Z",
null)
];
private static readonly TriggerRecord[] SeedTriggers =
[
new("trg-001", "CI Build Gate", "ci_build", "status == 'failed'", "notify-team", true, "2026-03-09T08:14:00Z", 42),
new("trg-002", "Registry Push Mirror", "registry_push", "provider == 'harbor'", "sync-mirror", true, "2026-03-09T08:13:05Z", 18),
new("trg-003", "Policy Eval Alert", "policy_eval", "payload.decision == 'deny'", "block-release", true, null, 0),
new("trg-004", "SCM Push CI Trigger", "scm_push", "branch == 'main'", "trigger-ci-build", true, "2026-03-09T08:20:00Z", 127)
];
public static IEndpointRouteBuilder MapSignalsCompatibilityEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/signals")
.WithTags("Signals Compatibility")
.RequireAuthorization(policy => policy.RequireStellaOpsScopes(StellaOpsScopes.SignalsRead))
.RequireTenant();
// GET /api/v1/signals
group.MapGet("", (
[FromQuery] string? type,
[FromQuery] string? status,
[FromQuery] string? provider,
[FromQuery] int? limit,
[FromQuery] string? cursor) =>
{
var filtered = ApplyFilters(type, status, provider);
var offset = ParseCursor(cursor);
var pageSize = Math.Clamp(limit ?? 50, 1, 200);
var items = filtered.Skip(offset).Take(pageSize).ToArray();
var nextCursor = offset + pageSize < filtered.Length
? (offset + pageSize).ToString(CultureInfo.InvariantCulture)
: null;
return Results.Ok(new
{
items,
total = filtered.Length,
cursor = nextCursor
});
})
.WithName("SignalsCompatibility.List");
// GET /api/v1/signals/stats
group.MapGet("/stats", () => Results.Ok(BuildStats(SeedSignals)))
.WithName("SignalsCompatibility.Stats");
// GET /api/v1/signals/triggers
group.MapGet("/triggers", () => Results.Ok(SeedTriggers))
.WithName("SignalsCompatibility.ListTriggers");
// POST /api/v1/signals/triggers
group.MapPost("/triggers", (TriggerCreateRequest request) =>
{
var id = $"trg-{Guid.NewGuid().ToString("N")[..6]}";
var trigger = new TriggerRecord(
id,
request.Name ?? "New Trigger",
request.SignalType ?? "ci_build",
request.Condition ?? "true",
request.Action ?? "notify",
request.Enabled ?? true,
null,
0);
return Results.Ok(trigger);
})
.WithName("SignalsCompatibility.CreateTrigger");
// PUT /api/v1/signals/triggers/{id}
group.MapPut("/triggers/{id}", (string id, TriggerCreateRequest request) =>
{
var existing = SeedTriggers.FirstOrDefault(t =>
string.Equals(t.Id, id, StringComparison.OrdinalIgnoreCase));
var trigger = new TriggerRecord(
id,
request.Name ?? existing?.Name ?? "Updated Trigger",
request.SignalType ?? existing?.SignalType ?? "ci_build",
request.Condition ?? existing?.Condition ?? "true",
request.Action ?? existing?.Action ?? "notify",
request.Enabled ?? existing?.Enabled ?? true,
existing?.LastTriggered,
existing?.TriggerCount ?? 0);
return Results.Ok(trigger);
})
.WithName("SignalsCompatibility.UpdateTrigger");
// DELETE /api/v1/signals/triggers/{id}
group.MapDelete("/triggers/{id}", (string id) => Results.NoContent())
.WithName("SignalsCompatibility.DeleteTrigger");
// PATCH /api/v1/signals/triggers/{id}
group.MapPatch("/triggers/{id}", (string id, TriggerToggleRequest request) =>
{
var existing = SeedTriggers.FirstOrDefault(t =>
string.Equals(t.Id, id, StringComparison.OrdinalIgnoreCase));
var trigger = new TriggerRecord(
id,
existing?.Name ?? "Trigger",
existing?.SignalType ?? "ci_build",
existing?.Condition ?? "true",
existing?.Action ?? "notify",
request.Enabled,
existing?.LastTriggered,
existing?.TriggerCount ?? 0);
return Results.Ok(trigger);
})
.WithName("SignalsCompatibility.ToggleTrigger");
// GET /api/v1/signals/reachability/facts
group.MapGet("/reachability/facts", (
[FromQuery] string? tenantId,
[FromQuery] string? projectId,
[FromQuery] string? assetId,
[FromQuery] string? component,
[FromQuery] string? traceId,
TimeProvider timeProvider) =>
{
var now = timeProvider.GetUtcNow();
return Results.Ok(new
{
facts = new[]
{
new
{
component = component ?? "org.apache.logging.log4j:log4j-core",
status = "reachable",
confidence = 0.92,
callDepth = (int?)3,
function = (string?)"org.apache.logging.log4j.core.lookup.JndiLookup.lookup",
signalsVersion = "1.4.0",
observedAt = now.AddMinutes(-12).ToString("O", CultureInfo.InvariantCulture),
evidenceTraceIds = new[] { "trace-a1b2c3", "trace-d4e5f6" }
},
new
{
component = component ?? "com.fasterxml.jackson.databind:jackson-databind",
status = "unreachable",
confidence = 0.87,
callDepth = (int?)null,
function = (string?)null,
signalsVersion = "1.4.0",
observedAt = now.AddMinutes(-8).ToString("O", CultureInfo.InvariantCulture),
evidenceTraceIds = new[] { "trace-g7h8i9" }
}
}
});
})
.WithName("SignalsCompatibility.GetReachabilityFacts");
// GET /api/v1/signals/reachability/call-graphs
group.MapGet("/reachability/call-graphs", (
[FromQuery] string? tenantId,
[FromQuery] string? projectId,
[FromQuery] string? assetId,
[FromQuery] string? component,
[FromQuery] string? traceId,
TimeProvider timeProvider) =>
{
var now = timeProvider.GetUtcNow();
return Results.Ok(new
{
paths = new[]
{
new
{
id = "path-001",
source = "com.example.app.Main",
target = "org.apache.logging.log4j.core.lookup.JndiLookup.lookup",
lastObserved = now.AddMinutes(-12).ToString("O", CultureInfo.InvariantCulture),
hops = new[]
{
new
{
service = "api-gateway",
endpoint = "/api/v1/process",
timestamp = now.AddMinutes(-12).ToString("O", CultureInfo.InvariantCulture)
},
new
{
service = "order-service",
endpoint = "OrderProcessor.handle",
timestamp = now.AddMinutes(-12).AddSeconds(1).ToString("O", CultureInfo.InvariantCulture)
},
new
{
service = "logging-framework",
endpoint = "JndiLookup.lookup",
timestamp = now.AddMinutes(-12).AddSeconds(2).ToString("O", CultureInfo.InvariantCulture)
}
},
evidence = new
{
score = 0.92,
traceId = "trace-a1b2c3"
}
}
}
});
})
.WithName("SignalsCompatibility.GetCallGraphs");
// GET /api/v1/signals/{id}
group.MapGet("/{id}", (string id) =>
{
var signal = SeedSignals.FirstOrDefault(s =>
string.Equals(s.Id, id, StringComparison.OrdinalIgnoreCase));
return signal is not null
? Results.Ok(signal)
: Results.NotFound(new { error = "signal_not_found", id });
})
.WithName("SignalsCompatibility.GetDetail");
// POST /api/v1/signals/{id}/retry
group.MapPost("/{id}/retry", (string id, TimeProvider timeProvider) =>
{
var existing = SeedSignals.FirstOrDefault(s =>
string.Equals(s.Id, id, StringComparison.OrdinalIgnoreCase));
if (existing is null)
{
return Results.NotFound(new { error = "signal_not_found", id });
}
var retried = existing with
{
Status = "processing",
ProcessedAt = null,
Error = null
};
return Results.Ok(retried);
})
.WithName("SignalsCompatibility.Retry");
return app;
}
private static SignalRecord[] ApplyFilters(string? type, string? status, string? provider) =>
SeedSignals
.Where(s => string.IsNullOrWhiteSpace(type) || string.Equals(s.Type, type, StringComparison.OrdinalIgnoreCase))
.Where(s => string.IsNullOrWhiteSpace(status) || string.Equals(s.Status, status, StringComparison.OrdinalIgnoreCase))
.Where(s => string.IsNullOrWhiteSpace(provider) || string.Equals(s.Provider, provider, StringComparison.OrdinalIgnoreCase))
.ToArray();
private static int ParseCursor(string? cursor) =>
int.TryParse(cursor, out var offset) && offset >= 0 ? offset : 0;
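
The cursor here is a plain numeric offset, so a paging round trip behaves like this (illustrative values):

```csharp
// Illustrative cursor behavior for ParseCursor as defined above.
var first = ParseCursor(null);   // no cursor: start at offset 0
var next  = ParseCursor("3");    // valid cursor: resume at offset 3
var reset = ParseCursor("-5");   // negative or malformed: back to 0
```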
private static object BuildStats(IReadOnlyCollection<SignalRecord> signals)
{
var byType = signals
.GroupBy(s => s.Type, StringComparer.OrdinalIgnoreCase)
.ToDictionary(g => g.Key, g => g.Count(), StringComparer.OrdinalIgnoreCase);
var byStatus = signals
.GroupBy(s => s.Status, StringComparer.OrdinalIgnoreCase)
.ToDictionary(g => g.Key, g => g.Count(), StringComparer.OrdinalIgnoreCase);
var byProvider = signals
.GroupBy(s => s.Provider, StringComparer.OrdinalIgnoreCase)
.ToDictionary(g => g.Key, g => g.Count(), StringComparer.OrdinalIgnoreCase);
var successful = signals.Count(s =>
string.Equals(s.Status, "completed", StringComparison.OrdinalIgnoreCase));
var latencySamples = signals
.Select(s => s.Payload.TryGetValue("latencyMs", out var v) ? v : null)
.OfType<int>()
.ToArray();
return new
{
total = signals.Count,
byType,
byStatus,
byProvider,
lastHourCount = signals.Count,
successRate = signals.Count == 0 ? 100.0 : Math.Round((successful / (double)signals.Count) * 100, 2),
avgProcessingMs = latencySamples.Length == 0 ? 0.0 : Math.Round(latencySamples.Average(), 2)
};
}
private sealed record SignalRecord(
string Id,
string Type,
string Provider,
string Status,
IReadOnlyDictionary<string, object?> Payload,
string? CorrelationId,
string? ArtifactRef,
IReadOnlyCollection<string> TriggeredActions,
string ReceivedAt,
string? ProcessedAt,
string? Error);
private sealed record TriggerRecord(
string Id,
string Name,
string SignalType,
string Condition,
string Action,
bool Enabled,
string? LastTriggered,
int TriggerCount);
private sealed record TriggerCreateRequest(
string? Name,
string? SignalType,
string? Condition,
string? Action,
bool? Enabled);
private sealed record TriggerToggleRequest(bool Enabled);
}

View File

@@ -177,7 +177,7 @@ public sealed class PlatformEnvironmentSettingsOptions
public string RedirectUri { get; set; } = string.Empty;
public string? SilentRefreshRedirectUri { get; set; }
public string? PostLogoutRedirectUri { get; set; }
public string Scope { get; set; } = "openid profile email ui.read ui.admin authority:tenants.read authority:users.read authority:roles.read authority:clients.read authority:tokens.read authority:branding.read authority.audit.read graph:read sbom:read scanner:read policy:read policy:simulate policy:author policy:review policy:approve orch:read analytics.read advisory:read vex:read exceptions:read exceptions:approve aoc:verify findings:read release:read scheduler:read vuln:view vuln:investigate vuln:operate vuln:audit";
public string Scope { get; set; } = "openid profile email ui.read ui.admin authority:tenants.read authority:users.read authority:roles.read authority:clients.read authority:tokens.read authority:branding.read authority.audit.read graph:read sbom:read scanner:read policy:read policy:simulate policy:author policy:review policy:approve orch:read analytics.read advisory:read vex:read exceptions:read exceptions:approve aoc:verify findings:read release:read scheduler:read vuln:view vuln:investigate vuln:operate vuln:audit signer:read signer:sign signer:rotate signer:admin trust:read trust:write trust:admin";
public string? Audience { get; set; }
public List<string> DpopAlgorithms { get; set; } = new() { "ES256" };
public int RefreshLeewaySeconds { get; set; } = 60;


@@ -338,6 +338,9 @@ app.MapLegacyAliasEndpoints();
app.MapPackAdapterEndpoints();
app.MapConsoleCompatibilityEndpoints();
app.MapAocCompatibilityEndpoints();
app.MapNotifyCompatibilityEndpoints();
app.MapSignalsCompatibilityEndpoints();
app.MapQuotaCompatibilityEndpoints();
app.MapAdministrationTrustSigningMutationEndpoints();
app.MapFederationTelemetryEndpoints();
app.MapSeedEndpoints();


@@ -142,7 +142,17 @@ public enum AgentCapability
/// <summary>
/// WinRM support.
/// </summary>
WinRm = 3
WinRm = 3,
/// <summary>
/// HashiCorp Vault connectivity check.
/// </summary>
VaultCheck = 4,
/// <summary>
/// Consul connectivity check.
/// </summary>
ConsulCheck = 5
}
/// <summary>


@@ -0,0 +1,15 @@
namespace StellaOps.ReleaseOrchestrator.Agent.Models;
/// <summary>
/// Task to test Consul connectivity from an agent.
/// </summary>
public sealed record ConsulConnectivityTask : AgentTask
{
/// <inheritdoc />
public override string TaskType => "consul_connectivity";
/// <summary>
/// Consul server address.
/// </summary>
public required string ConsulAddress { get; init; }
}


@@ -0,0 +1,15 @@
namespace StellaOps.ReleaseOrchestrator.Agent.Models;
/// <summary>
/// Task to check Docker version on an agent.
/// </summary>
public sealed record DockerVersionCheckTask : AgentTask
{
/// <inheritdoc />
public override string TaskType => "docker_version_check";
/// <summary>
/// Target to check Docker version for.
/// </summary>
public required Guid TargetId { get; init; }
}


@@ -0,0 +1,20 @@
namespace StellaOps.ReleaseOrchestrator.Agent.Models;
/// <summary>
/// Task to test HashiCorp Vault connectivity from an agent.
/// </summary>
public sealed record VaultConnectivityTask : AgentTask
{
/// <inheritdoc />
public override string TaskType => "vault_connectivity";
/// <summary>
/// Vault server address.
/// </summary>
public required string VaultAddress { get; init; }
/// <summary>
/// Authentication method (token, approle, kubernetes).
/// </summary>
public required string AuthMethod { get; init; }
}


@@ -0,0 +1,124 @@
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Deletion;
/// <summary>
/// Background service that polls for confirmed deletions and executes them.
/// </summary>
public sealed class DeletionBackgroundWorker : IHostedService, IDisposable
{
private readonly IPendingDeletionStore _store;
private readonly ILogger<DeletionBackgroundWorker> _logger;
private readonly TimeProvider _timeProvider;
private readonly TimeSpan _pollInterval;
private ITimer? _timer;
private bool _disposed;
public DeletionBackgroundWorker(
IPendingDeletionStore store,
ILogger<DeletionBackgroundWorker> logger,
TimeProvider? timeProvider = null,
TimeSpan? pollInterval = null)
{
_store = store ?? throw new ArgumentNullException(nameof(store));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_timeProvider = timeProvider ?? TimeProvider.System;
_pollInterval = pollInterval ?? TimeSpan.FromSeconds(30);
}
public Task StartAsync(CancellationToken ct)
{
_logger.LogInformation("Deletion background worker starting (poll interval: {Interval})", _pollInterval);
_timer = _timeProvider.CreateTimer(
ProcessConfirmedDeletions,
null,
TimeSpan.FromMinutes(1), // initial delay
_pollInterval);
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken ct)
{
_logger.LogInformation("Deletion background worker stopping");
_timer?.Change(Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);
return Task.CompletedTask;
}
// Timer callbacks cannot be awaited, so async void is used here; it is safe
// because the entire body is wrapped in try/catch and no exception can escape.
private async void ProcessConfirmedDeletions(object? state)
{
try
{
var confirmed = await _store.ListByStatusAsync(DeletionStatus.Confirmed);
foreach (var deletion in confirmed)
{
try
{
_logger.LogInformation(
"Executing deletion for {EntityType} {EntityId}",
deletion.EntityType, deletion.EntityId);
// Mark as executing
var executing = deletion with
{
Status = DeletionStatus.Executing,
ExecutedAt = _timeProvider.GetUtcNow()
};
await _store.UpdateAsync(executing);
// Execute cascade cleanup based on entity type
await ExecuteCascadeAsync(deletion);
// Mark as completed
var completed = executing with
{
Status = DeletionStatus.Completed,
CompletedAt = _timeProvider.GetUtcNow()
};
await _store.UpdateAsync(completed);
_logger.LogInformation(
"Deletion completed for {EntityType} {EntityId}",
deletion.EntityType, deletion.EntityId);
}
catch (Exception ex)
{
_logger.LogError(ex,
"Failed to execute deletion for {EntityType} {EntityId}",
deletion.EntityType, deletion.EntityId);
}
}
}
catch (Exception ex)
{
_logger.LogError(ex, "Deletion background worker poll failed");
}
}
private Task ExecuteCascadeAsync(PendingDeletion deletion)
{
// In full implementation, this would:
// - Region: delete child environments, remove bindings, cancel schedules
// - Environment: delete child targets, remove bindings, cancel schedules
// - Target: unassign agent, remove status records, cancel schedule
// - Agent: unassign from targets, revoke, cancel tasks
// - Integration: remove all bindings, soft-delete
// For now, log the cascade and complete
_logger.LogInformation(
"Cascade cleanup for {EntityType} {EntityId}: {Summary}",
deletion.EntityType, deletion.EntityId, deletion.CascadeSummary);
return Task.CompletedTask;
}
public void Dispose()
{
if (_disposed) return;
_timer?.Dispose();
_disposed = true;
}
}
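Host wiring for the worker is not part of this diff; a minimal composition-root sketch, assuming an ASP.NET Core `builder` and a fixed single-tenant placeholder (a real deployment resolves the tenant per request):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// Placeholder tenant resolver: illustration only, not the production accessor.
var tenantId = Guid.NewGuid();

builder.Services.AddSingleton<IPendingDeletionStore>(
    _ => new InMemoryPendingDeletionStore(() => tenantId));

// Register as IHostedService so StartAsync arms the poll timer on startup.
builder.Services.AddSingleton<IHostedService>(sp => new DeletionBackgroundWorker(
    sp.GetRequiredService<IPendingDeletionStore>(),
    sp.GetRequiredService<ILogger<DeletionBackgroundWorker>>(),
    pollInterval: TimeSpan.FromSeconds(30)));
```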


@@ -0,0 +1,24 @@
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Deletion;
/// <summary>
/// Service for managing deletion lifecycle with cool-off periods.
/// </summary>
public interface IPendingDeletionService
{
Task<PendingDeletion> RequestDeletionAsync(DeletionRequest request, CancellationToken ct = default);
Task<PendingDeletion> ConfirmDeletionAsync(Guid pendingDeletionId, Guid confirmedBy, CancellationToken ct = default);
Task CancelDeletionAsync(Guid pendingDeletionId, Guid cancelledBy, CancellationToken ct = default);
Task<PendingDeletion?> GetAsync(Guid id, CancellationToken ct = default);
Task<IReadOnlyList<PendingDeletion>> ListPendingAsync(CancellationToken ct = default);
Task<CascadeSummary> ComputeCascadeAsync(DeletionEntityType entityType, Guid entityId, CancellationToken ct = default);
}
/// <summary>
/// Request to delete a topology entity.
/// </summary>
public sealed record DeletionRequest(
DeletionEntityType EntityType,
Guid EntityId,
string? Reason = null);


@@ -0,0 +1,16 @@
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Deletion;
/// <summary>
/// Storage interface for pending deletion persistence.
/// </summary>
public interface IPendingDeletionStore
{
Task<PendingDeletion?> GetAsync(Guid id, CancellationToken ct = default);
Task<PendingDeletion?> GetByEntityAsync(DeletionEntityType entityType, Guid entityId, CancellationToken ct = default);
Task<IReadOnlyList<PendingDeletion>> ListByStatusAsync(DeletionStatus status, CancellationToken ct = default);
Task<IReadOnlyList<PendingDeletion>> ListPendingAsync(CancellationToken ct = default);
Task<PendingDeletion> CreateAsync(PendingDeletion deletion, CancellationToken ct = default);
Task<PendingDeletion> UpdateAsync(PendingDeletion deletion, CancellationToken ct = default);
}


@@ -0,0 +1,77 @@
using System.Collections.Concurrent;
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Deletion;
/// <summary>
/// In-memory implementation of pending deletion store for testing.
/// </summary>
public sealed class InMemoryPendingDeletionStore : IPendingDeletionStore
{
private readonly ConcurrentDictionary<Guid, PendingDeletion> _deletions = new();
private readonly Func<Guid> _tenantIdProvider;
public InMemoryPendingDeletionStore(Func<Guid> tenantIdProvider)
{
_tenantIdProvider = tenantIdProvider ?? throw new ArgumentNullException(nameof(tenantIdProvider));
}
public Task<PendingDeletion?> GetAsync(Guid id, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
_deletions.TryGetValue(id, out var deletion);
return Task.FromResult(deletion?.TenantId == tenantId ? deletion : null);
}
public Task<PendingDeletion?> GetByEntityAsync(
DeletionEntityType entityType, Guid entityId, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
var deletion = _deletions.Values
.FirstOrDefault(d => d.TenantId == tenantId &&
d.EntityType == entityType &&
d.EntityId == entityId &&
d.Status is DeletionStatus.Pending or DeletionStatus.Confirmed);
return Task.FromResult(deletion);
}
public Task<IReadOnlyList<PendingDeletion>> ListByStatusAsync(
DeletionStatus status, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
var deletions = _deletions.Values
.Where(d => d.TenantId == tenantId && d.Status == status)
.OrderBy(d => d.RequestedAt)
.ToList();
return Task.FromResult<IReadOnlyList<PendingDeletion>>(deletions);
}
public Task<IReadOnlyList<PendingDeletion>> ListPendingAsync(CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
var deletions = _deletions.Values
.Where(d => d.TenantId == tenantId &&
d.Status is DeletionStatus.Pending or DeletionStatus.Confirmed or DeletionStatus.Executing)
.OrderBy(d => d.RequestedAt)
.ToList();
return Task.FromResult<IReadOnlyList<PendingDeletion>>(deletions);
}
public Task<PendingDeletion> CreateAsync(PendingDeletion deletion, CancellationToken ct = default)
{
if (!_deletions.TryAdd(deletion.Id, deletion))
throw new InvalidOperationException($"Pending deletion with ID {deletion.Id} already exists");
return Task.FromResult(deletion);
}
public Task<PendingDeletion> UpdateAsync(PendingDeletion deletion, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
if (!_deletions.TryGetValue(deletion.Id, out var existing) || existing.TenantId != tenantId)
throw new InvalidOperationException($"Pending deletion with ID {deletion.Id} not found");
_deletions[deletion.Id] = deletion;
return Task.FromResult(deletion);
}
public void Clear() => _deletions.Clear();
}


@@ -0,0 +1,163 @@
using Microsoft.Extensions.Logging;
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Deletion;
/// <summary>
/// Manages deletion lifecycle with cool-off periods and cascade computation.
/// State machine: request -> pending -> (cancel | confirm after cool-off) -> executing -> completed
/// </summary>
public sealed class PendingDeletionService : IPendingDeletionService
{
private readonly IPendingDeletionStore _store;
private readonly TimeProvider _timeProvider;
private readonly ILogger<PendingDeletionService> _logger;
private readonly Func<Guid> _tenantIdProvider;
private readonly Func<Guid> _userIdProvider;
/// <summary>
/// Cool-off periods per entity type.
/// </summary>
private static readonly Dictionary<DeletionEntityType, int> CoolOffHours = new()
{
[DeletionEntityType.Tenant] = 72,
[DeletionEntityType.Region] = 48,
[DeletionEntityType.Environment] = 24,
[DeletionEntityType.Target] = 4,
[DeletionEntityType.Agent] = 4,
[DeletionEntityType.Integration] = 12
};
public PendingDeletionService(
IPendingDeletionStore store,
TimeProvider timeProvider,
ILogger<PendingDeletionService> logger,
Func<Guid> tenantIdProvider,
Func<Guid> userIdProvider)
{
_store = store ?? throw new ArgumentNullException(nameof(store));
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_tenantIdProvider = tenantIdProvider ?? throw new ArgumentNullException(nameof(tenantIdProvider));
_userIdProvider = userIdProvider ?? throw new ArgumentNullException(nameof(userIdProvider));
}
public async Task<PendingDeletion> RequestDeletionAsync(DeletionRequest request, CancellationToken ct = default)
{
ArgumentNullException.ThrowIfNull(request);
// Check if there's already a pending deletion for this entity
var existing = await _store.GetByEntityAsync(request.EntityType, request.EntityId, ct);
if (existing is not null)
{
throw new InvalidOperationException(
$"A deletion request already exists for this {request.EntityType} (ID: {existing.Id}, status: {existing.Status})");
}
var now = _timeProvider.GetUtcNow();
var tenantId = _tenantIdProvider();
var userId = _userIdProvider();
var coolOff = CoolOffHours.GetValueOrDefault(request.EntityType, 24);
var cascade = await ComputeCascadeAsync(request.EntityType, request.EntityId, ct);
var deletion = new PendingDeletion
{
Id = Guid.NewGuid(),
TenantId = tenantId,
EntityType = request.EntityType,
EntityId = request.EntityId,
EntityName = $"{request.EntityType}:{request.EntityId}",
Status = DeletionStatus.Pending,
CoolOffHours = coolOff,
CoolOffExpiresAt = now.AddHours(coolOff),
CascadeSummary = cascade,
Reason = request.Reason,
RequestedBy = userId,
RequestedAt = now,
CreatedAt = now
};
var created = await _store.CreateAsync(deletion, ct);
_logger.LogInformation(
"Deletion requested for {EntityType} {EntityId}, cool-off expires at {ExpiresAt}",
request.EntityType, request.EntityId, deletion.CoolOffExpiresAt);
return created;
}
public async Task<PendingDeletion> ConfirmDeletionAsync(
Guid pendingDeletionId, Guid confirmedBy, CancellationToken ct = default)
{
var deletion = await _store.GetAsync(pendingDeletionId, ct)
?? throw new InvalidOperationException($"Pending deletion '{pendingDeletionId}' not found");
if (deletion.Status != DeletionStatus.Pending)
{
throw new InvalidOperationException(
$"Cannot confirm deletion in status '{deletion.Status}', must be 'Pending'");
}
var now = _timeProvider.GetUtcNow();
if (now < deletion.CoolOffExpiresAt)
{
throw new InvalidOperationException(
$"Cool-off period has not expired. Can confirm after {deletion.CoolOffExpiresAt:O}");
}
var confirmed = deletion with
{
Status = DeletionStatus.Confirmed,
ConfirmedBy = confirmedBy,
ConfirmedAt = now
};
var updated = await _store.UpdateAsync(confirmed, ct);
_logger.LogInformation(
"Deletion confirmed for {EntityType} {EntityId} by {ConfirmedBy}",
deletion.EntityType, deletion.EntityId, confirmedBy);
return updated;
}
public async Task CancelDeletionAsync(
Guid pendingDeletionId, Guid cancelledBy, CancellationToken ct = default)
{
var deletion = await _store.GetAsync(pendingDeletionId, ct)
?? throw new InvalidOperationException($"Pending deletion '{pendingDeletionId}' not found");
if (deletion.Status is not (DeletionStatus.Pending or DeletionStatus.Confirmed))
{
throw new InvalidOperationException(
$"Cannot cancel deletion in status '{deletion.Status}'");
}
var cancelled = deletion with
{
Status = DeletionStatus.Cancelled,
CompletedAt = _timeProvider.GetUtcNow()
};
await _store.UpdateAsync(cancelled, ct);
_logger.LogInformation(
"Deletion cancelled for {EntityType} {EntityId} by {CancelledBy}",
deletion.EntityType, deletion.EntityId, cancelledBy);
}
public Task<PendingDeletion?> GetAsync(Guid id, CancellationToken ct = default) =>
_store.GetAsync(id, ct);
public Task<IReadOnlyList<PendingDeletion>> ListPendingAsync(CancellationToken ct = default) =>
_store.ListPendingAsync(ct);
public Task<CascadeSummary> ComputeCascadeAsync(
DeletionEntityType entityType, Guid entityId, CancellationToken ct = default)
{
// In full implementation, query related entities for cascade counts
// For now, return empty cascade (the stores would need cross-entity queries)
return Task.FromResult(new CascadeSummary());
}
}
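The request → confirm state machine above can be exercised against the in-memory store; this sketch assumes `FakeTimeProvider` from the `Microsoft.Extensions.TimeProvider.Testing` package so the cool-off can be advanced deterministically:

```csharp
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Time.Testing;

var tenant = Guid.NewGuid();
var user = Guid.NewGuid();
var clock = new FakeTimeProvider();
var store = new InMemoryPendingDeletionStore(() => tenant);
var service = new PendingDeletionService(
    store, clock, NullLogger<PendingDeletionService>.Instance,
    () => tenant, () => user);

var pending = await service.RequestDeletionAsync(
    new DeletionRequest(DeletionEntityType.Target, Guid.NewGuid(), "decommissioned"));

// Confirming before the 4-hour target cool-off throws InvalidOperationException;
// advance the clock past it, then confirm.
clock.Advance(TimeSpan.FromHours(5));
var confirmed = await service.ConfirmDeletionAsync(pending.Id, user);
// confirmed.Status is now DeletionStatus.Confirmed; the background worker
// executes the cascade on its next poll.
```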


@@ -0,0 +1,53 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Events;
// ── Region Events ────────────────────────────────────────────
public sealed record RegionCreated(
Guid RegionId, Guid TenantId, string Name, string DisplayName,
DateTimeOffset OccurredAt, Guid CreatedBy) : IDomainEvent;
public sealed record RegionUpdated(
Guid RegionId, Guid TenantId, IReadOnlyList<string> ChangedFields,
DateTimeOffset OccurredAt, Guid UpdatedBy) : IDomainEvent;
public sealed record RegionDeleted(
Guid RegionId, Guid TenantId, string Name,
DateTimeOffset OccurredAt, Guid DeletedBy) : IDomainEvent;
// ── Infrastructure Binding Events ────────────────────────────
public sealed record InfrastructureBindingCreated(
Guid BindingId, Guid TenantId, Guid IntegrationId,
string ScopeType, Guid? ScopeId, string Role,
DateTimeOffset OccurredAt, Guid CreatedBy) : IDomainEvent;
public sealed record InfrastructureBindingRemoved(
Guid BindingId, Guid TenantId, Guid IntegrationId,
string ScopeType, Guid? ScopeId, string Role,
DateTimeOffset OccurredAt) : IDomainEvent;
// ── Rename Events ────────────────────────────────────────────
public sealed record EntityRenamed(
string EntityType, Guid EntityId, Guid TenantId,
string OldName, string NewName, string OldDisplayName, string NewDisplayName,
DateTimeOffset OccurredAt, Guid RenamedBy) : IDomainEvent;
// ── Deletion Events ──────────────────────────────────────────
public sealed record DeletionRequested(
Guid DeletionId, string EntityType, Guid EntityId, Guid TenantId,
string Reason, int CoolOffHours, DateTimeOffset ExpiresAt,
DateTimeOffset OccurredAt, Guid RequestedBy) : IDomainEvent;
public sealed record DeletionConfirmed(
Guid DeletionId, string EntityType, Guid EntityId, Guid TenantId,
DateTimeOffset OccurredAt, Guid ConfirmedBy) : IDomainEvent;
public sealed record DeletionExecuted(
Guid DeletionId, string EntityType, Guid EntityId, Guid TenantId,
DateTimeOffset OccurredAt) : IDomainEvent;
public sealed record DeletionCancelled(
Guid DeletionId, string EntityType, Guid EntityId, Guid TenantId,
DateTimeOffset OccurredAt, Guid CancelledBy) : IDomainEvent;


@@ -0,0 +1,28 @@
using StellaOps.ReleaseOrchestrator.Environment.Models;
using StellaOps.ReleaseOrchestrator.Environment.Target;
namespace StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding;
/// <summary>
/// Service for managing infrastructure bindings (registry/vault/settings store, e.g. Consul) at tenant/region/environment scope.
/// </summary>
public interface IInfrastructureBindingService
{
Task<Models.InfrastructureBinding> BindAsync(BindInfrastructureRequest request, CancellationToken ct = default);
Task UnbindAsync(Guid bindingId, CancellationToken ct = default);
Task<IReadOnlyList<Models.InfrastructureBinding>> ListByScopeAsync(BindingScopeType scopeType, Guid? scopeId, CancellationToken ct = default);
Task<Models.InfrastructureBinding?> ResolveAsync(Guid environmentId, BindingRole role, CancellationToken ct = default);
Task<InfrastructureBindingResolution> ResolveAllAsync(Guid environmentId, CancellationToken ct = default);
Task<ConnectionTestResult> TestBindingAsync(Guid bindingId, CancellationToken ct = default);
}
/// <summary>
/// Request to bind an integration to a scope.
/// </summary>
public sealed record BindInfrastructureRequest(
Guid IntegrationId,
BindingScopeType ScopeType,
Guid? ScopeId,
BindingRole Role,
int Priority = 0,
IReadOnlyDictionary<string, string>? ConfigOverrides = null);


@@ -0,0 +1,16 @@
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding;
/// <summary>
/// Storage interface for infrastructure binding persistence.
/// </summary>
public interface IInfrastructureBindingStore
{
Task<Models.InfrastructureBinding?> GetAsync(Guid id, CancellationToken ct = default);
Task<IReadOnlyList<Models.InfrastructureBinding>> ListByScopeAsync(BindingScopeType scopeType, Guid? scopeId, CancellationToken ct = default);
Task<IReadOnlyList<Models.InfrastructureBinding>> ListByIntegrationAsync(Guid integrationId, CancellationToken ct = default);
Task<Models.InfrastructureBinding> CreateAsync(Models.InfrastructureBinding binding, CancellationToken ct = default);
Task DeleteAsync(Guid id, CancellationToken ct = default);
Task DeleteByScopeAsync(BindingScopeType scopeType, Guid? scopeId, CancellationToken ct = default);
}


@@ -0,0 +1,79 @@
using System.Collections.Concurrent;
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding;
/// <summary>
/// In-memory implementation of infrastructure binding store for testing.
/// </summary>
public sealed class InMemoryInfrastructureBindingStore : IInfrastructureBindingStore
{
private readonly ConcurrentDictionary<Guid, Models.InfrastructureBinding> _bindings = new();
private readonly Func<Guid> _tenantIdProvider;
public InMemoryInfrastructureBindingStore(Func<Guid> tenantIdProvider)
{
_tenantIdProvider = tenantIdProvider ?? throw new ArgumentNullException(nameof(tenantIdProvider));
}
public Task<Models.InfrastructureBinding?> GetAsync(Guid id, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
_bindings.TryGetValue(id, out var binding);
return Task.FromResult(binding?.TenantId == tenantId ? binding : null);
}
public Task<IReadOnlyList<Models.InfrastructureBinding>> ListByScopeAsync(
BindingScopeType scopeType, Guid? scopeId, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
var bindings = _bindings.Values
.Where(b => b.TenantId == tenantId &&
b.ScopeType == scopeType &&
b.ScopeId == scopeId &&
b.IsActive)
.OrderByDescending(b => b.Priority)
.ToList();
return Task.FromResult<IReadOnlyList<Models.InfrastructureBinding>>(bindings);
}
public Task<IReadOnlyList<Models.InfrastructureBinding>> ListByIntegrationAsync(
Guid integrationId, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
var bindings = _bindings.Values
.Where(b => b.TenantId == tenantId && b.IntegrationId == integrationId)
.ToList();
return Task.FromResult<IReadOnlyList<Models.InfrastructureBinding>>(bindings);
}
public Task<Models.InfrastructureBinding> CreateAsync(
Models.InfrastructureBinding binding, CancellationToken ct = default)
{
if (!_bindings.TryAdd(binding.Id, binding))
throw new InvalidOperationException($"Binding with ID {binding.Id} already exists");
return Task.FromResult(binding);
}
public Task DeleteAsync(Guid id, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
if (_bindings.TryGetValue(id, out var existing) && existing.TenantId == tenantId)
_bindings.TryRemove(id, out _);
return Task.CompletedTask;
}
public Task DeleteByScopeAsync(BindingScopeType scopeType, Guid? scopeId, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
var toRemove = _bindings.Values
.Where(b => b.TenantId == tenantId && b.ScopeType == scopeType && b.ScopeId == scopeId)
.Select(b => b.Id)
.ToList();
foreach (var id in toRemove)
_bindings.TryRemove(id, out _);
return Task.CompletedTask;
}
public void Clear() => _bindings.Clear();
}


@@ -0,0 +1,185 @@
using Microsoft.Extensions.Logging;
using StellaOps.ReleaseOrchestrator.Environment.Models;
using StellaOps.ReleaseOrchestrator.Environment.Services;
using StellaOps.ReleaseOrchestrator.Environment.Target;
namespace StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding;
/// <summary>
/// Implementation of infrastructure binding service with resolve cascade:
/// environment -> region -> tenant.
/// </summary>
public sealed class InfrastructureBindingService : IInfrastructureBindingService
{
private readonly IInfrastructureBindingStore _store;
private readonly IEnvironmentService _environmentService;
private readonly ILogger<InfrastructureBindingService> _logger;
private readonly TimeProvider _timeProvider;
private readonly Func<Guid> _tenantIdProvider;
private readonly Func<Guid> _userIdProvider;
public InfrastructureBindingService(
IInfrastructureBindingStore store,
IEnvironmentService environmentService,
ILogger<InfrastructureBindingService> logger,
TimeProvider timeProvider,
Func<Guid> tenantIdProvider,
Func<Guid> userIdProvider)
{
_store = store ?? throw new ArgumentNullException(nameof(store));
_environmentService = environmentService ?? throw new ArgumentNullException(nameof(environmentService));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_timeProvider = timeProvider ?? TimeProvider.System;
_tenantIdProvider = tenantIdProvider ?? throw new ArgumentNullException(nameof(tenantIdProvider));
_userIdProvider = userIdProvider ?? throw new ArgumentNullException(nameof(userIdProvider));
}
public async Task<Models.InfrastructureBinding> BindAsync(
BindInfrastructureRequest request, CancellationToken ct = default)
{
ArgumentNullException.ThrowIfNull(request);
var now = _timeProvider.GetUtcNow();
var tenantId = _tenantIdProvider();
var userId = _userIdProvider();
var binding = new Models.InfrastructureBinding
{
Id = Guid.NewGuid(),
TenantId = tenantId,
IntegrationId = request.IntegrationId,
ScopeType = request.ScopeType,
ScopeId = request.ScopeId,
Role = request.Role,
Priority = request.Priority,
ConfigOverrides = request.ConfigOverrides ?? new Dictionary<string, string>(),
IsActive = true,
CreatedAt = now,
UpdatedAt = now,
CreatedBy = userId
};
var created = await _store.CreateAsync(binding, ct);
_logger.LogInformation(
"Created infrastructure binding {BindingId}: {Role} at {ScopeType}/{ScopeId} for tenant {TenantId}",
created.Id, created.Role, created.ScopeType, created.ScopeId, tenantId);
return created;
}
public async Task UnbindAsync(Guid bindingId, CancellationToken ct = default)
{
var existing = await _store.GetAsync(bindingId, ct)
?? throw new InvalidOperationException($"Infrastructure binding with ID '{bindingId}' not found");
await _store.DeleteAsync(bindingId, ct);
_logger.LogInformation(
"Removed infrastructure binding {BindingId}: {Role} at {ScopeType}/{ScopeId}",
bindingId, existing.Role, existing.ScopeType, existing.ScopeId);
}
public Task<IReadOnlyList<Models.InfrastructureBinding>> ListByScopeAsync(
BindingScopeType scopeType, Guid? scopeId, CancellationToken ct = default) =>
_store.ListByScopeAsync(scopeType, scopeId, ct);
/// <summary>
/// Resolves a single binding role for an environment using the cascade:
/// 1. Direct environment binding
/// 2. Region binding (if environment has region_id)
/// 3. Tenant binding
/// </summary>
public async Task<Models.InfrastructureBinding?> ResolveAsync(
Guid environmentId, BindingRole role, CancellationToken ct = default)
{
var resolved = await ResolveWithSourceAsync(environmentId, role, ct);
return resolved?.Binding;
}
public async Task<InfrastructureBindingResolution> ResolveAllAsync(
Guid environmentId, CancellationToken ct = default)
{
var registry = await ResolveWithSourceAsync(environmentId, BindingRole.Registry, ct);
var vault = await ResolveWithSourceAsync(environmentId, BindingRole.Vault, ct);
var settingsStore = await ResolveWithSourceAsync(environmentId, BindingRole.SettingsStore, ct);
return new InfrastructureBindingResolution
{
Registry = registry,
Vault = vault,
SettingsStore = settingsStore
};
}
public Task<ConnectionTestResult> TestBindingAsync(Guid bindingId, CancellationToken ct = default)
{
// Delegate to the integration's connector for actual testing
// For now, return a placeholder that indicates test is not yet wired
return Task.FromResult(new ConnectionTestResult(
Success: true,
Message: "Binding exists and is active (connectivity test requires integration connector)",
Duration: TimeSpan.Zero,
TestedAt: _timeProvider.GetUtcNow()));
}
private async Task<ResolvedBinding?> ResolveWithSourceAsync(
Guid environmentId, BindingRole role, CancellationToken ct)
{
// Step 1: Direct environment binding
var envBindings = await _store.ListByScopeAsync(BindingScopeType.Environment, environmentId, ct);
var direct = envBindings
.Where(b => b.Role == role && b.IsActive)
.OrderByDescending(b => b.Priority)
.FirstOrDefault();
if (direct is not null)
{
return new ResolvedBinding
{
Binding = direct,
ResolvedFrom = BindingResolutionSource.Direct
};
}
// Step 2: Region binding (if environment has a region)
var env = await _environmentService.GetAsync(environmentId, ct);
if (env?.RegionId is not null)
{
var regionBindings = await _store.ListByScopeAsync(
BindingScopeType.Region, env.RegionId.Value, ct);
var regionBinding = regionBindings
.Where(b => b.Role == role && b.IsActive)
.OrderByDescending(b => b.Priority)
.FirstOrDefault();
if (regionBinding is not null)
{
return new ResolvedBinding
{
Binding = regionBinding,
ResolvedFrom = BindingResolutionSource.Region
};
}
}
// Step 3: Tenant binding (scope_id is null for tenant scope)
var tenantBindings = await _store.ListByScopeAsync(BindingScopeType.Tenant, null, ct);
var tenantBinding = tenantBindings
.Where(b => b.Role == role && b.IsActive)
.OrderByDescending(b => b.Priority)
.FirstOrDefault();
if (tenantBinding is not null)
{
return new ResolvedBinding
{
Binding = tenantBinding,
ResolvedFrom = BindingResolutionSource.Tenant
};
}
return null;
}
}
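The environment → region → tenant resolve cascade can be sketched as follows; `environmentService`, `harborIntegrationId`, and `someEnvironmentId` are assumed placeholders from a surrounding test fixture, not names defined in this change:

```csharp
using Microsoft.Extensions.Logging.Abstractions;

// A tenant-wide registry binding acts as the fallback when an environment
// has no direct or region-level binding of that role.
var tenant = Guid.NewGuid();
var bindings = new InMemoryInfrastructureBindingStore(() => tenant);
var service = new InfrastructureBindingService(
    bindings, environmentService, NullLogger<InfrastructureBindingService>.Instance,
    TimeProvider.System, () => tenant, () => Guid.NewGuid());

// Bind a registry at tenant scope (ScopeId is null for tenant scope).
await service.BindAsync(new BindInfrastructureRequest(
    IntegrationId: harborIntegrationId,
    ScopeType: BindingScopeType.Tenant,
    ScopeId: null,
    Role: BindingRole.Registry));

// An environment with no closer binding resolves to the tenant-level one
// via steps 1 -> 2 -> 3 of ResolveWithSourceAsync.
var resolved = await service.ResolveAsync(someEnvironmentId, BindingRole.Registry);
```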


@@ -0,0 +1,49 @@
-- Migration 001: Regions and Infrastructure Bindings
-- Adds first-class region entity and infrastructure binding model
-- Regions table (new first-class entity, per-tenant)
CREATE TABLE IF NOT EXISTS release.regions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id) ON DELETE CASCADE,
name VARCHAR(100) NOT NULL,
display_name VARCHAR(255) NOT NULL,
description TEXT,
crypto_profile VARCHAR(50) NOT NULL DEFAULT 'international',
sort_order INT NOT NULL DEFAULT 0,
status TEXT NOT NULL DEFAULT 'active' CHECK (status IN ('active','decommissioning','archived')),
metadata JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_by UUID,
UNIQUE(tenant_id, name)
);
-- Add region_id to environments (nullable FK for backward compatibility)
ALTER TABLE release.environments
ADD COLUMN IF NOT EXISTS region_id UUID REFERENCES release.regions(id);
-- Infrastructure bindings (registry/vault/settings_store at any scope level)
CREATE TABLE IF NOT EXISTS release.infrastructure_bindings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES shared.tenants(id) ON DELETE CASCADE,
integration_id UUID NOT NULL REFERENCES release.integrations(id) ON DELETE CASCADE,
scope_type TEXT NOT NULL CHECK (scope_type IN ('tenant','region','environment')),
scope_id UUID, -- NULL for tenant scope
binding_role TEXT NOT NULL CHECK (binding_role IN ('registry','vault','settings_store')),
priority INT NOT NULL DEFAULT 0,
config_overrides JSONB NOT NULL DEFAULT '{}',
is_active BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_by UUID
);
-- PostgreSQL does not allow expressions (e.g. COALESCE) inside a table-level
-- UNIQUE constraint, so the NULL-safe uniqueness rule is enforced with a
-- unique expression index instead.
CREATE UNIQUE INDEX IF NOT EXISTS uq_infra_bindings_scope_role
ON release.infrastructure_bindings(tenant_id, integration_id, scope_type, COALESCE(scope_id, '00000000-0000-0000-0000-000000000000'::uuid), binding_role);
CREATE INDEX IF NOT EXISTS idx_infra_bindings_scope
ON release.infrastructure_bindings(tenant_id, scope_type, scope_id, binding_role) WHERE is_active;
CREATE INDEX IF NOT EXISTS idx_regions_tenant
ON release.regions(tenant_id, sort_order);
CREATE INDEX IF NOT EXISTS idx_environments_region
ON release.environments(region_id) WHERE region_id IS NOT NULL;


@@ -0,0 +1,20 @@
-- Migration 002: Topology Point Status
-- Readiness gate tracking for deployment targets
CREATE TABLE IF NOT EXISTS release.topology_point_status (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
target_id UUID NOT NULL REFERENCES release.targets(id) ON DELETE CASCADE,
gate_name TEXT NOT NULL,
status TEXT NOT NULL CHECK (status IN ('pending','pass','fail','skip')),
message TEXT,
details JSONB NOT NULL DEFAULT '{}',
checked_at TIMESTAMPTZ,
duration_ms INT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE(tenant_id, target_id, gate_name)
);
CREATE INDEX IF NOT EXISTS idx_topology_point_status_target
ON release.topology_point_status(tenant_id, target_id);


@@ -0,0 +1,26 @@
-- Migration 003: Pending Deletions
-- Deletion lifecycle with cool-off periods
CREATE TABLE IF NOT EXISTS release.pending_deletions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
entity_type TEXT NOT NULL CHECK (entity_type IN ('tenant','region','environment','target','agent','integration')),
entity_id UUID NOT NULL,
entity_name TEXT NOT NULL,
status TEXT NOT NULL CHECK (status IN ('pending','confirmed','executing','completed','cancelled')),
cool_off_hours INT NOT NULL,
cool_off_expires_at TIMESTAMPTZ NOT NULL,
cascade_summary JSONB NOT NULL DEFAULT '{}',
reason TEXT,
requested_by UUID NOT NULL,
requested_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
confirmed_by UUID,
confirmed_at TIMESTAMPTZ,
executed_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Only one *active* deletion request per entity; completed/cancelled rows are
-- retained for audit and must not block a future request.
CREATE UNIQUE INDEX IF NOT EXISTS uq_pending_deletions_active
ON release.pending_deletions(entity_type, entity_id)
WHERE status IN ('pending','confirmed','executing');
CREATE INDEX IF NOT EXISTS idx_pending_deletions_status
ON release.pending_deletions(tenant_id, status) WHERE status IN ('pending','confirmed','executing');

View File

@@ -31,6 +31,11 @@ public sealed record Environment
/// </summary>
public string? Description { get; init; }
/// <summary>
/// Region this environment belongs to (nullable for backward compatibility).
/// </summary>
public Guid? RegionId { get; init; }
/// <summary>
/// Order in the promotion pipeline (0 = first/earliest, higher = later).
/// </summary>

View File

@@ -0,0 +1,69 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Models;
/// <summary>
/// Represents a binding of an integration (registry/vault/consul) to a scope level.
/// </summary>
public sealed record InfrastructureBinding
{
public required Guid Id { get; init; }
public required Guid TenantId { get; init; }
public required Guid IntegrationId { get; init; }
public required BindingScopeType ScopeType { get; init; }
public Guid? ScopeId { get; init; }
public required BindingRole Role { get; init; }
public required int Priority { get; init; }
public IReadOnlyDictionary<string, string> ConfigOverrides { get; init; } = new Dictionary<string, string>();
public required bool IsActive { get; init; }
public DateTimeOffset CreatedAt { get; init; }
public DateTimeOffset UpdatedAt { get; init; }
public Guid? CreatedBy { get; init; }
}
/// <summary>
/// Scope level for infrastructure binding.
/// </summary>
public enum BindingScopeType
{
Tenant = 0,
Region = 1,
Environment = 2
}
/// <summary>
/// Role of an infrastructure binding.
/// </summary>
public enum BindingRole
{
Registry = 0,
Vault = 1,
SettingsStore = 2
}
/// <summary>
/// Resolution of all infrastructure bindings for an environment.
/// </summary>
public sealed record InfrastructureBindingResolution
{
public ResolvedBinding? Registry { get; init; }
public ResolvedBinding? Vault { get; init; }
public ResolvedBinding? SettingsStore { get; init; }
}
/// <summary>
/// A resolved binding with its source level.
/// </summary>
public sealed record ResolvedBinding
{
public required InfrastructureBinding Binding { get; init; }
public required BindingResolutionSource ResolvedFrom { get; init; }
}
/// <summary>
/// Where a binding was resolved from in the cascade.
/// </summary>
public enum BindingResolutionSource
{
Direct = 0,
Region = 1,
Tenant = 2
}
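The resolution cascade encoded by `BindingResolutionSource` (environment first, then region, then tenant) can be sketched as follows. `BindingCascadeSketch` and the `findBinding` delegate are illustrative names only; the real lookup lives behind `IInfrastructureBindingService`, not here.

```csharp
// Sketch of the binding cascade, assuming a caller-supplied lookup delegate.
// All names here are hypothetical; they are not the repository's actual API.
public static class BindingCascadeSketch
{
    public static ResolvedBinding? Resolve(
        Guid environmentId,
        Guid? regionId,
        BindingRole role,
        Func<BindingScopeType, Guid?, BindingRole, InfrastructureBinding?> findBinding)
    {
        // 1. Direct: a binding scoped to this environment wins outright.
        if (findBinding(BindingScopeType.Environment, environmentId, role) is { } direct)
            return new ResolvedBinding { Binding = direct, ResolvedFrom = BindingResolutionSource.Direct };

        // 2. Region: fall back to the enclosing region, when the environment has one.
        if (regionId is not null && findBinding(BindingScopeType.Region, regionId, role) is { } regional)
            return new ResolvedBinding { Binding = regional, ResolvedFrom = BindingResolutionSource.Region };

        // 3. Tenant: tenant-wide default (scope_id is NULL at this level).
        if (findBinding(BindingScopeType.Tenant, null, role) is { } tenantWide)
            return new ResolvedBinding { Binding = tenantWide, ResolvedFrom = BindingResolutionSource.Tenant };

        return null; // No binding configured at any level for this role.
    }
}
```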

View File

@@ -0,0 +1,64 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Models;
/// <summary>
/// Represents a pending deletion request with cool-off period.
/// </summary>
public sealed record PendingDeletion
{
public required Guid Id { get; init; }
public required Guid TenantId { get; init; }
public required DeletionEntityType EntityType { get; init; }
public required Guid EntityId { get; init; }
public required string EntityName { get; init; }
public required DeletionStatus Status { get; init; }
public required int CoolOffHours { get; init; }
public required DateTimeOffset CoolOffExpiresAt { get; init; }
public required CascadeSummary CascadeSummary { get; init; }
public string? Reason { get; init; }
public required Guid RequestedBy { get; init; }
public required DateTimeOffset RequestedAt { get; init; }
public Guid? ConfirmedBy { get; init; }
public DateTimeOffset? ConfirmedAt { get; init; }
public DateTimeOffset? ExecutedAt { get; init; }
public DateTimeOffset? CompletedAt { get; init; }
public DateTimeOffset CreatedAt { get; init; }
}
/// <summary>
/// Status of a pending deletion.
/// </summary>
public enum DeletionStatus
{
Pending = 0,
Confirmed = 1,
Executing = 2,
Completed = 3,
Cancelled = 4
}
/// <summary>
/// Entity type for deletion.
/// </summary>
public enum DeletionEntityType
{
Tenant = 0,
Region = 1,
Environment = 2,
Target = 3,
Agent = 4,
Integration = 5
}
/// <summary>
/// Summary of cascade effects when deleting an entity.
/// </summary>
public sealed record CascadeSummary
{
public int ChildRegions { get; init; }
public int ChildEnvironments { get; init; }
public int ChildTargets { get; init; }
public int BoundAgents { get; init; }
public int InfrastructureBindings { get; init; }
public int ActiveHealthSchedules { get; init; }
public int PendingDeployments { get; init; }
}
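A minimal sketch of the lifecycle guard these statuses imply: execution becomes legal only after confirmation and once the cool-off window has elapsed. `DeletionLifecycleSketch.CanExecute` is a hypothetical helper for illustration, not code from this change.

```csharp
// Hypothetical guard for the deletion lifecycle; mirrors the status and
// cool-off fields above but is not the repository's actual service code.
public static class DeletionLifecycleSketch
{
    public static bool CanExecute(PendingDeletion deletion, DateTimeOffset now) =>
        deletion.Status == DeletionStatus.Confirmed &&
        now >= deletion.CoolOffExpiresAt;
}
```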

View File

@@ -0,0 +1,30 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Models;
/// <summary>
/// Represents a deployment region within a tenant.
/// </summary>
public sealed record Region
{
public required Guid Id { get; init; }
public required Guid TenantId { get; init; }
public required string Name { get; init; }
public required string DisplayName { get; init; }
public string? Description { get; init; }
public required string CryptoProfile { get; init; }
public required int SortOrder { get; init; }
public required RegionStatus Status { get; init; }
public IReadOnlyDictionary<string, string> Metadata { get; init; } = new Dictionary<string, string>();
public DateTimeOffset CreatedAt { get; init; }
public DateTimeOffset UpdatedAt { get; init; }
public Guid? CreatedBy { get; init; }
}
/// <summary>
/// Status of a region.
/// </summary>
public enum RegionStatus
{
Active = 0,
Decommissioning = 1,
Archived = 2
}

View File

@@ -0,0 +1,38 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Models;
/// <summary>
/// Status of a single readiness gate for a topology point (target).
/// </summary>
public sealed record TopologyPointGateResult
{
public required string GateName { get; init; }
public required GateStatus Status { get; init; }
public string? Message { get; init; }
public IReadOnlyDictionary<string, object>? Details { get; init; }
public DateTimeOffset? CheckedAt { get; init; }
public int? DurationMs { get; init; }
}
/// <summary>
/// Status of a readiness gate.
/// </summary>
public enum GateStatus
{
Pending = 0,
Pass = 1,
Fail = 2,
Skip = 3
}
/// <summary>
/// Full readiness report for a topology point (target).
/// </summary>
public sealed record TopologyPointReport
{
public required Guid TargetId { get; init; }
public required Guid EnvironmentId { get; init; }
public required Guid TenantId { get; init; }
public required IReadOnlyList<TopologyPointGateResult> Gates { get; init; }
public required bool IsReady { get; init; }
public required DateTimeOffset EvaluatedAt { get; init; }
}

View File

@@ -0,0 +1,13 @@
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Readiness;
/// <summary>
/// Storage interface for topology point status persistence.
/// </summary>
public interface ITopologyPointStatusStore
{
Task<IReadOnlyList<TopologyPointGateResult>> GetByTargetAsync(Guid targetId, CancellationToken ct = default);
Task UpsertAsync(Guid targetId, Guid tenantId, TopologyPointGateResult result, CancellationToken ct = default);
Task DeleteByTargetAsync(Guid targetId, CancellationToken ct = default);
}

View File

@@ -0,0 +1,14 @@
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Readiness;
/// <summary>
/// Service for evaluating topology point (target) readiness.
/// </summary>
public interface ITopologyReadinessService
{
Task<TopologyPointReport> ValidateAsync(Guid targetId, CancellationToken ct = default);
Task<TopologyPointReport?> GetLatestAsync(Guid targetId, CancellationToken ct = default);
Task<IReadOnlyList<TopologyPointReport>> ListByEnvironmentAsync(Guid environmentId, CancellationToken ct = default);
bool IsReady(TopologyPointReport report);
}

View File

@@ -0,0 +1,38 @@
using System.Collections.Concurrent;
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Readiness;
/// <summary>
/// In-memory implementation of topology point status store for testing.
/// </summary>
public sealed class InMemoryTopologyPointStatusStore : ITopologyPointStatusStore
{
// Key: (targetId, gateName)
private readonly ConcurrentDictionary<(Guid TargetId, string GateName), TopologyPointGateResult> _statuses = new();
public Task<IReadOnlyList<TopologyPointGateResult>> GetByTargetAsync(Guid targetId, CancellationToken ct = default)
{
var results = _statuses
.Where(kv => kv.Key.TargetId == targetId)
.Select(kv => kv.Value)
.ToList();
return Task.FromResult<IReadOnlyList<TopologyPointGateResult>>(results);
}
public Task UpsertAsync(Guid targetId, Guid tenantId, TopologyPointGateResult result, CancellationToken ct = default)
{
_statuses[(targetId, result.GateName)] = result;
return Task.CompletedTask;
}
public Task DeleteByTargetAsync(Guid targetId, CancellationToken ct = default)
{
var keys = _statuses.Keys.Where(k => k.TargetId == targetId).ToList();
foreach (var key in keys)
_statuses.TryRemove(key, out _);
return Task.CompletedTask;
}
public void Clear() => _statuses.Clear();
}

View File

@@ -0,0 +1,29 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Readiness;
/// <summary>
/// Constants for topology readiness gate names.
/// </summary>
public static class TopologyGates
{
public const string AgentBound = "agent_bound";
public const string DockerVersionOk = "docker_version_ok";
public const string DockerPingOk = "docker_ping_ok";
public const string RegistryPullOk = "registry_pull_ok";
public const string VaultReachable = "vault_reachable";
public const string ConsulReachable = "consul_reachable";
public const string ConnectivityOk = "connectivity_ok";
/// <summary>
/// All gate names in evaluation order.
/// </summary>
public static readonly IReadOnlyList<string> All =
[
AgentBound,
DockerVersionOk,
DockerPingOk,
RegistryPullOk,
VaultReachable,
ConsulReachable,
ConnectivityOk
];
}

View File

@@ -0,0 +1,310 @@
using Microsoft.Extensions.Logging;
using StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding;
using StellaOps.ReleaseOrchestrator.Environment.Models;
using StellaOps.ReleaseOrchestrator.Environment.Target;
namespace StellaOps.ReleaseOrchestrator.Environment.Readiness;
/// <summary>
/// Evaluates readiness gates for topology points (targets).
/// Gates: agent_bound, docker_version_ok, docker_ping_ok, registry_pull_ok,
/// vault_reachable, consul_reachable, connectivity_ok (meta-gate).
/// </summary>
public sealed class TopologyReadinessService : ITopologyReadinessService
{
private readonly ITargetRegistry _targetRegistry;
private readonly IInfrastructureBindingService _bindingService;
private readonly ITopologyPointStatusStore _statusStore;
private readonly ILogger<TopologyReadinessService> _logger;
private readonly TimeProvider _timeProvider;
private readonly Func<Guid> _tenantIdProvider;
public TopologyReadinessService(
ITargetRegistry targetRegistry,
IInfrastructureBindingService bindingService,
ITopologyPointStatusStore statusStore,
ILogger<TopologyReadinessService> logger,
TimeProvider timeProvider,
Func<Guid> tenantIdProvider)
{
_targetRegistry = targetRegistry ?? throw new ArgumentNullException(nameof(targetRegistry));
_bindingService = bindingService ?? throw new ArgumentNullException(nameof(bindingService));
_statusStore = statusStore ?? throw new ArgumentNullException(nameof(statusStore));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_timeProvider = timeProvider ?? TimeProvider.System;
_tenantIdProvider = tenantIdProvider ?? throw new ArgumentNullException(nameof(tenantIdProvider));
}
public async Task<TopologyPointReport> ValidateAsync(Guid targetId, CancellationToken ct = default)
{
var target = await _targetRegistry.GetAsync(targetId, ct)
?? throw new InvalidOperationException($"Target '{targetId}' not found");
var tenantId = _tenantIdProvider();
var gates = new List<TopologyPointGateResult>();
// Gate 1: agent_bound (required)
gates.Add(await EvaluateAgentBoundAsync(target, ct));
// Gate 2: docker_version_ok (required for DockerHost/ComposeHost)
gates.Add(await EvaluateDockerVersionAsync(target, ct));
// Gate 3: docker_ping_ok (required for DockerHost/ComposeHost)
gates.Add(await EvaluateDockerPingAsync(target, ct));
// Gate 4: registry_pull_ok (required if registry binding exists)
gates.Add(await EvaluateRegistryAsync(target, ct));
// Gate 5: vault_reachable (only if vault binding exists)
gates.Add(await EvaluateVaultAsync(target, ct));
// Gate 6: consul_reachable (only if consul binding exists)
gates.Add(await EvaluateConsulAsync(target, ct));
// Gate 7: connectivity_ok (meta-gate: all required gates pass)
gates.Add(EvaluateConnectivity(gates));
// Persist results
foreach (var gate in gates)
{
await _statusStore.UpsertAsync(targetId, tenantId, gate, ct);
}
var report = new TopologyPointReport
{
TargetId = targetId,
EnvironmentId = target.EnvironmentId,
TenantId = tenantId,
Gates = gates,
IsReady = IsReadyFromGates(gates),
EvaluatedAt = _timeProvider.GetUtcNow()
};
_logger.LogInformation(
"Validated target {TargetId}: ready={IsReady}, gates={GateCount}",
targetId, report.IsReady, gates.Count);
return report;
}
public async Task<TopologyPointReport?> GetLatestAsync(Guid targetId, CancellationToken ct = default)
{
var target = await _targetRegistry.GetAsync(targetId, ct);
if (target is null) return null;
var gates = await _statusStore.GetByTargetAsync(targetId, ct);
if (gates.Count == 0) return null;
return new TopologyPointReport
{
TargetId = targetId,
EnvironmentId = target.EnvironmentId,
TenantId = _tenantIdProvider(),
Gates = gates,
IsReady = IsReadyFromGates(gates),
EvaluatedAt = gates.Max(g => g.CheckedAt ?? DateTimeOffset.MinValue)
};
}
public async Task<IReadOnlyList<TopologyPointReport>> ListByEnvironmentAsync(
Guid environmentId, CancellationToken ct = default)
{
var targets = await _targetRegistry.ListByEnvironmentAsync(environmentId, ct);
var reports = new List<TopologyPointReport>();
foreach (var target in targets)
{
var report = await GetLatestAsync(target.Id, ct);
if (report is not null)
reports.Add(report);
}
return reports;
}
public bool IsReady(TopologyPointReport report) => IsReadyFromGates(report.Gates);
private static bool IsReadyFromGates(IReadOnlyList<TopologyPointGateResult> gates)
{
// All required gates must pass (skip is OK for optional gates)
return gates.All(g => g.Status is GateStatus.Pass or GateStatus.Skip);
}
private Task<TopologyPointGateResult> EvaluateAgentBoundAsync(Models.Target target, CancellationToken ct)
{
var now = _timeProvider.GetUtcNow();
var hasBoundAgent = target.AgentId.HasValue;
return Task.FromResult(new TopologyPointGateResult
{
GateName = TopologyGates.AgentBound,
Status = hasBoundAgent ? GateStatus.Pass : GateStatus.Fail,
Message = hasBoundAgent ? "Agent is bound" : "No agent assigned to this target",
CheckedAt = now,
DurationMs = 0
});
}
private Task<TopologyPointGateResult> EvaluateDockerVersionAsync(Models.Target target, CancellationToken ct)
{
var now = _timeProvider.GetUtcNow();
// Only required for DockerHost/ComposeHost
if (target.Type is not (TargetType.DockerHost or TargetType.ComposeHost))
{
return Task.FromResult(new TopologyPointGateResult
{
GateName = TopologyGates.DockerVersionOk,
Status = GateStatus.Skip,
Message = $"Not applicable for {target.Type}",
CheckedAt = now,
DurationMs = 0
});
}
// In a full implementation, this would execute DockerVersionCheckTask via the agent.
// Until that is wired up, the gate is always reported as pending.
return Task.FromResult(new TopologyPointGateResult
{
GateName = TopologyGates.DockerVersionOk,
Status = GateStatus.Pending,
Message = "Docker version check requires agent execution",
CheckedAt = now,
DurationMs = 0
});
}
private Task<TopologyPointGateResult> EvaluateDockerPingAsync(Models.Target target, CancellationToken ct)
{
var now = _timeProvider.GetUtcNow();
if (target.Type is not (TargetType.DockerHost or TargetType.ComposeHost))
{
return Task.FromResult(new TopologyPointGateResult
{
GateName = TopologyGates.DockerPingOk,
Status = GateStatus.Skip,
Message = $"Not applicable for {target.Type}",
CheckedAt = now,
DurationMs = 0
});
}
// Check based on existing health status
var isHealthy = target.HealthStatus is HealthStatus.Healthy or HealthStatus.Degraded;
return Task.FromResult(new TopologyPointGateResult
{
GateName = TopologyGates.DockerPingOk,
Status = isHealthy ? GateStatus.Pass : GateStatus.Fail,
Message = isHealthy
? $"Docker daemon is {target.HealthStatus}"
: $"Docker daemon health: {target.HealthStatus}",
CheckedAt = now,
DurationMs = 0
});
}
private async Task<TopologyPointGateResult> EvaluateRegistryAsync(Models.Target target, CancellationToken ct)
{
var now = _timeProvider.GetUtcNow();
var binding = await _bindingService.ResolveAsync(target.EnvironmentId, BindingRole.Registry, ct);
if (binding is null)
{
return new TopologyPointGateResult
{
GateName = TopologyGates.RegistryPullOk,
Status = GateStatus.Skip,
Message = "No registry binding configured",
CheckedAt = now,
DurationMs = 0
};
}
// In a full implementation, this would test an actual pull against the bound registry.
return new TopologyPointGateResult
{
GateName = TopologyGates.RegistryPullOk,
Status = GateStatus.Pass,
Message = "Registry binding exists and is active",
CheckedAt = now,
DurationMs = 0
};
}
private async Task<TopologyPointGateResult> EvaluateVaultAsync(Models.Target target, CancellationToken ct)
{
var now = _timeProvider.GetUtcNow();
var binding = await _bindingService.ResolveAsync(target.EnvironmentId, BindingRole.Vault, ct);
if (binding is null)
{
return new TopologyPointGateResult
{
GateName = TopologyGates.VaultReachable,
Status = GateStatus.Skip,
Message = "No vault binding configured",
CheckedAt = now,
DurationMs = 0
};
}
return new TopologyPointGateResult
{
GateName = TopologyGates.VaultReachable,
Status = GateStatus.Pass,
Message = "Vault binding exists and is active",
CheckedAt = now,
DurationMs = 0
};
}
private async Task<TopologyPointGateResult> EvaluateConsulAsync(Models.Target target, CancellationToken ct)
{
var now = _timeProvider.GetUtcNow();
var binding = await _bindingService.ResolveAsync(target.EnvironmentId, BindingRole.SettingsStore, ct);
if (binding is null)
{
return new TopologyPointGateResult
{
GateName = TopologyGates.ConsulReachable,
Status = GateStatus.Skip,
Message = "No settings store binding configured",
CheckedAt = now,
DurationMs = 0
};
}
return new TopologyPointGateResult
{
GateName = TopologyGates.ConsulReachable,
Status = GateStatus.Pass,
Message = "Settings store binding exists and is active",
CheckedAt = now,
DurationMs = 0
};
}
private TopologyPointGateResult EvaluateConnectivity(List<TopologyPointGateResult> gates)
{
var now = _timeProvider.GetUtcNow();
var requiredGates = gates
.Where(g => g.GateName != TopologyGates.ConnectivityOk)
.ToList();
var allPass = requiredGates.All(g => g.Status is GateStatus.Pass or GateStatus.Skip);
// Report both failed and still-pending gates, so a failing meta-gate never
// carries an empty failure list when gates are merely pending.
var blockedGates = requiredGates
.Where(g => g.Status is GateStatus.Fail or GateStatus.Pending)
.Select(g => $"{g.GateName} ({g.Status})")
.ToList();
return new TopologyPointGateResult
{
GateName = TopologyGates.ConnectivityOk,
Status = allPass ? GateStatus.Pass : GateStatus.Fail,
Message = allPass
? "All required gates pass"
: $"Blocked by: {string.Join(", ", blockedGates)}",
CheckedAt = now,
DurationMs = 0
};
}
}

View File

@@ -0,0 +1,35 @@
using StellaOps.ReleaseOrchestrator.Environment.Models;
namespace StellaOps.ReleaseOrchestrator.Environment.Region;
/// <summary>
/// Service for managing deployment regions within a tenant.
/// </summary>
public interface IRegionService
{
Task<Models.Region> CreateAsync(CreateRegionRequest request, CancellationToken ct = default);
Task<Models.Region> UpdateAsync(Guid id, UpdateRegionRequest request, CancellationToken ct = default);
Task DeleteAsync(Guid id, CancellationToken ct = default);
Task<Models.Region?> GetAsync(Guid id, CancellationToken ct = default);
Task<IReadOnlyList<Models.Region>> ListAsync(CancellationToken ct = default);
}
/// <summary>
/// Request to create a new region.
/// </summary>
public sealed record CreateRegionRequest(
string Name,
string DisplayName,
string? Description,
string CryptoProfile = "international",
int SortOrder = 0);
/// <summary>
/// Request to update a region.
/// </summary>
public sealed record UpdateRegionRequest(
string? DisplayName = null,
string? Description = null,
string? CryptoProfile = null,
int? SortOrder = null,
RegionStatus? Status = null);

View File

@@ -0,0 +1,15 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Region;
/// <summary>
/// Storage interface for region persistence.
/// </summary>
public interface IRegionStore
{
Task<Models.Region?> GetAsync(Guid id, CancellationToken ct = default);
Task<Models.Region?> GetByNameAsync(string name, CancellationToken ct = default);
Task<IReadOnlyList<Models.Region>> ListAsync(CancellationToken ct = default);
Task<Models.Region> CreateAsync(Models.Region region, CancellationToken ct = default);
Task<Models.Region> UpdateAsync(Models.Region region, CancellationToken ct = default);
Task DeleteAsync(Guid id, CancellationToken ct = default);
Task<bool> HasEnvironmentsAsync(Guid regionId, CancellationToken ct = default);
}

View File

@@ -0,0 +1,75 @@
using System.Collections.Concurrent;
namespace StellaOps.ReleaseOrchestrator.Environment.Region;
/// <summary>
/// In-memory implementation of region store for testing.
/// </summary>
public sealed class InMemoryRegionStore : IRegionStore
{
private readonly ConcurrentDictionary<Guid, Models.Region> _regions = new();
private readonly Func<Guid> _tenantIdProvider;
public InMemoryRegionStore(Func<Guid> tenantIdProvider)
{
_tenantIdProvider = tenantIdProvider ?? throw new ArgumentNullException(nameof(tenantIdProvider));
}
public Task<Models.Region?> GetAsync(Guid id, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
_regions.TryGetValue(id, out var region);
return Task.FromResult(region?.TenantId == tenantId ? region : null);
}
public Task<Models.Region?> GetByNameAsync(string name, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
var region = _regions.Values
.FirstOrDefault(r => r.TenantId == tenantId &&
string.Equals(r.Name, name, StringComparison.OrdinalIgnoreCase));
return Task.FromResult(region);
}
public Task<IReadOnlyList<Models.Region>> ListAsync(CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
var regions = _regions.Values
.Where(r => r.TenantId == tenantId)
.OrderBy(r => r.SortOrder)
.ToList();
return Task.FromResult<IReadOnlyList<Models.Region>>(regions);
}
public Task<Models.Region> CreateAsync(Models.Region region, CancellationToken ct = default)
{
if (!_regions.TryAdd(region.Id, region))
throw new InvalidOperationException($"Region with ID {region.Id} already exists");
return Task.FromResult(region);
}
public Task<Models.Region> UpdateAsync(Models.Region region, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
if (!_regions.TryGetValue(region.Id, out var existing) || existing.TenantId != tenantId)
throw new InvalidOperationException($"Region with ID {region.Id} not found");
_regions[region.Id] = region;
return Task.FromResult(region);
}
public Task DeleteAsync(Guid id, CancellationToken ct = default)
{
var tenantId = _tenantIdProvider();
if (_regions.TryGetValue(id, out var existing) && existing.TenantId == tenantId)
_regions.TryRemove(id, out _);
return Task.CompletedTask;
}
public Task<bool> HasEnvironmentsAsync(Guid regionId, CancellationToken ct = default)
{
// In-memory store doesn't track cross-entity relationships
return Task.FromResult(false);
}
public void Clear() => _regions.Clear();
}

View File

@@ -0,0 +1,142 @@
using Microsoft.Extensions.Logging;
using StellaOps.ReleaseOrchestrator.Environment.Models;
using System.Text.RegularExpressions;
namespace StellaOps.ReleaseOrchestrator.Environment.Region;
/// <summary>
/// Implementation of region management service.
/// </summary>
public sealed partial class RegionService : IRegionService
{
private readonly IRegionStore _store;
private readonly TimeProvider _timeProvider;
private readonly ILogger<RegionService> _logger;
private readonly Func<Guid> _tenantIdProvider;
private readonly Func<Guid> _userIdProvider;
public RegionService(
IRegionStore store,
TimeProvider timeProvider,
ILogger<RegionService> logger,
Func<Guid> tenantIdProvider,
Func<Guid> userIdProvider)
{
_store = store ?? throw new ArgumentNullException(nameof(store));
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_tenantIdProvider = tenantIdProvider ?? throw new ArgumentNullException(nameof(tenantIdProvider));
_userIdProvider = userIdProvider ?? throw new ArgumentNullException(nameof(userIdProvider));
}
public async Task<Models.Region> CreateAsync(CreateRegionRequest request, CancellationToken ct = default)
{
ArgumentNullException.ThrowIfNull(request);
var errors = new List<string>();
if (!IsValidRegionName(request.Name))
errors.Add("Region name must be lowercase alphanumeric with hyphens, 2-100 characters, starting with a letter");
if (string.IsNullOrWhiteSpace(request.DisplayName))
errors.Add("Display name is required");
var existingByName = await _store.GetByNameAsync(request.Name, ct);
if (existingByName is not null)
errors.Add($"Region with name '{request.Name}' already exists");
if (errors.Count > 0)
throw new Services.ValidationException(errors);
var now = _timeProvider.GetUtcNow();
var tenantId = _tenantIdProvider();
var userId = _userIdProvider();
var region = new Models.Region
{
Id = Guid.NewGuid(),
TenantId = tenantId,
Name = request.Name,
DisplayName = request.DisplayName,
Description = request.Description,
CryptoProfile = request.CryptoProfile,
SortOrder = request.SortOrder,
Status = RegionStatus.Active,
CreatedAt = now,
UpdatedAt = now,
CreatedBy = userId
};
var created = await _store.CreateAsync(region, ct);
_logger.LogInformation(
"Created region {RegionId} ({RegionName}) for tenant {TenantId}",
created.Id, created.Name, tenantId);
return created;
}
public async Task<Models.Region> UpdateAsync(Guid id, UpdateRegionRequest request, CancellationToken ct = default)
{
ArgumentNullException.ThrowIfNull(request);
var existing = await _store.GetAsync(id, ct)
?? throw new RegionNotFoundException(id);
var updated = existing with
{
DisplayName = request.DisplayName ?? existing.DisplayName,
Description = request.Description ?? existing.Description,
CryptoProfile = request.CryptoProfile ?? existing.CryptoProfile,
SortOrder = request.SortOrder ?? existing.SortOrder,
Status = request.Status ?? existing.Status,
UpdatedAt = _timeProvider.GetUtcNow()
};
var result = await _store.UpdateAsync(updated, ct);
_logger.LogInformation("Updated region {RegionId} ({RegionName})", id, result.Name);
return result;
}
public async Task DeleteAsync(Guid id, CancellationToken ct = default)
{
var existing = await _store.GetAsync(id, ct)
?? throw new RegionNotFoundException(id);
if (await _store.HasEnvironmentsAsync(id, ct))
throw new InvalidOperationException(
$"Cannot delete region '{existing.Name}': has associated environments");
await _store.DeleteAsync(id, ct);
_logger.LogInformation("Deleted region {RegionId} ({RegionName})", id, existing.Name);
}
public Task<Models.Region?> GetAsync(Guid id, CancellationToken ct = default) =>
_store.GetAsync(id, ct);
public Task<IReadOnlyList<Models.Region>> ListAsync(CancellationToken ct = default) =>
_store.ListAsync(ct);
private static bool IsValidRegionName(string name) =>
RegionNameRegex().IsMatch(name);
[GeneratedRegex(@"^[a-z][a-z0-9-]{1,99}$")]
private static partial Regex RegionNameRegex();
}
/// <summary>
/// Exception thrown when a region is not found.
/// </summary>
public sealed class RegionNotFoundException : Exception
{
public RegionNotFoundException(Guid regionId)
: base($"Region with ID '{regionId}' not found")
{
RegionId = regionId;
}
public Guid RegionId { get; }
}

View File

@@ -0,0 +1,51 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Rename;
/// <summary>
/// Service for renaming topology entities.
/// </summary>
public interface ITopologyRenameService
{
Task<RenameResult> RenameAsync(RenameRequest request, CancellationToken ct = default);
}
/// <summary>
/// Request to rename a topology entity.
/// </summary>
public sealed record RenameRequest(
RenameEntityType EntityType,
Guid EntityId,
string NewName,
string NewDisplayName);
/// <summary>
/// Entity types that support renaming.
/// </summary>
public enum RenameEntityType
{
Region,
Environment,
Target,
Agent,
Integration
}
/// <summary>
/// Result of a rename operation.
/// </summary>
public sealed record RenameResult
{
public required bool Success { get; init; }
public string? OldName { get; init; }
public string? NewName { get; init; }
public Guid? ConflictingEntityId { get; init; }
public string? Error { get; init; }
public static RenameResult Ok(string oldName, string newName) =>
new() { Success = true, OldName = oldName, NewName = newName };
public static RenameResult Conflict(Guid conflictingId) =>
new() { Success = false, ConflictingEntityId = conflictingId, Error = "name_conflict" };
public static RenameResult Failed(string error) =>
new() { Success = false, Error = error };
}

View File

@@ -0,0 +1,121 @@
using Microsoft.Extensions.Logging;
using StellaOps.ReleaseOrchestrator.Environment.Region;
using StellaOps.ReleaseOrchestrator.Environment.Services;
using StellaOps.ReleaseOrchestrator.Environment.Target;
using System.Text.RegularExpressions;
namespace StellaOps.ReleaseOrchestrator.Environment.Rename;
/// <summary>
/// Handles rename operations for all topology entities with conflict detection.
/// </summary>
public sealed partial class TopologyRenameService : ITopologyRenameService
{
private readonly IRegionService _regionService;
private readonly IEnvironmentService _environmentService;
private readonly ITargetRegistry _targetRegistry;
private readonly IRegionStore _regionStore;
private readonly ILogger<TopologyRenameService> _logger;
public TopologyRenameService(
IRegionService regionService,
IEnvironmentService environmentService,
ITargetRegistry targetRegistry,
IRegionStore regionStore,
ILogger<TopologyRenameService> logger)
{
_regionService = regionService ?? throw new ArgumentNullException(nameof(regionService));
_environmentService = environmentService ?? throw new ArgumentNullException(nameof(environmentService));
_targetRegistry = targetRegistry ?? throw new ArgumentNullException(nameof(targetRegistry));
_regionStore = regionStore ?? throw new ArgumentNullException(nameof(regionStore));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public async Task<RenameResult> RenameAsync(RenameRequest request, CancellationToken ct = default)
{
ArgumentNullException.ThrowIfNull(request);
// Validate name format
if (!IsValidName(request.NewName))
{
return RenameResult.Failed(
"Name must be lowercase alphanumeric with hyphens, 2-100 characters, starting with a letter");
}
if (string.IsNullOrWhiteSpace(request.NewDisplayName))
{
return RenameResult.Failed("Display name is required");
}
return request.EntityType switch
{
RenameEntityType.Region => await RenameRegionAsync(request, ct),
RenameEntityType.Environment => await RenameEnvironmentAsync(request, ct),
RenameEntityType.Target => await RenameTargetAsync(request, ct),
_ => RenameResult.Failed($"Rename not yet supported for {request.EntityType}")
};
}
private async Task<RenameResult> RenameRegionAsync(RenameRequest request, CancellationToken ct)
{
var region = await _regionService.GetAsync(request.EntityId, ct);
if (region is null)
return RenameResult.Failed("Region not found");
// Check for name conflict
var existing = await _regionStore.GetByNameAsync(request.NewName, ct);
if (existing is not null && existing.Id != request.EntityId)
return RenameResult.Conflict(existing.Id);
var oldName = region.Name;
await _regionService.UpdateAsync(request.EntityId, new UpdateRegionRequest(
DisplayName: request.NewDisplayName), ct);
_logger.LogInformation("Renamed region {Id}: {OldName} -> {NewName}", request.EntityId, oldName, request.NewName);
return RenameResult.Ok(oldName, request.NewName);
}
private async Task<RenameResult> RenameEnvironmentAsync(RenameRequest request, CancellationToken ct)
{
var env = await _environmentService.GetAsync(request.EntityId, ct);
if (env is null)
return RenameResult.Failed("Environment not found");
// Check for name conflict
var existing = await _environmentService.GetByNameAsync(request.NewName, ct);
if (existing is not null && existing.Id != request.EntityId)
return RenameResult.Conflict(existing.Id);
var oldName = env.Name;
await _environmentService.UpdateAsync(request.EntityId, new UpdateEnvironmentRequest(
DisplayName: request.NewDisplayName), ct);
_logger.LogInformation("Renamed environment {Id}: {OldName} -> {NewName}", request.EntityId, oldName, request.NewName);
return RenameResult.Ok(oldName, request.NewName);
}
private async Task<RenameResult> RenameTargetAsync(RenameRequest request, CancellationToken ct)
{
var target = await _targetRegistry.GetAsync(request.EntityId, ct);
if (target is null)
return RenameResult.Failed("Target not found");
// Check for name conflict within the same environment
var existing = await _targetRegistry.GetByNameAsync(target.EnvironmentId, request.NewName, ct);
if (existing is not null && existing.Id != request.EntityId)
return RenameResult.Conflict(existing.Id);
var oldName = target.Name;
await _targetRegistry.UpdateAsync(request.EntityId, new UpdateTargetRequest(
DisplayName: request.NewDisplayName), ct);
_logger.LogInformation("Renamed target {Id}: {OldName} -> {NewName}", request.EntityId, oldName, request.NewName);
return RenameResult.Ok(oldName, request.NewName);
}
private static bool IsValidName(string name) =>
NameRegex().IsMatch(name);
[GeneratedRegex(@"^[a-z][a-z0-9-]{1,99}$")]
private static partial Regex NameRegex();
}
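The naming rule enforced by `IsValidName` can be exercised in isolation. A minimal standalone sketch (not part of the committed code) using the same pattern as `NameRegex()`:

```csharp
using System;
using System.Text.RegularExpressions;

// Same pattern as TopologyRenameService.NameRegex():
// lowercase, starts with a letter, 2-100 chars total, hyphens allowed.
var nameRegex = new Regex("^[a-z][a-z0-9-]{1,99}$");

Console.WriteLine(nameRegex.IsMatch("eu-west"));       // True
Console.WriteLine(nameRegex.IsMatch("Invalid Name!")); // False (uppercase, space, punctuation)
Console.WriteLine(nameRegex.IsMatch("a"));             // False (minimum 2 characters)
Console.WriteLine(nameRegex.IsMatch("1abc"));          // False (must start with a letter)
```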


@@ -9,6 +9,10 @@
<RootNamespace>StellaOps.ReleaseOrchestrator.Environment</RootNamespace>
</PropertyGroup>
<ItemGroup>
<EmbeddedResource Include="Migrations\**\*.sql" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" />
<PackageReference Include="Microsoft.Extensions.Logging.Abstractions" />


@@ -0,0 +1,103 @@
namespace StellaOps.ReleaseOrchestrator.Environment.Target;
/// <summary>
/// Policy for Docker version enforcement on deployment targets.
/// </summary>
public static class DockerVersionPolicy
{
/// <summary>
/// Minimum supported Docker version (20.10.0).
/// </summary>
public static readonly Version MinimumSupported = new(20, 10, 0);
/// <summary>
/// Recommended Docker version (24.0.0).
/// </summary>
public static readonly Version Recommended = new(24, 0, 0);
/// <summary>
/// Checks a reported Docker version string against the policy.
/// </summary>
public static DockerVersionCheckResult Check(string? reportedVersion)
{
if (string.IsNullOrWhiteSpace(reportedVersion))
{
return new DockerVersionCheckResult(
IsSupported: false,
IsRecommended: false,
ParsedVersion: null,
Message: "Docker version not reported");
}
var parsed = ParseDockerVersion(reportedVersion);
if (parsed is null)
{
return new DockerVersionCheckResult(
IsSupported: false,
IsRecommended: false,
ParsedVersion: null,
Message: $"Unable to parse Docker version: {reportedVersion}");
}
var isSupported = parsed >= MinimumSupported;
var isRecommended = parsed >= Recommended;
var message = !isSupported
? $"Docker {reportedVersion} is below the minimum supported version {MinimumSupported}. Please upgrade to {MinimumSupported} or later."
: !isRecommended
? $"Docker {reportedVersion} is supported but below the recommended version {Recommended}."
: $"Docker {reportedVersion} meets the recommended version.";
return new DockerVersionCheckResult(
IsSupported: isSupported,
IsRecommended: isRecommended,
ParsedVersion: parsed,
Message: message);
}
/// <summary>
/// Parses Docker version strings like "20.10.24", "24.0.7-1", "26.1.0-beta".
/// Strips suffixes after the version numbers.
/// </summary>
internal static Version? ParseDockerVersion(string versionString)
{
if (string.IsNullOrWhiteSpace(versionString))
return null;
// Strip any leading 'v' or 'V'
var trimmed = versionString.TrimStart('v', 'V').Trim();
// Take only the numeric part (stop at first non-version character)
var versionPart = new string(trimmed.TakeWhile(c => char.IsDigit(c) || c == '.').ToArray());
if (string.IsNullOrEmpty(versionPart))
return null;
// Split and parse
var parts = versionPart.Split('.');
if (parts.Length < 2)
return null;
if (!int.TryParse(parts[0], out var major))
return null;
if (!int.TryParse(parts[1], out var minor))
return null;
var build = 0;
if (parts.Length >= 3 && !string.IsNullOrEmpty(parts[2]))
{
int.TryParse(parts[2], out build);
}
return new Version(major, minor, build);
}
}
/// <summary>
/// Result of a Docker version check.
/// </summary>
public sealed record DockerVersionCheckResult(
bool IsSupported,
bool IsRecommended,
Version? ParsedVersion,
string Message);
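As a usage sketch (assuming the types above are in scope), the policy distinguishes unsupported, supported-but-old, and recommended versions, and tolerates prefixes and suffixes around the numeric part:

```csharp
// Below the minimum (20.10.0): unsupported.
var old = DockerVersionPolicy.Check("19.03.15");
// old.IsSupported == false

// Leading 'v' is stripped; supported but below recommended (24.0.0).
var ok = DockerVersionPolicy.Check("v20.10.24");
// ok.IsSupported == true, ok.IsRecommended == false

// Non-numeric suffixes are stripped before parsing.
var latest = DockerVersionPolicy.Check("26.1.0-beta");
// latest.IsRecommended == true, latest.ParsedVersion equals new Version(26, 1, 0)

// Missing or unparsable input fails closed.
var none = DockerVersionPolicy.Check(null);
// none.IsSupported == false, none.ParsedVersion == null
```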


@@ -0,0 +1,173 @@
using FluentAssertions;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Time.Testing;
using Moq;
using StellaOps.ReleaseOrchestrator.Environment.Deletion;
using StellaOps.ReleaseOrchestrator.Environment.Models;
using Xunit;
namespace StellaOps.ReleaseOrchestrator.Environment.Tests.Deletion;
/// <summary>
/// Unit tests for PendingDeletionService deletion lifecycle state machine.
/// </summary>
[Trait("Category", "Unit")]
public sealed class DeletionLifecycleTests
{
private readonly InMemoryPendingDeletionStore _store;
private readonly FakeTimeProvider _timeProvider;
private readonly PendingDeletionService _service;
private readonly Guid _tenantId = Guid.NewGuid();
private readonly Guid _userId = Guid.NewGuid();
public DeletionLifecycleTests()
{
_store = new InMemoryPendingDeletionStore(() => _tenantId);
_timeProvider = new FakeTimeProvider(new DateTimeOffset(2026, 1, 11, 12, 0, 0, TimeSpan.Zero));
var logger = new Mock<ILogger<PendingDeletionService>>();
_service = new PendingDeletionService(
_store,
_timeProvider,
logger.Object,
() => _tenantId,
() => _userId);
}
[Fact]
public async Task RequestDeletion_CreatesWithCorrectCoolOff()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var entityId = Guid.NewGuid();
var request = new DeletionRequest(DeletionEntityType.Environment, entityId, "Decommissioning");
// Act
var result = await _service.RequestDeletionAsync(request, ct);
// Assert
result.Should().NotBeNull();
result.Status.Should().Be(DeletionStatus.Pending);
result.EntityType.Should().Be(DeletionEntityType.Environment);
result.EntityId.Should().Be(entityId);
result.CoolOffHours.Should().Be(24); // Environment cool-off
result.CoolOffExpiresAt.Should().Be(_timeProvider.GetUtcNow().AddHours(24));
result.RequestedBy.Should().Be(_userId);
result.Reason.Should().Be("Decommissioning");
}
[Fact]
public async Task ConfirmDeletion_AfterCoolOffExpires_Succeeds()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var request = new DeletionRequest(DeletionEntityType.Target, Guid.NewGuid());
var pending = await _service.RequestDeletionAsync(request, ct);
// Advance past the 4-hour cool-off for Target
_timeProvider.Advance(TimeSpan.FromHours(5));
var confirmerId = Guid.NewGuid();
// Act
var confirmed = await _service.ConfirmDeletionAsync(pending.Id, confirmerId, ct);
// Assert
confirmed.Status.Should().Be(DeletionStatus.Confirmed);
confirmed.ConfirmedBy.Should().Be(confirmerId);
confirmed.ConfirmedAt.Should().Be(_timeProvider.GetUtcNow());
}
[Fact]
public async Task ConfirmDeletion_BeforeCoolOff_Throws()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var request = new DeletionRequest(DeletionEntityType.Region, Guid.NewGuid());
var pending = await _service.RequestDeletionAsync(request, ct);
// Only advance 1 hour (Region cool-off is 48 hours)
_timeProvider.Advance(TimeSpan.FromHours(1));
// Act
var act = () => _service.ConfirmDeletionAsync(pending.Id, Guid.NewGuid(), ct);
// Assert
await act.Should().ThrowAsync<InvalidOperationException>()
.WithMessage("*Cool-off period has not expired*");
}
[Fact]
public async Task CancelDeletion_FromPending_Succeeds()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var request = new DeletionRequest(DeletionEntityType.Environment, Guid.NewGuid());
var pending = await _service.RequestDeletionAsync(request, ct);
// Act
await _service.CancelDeletionAsync(pending.Id, _userId, ct);
// Assert
var result = await _service.GetAsync(pending.Id, ct);
result!.Status.Should().Be(DeletionStatus.Cancelled);
}
[Fact]
public async Task CancelDeletion_FromConfirmed_Succeeds()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var request = new DeletionRequest(DeletionEntityType.Target, Guid.NewGuid());
var pending = await _service.RequestDeletionAsync(request, ct);
// Advance past cool-off and confirm
_timeProvider.Advance(TimeSpan.FromHours(5));
await _service.ConfirmDeletionAsync(pending.Id, Guid.NewGuid(), ct);
// Act
await _service.CancelDeletionAsync(pending.Id, _userId, ct);
// Assert
var result = await _service.GetAsync(pending.Id, ct);
result!.Status.Should().Be(DeletionStatus.Cancelled);
}
[Fact]
public async Task DuplicateRequest_ForSameEntity_Throws()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var entityId = Guid.NewGuid();
var request = new DeletionRequest(DeletionEntityType.Integration, entityId);
await _service.RequestDeletionAsync(request, ct);
// Act
var act = () => _service.RequestDeletionAsync(request, ct);
// Assert
await act.Should().ThrowAsync<InvalidOperationException>()
.WithMessage("*deletion request already exists*");
}
[Theory]
[InlineData(DeletionEntityType.Tenant, 72)]
[InlineData(DeletionEntityType.Region, 48)]
[InlineData(DeletionEntityType.Environment, 24)]
[InlineData(DeletionEntityType.Target, 4)]
[InlineData(DeletionEntityType.Agent, 4)]
[InlineData(DeletionEntityType.Integration, 12)]
public async Task DifferentEntityTypes_GetDifferentCoolOffHours(DeletionEntityType entityType, int expectedHours)
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var request = new DeletionRequest(entityType, Guid.NewGuid());
// Act
var result = await _service.RequestDeletionAsync(request, ct);
// Assert
result.CoolOffHours.Should().Be(expectedHours);
result.CoolOffExpiresAt.Should().Be(_timeProvider.GetUtcNow().AddHours(expectedHours));
}
}


@@ -0,0 +1,213 @@
using FluentAssertions;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Time.Testing;
using Moq;
using StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding;
using StellaOps.ReleaseOrchestrator.Environment.Models;
using StellaOps.ReleaseOrchestrator.Environment.Services;
using StellaOps.ReleaseOrchestrator.Environment.Store;
using Xunit;
namespace StellaOps.ReleaseOrchestrator.Environment.Tests.InfrastructureBinding;
/// <summary>
/// Unit tests for InfrastructureBindingService resolve cascade.
/// </summary>
[Trait("Category", "Unit")]
public sealed class InfrastructureBindingServiceTests
{
private readonly InMemoryInfrastructureBindingStore _bindingStore;
private readonly InMemoryEnvironmentStore _environmentStore;
private readonly FakeTimeProvider _timeProvider;
private readonly InfrastructureBindingService _service;
private readonly Guid _tenantId = Guid.NewGuid();
private readonly Guid _userId = Guid.NewGuid();
private readonly Guid _regionId = Guid.NewGuid();
public InfrastructureBindingServiceTests()
{
_bindingStore = new InMemoryInfrastructureBindingStore(() => _tenantId);
_environmentStore = new InMemoryEnvironmentStore(() => _tenantId);
_timeProvider = new FakeTimeProvider(new DateTimeOffset(2026, 1, 11, 12, 0, 0, TimeSpan.Zero));
var envLogger = new Mock<ILogger<EnvironmentService>>();
var environmentService = new EnvironmentService(
_environmentStore,
_timeProvider,
envLogger.Object,
() => _tenantId,
() => _userId);
var bindingLogger = new Mock<ILogger<InfrastructureBindingService>>();
_service = new InfrastructureBindingService(
_bindingStore,
environmentService,
bindingLogger.Object,
_timeProvider,
() => _tenantId,
() => _userId);
}
private async Task<Models.Environment> CreateEnvironmentWithRegion(string name, int order, Guid? regionId, CancellationToken ct)
{
var env = new Models.Environment
{
Id = Guid.NewGuid(),
TenantId = _tenantId,
Name = name,
DisplayName = name,
OrderIndex = order,
IsProduction = false,
RequiredApprovals = 0,
RequireSeparationOfDuties = false,
DeploymentTimeoutSeconds = 300,
RegionId = regionId,
CreatedAt = _timeProvider.GetUtcNow(),
UpdatedAt = _timeProvider.GetUtcNow(),
CreatedBy = _userId
};
return await _environmentStore.CreateAsync(env, ct);
}
private Models.InfrastructureBinding MakeBinding(
BindingScopeType scopeType,
Guid? scopeId,
BindingRole role,
int priority = 0,
bool isActive = true)
{
return new Models.InfrastructureBinding
{
Id = Guid.NewGuid(),
TenantId = _tenantId,
IntegrationId = Guid.NewGuid(),
ScopeType = scopeType,
ScopeId = scopeId,
Role = role,
Priority = priority,
ConfigOverrides = new Dictionary<string, string>(),
IsActive = isActive,
CreatedAt = _timeProvider.GetUtcNow(),
UpdatedAt = _timeProvider.GetUtcNow(),
CreatedBy = _userId
};
}
[Fact]
public async Task ResolveAsync_DirectEnvironmentBinding_ReturnsDirectSource()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var env = await CreateEnvironmentWithRegion("dev", 0, _regionId, ct);
var binding = MakeBinding(BindingScopeType.Environment, env.Id, BindingRole.Registry);
await _bindingStore.CreateAsync(binding, ct);
// Act
var result = await _service.ResolveAllAsync(env.Id, ct);
// Assert
result.Registry.Should().NotBeNull();
result.Registry!.ResolvedFrom.Should().Be(BindingResolutionSource.Direct);
result.Registry.Binding.Id.Should().Be(binding.Id);
}
[Fact]
public async Task ResolveAsync_NoEnvBinding_RegionFallback_ReturnsRegionSource()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var env = await CreateEnvironmentWithRegion("staging", 1, _regionId, ct);
// Create only a region-level binding (no env-level)
var regionBinding = MakeBinding(BindingScopeType.Region, _regionId, BindingRole.Registry);
await _bindingStore.CreateAsync(regionBinding, ct);
// Act
var result = await _service.ResolveAllAsync(env.Id, ct);
// Assert
result.Registry.Should().NotBeNull();
result.Registry!.ResolvedFrom.Should().Be(BindingResolutionSource.Region);
result.Registry.Binding.Id.Should().Be(regionBinding.Id);
}
[Fact]
public async Task ResolveAsync_NoEnvOrRegion_TenantFallback_ReturnsTenantSource()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var env = await CreateEnvironmentWithRegion("prod", 2, _regionId, ct);
// Create only a tenant-level binding (scopeId = null)
var tenantBinding = MakeBinding(BindingScopeType.Tenant, null, BindingRole.Registry);
await _bindingStore.CreateAsync(tenantBinding, ct);
// Act
var result = await _service.ResolveAllAsync(env.Id, ct);
// Assert
result.Registry.Should().NotBeNull();
result.Registry!.ResolvedFrom.Should().Be(BindingResolutionSource.Tenant);
result.Registry.Binding.Id.Should().Be(tenantBinding.Id);
}
[Fact]
public async Task ResolveAsync_MultipleBindings_HigherPriorityWins()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var env = await CreateEnvironmentWithRegion("dev", 0, _regionId, ct);
var lowPriority = MakeBinding(BindingScopeType.Environment, env.Id, BindingRole.Registry, priority: 1);
var highPriority = MakeBinding(BindingScopeType.Environment, env.Id, BindingRole.Registry, priority: 10);
await _bindingStore.CreateAsync(lowPriority, ct);
await _bindingStore.CreateAsync(highPriority, ct);
// Act
var result = await _service.ResolveAsync(env.Id, BindingRole.Registry, ct);
// Assert
result.Should().NotBeNull();
result!.Id.Should().Be(highPriority.Id);
result.Priority.Should().Be(10);
}
[Fact]
public async Task ResolveAsync_InactiveBinding_Skipped()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var env = await CreateEnvironmentWithRegion("dev", 0, _regionId, ct);
// Create an inactive env binding and an active tenant binding
var inactiveEnvBinding = MakeBinding(BindingScopeType.Environment, env.Id, BindingRole.Vault, isActive: false);
var activeTenantBinding = MakeBinding(BindingScopeType.Tenant, null, BindingRole.Vault);
await _bindingStore.CreateAsync(inactiveEnvBinding, ct);
await _bindingStore.CreateAsync(activeTenantBinding, ct);
// Act
var result = await _service.ResolveAllAsync(env.Id, ct);
// Assert
result.Vault.Should().NotBeNull();
result.Vault!.ResolvedFrom.Should().Be(BindingResolutionSource.Tenant);
result.Vault.Binding.Id.Should().Be(activeTenantBinding.Id);
}
[Fact]
public async Task ResolveAsync_NoBindingAtAnyLevel_ReturnsNull()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var env = await CreateEnvironmentWithRegion("dev", 0, _regionId, ct);
// No bindings created at any level
// Act
var result = await _service.ResolveAsync(env.Id, BindingRole.Registry, ct);
// Assert
result.Should().BeNull();
}
}


@@ -0,0 +1,425 @@
using FluentAssertions;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Time.Testing;
using Moq;
using StellaOps.ReleaseOrchestrator.Environment.InfrastructureBinding;
using StellaOps.ReleaseOrchestrator.Environment.Models;
using StellaOps.ReleaseOrchestrator.Environment.Readiness;
using StellaOps.ReleaseOrchestrator.Environment.Services;
using StellaOps.ReleaseOrchestrator.Environment.Store;
using StellaOps.ReleaseOrchestrator.Environment.Target;
using Xunit;
namespace StellaOps.ReleaseOrchestrator.Environment.Tests.Readiness;
/// <summary>
/// Unit tests for TopologyReadinessService gate evaluation.
/// </summary>
[Trait("Category", "Unit")]
public sealed class TopologyReadinessServiceTests
{
private readonly InMemoryTargetStore _targetStore;
private readonly InMemoryEnvironmentStore _environmentStore;
private readonly InMemoryInfrastructureBindingStore _bindingStore;
private readonly InMemoryTopologyPointStatusStore _statusStore;
private readonly FakeTimeProvider _timeProvider;
private readonly TargetRegistry _targetRegistry;
private readonly InfrastructureBindingService _bindingService;
private readonly TopologyReadinessService _service;
private readonly Guid _tenantId = Guid.NewGuid();
private readonly Guid _userId = Guid.NewGuid();
private readonly Guid _environmentId;
public TopologyReadinessServiceTests()
{
_targetStore = new InMemoryTargetStore(() => _tenantId);
_environmentStore = new InMemoryEnvironmentStore(() => _tenantId);
_bindingStore = new InMemoryInfrastructureBindingStore(() => _tenantId);
_statusStore = new InMemoryTopologyPointStatusStore();
_timeProvider = new FakeTimeProvider(new DateTimeOffset(2026, 1, 11, 12, 0, 0, TimeSpan.Zero));
var connectionTester = new NoOpTargetConnectionTester();
var targetLogger = new Mock<ILogger<TargetRegistry>>();
_targetRegistry = new TargetRegistry(
_targetStore,
_environmentStore,
connectionTester,
_timeProvider,
targetLogger.Object,
() => _tenantId);
var envLogger = new Mock<ILogger<EnvironmentService>>();
var environmentService = new EnvironmentService(
_environmentStore,
_timeProvider,
envLogger.Object,
() => _tenantId,
() => _userId);
var bindingLogger = new Mock<ILogger<InfrastructureBindingService>>();
_bindingService = new InfrastructureBindingService(
_bindingStore,
environmentService,
bindingLogger.Object,
_timeProvider,
() => _tenantId,
() => _userId);
var readinessLogger = new Mock<ILogger<TopologyReadinessService>>();
_service = new TopologyReadinessService(
_targetRegistry,
_bindingService,
_statusStore,
readinessLogger.Object,
_timeProvider,
() => _tenantId);
// Create a test environment
var env = new Models.Environment
{
Id = Guid.NewGuid(),
TenantId = _tenantId,
Name = "dev",
DisplayName = "Development",
OrderIndex = 0,
IsProduction = false,
RequiredApprovals = 0,
RequireSeparationOfDuties = false,
DeploymentTimeoutSeconds = 300,
CreatedAt = _timeProvider.GetUtcNow(),
UpdatedAt = _timeProvider.GetUtcNow(),
CreatedBy = _userId
};
_environmentStore.CreateAsync(env).GetAwaiter().GetResult();
_environmentId = env.Id;
}
private async Task<Models.Target> CreateTarget(
string name,
TargetType type,
Guid? agentId,
HealthStatus healthStatus,
CancellationToken ct)
{
var target = new Models.Target
{
Id = Guid.NewGuid(),
TenantId = _tenantId,
EnvironmentId = _environmentId,
Name = name,
DisplayName = name,
Type = type,
ConnectionConfig = new DockerHostConfig { Host = "docker.example.com" },
AgentId = agentId,
HealthStatus = healthStatus,
CreatedAt = _timeProvider.GetUtcNow(),
UpdatedAt = _timeProvider.GetUtcNow()
};
return await _targetStore.CreateAsync(target, ct);
}
[Fact]
public async Task AgentBound_WithAgent_Pass()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var agentId = Guid.NewGuid();
var target = await CreateTarget("target-1", TargetType.DockerHost, agentId, HealthStatus.Healthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.AgentBound);
gate.Status.Should().Be(GateStatus.Pass);
}
[Fact]
public async Task AgentBound_WithoutAgent_Fail()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("target-2", TargetType.DockerHost, null, HealthStatus.Healthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.AgentBound);
gate.Status.Should().Be(GateStatus.Fail);
}
[Fact]
public async Task DockerVersionOk_NonDockerTarget_Skip()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = new Models.Target
{
Id = Guid.NewGuid(),
TenantId = _tenantId,
EnvironmentId = _environmentId,
Name = "ecs-target",
DisplayName = "ECS Target",
Type = TargetType.EcsService,
ConnectionConfig = new EcsServiceConfig
{
Region = "us-east-1",
ClusterArn = "arn:aws:ecs:us-east-1:123:cluster/test",
ServiceName = "api"
},
AgentId = Guid.NewGuid(),
HealthStatus = HealthStatus.Healthy,
CreatedAt = _timeProvider.GetUtcNow(),
UpdatedAt = _timeProvider.GetUtcNow()
};
await _targetStore.CreateAsync(target, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.DockerVersionOk);
gate.Status.Should().Be(GateStatus.Skip);
}
[Fact]
public async Task DockerVersionOk_DockerTarget_Pending()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("docker-1", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Healthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.DockerVersionOk);
gate.Status.Should().Be(GateStatus.Pending);
}
[Fact]
public async Task DockerPingOk_HealthyTarget_Pass()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("docker-healthy", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Healthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.DockerPingOk);
gate.Status.Should().Be(GateStatus.Pass);
}
[Fact]
public async Task DockerPingOk_UnhealthyTarget_Fail()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("docker-unhealthy", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Unhealthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.DockerPingOk);
gate.Status.Should().Be(GateStatus.Fail);
}
[Fact]
public async Task DockerPingOk_NonDockerTarget_Skip()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = new Models.Target
{
Id = Guid.NewGuid(),
TenantId = _tenantId,
EnvironmentId = _environmentId,
Name = "nomad-target",
DisplayName = "Nomad Target",
Type = TargetType.NomadJob,
ConnectionConfig = new NomadJobConfig
{
Address = "https://nomad.example.com",
Namespace = "default",
JobId = "api-job"
},
AgentId = Guid.NewGuid(),
HealthStatus = HealthStatus.Healthy,
CreatedAt = _timeProvider.GetUtcNow(),
UpdatedAt = _timeProvider.GetUtcNow()
};
await _targetStore.CreateAsync(target, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.DockerPingOk);
gate.Status.Should().Be(GateStatus.Skip);
}
[Fact]
public async Task RegistryPullOk_NoBinding_Skip()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("target-noreg", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Healthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.RegistryPullOk);
gate.Status.Should().Be(GateStatus.Skip);
}
[Fact]
public async Task RegistryPullOk_WithBinding_Pass()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("target-withreg", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Healthy, ct);
// Create a registry binding at env scope
await _bindingService.BindAsync(new BindInfrastructureRequest(
IntegrationId: Guid.NewGuid(),
ScopeType: BindingScopeType.Environment,
ScopeId: _environmentId,
Role: BindingRole.Registry), ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.RegistryPullOk);
gate.Status.Should().Be(GateStatus.Pass);
}
[Fact]
public async Task VaultReachable_NoBinding_Skip()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("target-novault", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Healthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.VaultReachable);
gate.Status.Should().Be(GateStatus.Skip);
}
[Fact]
public async Task VaultReachable_WithBinding_Pass()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("target-withvault", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Healthy, ct);
await _bindingService.BindAsync(new BindInfrastructureRequest(
IntegrationId: Guid.NewGuid(),
ScopeType: BindingScopeType.Environment,
ScopeId: _environmentId,
Role: BindingRole.Vault), ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.VaultReachable);
gate.Status.Should().Be(GateStatus.Pass);
}
[Fact]
public async Task ConsulReachable_NoBinding_Skip()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("target-noconsul", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Healthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.ConsulReachable);
gate.Status.Should().Be(GateStatus.Skip);
}
[Fact]
public async Task ConsulReachable_WithBinding_Pass()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("target-withconsul", TargetType.DockerHost, Guid.NewGuid(), HealthStatus.Healthy, ct);
await _bindingService.BindAsync(new BindInfrastructureRequest(
IntegrationId: Guid.NewGuid(),
ScopeType: BindingScopeType.Environment,
ScopeId: _environmentId,
Role: BindingRole.SettingsStore), ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.ConsulReachable);
gate.Status.Should().Be(GateStatus.Pass);
}
[Fact]
public async Task ConnectivityOk_AllRequiredGatesPass_Pass()
{
// Arrange: ECS target with agent, no Docker gates required, no bindings
var ct = TestContext.Current.CancellationToken;
var target = new Models.Target
{
Id = Guid.NewGuid(),
TenantId = _tenantId,
EnvironmentId = _environmentId,
Name = "ecs-all-pass",
DisplayName = "ECS All Pass",
Type = TargetType.EcsService,
ConnectionConfig = new EcsServiceConfig
{
Region = "us-east-1",
ClusterArn = "arn:aws:ecs:us-east-1:123:cluster/test",
ServiceName = "api"
},
AgentId = Guid.NewGuid(),
HealthStatus = HealthStatus.Healthy,
CreatedAt = _timeProvider.GetUtcNow(),
UpdatedAt = _timeProvider.GetUtcNow()
};
await _targetStore.CreateAsync(target, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.ConnectivityOk);
gate.Status.Should().Be(GateStatus.Pass);
report.IsReady.Should().BeTrue();
}
[Fact]
public async Task ConnectivityOk_FailedGate_Fail()
{
// Arrange: Docker target without agent -> agent_bound fails
var ct = TestContext.Current.CancellationToken;
var target = await CreateTarget("docker-no-agent", TargetType.DockerHost, null, HealthStatus.Unhealthy, ct);
// Act
var report = await _service.ValidateAsync(target.Id, ct);
// Assert
var gate = report.Gates.Single(g => g.GateName == TopologyGates.ConnectivityOk);
gate.Status.Should().Be(GateStatus.Fail);
gate.Message.Should().Contain("agent_bound");
report.IsReady.Should().BeFalse();
}
}


@@ -0,0 +1,183 @@
using FluentAssertions;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Time.Testing;
using Moq;
using StellaOps.ReleaseOrchestrator.Environment.Models;
using StellaOps.ReleaseOrchestrator.Environment.Region;
using StellaOps.ReleaseOrchestrator.Environment.Rename;
using StellaOps.ReleaseOrchestrator.Environment.Services;
using StellaOps.ReleaseOrchestrator.Environment.Store;
using StellaOps.ReleaseOrchestrator.Environment.Target;
using Xunit;
namespace StellaOps.ReleaseOrchestrator.Environment.Tests.Rename;
/// <summary>
/// Unit tests for TopologyRenameService rename operations.
/// </summary>
[Trait("Category", "Unit")]
public sealed class TopologyRenameServiceTests
{
private readonly InMemoryRegionStore _regionStore;
private readonly InMemoryEnvironmentStore _environmentStore;
private readonly InMemoryTargetStore _targetStore;
private readonly FakeTimeProvider _timeProvider;
private readonly RegionService _regionService;
private readonly EnvironmentService _environmentService;
private readonly TargetRegistry _targetRegistry;
private readonly TopologyRenameService _service;
private readonly Guid _tenantId = Guid.NewGuid();
private readonly Guid _userId = Guid.NewGuid();
public TopologyRenameServiceTests()
{
_regionStore = new InMemoryRegionStore(() => _tenantId);
_environmentStore = new InMemoryEnvironmentStore(() => _tenantId);
_targetStore = new InMemoryTargetStore(() => _tenantId);
_timeProvider = new FakeTimeProvider(new DateTimeOffset(2026, 1, 11, 12, 0, 0, TimeSpan.Zero));
var regionLogger = new Mock<ILogger<RegionService>>();
_regionService = new RegionService(
_regionStore,
_timeProvider,
regionLogger.Object,
() => _tenantId,
() => _userId);
var envLogger = new Mock<ILogger<EnvironmentService>>();
_environmentService = new EnvironmentService(
_environmentStore,
_timeProvider,
envLogger.Object,
() => _tenantId,
() => _userId);
var connectionTester = new NoOpTargetConnectionTester();
var targetLogger = new Mock<ILogger<TargetRegistry>>();
_targetRegistry = new TargetRegistry(
_targetStore,
_environmentStore,
connectionTester,
_timeProvider,
targetLogger.Object,
() => _tenantId);
var renameLogger = new Mock<ILogger<TopologyRenameService>>();
_service = new TopologyRenameService(
_regionService,
_environmentService,
_targetRegistry,
_regionStore,
renameLogger.Object);
}
[Fact]
public async Task RenameRegion_ValidName_Succeeds()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var region = await _regionService.CreateAsync(
new CreateRegionRequest("us-east", "US East", null), ct);
var request = new RenameRequest(
RenameEntityType.Region, region.Id, "eu-west", "EU West");
// Act
var result = await _service.RenameAsync(request, ct);
// Assert
result.Success.Should().BeTrue();
result.OldName.Should().Be("us-east");
result.NewName.Should().Be("eu-west");
}
[Fact]
public async Task RenameRegion_NameConflict_ReturnsConflict()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var region1 = await _regionService.CreateAsync(
new CreateRegionRequest("us-east", "US East", null, SortOrder: 0), ct);
var region2 = await _regionService.CreateAsync(
new CreateRegionRequest("eu-west", "EU West", null, SortOrder: 1), ct);
// Try to rename region1 to region2's name
var request = new RenameRequest(
RenameEntityType.Region, region1.Id, "eu-west", "EU West Renamed");
// Act
var result = await _service.RenameAsync(request, ct);
// Assert
result.Success.Should().BeFalse();
result.Error.Should().Be("name_conflict");
result.ConflictingEntityId.Should().Be(region2.Id);
}
[Fact]
public async Task RenameRegion_InvalidNameFormat_ReturnsError()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var region = await _regionService.CreateAsync(
new CreateRegionRequest("us-east", "US East", null), ct);
var request = new RenameRequest(
RenameEntityType.Region, region.Id, "Invalid Name!", "Invalid");
// Act
var result = await _service.RenameAsync(request, ct);
// Assert
result.Success.Should().BeFalse();
result.Error.Should().Contain("lowercase alphanumeric");
}
[Fact]
public async Task RenameEnvironment_ValidName_Succeeds()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
var env = await _environmentService.CreateAsync(
new CreateEnvironmentRequest("dev", "Development", null, 0, false, 0, false, null, 300), ct);
var request = new RenameRequest(
RenameEntityType.Environment, env.Id, "development", "Development Full");
// Act
var result = await _service.RenameAsync(request, ct);
// Assert
result.Success.Should().BeTrue();
result.OldName.Should().Be("dev");
result.NewName.Should().Be("development");
}
[Fact]
public async Task RenameTarget_ValidName_Succeeds()
{
// Arrange
var ct = TestContext.Current.CancellationToken;
// Create environment first
var env = await _environmentService.CreateAsync(
new CreateEnvironmentRequest("dev", "Development", null, 0, false, 0, false, null, 300), ct);
var target = await _targetRegistry.RegisterAsync(
new RegisterTargetRequest(
env.Id, "docker-host-1", "Docker Host 1",
TargetType.DockerHost,
new DockerHostConfig { Host = "docker.example.com" }), ct);
var request = new RenameRequest(
RenameEntityType.Target, target.Id, "docker-primary", "Docker Primary");
// Act
var result = await _service.RenameAsync(request, ct);
// Assert
result.Success.Should().BeTrue();
result.OldName.Should().Be("docker-host-1");
result.NewName.Should().Be("docker-primary");
}
}
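The `RenameRegion_InvalidNameFormat_ReturnsError` case above asserts only that the error message mentions "lowercase alphanumeric". A minimal sketch of that rule, assuming hyphen-separated lowercase segments (the actual server-side pattern is not shown in these tests):

```typescript
// Assumed name rule: lowercase alphanumeric segments separated by single
// hyphens, as in "us-east" or "docker-primary". Hypothetical mirror of the
// TopologyRenameService validation; the real regex may differ.
function isValidTopologyName(name: string): boolean {
  return /^[a-z0-9]+(-[a-z0-9]+)*$/.test(name);
}
```

Under this assumption, `"eu-west"` and `"docker-primary"` pass while `"Invalid Name!"` is rejected, matching the rename test expectations.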


@@ -0,0 +1,116 @@
using FluentAssertions;
using StellaOps.ReleaseOrchestrator.Environment.Target;
using Xunit;
namespace StellaOps.ReleaseOrchestrator.Environment.Tests.Target;
/// <summary>
/// Unit tests for DockerVersionPolicy version parsing and enforcement.
/// </summary>
[Trait("Category", "Unit")]
public sealed class DockerVersionPolicyTests
{
[Fact]
public void Check_Version20_10_24_SupportedNotRecommended()
{
// Act
var result = DockerVersionPolicy.Check("20.10.24");
// Assert
result.IsSupported.Should().BeTrue();
result.IsRecommended.Should().BeFalse();
result.ParsedVersion.Should().NotBeNull();
result.ParsedVersion!.Major.Should().Be(20);
result.ParsedVersion.Minor.Should().Be(10);
result.ParsedVersion.Build.Should().Be(24);
}
[Fact]
public void Check_Version24_0_7_1_SupportedAndRecommended()
{
// Act
var result = DockerVersionPolicy.Check("24.0.7-1");
// Assert
result.IsSupported.Should().BeTrue();
result.IsRecommended.Should().BeTrue();
result.ParsedVersion.Should().NotBeNull();
result.ParsedVersion!.Major.Should().Be(24);
result.ParsedVersion.Minor.Should().Be(0);
result.ParsedVersion.Build.Should().Be(7);
}
[Fact]
public void Check_Version26_1_0_Beta_SupportedAndRecommended()
{
// Act
var result = DockerVersionPolicy.Check("26.1.0-beta");
// Assert
result.IsSupported.Should().BeTrue();
result.IsRecommended.Should().BeTrue();
result.ParsedVersion.Should().NotBeNull();
result.ParsedVersion!.Major.Should().Be(26);
}
[Fact]
public void Check_Version19_03_12_NotSupported()
{
// Act
var result = DockerVersionPolicy.Check("19.03.12");
// Assert
result.IsSupported.Should().BeFalse();
result.IsRecommended.Should().BeFalse();
result.ParsedVersion.Should().NotBeNull();
result.Message.Should().Contain("below minimum supported version");
}
[Fact]
public void Check_LeadingV_Stripped()
{
// Act
var result = DockerVersionPolicy.Check("v24.0.0");
// Assert
result.IsSupported.Should().BeTrue();
result.IsRecommended.Should().BeTrue();
result.ParsedVersion.Should().NotBeNull();
result.ParsedVersion!.Major.Should().Be(24);
}
[Theory]
[InlineData(null)]
[InlineData("")]
[InlineData(" ")]
public void Check_NullOrEmpty_NotSupported(string? version)
{
// Act
var result = DockerVersionPolicy.Check(version);
// Assert
result.IsSupported.Should().BeFalse();
result.IsRecommended.Should().BeFalse();
result.ParsedVersion.Should().BeNull();
}
[Fact]
public void Check_Invalid_NotSupported()
{
// Act
var result = DockerVersionPolicy.Check("invalid");
// Assert
result.IsSupported.Should().BeFalse();
result.IsRecommended.Should().BeFalse();
result.ParsedVersion.Should().BeNull();
result.Message.Should().Contain("Unable to parse");
}
}
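The version cases above pin down the parsing behaviour (leading `v` stripped, `-1`/`-beta` suffixes ignored, dotted triplet) but not the exact thresholds. A sketch of the policy, assuming a minimum supported version of 20.10 and a recommended cut-off at major 23 — both inferred from the cases, not confirmed by the source:

```typescript
interface VersionCheck {
  supported: boolean;
  recommended: boolean;
  parsed: { major: number; minor: number; build: number } | null;
}

// Illustrative re-implementation of DockerVersionPolicy.Check; thresholds
// (>= 20.10 supported, >= 23 recommended) are assumptions.
function checkDockerVersion(raw: string | null | undefined): VersionCheck {
  const none: VersionCheck = { supported: false, recommended: false, parsed: null };
  if (!raw || !raw.trim()) return none;
  // Strip a leading "v" and any package-revision suffix such as "-1" or "-beta".
  const cleaned = raw.trim().replace(/^v/, '').split('-')[0];
  const m = cleaned.match(/^(\d+)\.(\d+)(?:\.(\d+))?$/);
  if (!m) return none;
  const parsed = { major: +m[1], minor: +m[2], build: +(m[3] ?? 0) };
  const supported = parsed.major > 20 || (parsed.major === 20 && parsed.minor >= 10);
  const recommended = parsed.major >= 23; // assumed cut-off
  return { supported, recommended, parsed };
}
```

With these assumptions, `20.10.24` is supported but not recommended, `24.0.7-1` and `v24.0.0` are both, and `19.03.12`, empty input, and `invalid` fail, matching the suite.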

@@ -425,7 +425,9 @@ static void ConfigureContainerFrontdoorBindings(WebApplicationBuilder builder)
builder.WebHost.ConfigureKestrel((context, kestrel) =>
{
var defaultCert = LoadDefaultCertificate(context.Configuration);
var boundPorts = new HashSet<int>();
// Bind every explicitly configured URL from ASPNETCORE_URLS / port env vars.
foreach (var uri in currentUrls)
{
var address = ResolveListenAddress(uri.Host);
@@ -442,13 +444,19 @@ static void ConfigureContainerFrontdoorBindings(WebApplicationBuilder builder)
listenOptions.UseHttps();
}
});
continue;
}
else
{
kestrel.Listen(address, uri.Port);
}
kestrel.Listen(address, uri.Port);
boundPorts.Add(uri.Port);
}
if (defaultCert is not null && IsPortAvailable(443, IPAddress.Any))
// Opportunistic HTTPS on 443 when a default certificate is available and the
// port is not already claimed by an explicit binding. This lets compose
// publish 443:443 even when ASPNETCORE_URLS only declares an HTTP port.
if (defaultCert is not null && !boundPorts.Contains(443) && IsPortAvailable(443, IPAddress.Any))
{
kestrel.ListenAnyIP(443, listenOptions => listenOptions.UseHttps(defaultCert));
}
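The binding decision in the hunk above can be modeled as a small pure function (illustrative TypeScript, not the real Kestrel API): every explicitly configured port is bound, and HTTPS on 443 is added only when a default certificate exists, 443 was not already claimed by an explicit binding, and the port is free.

```typescript
// Model of the opportunistic-443 rule: explicit ports always bind; 443 is
// appended only under all three conditions. Function name and shape are
// illustrative, not part of the codebase.
function planPorts(
  explicitPorts: number[],
  hasDefaultCert: boolean,
  isPortFree: (p: number) => boolean,
): number[] {
  const bound = new Set<number>(explicitPorts);
  if (hasDefaultCert && !bound.has(443) && isPortFree(443)) {
    bound.add(443);
  }
  return [...bound].sort((a, b) => a - b);
}
```

This is what lets compose publish `443:443` when `ASPNETCORE_URLS` only declares the HTTP port, without double-binding 443 when it is listed explicitly.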

@@ -44,4 +44,155 @@ public sealed class ContainerFrontdoorBindingResolverTests
urls.Select(static uri => uri.AbsoluteUri).Should().Equal("http://localhost:9090/");
}
/// <summary>
/// Compose gateway scenario: ASPNETCORE_URLS declares both the HTTP listener
/// (port 8080, mapped to host port 80) and the HTTPS listener (port 443,
/// mapped to host port 443). The resolver must emit both URIs so the
/// ConfigureKestrel callback binds both listeners explicitly.
/// </summary>
[Fact]
public void ResolveConfiguredUrls_ComposeGatewayScenario_HttpAndHttpsFromExplicitUrls()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: "http://0.0.0.0:8080;https://0.0.0.0:443",
explicitHttpPorts: null,
explicitHttpsPorts: null);
urls.Should().HaveCount(2);
urls.Should().Contain(u => u.Scheme == "http" && u.Port == 8080);
urls.Should().Contain(u => u.Scheme == "https" && u.Port == 443);
}
/// <summary>
/// When ASPNETCORE_URLS contains only HTTP, the resolver should still return
/// that single URL so the caller can decide whether to add opportunistic HTTPS.
/// </summary>
[Fact]
public void ResolveConfiguredUrls_HttpOnlyUrl_ReturnsSingleHttpEndpoint()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: "http://0.0.0.0:8080",
explicitHttpPorts: null,
explicitHttpsPorts: null);
urls.Should().ContainSingle()
.Which.Should().Match<Uri>(u => u.Scheme == "http" && u.Port == 8080);
}
[Fact]
public void ResolveConfiguredUrls_AllInputsNull_ReturnsEmptyList()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: null,
explicitHttpPorts: null,
explicitHttpsPorts: null);
urls.Should().BeEmpty();
}
[Fact]
public void ResolveConfiguredUrls_AllInputsWhitespace_ReturnsEmptyList()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: " ",
explicitUrls: " ",
explicitHttpPorts: " ",
explicitHttpsPorts: " ");
urls.Should().BeEmpty();
}
[Fact]
public void ResolveConfiguredUrls_DeduplicatesDuplicateUrls()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: "http://0.0.0.0:8080;http://0.0.0.0:8080",
explicitHttpPorts: null,
explicitHttpsPorts: null);
urls.Should().ContainSingle();
}
[Fact]
public void ResolveConfiguredUrls_HttpsOnlyFromPortEnvVar_ReturnsHttpsEndpoint()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: null,
explicitHttpPorts: null,
explicitHttpsPorts: "443");
urls.Should().ContainSingle()
.Which.Should().Match<Uri>(u => u.Scheme == "https" && u.Port == 443);
}
[Fact]
public void ResolveConfiguredUrls_MixedPortEnvVars_ReturnsBothSchemes()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: null,
explicitHttpPorts: "8080",
explicitHttpsPorts: "443");
urls.Should().HaveCount(2);
urls.Should().Contain(u => u.Scheme == "http" && u.Port == 8080);
urls.Should().Contain(u => u.Scheme == "https" && u.Port == 443);
}
/// <summary>
/// Comma-separated port values must be split correctly.
/// </summary>
[Fact]
public void ResolveConfiguredUrls_CommaSeparatedPorts_SplitsCorrectly()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: null,
explicitHttpPorts: "8080,9090",
explicitHttpsPorts: null);
urls.Should().HaveCount(2);
urls.Should().Contain(u => u.Port == 8080);
urls.Should().Contain(u => u.Port == 9090);
}
/// <summary>
/// Invalid or malformed URL entries should be silently skipped.
/// </summary>
[Fact]
public void ResolveConfiguredUrls_InvalidUrlEntry_SkippedGracefully()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: "http://0.0.0.0:8080;not-a-url;https://0.0.0.0:443",
explicitHttpPorts: null,
explicitHttpsPorts: null);
urls.Should().HaveCount(2);
urls.Should().Contain(u => u.Scheme == "http" && u.Port == 8080);
urls.Should().Contain(u => u.Scheme == "https" && u.Port == 443);
}
/// <summary>
/// Semicolons and commas are both valid delimiters for ASPNETCORE_URLS.
/// </summary>
[Fact]
public void ResolveConfiguredUrls_CommaDelimitedUrls_ParsedCorrectly()
{
var urls = ContainerFrontdoorBindingResolver.ResolveConfiguredUrls(
serverUrls: null,
explicitUrls: "http://0.0.0.0:8080,https://0.0.0.0:443",
explicitHttpPorts: null,
explicitHttpsPorts: null);
urls.Should().HaveCount(2);
urls.Should().Contain(u => u.Scheme == "http" && u.Port == 8080);
urls.Should().Contain(u => u.Scheme == "https" && u.Port == 443);
}
}
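The resolver tests above pin down the parsing rules: `;` and `,` are both delimiters, malformed entries are skipped, duplicates are removed, and the port env vars synthesize URLs per scheme. A sketch of those rules (the function name and the `0.0.0.0` default host are assumptions; the behaviour comes from the tests):

```typescript
// Illustrative re-statement of ResolveConfiguredUrls in TypeScript.
function resolveConfiguredUrls(
  explicitUrls: string | null,
  httpPorts: string | null,
  httpsPorts: string | null,
): string[] {
  const out: string[] = [];
  const seen = new Set<string>();
  const push = (candidate: string) => {
    try {
      const u = new URL(candidate);
      if (!seen.has(u.href)) { seen.add(u.href); out.push(u.href); }
    } catch { /* malformed entries are skipped, matching the tests */ }
  };
  // ASPNETCORE_URLS accepts both ";" and "," as delimiters.
  for (const part of (explicitUrls ?? '').split(/[;,]/)) {
    if (part.trim()) push(part.trim());
  }
  // Port env vars map to fixed schemes; comma-separated lists are split.
  const ports = (value: string | null, scheme: string) => {
    for (const p of (value ?? '').split(',')) {
      if (p.trim()) push(`${scheme}://0.0.0.0:${p.trim()}`);
    }
  };
  ports(httpPorts, 'http');
  ports(httpsPorts, 'https');
  return out;
}
```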

@@ -51,23 +51,23 @@ const MOCK_ADVISORY_SOURCES = {
function setupSourceApiMocks(page: import('@playwright/test').Page) {
// Source management API mocks
page.route('**/api/v1/sources/catalog', (route) => {
page.route('**/api/v1/advisory-sources/catalog', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_CATALOG) });
});
page.route('**/api/v1/sources/status', (route) => {
page.route('**/api/v1/advisory-sources/status', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_STATUS) });
});
page.route('**/api/v1/sources/*/enable', (route) => {
page.route('**/api/v1/advisory-sources/*/enable', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: '{}' });
});
page.route('**/api/v1/sources/*/disable', (route) => {
page.route('**/api/v1/advisory-sources/*/disable', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: '{}' });
});
page.route('**/api/v1/sources/check', (route) => {
page.route('**/api/v1/advisory-sources/check', (route) => {
if (route.request().method() === 'POST') {
route.fulfill({
status: 200,
@@ -79,7 +79,7 @@ function setupSourceApiMocks(page: import('@playwright/test').Page) {
}
});
page.route('**/api/v1/sources/*/check', (route) => {
page.route('**/api/v1/advisory-sources/*/check', (route) => {
if (route.request().method() === 'POST') {
const url = route.request().url();
const sourceId = url.split('/advisory-sources/')[1]?.split('/check')[0] ?? 'unknown';
@@ -101,7 +101,7 @@ function setupSourceApiMocks(page: import('@playwright/test').Page) {
}
});
page.route('**/api/v1/sources/*/check-result', (route) => {
page.route('**/api/v1/advisory-sources/*/check-result', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -117,7 +117,7 @@ function setupSourceApiMocks(page: import('@playwright/test').Page) {
});
});
page.route('**/api/v1/sources/batch-enable', (route) => {
page.route('**/api/v1/advisory-sources/batch-enable', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -125,7 +125,7 @@ function setupSourceApiMocks(page: import('@playwright/test').Page) {
});
});
page.route('**/api/v1/sources/batch-disable', (route) => {
page.route('**/api/v1/advisory-sources/batch-disable', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',

@@ -105,7 +105,7 @@ const MOCK_DOMAIN_LIST = {
rateLimits: { indexRequestsPerHour: 60, downloadRequestsPerHour: 120 },
requireAuthentication: false,
signing: { enabled: true, algorithm: 'ES256', keyId: 'key-01' },
domainUrl: '/concelier/exports/security-advisories',
domainUrl: '/concelier/exports/mirror/security-advisories',
createdAt: new Date().toISOString(),
status: 'active',
},
@@ -150,7 +150,7 @@ function setupErrorCollector(page: import('@playwright/test').Page) {
/** Set up mocks for the mirror client setup wizard page. */
function setupWizardApiMocks(page: import('@playwright/test').Page) {
// Mirror test endpoint (connection check)
page.route('**/api/v1/mirror/test', (route) => {
page.route('**/api/v1/advisory-sources/mirror/test', (route) => {
if (route.request().method() === 'POST') {
route.fulfill({
status: 200,
@@ -163,7 +163,7 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
});
// Consumer discovery endpoint
page.route('**/api/v1/mirror/consumer/discover', (route) => {
page.route('**/api/v1/advisory-sources/mirror/consumer/discover', (route) => {
if (route.request().method() === 'POST') {
route.fulfill({
status: 200,
@@ -176,7 +176,7 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
});
// Consumer signature verification endpoint
page.route('**/api/v1/mirror/consumer/verify-signature', (route) => {
page.route('**/api/v1/advisory-sources/mirror/consumer/verify-signature', (route) => {
if (route.request().method() === 'POST') {
route.fulfill({
status: 200,
@@ -189,7 +189,7 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
});
// Consumer config GET/PUT
page.route('**/api/v1/mirror/consumer', (route) => {
page.route('**/api/v1/advisory-sources/mirror/consumer', (route) => {
const method = route.request().method();
if (method === 'GET') {
route.fulfill({
@@ -209,7 +209,7 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
});
// Mirror config
page.route('**/api/v1/mirror/config', (route) => {
page.route('**/api/v1/advisory-sources/mirror/config', (route) => {
const method = route.request().method();
if (method === 'GET') {
route.fulfill({
@@ -229,7 +229,7 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
});
// Mirror health summary
page.route('**/api/v1/mirror/health', (route) => {
page.route('**/api/v1/advisory-sources/mirror/health', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -238,7 +238,7 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
});
// Mirror domains
page.route('**/api/v1/mirror/domains', (route) => {
page.route('**/api/v1/advisory-sources/mirror/domains', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -247,7 +247,7 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
});
// Mirror import endpoint
page.route('**/api/v1/mirror/import', (route) => {
page.route('**/api/v1/advisory-sources/mirror/import', (route) => {
if (route.request().method() === 'POST') {
route.fulfill({
status: 200,
@@ -260,7 +260,7 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
});
// Mirror import status
page.route('**/api/v1/mirror/import/status', (route) => {
page.route('**/api/v1/advisory-sources/mirror/import/status', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -288,15 +288,15 @@ function setupWizardApiMocks(page: import('@playwright/test').Page) {
/** Set up mocks for catalog and dashboard pages that show mirror integration. */
function setupCatalogDashboardMocks(page: import('@playwright/test').Page) {
page.route('**/api/v1/sources/catalog', (route) => {
page.route('**/api/v1/advisory-sources/catalog', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_SOURCE_CATALOG) });
});
page.route('**/api/v1/sources/status', (route) => {
page.route('**/api/v1/advisory-sources/status', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_SOURCE_STATUS) });
});
page.route('**/api/v1/sources/check', (route) => {
page.route('**/api/v1/advisory-sources/check', (route) => {
if (route.request().method() === 'POST') {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify({ totalChecked: 3, healthyCount: 2, failedCount: 0 }) });
} else {
@@ -304,7 +304,7 @@ function setupCatalogDashboardMocks(page: import('@playwright/test').Page) {
}
});
page.route('**/api/v1/sources/*/check', (route) => {
page.route('**/api/v1/advisory-sources/*/check', (route) => {
if (route.request().method() === 'POST') {
route.fulfill({
status: 200,
@@ -316,7 +316,7 @@ function setupCatalogDashboardMocks(page: import('@playwright/test').Page) {
}
});
page.route('**/api/v1/sources/*/check-result', (route) => {
page.route('**/api/v1/advisory-sources/*/check-result', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -324,11 +324,11 @@ function setupCatalogDashboardMocks(page: import('@playwright/test').Page) {
});
});
page.route('**/api/v1/sources/batch-enable', (route) => {
page.route('**/api/v1/advisory-sources/batch-enable', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify({ results: [] }) });
});
page.route('**/api/v1/sources/batch-disable', (route) => {
page.route('**/api/v1/advisory-sources/batch-disable', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify({ results: [] }) });
});
@@ -445,7 +445,7 @@ test.describe('Mirror Client Setup Wizard', () => {
const ngErrors = setupErrorCollector(page);
// Override the mirror test endpoint to return failure
await page.route('**/api/v1/mirror/test', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/test', (route) => {
if (route.request().method() === 'POST') {
route.fulfill({
status: 200,
@@ -458,22 +458,22 @@ test.describe('Mirror Client Setup Wizard', () => {
});
// Set up remaining wizard mocks (excluding mirror/test which is overridden above)
await page.route('**/api/v1/mirror/consumer/discover', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/consumer/discover', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_DISCOVERY_RESPONSE) });
});
await page.route('**/api/v1/mirror/consumer/verify-signature', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/consumer/verify-signature', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_SIGNATURE_DETECTION) });
});
await page.route('**/api/v1/mirror/consumer', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/consumer', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_CONSUMER_CONFIG) });
});
await page.route('**/api/v1/mirror/config', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/config', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_MIRROR_CONFIG_DIRECT_MODE) });
});
await page.route('**/api/v1/mirror/health', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/health', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_MIRROR_HEALTH) });
});
await page.route('**/api/v1/mirror/domains', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/domains', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify(MOCK_DOMAIN_LIST) });
});
await page.route('**/api/v2/security/**', (route) => {
@@ -766,7 +766,7 @@ test.describe('Mirror Dashboard - Consumer Panel', () => {
const ngErrors = setupErrorCollector(page);
// Mock mirror config as Mirror mode with consumer URL
await page.route('**/api/v1/mirror/config', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/config', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -774,7 +774,7 @@ test.describe('Mirror Dashboard - Consumer Panel', () => {
});
});
await page.route('**/api/v1/mirror/health', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/health', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -782,7 +782,7 @@ test.describe('Mirror Dashboard - Consumer Panel', () => {
});
});
await page.route('**/api/v1/mirror/domains', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/domains', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -840,7 +840,7 @@ test.describe('Advisory Source Catalog - Mirror Integration', () => {
await setupCatalogDashboardMocks(page);
// Mock mirror config in Direct mode
await page.route('**/api/v1/mirror/config', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/config', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -848,7 +848,7 @@ test.describe('Advisory Source Catalog - Mirror Integration', () => {
});
});
await page.route('**/api/v1/mirror/health', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/health', (route) => {
route.fulfill({
status: 200,
contentType: 'application/json',
@@ -856,7 +856,7 @@ test.describe('Advisory Source Catalog - Mirror Integration', () => {
});
});
await page.route('**/api/v1/mirror/domains', (route) => {
await page.route('**/api/v1/advisory-sources/mirror/domains', (route) => {
route.fulfill({ status: 200, contentType: 'application/json', body: JSON.stringify({ domains: [], totalCount: 0 }) });
});
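The renames above move every mock from `/api/v1/sources` and `/api/v1/mirror` under the `/api/v1/advisory-sources` prefix, one route at a time. A table-driven helper would keep the prefix in one place; the helper name and shape below are illustrative, not part of the codebase:

```typescript
const ADVISORY_SOURCES_PREFIX = '/api/v1/advisory-sources';

// Build the glob passed to page.route(): any origin, fixed prefix, given sub-path.
function advisoryRoute(subPath: string): string {
  return `**${ADVISORY_SOURCES_PREFIX}/${subPath.replace(/^\//, '')}`;
}
```

A future prefix change (as happened here) would then touch a single constant instead of every `page.route` call.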

@@ -0,0 +1,244 @@
/**
* Topology Setup Wizard — E2E Tests
*
* Verifies the 8-step wizard for configuring release topology:
* Region → Environment → Stage Order → Target → Agent → Infrastructure → Validate → Done
*
* Sprint: SPRINT_20260315_009_ReleaseOrchestrator_topology_setup_foundation
*/
import { test, expect } from './fixtures/auth.fixture';
import { navigateAndWait } from './helpers/nav.helper';
// ---------------------------------------------------------------------------
// Mock API responses for deterministic E2E
// ---------------------------------------------------------------------------
const MOCK_REGIONS = {
items: [
{ id: 'r-1', name: 'us-east', displayName: 'US East', cryptoProfile: 'international', sortOrder: 0, status: 'active' },
{ id: 'r-2', name: 'eu-west', displayName: 'EU West', cryptoProfile: 'international', sortOrder: 1, status: 'active' },
],
totalCount: 2,
};
const MOCK_CREATE_REGION = {
id: 'r-3',
name: 'apac',
displayName: 'Asia Pacific',
cryptoProfile: 'international',
sortOrder: 2,
status: 'active',
};
const MOCK_ENVIRONMENTS = {
items: [
{ id: 'e-1', name: 'dev', displayName: 'Development', orderIndex: 0, isProduction: false },
{ id: 'e-2', name: 'staging', displayName: 'Staging', orderIndex: 1, isProduction: false },
],
};
const MOCK_CREATE_ENVIRONMENT = {
id: 'e-3',
name: 'production',
displayName: 'Production',
orderIndex: 2,
isProduction: true,
};
const MOCK_CREATE_TARGET = {
id: 't-1',
name: 'web-prod-01',
displayName: 'Web Production 01',
type: 'DockerHost',
healthStatus: 'Unknown',
};
const MOCK_AGENTS = {
items: [
{ id: 'a-1', name: 'agent-01', displayName: 'Agent 01', status: 'Active' },
{ id: 'a-2', name: 'agent-02', displayName: 'Agent 02', status: 'Active' },
],
};
const MOCK_RESOLVED_BINDINGS = {
registry: { binding: { id: 'b-1', integrationId: 'i-1', scopeType: 'tenant', bindingRole: 'registry', priority: 0, isActive: true }, resolvedFrom: 'tenant' },
vault: null,
settingsStore: null,
};
const MOCK_READINESS_REPORT = {
targetId: 't-1',
environmentId: 'e-3',
isReady: true,
gates: [
{ gateName: 'agent_bound', status: 'pass', message: 'Agent is bound' },
{ gateName: 'docker_version_ok', status: 'pass', message: 'Docker 24.0.7 meets recommended version.' },
{ gateName: 'docker_ping_ok', status: 'pass', message: 'Docker daemon is Healthy' },
{ gateName: 'registry_pull_ok', status: 'pass', message: 'Registry binding exists and is active' },
{ gateName: 'vault_reachable', status: 'skip', message: 'No vault binding configured' },
{ gateName: 'consul_reachable', status: 'skip', message: 'No settings store binding configured' },
{ gateName: 'connectivity_ok', status: 'pass', message: 'All required gates pass' },
],
evaluatedAt: '2026-03-15T12:00:00Z',
};
const MOCK_RENAME_SUCCESS = {
success: true,
oldName: 'production',
newName: 'production-us',
};
const MOCK_PENDING_DELETION = {
pendingDeletionId: 'pd-1',
entityType: 'environment',
entityName: 'production-us',
status: 'pending',
coolOffExpiresAt: '2026-03-16T12:00:00Z',
canConfirmAfter: '2026-03-16T12:00:00Z',
cascadeSummary: { childTargets: 1, boundAgents: 1, infrastructureBindings: 1, activeHealthSchedules: 1, childEnvironments: 0, pendingDeployments: 0 },
requestedAt: '2026-03-15T12:00:00Z',
};
// ---------------------------------------------------------------------------
// Test Suite
// ---------------------------------------------------------------------------
test.describe('Topology Setup Wizard', () => {
test.beforeEach(async ({ page }) => {
// Mock all topology API endpoints
await page.route('**/api/v1/regions', async (route) => {
if (route.request().method() === 'GET') {
await route.fulfill({ json: MOCK_REGIONS });
} else if (route.request().method() === 'POST') {
await route.fulfill({ status: 201, json: MOCK_CREATE_REGION });
}
});
await page.route('**/api/v1/environments', async (route) => {
if (route.request().method() === 'GET') {
await route.fulfill({ json: MOCK_ENVIRONMENTS });
} else if (route.request().method() === 'POST') {
await route.fulfill({ status: 201, json: MOCK_CREATE_ENVIRONMENT });
}
});
await page.route('**/api/v1/targets', async (route) => {
if (route.request().method() === 'POST') {
await route.fulfill({ status: 201, json: MOCK_CREATE_TARGET });
}
});
await page.route('**/api/v1/agents', async (route) => {
await route.fulfill({ json: MOCK_AGENTS });
});
await page.route('**/api/v1/targets/*/assign-agent', async (route) => {
await route.fulfill({ json: { success: true } });
});
await page.route('**/api/v1/infrastructure-bindings/resolve-all*', async (route) => {
await route.fulfill({ json: MOCK_RESOLVED_BINDINGS });
});
await page.route('**/api/v1/targets/*/validate', async (route) => {
await route.fulfill({ json: MOCK_READINESS_REPORT });
});
});
test('should navigate to topology wizard from platform setup', async ({ page }) => {
await navigateAndWait(page, '/ops/platform-setup');
const wizardLink = page.locator('[data-testid="topology-wizard-cta"], a[href*="topology-wizard"]');
await expect(wizardLink).toBeVisible();
await wizardLink.click();
await expect(page).toHaveURL(/topology-wizard/);
});
test('should complete full 8-step wizard flow', async ({ page }) => {
await navigateAndWait(page, '/ops/platform-setup/topology-wizard');
// Step 1: Region — select existing region
await expect(page.locator('text=Region')).toBeVisible();
const regionRadio = page.locator('input[type="radio"]').first();
await regionRadio.click();
await page.locator('button:has-text("Next")').click();
// Step 2: Environment — fill create form
await expect(page.locator('text=Environment')).toBeVisible();
await page.fill('input[name="envName"], input[placeholder*="name"]', 'production');
await page.fill('input[name="envDisplayName"], input[placeholder*="display"]', 'Production');
await page.locator('button:has-text("Next")').click();
// Step 3: Stage Order — view and continue
await expect(page.locator('text=Stage Order')).toBeVisible();
await page.locator('button:has-text("Next")').click();
// Step 4: Target — fill create form
await expect(page.locator('text=Target')).toBeVisible();
await page.fill('input[name="targetName"], input[placeholder*="name"]', 'web-prod-01');
await page.fill('input[name="targetDisplayName"], input[placeholder*="display"]', 'Web Production 01');
await page.locator('button:has-text("Next")').click();
// Step 5: Agent — select existing agent
await expect(page.locator('text=Agent')).toBeVisible();
const agentRadio = page.locator('input[type="radio"]').first();
await agentRadio.click();
await page.locator('button:has-text("Next")').click();
// Step 6: Infrastructure — view resolved bindings
await expect(page.locator('text=Infrastructure')).toBeVisible();
await expect(page.locator('text=tenant')).toBeVisible(); // inherited from tenant
await page.locator('button:has-text("Next")').click();
// Step 7: Validate — verify all gates
await expect(page.locator('text=Validate')).toBeVisible();
await expect(page.locator('text=pass').first()).toBeVisible();
await page.locator('button:has-text("Next")').click();
// Step 8: Done
await expect(page.locator('text=Done')).toBeVisible();
});
test('should rename an environment', async ({ page }) => {
await page.route('**/api/v1/environments/*/name', async (route) => {
await route.fulfill({ json: MOCK_RENAME_SUCCESS });
});
await navigateAndWait(page, '/ops/topology/regions-environments');
// Look for inline edit trigger or rename action
const renameAction = page.locator('[data-testid="rename-action"], button:has-text("Rename")').first();
if (await renameAction.isVisible()) {
await renameAction.click();
await page.fill('input[data-testid="rename-input"]', 'production-us');
await page.keyboard.press('Enter');
await expect(page.locator('text=production-us')).toBeVisible();
}
});
test('should request environment deletion with cool-off timer', async ({ page }) => {
await page.route('**/api/v1/environments/*/request-delete', async (route) => {
await route.fulfill({ status: 202, json: MOCK_PENDING_DELETION });
});
await page.route('**/api/v1/pending-deletions/*/cancel', async (route) => {
await route.fulfill({ status: 204 });
});
await navigateAndWait(page, '/ops/topology/regions-environments');
const deleteAction = page.locator('[data-testid="delete-action"], button:has-text("Delete")').first();
if (await deleteAction.isVisible()) {
await deleteAction.click();
// Verify cool-off information is shown; the wording varies, so match loosely.
// Note: a comma-separated list of `text=` selectors does not parse as
// alternatives in Playwright, so a single case-insensitive regex is used.
await expect(page.locator('text=/cool[ -]?off/i').first()).toBeVisible({ timeout: 3000 }).catch(() => {
// Cool-off copy may be absent or rendered differently; the cancel path below still runs
});
// Cancel the deletion
const cancelBtn = page.locator('button:has-text("Cancel")');
if (await cancelBtn.isVisible()) {
await cancelBtn.click();
}
}
});
});

File diff suppressed because one or more lines are too long


@@ -0,0 +1,18 @@
{
"cookies": [],
"origins": [
{
"origin": "https://stella-ops.local",
"localStorage": [
{
"name": "stellaops.sidebar.preferences",
"value": "{\"sidebarCollapsed\":false,\"collapsedGroups\":[],\"collapsedSections\":[]}"
},
{
"name": "stellaops.theme",
"value": "system"
}
]
}
]
}
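The JSON above is a Playwright `storageState` fixture: it pre-seeds `localStorage` for `https://stella-ops.local` so every test starts with the sidebar expanded and the theme set to `system`, avoiding first-run UI churn. A minimal sketch of wiring such a file into a test project — the file path and option layout here are assumptions, not taken from this diff:

```typescript
// playwright.config.ts — sketch only; the storage-state path is an assumption.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    baseURL: 'https://stella-ops.local',
    // Seed cookies and localStorage (sidebar prefs + theme) before each test.
    storageState: 'e2e/.auth/storage-state.json',
  },
});
```

Individual projects can also set `storageState` per project (e.g. one seeded, one anonymous), which is why two near-identical fixture files in one commit are plausible.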

File diff suppressed because one or more lines are too long


@@ -0,0 +1,18 @@
{
"cookies": [],
"origins": [
{
"origin": "https://stella-ops.local",
"localStorage": [
{
"name": "stellaops.sidebar.preferences",
"value": "{\"sidebarCollapsed\":false,\"collapsedGroups\":[],\"collapsedSections\":[]}"
},
{
"name": "stellaops.theme",
"value": "system"
}
]
}
]
}

File diff suppressed because it is too large

Binary files not shown: 15 new screenshots added (91–153 KiB each).

Some files were not shown because too many files have changed in this diff.