feat: Implement Filesystem and MongoDB provenance writers for PackRun execution context

- Added `FilesystemPackRunProvenanceWriter` to write provenance manifests to the filesystem.
- Introduced `MongoPackRunArtifactReader` to read artifacts from MongoDB.
- Created `MongoPackRunProvenanceWriter` to store provenance manifests in MongoDB.
- Developed unit tests for filesystem and MongoDB provenance writers.
- Established `ITimelineEventStore` and `ITimelineIngestionService` interfaces for timeline event handling.
- Implemented `TimelineIngestionService` to validate and persist timeline events with hashing.
- Created PostgreSQL schema and migration scripts for timeline indexing.
- Added dependency injection support for timeline indexer services.
- Developed tests for timeline ingestion and schema validation.
Commit 17d45a6d30 (parent 8f54ffa203) by StellaOps Bot, 2025-11-30 15:38:14 +02:00.
276 changed files with 8618 additions and 688 deletions.


@@ -0,0 +1,53 @@
# Export Center · AGENTS Charter (Sprint 0164-0001-0001)
## Module Scope & Working Directory
- Working directory: `src/ExportCenter/**` (API/WebService, Worker, Core/Infrastructure libs, Trivy/Mirror/DevPortal adapters, RiskBundles pipeline, tests, seed/config). Cross-module edits require an explicit note in the sprint Decisions & Risks.
- Mission: produce deterministic evidence exports (JSON, Trivy DB, mirror/delta, devportal offline) with provenance, signing, and distribution (HTTP, OCI, object) that remain offline-friendly and tenant-safe.
## Roles
- **Backend engineer (.NET 10 / ASP.NET Core):** API surface, planner/run lifecycle, RBAC/tenant guards, SSE events, download endpoints.
- **Adapter engineer:** Trivy DB/Java DB, mirror delta, OCI distribution, encryption/KMS wrapping, pack-run integration.
- **Worker/Concurrency engineer:** job leasing, retries/idempotency, retention pruning, scheduler hooks.
- **Crypto/Provenance steward:** signing, DSSE/in-toto, age/AES-GCM envelope handling, provenance schemas.
- **QA automation:** WebApplicationFactory + Mongo/Mongo2Go fixtures, adapter regression harnesses, determinism checks, offline-kit verification scripts.
- **Docs steward:** keep `docs/modules/export-center/*.md`, sprint Decisions & Risks, and CLI docs aligned with behavior.
## Required Reading (read these before setting a task to DOING)
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/export-center/architecture.md`
- `docs/modules/export-center/profiles.md`
- `docs/modules/export-center/trivy-adapter.md` (for 36-001/36-002)
- `docs/modules/export-center/mirror-bundles.md` (for 37-001/37-002)
- `docs/modules/export-center/provenance-and-signing.md`
- `docs/modules/export-center/operations/kms-envelope-pattern.md` (for 37-002 encryption/KMS)
- Sprint file `docs/implplan/SPRINT_0164_0001_0001_exportcenter_iii.md`
## Working Agreements
- Enforce tenant scoping and RBAC on every API, worker fetch, and distribution path; no cross-tenant exports unless explicitly whitelisted and logged.
- Maintain determinism: sorted outputs, canonical JSON, UTC RFC3339 timestamps, stable hashing; identical selectors yield identical manifests.
- Offline-first: avoid new external calls; OCI distribution must be feature-flagged and disableable for air-gapped deployments; tests must not reach the network.
- Aggregation-Only Contract for evidence: no derived modifications; policy outputs stay separate and clearly labeled.
- Concurrency: default per-tenant run caps (4 active) and idempotent retries; cooperative cancellation must clean up partial artefacts and audit the outcome.
- Cross-module changes (Authority/Orchestrator/CLI) only when sprint explicitly covers them; log in Decisions & Risks.
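The determinism agreement above (sorted outputs, canonical JSON, UTC RFC3339 timestamps, stable hashing) can be sketched as follows. This is a minimal illustration, not the module's actual serializer; `CanonicalManifest` and its members are hypothetical names:

```csharp
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;

// Hypothetical helper sketching the working agreements: sorted keys,
// canonical (whitespace-free) JSON, UTC RFC3339 timestamps, stable hashing.
static class CanonicalManifest
{
    // SortedDictionary fixes key order, and the default serializer emits no
    // insignificant whitespace, so identical inputs yield identical bytes.
    public static string Serialize(SortedDictionary<string, string> fields)
        => JsonSerializer.Serialize(fields);

    public static string Sha256Hex(string canonicalJson)
        => Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(canonicalJson)))
            .ToLowerInvariant();

    // UTC RFC3339 with second precision; normalize to UTC before formatting.
    public static string Rfc3339(DateTimeOffset timestamp)
        => timestamp.ToUniversalTime().ToString("yyyy-MM-dd'T'HH:mm:ss'Z'");
}
```

With this shape, identical selectors yield identical manifest bytes, and therefore identical hashes, by construction.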
## Coding & Observability Standards
- Target **.NET 10** with curated `local-nugets/`; MongoDB driver ≥ 3.x; ORAS/OCI client where applicable.
- Metrics under `StellaOps.ExportCenter.*`; tag `tenant`, `profile`, `adapter`, `result`; document new counters/histograms.
- Logs structured, no PII; include `runId`, `tenant`, `profile`, `adapter`, `correlationId`; map phases (`plan`, `resolve`, `adapter`, `manifest`, `sign`, `distribute`).
- SSE/telemetry events must be deterministic and replay-safe; backpressure aware.
- Signing/encryption: default cosign-style KMS signing; age/AES-GCM envelopes with key wrapping; store references in provenance only (no raw keys).
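The metric-tagging convention above can be sketched with `System.Diagnostics.Metrics`; the meter and counter names here are illustrative assumptions, not the module's actual instruments:

```csharp
using System.Diagnostics.Metrics;

// Illustrative only: shows the StellaOps.ExportCenter.* naming and the
// tenant/profile/adapter/result tag set described above.
static class ExportCenterMetrics
{
    private static readonly Meter Meter = new("StellaOps.ExportCenter.Worker");

    private static readonly Counter<long> RunsCompleted =
        Meter.CreateCounter<long>("stellaops.exportcenter.runs.completed");

    public static void RecordRun(string tenant, string profile, string adapter, string result)
        => RunsCompleted.Add(
            1,
            new KeyValuePair<string, object?>("tenant", tenant),
            new KeyValuePair<string, object?>("profile", profile),
            new KeyValuePair<string, object?>("adapter", adapter),
            new KeyValuePair<string, object?>("result", result));
}
```

Keeping the tag set small and fixed keeps cardinality bounded per tenant/profile/adapter combination.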
## Testing Rules
- API/worker tests: `StellaOps.ExportCenter.Tests` with WebApplicationFactory + in-memory/Mongo2Go fixtures; assert tenant guards, RBAC, quotas, SSE timelines.
- Adapter regression: deterministic fixtures for Trivy DB/Java DB, mirror delta/base comparison, OCI manifest generation; no network.
- Risk bundle pipeline: tests in `StellaOps.ExportCenter.RiskBundles.Tests` (or add) covering bundle layout, DSSE signatures, checksum publication.
- Determinism checks: stable ordering/hashes in manifests, provenance, and distribution descriptors; retry paths must not duplicate outputs.
- Keep tests air-gap friendly; seeded data under `seed-data/` or inline fixtures.
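The determinism checks above reduce to "same selector, same bytes". A minimal self-contained sketch of the assertion shape, with a stand-in `BuildManifest` rather than the real planner:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Stand-in for the real planner: sort inputs before serializing so that
// input order cannot leak into the output.
static string BuildManifest(IEnumerable<string> artefacts)
    => string.Join("\n", artefacts.OrderBy(a => a, StringComparer.Ordinal));

static string Sha256(string value)
    => Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(value)));

var runA = BuildManifest(new[] { "b.json", "a.json" });
var runB = BuildManifest(new[] { "a.json", "b.json" });
if (Sha256(runA) != Sha256(runB))
    throw new InvalidOperationException("manifest is not deterministic");
```

Real tests would run the planner twice over the same seeded fixtures and compare manifest hashes the same way.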
## Delivery Discipline
- Update sprint tracker statuses (`TODO → DOING → DONE/BLOCKED`) in `docs/implplan/SPRINT_0164_0001_0001_exportcenter_iii.md` when starting/finishing/blocking work; mirror design decisions in Decisions & Risks and Execution Log.
- If a decision is needed (API contract, KMS envelope pattern), mark the task `BLOCKED`, describe the decision in sprint Decisions & Risks, and continue with other unblocked tasks.
- When contracts or schemas change (API, manifest, provenance, adapter outputs), update module docs and link them from the sprint.
- Retain deterministic retention/pruning behavior; document feature flags and defaults in `docs/modules/export-center/operations/*.md` when modified.


@@ -0,0 +1,218 @@
using Microsoft.Extensions.Logging;
using Npgsql;
using NpgsqlTypes;
using StellaOps.Orchestrator.Core.Domain;
using PackLogLevel = StellaOps.Orchestrator.Core.Domain.LogLevel;
using StellaOps.Orchestrator.Infrastructure.Repositories;
namespace StellaOps.Orchestrator.Infrastructure.Postgres;
/// <summary>
/// PostgreSQL implementation for pack run logs.
/// </summary>
public sealed class PostgresPackRunLogRepository : IPackRunLogRepository
{
private const string Columns = "log_id, pack_run_id, tenant_id, sequence, log_level, source, message, data, created_at";
private const string InsertSql = """
INSERT INTO pack_run_logs (log_id, tenant_id, pack_run_id, sequence, log_level, source, message, data, created_at)
VALUES (@log_id, @tenant_id, @pack_run_id, @sequence, @log_level, @source, @message, @data, @created_at)
""";
private const string SelectLogsSql = $"""
SELECT {Columns}
FROM pack_run_logs
WHERE tenant_id = @tenant_id AND pack_run_id = @pack_run_id AND sequence > @after
ORDER BY sequence
LIMIT @limit
""";
private const string SelectLogsByLevelSql = $"""
SELECT {Columns}
FROM pack_run_logs
WHERE tenant_id = @tenant_id AND pack_run_id = @pack_run_id AND sequence > @after AND log_level >= @min_level
ORDER BY sequence
LIMIT @limit
""";
private const string SearchLogsSql = $"""
SELECT {Columns}
FROM pack_run_logs
WHERE tenant_id = @tenant_id AND pack_run_id = @pack_run_id AND sequence > @after AND message ILIKE @pattern
ORDER BY sequence
LIMIT @limit
""";
private const string StatsSql = """
SELECT COUNT(*)::BIGINT, COALESCE(MAX(sequence), -1)
FROM pack_run_logs
WHERE tenant_id = @tenant_id AND pack_run_id = @pack_run_id
""";
private const string DeleteSql = """
DELETE FROM pack_run_logs
WHERE tenant_id = @tenant_id AND pack_run_id = @pack_run_id
""";
private readonly OrchestratorDataSource _dataSource;
private readonly ILogger<PostgresPackRunLogRepository> _logger;
public PostgresPackRunLogRepository(OrchestratorDataSource dataSource, ILogger<PostgresPackRunLogRepository> logger)
{
_dataSource = dataSource ?? throw new ArgumentNullException(nameof(dataSource));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public async Task AppendAsync(PackRunLog log, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(log.TenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(InsertSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
AddParameters(command.Parameters, log);
await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
public async Task AppendBatchAsync(IReadOnlyList<PackRunLog> logs, CancellationToken cancellationToken)
{
if (logs.Count == 0)
{
return;
}
// Batches are scoped to a single tenant; the first entry's tenant selects the connection.
var tenantId = logs[0].TenantId;
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var batch = new NpgsqlBatch(connection);
batch.Timeout = _dataSource.CommandTimeoutSeconds;
foreach (var log in logs)
{
var cmd = new NpgsqlBatchCommand(InsertSql);
AddParameters(cmd.Parameters, log);
batch.BatchCommands.Add(cmd);
}
await batch.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
public async Task<PackRunLogBatch> GetLogsAsync(string tenantId, Guid packRunId, long afterSequence, int limit, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(SelectLogsSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
command.Parameters.AddWithValue("after", afterSequence);
command.Parameters.AddWithValue("limit", limit);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
return await ReadBatchAsync(reader, tenantId, packRunId, cancellationToken).ConfigureAwait(false);
}
public async Task<(long Count, long LatestSequence)> GetLogStatsAsync(string tenantId, Guid packRunId, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(StatsSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
if (!await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
return (0, -1);
}
var count = reader.GetInt64(0);
var latest = reader.GetInt64(1);
return (count, latest);
}
public async Task<PackRunLogBatch> GetLogsByLevelAsync(string tenantId, Guid packRunId, PackLogLevel minLevel, long afterSequence, int limit, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(SelectLogsByLevelSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
command.Parameters.AddWithValue("after", afterSequence);
command.Parameters.AddWithValue("limit", limit);
command.Parameters.AddWithValue("min_level", (int)minLevel);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
return await ReadBatchAsync(reader, tenantId, packRunId, cancellationToken).ConfigureAwait(false);
}
public async Task<PackRunLogBatch> SearchLogsAsync(string tenantId, Guid packRunId, string pattern, long afterSequence, int limit, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(SearchLogsSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
command.Parameters.AddWithValue("after", afterSequence);
command.Parameters.AddWithValue("limit", limit);
command.Parameters.AddWithValue("pattern", $"%{pattern}%");
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
return await ReadBatchAsync(reader, tenantId, packRunId, cancellationToken).ConfigureAwait(false);
}
public async Task<long> DeleteLogsAsync(string tenantId, Guid packRunId, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(DeleteSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
var rows = await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
return rows;
}
private static void AddParameters(NpgsqlParameterCollection parameters, PackRunLog log)
{
parameters.AddWithValue("log_id", log.LogId);
parameters.AddWithValue("tenant_id", log.TenantId);
parameters.AddWithValue("pack_run_id", log.PackRunId);
parameters.AddWithValue("sequence", log.Sequence);
parameters.AddWithValue("log_level", (int)log.Level);
parameters.AddWithValue("source", (object?)log.Source ?? DBNull.Value);
parameters.AddWithValue("message", log.Message);
parameters.Add(new NpgsqlParameter("data", NpgsqlDbType.Jsonb) { Value = (object?)log.Data ?? DBNull.Value });
parameters.AddWithValue("created_at", log.Timestamp);
}
private static async Task<PackRunLogBatch> ReadBatchAsync(NpgsqlDataReader reader, string tenantId, Guid packRunId, CancellationToken cancellationToken)
{
var logs = new List<PackRunLog>();
long startSequence = -1;
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
var log = new PackRunLog(
LogId: reader.GetGuid(0),
TenantId: reader.GetString(2),
PackRunId: reader.GetGuid(1),
Sequence: reader.GetInt64(3),
Level: (PackLogLevel)reader.GetInt32(4),
Source: reader.IsDBNull(5) ? "unknown" : reader.GetString(5),
Message: reader.GetString(6),
Timestamp: reader.GetFieldValue<DateTimeOffset>(8),
Data: reader.IsDBNull(7) ? null : reader.GetString(7));
if (startSequence < 0)
{
startSequence = log.Sequence;
}
logs.Add(log);
}
if (startSequence < 0)
{
startSequence = 0;
}
return new PackRunLogBatch(packRunId, tenantId, startSequence, logs);
}
}
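The log reads above use keyset pagination (`sequence > @after ... ORDER BY sequence LIMIT @limit`) rather than offsets, so concurrent appends cannot shift the page window. The loop a caller would run can be sketched in-memory (a stand-in for `GetLogsAsync`, not the repository itself):

```csharp
using System.Collections.Generic;
using System.Linq;

// In-memory stand-in for GetLogsAsync: resume from the last sequence seen,
// never from a row offset.
static List<long> GetPage(IReadOnlyList<long> sequences, long after, int limit)
    => sequences.Where(s => s > after).OrderBy(s => s).Take(limit).ToList();

var all = new List<long> { 0, 1, 2, 3, 4 };
var seen = new List<long>();
long cursor = -1;
while (true)
{
    var page = GetPage(all, cursor, limit: 2);
    if (page.Count == 0) break;
    seen.AddRange(page);
    cursor = page[^1]; // resume key = last sequence in the page
}
// seen now holds 0..4 in order, fetched in pages of two
```

Starting the cursor at `-1` matches the repository's `(0, -1)` stats default for an empty run.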


@@ -0,0 +1,525 @@
using System.Globalization;
using Microsoft.Extensions.Logging;
using Npgsql;
using NpgsqlTypes;
using StellaOps.Orchestrator.Core.Domain;
using StellaOps.Orchestrator.Infrastructure.Repositories;
namespace StellaOps.Orchestrator.Infrastructure.Postgres;
/// <summary>
/// PostgreSQL implementation for pack run persistence.
/// </summary>
public sealed class PostgresPackRunRepository : IPackRunRepository
{
private const string Columns = """
pack_run_id, tenant_id, project_id, pack_id, pack_version, status, priority, attempt, max_attempts,
parameters, parameters_digest, idempotency_key, correlation_id, lease_id, task_runner_id, lease_until,
created_at, scheduled_at, leased_at, started_at, completed_at, not_before, reason, exit_code, duration_ms,
created_by, metadata
""";
private const string SelectByIdSql = $"SELECT {Columns} FROM pack_runs WHERE tenant_id = @tenant_id AND pack_run_id = @pack_run_id";
private const string SelectByIdempotencySql = $"SELECT {Columns} FROM pack_runs WHERE tenant_id = @tenant_id AND idempotency_key = @idempotency_key";
private const string InsertSql = """
INSERT INTO pack_runs (
pack_run_id, tenant_id, project_id, pack_id, pack_version, status, priority, attempt, max_attempts,
parameters, parameters_digest, idempotency_key, correlation_id, lease_id, task_runner_id, lease_until,
created_at, scheduled_at, leased_at, started_at, completed_at, not_before, reason, exit_code, duration_ms,
created_by, metadata)
VALUES (
@pack_run_id, @tenant_id, @project_id, @pack_id, @pack_version, @status::pack_run_status, @priority,
@attempt, @max_attempts, @parameters, @parameters_digest, @idempotency_key, @correlation_id, @lease_id,
@task_runner_id, @lease_until, @created_at, @scheduled_at, @leased_at, @started_at, @completed_at,
@not_before, @reason, @exit_code, @duration_ms, @created_by, @metadata)
""";
private const string UpdateStatusSql = """
UPDATE pack_runs
SET status = @status::pack_run_status,
attempt = @attempt,
lease_id = @lease_id,
task_runner_id = @task_runner_id,
lease_until = @lease_until,
scheduled_at = @scheduled_at,
leased_at = @leased_at,
started_at = @started_at,
completed_at = @completed_at,
not_before = @not_before,
reason = @reason,
exit_code = @exit_code,
duration_ms = @duration_ms
WHERE tenant_id = @tenant_id AND pack_run_id = @pack_run_id
""";
private const string LeaseNextSqlTemplate = """
UPDATE pack_runs
SET status = 'leased'::pack_run_status,
lease_id = @lease_id,
task_runner_id = @task_runner_id,
lease_until = @lease_until,
leased_at = @leased_at
WHERE tenant_id = @tenant_id
AND pack_run_id = (
SELECT pack_run_id
FROM pack_runs
WHERE tenant_id = @tenant_id
AND status = 'scheduled'::pack_run_status
AND (not_before IS NULL OR not_before <= @now)
{0}
ORDER BY priority DESC, created_at
LIMIT 1
FOR UPDATE SKIP LOCKED)
RETURNING {1};
""";
private const string ExtendLeaseSql = """
UPDATE pack_runs
SET lease_until = @new_lease_until
WHERE tenant_id = @tenant_id
AND pack_run_id = @pack_run_id
AND lease_id = @lease_id
AND status = 'leased'::pack_run_status
AND lease_until > @now
""";
private const string ReleaseLeaseSql = """
UPDATE pack_runs
SET status = @status::pack_run_status,
lease_id = NULL,
task_runner_id = NULL,
lease_until = NULL,
completed_at = CASE WHEN @completed_at IS NULL THEN completed_at ELSE @completed_at END,
reason = @reason
WHERE tenant_id = @tenant_id AND pack_run_id = @pack_run_id AND lease_id = @lease_id
""";
private const string ListSqlTemplate = $$"""
SELECT {{Columns}}
FROM pack_runs
WHERE tenant_id = @tenant_id
{0}
ORDER BY created_at DESC
LIMIT @limit OFFSET @offset
""";
private const string CountSqlTemplate = """
SELECT COUNT(*)
FROM pack_runs
WHERE tenant_id = @tenant_id
{0}
""";
private const string ExpiredLeaseSql = $"""
SELECT {Columns}
FROM pack_runs
WHERE status = 'leased'::pack_run_status
AND lease_until < @cutoff
ORDER BY lease_until
LIMIT @limit
""";
private const string CancelPendingSql = """
UPDATE pack_runs
SET status = 'canceled'::pack_run_status,
reason = @reason,
completed_at = NOW()
WHERE tenant_id = @tenant_id
AND status = 'pending'::pack_run_status
{0}
""";
private readonly OrchestratorDataSource _dataSource;
private readonly ILogger<PostgresPackRunRepository> _logger;
public PostgresPackRunRepository(OrchestratorDataSource dataSource, ILogger<PostgresPackRunRepository> logger)
{
_dataSource = dataSource ?? throw new ArgumentNullException(nameof(dataSource));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public async Task<PackRun?> GetByIdAsync(string tenantId, Guid packRunId, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(SelectByIdSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
if (!await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
return null;
}
return Map(reader);
}
public async Task<PackRun?> GetByIdempotencyKeyAsync(string tenantId, string idempotencyKey, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(SelectByIdempotencySql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("idempotency_key", idempotencyKey);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
if (!await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
return null;
}
return Map(reader);
}
public async Task CreateAsync(PackRun packRun, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(packRun.TenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(InsertSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
AddParameters(command, packRun);
try
{
await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
OrchestratorMetrics.PackRunCreated(packRun.TenantId, packRun.PackId);
}
catch (PostgresException ex) when (string.Equals(ex.SqlState, PostgresErrorCodes.UniqueViolation, StringComparison.Ordinal))
{
_logger.LogWarning(ex, "Duplicate pack run idempotency key {Key} for tenant {Tenant}", packRun.IdempotencyKey, packRun.TenantId);
throw;
}
}
public async Task UpdateStatusAsync(
string tenantId,
Guid packRunId,
PackRunStatus status,
int attempt,
Guid? leaseId,
string? taskRunnerId,
DateTimeOffset? leaseUntil,
DateTimeOffset? scheduledAt,
DateTimeOffset? leasedAt,
DateTimeOffset? startedAt,
DateTimeOffset? completedAt,
DateTimeOffset? notBefore,
string? reason,
int? exitCode,
long? durationMs,
CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(UpdateStatusSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
command.Parameters.AddWithValue("status", StatusToString(status));
command.Parameters.AddWithValue("attempt", attempt);
command.Parameters.AddWithValue("lease_id", (object?)leaseId ?? DBNull.Value);
command.Parameters.AddWithValue("task_runner_id", (object?)taskRunnerId ?? DBNull.Value);
command.Parameters.AddWithValue("lease_until", (object?)leaseUntil ?? DBNull.Value);
command.Parameters.AddWithValue("scheduled_at", (object?)scheduledAt ?? DBNull.Value);
command.Parameters.AddWithValue("leased_at", (object?)leasedAt ?? DBNull.Value);
command.Parameters.AddWithValue("started_at", (object?)startedAt ?? DBNull.Value);
command.Parameters.AddWithValue("completed_at", (object?)completedAt ?? DBNull.Value);
command.Parameters.AddWithValue("not_before", (object?)notBefore ?? DBNull.Value);
command.Parameters.AddWithValue("reason", (object?)reason ?? DBNull.Value);
command.Parameters.AddWithValue("exit_code", (object?)exitCode ?? DBNull.Value);
command.Parameters.AddWithValue("duration_ms", (object?)durationMs ?? DBNull.Value);
await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
public async Task<PackRun?> LeaseNextAsync(
string tenantId,
string? packId,
Guid leaseId,
string taskRunnerId,
DateTimeOffset leaseUntil,
CancellationToken cancellationToken)
{
var packFilter = string.IsNullOrWhiteSpace(packId) ? string.Empty : "AND pack_id = @pack_id";
var sql = string.Format(LeaseNextSqlTemplate, packFilter, Columns);
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(sql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
var now = DateTimeOffset.UtcNow;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("lease_id", leaseId);
command.Parameters.AddWithValue("task_runner_id", taskRunnerId);
command.Parameters.AddWithValue("lease_until", leaseUntil);
command.Parameters.AddWithValue("leased_at", now);
command.Parameters.AddWithValue("now", now);
if (!string.IsNullOrWhiteSpace(packId))
{
command.Parameters.AddWithValue("pack_id", packId!);
}
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
if (!await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
return null;
}
return Map(reader);
}
public async Task<bool> ExtendLeaseAsync(string tenantId, Guid packRunId, Guid leaseId, DateTimeOffset newLeaseUntil, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(ExtendLeaseSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
command.Parameters.AddWithValue("lease_id", leaseId);
command.Parameters.AddWithValue("new_lease_until", newLeaseUntil);
command.Parameters.AddWithValue("now", DateTimeOffset.UtcNow);
var rows = await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
return rows > 0;
}
public async Task ReleaseLeaseAsync(string tenantId, Guid packRunId, Guid leaseId, PackRunStatus newStatus, string? reason, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(ReleaseLeaseSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("pack_run_id", packRunId);
command.Parameters.AddWithValue("lease_id", leaseId);
command.Parameters.AddWithValue("status", StatusToString(newStatus));
command.Parameters.AddWithValue("reason", (object?)reason ?? DBNull.Value);
command.Parameters.AddWithValue("completed_at", DateTimeOffset.UtcNow);
await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
public async Task<IReadOnlyList<PackRun>> ListAsync(
string tenantId,
string? packId,
PackRunStatus? status,
string? projectId,
DateTimeOffset? createdAfter,
DateTimeOffset? createdBefore,
int limit,
int offset,
CancellationToken cancellationToken)
{
var filters = BuildFilters(packId, status, projectId, createdAfter, createdBefore, out var parameters);
var sql = string.Format(ListSqlTemplate, filters);
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(sql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("limit", limit);
command.Parameters.AddWithValue("offset", offset);
foreach (var param in parameters)
{
command.Parameters.Add(param);
}
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
var results = new List<PackRun>();
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
results.Add(Map(reader));
}
return results;
}
public async Task<int> CountAsync(string tenantId, string? packId, PackRunStatus? status, string? projectId, CancellationToken cancellationToken)
{
var filters = BuildFilters(packId, status, projectId, null, null, out var parameters);
var sql = string.Format(CountSqlTemplate, filters);
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(sql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
foreach (var param in parameters)
{
command.Parameters.Add(param);
}
var countObj = await command.ExecuteScalarAsync(cancellationToken).ConfigureAwait(false);
return Convert.ToInt32(countObj, CultureInfo.InvariantCulture);
}
public async Task<IReadOnlyList<PackRun>> GetExpiredLeasesAsync(DateTimeOffset cutoff, int limit, CancellationToken cancellationToken)
{
await using var connection = await _dataSource.OpenConnectionAsync("", "reader", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(ExpiredLeaseSql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("cutoff", cutoff);
command.Parameters.AddWithValue("limit", limit);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
var results = new List<PackRun>();
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
results.Add(Map(reader));
}
return results;
}
public async Task<int> CancelPendingAsync(string tenantId, string? packId, string reason, CancellationToken cancellationToken)
{
var filter = string.IsNullOrWhiteSpace(packId) ? string.Empty : "AND pack_id = @pack_id";
var sql = string.Format(CancelPendingSql, filter);
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, "writer", cancellationToken).ConfigureAwait(false);
await using var command = new NpgsqlCommand(sql, connection);
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
command.Parameters.AddWithValue("tenant_id", tenantId);
command.Parameters.AddWithValue("reason", reason);
if (!string.IsNullOrWhiteSpace(packId))
{
command.Parameters.AddWithValue("pack_id", packId!);
}
var rows = await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
return rows;
}
private static string BuildFilters(
string? packId,
PackRunStatus? status,
string? projectId,
DateTimeOffset? createdAfter,
DateTimeOffset? createdBefore,
out List<NpgsqlParameter> parameters)
{
var filters = new List<string>();
parameters = new List<NpgsqlParameter>();
if (!string.IsNullOrWhiteSpace(packId))
{
filters.Add("pack_id = @pack_id");
parameters.Add(new NpgsqlParameter("pack_id", packId!));
}
if (status.HasValue)
{
filters.Add("status = @status::pack_run_status");
parameters.Add(new NpgsqlParameter("status", StatusToString(status.Value)));
}
if (!string.IsNullOrWhiteSpace(projectId))
{
filters.Add("project_id = @project_id");
parameters.Add(new NpgsqlParameter("project_id", projectId!));
}
if (createdAfter.HasValue)
{
filters.Add("created_at >= @created_after");
parameters.Add(new NpgsqlParameter("created_after", createdAfter.Value));
}
if (createdBefore.HasValue)
{
filters.Add("created_at <= @created_before");
parameters.Add(new NpgsqlParameter("created_before", createdBefore.Value));
}
return filters.Count == 0 ? string.Empty : " AND " + string.Join(" AND ", filters);
}
private static void AddParameters(NpgsqlCommand command, PackRun packRun)
{
command.Parameters.AddWithValue("pack_run_id", packRun.PackRunId);
command.Parameters.AddWithValue("tenant_id", packRun.TenantId);
command.Parameters.AddWithValue("project_id", (object?)packRun.ProjectId ?? DBNull.Value);
command.Parameters.AddWithValue("pack_id", packRun.PackId);
command.Parameters.AddWithValue("pack_version", packRun.PackVersion);
command.Parameters.AddWithValue("status", StatusToString(packRun.Status));
command.Parameters.AddWithValue("priority", packRun.Priority);
command.Parameters.AddWithValue("attempt", packRun.Attempt);
command.Parameters.AddWithValue("max_attempts", packRun.MaxAttempts);
command.Parameters.AddWithValue("parameters", packRun.Parameters);
command.Parameters.AddWithValue("parameters_digest", packRun.ParametersDigest);
command.Parameters.AddWithValue("idempotency_key", packRun.IdempotencyKey);
command.Parameters.AddWithValue("correlation_id", (object?)packRun.CorrelationId ?? DBNull.Value);
command.Parameters.AddWithValue("lease_id", (object?)packRun.LeaseId ?? DBNull.Value);
command.Parameters.AddWithValue("task_runner_id", (object?)packRun.TaskRunnerId ?? DBNull.Value);
command.Parameters.AddWithValue("lease_until", (object?)packRun.LeaseUntil ?? DBNull.Value);
command.Parameters.AddWithValue("created_at", packRun.CreatedAt);
command.Parameters.AddWithValue("scheduled_at", (object?)packRun.ScheduledAt ?? DBNull.Value);
command.Parameters.AddWithValue("leased_at", (object?)packRun.LeasedAt ?? DBNull.Value);
command.Parameters.AddWithValue("started_at", (object?)packRun.StartedAt ?? DBNull.Value);
command.Parameters.AddWithValue("completed_at", (object?)packRun.CompletedAt ?? DBNull.Value);
command.Parameters.AddWithValue("not_before", (object?)packRun.NotBefore ?? DBNull.Value);
command.Parameters.AddWithValue("reason", (object?)packRun.Reason ?? DBNull.Value);
command.Parameters.AddWithValue("exit_code", (object?)packRun.ExitCode ?? DBNull.Value);
command.Parameters.AddWithValue("duration_ms", (object?)packRun.DurationMs ?? DBNull.Value);
command.Parameters.AddWithValue("created_by", packRun.CreatedBy);
command.Parameters.Add(new NpgsqlParameter("metadata", NpgsqlDbType.Jsonb)
{
Value = (object?)packRun.Metadata ?? DBNull.Value
});
}
private static string StatusToString(PackRunStatus status) => status switch
{
PackRunStatus.Pending => "pending",
PackRunStatus.Scheduled => "scheduled",
PackRunStatus.Leased => "leased",
PackRunStatus.Running => "running",
PackRunStatus.Succeeded => "succeeded",
PackRunStatus.Failed => "failed",
PackRunStatus.Canceled => "canceled",
PackRunStatus.TimedOut => "timed_out",
_ => throw new ArgumentOutOfRangeException(nameof(status), status, null)
};
private static PackRun Map(NpgsqlDataReader reader)
{
return new PackRun(
PackRunId: reader.GetGuid(0),
TenantId: reader.GetString(1),
ProjectId: reader.IsDBNull(2) ? null : reader.GetString(2),
PackId: reader.GetString(3),
PackVersion: reader.GetString(4),
Status: ParseStatus(reader.GetString(5)),
Priority: reader.GetInt32(6),
Attempt: reader.GetInt32(7),
MaxAttempts: reader.GetInt32(8),
Parameters: reader.GetString(9),
ParametersDigest: reader.GetString(10),
IdempotencyKey: reader.GetString(11),
CorrelationId: reader.IsDBNull(12) ? null : reader.GetString(12),
LeaseId: reader.IsDBNull(13) ? null : reader.GetGuid(13),
TaskRunnerId: reader.IsDBNull(14) ? null : reader.GetString(14),
LeaseUntil: reader.IsDBNull(15) ? null : reader.GetFieldValue<DateTimeOffset>(15),
CreatedAt: reader.GetFieldValue<DateTimeOffset>(16),
ScheduledAt: reader.IsDBNull(17) ? null : reader.GetFieldValue<DateTimeOffset>(17),
LeasedAt: reader.IsDBNull(18) ? null : reader.GetFieldValue<DateTimeOffset>(18),
StartedAt: reader.IsDBNull(19) ? null : reader.GetFieldValue<DateTimeOffset>(19),
CompletedAt: reader.IsDBNull(20) ? null : reader.GetFieldValue<DateTimeOffset>(20),
NotBefore: reader.IsDBNull(21) ? null : reader.GetFieldValue<DateTimeOffset>(21),
Reason: reader.IsDBNull(22) ? null : reader.GetString(22),
ExitCode: reader.IsDBNull(23) ? null : reader.GetInt32(23),
DurationMs: reader.IsDBNull(24) ? null : reader.GetInt64(24),
CreatedBy: reader.GetString(25),
Metadata: reader.IsDBNull(26) ? null : reader.GetString(26));
}
private static PackRunStatus ParseStatus(string value) => value switch
{
"pending" => PackRunStatus.Pending,
"scheduled" => PackRunStatus.Scheduled,
"leased" => PackRunStatus.Leased,
"running" => PackRunStatus.Running,
"succeeded" => PackRunStatus.Succeeded,
"failed" => PackRunStatus.Failed,
"canceled" => PackRunStatus.Canceled,
"timed_out" => PackRunStatus.TimedOut,
_ => throw new ArgumentOutOfRangeException(nameof(value), value, "Unknown pack_run_status")
};
}
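The `BuildFilters` helper above composes optional predicates into the tail of a tenant-scoped query. As an illustration (a sketch, not the repository's actual SQL — column list, ordering, and paging clauses here are assumptions), a `ListAsync` call filtered by pack and status would produce something like:

```sql
-- Hypothetical composed query; BuildFilters contributes everything after the tenant guard.
SELECT pack_run_id, tenant_id, project_id, pack_id, pack_version, status, priority, created_at
FROM pack_runs
WHERE tenant_id = @tenant_id
  AND pack_id = @pack_id
  AND status = @status::pack_run_status
ORDER BY created_at DESC
LIMIT @limit OFFSET @offset;
```

Note the `::pack_run_status` cast: status values travel as text parameters and are cast server-side, which keeps the `NpgsqlParameter` list free of enum mapping configuration.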


@@ -39,6 +39,8 @@ public static class ServiceCollectionExtensions
services.AddScoped<IThrottleRepository, PostgresThrottleRepository>();
services.AddScoped<IWatermarkRepository, PostgresWatermarkRepository>();
services.AddScoped<Infrastructure.Repositories.IBackfillRepository, PostgresBackfillRepository>();
services.AddScoped<IPackRunRepository, PostgresPackRunRepository>();
services.AddScoped<IPackRunLogRepository, PostgresPackRunLogRepository>();
// Register audit and ledger repositories
services.AddScoped<IAuditRepository, PostgresAuditRepository>();


@@ -0,0 +1,81 @@
-- 006_pack_runs.sql
-- Pack run persistence and log streaming schema (ORCH-SVC-41/42-101)
BEGIN;
-- Enum for pack run lifecycle
CREATE TYPE pack_run_status AS ENUM (
'pending',
'scheduled',
'leased',
'running',
'succeeded',
'failed',
'canceled',
'timed_out'
);
-- Pack runs
CREATE TABLE pack_runs (
pack_run_id UUID NOT NULL,
tenant_id TEXT NOT NULL,
project_id TEXT,
pack_id TEXT NOT NULL,
pack_version TEXT NOT NULL,
status pack_run_status NOT NULL DEFAULT 'pending',
priority INTEGER NOT NULL DEFAULT 0,
attempt INTEGER NOT NULL DEFAULT 1,
max_attempts INTEGER NOT NULL DEFAULT 3,
parameters TEXT NOT NULL,
parameters_digest CHAR(64) NOT NULL,
idempotency_key TEXT NOT NULL,
correlation_id TEXT,
lease_id UUID,
task_runner_id TEXT,
lease_until TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
scheduled_at TIMESTAMPTZ,
leased_at TIMESTAMPTZ,
started_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
not_before TIMESTAMPTZ,
reason TEXT,
exit_code INTEGER,
duration_ms BIGINT,
created_by TEXT NOT NULL,
metadata JSONB,
CONSTRAINT pk_pack_runs PRIMARY KEY (tenant_id, pack_run_id),
CONSTRAINT uq_pack_runs_idempotency UNIQUE (tenant_id, idempotency_key),
CONSTRAINT ck_pack_runs_attempt_positive CHECK (attempt >= 1),
CONSTRAINT ck_pack_runs_max_attempts_positive CHECK (max_attempts >= 1),
CONSTRAINT ck_pack_runs_parameters_digest_hex CHECK (parameters_digest ~ '^[0-9a-f]{64}$')
) PARTITION BY LIST (tenant_id);
CREATE TABLE pack_runs_default PARTITION OF pack_runs DEFAULT;
CREATE INDEX ix_pack_runs_status ON pack_runs (tenant_id, status, priority DESC, created_at);
CREATE INDEX ix_pack_runs_pack ON pack_runs (tenant_id, pack_id, status, created_at DESC);
CREATE INDEX ix_pack_runs_not_before ON pack_runs (tenant_id, not_before) WHERE not_before IS NOT NULL;
CREATE INDEX ix_pack_runs_lease_until ON pack_runs (tenant_id, lease_until) WHERE status = 'leased' AND lease_until IS NOT NULL;
-- Pack run logs
CREATE TABLE pack_run_logs (
log_id UUID NOT NULL,
tenant_id TEXT NOT NULL,
pack_run_id UUID NOT NULL,
sequence BIGINT NOT NULL,
log_level SMALLINT NOT NULL,
source TEXT,
message TEXT NOT NULL,
data JSONB,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
CONSTRAINT pk_pack_run_logs PRIMARY KEY (tenant_id, pack_run_id, sequence),
CONSTRAINT uq_pack_run_logs_log_id UNIQUE (tenant_id, log_id), -- partitioned tables require the partition key in unique constraints
CONSTRAINT fk_pack_run_logs_run FOREIGN KEY (tenant_id, pack_run_id) REFERENCES pack_runs (tenant_id, pack_run_id)
) PARTITION BY LIST (tenant_id);
CREATE TABLE pack_run_logs_default PARTITION OF pack_run_logs DEFAULT;
CREATE INDEX ix_pack_run_logs_level ON pack_run_logs (tenant_id, pack_run_id, log_level, sequence);
CREATE INDEX ix_pack_run_logs_created ON pack_run_logs (tenant_id, pack_run_id, created_at);
COMMIT;
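The `ix_pack_runs_status` and `ix_pack_runs_not_before` indexes exist to make worker lease claims cheap. A claim query consistent with this schema might look like the following (a sketch only — the actual SQL lives in `PostgresPackRunRepository.LeaseNextAsync` and may differ in shape):

```sql
-- Illustrative lease claim: pick the highest-priority eligible run per tenant,
-- skipping rows other workers already locked.
UPDATE pack_runs
SET status = 'leased',
    lease_id = @lease_id,
    task_runner_id = @task_runner_id,
    lease_until = @lease_until,
    leased_at = NOW()
WHERE (tenant_id, pack_run_id) = (
    SELECT tenant_id, pack_run_id
    FROM pack_runs
    WHERE tenant_id = @tenant_id
      AND status IN ('pending', 'scheduled')
      AND (not_before IS NULL OR not_before <= NOW())
    ORDER BY priority DESC, created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING pack_run_id;
```

`FOR UPDATE SKIP LOCKED` lets multiple task runners poll concurrently without serializing on the same head-of-queue row.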


@@ -0,0 +1,76 @@
using System.Text.Json;
using Microsoft.AspNetCore.Http;
using StellaOps.Orchestrator.WebService.Contracts;
using StellaOps.Orchestrator.WebService.Services;
namespace StellaOps.Orchestrator.Tests.ControlPlane;
/// <summary>
/// Unit coverage for OpenAPI discovery documents and deprecation headers (ORCH-OAS-61/63).
/// </summary>
public sealed class OpenApiDocumentsTests
{
[Fact]
public void DiscoveryDocument_ContainsServiceMetadata()
{
var doc = OpenApiDocuments.CreateDiscoveryDocument("1.2.3");
Assert.Equal("orchestrator", doc.Service);
Assert.Equal("3.1.0", doc.SpecVersion);
Assert.Equal("1.2.3", doc.Version);
Assert.Equal("/openapi/orchestrator.json", doc.Url);
Assert.Equal("application/json", doc.Format);
Assert.Equal("#/components/schemas/Error", doc.ErrorEnvelopeSchema);
Assert.True(doc.Notifications.ContainsKey("topic"));
}
[Fact]
public void Specification_IncludesKeyPathsAndIdempotencyHeaders()
{
var spec = OpenApiDocuments.CreateSpecification("1.2.3");
var json = JsonSerializer.Serialize(spec, OpenApiDocuments.SerializerOptions);
Assert.Contains("/api/v1/orchestrator/jobs", json);
Assert.DoesNotContain("/.well-known/openapi", json); // spec is per-service
Assert.Contains("Idempotency-Key", json);
Assert.Contains("deprecated", json);
Assert.Contains("error", json);
}
[Fact]
public void Specification_ExposesPaginationForJobs()
{
var spec = OpenApiDocuments.CreateSpecification("1.2.3");
var json = JsonSerializer.Serialize(spec, OpenApiDocuments.SerializerOptions);
Assert.Contains("/api/v1/orchestrator/jobs", json);
Assert.Contains("nextCursor", json);
Assert.Contains("cursor=", json); // RFC 8288 Link header example for SDK paginators
}
[Fact]
public void Specification_IncludesPackRunScheduleAndRetry()
{
var spec = OpenApiDocuments.CreateSpecification("1.2.3");
var json = JsonSerializer.Serialize(spec, OpenApiDocuments.SerializerOptions);
Assert.Contains("/api/v1/orchestrator/pack-runs", json);
Assert.Contains("SchedulePackRunRequest", json);
Assert.Contains("/api/v1/orchestrator/pack-runs/{packRunId}/retry", json);
Assert.Contains("RetryPackRunResponse", json);
}
[Fact]
public void DeprecationHeaders_AddsStandardMetadata()
{
var context = new DefaultHttpContext();
DeprecationHeaders.Apply(context.Response, "/api/v1/orchestrator/jobs");
var headers = context.Response.Headers;
Assert.Equal("true", headers["Deprecation"].ToString());
Assert.Contains("alternate", headers["Link"].ToString());
Assert.False(string.IsNullOrWhiteSpace(headers["Sunset"]));
Assert.Equal("orchestrator:legacy-endpoint", headers["X-StellaOps-Deprecated"].ToString());
}
}


@@ -571,7 +571,9 @@ public sealed class ExportAlertTests
var after = DateTimeOffset.UtcNow;
Assert.NotNull(resolved.ResolvedAt);
Assert.InRange(resolved.ResolvedAt.Value, before, after);
var windowStart = before <= after ? before : after;
var windowEnd = before >= after ? before : after;
Assert.InRange(resolved.ResolvedAt.Value, windowStart, windowEnd);
Assert.Equal("Fixed database connection issue", resolved.ResolutionNotes);
Assert.False(resolved.IsActive);
}


@@ -0,0 +1,120 @@
using System.Text;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Options;
using StellaOps.Orchestrator.Core.Domain;
using StellaOps.Orchestrator.Infrastructure.Repositories;
using StellaOps.Orchestrator.WebService.Streaming;
using PackRunDomain = StellaOps.Orchestrator.Core.Domain.PackRun;
namespace StellaOps.Orchestrator.Tests.PackRuns;
public sealed class PackRunStreamCoordinatorTests
{
[Fact]
public async Task StreamAsync_TerminalRun_WritesInitialHeartbeatAndCompleted()
{
var now = DateTimeOffset.UtcNow;
var packRun = new PackRunDomain(
PackRunId: Guid.NewGuid(),
TenantId: "tenantA",
ProjectId: null,
PackId: "pack.demo",
PackVersion: "1.0.0",
Status: PackRunStatus.Succeeded,
Priority: 0,
Attempt: 1,
MaxAttempts: 3,
Parameters: "{}",
ParametersDigest: new string('a', 64),
IdempotencyKey: "idem-1",
CorrelationId: null,
LeaseId: null,
TaskRunnerId: "runner-1",
LeaseUntil: null,
CreatedAt: now.AddMinutes(-2),
ScheduledAt: now.AddMinutes(-2),
LeasedAt: now.AddMinutes(-1),
StartedAt: now.AddMinutes(-1),
CompletedAt: now,
NotBefore: null,
Reason: null,
ExitCode: 0,
DurationMs: 120_000,
CreatedBy: "tester",
Metadata: null);
var logRepo = new StubPackRunLogRepository((2, 5));
var streamOptions = Options.Create(new StreamOptions
{
PollInterval = TimeSpan.FromMilliseconds(150),
HeartbeatInterval = TimeSpan.FromMilliseconds(150),
MaxStreamDuration = TimeSpan.FromMinutes(1)
});
var coordinator = new PackRunStreamCoordinator(
new StubPackRunRepository(packRun),
logRepo,
streamOptions,
TimeProvider.System,
NullLogger<PackRunStreamCoordinator>.Instance);
var context = new DefaultHttpContext();
await using var body = new MemoryStream();
context.Response.Body = body;
await coordinator.StreamAsync(context, packRun.TenantId, packRun, CancellationToken.None);
body.Position = 0;
var payload = Encoding.UTF8.GetString(body.ToArray());
Assert.Contains("event: initial", payload);
Assert.Contains("event: heartbeat", payload);
Assert.Contains("event: completed", payload);
}
private sealed class StubPackRunRepository : IPackRunRepository
{
private readonly PackRunDomain _packRun;
public StubPackRunRepository(PackRunDomain packRun)
{
_packRun = packRun;
}
public Task<PackRunDomain?> GetByIdAsync(string tenantId, Guid packRunId, CancellationToken cancellationToken)
=> Task.FromResult<PackRunDomain?>(_packRun);
public Task<PackRunDomain?> GetByIdempotencyKeyAsync(string tenantId, string idempotencyKey, CancellationToken cancellationToken) => Task.FromResult<PackRunDomain?>(_packRun);
public Task CreateAsync(PackRunDomain packRun, CancellationToken cancellationToken) => Task.CompletedTask;
public Task UpdateStatusAsync(string tenantId, Guid packRunId, PackRunStatus status, int attempt, Guid? leaseId, string? taskRunnerId, DateTimeOffset? leaseUntil, DateTimeOffset? scheduledAt, DateTimeOffset? leasedAt, DateTimeOffset? startedAt, DateTimeOffset? completedAt, DateTimeOffset? notBefore, string? reason, int? exitCode, long? durationMs, CancellationToken cancellationToken) => Task.CompletedTask;
public Task<PackRunDomain?> LeaseNextAsync(string tenantId, string? packId, Guid leaseId, string taskRunnerId, DateTimeOffset leaseUntil, CancellationToken cancellationToken) => Task.FromResult<PackRunDomain?>(_packRun);
public Task<bool> ExtendLeaseAsync(string tenantId, Guid packRunId, Guid leaseId, DateTimeOffset newLeaseUntil, CancellationToken cancellationToken) => Task.FromResult(true);
public Task ReleaseLeaseAsync(string tenantId, Guid packRunId, Guid leaseId, PackRunStatus newStatus, string? reason, CancellationToken cancellationToken) => Task.CompletedTask;
public Task<IReadOnlyList<PackRunDomain>> ListAsync(string tenantId, string? packId, PackRunStatus? status, string? projectId, DateTimeOffset? createdAfter, DateTimeOffset? createdBefore, int limit, int offset, CancellationToken cancellationToken) => Task.FromResult<IReadOnlyList<PackRunDomain>>(new[] { _packRun });
public Task<int> CountAsync(string tenantId, string? packId, PackRunStatus? status, string? projectId, CancellationToken cancellationToken) => Task.FromResult(1);
public Task<IReadOnlyList<PackRunDomain>> GetExpiredLeasesAsync(DateTimeOffset cutoff, int limit, CancellationToken cancellationToken) => Task.FromResult<IReadOnlyList<PackRunDomain>>(Array.Empty<PackRunDomain>());
public Task<int> CancelPendingAsync(string tenantId, string? packId, string reason, CancellationToken cancellationToken) => Task.FromResult(0);
}
private sealed class StubPackRunLogRepository : IPackRunLogRepository
{
private readonly (long Count, long Latest) _stats;
public StubPackRunLogRepository((long Count, long Latest) stats)
{
_stats = stats;
}
public Task AppendAsync(PackRunLog log, CancellationToken cancellationToken) => Task.CompletedTask;
public Task AppendBatchAsync(IReadOnlyList<PackRunLog> logs, CancellationToken cancellationToken) => Task.CompletedTask;
public Task<PackRunLogBatch> GetLogsAsync(string tenantId, Guid packRunId, long afterSequence, int limit, CancellationToken cancellationToken)
=> Task.FromResult(new PackRunLogBatch(packRunId, tenantId, afterSequence, new List<PackRunLog>()));
public Task<(long Count, long LatestSequence)> GetLogStatsAsync(string tenantId, Guid packRunId, CancellationToken cancellationToken)
=> Task.FromResult(_stats);
public Task<PackRunLogBatch> GetLogsByLevelAsync(string tenantId, Guid packRunId, LogLevel minLevel, long afterSequence, int limit, CancellationToken cancellationToken)
=> Task.FromResult(new PackRunLogBatch(packRunId, tenantId, afterSequence, new List<PackRunLog>()));
public Task<PackRunLogBatch> SearchLogsAsync(string tenantId, Guid packRunId, string pattern, long afterSequence, int limit, CancellationToken cancellationToken)
=> Task.FromResult(new PackRunLogBatch(packRunId, tenantId, afterSequence, new List<PackRunLog>()));
public Task<long> DeleteLogsAsync(string tenantId, Guid packRunId, CancellationToken cancellationToken) => Task.FromResult(0L);
}
}
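The assertions above imply an SSE wire format along these lines (illustrative only — the `data:` payload fields shown here are assumptions, not the coordinator's actual serialization):

```
event: initial
data: {"packRunId":"…","status":"succeeded","logCount":2,"latestSequence":5}

event: heartbeat
data: {"timestamp":"2025-11-30T12:00:00Z"}

event: completed
data: {"status":"succeeded","exitCode":0,"durationMs":120000}
```

Because the run under test is already terminal, the coordinator emits `initial`, at least one `heartbeat` per `HeartbeatInterval`, and then `completed` before closing the response, which is exactly what the three `Assert.Contains` calls check.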


@@ -6,6 +6,8 @@ namespace StellaOps.Orchestrator.WebService.Contracts;
/// Response representing a job.
/// </summary>
public sealed record JobResponse(
string TenantId,
string? ProjectId,
Guid JobId,
Guid? RunId,
string JobType,
@@ -26,6 +28,8 @@ public sealed record JobResponse(
string CreatedBy)
{
public static JobResponse FromDomain(Job job) => new(
job.TenantId,
job.ProjectId,
job.JobId,
job.RunId,
job.JobType,
@@ -50,6 +54,8 @@ public sealed record JobResponse(
/// Response representing a job with its full payload.
/// </summary>
public sealed record JobDetailResponse(
string TenantId,
string? ProjectId,
Guid JobId,
Guid? RunId,
string JobType,
@@ -75,6 +81,8 @@ public sealed record JobDetailResponse(
string CreatedBy)
{
public static JobDetailResponse FromDomain(Job job) => new(
job.TenantId,
job.ProjectId,
job.JobId,
job.RunId,
job.JobType,


@@ -0,0 +1,760 @@
using System.Reflection;
using System.Text.Json;
using System.Text.Json.Serialization;
namespace StellaOps.Orchestrator.WebService.Contracts;
/// <summary>
/// Factory for per-service OpenAPI discovery and specification documents.
/// </summary>
public static class OpenApiDocuments
{
public static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web)
{
WriteIndented = true
};
/// <summary>
/// Return the service build/version string based on the executing assembly.
/// </summary>
public static string GetServiceVersion()
=> Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "0.0.0";
public static OpenApiDiscoveryDocument CreateDiscoveryDocument(string version)
{
return new OpenApiDiscoveryDocument(
Service: "orchestrator",
SpecVersion: "3.1.0",
Version: version,
Format: "application/json",
Url: "/openapi/orchestrator.json",
ErrorEnvelopeSchema: "#/components/schemas/Error",
Notifications: new Dictionary<string, string>
{
["topic"] = "orchestrator.contracts",
["event"] = "orchestrator.openapi.updated"
});
}
public static OpenApiSpecDocument CreateSpecification(string version)
{
var exampleJob = ExampleJob();
var exampleJobDetail = ExampleJobDetail();
var exampleClaimRequest = new
{
workerId = "worker-7f9",
jobType = "sbom.build",
idempotencyKey = "claim-12af",
leaseSeconds = 300,
taskRunnerId = "runner-01"
};
var exampleClaimResponse = new
{
jobId = Guid.Parse("11111111-2222-3333-4444-555555555555"),
leaseId = Guid.Parse("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"),
leaseUntil = "2025-11-30T12:05:00Z",
job = exampleJobDetail
};
var examplePackRunRequest = new
{
packId = "pack.advisory.sbom",
packVersion = "1.2.3",
parameters = @"{""image"":""registry.example/app:1.0.0""}",
projectId = "proj-17",
idempotencyKey = "packrun-123",
priority = 5,
maxAttempts = 3
};
var examplePackRunResponse = new
{
packRunId = Guid.Parse("99999999-0000-1111-2222-333333333333"),
packId = "pack.advisory.sbom",
packVersion = "1.2.3",
status = "scheduled",
idempotencyKey = "packrun-123",
createdAt = "2025-11-30T12:00:00Z",
wasAlreadyScheduled = false
};
var exampleRetryRequest = new
{
parameters = @"{""image"":""registry.example/app:1.0.1""}",
idempotencyKey = "retry-123"
};
var exampleRetryResponse = new
{
originalPackRunId = Guid.Parse("99999999-0000-1111-2222-333333333333"),
newPackRunId = Guid.Parse("aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"),
status = "scheduled",
createdAt = "2025-11-30T12:10:00Z"
};
var paths = new Dictionary<string, object>
{
["/api/v1/orchestrator/jobs"] = new
{
get = new
{
summary = "List jobs",
description = "Paginated job listing with deterministic cursor ordering and idempotent retries.",
parameters = new object[]
{
QueryParameter("status", "query", "Job status filter (pending|scheduled|leased|succeeded|failed)", "string", "scheduled"),
QueryParameter("jobType", "query", "Filter by job type", "string", "sbom.build"),
QueryParameter("projectId", "query", "Filter by project identifier", "string", "proj-17"),
QueryParameter("createdAfter", "query", "RFC3339 timestamp for start of window", "string", "2025-11-01T00:00:00Z"),
QueryParameter("createdBefore", "query", "RFC3339 timestamp for end of window", "string", "2025-11-30T00:00:00Z"),
QueryParameter("limit", "query", "Results per page (max 200)", "integer", 50),
QueryParameter("cursor", "query", "Opaque pagination cursor", "string", "c3RhcnQ6NTA=")
},
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Jobs page",
headers = new Dictionary<string, object>
{
["Link"] = new
{
description = "RFC 8288 pagination cursor links",
schema = new { type = "string" },
example = "</api/v1/orchestrator/jobs?cursor=c3RhcnQ6NTA=>; rel=\"next\""
},
["X-StellaOps-Api-Version"] = new
{
description = "Service build version",
schema = new { type = "string" },
example = version
}
},
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/JobList" },
examples = new Dictionary<string, object>
{
["default"] = new
{
value = new
{
jobs = new[] { exampleJob },
nextCursor = "c3RhcnQ6NTA="
}
}
}
}
}
},
["400"] = ErrorResponse("Invalid filter")
}
}
},
["/api/v1/orchestrator/jobs/{jobId}"] = new
{
get = new
{
summary = "Get job",
description = "Fetch job metadata by identifier.",
parameters = new object[]
{
RouteParameter("jobId", "Job identifier", "string")
},
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Job metadata",
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/Job" },
examples = new Dictionary<string, object>
{
["default"] = new { value = exampleJob }
}
}
}
},
["404"] = ErrorResponse("Not found")
}
}
},
["/api/v1/orchestrator/jobs/{jobId}/detail"] = new
{
get = new
{
summary = "Legacy job detail (deprecated)",
description = "Legacy payload-inclusive job detail; prefer GET /api/v1/orchestrator/jobs/{jobId} plus artifact lookup.",
deprecated = true,
parameters = new object[]
{
RouteParameter("jobId", "Job identifier", "string")
},
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Job detail including payload (deprecated)",
headers = StandardDeprecationHeaders("/api/v1/orchestrator/jobs/{jobId}"),
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/JobDetail" },
examples = new Dictionary<string, object>
{
["legacy"] = new { value = exampleJobDetail }
}
}
}
},
["404"] = ErrorResponse("Not found")
}
}
},
["/api/v1/orchestrator/jobs/summary"] = new
{
get = new
{
summary = "Legacy job summary (deprecated)",
description = "Legacy summary endpoint; use pagination + counts or analytics feed.",
deprecated = true,
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Summary counts",
headers = StandardDeprecationHeaders("/api/v1/orchestrator/jobs"),
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/JobSummary" },
examples = new Dictionary<string, object>
{
["summary"] = new
{
value = new { totalJobs = 120, pendingJobs = 12, scheduledJobs = 30, leasedJobs = 20, succeededJobs = 45, failedJobs = 8, canceledJobs = 3, timedOutJobs = 2 }
}
}
}
}
}
}
}
},
["/api/v1/orchestrator/pack-runs"] = new
{
post = new
{
summary = "Schedule pack run",
description = "Schedule an orchestrated pack run with idempotency and quota enforcement.",
requestBody = new
{
required = true,
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/SchedulePackRunRequest" },
examples = new Dictionary<string, object> { ["default"] = new { value = examplePackRunRequest } }
}
}
},
responses = new Dictionary<string, object>
{
["201"] = new
{
description = "Pack run scheduled",
headers = new Dictionary<string, object>
{
["Location"] = new { description = "Pack run resource URL", schema = new { type = "string" }, example = "/api/v1/orchestrator/pack-runs/99999999-0000-1111-2222-333333333333" }
},
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/SchedulePackRunResponse" },
examples = new Dictionary<string, object> { ["default"] = new { value = examplePackRunResponse } }
}
}
},
["429"] = new
{
description = "Quota exceeded",
headers = new Dictionary<string, object> { ["Retry-After"] = new { description = "Seconds until retry", schema = new { type = "integer" }, example = 60 } },
content = new Dictionary<string, object> { ["application/json"] = new { schema = new { @ref = "#/components/schemas/PackRunError" } } }
}
}
}
},
["/api/v1/orchestrator/pack-runs/{packRunId}/retry"] = new
{
post = new
{
summary = "Retry failed pack run",
description = "Create a new pack run based on a failed one with optional parameter override.",
parameters = new object[] { RouteParameter("packRunId", "Pack run identifier", "string") },
requestBody = new
{
required = true,
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/RetryPackRunRequest" },
examples = new Dictionary<string, object> { ["default"] = new { value = exampleRetryRequest } }
}
}
},
responses = new Dictionary<string, object>
{
["201"] = new
{
description = "Retry scheduled",
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/RetryPackRunResponse" },
examples = new Dictionary<string, object> { ["default"] = new { value = exampleRetryResponse } }
}
}
},
["404"] = ErrorResponse("Pack run not found"),
["409"] = new
{
description = "Retry not allowed",
content = new Dictionary<string, object> { ["application/json"] = new { schema = new { @ref = "#/components/schemas/PackRunError" } } }
}
}
}
},
["/api/v1/orchestrator/worker/claim"] = new
{
post = new
{
summary = "Claim next job",
description = "Idempotent worker claim endpoint with optional idempotency key and task runner context.",
parameters = new object[]
{
HeaderParameter("Idempotency-Key", "Optional idempotency key for claim replay safety", "string", "claim-12af")
},
requestBody = new
{
required = true,
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/WorkerClaimRequest" },
examples = new Dictionary<string, object>
{
["default"] = new { value = exampleClaimRequest }
}
}
}
},
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Job claim response",
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/WorkerClaimResponse" },
examples = new Dictionary<string, object>
{
["default"] = new { value = exampleClaimResponse }
}
}
}
},
["204"] = new { description = "No jobs available" },
["400"] = ErrorResponse("Invalid claim request")
}
}
},
["/healthz"] = new
{
get = new
{
summary = "Health check",
description = "Basic service health probe.",
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Healthy",
content = new Dictionary<string, object>
{
["application/json"] = new
{
examples = new Dictionary<string, object>
{
["example"] = new
{
value = new { status = "ok", timestamp = "2025-11-30T00:00:00Z" }
}
}
}
}
}
}
}
}
};
var components = new OpenApiComponents(
Schemas: new Dictionary<string, object>
{
["Error"] = new
{
type = "object",
properties = new
{
error = new { type = "string" },
detail = new { type = "string" }
},
required = new[] { "error" }
},
["Job"] = new
{
type = "object",
properties = new
{
jobId = new { type = "string", format = "uuid" },
runId = new { type = "string", format = "uuid", nullable = true },
jobType = new { type = "string" },
status = new { type = "string" },
priority = new { type = "integer" },
attempt = new { type = "integer" },
maxAttempts = new { type = "integer" },
correlationId = new { type = "string", nullable = true },
workerId = new { type = "string", nullable = true },
taskRunnerId = new { type = "string", nullable = true },
createdAt = new { type = "string", format = "date-time" },
scheduledAt = new { type = "string", format = "date-time", nullable = true },
leasedAt = new { type = "string", format = "date-time", nullable = true },
completedAt = new { type = "string", format = "date-time", nullable = true },
notBefore = new { type = "string", format = "date-time", nullable = true },
reason = new { type = "string", nullable = true },
replayOf = new { type = "string", format = "uuid", nullable = true },
createdBy = new { type = "string" }
},
required = new[] { "jobId", "jobType", "status", "priority", "attempt", "maxAttempts", "createdAt", "createdBy" }
},
["JobDetail"] = new
{
allOf = new object[]
{
new { @ref = "#/components/schemas/Job" },
new
{
type = "object",
properties = new
{
payloadDigest = new { type = "string" },
payload = new { type = "string" },
idempotencyKey = new { type = "string" },
leaseId = new { type = "string", format = "uuid", nullable = true },
leaseUntil = new { type = "string", format = "date-time", nullable = true }
}
}
}
},
["JobList"] = new
{
type = "object",
properties = new
{
jobs = new
{
type = "array",
items = new { @ref = "#/components/schemas/Job" }
},
nextCursor = new { type = "string", nullable = true }
},
required = new[] { "jobs" }
},
["JobSummary"] = new
{
type = "object",
properties = new
{
totalJobs = new { type = "integer" },
pendingJobs = new { type = "integer" },
scheduledJobs = new { type = "integer" },
leasedJobs = new { type = "integer" },
succeededJobs = new { type = "integer" },
failedJobs = new { type = "integer" },
canceledJobs = new { type = "integer" },
timedOutJobs = new { type = "integer" }
}
},
["WorkerClaimRequest"] = new
{
type = "object",
properties = new
{
workerId = new { type = "string" },
jobType = new { type = "string" },
idempotencyKey = new { type = "string", nullable = true },
leaseSeconds = new { type = "integer", nullable = true },
taskRunnerId = new { type = "string", nullable = true }
},
required = new[] { "workerId" }
},
["WorkerClaimResponse"] = new
{
type = "object",
properties = new
{
jobId = new { type = "string", format = "uuid" },
leaseId = new { type = "string", format = "uuid" },
leaseUntil = new { type = "string", format = "date-time" },
job = new { @ref = "#/components/schemas/JobDetail" }
},
required = new[] { "jobId", "leaseId", "leaseUntil", "job" }
},
["SchedulePackRunRequest"] = new
{
type = "object",
properties = new
{
packId = new { type = "string" },
packVersion = new { type = "string" },
parameters = new { type = "string", nullable = true },
projectId = new { type = "string", nullable = true },
idempotencyKey = new { type = "string", nullable = true },
correlationId = new { type = "string", nullable = true },
priority = new { type = "integer", nullable = true },
maxAttempts = new { type = "integer", nullable = true },
metadata = new { type = "string", nullable = true }
},
required = new[] { "packId", "packVersion" }
},
["SchedulePackRunResponse"] = new
{
type = "object",
properties = new
{
packRunId = new { type = "string", format = "uuid" },
packId = new { type = "string" },
packVersion = new { type = "string" },
status = new { type = "string" },
idempotencyKey = new { type = "string" },
createdAt = new { type = "string", format = "date-time" },
wasAlreadyScheduled = new { type = "boolean" }
},
required = new[] { "packRunId", "packId", "packVersion", "status", "createdAt", "wasAlreadyScheduled" }
},
["RetryPackRunRequest"] = new
{
type = "object",
properties = new
{
parameters = new { type = "string", nullable = true },
idempotencyKey = new { type = "string", nullable = true }
}
},
["RetryPackRunResponse"] = new
{
type = "object",
properties = new
{
originalPackRunId = new { type = "string", format = "uuid" },
newPackRunId = new { type = "string", format = "uuid" },
status = new { type = "string" },
createdAt = new { type = "string", format = "date-time" }
},
required = new[] { "originalPackRunId", "newPackRunId", "status", "createdAt" }
},
["PackRunError"] = new
{
type = "object",
properties = new
{
code = new { type = "string" },
message = new { type = "string" },
packRunId = new { type = "string", format = "uuid", nullable = true },
retryAfterSeconds = new { type = "integer", nullable = true }
},
required = new[] { "code", "message" }
}
},
Headers: new Dictionary<string, object>
{
["Deprecation"] = new { description = "RFC 8594 deprecation marker", schema = new { type = "string" }, example = "true" },
["Sunset"] = new { description = "Target removal date", schema = new { type = "string" }, example = "Tue, 31 Mar 2026 00:00:00 GMT" },
["Link"] = new { description = "Alternate endpoint for deprecated operation", schema = new { type = "string" } }
});
return new OpenApiSpecDocument(
OpenApi: "3.1.0",
Info: new OpenApiInfo("StellaOps Orchestrator API", version, "Scheduling and automation control plane APIs with pagination, idempotency, and error envelopes."),
Paths: paths,
Components: components,
Servers: new List<object>
{
new { url = "https://api.stella-ops.local" },
new { url = "http://localhost:5201" }
});
// Local helper functions keep the anonymous object creation terse.
static object QueryParameter(string name, string @in, string description, string type, object? example = null)
{
return new Dictionary<string, object?>
{
["name"] = name,
["in"] = @in,
["description"] = description,
["required"] = false,
["schema"] = new { type },
["example"] = example
};
}
static object RouteParameter(string name, string description, string type)
{
return new Dictionary<string, object?>
{
["name"] = name,
["in"] = "path",
["description"] = description,
["required"] = true,
["schema"] = new { type }
};
}
static object HeaderParameter(string name, string description, string type, object? example = null)
{
return new Dictionary<string, object?>
{
["name"] = name,
["in"] = "header",
["description"] = description,
["required"] = false,
["schema"] = new { type },
["example"] = example
};
}
static object ErrorResponse(string description)
{
return new
{
description,
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/Error" },
examples = new Dictionary<string, object>
{
["error"] = new { value = new { error = "invalid_request", detail = description } }
}
}
}
};
}
static Dictionary<string, object> StandardDeprecationHeaders(string alternate)
{
return new Dictionary<string, object>
{
["Deprecation"] = new { description = "This endpoint is deprecated", schema = new { type = "string" }, example = "true" },
["Link"] = new { description = "Alternate endpoint", schema = new { type = "string" }, example = $"<{alternate}>; rel=\"alternate\"" },
["Sunset"] = new { description = "Planned removal", schema = new { type = "string" }, example = "Tue, 31 Mar 2026 00:00:00 GMT" }
};
}
}
private static object ExampleJob()
{
return new
{
jobId = Guid.Parse("aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb"),
runId = Guid.Parse("cccccccc-1111-2222-3333-dddddddddddd"),
jobType = "scan.image",
status = "scheduled",
priority = 5,
attempt = 0,
maxAttempts = 3,
correlationId = "corr-abc",
workerId = (string?)null,
taskRunnerId = "runner-01",
createdAt = "2025-11-30T12:00:00Z",
scheduledAt = "2025-11-30T12:05:00Z",
leasedAt = (string?)null,
completedAt = (string?)null,
notBefore = "2025-11-30T12:04:00Z",
reason = (string?)null,
replayOf = (string?)null,
createdBy = "scheduler"
};
}
private static object ExampleJobDetail()
{
return new
{
jobId = Guid.Parse("aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb"),
runId = Guid.Parse("cccccccc-1111-2222-3333-dddddddddddd"),
jobType = "scan.image",
status = "leased",
priority = 5,
attempt = 1,
maxAttempts = 3,
payloadDigest = "sha256:abc123",
payload = "{\"image\":\"alpine:3.18\"}",
idempotencyKey = "claim-12af",
correlationId = "corr-abc",
leaseId = Guid.Parse("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"),
leaseUntil = "2025-11-30T12:05:00Z",
workerId = "worker-7f9",
taskRunnerId = "runner-01",
createdAt = "2025-11-30T12:00:00Z",
scheduledAt = "2025-11-30T12:05:00Z",
leasedAt = "2025-11-30T12:00:15Z",
completedAt = (string?)null,
notBefore = "2025-11-30T12:04:00Z",
reason = (string?)null,
replayOf = (string?)null,
createdBy = "scheduler"
};
}
}
public sealed record OpenApiDiscoveryDocument(
[property: JsonPropertyName("service")] string Service,
[property: JsonPropertyName("specVersion")] string SpecVersion,
[property: JsonPropertyName("version")] string Version,
[property: JsonPropertyName("format")] string Format,
[property: JsonPropertyName("url")] string Url,
[property: JsonPropertyName("errorEnvelopeSchema")] string ErrorEnvelopeSchema,
[property: JsonPropertyName("notifications")] IReadOnlyDictionary<string, string> Notifications);
public sealed record OpenApiSpecDocument(
[property: JsonPropertyName("openapi")] string OpenApi,
[property: JsonPropertyName("info")] OpenApiInfo Info,
[property: JsonPropertyName("paths")] IReadOnlyDictionary<string, object> Paths,
[property: JsonPropertyName("components")] OpenApiComponents Components,
[property: JsonPropertyName("servers")] IReadOnlyList<object>? Servers = null);
public sealed record OpenApiInfo(
[property: JsonPropertyName("title")] string Title,
[property: JsonPropertyName("version")] string Version,
[property: JsonPropertyName("description")] string Description);
public sealed record OpenApiComponents(
[property: JsonPropertyName("schemas")] IReadOnlyDictionary<string, object> Schemas,
[property: JsonPropertyName("headers")] IReadOnlyDictionary<string, object>? Headers = null);

@@ -97,6 +97,24 @@ public sealed record PackRunListResponse(
int TotalCount,
string? NextCursor);
/// <summary>
/// Manifest response summarizing pack run state and log statistics.
/// </summary>
public sealed record PackRunManifestResponse(
Guid PackRunId,
string PackId,
string PackVersion,
string Status,
int Attempt,
int MaxAttempts,
DateTimeOffset CreatedAt,
DateTimeOffset? ScheduledAt,
DateTimeOffset? StartedAt,
DateTimeOffset? CompletedAt,
string? Reason,
long LogCount,
long LatestSequence);
// ========== Task Runner (Worker) Requests/Responses ==========
/// <summary>

@@ -120,6 +120,7 @@ public static class JobEndpoints
try
{
var tenantId = tenantResolver.Resolve(context);
DeprecationHeaders.Apply(context.Response, "/api/v1/orchestrator/jobs/{jobId}");
var job = await repository.GetByIdAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
if (job is null)
@@ -146,6 +147,7 @@ public static class JobEndpoints
try
{
var tenantId = tenantResolver.Resolve(context);
DeprecationHeaders.Apply(context.Response, "/api/v1/orchestrator/jobs");
// Get counts for each status
var pending = await repository.CountAsync(tenantId, Core.Domain.JobStatus.Pending, jobType, projectId, cancellationToken).ConfigureAwait(false);

@@ -0,0 +1,41 @@
using StellaOps.Orchestrator.WebService.Contracts;
namespace StellaOps.Orchestrator.WebService.Endpoints;
/// <summary>
/// OpenAPI discovery and specification endpoints.
/// </summary>
public static class OpenApiEndpoints
{
/// <summary>
/// Maps OpenAPI discovery endpoints.
/// </summary>
public static IEndpointRouteBuilder MapOpenApiEndpoints(this IEndpointRouteBuilder app)
{
app.MapGet("/.well-known/openapi", (HttpContext context) =>
{
var version = OpenApiDocuments.GetServiceVersion();
var discovery = OpenApiDocuments.CreateDiscoveryDocument(version);
context.Response.Headers.CacheControl = "private, max-age=300";
context.Response.Headers.ETag = $"W/\"oas-{version}\"";
context.Response.Headers["X-StellaOps-Service"] = "orchestrator";
context.Response.Headers["X-StellaOps-Api-Version"] = version;
return Results.Json(discovery, OpenApiDocuments.SerializerOptions);
})
.WithName("Orchestrator_OpenApiDiscovery")
.WithTags("OpenAPI");
app.MapGet("/openapi/orchestrator.json", () =>
{
var version = OpenApiDocuments.GetServiceVersion();
var spec = OpenApiDocuments.CreateSpecification(version);
return Results.Json(spec, OpenApiDocuments.SerializerOptions);
})
.WithName("Orchestrator_OpenApiSpec")
.WithTags("OpenAPI");
return app;
}
}

@@ -1,3 +1,4 @@
using System.Globalization;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;
@@ -23,6 +24,11 @@ public static class PackRunEndpoints
private const int MaxExtendSeconds = 1800; // 30 minutes
private const int DefaultLogLimit = 100;
private const int MaxLogLimit = 1000;
private const string PackRunJobType = "pack-run";
private const int PackRunQuotaMaxActive = 10;
private const int PackRunQuotaMaxPerHour = 200;
private const int PackRunQuotaBurst = 20;
private const double PackRunQuotaRefillPerSecond = 1.0;
/// <summary>
/// Maps pack run endpoints to the route builder.
@@ -45,6 +51,10 @@ public static class PackRunEndpoints
.WithName("Orchestrator_ListPackRuns")
.WithDescription("List pack runs with filters");
group.MapGet("{packRunId:guid}/manifest", GetPackRunManifest)
.WithName("Orchestrator_GetPackRunManifest")
.WithDescription("Get pack run manifest including log stats and status");
// Task runner (worker) endpoints
group.MapPost("claim", ClaimPackRun)
.WithName("Orchestrator_ClaimPackRun")
@@ -90,6 +100,7 @@ public static class PackRunEndpoints
[FromBody] SchedulePackRunRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRunRepository packRunRepository,
[FromServices] IQuotaRepository quotaRepository,
[FromServices] IEventPublisher eventPublisher,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
@@ -107,6 +118,12 @@ public static class PackRunEndpoints
"invalid_request", "PackVersion is required", null, null));
}
if (string.IsNullOrWhiteSpace(request.ProjectId))
{
return Results.BadRequest(new PackRunErrorResponse(
"invalid_request", "ProjectId is required", null, null));
}
var tenantId = tenantResolver.Resolve(context);
var now = timeProvider.GetUtcNow();
var parameters = request.Parameters ?? "{}";
@@ -132,7 +149,7 @@ public static class PackRunEndpoints
var packRun = PackRun.Create(
packRunId: packRunId,
tenantId: tenantId,
projectId: request.ProjectId,
projectId: request.ProjectId!.Trim(),
packId: request.PackId,
packVersion: request.PackVersion,
parameters: parameters,
@@ -145,9 +162,49 @@ public static class PackRunEndpoints
metadata: request.Metadata,
createdAt: now);
// Enforce pack-run quota
var quotaResult = await TryConsumePackRunQuotaAsync(quotaRepository, tenantId, context.User?.Identity?.Name ?? "system", now, cancellationToken);
if (!quotaResult.Allowed)
{
if (quotaResult.RetryAfter.HasValue)
{
context.Response.Headers.RetryAfter = ((int)Math.Ceiling(quotaResult.RetryAfter.Value.TotalSeconds)).ToString(CultureInfo.InvariantCulture);
}
return Results.Json(
new PackRunErrorResponse(
"quota_exhausted",
"Pack run quota exceeded",
null,
quotaResult.RetryAfter.HasValue
? (int?)Math.Ceiling(quotaResult.RetryAfter.Value.TotalSeconds)
: null),
statusCode: StatusCodes.Status429TooManyRequests);
}
await packRunRepository.CreateAsync(packRun, cancellationToken);
// Mark as scheduled immediately
await packRunRepository.UpdateStatusAsync(
tenantId,
packRunId,
PackRunStatus.Scheduled,
packRun.Attempt,
null,
null,
null,
now,
null,
null,
null,
null,
null,
null,
null,
cancellationToken);
OrchestratorMetrics.PackRunCreated(tenantId, request.PackId);
OrchestratorMetrics.PackRunScheduled(tenantId, request.PackId);
// Publish event
var envelope = EventEnvelope.Create(
@@ -163,7 +220,7 @@ public static class PackRunEndpoints
packRunId,
request.PackId,
request.PackVersion,
packRun.Status.ToString().ToLowerInvariant(),
PackRunStatus.Scheduled.ToString().ToLowerInvariant(),
idempotencyKey,
now,
WasAlreadyScheduled: false));
@@ -188,6 +245,42 @@ public static class PackRunEndpoints
return Results.Ok(PackRunResponse.FromDomain(packRun));
}
private static async Task<IResult> GetPackRunManifest(
HttpContext context,
[FromRoute] Guid packRunId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRunRepository packRunRepository,
[FromServices] IPackRunLogRepository logRepository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var packRun = await packRunRepository.GetByIdAsync(tenantId, packRunId, cancellationToken);
if (packRun is null)
{
return Results.NotFound(new PackRunErrorResponse(
"not_found", $"Pack run {packRunId} not found", packRunId, null));
}
var (logCount, latestSeq) = await logRepository.GetLogStatsAsync(tenantId, packRunId, cancellationToken);
var response = new PackRunManifestResponse(
PackRunId: packRun.PackRunId,
PackId: packRun.PackId,
PackVersion: packRun.PackVersion,
Status: packRun.Status.ToString().ToLowerInvariant(),
Attempt: packRun.Attempt,
MaxAttempts: packRun.MaxAttempts,
CreatedAt: packRun.CreatedAt,
ScheduledAt: packRun.ScheduledAt,
StartedAt: packRun.StartedAt,
CompletedAt: packRun.CompletedAt,
Reason: packRun.Reason,
LogCount: logCount,
LatestSequence: latestSeq);
return Results.Ok(response);
}
private static async Task<IResult> ListPackRuns(
HttpContext context,
[FromQuery] string? packId,
@@ -403,6 +496,7 @@ public static class PackRunEndpoints
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRunRepository packRunRepository,
[FromServices] IPackRunLogRepository logRepository,
[FromServices] IQuotaRepository quotaRepository,
[FromServices] IArtifactRepository artifactRepository,
[FromServices] IEventPublisher eventPublisher,
[FromServices] TimeProvider timeProvider,
@@ -503,6 +597,8 @@ public static class PackRunEndpoints
OrchestratorMetrics.RecordPackRunDuration(tenantId, packRun.PackId, durationSeconds);
OrchestratorMetrics.RecordPackRunLogCount(tenantId, packRun.PackId, logCount + 1);
await ReleasePackRunQuotaAsync(quotaRepository, tenantId, cancellationToken);
// Publish event
var eventType = request.Success
? OrchestratorEventType.PackRunCompleted
@@ -664,6 +760,7 @@ public static class PackRunEndpoints
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRunRepository packRunRepository,
[FromServices] IPackRunLogRepository logRepository,
[FromServices] IQuotaRepository quotaRepository,
[FromServices] IEventPublisher eventPublisher,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
@@ -709,6 +806,8 @@ public static class PackRunEndpoints
OrchestratorMetrics.PackRunCanceled(tenantId, packRun.PackId);
await ReleasePackRunQuotaAsync(quotaRepository, tenantId, cancellationToken);
// Publish event
var envelope = EventEnvelope.Create(
eventType: OrchestratorEventType.PackRunFailed, // Use Failed for canceled
@@ -818,6 +917,102 @@ public static class PackRunEndpoints
packRun.Metadata);
}
private static async Task<(bool Allowed, TimeSpan? RetryAfter)> TryConsumePackRunQuotaAsync(
IQuotaRepository quotaRepository,
string tenantId,
string actor,
DateTimeOffset now,
CancellationToken cancellationToken)
{
var quota = await quotaRepository.GetByTenantAndJobTypeAsync(tenantId, PackRunJobType, cancellationToken).ConfigureAwait(false)
?? await CreateDefaultPackRunQuotaAsync(quotaRepository, tenantId, actor, now, cancellationToken).ConfigureAwait(false);
var tokens = Math.Min(quota.BurstCapacity, quota.CurrentTokens + (now - quota.LastRefillAt).TotalSeconds * quota.RefillRate);
var hourStart = quota.CurrentHourStart;
var hourCount = quota.CurrentHourCount;
if (now - hourStart >= TimeSpan.FromHours(1))
{
hourStart = now;
hourCount = 0;
}
if (tokens < 1)
{
var deficitSeconds = (1 - tokens) / quota.RefillRate;
return (false, TimeSpan.FromSeconds(Math.Ceiling(deficitSeconds)));
}
if (quota.CurrentActive >= quota.MaxActive)
{
return (false, TimeSpan.FromSeconds(5));
}
if (hourCount >= quota.MaxPerHour)
{
return (false, TimeSpan.FromMinutes(5));
}
tokens -= 1;
hourCount += 1;
await quotaRepository.UpdateStateAsync(
tenantId,
quota.QuotaId,
currentTokens: Math.Max(0, tokens),
lastRefillAt: now,
currentActive: quota.CurrentActive + 1,
currentHourCount: hourCount,
currentHourStart: hourStart,
updatedBy: actor,
cancellationToken: cancellationToken).ConfigureAwait(false);
return (true, null);
}
private static async Task ReleasePackRunQuotaAsync(IQuotaRepository quotaRepository, string tenantId, CancellationToken cancellationToken)
{
var quota = await quotaRepository.GetByTenantAndJobTypeAsync(tenantId, PackRunJobType, cancellationToken).ConfigureAwait(false);
if (quota is null || quota.CurrentActive <= 0)
{
return;
}
await quotaRepository.DecrementActiveAsync(tenantId, quota.QuotaId, cancellationToken).ConfigureAwait(false);
}
private static async Task<Quota> CreateDefaultPackRunQuotaAsync(
IQuotaRepository quotaRepository,
string tenantId,
string actor,
DateTimeOffset now,
CancellationToken cancellationToken)
{
var quota = new Quota(
QuotaId: Guid.NewGuid(),
TenantId: tenantId,
JobType: PackRunJobType,
MaxActive: PackRunQuotaMaxActive,
MaxPerHour: PackRunQuotaMaxPerHour,
BurstCapacity: PackRunQuotaBurst,
RefillRate: PackRunQuotaRefillPerSecond,
CurrentTokens: PackRunQuotaBurst,
LastRefillAt: now,
CurrentActive: 0,
CurrentHourCount: 0,
CurrentHourStart: now,
Paused: false,
PauseReason: null,
QuotaTicket: null,
CreatedAt: now,
UpdatedAt: now,
UpdatedBy: actor);
await quotaRepository.CreateAsync(quota, cancellationToken).ConfigureAwait(false);
OrchestratorMetrics.QuotaCreated(tenantId, PackRunJobType);
return quota;
}
private static string ComputeDigest(string content)
{
var bytes = Encoding.UTF8.GetBytes(content);

@@ -26,6 +26,10 @@ public static class StreamEndpoints
.WithName("Orchestrator_StreamRun")
.WithDescription("Stream real-time run progress updates via SSE");
group.MapGet("pack-runs/{packRunId:guid}", StreamPackRun)
.WithName("Orchestrator_StreamPackRun")
.WithDescription("Stream real-time pack run log and status updates via SSE");
return group;
}
@@ -100,4 +104,38 @@ public static class StreamEndpoints
}
}
}
private static async Task StreamPackRun(
HttpContext context,
[FromRoute] Guid packRunId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRunRepository packRunRepository,
[FromServices] IPackRunStreamCoordinator streamCoordinator,
CancellationToken cancellationToken)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var packRun = await packRunRepository.GetByIdAsync(tenantId, packRunId, cancellationToken).ConfigureAwait(false);
if (packRun is null)
{
context.Response.StatusCode = StatusCodes.Status404NotFound;
await context.Response.WriteAsJsonAsync(new { error = "Pack run not found" }, cancellationToken).ConfigureAwait(false);
return;
}
await streamCoordinator.StreamAsync(context, tenantId, packRun, cancellationToken).ConfigureAwait(false);
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
}
catch (InvalidOperationException ex)
{
if (!context.Response.HasStarted)
{
context.Response.StatusCode = StatusCodes.Status400BadRequest;
await context.Response.WriteAsJsonAsync(new { error = ex.Message }, cancellationToken).ConfigureAwait(false);
}
}
}
}

@@ -21,6 +21,7 @@ builder.Services.AddSingleton(TimeProvider.System);
builder.Services.Configure<StreamOptions>(builder.Configuration.GetSection(StreamOptions.SectionName));
builder.Services.AddSingleton<IJobStreamCoordinator, JobStreamCoordinator>();
builder.Services.AddSingleton<IRunStreamCoordinator, RunStreamCoordinator>();
builder.Services.AddSingleton<IPackRunStreamCoordinator, PackRunStreamCoordinator>();
// Register scale metrics and load shedding services
builder.Services.AddSingleton<ScaleMetrics>();
@@ -34,6 +35,9 @@ if (app.Environment.IsDevelopment())
app.MapOpenApi();
}
// OpenAPI discovery endpoints (available in all environments)
app.MapOpenApiEndpoints();
// Register health endpoints (replaces simple /healthz and /readyz)
app.MapHealthEndpoints();
@@ -45,6 +49,7 @@ app.MapSourceEndpoints();
app.MapRunEndpoints();
app.MapJobEndpoints();
app.MapDagEndpoints();
app.MapPackRunEndpoints();
// Register streaming endpoints
app.MapStreamEndpoints();

@@ -0,0 +1,36 @@
using System.Globalization;
using Microsoft.AspNetCore.Http;
namespace StellaOps.Orchestrator.WebService.Services;
/// <summary>
/// Helper for applying HTTP deprecation metadata to legacy endpoints.
/// </summary>
public static class DeprecationHeaders
{
/// <summary>
/// Apply standard deprecation headers and alternate link hint to the response.
/// </summary>
/// <param name="response">HTTP response to annotate.</param>
/// <param name="alternate">Alternate endpoint that supersedes the deprecated one.</param>
/// <param name="sunset">Optional sunset date (UTC).</param>
public static void Apply(HttpResponse response, string alternate, DateTimeOffset? sunset = null)
{
// RFC 8594 recommends HTTP-date for Sunset; default to a near-term horizon to prompt migrations.
var sunsetValue = (sunset ?? new DateTimeOffset(2026, 03, 31, 0, 0, 0, TimeSpan.Zero))
.UtcDateTime
.ToString("r", CultureInfo.InvariantCulture);
if (!response.Headers.ContainsKey("Deprecation"))
{
response.Headers.Append("Deprecation", "true");
}
// Link: <...>; rel="alternate"; title="Replacement"
var linkValue = $"<{alternate}>; rel=\"alternate\"; title=\"Replacement endpoint\"";
response.Headers.Append("Link", linkValue);
response.Headers.Append("Sunset", sunsetValue);
response.Headers.Append("X-StellaOps-Deprecated", "orchestrator:legacy-endpoint");
}
}

@@ -0,0 +1,200 @@
using System.Text.Json;
using Microsoft.Extensions.Options;
using StellaOps.Orchestrator.Core.Domain;
using StellaOps.Orchestrator.Infrastructure.Repositories;
namespace StellaOps.Orchestrator.WebService.Streaming;
public interface IPackRunStreamCoordinator
{
Task StreamAsync(HttpContext context, string tenantId, PackRun packRun, CancellationToken cancellationToken);
}
/// <summary>
/// Streams pack run status/log updates over SSE.
/// </summary>
public sealed class PackRunStreamCoordinator : IPackRunStreamCoordinator
{
private const int DefaultBatchSize = 200;
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web);
private readonly IPackRunRepository _packRunRepository;
private readonly IPackRunLogRepository _logRepository;
private readonly TimeProvider _timeProvider;
private readonly StreamOptions _options;
private readonly ILogger<PackRunStreamCoordinator> _logger;
public PackRunStreamCoordinator(
IPackRunRepository packRunRepository,
IPackRunLogRepository logRepository,
IOptions<StreamOptions> options,
TimeProvider? timeProvider,
ILogger<PackRunStreamCoordinator> logger)
{
_packRunRepository = packRunRepository ?? throw new ArgumentNullException(nameof(packRunRepository));
_logRepository = logRepository ?? throw new ArgumentNullException(nameof(logRepository));
_options = (options ?? throw new ArgumentNullException(nameof(options))).Value.Validate();
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public async Task StreamAsync(HttpContext context, string tenantId, PackRun packRun, CancellationToken cancellationToken)
{
var response = context.Response;
SseWriter.ConfigureSseHeaders(response);
await SseWriter.WriteRetryAsync(response, _options.ReconnectDelay, cancellationToken).ConfigureAwait(false);
var (logCount, latestSeq) = await _logRepository.GetLogStatsAsync(tenantId, packRun.PackRunId, cancellationToken).ConfigureAwait(false);
await SseWriter.WriteEventAsync(response, "initial", PackRunSnapshotPayload.From(packRun, logCount, latestSeq), SerializerOptions, cancellationToken).ConfigureAwait(false);
await SseWriter.WriteEventAsync(response, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow()), SerializerOptions, cancellationToken).ConfigureAwait(false);
if (IsTerminal(packRun.Status))
{
await EmitCompletedAsync(response, packRun, logCount, latestSeq, cancellationToken).ConfigureAwait(false);
return;
}
var last = packRun;
var lastSeq = latestSeq;
var start = _timeProvider.GetUtcNow();
using var poll = new PeriodicTimer(_options.PollInterval);
using var heartbeat = new PeriodicTimer(_options.HeartbeatInterval);
try
{
// PeriodicTimer permits only one outstanding WaitForNextTickAsync call per timer,
// so keep a single pending task for each and renew it only after it completes.
var pollTask = poll.WaitForNextTickAsync(cancellationToken).AsTask();
var hbTask = heartbeat.WaitForNextTickAsync(cancellationToken).AsTask();
while (!cancellationToken.IsCancellationRequested)
{
if (_timeProvider.GetUtcNow() - start > _options.MaxStreamDuration)
{
await SseWriter.WriteEventAsync(response, "timeout", new { packRunId = last.PackRunId, reason = "Max stream duration reached" }, SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
var completed = await Task.WhenAny(pollTask, hbTask).ConfigureAwait(false);
if (completed == hbTask)
{
var heartbeatTicked = await hbTask.ConfigureAwait(false);
hbTask = heartbeat.WaitForNextTickAsync(cancellationToken).AsTask();
if (heartbeatTicked)
{
await SseWriter.WriteEventAsync(response, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow()), SerializerOptions, cancellationToken).ConfigureAwait(false);
}
continue;
}
var pollTicked = await pollTask.ConfigureAwait(false);
pollTask = poll.WaitForNextTickAsync(cancellationToken).AsTask();
if (pollTicked)
{
var current = await _packRunRepository.GetByIdAsync(tenantId, last.PackRunId, cancellationToken).ConfigureAwait(false);
if (current is null)
{
await SseWriter.WriteEventAsync(response, "notFound", new NotFoundPayload(last.PackRunId.ToString(), "pack-run"), SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
// Send new logs
var batch = await _logRepository.GetLogsAsync(tenantId, current.PackRunId, lastSeq, DefaultBatchSize, cancellationToken).ConfigureAwait(false);
if (batch.Logs.Count > 0)
{
lastSeq = batch.Logs[^1].Sequence;
await SseWriter.WriteEventAsync(response, "logs", batch.Logs.Select(PackRunLogPayload.FromDomain), SerializerOptions, cancellationToken).ConfigureAwait(false);
}
if (HasStatusChanged(last, current))
{
await SseWriter.WriteEventAsync(response, "statusChanged", PackRunSnapshotPayload.From(current, batch.Logs.Count, lastSeq), SerializerOptions, cancellationToken).ConfigureAwait(false);
last = current;
if (IsTerminal(current.Status))
{
await EmitCompletedAsync(response, current, batch.Logs.Count, lastSeq, cancellationToken).ConfigureAwait(false);
break;
}
}
}
}
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
_logger.LogDebug("Pack run stream cancelled for {PackRunId}.", last.PackRunId);
}
}
private static bool HasStatusChanged(PackRun previous, PackRun current)
{
return previous.Status != current.Status || previous.Attempt != current.Attempt || previous.LeaseId != current.LeaseId;
}
private async Task EmitCompletedAsync(HttpResponse response, PackRun packRun, long logCount, long latestSequence, CancellationToken cancellationToken)
{
var durationSeconds = packRun.CompletedAt.HasValue
? (packRun.CompletedAt.Value - (packRun.StartedAt ?? packRun.CreatedAt)).TotalSeconds
: 0;
var payload = new PackRunCompletedPayload(
PackRunId: packRun.PackRunId,
Status: packRun.Status.ToString().ToLowerInvariant(),
CompletedAt: packRun.CompletedAt ?? _timeProvider.GetUtcNow(),
DurationSeconds: durationSeconds,
LogCount: logCount,
LatestSequence: latestSequence);
await SseWriter.WriteEventAsync(response, "completed", payload, SerializerOptions, cancellationToken).ConfigureAwait(false);
}
private static bool IsTerminal(PackRunStatus status) =>
status is PackRunStatus.Succeeded or PackRunStatus.Failed or PackRunStatus.Canceled or PackRunStatus.TimedOut;
}
internal sealed record PackRunSnapshotPayload(
Guid PackRunId,
string Status,
string PackId,
string PackVersion,
int Attempt,
int MaxAttempts,
string? TaskRunnerId,
Guid? LeaseId,
DateTimeOffset CreatedAt,
DateTimeOffset? StartedAt,
DateTimeOffset? CompletedAt,
long LogCount,
long LatestSequence)
{
public static PackRunSnapshotPayload From(PackRun packRun, long logCount, long latestSequence) => new(
packRun.PackRunId,
packRun.Status.ToString().ToLowerInvariant(),
packRun.PackId,
packRun.PackVersion,
packRun.Attempt,
packRun.MaxAttempts,
packRun.TaskRunnerId,
packRun.LeaseId,
packRun.CreatedAt,
packRun.StartedAt,
packRun.CompletedAt,
logCount,
latestSequence);
}
internal sealed record PackRunLogPayload(
long Sequence,
string Level,
string Source,
string Message,
DateTimeOffset Timestamp,
string? Data)
{
public static PackRunLogPayload FromDomain(PackRunLog log) => new(
log.Sequence,
log.Level.ToString().ToLowerInvariant(),
log.Source,
log.Message,
log.Timestamp,
log.Data);
}
internal sealed record PackRunCompletedPayload(
Guid PackRunId,
string Status,
DateTimeOffset CompletedAt,
double DurationSeconds,
long LogCount,
long LatestSequence);

@@ -0,0 +1,23 @@
# StellaOps Orchestrator · Sprint 0152-0001-0002 Mirror
Status mirror for `docs/implplan/SPRINT_0152_0001_0002_orchestrator_ii.md`. Update alongside the sprint file to avoid drift.
| # | Task ID | Status | Notes |
| --- | --- | --- | --- |
| 1 | ORCH-SVC-32-002 | DONE | DAG planner + job state machine implemented. |
| 2 | ORCH-SVC-32-003 | DONE | Read-only REST APIs with pagination/idempotency. |
| 3 | ORCH-SVC-32-004 | DONE | SSE streams, metrics, health probes delivered. |
| 4 | ORCH-SVC-32-005 | DONE | Worker claim/heartbeat/progress/complete endpoints live. |
| 5 | ORCH-SVC-33-001 | DONE | Sources control-plane validation + Postgres repos. |
| 6 | ORCH-SVC-33-002 | DONE | Adaptive rate limiting (token bucket + concurrency + backpressure). |
| 7 | ORCH-SVC-33-003 | DONE | Watermark/backfill manager with duplicate suppression. |
| 8 | ORCH-SVC-33-004 | DONE | Dead-letter store, replay, notifications. |
| 9 | ORCH-SVC-34-001 | DONE | Quotas + SLO burn-rate computation and alerts. |
| 10 | ORCH-SVC-34-002 | DONE | Audit log + run ledger export with signed manifest. |
| 11 | ORCH-SVC-34-003 | DONE | Perf/scale validation + autoscale/load-shed hooks. |
| 12 | ORCH-SVC-34-004 | DONE | GA packaging (Docker/Helm/air-gap bundle/provenance checklist). |
| 13 | ORCH-SVC-35-101 | DONE | Export job class registration + quotas and telemetry. |
| 14 | ORCH-SVC-36-101 | DONE | Export distribution + retention lifecycle metadata. |
| 15 | ORCH-SVC-37-101 | DONE | Scheduled exports, pruning, failure alerting. |
Last synced: 2025-11-30 (UTC).

src/Orchestrator/TASKS.md Normal file
@@ -0,0 +1,26 @@
# Orchestrator · Sprint Mirrors (0151 / 0152)
Local status mirror for orchestration sprints to keep doc and code views aligned. Update this alongside the canonical sprint files:
- `docs/implplan/SPRINT_0151_0001_0001_orchestrator_i.md`
- `docs/implplan/SPRINT_0152_0001_0002_orchestrator_ii.md`
| Sprint | Task ID | Status | Notes |
| --- | --- | --- | --- |
| 0151 | ORCH-OAS-61-001 | DONE | Per-service OpenAPI doc with pagination/idempotency/error envelopes. |
| 0151 | ORCH-OAS-61-002 | DONE | `/.well-known/openapi` discovery and version metadata. |
| 0151 | ORCH-OAS-62-001 | DONE | OpenAPI + SDK smoke tests for pagination and pack-run schedule/retry endpoints. |
| 0151 | ORCH-OAS-63-001 | DONE | Deprecation headers/metadata for legacy job endpoints. |
| 0151 | ORCH-OBS-50-001 | BLOCKED | Waiting on Telemetry Core (Sprint 0174). |
| 0151 | ORCH-OBS-51-001 | BLOCKED | Depends on 50-001 and telemetry schema. |
| 0151 | ORCH-OBS-52-001 | BLOCKED | Needs event schema from Sprint 0150.A. |
| 0151 | ORCH-OBS-53-001 | BLOCKED | Evidence Locker capsule inputs not frozen. |
| 0151 | ORCH-OBS-54-001 | BLOCKED | Provenance attestations depend on 53-001. |
| 0151 | ORCH-OBS-55-001 | BLOCKED | Incident-mode hooks depend on 54-001. |
| 0151 | ORCH-AIRGAP-56-001 | BLOCKED | Await AirGap staleness contracts (Sprint 0120.A). |
| 0151 | ORCH-AIRGAP-56-002 | BLOCKED | Await upstream 56-001. |
| 0151 | ORCH-AIRGAP-57-001 | BLOCKED | Await upstream 56-002. |
| 0151 | ORCH-AIRGAP-58-001 | BLOCKED | Await upstream 57-001. |
| 0151 | ORCH-SVC-32-001 | DONE | Service bootstrap + initial schema/migrations. |
| 0152 | ORCH-SVC-32-002…37-101 | DONE | See `src/Orchestrator/StellaOps.Orchestrator/TASKS.md` for per-task detail. |
Last synced: 2025-11-30 (UTC).

@@ -19,10 +19,13 @@ Execute Task Packs safely and deterministically. Provide remote pack execution,
## Required Reading
- `docs/modules/platform/architecture.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/taskrunner/architecture.md`
- `docs/product-advisories/29-Nov-2025 - Task Pack Orchestration and Automation.md`
- `docs/task-packs/spec.md`, `docs/task-packs/authoring-guide.md`, `docs/task-packs/runbook.md`
## Working Agreement
- 1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
- 2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
- 3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
- 4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
- 3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations; enforce plan-hash binding for every run.
- 4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change; sync sprint Decisions/Risks when advisory-driven changes land.
- 5. Revert to `TODO` if you pause the task without shipping changes; leave notes in commit/PR descriptions for context.


@@ -0,0 +1,16 @@
namespace StellaOps.TaskRunner.Core.Execution;
public interface IPackRunArtifactReader
{
Task<IReadOnlyList<PackRunArtifactRecord>> ListAsync(string runId, CancellationToken cancellationToken);
}
public sealed record PackRunArtifactRecord(
string Name,
string Type,
string? SourcePath,
string? StoredPath,
string Status,
string? Notes,
DateTimeOffset CapturedAt,
string? ExpressionJson = null);


@@ -0,0 +1,6 @@
namespace StellaOps.TaskRunner.Core.Execution;
public interface IPackRunProvenanceWriter
{
Task WriteAsync(PackRunExecutionContext context, PackRunState state, CancellationToken cancellationToken);
}


@@ -2,21 +2,24 @@ using StellaOps.TaskRunner.Core.Planning;
namespace StellaOps.TaskRunner.Core.Execution;
public sealed class PackRunExecutionContext
{
public PackRunExecutionContext(string runId, TaskPackPlan plan, DateTimeOffset requestedAt)
{
ArgumentException.ThrowIfNullOrWhiteSpace(runId);
ArgumentNullException.ThrowIfNull(plan);
RunId = runId;
Plan = plan;
RequestedAt = requestedAt;
}
public string RunId { get; }
public TaskPackPlan Plan { get; }
public DateTimeOffset RequestedAt { get; }
}
public sealed class PackRunExecutionContext
{
public PackRunExecutionContext(string runId, TaskPackPlan plan, DateTimeOffset requestedAt, string? tenantId = null)
{
ArgumentException.ThrowIfNullOrWhiteSpace(runId);
ArgumentNullException.ThrowIfNull(plan);
RunId = runId;
Plan = plan;
RequestedAt = requestedAt;
TenantId = string.IsNullOrWhiteSpace(tenantId) ? null : tenantId.Trim();
}
public string RunId { get; }
public TaskPackPlan Plan { get; }
public DateTimeOffset RequestedAt { get; }
public string? TenantId { get; }
}


@@ -11,16 +11,18 @@ public sealed record PackRunState(
DateTimeOffset RequestedAt,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
IReadOnlyDictionary<string, PackRunStepStateRecord> Steps)
{
public static PackRunState Create(
string runId,
IReadOnlyDictionary<string, PackRunStepStateRecord> Steps,
string? TenantId = null)
{
public static PackRunState Create(
string runId,
string planHash,
TaskPackPlan plan,
TaskPackPlanFailurePolicy failurePolicy,
DateTimeOffset requestedAt,
IReadOnlyDictionary<string, PackRunStepStateRecord> steps,
DateTimeOffset timestamp)
DateTimeOffset timestamp,
string? tenantId = null)
=> new(
runId,
planHash,
@@ -29,8 +31,9 @@ public sealed record PackRunState(
requestedAt,
timestamp,
timestamp,
new ReadOnlyDictionary<string, PackRunStepStateRecord>(new Dictionary<string, PackRunStepStateRecord>(steps, StringComparer.Ordinal)));
}
new ReadOnlyDictionary<string, PackRunStepStateRecord>(new Dictionary<string, PackRunStepStateRecord>(steps, StringComparer.Ordinal)),
tenantId);
}
public sealed record PackRunStepStateRecord(
string StepId,


@@ -74,7 +74,8 @@ public static class PackRunStateFactory
failurePolicy,
context.RequestedAt,
stepRecords,
timestamp);
timestamp,
context.TenantId);
}
private static Dictionary<string, PackRunSimulationNode> IndexSimulation(IReadOnlyList<PackRunSimulationNode> nodes)


@@ -0,0 +1,65 @@
using StellaOps.TaskRunner.Core.Planning;
namespace StellaOps.TaskRunner.Core.Execution;
public static class ProvenanceManifestFactory
{
public static ProvenanceManifest Create(PackRunExecutionContext context, PackRunState state, DateTimeOffset completedAt)
{
ArgumentNullException.ThrowIfNull(context);
ArgumentNullException.ThrowIfNull(state);
var steps = state.Steps.Values
.OrderBy(step => step.StepId, StringComparer.Ordinal)
.Select(step => new ProvenanceStep(
step.StepId,
step.Kind.ToString(),
step.Status.ToString(),
step.Attempts,
step.LastTransitionAt,
step.StatusReason))
.ToList();
var outputs = context.Plan.Outputs
.Select(output => new ProvenanceOutput(output.Name, output.Type))
.ToList();
return new ProvenanceManifest(
context.RunId,
context.TenantId,
context.Plan.Hash,
context.Plan.Metadata.Name,
context.Plan.Metadata.Version,
context.Plan.Metadata.Description,
context.Plan.Metadata.Tags,
context.RequestedAt,
state.CreatedAt,
completedAt,
steps,
outputs);
}
}
public sealed record ProvenanceManifest(
string RunId,
string? TenantId,
string PlanHash,
string PackName,
string PackVersion,
string? PackDescription,
IReadOnlyList<string> PackTags,
DateTimeOffset RequestedAt,
DateTimeOffset CreatedAt,
DateTimeOffset CompletedAt,
IReadOnlyList<ProvenanceStep> Steps,
IReadOnlyList<ProvenanceOutput> Outputs);
public sealed record ProvenanceStep(
string Id,
string Kind,
string Status,
int Attempts,
DateTimeOffset? LastTransitionAt,
string? StatusReason);
public sealed record ProvenanceOutput(string Name, string Type);
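
For orientation, a manifest produced by `ProvenanceManifestFactory` serializes with `JsonSerializerDefaults.Web` (camelCase) to roughly the shape below. Every value here is an illustrative placeholder, not output captured from a real run:

```json
{
  "runId": "run-test",
  "tenantId": "tenant-alpha",
  "planHash": "sha256:…",
  "packName": "sample-pack",
  "packVersion": "1.0.0",
  "packDescription": null,
  "packTags": [],
  "requestedAt": "2025-11-30T10:00:00+00:00",
  "createdAt": "2025-11-30T10:00:00+00:00",
  "completedAt": "2025-11-30T12:30:00+00:00",
  "steps": [
    {
      "id": "step-a",
      "kind": "Run",
      "status": "Pending",
      "attempts": 0,
      "lastTransitionAt": null,
      "statusReason": null
    }
  ],
  "outputs": [
    { "name": "report", "type": "file" }
  ]
}
```

The `kind` and `status` strings come from `ToString()` on the step enums, so the exact enum member names in the codebase determine the real values.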


@@ -118,7 +118,8 @@ public sealed class FilePackRunStateStore : IPackRunStateStore
DateTimeOffset RequestedAt,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
IReadOnlyList<StepDocument> Steps)
IReadOnlyList<StepDocument> Steps,
string? TenantId)
{
public static StateDocument FromDomain(PackRunState state)
{
@@ -147,11 +148,12 @@ public sealed class FilePackRunStateStore : IPackRunStateStore
state.RequestedAt,
state.CreatedAt,
state.UpdatedAt,
steps);
steps,
state.TenantId);
}
public PackRunState ToDomain()
{
public PackRunState ToDomain()
{
var steps = Steps.ToDictionary(
step => step.StepId,
step => new PackRunStepStateRecord(
@@ -177,9 +179,10 @@ public sealed class FilePackRunStateStore : IPackRunStateStore
RequestedAt,
CreatedAt,
UpdatedAt,
steps);
steps,
TenantId);
}
}
}
private sealed record StepDocument(
string StepId,


@@ -0,0 +1,75 @@
using System.Text.Json;
using StellaOps.TaskRunner.Core.Execution;
namespace StellaOps.TaskRunner.Infrastructure.Execution;
public sealed class FilesystemPackRunArtifactReader : IPackRunArtifactReader
{
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web);
private readonly string rootPath;
public FilesystemPackRunArtifactReader(string rootPath)
{
ArgumentException.ThrowIfNullOrWhiteSpace(rootPath);
this.rootPath = Path.GetFullPath(rootPath);
}
public async Task<IReadOnlyList<PackRunArtifactRecord>> ListAsync(string runId, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(runId);
var manifestPath = Path.Combine(rootPath, Sanitize(runId), "artifact-manifest.json");
if (!File.Exists(manifestPath))
{
return Array.Empty<PackRunArtifactRecord>();
}
await using var stream = File.OpenRead(manifestPath);
var manifest = await JsonSerializer.DeserializeAsync<ArtifactManifest>(stream, SerializerOptions, cancellationToken)
.ConfigureAwait(false);
if (manifest is null || manifest.Outputs is null)
{
return Array.Empty<PackRunArtifactRecord>();
}
return manifest.Outputs
.OrderBy(output => output.Name, StringComparer.Ordinal)
.Select(output => new PackRunArtifactRecord(
output.Name,
output.Type,
output.SourcePath,
output.StoredPath,
output.Status,
output.Notes,
manifest.UploadedAt,
output.ExpressionJson))
.ToList();
}
private static string Sanitize(string value)
{
var safe = value.Trim();
foreach (var invalid in Path.GetInvalidFileNameChars())
{
safe = safe.Replace(invalid, '_');
}
return string.IsNullOrWhiteSpace(safe) ? "run" : safe;
}
private sealed record ArtifactManifest(
string RunId,
DateTimeOffset UploadedAt,
List<ArtifactRecord> Outputs);
private sealed record ArtifactRecord(
string Name,
string Type,
string? SourcePath,
string? StoredPath,
string Status,
string? Notes,
string? ExpressionJson);
}
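
The reader above expects an `artifact-manifest.json` under `<rootPath>/<sanitized-runId>/`. A hypothetical manifest matching the private `ArtifactManifest`/`ArtifactRecord` records (camelCase via `JsonSerializerDefaults.Web`; all values illustrative) might look like:

```json
{
  "runId": "run-test",
  "uploadedAt": "2025-11-30T11:00:00+00:00",
  "outputs": [
    {
      "name": "report",
      "type": "file",
      "sourcePath": "out/report.json",
      "storedPath": "artifacts/run-test/report.json",
      "status": "uploaded",
      "notes": null,
      "expressionJson": null
    }
  ]
}
```

Note the returned records are re-sorted by `name` with ordinal comparison, so on-disk ordering in the manifest does not affect the API response.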


@@ -0,0 +1,56 @@
using System.Text.Json;
using StellaOps.TaskRunner.Core.Execution;
namespace StellaOps.TaskRunner.Infrastructure.Execution;
public sealed class FilesystemPackRunProvenanceWriter : IPackRunProvenanceWriter
{
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web)
{
WriteIndented = false
};
private readonly string rootPath;
private readonly TimeProvider timeProvider;
public FilesystemPackRunProvenanceWriter(string rootPath, TimeProvider? timeProvider = null)
{
ArgumentException.ThrowIfNullOrWhiteSpace(rootPath);
this.rootPath = Path.GetFullPath(rootPath);
this.timeProvider = timeProvider ?? TimeProvider.System;
}
public async Task WriteAsync(PackRunExecutionContext context, PackRunState state, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(context);
ArgumentNullException.ThrowIfNull(state);
var completedAt = timeProvider.GetUtcNow();
var manifest = ProvenanceManifestFactory.Create(context, state, completedAt);
var manifestPath = GetPath(context.RunId);
Directory.CreateDirectory(Path.GetDirectoryName(manifestPath)!);
await using var stream = File.Open(manifestPath, FileMode.Create, FileAccess.Write, FileShare.None);
await JsonSerializer.SerializeAsync(stream, manifest, SerializerOptions, cancellationToken).ConfigureAwait(false);
await stream.FlushAsync(cancellationToken).ConfigureAwait(false);
}
private string GetPath(string runId)
{
var safe = Sanitize(runId);
return Path.Combine(rootPath, "provenance", $"{safe}.json");
}
private static string Sanitize(string value)
{
var result = value.Trim();
foreach (var invalid in Path.GetInvalidFileNameChars())
{
result = result.Replace(invalid, '_');
}
return result;
}
}


@@ -82,25 +82,25 @@ public sealed class MongoPackRunApprovalStore : IPackRunApprovalStore
.ConfigureAwait(false);
}
private static void EnsureIndexes(IMongoCollection<PackRunApprovalDocument> target)
public static IEnumerable<CreateIndexModel<PackRunApprovalDocument>> GetIndexModels()
{
var models = new[]
{
new CreateIndexModel<PackRunApprovalDocument>(
Builders<PackRunApprovalDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.ApprovalId),
new CreateIndexOptions { Unique = true }),
new CreateIndexModel<PackRunApprovalDocument>(
Builders<PackRunApprovalDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Status))
};
yield return new CreateIndexModel<PackRunApprovalDocument>(
Builders<PackRunApprovalDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.ApprovalId),
new CreateIndexOptions { Unique = true, Name = "pack_run_approvals_run_approval" });
target.Indexes.CreateMany(models);
yield return new CreateIndexModel<PackRunApprovalDocument>(
Builders<PackRunApprovalDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Status),
new CreateIndexOptions { Name = "pack_run_approvals_run_status" });
}
private sealed class PackRunApprovalDocument
private static void EnsureIndexes(IMongoCollection<PackRunApprovalDocument> target)
=> target.Indexes.CreateMany(GetIndexModels());
public sealed class PackRunApprovalDocument
{
[BsonId]
public ObjectId Id { get; init; }


@@ -0,0 +1,42 @@
using MongoDB.Driver;
using StellaOps.TaskRunner.Core.Configuration;
using StellaOps.TaskRunner.Core.Execution;
namespace StellaOps.TaskRunner.Infrastructure.Execution;
public sealed class MongoPackRunArtifactReader : IPackRunArtifactReader
{
private readonly IMongoCollection<MongoPackRunArtifactUploader.PackRunArtifactDocument> collection;
public MongoPackRunArtifactReader(IMongoDatabase database, TaskRunnerMongoOptions options)
{
ArgumentNullException.ThrowIfNull(database);
ArgumentNullException.ThrowIfNull(options);
collection = database.GetCollection<MongoPackRunArtifactUploader.PackRunArtifactDocument>(options.ArtifactsCollection);
}
public async Task<IReadOnlyList<PackRunArtifactRecord>> ListAsync(string runId, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(runId);
var filter = Builders<MongoPackRunArtifactUploader.PackRunArtifactDocument>.Filter.Eq(doc => doc.RunId, runId);
var documents = await collection
.Find(filter)
.SortBy(doc => doc.Name)
.ToListAsync(cancellationToken)
.ConfigureAwait(false);
return documents
.Select(doc => new PackRunArtifactRecord(
doc.Name,
doc.Type,
doc.SourcePath,
doc.StoredPath,
doc.Status,
doc.Notes,
new DateTimeOffset(doc.CapturedAt, TimeSpan.Zero),
doc.Expression?.ToJson()))
.ToList();
}
}


@@ -149,24 +149,23 @@ public sealed class MongoPackRunArtifactUploader : IPackRunArtifactUploader
return parameter.Value;
}
private static void EnsureIndexes(IMongoCollection<PackRunArtifactDocument> target)
public static IEnumerable<CreateIndexModel<PackRunArtifactDocument>> GetIndexModels()
{
var models = new[]
{
new CreateIndexModel<PackRunArtifactDocument>(
Builders<PackRunArtifactDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Name),
new CreateIndexOptions { Unique = true }),
new CreateIndexModel<PackRunArtifactDocument>(
Builders<PackRunArtifactDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Status))
};
yield return new CreateIndexModel<PackRunArtifactDocument>(
Builders<PackRunArtifactDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Name),
new CreateIndexOptions { Unique = true, Name = "pack_artifacts_run_name" });
target.Indexes.CreateMany(models);
yield return new CreateIndexModel<PackRunArtifactDocument>(
Builders<PackRunArtifactDocument>.IndexKeys
.Ascending(document => document.RunId),
new CreateIndexOptions { Name = "pack_artifacts_run" });
}
private static void EnsureIndexes(IMongoCollection<PackRunArtifactDocument> target)
=> target.Indexes.CreateMany(GetIndexModels());
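
Expressed as MongoDB index documents, the two models returned by `GetIndexModels` correspond approximately to the following (element names assume the driver's default property-name mapping; adjust if a camelCase convention pack is registered):

```json
[
  { "name": "pack_artifacts_run_name", "key": { "RunId": 1, "Name": 1 }, "unique": true },
  { "name": "pack_artifacts_run", "key": { "RunId": 1 } }
]
```

Naming the indexes explicitly keeps `createIndexes` idempotent across releases and makes the definitions exportable for offline provisioning.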
public sealed class PackRunArtifactDocument
{
[BsonId]


@@ -89,24 +89,24 @@ public sealed class MongoPackRunLogStore : IPackRunLogStore
.ConfigureAwait(false);
}
private static void EnsureIndexes(IMongoCollection<PackRunLogDocument> target)
public static IEnumerable<CreateIndexModel<PackRunLogDocument>> GetIndexModels()
{
var models = new[]
{
new CreateIndexModel<PackRunLogDocument>(
Builders<PackRunLogDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Sequence),
new CreateIndexOptions { Unique = true }),
new CreateIndexModel<PackRunLogDocument>(
Builders<PackRunLogDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Timestamp))
};
yield return new CreateIndexModel<PackRunLogDocument>(
Builders<PackRunLogDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Sequence),
new CreateIndexOptions { Unique = true, Name = "pack_run_logs_run_sequence" });
target.Indexes.CreateMany(models);
yield return new CreateIndexModel<PackRunLogDocument>(
Builders<PackRunLogDocument>.IndexKeys
.Ascending(document => document.RunId)
.Ascending(document => document.Timestamp),
new CreateIndexOptions { Name = "pack_run_logs_run_timestamp" });
}
private static void EnsureIndexes(IMongoCollection<PackRunLogDocument> target)
=> target.Indexes.CreateMany(GetIndexModels());
public sealed class PackRunLogDocument
{
[BsonId]


@@ -0,0 +1,67 @@
using System.Text.Json;
using MongoDB.Bson;
using MongoDB.Driver;
using StellaOps.TaskRunner.Core.Configuration;
using StellaOps.TaskRunner.Core.Execution;
namespace StellaOps.TaskRunner.Infrastructure.Execution;
public sealed class MongoPackRunProvenanceWriter : IPackRunProvenanceWriter
{
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web);
private readonly IMongoCollection<ProvenanceDocument> collection;
private readonly TimeProvider timeProvider;
public MongoPackRunProvenanceWriter(IMongoDatabase database, TaskRunnerMongoOptions options, TimeProvider? timeProvider = null)
{
ArgumentNullException.ThrowIfNull(database);
ArgumentNullException.ThrowIfNull(options);
collection = database.GetCollection<ProvenanceDocument>(options.ArtifactsCollection);
this.timeProvider = timeProvider ?? TimeProvider.System;
}
public async Task WriteAsync(PackRunExecutionContext context, PackRunState state, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(context);
ArgumentNullException.ThrowIfNull(state);
var completedAt = timeProvider.GetUtcNow();
var manifest = ProvenanceManifestFactory.Create(context, state, completedAt);
var manifestJson = JsonSerializer.Serialize(manifest, SerializerOptions);
var manifestDocument = BsonDocument.Parse(manifestJson);
var document = new ProvenanceDocument
{
RunId = context.RunId,
Name = "provenance-manifest",
Type = "object",
Status = "materialized",
CapturedAt = completedAt.UtcDateTime,
Expression = manifestDocument
};
var filter = Builders<ProvenanceDocument>.Filter.And(
Builders<ProvenanceDocument>.Filter.Eq(doc => doc.RunId, context.RunId),
Builders<ProvenanceDocument>.Filter.Eq(doc => doc.Name, document.Name));
var options = new ReplaceOptions { IsUpsert = true };
await collection.ReplaceOneAsync(filter, document, options, cancellationToken).ConfigureAwait(false);
}
private sealed class ProvenanceDocument
{
public string RunId { get; init; } = default!;
public string Name { get; init; } = default!;
public string Type { get; init; } = default!;
public string Status { get; init; } = default!;
public DateTime CapturedAt { get; init; }
public BsonDocument Expression { get; init; } = default!;
}
}
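
A provenance upsert from this writer lands in the artifacts collection as a document of roughly the shape below (values illustrative). The outer fields use the document's PascalCase property names, while the nested `Expression` holds the camelCase manifest JSON parsed into BSON:

```json
{
  "RunId": "run-1",
  "Name": "provenance-manifest",
  "Type": "object",
  "Status": "materialized",
  "CapturedAt": "2025-11-30T12:00:00Z",
  "Expression": {
    "runId": "run-1",
    "tenantId": "tenant-alpha",
    "planHash": "sha256:…"
  }
}
```

Because the filter matches on `RunId` plus the fixed `Name`, re-running a pack replaces its manifest in place rather than accumulating duplicates.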


@@ -62,20 +62,23 @@ public sealed class MongoPackRunStateStore : IPackRunStateStore
.ToList();
}
private static void EnsureIndexes(IMongoCollection<PackRunStateDocument> target)
public static IEnumerable<CreateIndexModel<PackRunStateDocument>> GetIndexModels()
{
var models = new[]
{
new CreateIndexModel<PackRunStateDocument>(
Builders<PackRunStateDocument>.IndexKeys.Descending(document => document.UpdatedAt)),
new CreateIndexModel<PackRunStateDocument>(
Builders<PackRunStateDocument>.IndexKeys.Ascending(document => document.PlanHash))
};
yield return new CreateIndexModel<PackRunStateDocument>(
Builders<PackRunStateDocument>.IndexKeys.Descending(document => document.UpdatedAt),
new CreateIndexOptions { Name = "pack_runs_updatedAt_desc" });
target.Indexes.CreateMany(models);
yield return new CreateIndexModel<PackRunStateDocument>(
Builders<PackRunStateDocument>.IndexKeys
.Ascending(document => document.TenantId)
.Descending(document => document.UpdatedAt),
new CreateIndexOptions { Name = "pack_runs_tenant_updatedAt_desc", Sparse = true });
}
private sealed class PackRunStateDocument
private static void EnsureIndexes(IMongoCollection<PackRunStateDocument> target)
=> target.Indexes.CreateMany(GetIndexModels());
public sealed class PackRunStateDocument
{
[BsonId]
public string RunId { get; init; } = default!;
@@ -94,6 +97,8 @@ public sealed class MongoPackRunStateStore : IPackRunStateStore
public List<PackRunStepDocument> Steps { get; init; } = new();
public string? TenantId { get; init; }
public static PackRunStateDocument FromDomain(PackRunState state)
{
var planDocument = BsonDocument.Parse(JsonSerializer.Serialize(state.Plan, SerializerOptions));
@@ -113,7 +118,8 @@ public sealed class MongoPackRunStateStore : IPackRunStateStore
RequestedAt = state.RequestedAt.UtcDateTime,
CreatedAt = state.CreatedAt.UtcDateTime,
UpdatedAt = state.UpdatedAt.UtcDateTime,
Steps = steps
Steps = steps,
TenantId = state.TenantId
};
}
@@ -139,11 +145,12 @@ public sealed class MongoPackRunStateStore : IPackRunStateStore
new DateTimeOffset(RequestedAt, TimeSpan.Zero),
new DateTimeOffset(CreatedAt, TimeSpan.Zero),
new DateTimeOffset(UpdatedAt, TimeSpan.Zero),
new ReadOnlyDictionary<string, PackRunStepStateRecord>(stepRecords));
new ReadOnlyDictionary<string, PackRunStepStateRecord>(stepRecords),
TenantId);
}
}
private sealed class PackRunStepDocument
public sealed class PackRunStepDocument
{
public string StepId { get; init; } = default!;


@@ -48,6 +48,16 @@ public sealed class PackRunApprovalDecisionService
return PackRunApprovalDecisionResult.NotFound;
}
if (!string.Equals(state.PlanHash, request.PlanHash, StringComparison.Ordinal))
{
_logger.LogWarning(
"Approval decision for run {RunId} rejected plan hash mismatch (expected {Expected}, got {Actual}).",
runId,
state.PlanHash,
request.PlanHash);
return PackRunApprovalDecisionResult.PlanHashMismatch;
}
var requestedAt = state.RequestedAt != default ? state.RequestedAt : state.CreatedAt;
var coordinator = PackRunApprovalCoordinator.Restore(state.Plan, approvals, requestedAt);
@@ -96,6 +106,7 @@ public sealed class PackRunApprovalDecisionService
public sealed record PackRunApprovalDecisionRequest(
string RunId,
string ApprovalId,
string PlanHash,
PackRunApprovalDecisionType Decision,
string? ActorId,
string? Summary);
@@ -110,6 +121,7 @@ public enum PackRunApprovalDecisionType
public sealed record PackRunApprovalDecisionResult(string Status)
{
public static PackRunApprovalDecisionResult NotFound { get; } = new("not_found");
public static PackRunApprovalDecisionResult PlanHashMismatch { get; } = new("plan_hash_mismatch");
public static PackRunApprovalDecisionResult Applied { get; } = new("applied");
public static PackRunApprovalDecisionResult Resumed { get; } = new("resumed");
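
Assuming the default camelCase JSON binding for `PackRunApprovalDecisionRequest`, a decision now has to echo the plan hash it was reviewed against; a hypothetical payload:

```json
{
  "runId": "run-1",
  "approvalId": "security-review",
  "planHash": "sha256:…",
  "decision": "approved",
  "actorId": "approver@example.com",
  "summary": "LGTM"
}
```

A stale or wrong `planHash` yields the new `plan_hash_mismatch` status; the other possible statuses remain `not_found`, `applied`, and `resumed`.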


@@ -39,7 +39,7 @@ public sealed class PackRunApprovalDecisionServiceTests
NullLogger<PackRunApprovalDecisionService>.Instance);
var result = await service.ApplyAsync(
new PackRunApprovalDecisionRequest("run-1", "security-review", PackRunApprovalDecisionType.Approved, "approver@example.com", "LGTM"),
new PackRunApprovalDecisionRequest("run-1", "security-review", plan.Hash, PackRunApprovalDecisionType.Approved, "approver@example.com", "LGTM"),
CancellationToken.None);
Assert.Equal("resumed", result.Status);
@@ -62,13 +62,51 @@ public sealed class PackRunApprovalDecisionServiceTests
NullLogger<PackRunApprovalDecisionService>.Instance);
var result = await service.ApplyAsync(
new PackRunApprovalDecisionRequest("missing", "approval", PackRunApprovalDecisionType.Approved, "actor", null),
new PackRunApprovalDecisionRequest("missing", "approval", "hash", PackRunApprovalDecisionType.Approved, "actor", null),
CancellationToken.None);
Assert.Equal("not_found", result.Status);
Assert.False(scheduler.ScheduledContexts.Any());
}
[Fact]
public async Task ApplyAsync_ReturnsPlanHashMismatchWhenIncorrect()
{
var plan = TestPlanFactory.CreatePlan();
var state = TestPlanFactory.CreateState("run-1", plan);
var approval = new PackRunApprovalState(
"security-review",
new[] { "Packs.Approve" },
new[] { "step-a" },
Array.Empty<string>(),
null,
DateTimeOffset.UtcNow.AddMinutes(-5),
PackRunApprovalStatus.Pending);
var approvalStore = new InMemoryApprovalStore(new Dictionary<string, IReadOnlyList<PackRunApprovalState>>
{
["run-1"] = new List<PackRunApprovalState> { approval }
});
var stateStore = new InMemoryStateStore(new Dictionary<string, PackRunState>
{
["run-1"] = state
});
var scheduler = new RecordingScheduler();
var service = new PackRunApprovalDecisionService(
approvalStore,
stateStore,
scheduler,
NullLogger<PackRunApprovalDecisionService>.Instance);
var result = await service.ApplyAsync(
new PackRunApprovalDecisionRequest("run-1", "security-review", "wrong-hash", PackRunApprovalDecisionType.Approved, "actor", null),
CancellationToken.None);
Assert.Equal("plan_hash_mismatch", result.Status);
Assert.False(scheduler.ScheduledContexts.Any());
}
private sealed class InMemoryApprovalStore : IPackRunApprovalStore
{
private readonly Dictionary<string, List<PackRunApprovalState>> _approvals;


@@ -0,0 +1,95 @@
using System.Text.Json;
using System.Text.Json.Nodes;
using MongoDB.Driver;
using StellaOps.TaskRunner.Core.Execution;
using StellaOps.TaskRunner.Core.Execution.Simulation;
using StellaOps.TaskRunner.Core.Planning;
using StellaOps.TaskRunner.Core.TaskPacks;
using StellaOps.TaskRunner.Infrastructure.Execution;
using Xunit;
namespace StellaOps.TaskRunner.Tests;
public sealed class PackRunProvenanceWriterTests
{
[Fact]
public async Task Filesystem_writer_emits_manifest()
{
var (context, state) = CreateRunState();
var completedAt = new DateTimeOffset(2025, 11, 30, 12, 30, 0, TimeSpan.Zero);
var temp = Directory.CreateTempSubdirectory();
try
{
var ct = TestContext.Current.CancellationToken;
var writer = new FilesystemPackRunProvenanceWriter(temp.FullName, new FixedTimeProvider(completedAt));
await writer.WriteAsync(context, state, ct);
var path = Path.Combine(temp.FullName, "provenance", "run-test.json");
Assert.True(File.Exists(path));
using var document = JsonDocument.Parse(await File.ReadAllTextAsync(path, ct));
var root = document.RootElement;
Assert.Equal("run-test", root.GetProperty("runId").GetString());
Assert.Equal("tenant-alpha", root.GetProperty("tenantId").GetString());
Assert.Equal(context.Plan.Hash, root.GetProperty("planHash").GetString());
Assert.Equal(completedAt, root.GetProperty("completedAt").GetDateTimeOffset());
}
finally
{
temp.Delete(recursive: true);
}
}
[Fact]
public async Task Mongo_writer_upserts_manifest()
{
await using var mongo = MongoTaskRunnerTestContext.Create();
var (context, state) = CreateRunState();
var completedAt = new DateTimeOffset(2025, 11, 30, 12, 0, 0, TimeSpan.Zero);
var ct = TestContext.Current.CancellationToken;
var options = mongo.CreateMongoOptions();
var writer = new MongoPackRunProvenanceWriter(mongo.Database, options, new FixedTimeProvider(completedAt));
await writer.WriteAsync(context, state, ct);
var collection = mongo.Database.GetCollection<MongoDB.Bson.BsonDocument>(options.ArtifactsCollection);
var saved = await collection
.Find(Builders<MongoDB.Bson.BsonDocument>.Filter.Eq("RunId", context.RunId))
.FirstOrDefaultAsync(ct);
Assert.NotNull(saved);
var manifest = saved!["Expression"].AsBsonDocument;
Assert.Equal("run-test", manifest["runId"].AsString);
Assert.Equal("tenant-alpha", manifest["tenantId"].AsString);
Assert.Equal(context.Plan.Hash, manifest["planHash"].AsString);
}
private static (PackRunExecutionContext Context, PackRunState State) CreateRunState()
{
var loader = new TaskPackManifestLoader();
var planner = new TaskPackPlanner();
var manifest = loader.Deserialize(TestManifests.Sample);
var plan = planner.Plan(manifest, new Dictionary<string, JsonNode?>()).Plan ?? throw new InvalidOperationException("Plan generation failed.");
var graphBuilder = new PackRunExecutionGraphBuilder();
var simulationEngine = new PackRunSimulationEngine();
var graph = graphBuilder.Build(plan);
var requestedAt = new DateTimeOffset(2025, 11, 30, 10, 0, 0, TimeSpan.Zero);
var context = new PackRunExecutionContext("run-test", plan, requestedAt, "tenant-alpha");
var state = PackRunStateFactory.CreateInitialState(context, graph, simulationEngine, requestedAt);
return (context, state);
}
private sealed class FixedTimeProvider : TimeProvider
{
private readonly DateTimeOffset now;
public FixedTimeProvider(DateTimeOffset now)
{
this.now = now;
}
public override DateTimeOffset GetUtcNow() => now;
}
}


@@ -24,6 +24,10 @@
<ProjectReference Include="..\..\..\AirGap\StellaOps.AirGap.Policy\StellaOps.AirGap.Policy\StellaOps.AirGap.Policy.csproj" />
</ItemGroup>
<ItemGroup>
<Compile Include="..\StellaOps.TaskRunner.WebService\OpenApiMetadataFactory.cs" Link="Web/OpenApiMetadataFactory.cs" />
</ItemGroup>
<ItemGroup>
<Content Include="xunit.runner.json" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>


@@ -1,12 +1,15 @@
using System.Collections.ObjectModel;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.Json;
using System.Text.Json.Nodes;
using MongoDB.Driver;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;
using StellaOps.TaskRunner.Core.Configuration;
using StellaOps.TaskRunner.Core.Execution;
using StellaOps.TaskRunner.Core.Execution.Simulation;
using StellaOps.TaskRunner.Core.Planning;
@@ -22,6 +25,7 @@ builder.Services.AddSingleton<TaskPackManifestLoader>();
builder.Services.AddSingleton<TaskPackPlanner>();
builder.Services.AddSingleton<PackRunSimulationEngine>();
builder.Services.AddSingleton<PackRunExecutionGraphBuilder>();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddStellaOpsTelemetry(
builder.Configuration,
serviceName: "StellaOps.TaskRunner.WebService",
@@ -52,6 +56,7 @@ if (string.Equals(storageOptions.Mode, TaskRunnerStorageModes.Mongo, StringCompa
builder.Services.AddSingleton<IPackRunStateStore, MongoPackRunStateStore>();
builder.Services.AddSingleton<IPackRunLogStore, MongoPackRunLogStore>();
builder.Services.AddSingleton<IPackRunApprovalStore, MongoPackRunApprovalStore>();
builder.Services.AddSingleton<IPackRunArtifactReader, MongoPackRunArtifactReader>();
}
else
{
@@ -70,6 +75,11 @@ else
var options = sp.GetRequiredService<IOptions<TaskRunnerServiceOptions>>().Value;
return new FilePackRunLogStore(options.LogsPath);
});
builder.Services.AddSingleton<IPackRunArtifactReader>(sp =>
{
var options = sp.GetRequiredService<IOptions<TaskRunnerServiceOptions>>().Value;
return new FilesystemPackRunArtifactReader(options.ArtifactsPath);
});
}
builder.Services.AddSingleton(sp =>
@@ -83,10 +93,7 @@ builder.Services.AddOpenApi();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.MapOpenApi();
}
app.MapOpenApi("/openapi");
app.MapPost("/v1/task-runner/simulations", async (
[FromBody] SimulationRequest request,
@@ -126,7 +133,35 @@ app.MapPost("/v1/task-runner/simulations", async (
return Results.Ok(response);
}).WithName("SimulateTaskPack");
app.MapPost("/v1/task-runner/runs", async (
app.MapPost("/v1/task-runner/runs", HandleCreateRun).WithName("CreatePackRun");
app.MapPost("/api/runs", HandleCreateRun).WithName("CreatePackRunApi");
app.MapGet("/v1/task-runner/runs/{runId}", HandleGetRunState).WithName("GetRunState");
app.MapGet("/api/runs/{runId}", HandleGetRunState).WithName("GetRunStateApi");
app.MapGet("/v1/task-runner/runs/{runId}/logs", HandleStreamRunLogs).WithName("StreamRunLogs");
app.MapGet("/api/runs/{runId}/logs", HandleStreamRunLogs).WithName("StreamRunLogsApi");
app.MapGet("/v1/task-runner/runs/{runId}/artifacts", HandleListArtifacts).WithName("ListRunArtifacts");
app.MapGet("/api/runs/{runId}/artifacts", HandleListArtifacts).WithName("ListRunArtifactsApi");
app.MapPost("/v1/task-runner/runs/{runId}/approvals/{approvalId}", HandleApplyApprovalDecision).WithName("ApplyApprovalDecision");
app.MapPost("/api/runs/{runId}/approvals/{approvalId}", HandleApplyApprovalDecision).WithName("ApplyApprovalDecisionApi");
app.MapPost("/v1/task-runner/runs/{runId}/cancel", HandleCancelRun).WithName("CancelRun");
app.MapPost("/api/runs/{runId}/cancel", HandleCancelRun).WithName("CancelRunApi");
app.MapGet("/.well-known/openapi", (HttpResponse response) =>
{
var metadata = OpenApiMetadataFactory.Create("/openapi");
response.Headers.ETag = metadata.ETag;
response.Headers.Append("X-Signature", metadata.Signature);
return Results.Ok(metadata);
}).WithName("GetOpenApiMetadata");
app.MapGet("/", () => Results.Redirect("/openapi"));
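
The duplicated `/api` and `/v1/task-runner` routes share the same handlers, so a create-run call on either path accepts the same body. A hypothetical request carrying the new optional tenant (only `manifest` and `tenantId` are visible `CreateRunRequest` fields in this diff; the manifest content is a placeholder):

```json
{
  "manifest": "<task pack manifest document>",
  "tenantId": "tenant-alpha"
}
```

When `tenantId` is present it is threaded into `PackRunExecutionContext`, persisted on the run state, and echoed in the run-created log metadata.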
async Task<IResult> HandleCreateRun(
[FromBody] CreateRunRequest request,
TaskPackManifestLoader loader,
TaskPackPlanner planner,
@@ -135,7 +170,7 @@ app.MapPost("/v1/task-runner/runs", async (
IPackRunStateStore stateStore,
IPackRunLogStore logStore,
IPackRunJobScheduler scheduler,
CancellationToken cancellationToken) =>
CancellationToken cancellationToken)
{
if (request is null || string.IsNullOrWhiteSpace(request.Manifest))
{
@@ -174,7 +209,7 @@ app.MapPost("/v1/task-runner/runs", async (
}
var requestedAt = DateTimeOffset.UtcNow;
var context = new PackRunExecutionContext(runId, plan, requestedAt);
var context = new PackRunExecutionContext(runId, plan, requestedAt, request.TenantId);
var graph = executionGraphBuilder.Build(plan);
var state = PackRunStateFactory.CreateInitialState(context, graph, simulationEngine, requestedAt);
@@ -194,9 +229,15 @@ app.MapPost("/v1/task-runner/runs", async (
return Results.StatusCode(StatusCodes.Status500InternalServerError);
}
var metadata = new Dictionary<string, string>(StringComparer.Ordinal);
metadata["planHash"] = plan.Hash;
metadata["requestedAt"] = requestedAt.ToUniversalTime().ToString("O", CultureInfo.InvariantCulture);
var metadata = new Dictionary<string, string>(StringComparer.Ordinal)
{
["planHash"] = plan.Hash,
["requestedAt"] = requestedAt.ToUniversalTime().ToString("O", CultureInfo.InvariantCulture)
};
if (!string.IsNullOrWhiteSpace(context.TenantId))
{
metadata["tenantId"] = context.TenantId!;
}
await logStore.AppendAsync(
runId,
@@ -205,31 +246,31 @@ app.MapPost("/v1/task-runner/runs", async (
var response = RunStateMapper.ToResponse(state);
return Results.Created($"/v1/task-runner/runs/{runId}", response);
}).WithName("CreatePackRun");
}
app.MapGet("/v1/task-runner/runs/{runId}", async (
async Task<IResult> HandleGetRunState(
string runId,
IPackRunStateStore stateStore,
CancellationToken cancellationToken) =>
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(runId))
{
return Results.BadRequest(new { error = "runId is required." });
}
var state = await stateStore.GetAsync(runId, cancellationToken).ConfigureAwait(false);
if (state is null)
{
return Results.NotFound();
}
return Results.Ok(RunStateMapper.ToResponse(state));
}).WithName("GetRunState");
}
app.MapGet("/v1/task-runner/runs/{runId}/logs", async (
async Task<IResult> HandleStreamRunLogs(
string runId,
IPackRunLogStore logStore,
CancellationToken cancellationToken) =>
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(runId))
{
@@ -248,14 +289,14 @@ app.MapGet("/v1/task-runner/runs/{runId}/logs", async (
await RunLogMapper.WriteAsync(stream, entry, ct).ConfigureAwait(false);
}
}, "application/x-ndjson");
}).WithName("StreamRunLogs");
}
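The log stream endpoint above emits `application/x-ndjson`: one JSON object per line, flushed as entries arrive. A minimal Python sketch of that framing (field names are illustrative, not the actual `RunLogMapper` shape):

```python
import json

def to_ndjson(entries):
    # NDJSON framing: each entry serialized compactly on its own line,
    # with sorted keys so the output is deterministic.
    return "".join(
        json.dumps(entry, separators=(",", ":"), sort_keys=True) + "\n"
        for entry in entries
    )
```

Consumers can then parse the stream incrementally with one `json.loads` per line.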
app.MapPost("/v1/task-runner/runs/{runId}/approvals/{approvalId}", async (
async Task<IResult> HandleApplyApprovalDecision(
string runId,
string approvalId,
[FromBody] ApprovalDecisionDto request,
PackRunApprovalDecisionService decisionService,
CancellationToken cancellationToken) =>
CancellationToken cancellationToken)
{
if (request is null)
{
@@ -267,8 +308,13 @@ app.MapPost("/v1/task-runner/runs/{runId}/approvals/{approvalId}", async (
return Results.BadRequest(new { error = "Invalid decision. Expected approved, rejected, or expired." });
}
if (string.IsNullOrWhiteSpace(request.PlanHash))
{
return Results.BadRequest(new { error = "planHash is required." });
}
var result = await decisionService.ApplyAsync(
new PackRunApprovalDecisionRequest(runId, approvalId, decisionType, request.ActorId, request.Summary),
new PackRunApprovalDecisionRequest(runId, approvalId, request.PlanHash, decisionType, request.ActorId, request.Summary),
cancellationToken).ConfigureAwait(false);
if (ReferenceEquals(result, PackRunApprovalDecisionResult.NotFound))
@@ -276,18 +322,105 @@ app.MapPost("/v1/task-runner/runs/{runId}/approvals/{approvalId}", async (
return Results.NotFound();
}
if (ReferenceEquals(result, PackRunApprovalDecisionResult.PlanHashMismatch))
{
return Results.Conflict(new { error = "Plan hash mismatch." });
}
return Results.Ok(new
{
status = result.Status,
resumed = result.ShouldResume
});
}).WithName("ApplyApprovalDecision");
}
app.MapGet("/", () => Results.Redirect("/openapi"));
app.Run();
static IDictionary<string, JsonNode?>? ConvertInputs(JsonObject? node)
async Task<IResult> HandleListArtifacts(
string runId,
IPackRunStateStore stateStore,
IPackRunArtifactReader artifactReader,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(runId))
{
return Results.BadRequest(new { error = "runId is required." });
}
var state = await stateStore.GetAsync(runId, cancellationToken).ConfigureAwait(false);
if (state is null)
{
return Results.NotFound();
}
var artifacts = await artifactReader.ListAsync(runId, cancellationToken).ConfigureAwait(false);
var response = artifacts
.Select(artifact => new
{
artifact.Name,
artifact.Type,
artifact.SourcePath,
artifact.StoredPath,
artifact.Status,
artifact.Notes,
artifact.CapturedAt,
artifact.ExpressionJson
})
.ToList();
return Results.Ok(response);
}
async Task<IResult> HandleCancelRun(
string runId,
IPackRunStateStore stateStore,
IPackRunLogStore logStore,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(runId))
{
return Results.BadRequest(new { error = "runId is required." });
}
var state = await stateStore.GetAsync(runId, cancellationToken).ConfigureAwait(false);
if (state is null)
{
return Results.NotFound();
}
var now = DateTimeOffset.UtcNow;
var updatedSteps = state.Steps.Values
.Select(step => step.Status is PackRunStepExecutionStatus.Succeeded or PackRunStepExecutionStatus.Skipped
? step
: step with
{
Status = PackRunStepExecutionStatus.Skipped,
StatusReason = "cancelled",
LastTransitionAt = now,
NextAttemptAt = null
})
.ToDictionary(step => step.StepId, step => step, StringComparer.Ordinal);
var updatedState = state with
{
UpdatedAt = now,
Steps = new ReadOnlyDictionary<string, PackRunStepStateRecord>(updatedSteps)
};
await stateStore.SaveAsync(updatedState, cancellationToken).ConfigureAwait(false);
var metadata = new Dictionary<string, string>(StringComparer.Ordinal)
{
["planHash"] = state.PlanHash
};
await logStore.AppendAsync(runId, new PackRunLogEntry(now, "warn", "run.cancel-requested", "Run cancellation requested.", null, metadata), cancellationToken).ConfigureAwait(false);
await logStore.AppendAsync(runId, new PackRunLogEntry(DateTimeOffset.UtcNow, "info", "run.cancelled", "Run cancelled; remaining steps marked as skipped.", null, metadata), cancellationToken).ConfigureAwait(false);
return Results.Accepted($"/v1/task-runner/runs/{runId}", new { status = "cancelled" });
}
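The cancel transition in `HandleCancelRun` can be sketched as a pure function: steps that already reached a terminal success state (`Succeeded`, `Skipped`) keep their status, and everything else is marked skipped with reason `cancelled`. The Python model below is illustrative; the dict keys stand in for the C# record fields:

```python
TERMINAL = {"Succeeded", "Skipped"}

def cancel_steps(steps, now):
    """steps: dict of step_id -> step dict with a 'status' key."""
    updated = {}
    for step_id, step in steps.items():
        if step["status"] in TERMINAL:
            updated[step_id] = step  # already terminal; left untouched
        else:
            updated[step_id] = {
                **step,
                "status": "Skipped",
                "status_reason": "cancelled",
                "last_transition_at": now,
                "next_attempt_at": None,  # no retry after cancellation
            }
    return updated
```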
app.Run();
static IDictionary<string, JsonNode?>? ConvertInputs(JsonObject? node)
{
if (node is null)
{
@@ -303,7 +436,7 @@ static IDictionary<string, JsonNode?>? ConvertInputs(JsonObject? node)
return dictionary;
}
internal sealed record CreateRunRequest(string? RunId, string Manifest, JsonObject? Inputs);
internal sealed record CreateRunRequest(string? RunId, string Manifest, JsonObject? Inputs, string? TenantId);
internal sealed record SimulationRequest(string Manifest, JsonObject? Inputs);
@@ -359,7 +492,7 @@ internal sealed record RunStateStepResponse(
DateTimeOffset? NextAttemptAt,
string? StatusReason);
internal sealed record ApprovalDecisionDto(string Decision, string? ActorId, string? Summary);
internal sealed record ApprovalDecisionDto(string Decision, string PlanHash, string? ActorId, string? Summary);
internal sealed record RunLogEntryResponse(
DateTimeOffset Timestamp,

View File

@@ -33,7 +33,7 @@
<ProjectReference Include="..\StellaOps.TaskRunner.Infrastructure\StellaOps.TaskRunner.Infrastructure.csproj"/>
<ProjectReference Include="..\..\Telemetry\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core.csproj"/>
<ProjectReference Include="..\..\..\Telemetry\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core.csproj"/>
</ItemGroup>

View File

@@ -9,6 +9,7 @@ public sealed class TaskRunnerServiceOptions
public string QueuePath { get; set; } = Path.Combine(AppContext.BaseDirectory, "queue");
public string ArchivePath { get; set; } = Path.Combine(AppContext.BaseDirectory, "queue", "archive");
public string LogsPath { get; set; } = Path.Combine(AppContext.BaseDirectory, "logs", "runs");
public string ArtifactsPath { get; set; } = Path.Combine(AppContext.BaseDirectory, "artifacts");
public TaskRunnerStorageOptions Storage { get; set; } = new();
}

View File

@@ -73,6 +73,13 @@ if (string.Equals(workerStorageOptions.Mode, TaskRunnerStorageModes.Mongo, Strin
builder.Services.AddSingleton<IPackRunLogStore, MongoPackRunLogStore>();
builder.Services.AddSingleton<IPackRunApprovalStore, MongoPackRunApprovalStore>();
builder.Services.AddSingleton<IPackRunArtifactUploader, MongoPackRunArtifactUploader>();
builder.Services.AddSingleton<IPackRunProvenanceWriter>(sp =>
{
var db = sp.GetRequiredService<IMongoDatabase>();
var options = sp.GetRequiredService<TaskRunnerMongoOptions>();
var timeProvider = sp.GetRequiredService<TimeProvider>();
return new MongoPackRunProvenanceWriter(db, options, timeProvider);
});
}
else
{
@@ -98,6 +105,12 @@ else
var logger = sp.GetRequiredService<ILogger<FilesystemPackRunArtifactUploader>>();
return new FilesystemPackRunArtifactUploader(options.ArtifactsPath, timeProvider, logger);
});
builder.Services.AddSingleton<IPackRunProvenanceWriter>(sp =>
{
var options = sp.GetRequiredService<IOptions<PackRunWorkerOptions>>().Value;
var timeProvider = sp.GetRequiredService<TimeProvider>();
return new FilesystemPackRunProvenanceWriter(options.ArtifactsPath, timeProvider);
});
}
builder.Services.AddHostedService<PackRunWorkerService>();

View File

@@ -24,6 +24,7 @@ public sealed class PackRunWorkerService : BackgroundService
private readonly PackRunSimulationEngine simulationEngine;
private readonly IPackRunStepExecutor executor;
private readonly IPackRunArtifactUploader artifactUploader;
private readonly IPackRunProvenanceWriter provenanceWriter;
private readonly IPackRunLogStore logStore;
private readonly ILogger<PackRunWorkerService> logger;
private readonly UpDownCounter<long> runningSteps;
@@ -36,17 +37,19 @@ public sealed class PackRunWorkerService : BackgroundService
PackRunSimulationEngine simulationEngine,
IPackRunStepExecutor executor,
IPackRunArtifactUploader artifactUploader,
IPackRunProvenanceWriter provenanceWriter,
IPackRunLogStore logStore,
IOptions<PackRunWorkerOptions> options,
ILogger<PackRunWorkerService> logger)
{
this.dispatcher = dispatcher ?? throw new ArgumentNullException(nameof(dispatcher));
this.processor = processor ?? throw new ArgumentNullException(nameof(processor));
this.stateStore = stateStore ?? throw new ArgumentNullException(nameof(stateStore));
this.graphBuilder = graphBuilder ?? throw new ArgumentNullException(nameof(graphBuilder));
this.simulationEngine = simulationEngine ?? throw new ArgumentNullException(nameof(simulationEngine));
this.executor = executor ?? throw new ArgumentNullException(nameof(executor));
this.artifactUploader = artifactUploader ?? throw new ArgumentNullException(nameof(artifactUploader));
this.provenanceWriter = provenanceWriter ?? throw new ArgumentNullException(nameof(provenanceWriter));
this.logStore = logStore ?? throw new ArgumentNullException(nameof(logStore));
this.options = options?.Value ?? throw new ArgumentNullException(nameof(options));
this.logger = logger ?? throw new ArgumentNullException(nameof(logger));
@@ -165,6 +168,7 @@ public sealed class PackRunWorkerService : BackgroundService
"Run finished successfully.",
cancellationToken).ConfigureAwait(false);
await artifactUploader.UploadAsync(context, updatedState, context.Plan.Outputs, cancellationToken).ConfigureAwait(false);
await provenanceWriter.WriteAsync(context, updatedState, cancellationToken).ConfigureAwait(false);
}
else
{

View File

@@ -36,7 +36,7 @@
<ProjectReference Include="..\StellaOps.TaskRunner.Infrastructure\StellaOps.TaskRunner.Infrastructure.csproj"/>
<ProjectReference Include="..\..\Telemetry\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core.csproj"/>
<ProjectReference Include="..\..\..\Telemetry\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core.csproj"/>
</ItemGroup>

View File

@@ -2,16 +2,16 @@
| Task ID | Status | Sprint | Dependency | Notes |
| --- | --- | --- | --- | --- |
| TASKRUN-41-001 | BLOCKED | SPRINT_0157_0001_0001_taskrunner_i | — | Blocked: TaskRunner architecture/API contracts and Sprint 120/130/140 inputs not published. |
| TASKRUN-AIRGAP-56-001 | TODO | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-41-001 | Sealed-mode plan validation; depends on 41-001. |
| TASKRUN-AIRGAP-56-002 | TODO | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-AIRGAP-56-001 | Bundle ingestion helpers; depends on 56-001. |
| TASKRUN-AIRGAP-57-001 | TODO | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-AIRGAP-56-002 | Sealed install enforcement; depends on 56-002. |
| TASKRUN-AIRGAP-58-001 | TODO | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-AIRGAP-57-001 | Evidence bundles for imports; depends on 57-001. |
| TASKRUN-41-001 | DONE (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | — | Implemented run API, Mongo/file stores, approvals, provenance manifest per architecture contract. |
| TASKRUN-AIRGAP-56-001 | BLOCKED (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-41-001 | Sealed-mode plan validation; depends on 41-001. |
| TASKRUN-AIRGAP-56-002 | BLOCKED (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-AIRGAP-56-001 | Bundle ingestion helpers; depends on 56-001. |
| TASKRUN-AIRGAP-57-001 | BLOCKED (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-AIRGAP-56-002 | Sealed install enforcement; depends on 56-002. |
| TASKRUN-AIRGAP-58-001 | BLOCKED (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-AIRGAP-57-001 | Evidence bundles for imports; depends on 57-001. |
| TASKRUN-42-001 | BLOCKED (2025-11-25) | SPRINT_0157_0001_0001_taskrunner_i | — | Execution engine enhancements (loops/conditionals/maxParallel), simulation mode, policy gate integration. Blocked: loop/conditional semantics and policy-gate evaluation contract not published. |
| TASKRUN-OAS-61-001 | TODO | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-41-001 | Document APIs; depends on 41-001. |
| TASKRUN-OAS-61-002 | TODO | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-OAS-61-001 | Well-known OpenAPI endpoint; depends on 61-001. |
| TASKRUN-OAS-62-001 | TODO | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-OAS-61-002 | SDK examples; depends on 61-002. |
| TASKRUN-OAS-63-001 | TODO | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-OAS-62-001 | Deprecation headers/notifications; depends on 62-001. |
| TASKRUN-OAS-61-001 | BLOCKED (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-41-001 | Document APIs; depends on 41-001. |
| TASKRUN-OAS-61-002 | BLOCKED (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-OAS-61-001 | Well-known OpenAPI endpoint; depends on 61-001. |
| TASKRUN-OAS-62-001 | BLOCKED (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-OAS-61-002 | SDK examples; depends on 61-002. |
| TASKRUN-OAS-63-001 | BLOCKED (2025-11-30) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-OAS-62-001 | Deprecation headers/notifications; depends on 62-001. |
| TASKRUN-OBS-50-001 | DONE (2025-11-25) | SPRINT_0157_0001_0001_taskrunner_i | — | Telemetry core adoption. |
| TASKRUN-OBS-51-001 | DONE (2025-11-25) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-OBS-50-001 | Metrics/SLOs; depends on 50-001. |
| TASKRUN-OBS-52-001 | BLOCKED (2025-11-25) | SPRINT_0157_0001_0001_taskrunner_i | TASKRUN-OBS-51-001 | Timeline events; blocked: schema/evidence-pointer contract not published. |

View File

@@ -0,0 +1,16 @@
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Core.Abstractions;
/// <summary>
/// Persistence contract for timeline event ingestion.
/// Implementations must enforce tenant isolation and idempotency on (tenant_id, event_id).
/// </summary>
public interface ITimelineEventStore
{
/// <summary>
/// Inserts the event atomically (headers, payloads, digests).
/// Must be idempotent on (tenant_id, event_id) and return whether a new row was created.
/// </summary>
Task<bool> InsertAsync(TimelineEventEnvelope envelope, CancellationToken cancellationToken = default);
}
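The `ITimelineEventStore` contract can be sketched with a minimal in-memory model: `InsertAsync` must be idempotent on `(tenant_id, event_id)` and report whether a new row was created. This is a sketch of the contract only, not the Postgres implementation:

```python
class InMemoryTimelineEventStore:
    """Contract model: idempotent insert keyed on (tenant_id, event_id)."""

    def __init__(self):
        self._events = {}

    def insert(self, envelope):
        key = (envelope["tenant_id"], envelope["event_id"])
        if key in self._events:
            return False  # duplicate delivery: no new row created
        self._events[key] = envelope
        return True
```

The same event id under a different tenant is a distinct row, which is what keeps redelivery safe without leaking across tenants.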

View File

@@ -0,0 +1,12 @@
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Core.Abstractions;
/// <summary>
/// Abstraction over transport-specific event subscriptions (NATS/Redis/etc.).
/// Implementations yield tenant-scoped timeline event envelopes in publish order.
/// </summary>
public interface ITimelineEventSubscriber
{
IAsyncEnumerable<TimelineEventEnvelope> SubscribeAsync(CancellationToken cancellationToken = default);
}

View File

@@ -0,0 +1,12 @@
using StellaOps.TimelineIndexer.Core.Models;
using StellaOps.TimelineIndexer.Core.Models.Results;
namespace StellaOps.TimelineIndexer.Core.Abstractions;
/// <summary>
/// High-level ingestion service that validates, hashes, and persists timeline events.
/// </summary>
public interface ITimelineIngestionService
{
Task<TimelineIngestResult> IngestAsync(TimelineEventEnvelope envelope, CancellationToken cancellationToken = default);
}

View File

@@ -0,0 +1,9 @@
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Core.Abstractions;
public interface ITimelineQueryService
{
Task<IReadOnlyList<TimelineEventView>> QueryAsync(string tenantId, TimelineQueryOptions options, CancellationToken cancellationToken = default);
Task<TimelineEventView?> GetAsync(string tenantId, string eventId, CancellationToken cancellationToken = default);
}

View File

@@ -0,0 +1,9 @@
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Core.Abstractions;
public interface ITimelineQueryStore
{
Task<IReadOnlyList<TimelineEventView>> QueryAsync(string tenantId, TimelineQueryOptions options, CancellationToken cancellationToken);
Task<TimelineEventView?> GetAsync(string tenantId, string eventId, CancellationToken cancellationToken);
}

View File

@@ -1,6 +0,0 @@
namespace StellaOps.TimelineIndexer.Core;
public class Class1
{
}

View File

@@ -0,0 +1,3 @@
namespace StellaOps.TimelineIndexer.Core.Models.Results;
public sealed record TimelineIngestResult(bool Inserted);

View File

@@ -0,0 +1,29 @@
namespace StellaOps.TimelineIndexer.Core.Models;
/// <summary>
/// Canonical ingestion envelope for timeline events.
/// Maps closely to orchestrator/notify envelopes while remaining transport-agnostic.
/// </summary>
public sealed class TimelineEventEnvelope
{
public required string EventId { get; init; }
public required string TenantId { get; init; }
public required string EventType { get; init; }
public required string Source { get; init; }
public required DateTimeOffset OccurredAt { get; init; }
public string? CorrelationId { get; init; }
public string? TraceId { get; init; }
public string? Actor { get; init; }
public string Severity { get; init; } = "info";
public string? PayloadHash { get; set; }
public string RawPayloadJson { get; init; } = "{}";
public string? NormalizedPayloadJson { get; init; }
public IDictionary<string, string>? Attributes { get; init; }
public string? BundleDigest { get; init; }
public Guid? BundleId { get; init; }
public string? AttestationSubject { get; init; }
public string? AttestationDigest { get; init; }
public string? ManifestUri { get; init; }
}

View File

@@ -0,0 +1,20 @@
namespace StellaOps.TimelineIndexer.Core.Models;
/// <summary>
/// Projected timeline event for query responses.
/// </summary>
public sealed class TimelineEventView
{
public required long EventSeq { get; init; }
public required string EventId { get; init; }
public required string TenantId { get; init; }
public required string EventType { get; init; }
public required string Source { get; init; }
public required DateTimeOffset OccurredAt { get; init; }
public required DateTimeOffset ReceivedAt { get; init; }
public string? CorrelationId { get; init; }
public string? TraceId { get; init; }
public string? Actor { get; init; }
public string Severity { get; init; } = "info";
public string? PayloadHash { get; init; }
}

View File

@@ -0,0 +1,15 @@
namespace StellaOps.TimelineIndexer.Core.Models;
/// <summary>
/// Query filters for timeline listing.
/// </summary>
public sealed record TimelineQueryOptions
{
public string? EventType { get; init; }
public string? CorrelationId { get; init; }
public string? TraceId { get; init; }
public string? Severity { get; init; }
public DateTimeOffset? Since { get; init; }
public long? AfterEventSeq { get; init; }
public int Limit { get; init; } = 100;
}

View File

@@ -0,0 +1,46 @@
using System.Security.Cryptography;
using System.Text;
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
using StellaOps.TimelineIndexer.Core.Models.Results;
namespace StellaOps.TimelineIndexer.Core.Services;
/// <summary>
/// Validates and persists timeline events with deterministic hashing.
/// </summary>
public sealed class TimelineIngestionService(ITimelineEventStore store) : ITimelineIngestionService
{
public async Task<TimelineIngestResult> IngestAsync(TimelineEventEnvelope envelope, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(envelope);
Validate(envelope);
if (string.IsNullOrWhiteSpace(envelope.PayloadHash))
{
envelope.PayloadHash = ComputePayloadHash(envelope.RawPayloadJson);
}
var inserted = await store.InsertAsync(envelope, cancellationToken).ConfigureAwait(false);
return new TimelineIngestResult(inserted);
}
private static void Validate(TimelineEventEnvelope envelope)
{
if (string.IsNullOrWhiteSpace(envelope.EventId))
throw new ArgumentException("event_id is required", nameof(envelope));
if (string.IsNullOrWhiteSpace(envelope.TenantId))
throw new ArgumentException("tenant_id is required", nameof(envelope));
if (string.IsNullOrWhiteSpace(envelope.EventType))
throw new ArgumentException("event_type is required", nameof(envelope));
if (string.IsNullOrWhiteSpace(envelope.Source))
throw new ArgumentException("source is required", nameof(envelope));
}
internal static string ComputePayloadHash(string payloadJson)
{
var bytes = Encoding.UTF8.GetBytes(payloadJson ?? string.Empty);
var hash = SHA256.HashData(bytes);
return $"sha256:{Convert.ToHexString(hash).ToLowerInvariant()}";
}
}
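The deterministic hash rule in `ComputePayloadHash` is simple enough to model directly: SHA-256 over the UTF-8 payload bytes, rendered as lowercase hex with a `sha256:` prefix. A Python equivalent for reference:

```python
import hashlib

def compute_payload_hash(payload_json: str) -> str:
    # SHA-256 of the UTF-8 payload bytes, lowercase hex, "sha256:" prefix --
    # mirrors the hashing rule in TimelineIngestionService.ComputePayloadHash.
    digest = hashlib.sha256((payload_json or "").encode("utf-8")).hexdigest()
    return f"sha256:{digest}"
```

Because the input bytes fully determine the output, re-ingesting the same payload always yields the same hash, which is what makes the hash usable as a dedupe/evidence anchor.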

View File

@@ -0,0 +1,29 @@
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Core.Services;
public sealed class TimelineQueryService(ITimelineQueryStore store) : ITimelineQueryService
{
public Task<IReadOnlyList<TimelineEventView>> QueryAsync(string tenantId, TimelineQueryOptions options, CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentNullException.ThrowIfNull(options);
return store.QueryAsync(tenantId, Normalize(options), cancellationToken);
}
public Task<TimelineEventView?> GetAsync(string tenantId, string eventId, CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(eventId);
return store.GetAsync(tenantId, eventId, cancellationToken);
}
private static TimelineQueryOptions Normalize(TimelineQueryOptions options)
{
var limit = options.Limit;
if (limit <= 0) limit = 100;
if (limit > 500) limit = 500;
return options with { Limit = limit };
}
}
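`Normalize` clamps the requested page size: non-positive limits fall back to the default of 100, and anything above 500 is capped. The rule in isolation (constants taken from the code above):

```python
def normalize_limit(limit: int, default: int = 100, maximum: int = 500) -> int:
    # Non-positive limits fall back to the default page size;
    # oversized limits are capped at the maximum.
    if limit <= 0:
        return default
    return min(limit, maximum)
```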

View File

@@ -1,6 +0,0 @@
namespace StellaOps.TimelineIndexer.Infrastructure;
public class Class1
{
}

View File

@@ -0,0 +1,111 @@
-- 001_initial_schema.sql
-- Establishes Timeline Indexer schema, RLS scaffolding, and evidence linkage tables.
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE SCHEMA IF NOT EXISTS timeline;
CREATE SCHEMA IF NOT EXISTS timeline_app;
-- Enforce tenant context for all RLS policies
CREATE OR REPLACE FUNCTION timeline_app.require_current_tenant()
RETURNS text
LANGUAGE plpgsql
AS $$
DECLARE
tenant_text text;
BEGIN
tenant_text := current_setting('app.current_tenant', true);
IF tenant_text IS NULL OR length(tenant_text) = 0 THEN
RAISE EXCEPTION 'app.current_tenant is not set for the current session';
END IF;
RETURN tenant_text;
END;
$$;
-- Severity enum keeps ordering deterministic and compact
DO $$
BEGIN
CREATE TYPE timeline.event_severity AS ENUM ('info', 'notice', 'warn', 'error', 'critical');
EXCEPTION
WHEN duplicate_object THEN NULL;
END
$$;
-- Core event header table (dedupe + ordering)
CREATE TABLE IF NOT EXISTS timeline.timeline_events
(
event_seq bigserial PRIMARY KEY,
event_id text NOT NULL,
tenant_id text NOT NULL,
source text NOT NULL,
event_type text NOT NULL,
occurred_at timestamptz NOT NULL,
received_at timestamptz NOT NULL DEFAULT now(),
correlation_id text,
trace_id text,
actor text,
severity timeline.event_severity NOT NULL DEFAULT 'info',
payload_hash text CHECK (payload_hash IS NULL OR payload_hash ~ '^sha256:[0-9a-f]{64}$'),
attributes jsonb NOT NULL DEFAULT '{}'::jsonb,
UNIQUE (tenant_id, event_id)
);
CREATE INDEX IF NOT EXISTS ix_timeline_events_tenant_occurred
ON timeline.timeline_events (tenant_id, occurred_at DESC, event_seq DESC);
CREATE INDEX IF NOT EXISTS ix_timeline_events_type
ON timeline.timeline_events (tenant_id, event_type, occurred_at DESC);
ALTER TABLE timeline.timeline_events ENABLE ROW LEVEL SECURITY;
-- CREATE POLICY has no IF NOT EXISTS; drop-and-recreate keeps the migration idempotent
DROP POLICY IF EXISTS timeline_events_isolation ON timeline.timeline_events;
CREATE POLICY timeline_events_isolation
ON timeline.timeline_events
USING (tenant_id = timeline_app.require_current_tenant())
WITH CHECK (tenant_id = timeline_app.require_current_tenant());
-- Raw and normalized payloads per event
CREATE TABLE IF NOT EXISTS timeline.timeline_event_details
(
event_id text NOT NULL,
tenant_id text NOT NULL,
envelope_version text NOT NULL,
raw_payload jsonb NOT NULL,
normalized_payload jsonb,
created_at timestamptz NOT NULL DEFAULT now(),
CONSTRAINT fk_event_details FOREIGN KEY (event_id, tenant_id)
REFERENCES timeline.timeline_events (event_id, tenant_id) ON DELETE CASCADE,
PRIMARY KEY (event_id, tenant_id)
);
ALTER TABLE timeline.timeline_event_details ENABLE ROW LEVEL SECURITY;
DROP POLICY IF EXISTS timeline_event_details_isolation ON timeline.timeline_event_details;
CREATE POLICY timeline_event_details_isolation
ON timeline.timeline_event_details
USING (tenant_id = timeline_app.require_current_tenant())
WITH CHECK (tenant_id = timeline_app.require_current_tenant());
-- Evidence linkage (bundle/attestation manifests)
CREATE TABLE IF NOT EXISTS timeline.timeline_event_digests
(
digest_id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id text NOT NULL,
event_id text NOT NULL,
bundle_id uuid,
bundle_digest text,
attestation_subject text,
attestation_digest text,
manifest_uri text,
created_at timestamptz NOT NULL DEFAULT now(),
CONSTRAINT fk_event_digest_event FOREIGN KEY (event_id, tenant_id)
REFERENCES timeline.timeline_events (event_id, tenant_id) ON DELETE CASCADE,
CONSTRAINT ck_bundle_digest_sha CHECK (bundle_digest IS NULL OR bundle_digest ~ '^sha256:[0-9a-f]{64}$'),
CONSTRAINT ck_attestation_digest_sha CHECK (attestation_digest IS NULL OR attestation_digest ~ '^sha256:[0-9a-f]{64}$')
);
CREATE INDEX IF NOT EXISTS ix_timeline_digests_event
ON timeline.timeline_event_digests (tenant_id, event_id);
CREATE INDEX IF NOT EXISTS ix_timeline_digests_bundle
ON timeline.timeline_event_digests (tenant_id, bundle_digest);
ALTER TABLE timeline.timeline_event_digests ENABLE ROW LEVEL SECURITY;
DROP POLICY IF EXISTS timeline_event_digests_isolation ON timeline.timeline_event_digests;
CREATE POLICY timeline_event_digests_isolation
ON timeline.timeline_event_digests
USING (tenant_id = timeline_app.require_current_tenant())
WITH CHECK (tenant_id = timeline_app.require_current_tenant());
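The `ck_bundle_digest_sha` / `ck_attestation_digest_sha` check constraints accept either NULL or a `sha256:` prefix followed by exactly 64 lowercase hex characters. The same rule can be pre-validated in application code before hitting the database; this mirrors the SQL regex `^sha256:[0-9a-f]{64}$`:

```python
import re

# Mirror of the SQL check constraint: NULL, or "sha256:" + 64 lowercase hex chars.
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def is_valid_digest(value):
    return value is None or DIGEST_RE.fullmatch(value) is not None
```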

View File

@@ -0,0 +1,112 @@
using Microsoft.Extensions.Logging;
using Npgsql;
using System.Text.Json;
using StellaOps.Infrastructure.Postgres.Repositories;
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Infrastructure.Db;
/// <summary>
/// Postgres-backed implementation of ITimelineEventStore.
/// </summary>
public sealed class TimelineEventStore(TimelineIndexerDataSource dataSource, ILogger<TimelineEventStore> logger)
: RepositoryBase<TimelineIndexerDataSource>(dataSource, logger), ITimelineEventStore
{
private const string InsertEventSql = """
INSERT INTO timeline.timeline_events
(event_id, tenant_id, source, event_type, occurred_at, correlation_id, trace_id, actor, severity, payload_hash, attributes)
VALUES
(@event_id, @tenant_id, @source, @event_type, @occurred_at, @correlation_id, @trace_id, @actor, @severity, @payload_hash, @attributes::jsonb)
ON CONFLICT (tenant_id, event_id) DO NOTHING
RETURNING event_seq;
""";
private const string InsertDetailSql = """
INSERT INTO timeline.timeline_event_details
(event_id, tenant_id, envelope_version, raw_payload, normalized_payload)
VALUES
(@event_id, @tenant_id, @envelope_version, @raw_payload::jsonb, @normalized_payload::jsonb)
ON CONFLICT (event_id, tenant_id) DO NOTHING;
""";
private const string InsertDigestSql = """
INSERT INTO timeline.timeline_event_digests
(tenant_id, event_id, bundle_id, bundle_digest, attestation_subject, attestation_digest, manifest_uri)
VALUES
(@tenant_id, @event_id, @bundle_id, @bundle_digest, @attestation_subject, @attestation_digest, @manifest_uri);
""";
public async Task<bool> InsertAsync(TimelineEventEnvelope envelope, CancellationToken cancellationToken = default)
{
await using var connection = await DataSource.OpenConnectionAsync(envelope.TenantId, "writer", cancellationToken)
.ConfigureAwait(false);
await using var transaction = await connection.BeginTransactionAsync(cancellationToken).ConfigureAwait(false);
var inserted = await InsertEventAsync(connection, envelope, cancellationToken).ConfigureAwait(false);
if (!inserted)
{
await transaction.RollbackAsync(cancellationToken).ConfigureAwait(false);
return false;
}
await InsertDetailAsync(connection, envelope, cancellationToken).ConfigureAwait(false);
await InsertDigestAsync(connection, envelope, cancellationToken).ConfigureAwait(false);
await transaction.CommitAsync(cancellationToken).ConfigureAwait(false);
return true;
}
private async Task<bool> InsertEventAsync(NpgsqlConnection connection, TimelineEventEnvelope envelope, CancellationToken cancellationToken)
{
await using var command = CreateCommand(InsertEventSql, connection);
AddParameter(command, "event_id", envelope.EventId);
AddParameter(command, "tenant_id", envelope.TenantId);
AddParameter(command, "source", envelope.Source);
AddParameter(command, "event_type", envelope.EventType);
AddParameter(command, "occurred_at", envelope.OccurredAt);
AddParameter(command, "correlation_id", envelope.CorrelationId);
AddParameter(command, "trace_id", envelope.TraceId);
AddParameter(command, "actor", envelope.Actor);
AddParameter(command, "severity", envelope.Severity);
AddParameter(command, "payload_hash", envelope.PayloadHash);
AddJsonbParameter(command, "attributes", envelope.Attributes is null
? "{}"
: JsonSerializer.Serialize(envelope.Attributes));
var result = await command.ExecuteScalarAsync(cancellationToken).ConfigureAwait(false);
return result is not null;
}
private async Task InsertDetailAsync(NpgsqlConnection connection, TimelineEventEnvelope envelope, CancellationToken cancellationToken)
{
await using var command = CreateCommand(InsertDetailSql, connection);
AddParameter(command, "event_id", envelope.EventId);
AddParameter(command, "tenant_id", envelope.TenantId);
AddParameter(command, "envelope_version", "orch.event.v1");
AddJsonbParameter(command, "raw_payload", envelope.RawPayloadJson);
AddJsonbParameter(command, "normalized_payload", envelope.NormalizedPayloadJson);
await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
private async Task InsertDigestAsync(NpgsqlConnection connection, TimelineEventEnvelope envelope, CancellationToken cancellationToken)
{
if (envelope.BundleDigest is null && envelope.AttestationDigest is null && envelope.ManifestUri is null && envelope.BundleId is null)
{
return;
}
await using var command = CreateCommand(InsertDigestSql, connection);
AddParameter(command, "tenant_id", envelope.TenantId);
AddParameter(command, "event_id", envelope.EventId);
AddParameter(command, "bundle_id", envelope.BundleId);
AddParameter(command, "bundle_digest", envelope.BundleDigest);
AddParameter(command, "attestation_subject", envelope.AttestationSubject);
AddParameter(command, "attestation_digest", envelope.AttestationDigest);
AddParameter(command, "manifest_uri", envelope.ManifestUri);
await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
}
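The `InsertEventSql` constant referenced by `InsertEventAsync` is outside this hunk. For the `result is not null` idempotency check to work, the statement needs an `ON CONFLICT ... DO NOTHING` plus `RETURNING` shape; a hypothetical sketch, assuming the column set visible in `InsertEventAsync` (the real column list may differ):

```csharp
// Hypothetical sketch of InsertEventSql (not shown in this hunk).
// ON CONFLICT DO NOTHING + RETURNING yields no row for a duplicate (tenant_id, event_id),
// so ExecuteScalarAsync returns null and InsertEventAsync reports inserted = false.
private const string InsertEventSql = """
    INSERT INTO timeline.timeline_events
        (event_id, tenant_id, source, event_type, occurred_at,
         correlation_id, trace_id, actor, severity, payload_hash, attributes)
    VALUES
        (@event_id, @tenant_id, @source, @event_type, @occurred_at,
         @correlation_id, @trace_id, @actor, @severity, @payload_hash, @attributes)
    ON CONFLICT (tenant_id, event_id) DO NOTHING
    RETURNING event_seq
    """;
```

This pairs with the `UNIQUE (tenant_id, event_id)` constraint asserted in `TimelineSchemaTests`, pushing dedup down to the database instead of a pre-check query.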

File: StellaOps.TimelineIndexer.Infrastructure/Db/TimelineIndexerMigrationRunner.cs

@@ -0,0 +1,47 @@
using System.Reflection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using StellaOps.Infrastructure.Postgres.Migrations;
using StellaOps.Infrastructure.Postgres.Options;
namespace StellaOps.TimelineIndexer.Infrastructure.Db;
/// <summary>
/// Runs embedded SQL migrations for the Timeline Indexer schema.
/// </summary>
public sealed class TimelineIndexerMigrationRunner
{
private readonly PostgresOptions _options;
private readonly ILogger<TimelineIndexerMigrationRunner> _logger;
private const string ResourcePrefix = "StellaOps.TimelineIndexer.Infrastructure.Db.Migrations";
public TimelineIndexerMigrationRunner(
IOptions<PostgresOptions> options,
ILogger<TimelineIndexerMigrationRunner> logger)
{
_options = options.Value ?? throw new ArgumentNullException(nameof(options));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
/// <summary>
/// Apply all pending migrations from embedded resources.
/// </summary>
public Task<int> RunAsync(CancellationToken cancellationToken = default)
{
var schema = string.IsNullOrWhiteSpace(_options.SchemaName)
? TimelineIndexerDataSource.DefaultSchemaName
: _options.SchemaName!;
var runner = new MigrationRunner(
_options.ConnectionString,
schema,
moduleName: "TimelineIndexer",
_logger);
return runner.RunFromAssemblyAsync(
assembly: Assembly.GetExecutingAssembly(),
resourcePrefix: ResourcePrefix,
cancellationToken);
}
}
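`RunAsync` returns the number of applied migrations but nothing in this commit invokes it; a minimal startup sketch, assuming a built `IHost`, an `ILogger`, and a `CancellationToken` in scope (all assumptions, not part of this commit):

```csharp
// Sketch: apply pending Timeline Indexer migrations before the host starts consuming events.
// Assumes AddTimelineIndexerPostgres(...) has already registered the runner.
using var scope = host.Services.CreateScope();
var migrations = scope.ServiceProvider.GetRequiredService<TimelineIndexerMigrationRunner>();
var applied = await migrations.RunAsync(cancellationToken);
logger.LogInformation("Applied {Count} pending timeline migrations", applied);
```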


@@ -0,0 +1,85 @@
using Npgsql;
using StellaOps.Infrastructure.Postgres.Repositories;
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Infrastructure.Db;
public sealed class TimelineQueryStore(TimelineIndexerDataSource dataSource, ILogger<TimelineQueryStore> logger)
: RepositoryBase<TimelineIndexerDataSource>(dataSource, logger), ITimelineQueryStore
{
private const string BaseSelect = """
SELECT event_seq, event_id, tenant_id, event_type, source, occurred_at, received_at, correlation_id, trace_id, actor, severity, payload_hash
FROM timeline.timeline_events
WHERE tenant_id = @tenant_id
""";
public async Task<IReadOnlyList<TimelineEventView>> QueryAsync(string tenantId, TimelineQueryOptions options, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(options);
var sql = new System.Text.StringBuilder(BaseSelect);
if (!string.IsNullOrWhiteSpace(options.EventType)) sql.Append(" AND event_type = @event_type");
if (!string.IsNullOrWhiteSpace(options.CorrelationId)) sql.Append(" AND correlation_id = @correlation_id");
if (!string.IsNullOrWhiteSpace(options.TraceId)) sql.Append(" AND trace_id = @trace_id");
if (!string.IsNullOrWhiteSpace(options.Severity)) sql.Append(" AND severity = @severity");
if (options.Since is not null) sql.Append(" AND occurred_at >= @since");
if (options.AfterEventSeq is not null) sql.Append(" AND event_seq < @after_seq");
sql.Append(" ORDER BY occurred_at DESC, event_seq DESC LIMIT @limit");
return await QueryAsync(
tenantId,
sql.ToString(),
cmd =>
{
AddParameter(cmd, "tenant_id", tenantId);
AddParameter(cmd, "event_type", options.EventType);
AddParameter(cmd, "correlation_id", options.CorrelationId);
AddParameter(cmd, "trace_id", options.TraceId);
AddParameter(cmd, "severity", options.Severity);
AddParameter(cmd, "since", options.Since);
AddParameter(cmd, "after_seq", options.AfterEventSeq);
AddParameter(cmd, "limit", Math.Clamp(options.Limit, 1, 500));
},
MapEvent,
cancellationToken).ConfigureAwait(false);
}
public async Task<TimelineEventView?> GetAsync(string tenantId, string eventId, CancellationToken cancellationToken)
{
const string sql = """
SELECT event_seq, event_id, tenant_id, event_type, source, occurred_at, received_at, correlation_id, trace_id, actor, severity, payload_hash
FROM timeline.timeline_events
WHERE tenant_id = @tenant_id AND event_id = @event_id
""";
return await QuerySingleOrDefaultAsync(
tenantId,
sql,
cmd =>
{
AddParameter(cmd, "tenant_id", tenantId);
AddParameter(cmd, "event_id", eventId);
},
MapEvent,
cancellationToken).ConfigureAwait(false);
}
private static TimelineEventView MapEvent(NpgsqlDataReader reader) => new()
{
EventSeq = reader.GetInt64(0),
EventId = reader.GetString(1),
TenantId = reader.GetString(2),
EventType = reader.GetString(3),
Source = reader.GetString(4),
OccurredAt = reader.GetFieldValue<DateTimeOffset>(5),
ReceivedAt = reader.GetFieldValue<DateTimeOffset>(6),
CorrelationId = GetNullableString(reader, 7),
TraceId = GetNullableString(reader, 8),
Actor = GetNullableString(reader, 9),
Severity = reader.GetString(10),
PayloadHash = GetNullableString(reader, 11)
};
}
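`AfterEventSeq` together with the `ORDER BY occurred_at DESC, event_seq DESC` clause gives keyset pagination over a tenant's timeline. A caller sketch — option and view property names are taken from this file, everything else (settable initializers, variable names) is assumed:

```csharp
// Sketch: page through a tenant's timeline newest-first, using event_seq as the cursor.
long? cursor = null;
while (true)
{
    var page = await queryStore.QueryAsync(
        "tenant-a",
        new TimelineQueryOptions { Limit = 100, AfterEventSeq = cursor },
        cancellationToken);

    if (page.Count == 0)
    {
        break;
    }

    // Last row of the page has the smallest event_seq; the next query
    // matches only rows with event_seq < cursor.
    cursor = page[^1].EventSeq;
}
```

Cursoring on the monotonic `event_seq` rather than `occurred_at` avoids skipping or repeating rows when multiple events share a timestamp.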


@@ -0,0 +1,30 @@
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using StellaOps.Infrastructure.Postgres.Options;
using StellaOps.TimelineIndexer.Infrastructure.Db;
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Services;
namespace StellaOps.TimelineIndexer.Infrastructure.DependencyInjection;
/// <summary>
/// Timeline Indexer PostgreSQL service registration helpers.
/// </summary>
public static class ServiceCollectionExtensions
{
private const string DefaultSection = "Postgres:Timeline";
/// <summary>
/// Registers Postgres options, data source, and migration runner for the Timeline Indexer.
/// </summary>
public static IServiceCollection AddTimelineIndexerPostgres(
this IServiceCollection services,
IConfiguration configuration,
string sectionName = DefaultSection)
{
services.Configure<PostgresOptions>(configuration.GetSection(sectionName));
services.AddSingleton<TimelineIndexerDataSource>();
services.AddSingleton<TimelineIndexerMigrationRunner>();
services.AddScoped<ITimelineEventStore, TimelineEventStore>();
services.AddScoped<ITimelineIngestionService, TimelineIngestionService>();
services.AddScoped<ITimelineQueryStore, TimelineQueryStore>();
return services;
}
}


@@ -3,26 +3,38 @@
<ItemGroup>
<ProjectReference Include="..\StellaOps.TimelineIndexer.Core\StellaOps.TimelineIndexer.Core.csproj"/>
</ItemGroup>
<PropertyGroup>
<ItemGroup>
<ProjectReference Include="..\StellaOps.TimelineIndexer.Core\StellaOps.TimelineIndexer.Core.csproj"/>
<ProjectReference Include="..\..\..\__Libraries\StellaOps.Infrastructure.Postgres\StellaOps.Infrastructure.Postgres.csproj"/>
</ItemGroup>
<PropertyGroup>
<TargetFramework>net10.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<LangVersion>preview</LangVersion>
<TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
</Project>
<TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
<ItemGroup>
<EmbeddedResource Include="Db/Migrations/*.sql" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Logging.Abstractions" Version="10.0.0-rc.2.25502.107" />
<PackageReference Include="Microsoft.Extensions.Options" Version="10.0.0-rc.2.25502.107" />
<PackageReference Include="Npgsql" Version="9.0.2" />
</ItemGroup>
</Project>


@@ -0,0 +1,16 @@
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Infrastructure.Subscriptions;
/// <summary>
/// Default no-op subscriber used until transport bindings are configured.
/// Keeps the ingestion worker running without requiring live brokers.
/// </summary>
public sealed class NullTimelineEventSubscriber : ITimelineEventSubscriber
{
public IAsyncEnumerable<TimelineEventEnvelope> SubscribeAsync(CancellationToken cancellationToken = default)
{
return AsyncEnumerable.Empty<TimelineEventEnvelope>();
}
}


@@ -0,0 +1,32 @@
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using StellaOps.Infrastructure.Postgres.Connections;
using StellaOps.Infrastructure.Postgres.Options;
namespace StellaOps.TimelineIndexer.Infrastructure;
/// <summary>
/// PostgreSQL data source for the Timeline Indexer module.
/// Sets the default schema and carries tenant context via app.current_tenant.
/// </summary>
public sealed class TimelineIndexerDataSource : DataSourceBase
{
public const string DefaultSchemaName = "timeline";
public TimelineIndexerDataSource(IOptions<PostgresOptions> options, ILogger<TimelineIndexerDataSource> logger)
: base(EnsureSchema(options.Value), logger)
{
}
protected override string ModuleName => "TimelineIndexer";
private static PostgresOptions EnsureSchema(PostgresOptions baseOptions)
{
if (string.IsNullOrWhiteSpace(baseOptions.SchemaName))
{
baseOptions.SchemaName = DefaultSchemaName;
}
return baseOptions;
}
}


@@ -0,0 +1,41 @@
using System.Collections.Concurrent;
using System.Threading.Channels;
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Tests;
public sealed class InMemoryTimelineEventSubscriber : ITimelineEventSubscriber
{
private readonly Channel<TimelineEventEnvelope> _channel;
public InMemoryTimelineEventSubscriber(IEnumerable<TimelineEventEnvelope>? seed = null)
{
_channel = Channel.CreateUnbounded<TimelineEventEnvelope>(new UnboundedChannelOptions
{
SingleReader = false,
SingleWriter = false
});
if (seed is not null)
{
foreach (var envelope in seed)
{
_channel.Writer.TryWrite(envelope);
}
_channel.Writer.TryComplete();
}
}
public void Enqueue(TimelineEventEnvelope envelope)
{
_channel.Writer.TryWrite(envelope);
}
public void Complete() => _channel.Writer.TryComplete();
public IAsyncEnumerable<TimelineEventEnvelope> SubscribeAsync(CancellationToken cancellationToken = default)
{
return _channel.Reader.ReadAllAsync(cancellationToken);
}
}


@@ -53,11 +53,11 @@
<ItemGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.14.1"/>
@@ -111,25 +111,15 @@
<ItemGroup>
<ProjectReference Include="..\StellaOps.TimelineIndexer.Core\StellaOps.TimelineIndexer.Core.csproj"/>
<ProjectReference Include="..\StellaOps.TimelineIndexer.Infrastructure\StellaOps.TimelineIndexer.Infrastructure.csproj"/>
</ItemGroup>
</Project>
<ItemGroup>
<ProjectReference Include="..\StellaOps.TimelineIndexer.Core\StellaOps.TimelineIndexer.Core.csproj"/>
<ProjectReference Include="..\StellaOps.TimelineIndexer.Infrastructure\StellaOps.TimelineIndexer.Infrastructure.csproj"/>
<ProjectReference Include="..\StellaOps.TimelineIndexer.Worker\StellaOps.TimelineIndexer.Worker.csproj"/>
</ItemGroup>
<ItemGroup>
<Using Include="System"/>
<Using Include="System.Threading.Tasks"/>
</ItemGroup>
</Project>


@@ -0,0 +1,65 @@
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
using StellaOps.TimelineIndexer.Core.Services;
namespace StellaOps.TimelineIndexer.Tests;
public class TimelineIngestionServiceTests
{
[Fact]
public async Task Ingest_ComputesHash_WhenMissing()
{
var store = new FakeStore();
var service = new TimelineIngestionService(store);
var envelope = new TimelineEventEnvelope
{
EventId = "evt-1",
TenantId = "tenant-a",
EventType = "job.completed",
Source = "orchestrator",
OccurredAt = DateTimeOffset.UtcNow,
RawPayloadJson = """{"ok":true}"""
};
var result = await service.IngestAsync(envelope);
Assert.True(result.Inserted);
Assert.Equal("sha256:8ceeb2a3cfdd5c6c0257df04e3d6b7c29c6a54f9b89e0ee1d3f3f94a639a6a39", store.LastEnvelope?.PayloadHash);
}
[Fact]
public async Task Ingest_IsIdempotent_OnSameEventId()
{
var store = new FakeStore();
var service = new TimelineIngestionService(store);
var envelope = new TimelineEventEnvelope
{
EventId = "evt-dup",
TenantId = "tenant-a",
EventType = "job.completed",
Source = "orchestrator",
OccurredAt = DateTimeOffset.UtcNow,
RawPayloadJson = "{}"
};
var first = await service.IngestAsync(envelope);
var second = await service.IngestAsync(envelope);
Assert.True(first.Inserted);
Assert.False(second.Inserted);
}
private sealed class FakeStore : ITimelineEventStore
{
private readonly HashSet<(string tenant, string id)> _seen = new();
public TimelineEventEnvelope? LastEnvelope { get; private set; }
public Task<bool> InsertAsync(TimelineEventEnvelope envelope, CancellationToken cancellationToken = default)
{
LastEnvelope = envelope;
var key = (envelope.TenantId, envelope.EventId);
var inserted = _seen.Add(key);
return Task.FromResult(inserted);
}
}
}


@@ -0,0 +1,65 @@
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging.Abstractions;
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
using StellaOps.TimelineIndexer.Core.Models.Results;
using StellaOps.TimelineIndexer.Core.Services;
using StellaOps.TimelineIndexer.Worker;
namespace StellaOps.TimelineIndexer.Tests;
public sealed class TimelineIngestionWorkerTests
{
[Fact]
public async Task Worker_Ingests_And_Dedupes()
{
var subscriber = new InMemoryTimelineEventSubscriber();
var store = new RecordingStore();
var serviceCollection = new ServiceCollection();
serviceCollection.AddSingleton<ITimelineEventSubscriber>(subscriber);
serviceCollection.AddSingleton<ITimelineEventStore>(store);
serviceCollection.AddSingleton<ITimelineIngestionService, TimelineIngestionService>();
serviceCollection.AddSingleton<IHostedService, TimelineIngestionWorker>();
serviceCollection.AddLogging(builder => builder.AddProvider(NullLoggerProvider.Instance));
using var host = serviceCollection.BuildServiceProvider();
var hosted = host.GetRequiredService<IHostedService>();
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(2));
await hosted.StartAsync(cts.Token);
var evt = new TimelineEventEnvelope
{
EventId = "evt-1",
TenantId = "tenant-a",
EventType = "test",
Source = "unit",
OccurredAt = DateTimeOffset.UtcNow,
RawPayloadJson = "{}"
};
subscriber.Enqueue(evt);
subscriber.Enqueue(evt); // duplicate
subscriber.Complete();
await Task.Delay(200, cts.Token);
await hosted.StopAsync(cts.Token);
Assert.Equal(1, store.InsertCalls); // duplicate dropped
Assert.Equal("sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a", store.LastHash); // hash of "{}"
}
private sealed class RecordingStore : ITimelineEventStore
{
private readonly HashSet<(string tenant, string id)> _seen = new();
public int InsertCalls { get; private set; }
public string? LastHash { get; private set; }
public Task<bool> InsertAsync(TimelineEventEnvelope envelope, CancellationToken cancellationToken = default)
{
InsertCalls++;
LastHash = envelope.PayloadHash;
return Task.FromResult(_seen.Add((envelope.TenantId, envelope.EventId)));
}
}
}
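Both test classes assert the same digest convention: `sha256:` followed by the lowercase hex SHA-256 of the UTF-8 raw payload. The expected value for `"{}"` can be reproduced independently with a small sketch (framework APIs only, nothing from this commit):

```csharp
// Sketch: reproduce the payload hash convention asserted in the tests above.
// SHA-256 over the UTF-8 bytes of the raw JSON, hex-encoded lowercase, prefixed with "sha256:".
using System.Security.Cryptography;
using System.Text;

var bytes = Encoding.UTF8.GetBytes("{}");
var hash = "sha256:" + Convert.ToHexString(SHA256.HashData(bytes)).ToLowerInvariant();
// hash == "sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a"
```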


@@ -0,0 +1,82 @@
using Xunit;
namespace StellaOps.TimelineIndexer.Tests;
public sealed class TimelineSchemaTests
{
private static string FindRepoRoot()
{
var dir = AppContext.BaseDirectory;
for (var i = 0; i < 10 && dir is not null; i++)
{
if (File.Exists(Path.Combine(dir, "StellaOps.sln")) ||
File.Exists(Path.Combine(dir, "Directory.Build.props")))
{
return dir;
}
dir = Directory.GetParent(dir)?.FullName;
}
throw new InvalidOperationException("Could not locate repository root from test base directory.");
}
private static string ReadMigrationSql()
{
var root = FindRepoRoot();
var path = Path.Combine(
root,
"src",
"TimelineIndexer",
"StellaOps.TimelineIndexer",
"StellaOps.TimelineIndexer.Infrastructure",
"Db",
"Migrations",
"001_initial_schema.sql");
if (!File.Exists(path))
{
throw new FileNotFoundException("Expected migration file was not found.", path);
}
return File.ReadAllText(path);
}
[Fact]
public void MigrationFile_Exists()
{
var root = FindRepoRoot();
var path = Path.Combine(
root,
"src",
"TimelineIndexer",
"StellaOps.TimelineIndexer",
"StellaOps.TimelineIndexer.Infrastructure",
"Db",
"Migrations",
"001_initial_schema.sql");
Assert.True(File.Exists(path), $"Migration script missing at {path}");
}
[Fact]
public void Migration_EnablesRlsPolicies()
{
var sql = ReadMigrationSql();
Assert.Contains("timeline_app.require_current_tenant", sql, StringComparison.Ordinal);
Assert.Contains("timeline_events_isolation", sql, StringComparison.Ordinal);
Assert.Contains("timeline_event_details_isolation", sql, StringComparison.Ordinal);
Assert.Contains("timeline_event_digests_isolation", sql, StringComparison.Ordinal);
Assert.Contains("ENABLE ROW LEVEL SECURITY", sql, StringComparison.OrdinalIgnoreCase);
}
[Fact]
public void Migration_DefinesUniqueEventConstraint()
{
var sql = ReadMigrationSql();
Assert.Contains("UNIQUE (tenant_id, event_id)", sql, StringComparison.Ordinal);
Assert.Contains("event_seq bigserial", sql, StringComparison.Ordinal);
}
}


@@ -1,7 +1,18 @@
using StellaOps.TimelineIndexer.Worker;
var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddHostedService<Worker>();
var host = builder.Build();
host.Run();
using Microsoft.Extensions.Configuration;
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Infrastructure.DependencyInjection;
using StellaOps.TimelineIndexer.Infrastructure.Subscriptions;
using StellaOps.TimelineIndexer.Worker;
var builder = Host.CreateApplicationBuilder(args);
builder.Configuration.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);
builder.Configuration.AddJsonFile("appsettings.Development.json", optional: true, reloadOnChange: true);
builder.Configuration.AddEnvironmentVariables(prefix: "TIMELINE_");
builder.Services.AddTimelineIndexerPostgres(builder.Configuration);
builder.Services.AddSingleton<ITimelineEventSubscriber, NullTimelineEventSubscriber>();
builder.Services.AddHostedService<TimelineIngestionWorker>();
var host = builder.Build();
host.Run();


@@ -0,0 +1,71 @@
using System.Collections.Concurrent;
using System.Diagnostics.Metrics;
using System.Linq;
using StellaOps.TimelineIndexer.Core.Abstractions;
using StellaOps.TimelineIndexer.Core.Models;
namespace StellaOps.TimelineIndexer.Worker;
/// <summary>
/// Background consumer that reads timeline events from configured subscribers and persists them via the ingestion service.
/// </summary>
public sealed class TimelineIngestionWorker(
IEnumerable<ITimelineEventSubscriber> subscribers,
ITimelineIngestionService ingestionService,
ILogger<TimelineIngestionWorker> logger) : BackgroundService
{
private static readonly Meter Meter = new("StellaOps.TimelineIndexer", "1.0.0");
private static readonly Counter<long> IngestedCounter = Meter.CreateCounter<long>("timeline.ingested");
private static readonly Counter<long> DuplicateCounter = Meter.CreateCounter<long>("timeline.duplicates");
private static readonly Counter<long> FailedCounter = Meter.CreateCounter<long>("timeline.failed");
private readonly IEnumerable<ITimelineEventSubscriber> _subscribers = subscribers;
private readonly ITimelineIngestionService _ingestion = ingestionService;
private readonly ILogger<TimelineIngestionWorker> _logger = logger;
private readonly ConcurrentDictionary<(string tenant, string eventId), byte> _sessionSeen = new();
protected override Task ExecuteAsync(CancellationToken stoppingToken)
{
var tasks = _subscribers.Select(subscriber => ConsumeAsync(subscriber, stoppingToken)).ToArray();
return Task.WhenAll(tasks);
}
private async Task ConsumeAsync(ITimelineEventSubscriber subscriber, CancellationToken cancellationToken)
{
await foreach (var envelope in subscriber.SubscribeAsync(cancellationToken))
{
var key = (envelope.TenantId, envelope.EventId);
if (!_sessionSeen.TryAdd(key, 0))
{
DuplicateCounter.Add(1);
_logger.LogDebug("Skipped duplicate timeline event {EventId} for tenant {Tenant}", envelope.EventId, envelope.TenantId);
continue;
}
try
{
var result = await _ingestion.IngestAsync(envelope, cancellationToken).ConfigureAwait(false);
if (result.Inserted)
{
IngestedCounter.Add(1);
_logger.LogInformation("Ingested timeline event {EventId} from {Source} (tenant {Tenant})", envelope.EventId, envelope.Source, envelope.TenantId);
}
else
{
DuplicateCounter.Add(1);
_logger.LogDebug("Store reported duplicate for event {EventId} tenant {Tenant}", envelope.EventId, envelope.TenantId);
}
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
// Respect shutdown.
break;
}
catch (Exception ex)
{
FailedCounter.Add(1);
_logger.LogError(ex, "Failed to ingest timeline event {EventId} for tenant {Tenant}", envelope.EventId, envelope.TenantId);
}
}
}
}


@@ -1,16 +0,0 @@
namespace StellaOps.TimelineIndexer.Worker;
public class Worker(ILogger<Worker> logger) : BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
if (logger.IsEnabled(LogLevel.Information))
{
logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
}
await Task.Delay(1000, stoppingToken);
}
}
}