feat: Add initial implementation of Vulnerability Resolver Jobs
Some checks failed
Docs CI / lint-and-preview (push) Has been cancelled
- Created project for StellaOps.Scanner.Analyzers.Native.Tests with necessary dependencies.
- Documented roles and guidelines in AGENTS.md for Scheduler module.
- Implemented IResolverJobService interface and InMemoryResolverJobService for handling resolver jobs.
- Added ResolverBacklogNotifier and ResolverBacklogService for monitoring job metrics.
- Developed API endpoints for managing resolver jobs and retrieving metrics.
- Defined models for resolver job requests and responses.
- Integrated dependency injection for resolver job services.
- Implemented ImpactIndexSnapshot for persisting impact index data.
- Introduced SignalsScoringOptions for configurable scoring weights in reachability scoring.
- Added unit tests for ReachabilityScoringService and RuntimeFactsIngestionService.
- Created dotnet-filter.sh script to handle command-line arguments for dotnet.
- Established nuget-prime project for managing package downloads.
This commit is contained in:
46 src/AdvisoryAI/AGENTS.md Normal file
@@ -0,0 +1,46 @@
# Advisory AI · AGENTS

## Roles
- Backend engineer (.NET 10, C# preview) for `StellaOps.AdvisoryAI*` services and worker.
- Docs engineer for Advisory AI runbooks and user guides in `docs/advisory-ai` and related policy/SBOM docs.
- QA automation engineer for `__Tests/StellaOps.AdvisoryAI.Tests` (unit/golden/property/perf).

## Working Directory
- Primary: `src/AdvisoryAI/**` (WebService, Worker, Hosting, plugins, tests).
- Docs: `docs/advisory-ai/**`, `docs/policy/assistant-parameters.md`, `docs/sbom/*` when explicitly touched by sprint tasks.
- Shared libraries allowed only if referenced by Advisory AI projects; otherwise stay in-module.

## Required Reading (treat as read before DOING)
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/advisory-ai/architecture.md`
- Sprint context: `docs/implplan/SPRINT_0111_0001_0001_advisoryai.md`
- Guardrail and ops knobs: `docs/policy/assistant-parameters.md`

## Working Agreements
- Determinism first: stable ordering, seeded randomness, UTC ISO-8601 timestamps, content-addressed caches; no wall-clock timing in tests.
- Offline-friendly: no hardcoded external endpoints; respect BYO trust roots and offline bundles.
- Observability: structured logs with event ids; expose counters and (optional) OTEL traces guarded by config.
- Configuration: prefer `IOptions` + validated options with data annotations; map env vars in docs.
- Security: least privilege, short-lived keys, no embedding secrets; honor guardrail phrases and sanitization paths documented in policy knobs.
- Queue/cache: avoid unbounded growth; make capacities and TTLs configurable; default to conservative limits.

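The `IOptions` + data-annotations agreement above can be sketched as follows; `AdvisoryAiOptions` and its property names are hypothetical placeholders for illustration, not actual StellaOps types:

```csharp
using System.ComponentModel.DataAnnotations;

// Hypothetical options class; the names are placeholders, not real StellaOps config.
public sealed class AdvisoryAiOptions
{
    // Bounded capacity, per the queue/cache agreement: no unbounded growth.
    [Range(1, 100_000)]
    public int CacheCapacity { get; set; } = 1_000;

    // Constrained string knob; invalid configuration fails validation.
    [Required]
    [RegularExpression("^(low|medium|high)$")]
    public string GuardrailLevel { get; set; } = "medium";
}

// Typical registration (in Program.cs) so validation runs at startup:
// builder.Services.AddOptions<AdvisoryAiOptions>()
//     .Bind(builder.Configuration.GetSection("AdvisoryAi"))
//     .ValidateDataAnnotations()
//     .ValidateOnStart();
```

`ValidateOnStart` surfaces bad configuration at boot instead of at first use, which keeps failures deterministic.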
## Testing
- Run `dotnet test src/AdvisoryAI/__Tests/StellaOps.AdvisoryAI.Tests/StellaOps.AdvisoryAI.Tests.csproj` before marking DONE.
- Add/extend golden/property tests for new behaviors; keep fixtures deterministic (seeded caches, static input data).
- For perf-sensitive paths, keep benchmarks deterministic and skip in CI unless flagged.

## Docs & Change Sync
- When changing behaviors or contracts, update relevant docs under `docs/modules/advisory-ai`, `docs/policy/assistant-parameters.md`, or sprint-linked docs; mirror decisions in sprint **Decisions & Risks**.
- If new advisories/platform decisions occur, notify sprint log and link updated docs.

## Contracts & Dependencies
- SBOM context feed: follow `SBOM-AIAI-31-001` contract (idempotent, extend-only, no versioning).
- DevOps runbook `DEVOPS-AIAI-31-001` governs packaging/on-prem toggles; do not ship manifests without it.
- Console/CLI dependencies remain gating for UI/CLI docs (see sprint tracker).

## Tooling
- Target `net10.0`; use latest Microsoft.* packages compatible with net10.
- NuGet: prefer local cache `/local-nugets`; avoid floating versions.
- Linting/analyzers: keep nullable enabled; treat warnings as errors where feasible.
@@ -25,6 +25,19 @@ Deliver the Advisory AI assistant service that synthesizes advisory/VEX evidence
- `docs/modules/advisory-ai/architecture.md`
- `docs/modules/platform/architecture-overview.md`

## Roles & Boundaries
- **Backend engineer** – APIs, retrievers, guardrails, orchestrator glue under `src/AdvisoryAI/StellaOps.AdvisoryAI*` and shared fixtures in `src/AdvisoryAI/__Tests`.
- **Worker/queue engineer** – background processing and cache orchestration in `StellaOps.AdvisoryAI.Worker`.
- **Docs engineer** – Advisory AI docs in `docs/advisory-ai/*`, policy/sbom/runbooks in `docs/policy`, `docs/sbom`, `docs/runbooks`.
- **QA/Testing** – deterministic harnesses and golden/property/generative tests in `src/AdvisoryAI/__Tests`.
- Allowed shared dirs: `StellaOps.AdvisoryAI.Hosting`, `StellaOps.Concelier.PluginBinaries` (read-only plugins), and cross-module contracts under `docs/modules/advisory-ai/*`.

## Testing & Determinism
- Prefer golden/property tests with seeded randoms; fixtures live under `__Tests/Fixtures` with stable ordering.
- Cache keys must include tenant + SBOM hash + advisory digest; avoid wall-clock time in logic—use injected clocks.
- HTTP clients configurable via options + DI; set timeouts; no live network in unit tests (use test servers/mocks).
- When adding APIs, update OpenAPI and ensure validation/guardrail regressions are tested.
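The cache-key rule above (tenant + SBOM hash + advisory digest, nothing clock-derived) can be sketched as a content-addressed key builder; the helper name and signature are assumptions, not the actual StellaOps API:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical key builder; the real StellaOps implementation may differ.
public static class CacheKeys
{
    // Deterministic: identical inputs always produce the identical key,
    // and no wall-clock or random component is mixed in.
    public static string Build(string tenant, string sbomHash, string advisoryDigest)
    {
        var material = $"{tenant}|{sbomHash}|{advisoryDigest}";
        var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(material));
        return Convert.ToHexString(bytes).ToLowerInvariant();
    }
}
```

Because the key is a pure function of its inputs, cache entries are tenant-isolated and naturally invalidated when the SBOM or advisory content changes.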

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.

@@ -4056,4 +4056,88 @@ spec:
        return Task.FromResult(_response);
    }
}

[Fact]
public async Task HandleOfflineKitStatusAsync_AsJsonRendersPayload()
{
    var originalExit = Environment.ExitCode;
    var originalConsole = AnsiConsole.Console;

    try
    {
        Environment.ExitCode = 0;
        var console = new TestConsole();
        AnsiConsole.Console = console;

        var backend = new StubBackendClient(new JobTriggerResult(true, "ok", null, null))
        {
            OfflineStatus = new OfflineKitStatus(
                "bundle-123",
                "stable",
                "kit",
                false,
                null,
                DateTimeOffset.Parse("2025-11-03T00:00:00Z", CultureInfo.InvariantCulture),
                DateTimeOffset.Parse("2025-11-04T00:00:00Z", CultureInfo.InvariantCulture),
                "sha256:deadbeef",
                1024,
                new[]
                {
                    new OfflineKitComponentStatus("scanner", "1.0.0", "abc", DateTimeOffset.Parse("2025-11-03T00:00:00Z", CultureInfo.InvariantCulture), 512)
                })
        };

        var provider = BuildServiceProvider(backend);

        await CommandHandlers.HandleOfflineKitStatusAsync(
            provider,
            asJson: true,
            verbose: false,
            cancellationToken: CancellationToken.None);

        Assert.Equal(0, Environment.ExitCode);
        Assert.Contains("bundle-123", console.Output, StringComparison.OrdinalIgnoreCase);
        Assert.Contains("scanner", console.Output, StringComparison.OrdinalIgnoreCase);
    }
    finally
    {
        Environment.ExitCode = originalExit;
        AnsiConsole.Console = originalConsole;
    }
}

[Fact]
public async Task HandleOfflineKitStatusAsync_AsJsonHandlesEmptyStatus()
{
    var originalExit = Environment.ExitCode;
    var originalConsole = AnsiConsole.Console;

    try
    {
        Environment.ExitCode = 0;
        var console = new TestConsole();
        AnsiConsole.Console = console;

        var backend = new StubBackendClient(new JobTriggerResult(true, "ok", null, null))
        {
            OfflineStatus = new OfflineKitStatus(null, null, null, false, null, null, null, null, null, Array.Empty<OfflineKitComponentStatus>())
        };

        var provider = BuildServiceProvider(backend);

        await CommandHandlers.HandleOfflineKitStatusAsync(
            provider,
            asJson: true,
            verbose: false,
            cancellationToken: CancellationToken.None);

        Assert.Equal(0, Environment.ExitCode);
        Assert.Contains("\"bundleId\": null", console.Output, StringComparison.OrdinalIgnoreCase);
    }
    finally
    {
        Environment.ExitCode = originalExit;
        AnsiConsole.Console = originalConsole;
    }
}

@@ -5,14 +5,13 @@ namespace StellaOps.Concelier.WebService.Contracts;

public sealed record AdvisoryStructuredFieldResponse(
    string AdvisoryKey,
    string Fingerprint,
    int Total,
    bool Truncated,
    IReadOnlyList<AdvisoryStructuredFieldEntry> Entries);

public sealed record AdvisoryStructuredFieldEntry(
    string Type,
    string DocumentId,
    string FieldPath,
    string ChunkId,
    AdvisoryStructuredFieldContent Content,
    AdvisoryStructuredFieldProvenance Provenance);
@@ -65,6 +64,8 @@ public sealed record AdvisoryStructuredAffectedContent(
    string? Status);

public sealed record AdvisoryStructuredFieldProvenance(
    string DocumentId,
    string ObservationPath,
    string Source,
    string Kind,
    string? Value,

@@ -1,16 +1,19 @@
using System.Collections.Immutable;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.RawModels;

namespace StellaOps.Concelier.WebService.Contracts;

public sealed record AdvisoryObservationQueryResponse(
    ImmutableArray<AdvisoryObservation> Observations,
    AdvisoryObservationLinksetAggregateResponse Linkset,
    string? NextCursor,
    bool HasMore);

public sealed record AdvisoryObservationLinksetAggregateResponse(
    ImmutableArray<string> Aliases,
    ImmutableArray<string> Purls,
    ImmutableArray<string> Cpes,
    ImmutableArray<AdvisoryObservationReference> References);
public sealed record AdvisoryObservationQueryResponse(
    ImmutableArray<AdvisoryObservation> Observations,
    AdvisoryObservationLinksetAggregateResponse Linkset,
    string? NextCursor,
    bool HasMore);

public sealed record AdvisoryObservationLinksetAggregateResponse(
    ImmutableArray<string> Aliases,
    ImmutableArray<string> Purls,
    ImmutableArray<string> Cpes,
    ImmutableArray<AdvisoryObservationReference> References,
    ImmutableArray<string> Scopes,
    ImmutableArray<RawRelationship> Relationships);

@@ -45,18 +45,26 @@ public sealed record AdvisoryIdentifiersRequest(
    [property: JsonPropertyName("primary")] string Primary,
    [property: JsonPropertyName("aliases")] IReadOnlyList<string>? Aliases);

public sealed record AdvisoryLinksetRequest(
    [property: JsonPropertyName("aliases")] IReadOnlyList<string>? Aliases,
    [property: JsonPropertyName("purls")] IReadOnlyList<string>? PackageUrls,
    [property: JsonPropertyName("cpes")] IReadOnlyList<string>? Cpes,
    [property: JsonPropertyName("references")] IReadOnlyList<AdvisoryLinksetReferenceRequest>? References,
    [property: JsonPropertyName("reconciledFrom")] IReadOnlyList<string>? ReconciledFrom,
    [property: JsonPropertyName("notes")] IDictionary<string, string>? Notes);

public sealed record AdvisoryLinksetReferenceRequest(
    [property: JsonPropertyName("type")] string Type,
    [property: JsonPropertyName("url")] string Url,
    [property: JsonPropertyName("source")] string? Source);
public sealed record AdvisoryLinksetRequest(
    [property: JsonPropertyName("aliases")] IReadOnlyList<string>? Aliases,
    [property: JsonPropertyName("scopes")] IReadOnlyList<string>? Scopes,
    [property: JsonPropertyName("relationships")] IReadOnlyList<AdvisoryLinksetRelationshipRequest>? Relationships,
    [property: JsonPropertyName("purls")] IReadOnlyList<string>? PackageUrls,
    [property: JsonPropertyName("cpes")] IReadOnlyList<string>? Cpes,
    [property: JsonPropertyName("references")] IReadOnlyList<AdvisoryLinksetReferenceRequest>? References,
    [property: JsonPropertyName("reconciledFrom")] IReadOnlyList<string>? ReconciledFrom,
    [property: JsonPropertyName("notes")] IDictionary<string, string>? Notes);

public sealed record AdvisoryLinksetRelationshipRequest(
    [property: JsonPropertyName("type")] string Type,
    [property: JsonPropertyName("source")] string Source,
    [property: JsonPropertyName("target")] string Target,
    [property: JsonPropertyName("provenance")] string? Provenance);

public sealed record AdvisoryLinksetReferenceRequest(
    [property: JsonPropertyName("type")] string Type,
    [property: JsonPropertyName("url")] string Url,
    [property: JsonPropertyName("source")] string? Source);

public sealed record AdvisoryIngestResponse(
    [property: JsonPropertyName("id")] string Id,

@@ -68,6 +68,8 @@ internal static class AdvisoryRawRequestMapper
        var linkset = new RawLinkset
        {
            Aliases = NormalizeStrings(linksetRequest?.Aliases),
            Scopes = NormalizeStrings(linksetRequest?.Scopes),
            Relationships = NormalizeRelationships(linksetRequest?.Relationships),
            PackageUrls = NormalizeStrings(linksetRequest?.PackageUrls),
            Cpes = NormalizeStrings(linksetRequest?.Cpes),
            References = NormalizeReferences(linksetRequest?.References),
@@ -135,7 +137,7 @@ internal static class AdvisoryRawRequestMapper
        if (references is null)
        {
            return ImmutableArray<RawReference>.Empty;
        }
        }

        var builder = ImmutableArray.CreateBuilder<RawReference>();
        foreach (var reference in references)
@@ -151,10 +153,38 @@ internal static class AdvisoryRawRequestMapper
            }

            builder.Add(new RawReference(reference.Type.Trim(), reference.Url.Trim(), string.IsNullOrWhiteSpace(reference.Source) ? null : reference.Source.Trim()));
        }

        return builder.Count == 0 ? ImmutableArray<RawReference>.Empty : builder.ToImmutable();
    }
}

        return builder.Count == 0 ? ImmutableArray<RawReference>.Empty : builder.ToImmutable();
    }

    private static ImmutableArray<RawRelationship> NormalizeRelationships(IEnumerable<AdvisoryLinksetRelationshipRequest>? relationships)
    {
        if (relationships is null)
        {
            return ImmutableArray<RawRelationship>.Empty;
        }

        var builder = ImmutableArray.CreateBuilder<RawRelationship>();
        foreach (var relationship in relationships)
        {
            if (relationship is null
                || string.IsNullOrWhiteSpace(relationship.Type)
                || string.IsNullOrWhiteSpace(relationship.Source)
                || string.IsNullOrWhiteSpace(relationship.Target))
            {
                continue;
            }

            builder.Add(new RawRelationship(
                relationship.Type.Trim(),
                relationship.Source.Trim(),
                relationship.Target.Trim(),
                string.IsNullOrWhiteSpace(relationship.Provenance) ? null : relationship.Provenance.Trim()));
        }

        return builder.Count == 0 ? ImmutableArray<RawRelationship>.Empty : builder.ToImmutable();
    }

    private static JsonElement NormalizeRawContent(JsonElement element)
    {

@@ -438,7 +438,9 @@ var observationsEndpoint = app.MapGet("/concelier/observations", async (
        result.Linkset.Aliases,
        result.Linkset.Purls,
        result.Linkset.Cpes,
        result.Linkset.References),
        result.Linkset.References,
        result.Linkset.Scopes,
        result.Linkset.Relationships),
    result.NextCursor,
    result.HasMore);

@@ -861,6 +863,7 @@ var advisoryChunksEndpoint = app.MapGet("/advisories/{advisoryKey}/chunks", asyn
    var formatFilter = BuildFilterSet(context.Request.Query["format"]);

    var resolution = await ResolveAdvisoryAsync(
        tenant,
        normalizedKey,
        advisoryStore,
        aliasStore,
@@ -891,6 +894,7 @@ var advisoryChunksEndpoint = app.MapGet("/advisories/{advisoryKey}/chunks", asyn
    var observations = observationResult.Observations.ToArray();
    var buildOptions = new AdvisoryChunkBuildOptions(
        advisory.AdvisoryKey,
        fingerprint,
        chunkLimit,
        observationLimit,
        sectionFilter,
@@ -1319,11 +1323,17 @@ IResult? EnsureTenantAuthorized(HttpContext context, string tenant)
}

async Task<(Advisory Advisory, ImmutableArray<string> Aliases, string Fingerprint)?> ResolveAdvisoryAsync(
    string tenant,
    string advisoryKey,
    IAdvisoryStore advisoryStore,
    IAliasStore aliasStore,
    CancellationToken cancellationToken)
{
    if (string.IsNullOrWhiteSpace(tenant))
    {
        return null;
    }

    ArgumentNullException.ThrowIfNull(advisoryStore);
    ArgumentNullException.ThrowIfNull(aliasStore);


@@ -12,6 +12,7 @@ namespace StellaOps.Concelier.WebService.Services;

internal sealed record AdvisoryChunkBuildOptions(
    string AdvisoryKey,
    string Fingerprint,
    int ChunkLimit,
    int ObservationLimit,
    ImmutableHashSet<string> SectionFilter,
@@ -56,9 +57,7 @@ internal sealed class AdvisoryChunkBuilder

        var vendorIndex = new ObservationIndex(observations);
        var chunkLimit = Math.Max(1, options.ChunkLimit);
        var entries = new List<AdvisoryStructuredFieldEntry>(chunkLimit);
        var total = 0;
        var truncated = false;
        var entries = new List<AdvisoryStructuredFieldEntry>();
        var sectionFilter = options.SectionFilter ?? ImmutableHashSet<string>.Empty;

        foreach (var section in SectionOrder)
@@ -82,31 +81,25 @@ internal sealed class AdvisoryChunkBuilder
                continue;
            }

            total += bucket.Count;

            if (entries.Count >= chunkLimit)
            {
                truncated = true;
                continue;
            }

            var remaining = chunkLimit - entries.Count;
            if (bucket.Count <= remaining)
            {
                entries.AddRange(bucket);
            }
            else
            {
                entries.AddRange(bucket.Take(remaining));
                truncated = true;
            }
            entries.AddRange(bucket);
        }

        var ordered = entries
            .OrderBy(static entry => entry.Type, StringComparer.Ordinal)
            .ThenBy(static entry => entry.Provenance.ObservationPath, StringComparer.Ordinal)
            .ThenBy(static entry => entry.Provenance.DocumentId, StringComparer.Ordinal)
            .ToArray();

        var total = ordered.Length;
        var truncated = total > chunkLimit;
        var limited = truncated ? ordered.Take(chunkLimit).ToArray() : ordered;

        var response = new AdvisoryStructuredFieldResponse(
            options.AdvisoryKey,
            options.Fingerprint,
            total,
            truncated,
            entries);
            limited);

        var telemetry = new AdvisoryChunkTelemetrySummary(
            vendorIndex.SourceCount,
@@ -284,11 +277,11 @@ internal sealed class AdvisoryChunkBuilder

        return new AdvisoryStructuredFieldEntry(
            type,
            documentId,
            fieldPath,
            chunkId,
            content,
            new AdvisoryStructuredFieldProvenance(
                documentId,
                fieldPath,
                provenance.Source,
                provenance.Kind,
                provenance.Value,

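The builder change above moves from incremental per-bucket truncation to a gather, order deterministically, then truncate pass, so `Total` reports the full pre-truncation count. A standalone sketch of that pattern, using a simplified stand-in for the real entry type:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for AdvisoryStructuredFieldEntry.
public sealed record Entry(string Type, string Path);

public static class ChunkTruncation
{
    // Order the full set with ordinal comparers (stable across runs),
    // then cut to chunkLimit; Total lets callers detect truncation.
    public static (Entry[] Limited, int Total, bool Truncated) Apply(
        IEnumerable<Entry> entries, int chunkLimit)
    {
        var ordered = entries
            .OrderBy(e => e.Type, StringComparer.Ordinal)
            .ThenBy(e => e.Path, StringComparer.Ordinal)
            .ToArray();

        var total = ordered.Length;
        var truncated = total > chunkLimit;
        var limited = truncated ? ordered.Take(chunkLimit).ToArray() : ordered;
        return (limited, total, truncated);
    }
}
```

Truncating after the global sort means the surviving entries are always the same ones, regardless of the order buckets were produced in.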
@@ -13,6 +13,8 @@ public sealed record AdvisoryLinkset(
    ImmutableArray<string> ObservationIds,
    AdvisoryLinksetNormalized? Normalized,
    AdvisoryLinksetProvenance? Provenance,
    double? Confidence,
    IReadOnlyList<AdvisoryLinksetConflict>? Conflicts,
    DateTimeOffset CreatedAt,
    string? BuiltByJobId);

@@ -34,6 +36,11 @@ public sealed record AdvisoryLinksetProvenance(
    string? ToolVersion,
    string? PolicyHash);

public sealed record AdvisoryLinksetConflict(
    string Field,
    string Reason,
    IReadOnlyList<string>? Values);

internal static class BsonDocumentHelper
{
    public static BsonDocument FromDictionary(Dictionary<string, object?> dictionary)

@@ -1,3 +1,4 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;
@@ -55,6 +56,8 @@ internal sealed class AdvisoryLinksetBackfillService : IAdvisoryLinksetBackfillS
    observationIds,
    normalized,
    null,
    1.0,
    Array.Empty<AdvisoryLinksetConflict>(),
    createdAt,
    null);


@@ -24,6 +24,19 @@ internal static class AdvisoryLinksetNormalization
        return Build(purls);
    }

    public static (AdvisoryLinksetNormalized? normalized, double? confidence, IReadOnlyList<AdvisoryLinksetConflict> conflicts) FromRawLinksetWithConfidence(
        RawLinkset linkset,
        double? providedConfidence = null)
    {
        ArgumentNullException.ThrowIfNull(linkset);

        var normalized = Build(linkset.PackageUrls);
        var confidence = CoerceConfidence(providedConfidence);
        var conflicts = ExtractConflicts(linkset);

        return (normalized, confidence, conflicts);
    }

    private static AdvisoryLinksetNormalized? Build(IEnumerable<string> purlValues)
    {
        var normalizedPurls = NormalizePurls(purlValues);
@@ -75,4 +88,44 @@ internal static class AdvisoryLinksetNormalization

        return versions.ToList();
    }

    private static double? CoerceConfidence(double? confidence)
    {
        if (!confidence.HasValue)
        {
            return null;
        }

        if (double.IsNaN(confidence.Value) || double.IsInfinity(confidence.Value))
        {
            return null;
        }

        return Math.Clamp(confidence.Value, 0d, 1d);
    }

    private static IReadOnlyList<AdvisoryLinksetConflict> ExtractConflicts(RawLinkset linkset)
    {
        if (linkset.Notes is null || linkset.Notes.Count == 0)
        {
            return Array.Empty<AdvisoryLinksetConflict>();
        }

        var conflicts = new List<AdvisoryLinksetConflict>();

        foreach (var note in linkset.Notes)
        {
            if (string.IsNullOrWhiteSpace(note.Key) || string.IsNullOrWhiteSpace(note.Value))
            {
                continue;
            }

            conflicts.Add(new AdvisoryLinksetConflict(
                note.Key.Trim(),
                note.Value.Trim(),
                null));
        }

        return conflicts;
    }
}

@@ -0,0 +1,15 @@
using System.Collections.Immutable;
using StellaOps.Concelier.Models.Observations;

namespace StellaOps.Concelier.Core.Observations;

/// <summary>
/// Aggregated linkset facets (aliases, purls, cpes, references, scopes, relationships) built from a set of observations.
/// </summary>
public sealed record AdvisoryObservationLinksetAggregate(
    ImmutableArray<string> Aliases,
    ImmutableArray<string> Purls,
    ImmutableArray<string> Cpes,
    ImmutableArray<AdvisoryObservationReference> References,
    ImmutableArray<string> Scopes,
    ImmutableArray<RawRelationship> Relationships);
@@ -1,6 +1,7 @@
using System.Collections.Immutable;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Models.Observations;
using System.Collections.Immutable;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.RawModels;

namespace StellaOps.Concelier.Core.Observations;

@@ -66,17 +67,19 @@ public sealed record AdvisoryObservationQueryOptions
/// <summary>
/// Query result containing observations and their aggregated linkset hints.
/// </summary>
public sealed record AdvisoryObservationQueryResult(
    ImmutableArray<AdvisoryObservation> Observations,
    AdvisoryObservationLinksetAggregate Linkset,
    string? NextCursor,
    bool HasMore);

/// <summary>
/// Aggregated linkset built from the observations returned by a query.
/// </summary>
public sealed record AdvisoryObservationLinksetAggregate(
    ImmutableArray<string> Aliases,
    ImmutableArray<string> Purls,
    ImmutableArray<string> Cpes,
    ImmutableArray<AdvisoryObservationReference> References);
public sealed record AdvisoryObservationQueryResult(
    ImmutableArray<AdvisoryObservation> Observations,
    AdvisoryObservationLinksetAggregate Linkset,
    string? NextCursor,
    bool HasMore);

/// <summary>
/// Aggregated linkset built from the observations returned by a query.
/// </summary>
public sealed record AdvisoryObservationLinksetAggregate(
    ImmutableArray<string> Aliases,
    ImmutableArray<string> Purls,
    ImmutableArray<string> Cpes,
    ImmutableArray<AdvisoryObservationReference> References,
    ImmutableArray<string> Scopes,
    ImmutableArray<RawRelationship> Relationships);

@@ -1,8 +1,9 @@
using System.Collections.Immutable;
using System.Globalization;
using System.Text;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Models.Observations;
using System.Collections.Immutable;
using System.Globalization;
using System.Text;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.RawModels;

namespace StellaOps.Concelier.Core.Observations;

@@ -195,24 +196,28 @@ public sealed class AdvisoryObservationQueryService : IAdvisoryObservationQueryS

    private static AdvisoryObservationLinksetAggregate BuildAggregateLinkset(ImmutableArray<AdvisoryObservation> observations)
    {
        if (observations.IsDefaultOrEmpty)
        {
            return new AdvisoryObservationLinksetAggregate(
                ImmutableArray<string>.Empty,
                ImmutableArray<string>.Empty,
                ImmutableArray<string>.Empty,
                ImmutableArray<AdvisoryObservationReference>.Empty);
        }
        if (observations.IsDefaultOrEmpty)
        {
            return new AdvisoryObservationLinksetAggregate(
                ImmutableArray<string>.Empty,
                ImmutableArray<string>.Empty,
                ImmutableArray<string>.Empty,
                ImmutableArray<AdvisoryObservationReference>.Empty,
                ImmutableArray<string>.Empty,
                ImmutableArray<RawRelationship>.Empty);
        }

        var aliasSet = new HashSet<string>(StringComparer.Ordinal);
        var purlSet = new HashSet<string>(StringComparer.Ordinal);
        var cpeSet = new HashSet<string>(StringComparer.Ordinal);
        var referenceSet = new HashSet<AdvisoryObservationReference>();

        foreach (var observation in observations)
        {
            foreach (var alias in observation.Linkset.Aliases)
            {
        var referenceSet = new HashSet<AdvisoryObservationReference>();
        var scopeSet = new HashSet<string>(StringComparer.Ordinal);
        var relationshipSet = new HashSet<RawRelationship>();

        foreach (var observation in observations)
        {
            foreach (var alias in observation.Linkset.Aliases)
            {
                aliasSet.Add(alias);
            }

@@ -227,18 +232,34 @@ public sealed class AdvisoryObservationQueryService : IAdvisoryObservationQueryS
            }

            foreach (var reference in observation.Linkset.References)
            {
                referenceSet.Add(reference);
            }
        }

        return new AdvisoryObservationLinksetAggregate(
            aliasSet.OrderBy(static alias => alias, StringComparer.Ordinal).ToImmutableArray(),
            purlSet.OrderBy(static purl => purl, StringComparer.Ordinal).ToImmutableArray(),
            cpeSet.OrderBy(static cpe => cpe, StringComparer.Ordinal).ToImmutableArray(),
            referenceSet
                .OrderBy(static reference => reference.Type, StringComparer.Ordinal)
                .ThenBy(static reference => reference.Url, StringComparer.Ordinal)
                .ToImmutableArray());
    }
}
            {
                referenceSet.Add(reference);
            }

            foreach (var scope in observation.RawLinkset.Scopes)
            {
                scopeSet.Add(scope);
            }

            foreach (var relationship in observation.RawLinkset.Relationships)
            {
                relationshipSet.Add(relationship);
            }
        }

        return new AdvisoryObservationLinksetAggregate(
            aliasSet.OrderBy(static alias => alias, StringComparer.Ordinal).ToImmutableArray(),
            purlSet.OrderBy(static purl => purl, StringComparer.Ordinal).ToImmutableArray(),
            cpeSet.OrderBy(static cpe => cpe, StringComparer.Ordinal).ToImmutableArray(),
            referenceSet
                .OrderBy(static reference => reference.Type, StringComparer.Ordinal)
                .ThenBy(static reference => reference.Url, StringComparer.Ordinal)
                .ToImmutableArray(),
            scopeSet.OrderBy(static scope => scope, StringComparer.Ordinal).ToImmutableArray(),
            relationshipSet
                .OrderBy(static rel => rel.Type, StringComparer.Ordinal)
                .ThenBy(static rel => rel.Source, StringComparer.Ordinal)
                .ThenBy(static rel => rel.Target, StringComparer.Ordinal)
                .ToImmutableArray());
    }
}

@@ -116,7 +116,10 @@ internal sealed class AdvisoryRawService : IAdvisoryRawService
|
||||
var observation = _observationFactory.Create(enriched, _timeProvider.GetUtcNow());
|
||||
await _observationSink.UpsertAsync(observation, cancellationToken).ConfigureAwait(false);
|
||||
|
||||
var normalizedLinkset = AdvisoryLinksetNormalization.FromRawLinkset(enriched.Linkset);
|
||||
var (normalizedLinkset, confidence, conflicts) = AdvisoryLinksetNormalization.FromRawLinksetWithConfidence(
|
||||
enriched.Linkset,
|
||||
providedConfidence: null);
|
||||
|
||||
var linkset = new AdvisoryLinkset(
|
||||
tenant,
|
||||
source,
|
||||
@@ -124,6 +127,8 @@ internal sealed class AdvisoryRawService : IAdvisoryRawService
|
||||
ImmutableArray.Create(observation.ObservationId),
|
||||
normalizedLinkset,
|
||||
null,
|
||||
confidence ?? 1.0,
|
||||
conflicts,
|
||||
_timeProvider.GetUtcNow(),
|
||||
null);
|
||||
|
||||
|
||||
@@ -95,6 +95,26 @@ public sealed record AdvisoryObservation
|
||||
return references;
|
||||
}
|
||||
|
||||
static ImmutableArray<RawRelationship> SanitizeRelationships(ImmutableArray<RawRelationship> relationships)
|
||||
{
|
||||
if (relationships.IsDefault)
|
||||
{
|
||||
return ImmutableArray<RawRelationship>.Empty;
|
||||
}
|
||||
|
||||
return relationships;
|
||||
}
|
||||
|
||||
static ImmutableArray<string> SanitizeScopes(ImmutableArray<string> scopes)
|
||||
{
|
||||
if (scopes.IsDefault)
|
||||
{
|
||||
return ImmutableArray<string>.Empty;
|
||||
}
|
||||
|
||||
return scopes;
|
||||
}
|
||||
|
||||
static ImmutableDictionary<string, string> SanitizeNotes(ImmutableDictionary<string, string>? notes)
|
||||
{
|
||||
if (notes is null || notes.Count == 0)
|
||||
@@ -108,6 +128,8 @@ public sealed record AdvisoryObservation
|
||||
return rawLinkset with
|
||||
{
|
||||
Aliases = SanitizeStrings(rawLinkset.Aliases),
|
||||
Scopes = SanitizeScopes(rawLinkset.Scopes),
|
||||
Relationships = SanitizeRelationships(rawLinkset.Relationships),
|
||||
PackageUrls = SanitizeStrings(rawLinkset.PackageUrls),
|
||||
Cpes = SanitizeStrings(rawLinkset.Cpes),
|
||||
References = SanitizeReferences(rawLinkset.References),
|
||||
|
||||
@@ -0,0 +1,240 @@
using System.Collections.Immutable;

namespace StellaOps.Concelier.Models.Observations;

/// <summary>
/// Version 1 Link-Not-Merge observation record: immutable, per-source payload with provenance and tenant guards.
/// </summary>
public sealed record AdvisoryObservationV1
{
    public AdvisoryObservationV1(
        string observationId,
        string tenant,
        string source,
        string advisoryId,
        string? title,
        string? summary,
        ImmutableArray<ObservationSeverity> severities,
        ImmutableArray<ObservationAffected> affected,
        ImmutableArray<string> references,
        ImmutableArray<string> weaknesses,
        DateTimeOffset? published,
        DateTimeOffset? modified,
        ObservationProvenance provenance,
        DateTimeOffset ingestedAt,
        string? supersedesObservationId = null)
    {
        ObservationId = Validation.EnsureNotNullOrWhiteSpace(observationId, nameof(observationId));
        Tenant = Validation.EnsureNotNullOrWhiteSpace(tenant, nameof(tenant)).ToLowerInvariant();
        Source = Validation.EnsureNotNullOrWhiteSpace(source, nameof(source)).ToLowerInvariant();
        AdvisoryId = Validation.EnsureNotNullOrWhiteSpace(advisoryId, nameof(advisoryId));
        Title = Validation.TrimToNull(title);
        Summary = Validation.TrimToNull(summary);
        Severities = Normalize(severities);
        Affected = Normalize(affected);
        References = NormalizeStrings(references);
        Weaknesses = NormalizeStrings(weaknesses);
        Published = published?.ToUniversalTime();
        Modified = modified?.ToUniversalTime();
        Provenance = provenance ?? throw new ArgumentNullException(nameof(provenance));
        IngestedAt = ingestedAt.ToUniversalTime();
        SupersedesObservationId = Validation.TrimToNull(supersedesObservationId);
    }

    public string ObservationId { get; }

    public string Tenant { get; }

    public string Source { get; }

    public string AdvisoryId { get; }

    public string? Title { get; }

    public string? Summary { get; }

    public ImmutableArray<ObservationSeverity> Severities { get; }

    public ImmutableArray<ObservationAffected> Affected { get; }

    public ImmutableArray<string> References { get; }

    public ImmutableArray<string> Weaknesses { get; }

    public DateTimeOffset? Published { get; }

    public DateTimeOffset? Modified { get; }

    public ObservationProvenance Provenance { get; }

    public DateTimeOffset IngestedAt { get; }

    public string? SupersedesObservationId { get; }

    private static ImmutableArray<string> NormalizeStrings(ImmutableArray<string> values)
    {
        if (values.IsDefaultOrEmpty)
        {
            return ImmutableArray<string>.Empty;
        }

        var builder = ImmutableArray.CreateBuilder<string>();
        foreach (var value in values)
        {
            if (string.IsNullOrWhiteSpace(value))
            {
                continue;
            }

            builder.Add(value.Trim());
        }

        return builder.Count == 0 ? ImmutableArray<string>.Empty : builder.ToImmutable();
    }

    private static ImmutableArray<T> Normalize<T>(ImmutableArray<T> values)
    {
        if (values.IsDefaultOrEmpty)
        {
            return ImmutableArray<T>.Empty;
        }

        return values;
    }
}

public sealed class ObservationSeverity
{
    public ObservationSeverity(string system, double score, string? vector)
    {
        System = Validation.EnsureNotNullOrWhiteSpace(system, nameof(system));
        Score = score;
        Vector = Validation.TrimToNull(vector);
    }

    public string System { get; }

    public double Score { get; }

    public string? Vector { get; }
}

public sealed class ObservationAffected
{
    public ObservationAffected(
        string purl,
        string? package,
        ImmutableArray<string> versions,
        ImmutableArray<ObservationVersionRange> ranges,
        string? ecosystem,
        ImmutableArray<string> cpes)
    {
        Purl = Validation.EnsureNotNullOrWhiteSpace(purl, nameof(purl));
        Package = Validation.TrimToNull(package);
        Versions = NormalizeStrings(versions);
        Ranges = ranges.IsDefault ? ImmutableArray<ObservationVersionRange>.Empty : ranges;
        Ecosystem = Validation.TrimToNull(ecosystem);
        Cpes = NormalizeStrings(cpes);
    }

    public string Purl { get; }

    public string? Package { get; }

    public ImmutableArray<string> Versions { get; }

    public ImmutableArray<ObservationVersionRange> Ranges { get; }

    public string? Ecosystem { get; }

    public ImmutableArray<string> Cpes { get; }

    private static ImmutableArray<string> NormalizeStrings(ImmutableArray<string> values)
    {
        if (values.IsDefaultOrEmpty)
        {
            return ImmutableArray<string>.Empty;
        }

        var builder = ImmutableArray.CreateBuilder<string>();
        foreach (var value in values)
        {
            if (string.IsNullOrWhiteSpace(value))
            {
                continue;
            }

            builder.Add(value.Trim());
        }

        return builder.Count == 0 ? ImmutableArray<string>.Empty : builder.ToImmutable();
    }
}

public sealed class ObservationVersionRange
{
    public ObservationVersionRange(string type, ImmutableArray<ObservationRangeEvent> events)
    {
        Type = Validation.EnsureNotNullOrWhiteSpace(type, nameof(type));
        Events = events.IsDefault ? ImmutableArray<ObservationRangeEvent>.Empty : events;
    }

    public string Type { get; }

    public ImmutableArray<ObservationRangeEvent> Events { get; }
}

public sealed class ObservationRangeEvent
{
    public ObservationRangeEvent(string @event, string value)
    {
        Event = Validation.EnsureNotNullOrWhiteSpace(@event, nameof(@event));
        Value = Validation.EnsureNotNullOrWhiteSpace(value, nameof(value));
    }

    public string Event { get; }

    public string Value { get; }
}

public sealed class ObservationProvenance
{
    public ObservationProvenance(
        string sourceArtifactSha,
        DateTimeOffset fetchedAt,
        string? ingestJobId,
        ObservationSignature? signature)
    {
        SourceArtifactSha = Validation.EnsureNotNullOrWhiteSpace(sourceArtifactSha, nameof(sourceArtifactSha));
        FetchedAt = fetchedAt.ToUniversalTime();
        IngestJobId = Validation.TrimToNull(ingestJobId);
        Signature = signature;
    }

    public string SourceArtifactSha { get; }

    public DateTimeOffset FetchedAt { get; }

    public string? IngestJobId { get; }

    public ObservationSignature? Signature { get; }
}

public sealed class ObservationSignature
{
    public ObservationSignature(bool present, string? format, string? keyId, string? signatureValue)
    {
        Present = present;
        Format = Validation.TrimToNull(format);
        KeyId = Validation.TrimToNull(keyId);
        SignatureValue = Validation.TrimToNull(signatureValue);
    }

    public bool Present { get; }

    public string? Format { get; }

    public string? KeyId { get; }

    public string? SignatureValue { get; }
}
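
The V1 record above normalises casing and timestamps in its constructor. A minimal construction sketch (all field values are hypothetical; `Validation` is the module's own helper shown in the record):

```csharp
// Hypothetical values throughout; tenant/source are lowercased by the
// constructor and timestamps are coerced to UTC.
var observation = new AdvisoryObservationV1(
    observationId: "obs-0001",
    tenant: "Tenant-A",              // stored as "tenant-a"
    source: "GHSA",                  // stored as "ghsa"
    advisoryId: "GHSA-0000-0000-0000",
    title: "Example advisory",
    summary: null,
    severities: ImmutableArray.Create(new ObservationSeverity("cvss_v3", 7.5, vector: null)),
    affected: ImmutableArray<ObservationAffected>.Empty,
    references: ImmutableArray.Create("https://example.test/advisory"),
    weaknesses: ImmutableArray<string>.Empty,
    published: DateTimeOffset.UtcNow,
    modified: null,
    provenance: new ObservationProvenance("sha256:placeholder", DateTimeOffset.UtcNow, ingestJobId: null, signature: null),
    ingestedAt: DateTimeOffset.UtcNow);
```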

@@ -55,23 +55,35 @@ public sealed record RawLinkset
{
    [JsonPropertyName("aliases")]
    public ImmutableArray<string> Aliases { get; init; } = ImmutableArray<string>.Empty;

    [JsonPropertyName("purls")]
    public ImmutableArray<string> PackageUrls { get; init; } = ImmutableArray<string>.Empty;

    [JsonPropertyName("cpes")]
    public ImmutableArray<string> Cpes { get; init; } = ImmutableArray<string>.Empty;

    [JsonPropertyName("scopes")]
    public ImmutableArray<string> Scopes { get; init; } = ImmutableArray<string>.Empty;

    [JsonPropertyName("relationships")]
    public ImmutableArray<RawRelationship> Relationships { get; init; } = ImmutableArray<RawRelationship>.Empty;

    [JsonPropertyName("purls")]
    public ImmutableArray<string> PackageUrls { get; init; } = ImmutableArray<string>.Empty;

    [JsonPropertyName("cpes")]
    public ImmutableArray<string> Cpes { get; init; } = ImmutableArray<string>.Empty;

    [JsonPropertyName("references")]
    public ImmutableArray<RawReference> References { get; init; } = ImmutableArray<RawReference>.Empty;

    [JsonPropertyName("reconciled_from")]
    public ImmutableArray<string> ReconciledFrom { get; init; } = ImmutableArray<string>.Empty;

    [JsonPropertyName("notes")]
    public ImmutableDictionary<string, string> Notes { get; init; } = ImmutableDictionary<string, string>.Empty;

    [JsonPropertyName("notes")]
    public ImmutableDictionary<string, string> Notes { get; init; } = ImmutableDictionary<string, string>.Empty;
}

public sealed record RawRelationship(
    [property: JsonPropertyName("type")] string Type,
    [property: JsonPropertyName("source")] string Source,
    [property: JsonPropertyName("target")] string Target,
    [property: JsonPropertyName("provenance")] string? Provenance = null);

public sealed record RawReference(
    [property: JsonPropertyName("type")] string Type,
    [property: JsonPropertyName("url")] string Url,

@@ -29,6 +29,16 @@ public sealed class AdvisoryLinksetDocument
    public AdvisoryLinksetNormalizedDocument? Normalized { get; set; }
        = null;

    [BsonElement("confidence")]
    [BsonIgnoreIfNull]
    public double? Confidence { get; set; }
        = null;

    [BsonElement("conflicts")]
    [BsonIgnoreIfNull]
    public List<AdvisoryLinksetConflictDocument>? Conflicts { get; set; }
        = null;

    [BsonElement("createdAt")]
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow;

@@ -85,3 +95,18 @@ public sealed class AdvisoryLinksetProvenanceDocument
    public string? PolicyHash { get; set; }
        = null;
}

[BsonIgnoreExtraElements]
public sealed class AdvisoryLinksetConflictDocument
{
    [BsonElement("field")]
    public string Field { get; set; } = string.Empty;

    [BsonElement("reason")]
    public string Reason { get; set; } = string.Empty;

    [BsonElement("values")]
    [BsonIgnoreIfNull]
    public List<string>? Values { get; set; }
        = null;
}

@@ -5,11 +5,12 @@ using CoreLinksets = StellaOps.Concelier.Core.Linksets;

namespace StellaOps.Concelier.Storage.Mongo.Linksets;

// Backcompat sink name retained for compile includes; forwards to the Mongo-specific store.
internal sealed class AdvisoryLinksetSink : CoreLinksets.IAdvisoryLinksetSink
{
    private readonly IAdvisoryLinksetStore _store;
    private readonly IMongoAdvisoryLinksetStore _store;

    public AdvisoryLinksetSink(IAdvisoryLinksetStore store)
    public AdvisoryLinksetSink(IMongoAdvisoryLinksetStore store)
    {
        _store = store ?? throw new ArgumentNullException(nameof(store));
    }

@@ -0,0 +1,20 @@
using System;
using System.Threading;
using System.Threading.Tasks;
namespace StellaOps.Concelier.Storage.Mongo.Linksets;

internal sealed class ConcelierMongoLinksetSink : global::StellaOps.Concelier.Core.Linksets.IAdvisoryLinksetSink
{
    private readonly IMongoAdvisoryLinksetStore _store;

    public ConcelierMongoLinksetSink(IMongoAdvisoryLinksetStore store)
    {
        _store = store ?? throw new ArgumentNullException(nameof(store));
    }

    public Task UpsertAsync(global::StellaOps.Concelier.Core.Linksets.AdvisoryLinkset linkset, CancellationToken cancellationToken)
    {
        ArgumentNullException.ThrowIfNull(linkset);
        return _store.UpsertAsync(linkset, cancellationToken);
    }
}
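
One way the sink/store pair above might be wired up; the extension point, lifetimes, and resolution names here are assumptions for illustration, not the module's actual registration code:

```csharp
// Sketch only: resolves the typed collection and binds the core sink interface
// to the Mongo-backed implementations shown above (DI names assumed).
services.AddSingleton<IMongoAdvisoryLinksetStore>(sp =>
    new ConcelierMongoLinksetStore(
        sp.GetRequiredService<IMongoDatabase>()
            .GetCollection<AdvisoryLinksetDocument>(MongoStorageDefaults.Collections.AdvisoryLinksets)));
services.AddSingleton<StellaOps.Concelier.Core.Linksets.IAdvisoryLinksetSink, ConcelierMongoLinksetSink>();
```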
@@ -6,15 +6,14 @@ using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;
using CoreLinksets = StellaOps.Concelier.Core.Linksets;

namespace StellaOps.Concelier.Storage.Mongo.Linksets;

// Internal type kept in storage namespace to avoid name clash with core interface
internal sealed class MongoAdvisoryLinksetStore : CoreLinksets.IAdvisoryLinksetStore, CoreLinksets.IAdvisoryLinksetLookup
// Storage implementation of advisory linkset persistence.
internal sealed class ConcelierMongoLinksetStore : IMongoAdvisoryLinksetStore
{
    private readonly IMongoCollection<AdvisoryLinksetDocument> _collection;

    public MongoAdvisoryLinksetStore(IMongoCollection<AdvisoryLinksetDocument> collection)
    public ConcelierMongoLinksetStore(IMongoCollection<AdvisoryLinksetDocument> collection)
    {
        _collection = collection ?? throw new ArgumentNullException(nameof(collection));
    }
@@ -24,8 +23,9 @@ internal sealed class MongoAdvisoryLinksetStore : CoreLinksets.IAdvisoryLinksetS
        ArgumentNullException.ThrowIfNull(linkset);

        var document = MapToDocument(linkset);
        var tenant = linkset.TenantId.ToLowerInvariant();
        var filter = Builders<AdvisoryLinksetDocument>.Filter.And(
            Builders<AdvisoryLinksetDocument>.Filter.Eq(d => d.TenantId, linkset.TenantId),
            Builders<AdvisoryLinksetDocument>.Filter.Eq(d => d.TenantId, tenant),
            Builders<AdvisoryLinksetDocument>.Filter.Eq(d => d.Source, linkset.Source),
            Builders<AdvisoryLinksetDocument>.Filter.Eq(d => d.AdvisoryId, linkset.AdvisoryId));

@@ -73,9 +73,6 @@ internal sealed class MongoAdvisoryLinksetStore : CoreLinksets.IAdvisoryLinksetS

        var filter = builder.And(filters);

        var sort = Builders<AdvisoryLinksetDocument>.Sort.Descending(d => d.CreatedAt).Ascending(d => d.AdvisoryId);
        var findFilter = filter;

        if (cursor is not null)
        {
            var cursorFilter = builder.Or(
@@ -84,10 +81,11 @@ internal sealed class MongoAdvisoryLinksetStore : CoreLinksets.IAdvisoryLinksetS
                builder.Eq(d => d.CreatedAt, cursor.CreatedAt.UtcDateTime),
                builder.Gt(d => d.AdvisoryId, cursor.AdvisoryId)));

            findFilter = builder.And(findFilter, cursorFilter);
            filter = builder.And(filter, cursorFilter);
        }

        var documents = await _collection.Find(findFilter)
        var sort = Builders<AdvisoryLinksetDocument>.Sort.Descending(d => d.CreatedAt).Ascending(d => d.AdvisoryId);
        var documents = await _collection.Find(filter)
            .Sort(sort)
            .Limit(limit)
            .ToListAsync(cancellationToken)
@@ -98,14 +96,23 @@ internal sealed class MongoAdvisoryLinksetStore : CoreLinksets.IAdvisoryLinksetS

    private static AdvisoryLinksetDocument MapToDocument(CoreLinksets.AdvisoryLinkset linkset)
    {
        var doc = new AdvisoryLinksetDocument
        return new AdvisoryLinksetDocument
        {
            TenantId = linkset.TenantId,
            TenantId = linkset.TenantId.ToLowerInvariant(),
            Source = linkset.Source,
            AdvisoryId = linkset.AdvisoryId,
            Observations = new List<string>(linkset.ObservationIds),
            CreatedAt = linkset.CreatedAt.UtcDateTime,
            BuiltByJobId = linkset.BuiltByJobId,
            Confidence = linkset.Confidence,
            Conflicts = linkset.Conflicts is null
                ? null
                : linkset.Conflicts.Select(conflict => new AdvisoryLinksetConflictDocument
                {
                    Field = conflict.Field,
                    Reason = conflict.Reason,
                    Values = conflict.Values is null ? null : new List<string>(conflict.Values)
                }).ToList(),
            Provenance = linkset.Provenance is null ? null : new AdvisoryLinksetProvenanceDocument
            {
                ObservationHashes = linkset.Provenance.ObservationHashes is null
@@ -122,26 +129,31 @@ internal sealed class MongoAdvisoryLinksetStore : CoreLinksets.IAdvisoryLinksetS
                Severities = linkset.Normalized.SeveritiesToBson(),
            }
        };

        return doc;
    }

    private static CoreLinksets.AdvisoryLinkset FromDocument(AdvisoryLinksetDocument doc)
    {
        return new AdvisoryLinkset(
        return new CoreLinksets.AdvisoryLinkset(
            doc.TenantId,
            doc.Source,
            doc.AdvisoryId,
            doc.Observations.ToImmutableArray(),
            doc.Normalized is null ? null : new AdvisoryLinksetNormalized(
            doc.Normalized is null ? null : new CoreLinksets.AdvisoryLinksetNormalized(
                doc.Normalized.Purls,
                doc.Normalized.Versions,
                doc.Normalized.Ranges?.Select(ToDictionary).ToList(),
                doc.Normalized.Severities?.Select(ToDictionary).ToList()),
            doc.Provenance is null ? null : new AdvisoryLinksetProvenance(
            doc.Provenance is null ? null : new CoreLinksets.AdvisoryLinksetProvenance(
                doc.Provenance.ObservationHashes,
                doc.Provenance.ToolVersion,
                doc.Provenance.PolicyHash),
            doc.Confidence,
            doc.Conflicts is null
                ? null
                : doc.Conflicts.Select(conflict => new CoreLinksets.AdvisoryLinksetConflict(
                    conflict.Field,
                    conflict.Reason,
                    conflict.Values)).ToList(),
            DateTime.SpecifyKind(doc.CreatedAt, DateTimeKind.Utc),
            doc.BuiltByJobId);
    }
@@ -0,0 +1,5 @@
namespace StellaOps.Concelier.Storage.Mongo.Linksets;

public interface IMongoAdvisoryLinksetStore : global::StellaOps.Concelier.Core.Linksets.IAdvisoryLinksetStore, global::StellaOps.Concelier.Core.Linksets.IAdvisoryLinksetLookup
{
}
@@ -0,0 +1,66 @@
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

namespace StellaOps.Concelier.Storage.Mongo.Migrations;

/// <summary>
/// Normalises advisory_linksets tenant ids to lowercase to keep lookups/write paths consistent.
/// </summary>
public sealed class EnsureAdvisoryLinksetsTenantLowerMigration : IMongoMigration
{
    private const string MigrationId = "20251117_advisory_linksets_tenant_lower";
    private const int BatchSize = 500;

    public string Id => MigrationId;

    public string Description => "Lowercase tenant ids in advisory_linksets to match query filters.";

    public async Task ApplyAsync(IMongoDatabase database, CancellationToken cancellationToken)
    {
        ArgumentNullException.ThrowIfNull(database);

        var collection = database.GetCollection<BsonDocument>(MongoStorageDefaults.Collections.AdvisoryLinksets);

        var filter = Builders<BsonDocument>.Filter.Where(doc =>
            doc.Contains("TenantId") &&
            doc["TenantId"].BsonType == BsonType.String &&
            doc["TenantId"].AsString != doc["TenantId"].AsString.ToLowerInvariant());

        using var cursor = await collection.Find(filter).ToCursorAsync(cancellationToken).ConfigureAwait(false);

        var writes = new List<WriteModel<BsonDocument>>(BatchSize);
        while (await cursor.MoveNextAsync(cancellationToken).ConfigureAwait(false))
        {
            foreach (var doc in cursor.Current)
            {
                var currentTenant = doc["TenantId"].AsString;
                var lower = currentTenant.ToLowerInvariant();
                if (lower == currentTenant)
                {
                    continue;
                }

                var idFilter = Builders<BsonDocument>.Filter.Eq("_id", doc["_id"]);
                var update = Builders<BsonDocument>.Update.Set("TenantId", lower);
                writes.Add(new UpdateOneModel<BsonDocument>(idFilter, update));

                if (writes.Count >= BatchSize)
                {
                    await collection.BulkWriteAsync(writes, new BulkWriteOptions { IsOrdered = false }, cancellationToken)
                        .ConfigureAwait(false);
                    writes.Clear();
                }
            }
        }

        if (writes.Count > 0)
        {
            await collection.BulkWriteAsync(writes, new BulkWriteOptions { IsOrdered = false }, cancellationToken)
                .ConfigureAwait(false);
        }
    }
}
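
The migration is effectively idempotent: its filter only matches documents whose `TenantId` still contains uppercase characters, so a re-run after completion selects nothing. A post-run sanity check might look like the following sketch, mirroring the filter style used in the migration (the `database` and `cancellationToken` variables are assumed from the surrounding context):

```csharp
// Counts linkset documents whose tenant id is still mixed-case; expected to be
// zero once the migration has applied (same translation caveats as the
// migration's own Filter.Where expression).
var linksets = database.GetCollection<BsonDocument>(MongoStorageDefaults.Collections.AdvisoryLinksets);
var remaining = await linksets.CountDocumentsAsync(
    Builders<BsonDocument>.Filter.Where(doc =>
        doc.Contains("TenantId") &&
        doc["TenantId"].BsonType == BsonType.String &&
        doc["TenantId"].AsString != doc["TenantId"].AsString.ToLowerInvariant()),
    cancellationToken: cancellationToken);
```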
@@ -28,5 +28,6 @@ public static class MongoStorageDefaults
        public const string AdvisoryStatements = "advisory_statements";
        public const string AdvisoryConflicts = "advisory_conflicts";
        public const string AdvisoryObservations = "advisory_observations";
        public const string AdvisoryLinksets = "advisory_linksets";
    }
}

@@ -175,6 +175,16 @@ public sealed class AdvisoryObservationRawLinksetDocument
    public List<string>? Aliases { get; set; }
        = new();

    [BsonElement("scopes")]
    [BsonIgnoreIfNull]
    public List<string>? Scopes { get; set; }
        = new();

    [BsonElement("relationships")]
    [BsonIgnoreIfNull]
    public List<AdvisoryObservationRawRelationshipDocument>? Relationships { get; set; }
        = new();

    [BsonElement("purls")]
    [BsonIgnoreIfNull]
    public List<string>? PackageUrls { get; set; }
@@ -217,3 +227,21 @@ public sealed class AdvisoryObservationRawReferenceDocument
    public string? Source { get; set; }
        = null;
}

[BsonIgnoreExtraElements]
public sealed class AdvisoryObservationRawRelationshipDocument
{
    [BsonElement("type")]
    public string Type { get; set; } = string.Empty;

    [BsonElement("source")]
    public string Source { get; set; } = string.Empty;

    [BsonElement("target")]
    public string Target { get; set; } = string.Empty;

    [BsonElement("provenance")]
    [BsonIgnoreIfNull]
    public string? Provenance { get; set; }
        = null;
}

@@ -1,8 +1,9 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;
using System.Text.Json.Nodes;
using System.Linq;
using System.Text.Json;
using System.Text.Json.Nodes;
using MongoDB.Bson;
using MongoDB.Bson.IO;
using StellaOps.Concelier.Models.Observations;
@@ -10,11 +11,11 @@ using StellaOps.Concelier.RawModels;

namespace StellaOps.Concelier.Storage.Mongo.Observations;

internal static class AdvisoryObservationDocumentFactory
{
    private static readonly JsonWriterSettings JsonSettings = new() { OutputMode = JsonOutputMode.RelaxedExtendedJson };

    public static AdvisoryObservation ToModel(AdvisoryObservationDocument document)
internal static class AdvisoryObservationDocumentFactory
{
    private static readonly JsonWriterSettings JsonSettings = new() { OutputMode = JsonOutputMode.RelaxedExtendedJson };

    public static AdvisoryObservation ToModel(AdvisoryObservationDocument document)
    {
        ArgumentNullException.ThrowIfNull(document);

@@ -61,7 +62,69 @@ internal static class AdvisoryObservationDocumentFactory

        return observation;
    }


    public static AdvisoryObservationDocument ToDocument(AdvisoryObservation observation)
    {
        ArgumentNullException.ThrowIfNull(observation);

        var contentRaw = observation.Content.Raw?.ToJsonString(new JsonSerializerOptions
        {
            Encoder = System.Text.Encodings.Web.JavaScriptEncoder.UnsafeRelaxedJsonEscaping
        }) ?? "{}";

        var document = new AdvisoryObservationDocument
        {
            Id = observation.ObservationId,
            Tenant = observation.Tenant,
            Source = new AdvisoryObservationSourceDocument
            {
                Vendor = observation.Source.Vendor,
                Stream = observation.Source.Stream,
                Api = observation.Source.Api,
                CollectorVersion = observation.Source.CollectorVersion
            },
            Upstream = new AdvisoryObservationUpstreamDocument
            {
                UpstreamId = observation.Upstream.UpstreamId,
                DocumentVersion = observation.Upstream.DocumentVersion,
                FetchedAt = observation.Upstream.FetchedAt.UtcDateTime,
                ReceivedAt = observation.Upstream.ReceivedAt.UtcDateTime,
                ContentHash = observation.Upstream.ContentHash,
                Signature = new AdvisoryObservationSignatureDocument
                {
                    Present = observation.Upstream.Signature.Present,
                    Format = observation.Upstream.Signature.Format,
                    KeyId = observation.Upstream.Signature.KeyId,
                    Signature = observation.Upstream.Signature.Signature
                },
                Metadata = observation.Upstream.Metadata.ToDictionary(pair => pair.Key, pair => pair.Value, StringComparer.Ordinal)
            },
            Content = new AdvisoryObservationContentDocument
            {
                Format = observation.Content.Format,
                SpecVersion = observation.Content.SpecVersion,
                Raw = BsonDocument.Parse(contentRaw),
                Metadata = observation.Content.Metadata.ToDictionary(pair => pair.Key, pair => pair.Value, StringComparer.Ordinal)
            },
            Linkset = new AdvisoryObservationLinksetDocument
            {
                Aliases = observation.Linkset.Aliases.ToList(),
                Purls = observation.Linkset.Purls.ToList(),
                Cpes = observation.Linkset.Cpes.ToList(),
                References = observation.Linkset.References.Select(reference => new AdvisoryObservationReferenceDocument
                {
                    Type = reference.Type,
                    Url = reference.Url
                }).ToList()
            },
            RawLinkset = ToRawLinksetDocument(observation.RawLinkset),
            CreatedAt = observation.CreatedAt.UtcDateTime,
            Attributes = observation.Attributes.ToDictionary(pair => pair.Key, pair => pair.Value, StringComparer.Ordinal)
        };

        return document;
    }

    private static JsonNode ParseJsonNode(BsonDocument raw)
    {
        if (raw is null || raw.ElementCount == 0)
@@ -113,6 +176,22 @@ internal static class AdvisoryObservationDocumentFactory
            .ToImmutableArray();
    }

    static ImmutableArray<RawRelationship> ToImmutableRelationships(List<AdvisoryObservationRawRelationshipDocument>? relationships)
    {
        if (relationships is null || relationships.Count == 0)
        {
            return ImmutableArray<RawRelationship>.Empty;
        }

        return relationships
            .Select(static relationship => new RawRelationship(
                relationship.Type ?? string.Empty,
                relationship.Source ?? string.Empty,
                relationship.Target ?? string.Empty,
                relationship.Provenance))
            .ToImmutableArray();
    }

    static ImmutableArray<RawReference> ToImmutableReferences(List<AdvisoryObservationRawReferenceDocument>? references)
    {
        if (references is null || references.Count == 0)
@@ -152,6 +231,8 @@ internal static class AdvisoryObservationDocumentFactory
        return new RawLinkset
        {
            Aliases = ToImmutableStringArray(document.Aliases),
            Scopes = ToImmutableStringArray(document.Scopes),
            Relationships = ToImmutableRelationships(document.Relationships),
            PackageUrls = ToImmutableStringArray(document.PackageUrls),
            Cpes = ToImmutableStringArray(document.Cpes),
            References = ToImmutableReferences(document.References),
@@ -159,4 +240,35 @@ internal static class AdvisoryObservationDocumentFactory
            Notes = ToImmutableDictionary(document.Notes)
        };
    }

    private static AdvisoryObservationRawLinksetDocument ToRawLinksetDocument(RawLinkset rawLinkset)
    {
        return new AdvisoryObservationRawLinksetDocument
        {
            Aliases = rawLinkset.Aliases.IsDefault ? new List<string>() : rawLinkset.Aliases.ToList(),
            Scopes = rawLinkset.Scopes.IsDefault ? new List<string>() : rawLinkset.Scopes.ToList(),
            Relationships = rawLinkset.Relationships.IsDefault
                ? new List<AdvisoryObservationRawRelationshipDocument>()
                : rawLinkset.Relationships.Select(relationship => new AdvisoryObservationRawRelationshipDocument
                {
                    Type = relationship.Type,
                    Source = relationship.Source,
                    Target = relationship.Target,
                    Provenance = relationship.Provenance
                }).ToList(),
            PackageUrls = rawLinkset.PackageUrls.IsDefault ? new List<string>() : rawLinkset.PackageUrls.ToList(),
            Cpes = rawLinkset.Cpes.IsDefault ? new List<string>() : rawLinkset.Cpes.ToList(),
            References = rawLinkset.References.IsDefault
                ? new List<AdvisoryObservationRawReferenceDocument>()
                : rawLinkset.References.Select(reference => new AdvisoryObservationRawReferenceDocument
                {
                    Type = reference.Type,
                    Url = reference.Url,
                    Source = reference.Source
                }).ToList(),
            ReconciledFrom = rawLinkset.ReconciledFrom.IsDefault ? new List<string>() : rawLinkset.ReconciledFrom.ToList(),
            Notes = rawLinkset.Notes.Count == 0 ? new Dictionary<string, string>(StringComparer.Ordinal)
                : rawLinkset.Notes.ToDictionary(pair => pair.Key, pair => pair.Value, StringComparer.Ordinal)
        };
    }
}

@@ -2,6 +2,8 @@ using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;
using StellaOps.Concelier.Core.Observations;
@@ -9,21 +11,21 @@ using StellaOps.Concelier.Models.Observations;

namespace StellaOps.Concelier.Storage.Mongo.Observations;

internal sealed class AdvisoryObservationStore : IAdvisoryObservationStore
{
    private readonly IMongoCollection<AdvisoryObservationDocument> collection;

    public AdvisoryObservationStore(IMongoCollection<AdvisoryObservationDocument> collection)
internal sealed class AdvisoryObservationStore : IAdvisoryObservationStore
{
    private readonly IMongoCollection<AdvisoryObservationDocument> collection;

    public AdvisoryObservationStore(IMongoCollection<AdvisoryObservationDocument> collection)
    {
        this.collection = collection ?? throw new ArgumentNullException(nameof(collection));
    }

    public async Task<IReadOnlyList<AdvisoryObservation>> ListByTenantAsync(string tenant, CancellationToken cancellationToken)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(tenant);

        var filter = Builders<AdvisoryObservationDocument>.Filter.Eq(document => document.Tenant, tenant.ToLowerInvariant());
        var documents = await collection
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(tenant);

        var filter = Builders<AdvisoryObservationDocument>.Filter.Eq(document => document.Tenant, tenant.ToLowerInvariant());
        var documents = await collection
            .Find(filter)
            .SortByDescending(document => document.CreatedAt)
            .ThenBy(document => document.Id)
@@ -111,6 +113,18 @@ internal sealed class AdvisoryObservationStore : IAdvisoryObservationStore
        return documents.Select(AdvisoryObservationDocumentFactory.ToModel).ToArray();
    }

    public async Task UpsertAsync(AdvisoryObservation observation, CancellationToken cancellationToken)
    {
        ArgumentNullException.ThrowIfNull(observation);
        cancellationToken.ThrowIfCancellationRequested();

        var document = AdvisoryObservationDocumentFactory.ToDocument(observation);
        var filter = Builders<AdvisoryObservationDocument>.Filter.Eq(d => d.Id, document.Id);
        var options = new ReplaceOptions { IsUpsert = true };

        await collection.ReplaceOneAsync(filter, document, options, cancellationToken).ConfigureAwait(false);
    }

    private static string[] NormalizeValues(IEnumerable<string>? values, Func<string, string> projector)
    {
        if (values is null)

@@ -1,20 +1,23 @@
using System.Collections.Generic;
using System.Threading;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.Core.Observations;

namespace StellaOps.Concelier.Storage.Mongo.Observations;

public interface IAdvisoryObservationStore
{
    Task<IReadOnlyList<AdvisoryObservation>> ListByTenantAsync(string tenant, CancellationToken cancellationToken);

    Task<IReadOnlyList<AdvisoryObservation>> FindByFiltersAsync(
        string tenant,
        IEnumerable<string>? observationIds,
        IEnumerable<string>? aliases,
        IEnumerable<string>? purls,
        IEnumerable<string>? cpes,
        AdvisoryObservationCursor? cursor,
        int limit,
        CancellationToken cancellationToken);

    Task UpsertAsync(AdvisoryObservation observation, CancellationToken cancellationToken);
}

@@ -0,0 +1,167 @@
using System;
using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

namespace StellaOps.Concelier.Storage.Mongo.Observations.V1;

[BsonIgnoreExtraElements]
public sealed class AdvisoryObservationV1Document
{
    [BsonId]
    public ObjectId Id { get; set; }

    [BsonElement("tenantId")]
    public string TenantId { get; set; } = string.Empty;

    [BsonElement("source")]
    public string Source { get; set; } = string.Empty;

    [BsonElement("advisoryId")]
    public string AdvisoryId { get; set; } = string.Empty;

    [BsonElement("title")]
    [BsonIgnoreIfNull]
    public string? Title { get; set; }

    [BsonElement("summary")]
    [BsonIgnoreIfNull]
    public string? Summary { get; set; }

    [BsonElement("severities")]
    [BsonIgnoreIfNull]
    public List<ObservationSeverityDocument>? Severities { get; set; }

    [BsonElement("affected")]
    [BsonIgnoreIfNull]
    public List<ObservationAffectedDocument>? Affected { get; set; }

    [BsonElement("references")]
    [BsonIgnoreIfNull]
    public List<string>? References { get; set; }

    [BsonElement("weaknesses")]
    [BsonIgnoreIfNull]
    public List<string>? Weaknesses { get; set; }

    [BsonElement("published")]
    [BsonDateTimeOptions(Kind = DateTimeKind.Utc)]
    [BsonIgnoreIfNull]
    public DateTime? Published { get; set; }

    [BsonElement("modified")]
    [BsonDateTimeOptions(Kind = DateTimeKind.Utc)]
    [BsonIgnoreIfNull]
    public DateTime? Modified { get; set; }

    [BsonElement("provenance")]
    public ObservationProvenanceDocument Provenance { get; set; } = new();

    [BsonElement("ingestedAt")]
    [BsonDateTimeOptions(Kind = DateTimeKind.Utc)]
    public DateTime IngestedAt { get; set; }

    [BsonElement("supersedesId")]
    [BsonIgnoreIfNull]
    public ObjectId? SupersedesId { get; set; }
}

[BsonIgnoreExtraElements]
public sealed class ObservationSeverityDocument
{
    [BsonElement("system")]
    public string System { get; set; } = string.Empty;

    [BsonElement("score")]
    public double Score { get; set; }

    [BsonElement("vector")]
    [BsonIgnoreIfNull]
    public string? Vector { get; set; }
}

[BsonIgnoreExtraElements]
public sealed class ObservationAffectedDocument
{
    [BsonElement("purl")]
    public string Purl { get; set; } = string.Empty;

    [BsonElement("package")]
    [BsonIgnoreIfNull]
    public string? Package { get; set; }

    [BsonElement("versions")]
    [BsonIgnoreIfNull]
    public List<string>? Versions { get; set; }

    [BsonElement("ranges")]
    [BsonIgnoreIfNull]
    public List<ObservationVersionRangeDocument>? Ranges { get; set; }

    [BsonElement("ecosystem")]
    [BsonIgnoreIfNull]
    public string? Ecosystem { get; set; }

    [BsonElement("cpes")]
    [BsonIgnoreIfNull]
    public List<string>? Cpes { get; set; }
}

[BsonIgnoreExtraElements]
public sealed class ObservationVersionRangeDocument
{
    [BsonElement("type")]
    public string Type { get; set; } = string.Empty;

    [BsonElement("events")]
    [BsonIgnoreIfNull]
    public List<ObservationRangeEventDocument>? Events { get; set; }
}

[BsonIgnoreExtraElements]
public sealed class ObservationRangeEventDocument
{
    [BsonElement("event")]
    public string Event { get; set; } = string.Empty;

    [BsonElement("value")]
    public string Value { get; set; } = string.Empty;
}

[BsonIgnoreExtraElements]
public sealed class ObservationProvenanceDocument
{
    [BsonElement("sourceArtifactSha")]
    public string SourceArtifactSha { get; set; } = string.Empty;

    [BsonElement("fetchedAt")]
    [BsonDateTimeOptions(Kind = DateTimeKind.Utc)]
    public DateTime FetchedAt { get; set; }

    [BsonElement("ingestJobId")]
    [BsonIgnoreIfNull]
    public string? IngestJobId { get; set; }

    [BsonElement("signature")]
    [BsonIgnoreIfNull]
    public ObservationSignatureDocument? Signature { get; set; }
}

[BsonIgnoreExtraElements]
public sealed class ObservationSignatureDocument
{
    [BsonElement("present")]
    public bool Present { get; set; }

    [BsonElement("format")]
    [BsonIgnoreIfNull]
    public string? Format { get; set; }

    [BsonElement("keyId")]
    [BsonIgnoreIfNull]
    public string? KeyId { get; set; }

    [BsonElement("signature")]
    [BsonIgnoreIfNull]
    public string? Signature { get; set; }
}
@@ -0,0 +1,178 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;
using MongoDB.Bson;
using StellaOps.Concelier.Models.Observations;

namespace StellaOps.Concelier.Storage.Mongo.Observations.V1;

internal static class AdvisoryObservationV1DocumentFactory
{
    public static AdvisoryObservationV1 ToModel(AdvisoryObservationV1Document document)
    {
        ArgumentNullException.ThrowIfNull(document);

        var severities = document.Severities is null
            ? ImmutableArray<ObservationSeverity>.Empty
            : document.Severities.Select(s => new ObservationSeverity(s.System, s.Score, s.Vector)).ToImmutableArray();

        var affected = document.Affected is null
            ? ImmutableArray<ObservationAffected>.Empty
            : document.Affected.Select(ToAffected).ToImmutableArray();

        var references = ToImmutableStrings(document.References);
        var weaknesses = ToImmutableStrings(document.Weaknesses);

        var provenanceDoc = document.Provenance ?? throw new ArgumentNullException(nameof(document.Provenance));
        var signatureDoc = provenanceDoc.Signature;
        var signature = signatureDoc is null
            ? null
            : new ObservationSignature(signatureDoc.Present, signatureDoc.Format, signatureDoc.KeyId, signatureDoc.Signature);

        var provenance = new ObservationProvenance(
            provenanceDoc.SourceArtifactSha,
            DateTime.SpecifyKind(provenanceDoc.FetchedAt, DateTimeKind.Utc),
            provenanceDoc.IngestJobId,
            signature);

        return new AdvisoryObservationV1(
            document.Id.ToString(),
            document.TenantId,
            document.Source,
            document.AdvisoryId,
            document.Title,
            document.Summary,
            severities,
            affected,
            references,
            weaknesses,
            document.Published.HasValue ? DateTime.SpecifyKind(document.Published.Value, DateTimeKind.Utc) : null,
            document.Modified.HasValue ? DateTime.SpecifyKind(document.Modified.Value, DateTimeKind.Utc) : null,
            provenance,
            DateTime.SpecifyKind(document.IngestedAt, DateTimeKind.Utc),
            document.SupersedesId?.ToString());
    }

    public static AdvisoryObservationV1Document FromModel(AdvisoryObservationV1 model)
    {
        ArgumentNullException.ThrowIfNull(model);

        var document = new AdvisoryObservationV1Document
        {
            Id = ObjectId.Parse(model.ObservationId),
            TenantId = model.Tenant,
            Source = model.Source,
            AdvisoryId = model.AdvisoryId,
            Title = model.Title,
            Summary = model.Summary,
            Severities = model.Severities
                .Select(severity => new ObservationSeverityDocument
                {
                    System = severity.System,
                    Score = severity.Score,
                    Vector = severity.Vector
                })
                .ToList(),
            Affected = model.Affected.Select(ToDocument).ToList(),
            References = model.References.IsDefaultOrEmpty ? null : model.References.ToList(),
            Weaknesses = model.Weaknesses.IsDefaultOrEmpty ? null : model.Weaknesses.ToList(),
            Published = model.Published?.UtcDateTime,
            Modified = model.Modified?.UtcDateTime,
            SupersedesId = string.IsNullOrWhiteSpace(model.SupersedesObservationId)
                ? null
                : ObjectId.Parse(model.SupersedesObservationId!),
            IngestedAt = model.IngestedAt.UtcDateTime,
            Provenance = new ObservationProvenanceDocument
            {
                SourceArtifactSha = model.Provenance.SourceArtifactSha,
                FetchedAt = model.Provenance.FetchedAt.UtcDateTime,
                IngestJobId = model.Provenance.IngestJobId,
                Signature = model.Provenance.Signature is null
                    ? null
                    : new ObservationSignatureDocument
                    {
                        Present = model.Provenance.Signature.Present,
                        Format = model.Provenance.Signature.Format,
                        KeyId = model.Provenance.Signature.KeyId,
                        Signature = model.Provenance.Signature.SignatureValue
                    }
            }
        };

        return document;
    }

    private static ImmutableArray<string> ToImmutableStrings(IEnumerable<string>? values)
    {
        if (values is null)
        {
            return ImmutableArray<string>.Empty;
        }

        var builder = ImmutableArray.CreateBuilder<string>();
        foreach (var value in values)
        {
            if (string.IsNullOrWhiteSpace(value))
            {
                continue;
            }

            builder.Add(value.Trim());
        }

        return builder.Count == 0 ? ImmutableArray<string>.Empty : builder.ToImmutable();
    }

    private static ObservationAffected ToAffected(ObservationAffectedDocument document)
    {
        var ranges = document.Ranges is null
            ? ImmutableArray<ObservationVersionRange>.Empty
            : document.Ranges.Select(ToRange).ToImmutableArray();

        return new ObservationAffected(
            document.Purl,
            document.Package,
            ToImmutableStrings(document.Versions),
            ranges,
            document.Ecosystem,
            ToImmutableStrings(document.Cpes));
    }

    private static ObservationVersionRange ToRange(ObservationVersionRangeDocument document)
    {
        var events = document.Events is null
            ? ImmutableArray<ObservationRangeEvent>.Empty
            : document.Events.Select(evt => new ObservationRangeEvent(evt.Event, evt.Value)).ToImmutableArray();

        return new ObservationVersionRange(document.Type, events);
    }

    private static ObservationAffectedDocument ToDocument(ObservationAffected model)
    {
        return new ObservationAffectedDocument
        {
            Purl = model.Purl,
            Package = model.Package,
            Versions = model.Versions.IsDefaultOrEmpty ? null : model.Versions.ToList(),
            Ranges = model.Ranges.IsDefaultOrEmpty ? null : model.Ranges.Select(ToDocument).ToList(),
            Ecosystem = model.Ecosystem,
            Cpes = model.Cpes.IsDefaultOrEmpty ? null : model.Cpes.ToList()
        };
    }

    private static ObservationVersionRangeDocument ToDocument(ObservationVersionRange model)
    {
        return new ObservationVersionRangeDocument
        {
            Type = model.Type,
            Events = model.Events.IsDefaultOrEmpty
                ? null
                : model.Events.Select(evt => new ObservationRangeEventDocument
                {
                    Event = evt.Event,
                    Value = evt.Value
                }).ToList()
        };
    }
}
@@ -0,0 +1,26 @@
using System;
using System.Security.Cryptography;
using System.Text;
using MongoDB.Bson;

namespace StellaOps.Concelier.Storage.Mongo.Observations.V1;

internal static class ObservationIdBuilder
{
    public static ObjectId Create(string tenant, string source, string advisoryId, string sourceArtifactSha)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(tenant);
        ArgumentException.ThrowIfNullOrWhiteSpace(source);
        ArgumentException.ThrowIfNullOrWhiteSpace(advisoryId);
        ArgumentException.ThrowIfNullOrWhiteSpace(sourceArtifactSha);

        var material = $"{tenant.Trim().ToLowerInvariant()}|{source.Trim().ToLowerInvariant()}|{advisoryId.Trim()}|{sourceArtifactSha.Trim()}";
        var hash = SHA256.HashData(Encoding.UTF8.GetBytes(material));

        Span<byte> objectIdBytes = stackalloc byte[12];
        hash.AsSpan(0, 12).CopyTo(objectIdBytes);

        // ObjectId requires a byte[]; copy the stackalloc span into a managed array.
        return new ObjectId(objectIdBytes.ToArray());
    }
}
@@ -645,23 +645,42 @@ internal sealed class MongoAdvisoryRawRepository : IAdvisoryRawRepository

    private static RawLinkset MapLinkset(BsonDocument linkset)
    {
        var aliases = linkset.TryGetValue("aliases", out var aliasesValue) && aliasesValue.IsBsonArray
            ? aliasesValue.AsBsonArray.Select(BsonValueToString).ToImmutableArray()
            : ImmutableArray<string>.Empty;

        var scopes = linkset.TryGetValue("scopes", out var scopesValue) && scopesValue.IsBsonArray
            ? scopesValue.AsBsonArray.Select(BsonValueToString).ToImmutableArray()
            : ImmutableArray<string>.Empty;

        var purls = linkset.TryGetValue("purls", out var purlsValue) && purlsValue.IsBsonArray
            ? purlsValue.AsBsonArray.Select(BsonValueToString).ToImmutableArray()
            : ImmutableArray<string>.Empty;

        var cpes = linkset.TryGetValue("cpes", out var cpesValue) && cpesValue.IsBsonArray
            ? cpesValue.AsBsonArray.Select(BsonValueToString).ToImmutableArray()
            : ImmutableArray<string>.Empty;

        var relationships = linkset.TryGetValue("relationships", out var relationshipsValue) && relationshipsValue.IsBsonArray
            ? relationshipsValue.AsBsonArray
                .Where(static value => value.IsBsonDocument)
                .Select(value =>
                {
                    var doc = value.AsBsonDocument;
                    return new RawRelationship(
                        GetRequiredString(doc, "type"),
                        GetRequiredString(doc, "source"),
                        GetRequiredString(doc, "target"),
                        GetOptionalString(doc, "provenance"));
                })
                .ToImmutableArray()
            : ImmutableArray<RawRelationship>.Empty;

        var references = linkset.TryGetValue("references", out var referencesValue) && referencesValue.IsBsonArray
            ? referencesValue.AsBsonArray
                .Where(static value => value.IsBsonDocument)
                .Select(value =>
                {
                    var doc = value.AsBsonDocument;
                    return new RawReference(
                        GetRequiredString(doc, "type"),
@@ -684,14 +703,16 @@ internal sealed class MongoAdvisoryRawRepository : IAdvisoryRawRepository
            }
        }

        return new RawLinkset
        {
            Aliases = aliases,
            Scopes = scopes,
            Relationships = relationships,
            PackageUrls = purls,
            Cpes = cpes,
            References = references,
            ReconciledFrom = reconciledFrom,
            Notes = notesBuilder.ToImmutable()
        };
    }

@@ -80,13 +80,13 @@ public static class ServiceCollectionExtensions
        services.AddSingleton<IAdvisoryEventRepository, MongoAdvisoryEventRepository>();
        services.AddSingleton<IAdvisoryEventLog, AdvisoryEventLog>();
        services.AddSingleton<IAdvisoryRawRepository, MongoAdvisoryRawRepository>();
        services.AddSingleton<StellaOps.Concelier.Storage.Mongo.Linksets.IMongoAdvisoryLinksetStore, StellaOps.Concelier.Storage.Mongo.Linksets.ConcelierMongoLinksetStore>();
        services.AddSingleton<StellaOps.Concelier.Core.Linksets.IAdvisoryLinksetStore>(sp =>
            sp.GetRequiredService<StellaOps.Concelier.Storage.Mongo.Linksets.IMongoAdvisoryLinksetStore>());
        services.AddSingleton<StellaOps.Concelier.Core.Linksets.IAdvisoryLinksetLookup>(sp =>
            sp.GetRequiredService<StellaOps.Concelier.Storage.Mongo.Linksets.IMongoAdvisoryLinksetStore>());
        services.AddSingleton<StellaOps.Concelier.Core.Observations.IAdvisoryObservationSink, StellaOps.Concelier.Storage.Mongo.Observations.AdvisoryObservationSink>();
        services.AddSingleton<StellaOps.Concelier.Core.Linksets.IAdvisoryLinksetSink, StellaOps.Concelier.Storage.Mongo.Linksets.ConcelierMongoLinksetSink>();
        services.AddSingleton<IExportStateStore, ExportStateStore>();
        services.TryAddSingleton<ExportStateManager>();

@@ -0,0 +1,47 @@
using System.Collections.Generic;
using System.Collections.Immutable;
using StellaOps.Concelier.Core.Linksets;
using StellaOps.Concelier.RawModels;
using Xunit;

namespace StellaOps.Concelier.Core.Tests.Linksets;

public sealed class AdvisoryLinksetNormalizationTests
{
    [Fact]
    public void FromRawLinksetWithConfidence_ExtractsNotesAsConflicts()
    {
        var linkset = new RawLinkset
        {
            PackageUrls = ImmutableArray.Create("pkg:npm/foo@1.0.0"),
            Notes = new Dictionary<string, string>
            {
                { "severity", "disagree" }
            }
        };

        var (normalized, confidence, conflicts) = AdvisoryLinksetNormalization.FromRawLinksetWithConfidence(linkset, 0.8);

        Assert.NotNull(normalized);
        Assert.Equal(0.8, confidence);
        Assert.Single(conflicts);
        Assert.Equal("severity", conflicts[0].Field);
        Assert.Equal("disagree", conflicts[0].Reason);
    }

    [Theory]
    [InlineData(-1, 0)]
    [InlineData(2, 1)]
    [InlineData(double.NaN, null)]
    public void FromRawLinksetWithConfidence_ClampsConfidence(double input, double? expected)
    {
        var linkset = new RawLinkset
        {
            PackageUrls = ImmutableArray<string>.Empty
        };

        var (_, confidence, _) = AdvisoryLinksetNormalization.FromRawLinksetWithConfidence(linkset, input);

        Assert.Equal(expected, confidence);
    }
}
@@ -29,20 +29,30 @@ public sealed class AdvisoryObservationQueryServiceTests
            {
                new AdvisoryObservationReference("advisory", "https://example.test/advisory-1")
            },
            scopes: new[] { "runtime" },
            relationships: new[]
            {
                new RawRelationship("depends_on", "pkg:npm/package-a@1.0.0", "pkg:npm/lib@2.0.0", "sbom-a")
            },
            createdAt: DateTimeOffset.UtcNow.AddMinutes(-5)),
        CreateObservation(
            observationId: "tenant-a:osv:beta:1",
            tenant: "tenant-a",
            aliases: new[] { "CVE-2025-0002", "GHSA-xyzz" },
            purls: new[] { "pkg:pypi/package-b@2.0.0" },
            cpes: Array.Empty<string>(),
            references: new[]
            {
                new AdvisoryObservationReference("advisory", "https://example.test/advisory-2"),
                new AdvisoryObservationReference("patch", "https://example.test/patch-1")
            },
            scopes: new[] { "build" },
            relationships: new[]
            {
                new RawRelationship("affects", "pkg:pypi/package-b@2.0.0", "component-x", "sbom-b")
            },
            createdAt: DateTimeOffset.UtcNow)
    };

    var lookup = new InMemoryLookup(observations);
    var service = new AdvisoryObservationQueryService(lookup);
@@ -63,15 +73,22 @@ public sealed class AdvisoryObservationQueryServiceTests

        Assert.Equal(new[] { "cpe:/a:vendor:product:1.0" }, result.Linkset.Cpes);

        Assert.Equal(3, result.Linkset.References.Length);
        Assert.Equal("advisory", result.Linkset.References[0].Type);
        Assert.Equal("https://example.test/advisory-1", result.Linkset.References[0].Url);
        Assert.Equal("https://example.test/advisory-2", result.Linkset.References[1].Url);
        Assert.Equal("patch", result.Linkset.References[2].Type);

        Assert.Equal(new[] { "build", "runtime" }, result.Linkset.Scopes);
        Assert.Equal(2, result.Linkset.Relationships.Length);
        Assert.Equal("affects", result.Linkset.Relationships[0].Type);
        Assert.Equal("component-x", result.Linkset.Relationships[0].Target);
        Assert.Equal("depends_on", result.Linkset.Relationships[1].Type);
        Assert.Equal("pkg:npm/lib@2.0.0", result.Linkset.Relationships[1].Target);

        Assert.False(result.HasMore);
        Assert.Null(result.NextCursor);
    }

    [Fact]
    public async Task QueryAsync_WithAliasFilter_UsesAliasLookupAndFilters()
@@ -218,9 +235,11 @@ public sealed class AdvisoryObservationQueryServiceTests
        IEnumerable<string> purls,
        IEnumerable<string> cpes,
        IEnumerable<AdvisoryObservationReference> references,
        DateTimeOffset createdAt,
        IEnumerable<string>? scopes = null,
        IEnumerable<RawRelationship>? relationships = null)
    {
        var raw = JsonNode.Parse("""{"message":"payload"}""") ?? throw new InvalidOperationException("Raw payload must not be null.");

        var upstream = new AdvisoryObservationUpstream(
            upstreamId: observationId,
@@ -239,7 +258,9 @@ public sealed class AdvisoryObservationQueryServiceTests
            Cpes = cpes.ToImmutableArray(),
            References = references
                .Select(static reference => new RawReference(reference.Type, reference.Url))
                .ToImmutableArray(),
            Scopes = scopes?.ToImmutableArray() ?? ImmutableArray<string>.Empty,
            Relationships = relationships?.ToImmutableArray() ?? ImmutableArray<RawRelationship>.Empty
        };

        return new AdvisoryObservation(

@@ -0,0 +1,171 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;
using StellaOps.Concelier.Core.Linksets;
using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.Linksets;
using StellaOps.Concelier.Testing;
using Xunit;

namespace StellaOps.Concelier.Storage.Mongo.Tests.Linksets;

public sealed class ConcelierMongoLinksetStoreTests : IClassFixture<MongoIntegrationFixture>
{
    private readonly MongoIntegrationFixture _fixture;

    public ConcelierMongoLinksetStoreTests(MongoIntegrationFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public void MapToDocument_StoresConfidenceAndConflicts()
    {
        var linkset = new AdvisoryLinkset(
            "tenant",
            "ghsa",
            "GHSA-1234",
            ImmutableArray.Create("obs-1", "obs-2"),
            null,
            new AdvisoryLinksetProvenance(new[] { "h1", "h2" }, "tool", "policy"),
            0.82,
            new List<AdvisoryLinksetConflict>
            {
                new("severity", "disagree", new[] { "HIGH", "MEDIUM" })
            },
            DateTimeOffset.UtcNow,
            "job-1");

        var method = typeof(ConcelierMongoLinksetStore).GetMethod(
            "MapToDocument",
            BindingFlags.NonPublic | BindingFlags.Static);

        Assert.NotNull(method);

        var document = (AdvisoryLinksetDocument)method!.Invoke(null, new object?[] { linkset })!;

        Assert.Equal(linkset.Confidence, document.Confidence);
        Assert.NotNull(document.Conflicts);
        Assert.Single(document.Conflicts!);
        Assert.Equal("severity", document.Conflicts![0].Field);
        Assert.Equal("disagree", document.Conflicts![0].Reason);
    }

    [Fact]
    public void FromDocument_RestoresConfidenceAndConflicts()
    {
        var doc = new AdvisoryLinksetDocument
        {
            TenantId = "tenant",
            Source = "ghsa",
            AdvisoryId = "GHSA-1234",
            Observations = new List<string> { "obs-1" },
            Confidence = 0.5,
            Conflicts = new List<AdvisoryLinksetConflictDocument>
            {
                new()
                {
                    Field = "references",
                    Reason = "mismatch",
                    Values = new List<string> { "url1", "url2" }
                }
            },
            CreatedAt = DateTime.UtcNow
        };

        var method = typeof(ConcelierMongoLinksetStore).GetMethod(
            "FromDocument",
            BindingFlags.NonPublic | BindingFlags.Static);

        Assert.NotNull(method);

        var model = (AdvisoryLinkset)method!.Invoke(null, new object?[] { doc })!;

        Assert.Equal(0.5, model.Confidence);
        Assert.NotNull(model.Conflicts);
        Assert.Single(model.Conflicts!);
        Assert.Equal("references", model.Conflicts![0].Field);
    }

    [Fact]
    public async Task FindByTenantAsync_OrdersByCreatedAtThenAdvisoryId()
    {
        await _fixture.Database.DropCollectionAsync(MongoStorageDefaults.Collections.AdvisoryLinksets);

        var collection = _fixture.Database.GetCollection<AdvisoryLinksetDocument>(MongoStorageDefaults.Collections.AdvisoryLinksets);
        var store = new ConcelierMongoLinksetStore(collection);

        var now = DateTimeOffset.UtcNow;
        var linksets = new[]
        {
            new AdvisoryLinkset("Tenant-A", "src", "ADV-002", ImmutableArray.Create("obs-1"), null, null, null, null, now, "job-1"),
            new AdvisoryLinkset("Tenant-A", "src", "ADV-001", ImmutableArray.Create("obs-2"), null, null, null, null, now, "job-2"),
            new AdvisoryLinkset("Tenant-A", "src", "ADV-003", ImmutableArray.Create("obs-3"), null, null, null, null, now.AddMinutes(-5), "job-3")
        };

        foreach (var linkset in linksets)
        {
            await store.UpsertAsync(linkset, CancellationToken.None);
        }

        var results = await store.FindByTenantAsync("TENANT-A", null, null, cursor: null, limit: 10, cancellationToken: CancellationToken.None);

        Assert.Equal(new[] { "ADV-001", "ADV-002", "ADV-003" }, results.Select(r => r.AdvisoryId));
    }

    [Fact]
    public async Task FindByTenantAsync_AppliesCursorForDeterministicPaging()
    {
        await _fixture.Database.DropCollectionAsync(MongoStorageDefaults.Collections.AdvisoryLinksets);

        var collection = _fixture.Database.GetCollection<AdvisoryLinksetDocument>(MongoStorageDefaults.Collections.AdvisoryLinksets);
        var store = new ConcelierMongoLinksetStore(collection);

        var now = DateTimeOffset.UtcNow;
        var firstPage = new[]
        {
            new AdvisoryLinkset("tenant-a", "src", "ADV-010", ImmutableArray.Create("obs-1"), null, null, null, null, now, "job-1"),
            new AdvisoryLinkset("tenant-a", "src", "ADV-020", ImmutableArray.Create("obs-2"), null, null, null, null, now, "job-2"),
            new AdvisoryLinkset("tenant-a", "src", "ADV-030", ImmutableArray.Create("obs-3"), null, null, null, null, now.AddMinutes(-10), "job-3")
        };

        foreach (var linkset in firstPage)
        {
            await store.UpsertAsync(linkset, CancellationToken.None);
        }

        var initial = await store.FindByTenantAsync("tenant-a", null, null, cursor: null, limit: 10, cancellationToken: CancellationToken.None);
        var cursor = new AdvisoryLinksetCursor(initial[1].CreatedAt, initial[1].AdvisoryId);

        var paged = await store.FindByTenantAsync("tenant-a", null, null, cursor, limit: 10, cancellationToken: CancellationToken.None);

        Assert.Single(paged);
        Assert.Equal("ADV-030", paged[0].AdvisoryId);
    }

    [Fact]
    public async Task Upsert_NormalizesTenantToLowerInvariant()
    {
        await _fixture.Database.DropCollectionAsync(MongoStorageDefaults.Collections.AdvisoryLinksets);

        var collection = _fixture.Database.GetCollection<AdvisoryLinksetDocument>(MongoStorageDefaults.Collections.AdvisoryLinksets);
        var store = new ConcelierMongoLinksetStore(collection);

        var linkset = new AdvisoryLinkset("Tenant-A", "ghsa", "GHSA-1", ImmutableArray.Create("obs-1"), null, null, null, null, DateTimeOffset.UtcNow, "job-1");
        await store.UpsertAsync(linkset, CancellationToken.None);

        var fetched = await collection.Find(Builders<AdvisoryLinksetDocument>.Filter.Empty).FirstOrDefaultAsync();

        Assert.NotNull(fetched);
        Assert.Equal("tenant-a", fetched!.TenantId);

        var results = await store.FindByTenantAsync("TENANT-A", null, null, cursor: null, limit: 10, cancellationToken: CancellationToken.None);
|
||||
Assert.Single(results);
|
||||
Assert.Equal("GHSA-1", results[0].AdvisoryId);
|
||||
}
|
||||
}
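The paging tests above pin down the ordering contract (`CreatedAt` descending, then `AdvisoryId` ascending) and resume via an `AdvisoryLinksetCursor`. A keyset filter consistent with that contract could look like the sketch below; the store's real implementation is not shown in this diff, so the helper, its name, and the field access are assumptions drawn from the test seed data:

```csharp
// Hypothetical sketch only -- not the actual ConcelierMongoLinksetStore internals.
// Resumes strictly after (cursor.CreatedAt, cursor.AdvisoryId) under the
// (CreatedAt DESC, AdvisoryId ASC) ordering the tests assert.
private static FilterDefinition<AdvisoryLinksetDocument> AfterCursor(AdvisoryLinksetCursor cursor)
{
    var f = Builders<AdvisoryLinksetDocument>.Filter;
    return f.Or(
        f.Lt(d => d.CreatedAt, cursor.CreatedAt.UtcDateTime),     // strictly older rows
        f.And(
            f.Eq(d => d.CreatedAt, cursor.CreatedAt.UtcDateTime), // same timestamp...
            f.Gt(d => d.AdvisoryId, cursor.AdvisoryId)));         // ...later advisory id
}
```

Applied to the second test, a cursor at (`now`, `"ADV-020"`) matches only the older `ADV-030` row, which is exactly what the test expects.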
@@ -0,0 +1,39 @@
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;
using StellaOps.Concelier.Storage.Mongo.Migrations;
using StellaOps.Concelier.Testing;
using Xunit;

namespace StellaOps.Concelier.Storage.Mongo.Tests.Migrations;

[Collection("mongo-fixture")]
public sealed class EnsureAdvisoryLinksetsTenantLowerMigrationTests : IClassFixture<MongoIntegrationFixture>
{
    private readonly MongoIntegrationFixture _fixture;

    public EnsureAdvisoryLinksetsTenantLowerMigrationTests(MongoIntegrationFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public async Task ApplyAsync_LowersTenantIds()
    {
        var collection = _fixture.Database.GetCollection<BsonDocument>(MongoStorageDefaults.Collections.AdvisoryLinksets);
        await _fixture.Database.DropCollectionAsync(MongoStorageDefaults.Collections.AdvisoryLinksets);

        await collection.InsertManyAsync(new[]
        {
            new BsonDocument { { "TenantId", "Tenant-A" }, { "Source", "src" }, { "AdvisoryId", "ADV-1" }, { "Observations", new BsonArray() } },
            new BsonDocument { { "TenantId", "tenant-b" }, { "Source", "src" }, { "AdvisoryId", "ADV-2" }, { "Observations", new BsonArray() } }
        });

        var migration = new EnsureAdvisoryLinksetsTenantLowerMigration();
        await migration.ApplyAsync(_fixture.Database, default);

        var all = await collection.Find(FilterDefinition<BsonDocument>.Empty).ToListAsync();
        Assert.Contains(all, doc => doc["TenantId"] == "tenant-a");
        Assert.Contains(all, doc => doc["TenantId"] == "tenant-b");
    }
}
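The migration class itself is not part of this diff. One way to satisfy the test is a server-side aggregation-pipeline update with `$toLower`, sketched here on the assumption that the migration rewrites `TenantId` in place; the real `ApplyAsync` may differ:

```csharp
// Hypothetical sketch of EnsureAdvisoryLinksetsTenantLowerMigration.ApplyAsync.
// A pipeline-style UpdateMany (MongoDB 4.2+) lowercases TenantId server-side
// in one deterministic pass.
var collection = database.GetCollection<BsonDocument>(MongoStorageDefaults.Collections.AdvisoryLinksets);
var stages = new[]
{
    new BsonDocument("$set", new BsonDocument("TenantId", new BsonDocument("$toLower", "$TenantId")))
};
await collection.UpdateManyAsync(
    FilterDefinition<BsonDocument>.Empty,
    Builders<BsonDocument>.Update.Pipeline(stages),
    cancellationToken: cancellationToken);
```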
@@ -56,6 +56,11 @@ public sealed class AdvisoryObservationDocumentFactoryTests
            RawLinkset = new AdvisoryObservationRawLinksetDocument
            {
                Aliases = new List<string> { "CVE-2025-1234", "cve-2025-1234" },
                Scopes = new List<string> { "runtime", "build" },
                Relationships = new List<AdvisoryObservationRawRelationshipDocument>
                {
                    new() { Type = "depends_on", Source = "componentA", Target = "componentB", Provenance = "sbom-manifest" }
                },
                PackageUrls = new List<string> { "pkg:generic/foo@1.0.0" },
                Cpes = new List<string> { "cpe:/a:vendor:product:1" },
                References = new List<AdvisoryObservationRawReferenceDocument>
@@ -78,6 +83,11 @@ public sealed class AdvisoryObservationDocumentFactoryTests
        Assert.True(observation.Content.Raw?["example"]?.GetValue<bool>());
        Assert.Equal(document.Linkset.References![0].Type, observation.Linkset.References[0].Type);
        Assert.Equal(new[] { "CVE-2025-1234", "cve-2025-1234" }, observation.RawLinkset.Aliases);
        Assert.Equal(new[] { "runtime", "build" }, observation.RawLinkset.Scopes);
        Assert.Equal("depends_on", observation.RawLinkset.Relationships[0].Type);
        Assert.Equal("componentA", observation.RawLinkset.Relationships[0].Source);
        Assert.Equal("componentB", observation.RawLinkset.Relationships[0].Target);
        Assert.Equal("sbom-manifest", observation.RawLinkset.Relationships[0].Provenance);
        Assert.Equal("Advisory", observation.RawLinkset.References[0].Type);
        Assert.Equal("vendor", observation.RawLinkset.References[0].Source);
        Assert.Equal("note-value", observation.RawLinkset.Notes["note-key"]);
@@ -0,0 +1,94 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using MongoDB.Bson;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.Storage.Mongo.Observations.V1;
using Xunit;

namespace StellaOps.Concelier.Storage.Mongo.Tests.Observations;

public sealed class AdvisoryObservationV1DocumentFactoryTests
{
    [Fact]
    public void ObservationIdBuilder_IsDeterministic()
    {
        var id1 = ObservationIdBuilder.Create("TENANT", "Ghsa", "GHSA-1234", "sha256:abc");
        var id2 = ObservationIdBuilder.Create("tenant", "ghsa", "GHSA-1234", "sha256:abc");

        Assert.Equal(id1, id2);
    }

    [Fact]
    public void ToModel_MapsAndNormalizes()
    {
        var document = new AdvisoryObservationV1Document
        {
            Id = new ObjectId("6710f1f1a1b2c3d4e5f60708"),
            TenantId = "TENANT-01",
            Source = "GHSA",
            AdvisoryId = "GHSA-2025-0001",
            Title = "Test title",
            Summary = "Summary",
            Severities = new List<ObservationSeverityDocument>
            {
                new() { System = "cvssv3.1", Score = 7.5, Vector = "AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N" }
            },
            Affected = new List<ObservationAffectedDocument>
            {
                new()
                {
                    Purl = "pkg:nuget/foo@1.2.3",
                    Package = "foo",
                    Versions = new List<string> { "1.2.3" },
                    Ranges = new List<ObservationVersionRangeDocument>
                    {
                        new()
                        {
                            Type = "ECOSYSTEM",
                            Events = new List<ObservationRangeEventDocument>
                            {
                                new() { Event = "introduced", Value = "1.0.0" },
                                new() { Event = "fixed", Value = "1.2.3" }
                            }
                        }
                    },
                    Ecosystem = "nuget",
                    Cpes = new List<string> { "cpe:/a:foo:bar:1.2.3" }
                }
            },
            References = new List<string> { "https://example.test/advisory" },
            Weaknesses = new List<string> { "CWE-79" },
            Published = new DateTime(2025, 11, 1, 0, 0, 0, DateTimeKind.Utc),
            Modified = new DateTime(2025, 11, 10, 0, 0, 0, DateTimeKind.Utc),
            IngestedAt = new DateTime(2025, 11, 12, 0, 0, 0, DateTimeKind.Utc),
            Provenance = new ObservationProvenanceDocument
            {
                SourceArtifactSha = "sha256:abc",
                FetchedAt = new DateTime(2025, 11, 12, 0, 0, 0, DateTimeKind.Utc),
                IngestJobId = "job-1",
                Signature = new ObservationSignatureDocument
                {
                    Present = true,
                    Format = "dsse",
                    KeyId = "k1",
                    Signature = "sig"
                }
            }
        };

        var model = AdvisoryObservationV1DocumentFactory.ToModel(document);

        Assert.Equal("6710f1f1a1b2c3d4e5f60708", model.ObservationId);
        Assert.Equal("tenant-01", model.Tenant);
        Assert.Equal("ghsa", model.Source);
        Assert.Equal("GHSA-2025-0001", model.AdvisoryId);
        Assert.Equal("Test title", model.Title);
        Assert.Single(model.Severities);
        Assert.Single(model.Affected);
        Assert.Single(model.References);
        Assert.Single(model.Weaknesses);
        Assert.Equal(new DateTimeOffset(2025, 11, 12, 0, 0, 0, TimeSpan.Zero), model.IngestedAt);
        Assert.NotNull(model.Provenance.Signature);
    }
}
@@ -0,0 +1,56 @@
using System;
using System.Collections.Generic;
using MongoDB.Bson.Serialization.Attributes;

namespace StellaOps.Concelier.WebService.Tests;

/// <summary>
/// Minimal linkset document used only for seeding the Mongo collection in WebService integration tests.
/// Matches the shape written by the linkset ingestion pipeline.
/// </summary>
internal sealed class AdvisoryLinksetDocument
{
    [BsonElement("tenantId")]
    public string TenantId { get; init; } = string.Empty;

    [BsonElement("source")]
    public string Source { get; init; } = string.Empty;

    [BsonElement("advisoryId")]
    public string AdvisoryId { get; init; } = string.Empty;

    [BsonElement("observations")]
    public IReadOnlyList<string> Observations { get; init; } = Array.Empty<string>();

    [BsonElement("createdAt")]
    public DateTime CreatedAt { get; init; }

    [BsonElement("normalized")]
    public AdvisoryLinksetNormalizedDocument Normalized { get; init; } = new();
}

internal sealed class AdvisoryLinksetNormalizedDocument
{
    [BsonElement("purls")]
    public IReadOnlyList<string> Purls { get; init; } = Array.Empty<string>();

    [BsonElement("versions")]
    public IReadOnlyList<string> Versions { get; init; } = Array.Empty<string>();
}

/// <summary>
/// Shape used when reading /linksets responses in WebService endpoint tests.
/// </summary>
internal sealed class AdvisoryLinksetQueryResponse
{
    public AdvisoryLinksetResponse[] Linksets { get; init; } = Array.Empty<AdvisoryLinksetResponse>();
    public bool HasMore { get; init; }
    public string? NextCursor { get; init; }
}

internal sealed class AdvisoryLinksetResponse
{
    public string AdvisoryId { get; init; } = string.Empty;
    public IReadOnlyList<string> Purls { get; init; } = Array.Empty<string>();
    public IReadOnlyList<string> Versions { get; init; } = Array.Empty<string>();
}
@@ -33,6 +33,7 @@ using StellaOps.Concelier.Merge.Services;
using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.Advisories;
using StellaOps.Concelier.Storage.Mongo.Observations;
using StellaOps.Concelier.Storage.Mongo.Linksets;
using StellaOps.Concelier.Core.Raw;
using StellaOps.Concelier.WebService.Jobs;
using StellaOps.Concelier.WebService.Options;
@@ -376,13 +377,12 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
        var root = document.RootElement;

        Assert.Equal("CVE-2025-0001", root.GetProperty("advisoryKey").GetString());
        Assert.False(string.IsNullOrWhiteSpace(root.GetProperty("fingerprint").GetString()));
        Assert.Equal(1, root.GetProperty("total").GetInt32());
        Assert.False(root.GetProperty("truncated").GetBoolean());

        var entry = Assert.Single(root.GetProperty("entries").EnumerateArray());
        Assert.Equal("workaround", entry.GetProperty("type").GetString());
        Assert.Equal("tenant-a:chunk:newest", entry.GetProperty("documentId").GetString());
        Assert.Equal("/references/0", entry.GetProperty("fieldPath").GetString());
        Assert.False(string.IsNullOrWhiteSpace(entry.GetProperty("chunkId").GetString()));

        var content = entry.GetProperty("content");
@@ -391,6 +391,8 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
        Assert.Equal("https://vendor.example/workaround", content.GetProperty("url").GetString());

        var provenance = entry.GetProperty("provenance");
        Assert.Equal("tenant-a:chunk:newest", provenance.GetProperty("documentId").GetString());
        Assert.Equal("/references/0", provenance.GetProperty("observationPath").GetString());
        Assert.Equal("nvd", provenance.GetProperty("source").GetString());
        Assert.Equal("workaround", provenance.GetProperty("kind").GetString());
        Assert.Equal("tenant-a:chunk:newest", provenance.GetProperty("value").GetString());
@@ -638,6 +640,9 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime

        using var client = _factory.CreateClient();

        long expectedSegments = 0;
        string expectedTruncatedTag = "false";

        var metrics = await CaptureMetricsAsync(
            AdvisoryAiMetrics.MeterName,
            new[]
@@ -654,6 +659,13 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
                var first = await client.GetAsync(url);
                first.EnsureSuccessStatusCode();

                using (var firstDocument = await first.Content.ReadFromJsonAsync<JsonDocument>())
                {
                    Assert.NotNull(firstDocument);
                    expectedSegments = firstDocument!.RootElement.GetProperty("entries").GetArrayLength();
                    expectedTruncatedTag = firstDocument.RootElement.GetProperty("truncated").GetBoolean() ? "true" : "false";
                }

                var second = await client.GetAsync(url);
                second.EnsureSuccessStatusCode();
            });
@@ -679,7 +691,11 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime

        Assert.True(metrics.TryGetValue("advisory_ai_chunk_segments", out var segmentMeasurements));
        Assert.Equal(2, segmentMeasurements!.Count);
-       Assert.Contains(segmentMeasurements!, measurement => GetTagValue(measurement, "truncated") == "false");
+       Assert.All(segmentMeasurements!, measurement =>
+       {
+           Assert.Equal(expectedSegments, measurement.Value);
+           Assert.Equal(expectedTruncatedTag, GetTagValue(measurement, "truncated"));
+       });

        Assert.True(metrics.TryGetValue("advisory_ai_chunk_sources", out var sourceMeasurements));
        Assert.Equal(2, sourceMeasurements!.Count);
@@ -2522,6 +2538,7 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
            Array.Empty<string>(),
            references,
            Array.Empty<string>(),
            Array.Empty<string>(),
            new Dictionary<string, string> { ["note"] = "ingest-test" }));
    }
@@ -1,37 +1,46 @@
# AGENTS

## Role
ASP.NET Minimal API surface for Excititor ingest, provider administration, reconciliation, export, and verification flows.

# Excititor WebService Charter

## Mission
Expose Excititor APIs (console VEX views, graph/Vuln Explorer feeds, observation intake/health) while honoring the Aggregation-Only Contract (no consensus/severity logic in this service).

## Scope
- Program bootstrap, DI wiring for connectors/normalizers/export/attestation/policy/storage.
- HTTP endpoints `/excititor/*` with authentication, authorization scopes, request validation, and deterministic responses.
- Job orchestration bridges for Worker hand-off (when co-hosted) and offline-friendly configuration.
- Observability (structured logs, metrics, tracing) aligned with StellaOps conventions.
- Optional/minor DI dependencies on minimal APIs must be declared with `[FromServices] SomeType? service = null` parameters so endpoint tests do not require bespoke service registrations.
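The Scope list's `[FromServices]` convention can be sketched as follows; the endpoint path and `IResolverBacklogService` member are made-up examples, and default lambda parameters require C# 12+, consistent with the repo's C# preview setting:

```csharp
// Illustrative only: the nullable [FromServices] parameter keeps the endpoint usable
// in test hosts that never register IResolverBacklogService.
app.MapGet("/excititor/admin/backlog", (
    [FromServices] IResolverBacklogService? backlog = null) =>
    backlog is null
        ? Results.Ok(new { queued = 0 })          // deterministic default when the service is absent
        : Results.Ok(new { queued = backlog.QueuedCount }));
```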

## Participants
- StellaOps.Cli sends `excititor` verbs to this service via token-authenticated HTTPS.
- Worker receives scheduled jobs and uses shared infrastructure via common DI extensions.
- Authority service provides tokens; WebService enforces scopes before executing operations.

## Interfaces & contracts
- DTOs for ingest/export requests, run metadata, provider management.
- Background job interfaces for ingest/resume/reconcile triggering.
- Health/status endpoints exposing pull/export history and current policy revision.

## In/Out of scope
In: HTTP hosting, request orchestration, DI composition, auth/authorization, logging.
Out: long-running ingestion loops (Worker), export rendering (Export module), connector implementations.

## Observability & security expectations
- Enforce bearer token scopes and audit logging (request/response correlation IDs, provider IDs).
- Emit structured events for ingest runs, export invocations, attestation references.
- Provide built-in counters/histograms for latency and throughput.

## Tests
- Minimal API contract/unit tests and integration harness will live in `../StellaOps.Excititor.WebService.Tests`.
- Working directory: `src/Excititor/StellaOps.Excititor.WebService`
- HTTP APIs, DTOs, controllers, authz filters, composition root, telemetry hooks.
- Wiring to Core/Storage libraries; no direct policy or consensus logic.

## Required Reading
- `docs/modules/excititor/architecture.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/excititor/README.md#latest-updates`
- `docs/modules/excititor/vex_observations.md`
- `docs/ingestion/aggregation-only-contract.md`
- `docs/modules/excititor/implementation_plan.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause the task without shipping changes; leave notes in commit/PR descriptions for context.

## Roles
- Backend developer (.NET 10 / C# preview).
- QA automation (integration + API contract tests).

## Working Agreements
1. Update sprint `Delivery Tracker` when tasks move TODO→DOING→DONE/BLOCKED; mirror notes in Execution Log.
2. Keep APIs aggregation-only: persist raw observations, provenance, and precedence pointers; never merge, weight, or compute consensus here.
3. Enforce tenant scoping and RBAC on all endpoints; default-deny for cross-tenant data.
4. Offline-first: no external network calls; rely on cached/mirrored feeds only.
5. Observability: structured logs, counters, optional OTEL traces behind configuration flags.

## Testing
- Prefer deterministic API/integration tests under `__Tests` with seeded Mongo fixtures.
- Verify RBAC/tenant isolation, idempotent ingestion, and stable ordering of VEX aggregates.
- Use ISO-8601 UTC timestamps and stable sorting in responses; assert on content hashes where applicable.

## Determinism & Data
- MongoDB is the canonical store; never apply consensus transformations before persistence.
- Ensure paged/list endpoints use explicit sort keys (e.g., vendor, upstreamId, version, createdUtc).
- Avoid nondeterministic clocks/randomness; inject clocks and GUID providers for tests.
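The explicit-sort-key rule above might look like this in practice; `VexProviderDocument` and its fields are placeholders for whichever document type the endpoint pages over:

```csharp
// Sketch: a fully specified sort chain so repeated queries page identically.
// Every listed key is included, so ties cannot fall back to insertion order.
var sort = Builders<VexProviderDocument>.Sort
    .Ascending(d => d.Vendor)
    .Ascending(d => d.UpstreamId)
    .Ascending(d => d.Version)
    .Ascending(d => d.CreatedUtc);

var page = await collection
    .Find(filter)
    .Sort(sort)
    .Limit(limit)
    .ToListAsync(cancellationToken);
```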

## Boundaries
- Do not modify Policy Engine or Cartographer schemas from here; consume published contracts only.
- Configuration via appsettings/environment; no hard-coded secrets.

## Ready-to-Start Checklist
- Required docs reviewed.
- Test database/fixtures prepared (no external dependencies).
- Feature flags defined for new endpoints before exposing them.
@@ -471,7 +471,8 @@ app.MapGet("/v1/vex/observations/{vulnerabilityId}/{productKey}", async (
    var providerFilter = BuildStringFilterSet(context.Request.Query["providerId"]);
    var statusFilter = BuildStatusFilter(context.Request.Query["status"]);
    var since = ParseSinceTimestamp(context.Request.Query["since"]);
-   var limit = ResolveLimit(context.Request.Query["limit"], defaultValue: 200, min: 1, max: 500);
+   // Evidence chunks follow doc limits: default 500, max 2000.
+   var limit = ResolveLimit(context.Request.Query["limit"], defaultValue: 500, min: 1, max: 2000);

    var request = new VexObservationProjectionRequest(
        tenant,
@@ -514,6 +515,10 @@ app.MapGet("/v1/vex/observations/{vulnerabilityId}/{productKey}", async (
        result.Truncated,
        statements);

+   // Set total/truncated headers for clients (spec: Excititor-Results-*).
+   context.Response.Headers["Excititor-Results-Total"] = result.TotalCount.ToString(CultureInfo.InvariantCulture);
+   context.Response.Headers["Excititor-Results-Truncated"] = result.Truncated ? "true" : "false";
+
    return Results.Json(response);
});
@@ -562,11 +567,21 @@ app.MapGet("/v1/vex/evidence/chunks", async (
    }
    catch (OperationCanceledException)
    {
        EvidenceTelemetry.RecordChunkOutcome(tenant, "cancelled");
        return Results.StatusCode(StatusCodes.Status499ClientClosedRequest);
    }
    catch
    {
        EvidenceTelemetry.RecordChunkOutcome(tenant, "error");
        throw;
    }

    context.Response.Headers["X-Total-Count"] = result.TotalCount.ToString(CultureInfo.InvariantCulture);
    context.Response.Headers["X-Truncated"] = result.Truncated ? "true" : "false";
    EvidenceTelemetry.RecordChunkOutcome(tenant, "success", result.Chunks.Count, result.Truncated);
    EvidenceTelemetry.RecordChunkSignatureStatus(tenant, result.Chunks);

    // Align headers with published contract.
    context.Response.Headers["Excititor-Results-Total"] = result.TotalCount.ToString(CultureInfo.InvariantCulture);
    context.Response.Headers["Excititor-Results-Truncated"] = result.Truncated ? "true" : "false";
    context.Response.ContentType = "application/x-ndjson";

    var options = new JsonSerializerOptions(JsonSerializerDefaults.Web);
@@ -1,7 +1,9 @@
using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;
using System.Linq;
using StellaOps.Excititor.Core.Aoc;
using StellaOps.Excititor.WebService.Contracts;
using StellaOps.Excititor.WebService.Services;

namespace StellaOps.Excititor.WebService.Telemetry;
@@ -24,6 +26,18 @@ internal static class EvidenceTelemetry
            unit: "statements",
            description: "Distribution of statements returned per observation projection request.");

    private static readonly Counter<long> EvidenceRequestCounter =
        Meter.CreateCounter<long>(
            "excititor.vex.evidence.requests",
            unit: "requests",
            description: "Number of evidence chunk requests handled by the evidence APIs.");

    private static readonly Histogram<int> EvidenceChunkHistogram =
        Meter.CreateHistogram<int>(
            "excititor.vex.evidence.chunk_count",
            unit: "chunks",
            description: "Distribution of evidence chunks streamed per request.");

    private static readonly Counter<long> SignatureStatusCounter =
        Meter.CreateCounter<long>(
            "excititor.vex.signature.status",
@@ -53,13 +67,27 @@ internal static class EvidenceTelemetry
            return;
        }

-       ObservationStatementHistogram.Record(
-           returnedCount,
-           new[]
-           {
-               new KeyValuePair<string, object?>("tenant", normalizedTenant),
-               new KeyValuePair<string, object?>("outcome", outcome),
-           });
+       ObservationStatementHistogram.Record(returnedCount, tags);
    }

    public static void RecordChunkOutcome(string? tenant, string outcome, int chunkCount = 0, bool truncated = false)
    {
        var normalizedTenant = NormalizeTenant(tenant);
        var tags = new[]
        {
            new KeyValuePair<string, object?>("tenant", normalizedTenant),
            new KeyValuePair<string, object?>("outcome", outcome),
            new KeyValuePair<string, object?>("truncated", truncated),
        };

        EvidenceRequestCounter.Add(1, tags);

        if (!string.Equals(outcome, "success", StringComparison.OrdinalIgnoreCase))
        {
            return;
        }

        EvidenceChunkHistogram.Record(chunkCount, tags);
    }

    public static void RecordSignatureStatus(string? tenant, IReadOnlyList<VexObservationStatementProjection> statements)
@@ -72,6 +100,7 @@ internal static class EvidenceTelemetry
        var normalizedTenant = NormalizeTenant(tenant);
        var missing = 0;
        var unverified = 0;
        var verified = 0;

        foreach (var statement in statements)
        {
@@ -86,6 +115,10 @@ internal static class EvidenceTelemetry
            {
                unverified++;
            }
            else
            {
                verified++;
            }
        }

        if (missing > 0)
@@ -103,11 +136,62 @@ internal static class EvidenceTelemetry
        {
            SignatureStatusCounter.Add(
                unverified,
-               new[]
-               {
-                   new KeyValuePair<string, object?>("tenant", normalizedTenant),
-                   new KeyValuePair<string, object?>("status", "unverified"),
-               });
+               BuildSignatureTags(normalizedTenant, "unverified"));
        }

        if (verified > 0)
        {
            SignatureStatusCounter.Add(
                verified,
                BuildSignatureTags(normalizedTenant, "verified"));
        }
    }

    public static void RecordChunkSignatureStatus(string? tenant, IReadOnlyList<VexEvidenceChunkResponse> chunks)
    {
        if (chunks is null || chunks.Count == 0)
        {
            return;
        }

        var normalizedTenant = NormalizeTenant(tenant);

        var unsigned = 0;
        var unverified = 0;
        var verified = 0;

        foreach (var chunk in chunks)
        {
            var signature = chunk.Signature;
            if (signature is null)
            {
                unsigned++;
                continue;
            }

            if (signature.VerifiedAt is null)
            {
                unverified++;
            }
            else
            {
                verified++;
            }
        }

        if (unsigned > 0)
        {
            SignatureStatusCounter.Add(unsigned, BuildSignatureTags(normalizedTenant, "unsigned"));
        }

        if (unverified > 0)
        {
            SignatureStatusCounter.Add(unverified, BuildSignatureTags(normalizedTenant, "unverified"));
        }

        if (verified > 0)
        {
            SignatureStatusCounter.Add(verified, BuildSignatureTags(normalizedTenant, "verified"));
        }
    }

@@ -151,4 +235,11 @@ internal static class EvidenceTelemetry

    private static string NormalizeSurface(string? surface)
        => string.IsNullOrWhiteSpace(surface) ? "unknown" : surface.ToLowerInvariant();

    private static KeyValuePair<string, object?>[] BuildSignatureTags(string tenant, string status)
        => new[]
        {
            new KeyValuePair<string, object?>("tenant", tenant),
            new KeyValuePair<string, object?>("status", status),
        };
}
@@ -1,34 +1,39 @@
# AGENTS

## Role
Background processing host coordinating scheduled pulls, retries, reconciliation, verification, and cache maintenance for Excititor.

# Excititor Worker Charter

## Mission
Run Excititor background jobs (ingestion, linkset extraction, dedup/idempotency enforcement) under the Aggregation-Only Contract; orchestrate Core + Storage without applying consensus or severity.

## Scope
- Hosted service (Worker Service) wiring timers/queues for provider pulls and reconciliation cycles.
- Resume token management, retry policies, and failure quarantines for connectors.
- Re-verification of stored attestations and cache garbage collection routines.
- Operational metrics and structured logging for offline-friendly monitoring.

## Participants
- Triggered by WebService job requests or internal schedules to run connector pulls.
- Collaborates with Storage.Mongo repositories and Attestation verification utilities.
- Emits telemetry consumed by the observability stack and CLI status queries.

## Interfaces & contracts
- Scheduler abstractions, provider run controllers, retry/backoff strategies, and queue processors.
- Hooks for policy revision changes and cache GC thresholds.

## In/Out of scope
In: background orchestration, job lifecycle management, observability for worker operations.
Out: HTTP endpoint definitions, domain modeling, connector-specific parsing logic.

## Observability & security expectations
- Publish metrics for pull latency, failure counts, retry depth, cache size, and verification outcomes.
- Log correlation IDs and provider IDs; avoid leaking secret config values.

## Tests
- Worker orchestration tests, timer controls, and retry behavior will live in `../StellaOps.Excititor.Worker.Tests`.
- Working directory: `src/Excititor/StellaOps.Excititor.Worker`
- Job runners, pipelines, scheduling, DI wiring, health checks, telemetry for background tasks.

## Required Reading
- `docs/modules/excititor/architecture.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/excititor/vex_observations.md`
- `docs/ingestion/aggregation-only-contract.md`
- `docs/modules/excititor/implementation_plan.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause the task without shipping changes; leave notes in commit/PR descriptions for context.

## Roles
- Backend/worker engineer (.NET 10).
- QA automation (background job + integration tests).

## Working Agreements
1. Track task status in sprint files; log notable operational decisions in Execution Log.
2. Respect tenant isolation on all job inputs/outputs; never process cross-tenant data.
3. Idempotent processing only: guard against duplicate bundles and repeated messages.
4. Offline-first; no external fetches during jobs.
5. Observability: structured logs, counters, and optional OTEL traces behind config flags.
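The idempotency agreement above can be sketched as a digest-keyed insert guard; all type and field names here are illustrative, not the Worker's actual implementation:

```csharp
// Sketch: treat the bundle digest as the idempotency key and rely on a unique
// index over (TenantId, BundleDigest) so a duplicate message becomes a no-op.
try
{
    await bundles.InsertOneAsync(new ProcessedBundleDocument
    {
        TenantId = tenantId,
        BundleDigest = digest,          // e.g. "sha256:..."
        ProcessedAtUtc = clock.UtcNow,  // injected clock, per the agreements above
    }, cancellationToken: ct);
}
catch (MongoWriteException ex) when (ex.WriteError?.Category == ServerErrorCategory.DuplicateKey)
{
    return; // already processed; skip re-execution instead of writing twice
}
```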

## Testing & Determinism
- Provide deterministic job fixtures with seeded clocks/IDs; assert stable ordering of outputs and retries.
- Simulate failure/retry paths; ensure idempotent writes in Storage.
- Keep timestamps UTC ISO-8601; inject clock/GUID providers for tests.
|
||||
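The clock/GUID injection mentioned above can be sketched as follows. The `ISystemClock`/`IGuidProvider` interfaces and fixture names here are illustrative assumptions, not existing StellaOps contracts — the point is only that production code resolves a time/ID source via DI and tests substitute a fixed one.

```csharp
using System;

// Hypothetical seams for deterministic tests: production resolves these via DI,
// tests substitute fixed implementations so timestamps and IDs are reproducible.
public interface ISystemClock
{
    DateTimeOffset UtcNow { get; }
}

public interface IGuidProvider
{
    Guid NewGuid();
}

public sealed class FixedClock : ISystemClock
{
    private readonly DateTimeOffset _instant;

    public FixedClock(DateTimeOffset instant) => _instant = instant;

    public DateTimeOffset UtcNow => _instant;
}

public sealed class SequentialGuidProvider : IGuidProvider
{
    private byte _counter;

    // Deterministic GUIDs: ...-000000000001, ...-000000000002, and so on.
    public Guid NewGuid() =>
        new Guid(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ++_counter);
}
```

A job fixture then takes these through its constructor, so assertions on output ordering and retry timestamps are stable across runs.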

## Boundaries
- Delegate domain logic to Core and persistence to Storage.Mongo; avoid embedding policy or UI concerns.
- Configuration via appsettings/environment; no hard-coded secrets.

## Ready-to-Start Checklist
- Required docs reviewed.
- Test harness prepared for background jobs (including retry/backoff settings).
- Feature flags defined for new pipelines before enabling in production runs.
@@ -1,37 +1,40 @@
# AGENTS
## Role
Domain source of truth for VEX statements, consensus rollups, and trust policy orchestration across all Excititor services.
# Excititor Core Charter

## Mission
Provide ingestion/domain logic for VEX observations and linksets under the Aggregation-Only Contract: store raw facts, provenance, and precedence pointers without computing consensus or severity.

## Scope
- Records for raw document metadata, normalized claims, consensus projections, and export descriptors.
- Policy + weighting engine that projects provider trust tiers into consensus status outcomes.
- Connector, normalizer, export, and attestation contracts shared by WebService, Worker, and plug-ins.
- Deterministic hashing utilities (query signatures, artifact digests, attestation subjects).
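The deterministic hashing utilities listed above could look something like the sketch below: canonicalize a JSON payload (object keys sorted ordinally, no insignificant whitespace) before hashing, so byte-identical signatures follow from semantically identical queries. The `QuerySignatureHasher` name and shape are assumptions for illustration, not the module's actual API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json.Nodes;

// Hypothetical helper: canonical JSON (sorted keys) hashed to a lowercase hex digest,
// so equal queries always yield equal signatures regardless of key order.
public static class QuerySignatureHasher
{
    public static string Compute(string json)
    {
        var node = JsonNode.Parse(json) ?? throw new ArgumentException("Invalid JSON", nameof(json));
        var canonical = Canonicalize(node).ToJsonString();
        var digest = SHA256.HashData(Encoding.UTF8.GetBytes(canonical));
        return Convert.ToHexString(digest).ToLowerInvariant();
    }

    private static JsonNode Canonicalize(JsonNode node) => node switch
    {
        JsonObject obj => new JsonObject(
            obj.OrderBy(p => p.Key, StringComparer.Ordinal)
               .Select(p => new KeyValuePair<string, JsonNode?>(
                   p.Key, p.Value is null ? null : Canonicalize(p.Value)))),
        JsonArray arr => new JsonArray(
            arr.Select(item => item is null ? null : Canonicalize(item)).ToArray()),
        _ => JsonNode.Parse(node.ToJsonString())!, // re-parse to detach the scalar from its parent
    };
}
```

With this shape, two callers issuing the same query with differently ordered fields get the same signature, which is what makes signatures usable as cache keys and attestation subjects.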
## Participants
- Excititor WebService uses the models to persist ingress/egress payloads and to perform consensus mutations.
- Excititor Worker executes reconciliation and verification routines using policy helpers defined here.
- Export/Attestation modules depend on record definitions for envelopes and manifest payloads.
## Interfaces & contracts
- `IVexConnector`, `INormalizer`, `IExportEngine`, `ITransparencyLogClient`, `IArtifactStore`, and policy abstractions for consensus resolution.
- Value objects for provider metadata, VexClaim, VexConsensusEntry, ExportManifest, QuerySignature.
- Deterministic comparer utilities and stable JSON serialization helpers for tests and cache keys.
## In/Out of scope
In: domain invariants, policy evaluation helpers, deterministic serialization, shared abstractions.
Out: Mongo persistence implementations, HTTP endpoints, background scheduling, concrete connector logic.
## Observability & security expectations
- Avoid secret handling; provide structured logging extension methods for consensus decisions.
- Emit correlation identifiers and query signatures without embedding PII.
- Ensure deterministic logging order to keep reproducibility guarantees intact.
## Tests
- Unit coverage lives in `../StellaOps.Excititor.Core.Tests` (to be scaffolded), focusing on consensus, policy gates, and serialization determinism.
- Golden fixtures must rely on canonical JSON snapshots produced via stable serializers.
- Working directory: `src/Excititor/__Libraries/StellaOps.Excititor.Core`
- Domain models, validators, linkset extraction, idempotent upserts, tenant guards, and invariants shared by WebService/Worker.
- No UI concerns; no policy evaluation.

## Required Reading
- `docs/modules/excititor/architecture.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/excititor/vex_observations.md`
- `docs/ingestion/aggregation-only-contract.md`
- `docs/modules/excititor/implementation_plan.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause the task without shipping changes; leave notes in commit/PR descriptions for context.

## Roles
- Backend library engineer (.NET 10 / C# preview).
- QA automation (unit + integration against Mongo fixtures).

## Working Agreements
1. Update sprint status on task transitions; log notable decisions in the sprint Execution Log.
2. Enforce idempotent ingestion: uniqueness on `(vendor, upstreamId, contentHash, tenant)` and append-only supersede chains.
3. Preserve provenance fields and reconciled-from metadata when building linksets; never drop issuer data.
4. Tenant isolation is mandatory: all queries/commands include tenant scope; cross-tenant writes must be rejected.
5. Offline-first; avoid fetching external resources at runtime.
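The uniqueness rule in agreement 2 is ultimately enforced at the storage boundary. A minimal sketch of what that could look like follows; the `RawVexDocument` shape and helper name are assumptions for illustration (the real records live in Storage.Mongo).

```csharp
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;

// Hypothetical raw-document shape; the real record lives in Storage.Mongo.
public sealed class RawVexDocument
{
    public string Tenant { get; set; } = default!;
    public string Vendor { get; set; } = default!;
    public string UpstreamId { get; set; } = default!;
    public string ContentHash { get; set; } = default!;
}

public static class RawVexDocumentIndexes
{
    // Unique compound index over (tenant, vendor, upstreamId, contentHash):
    // re-ingesting the same upstream document fails the insert (or no-ops an upsert)
    // instead of producing a duplicate, which is what makes ingestion idempotent.
    public static Task EnsureAsync(IMongoCollection<RawVexDocument> collection, CancellationToken ct)
    {
        var keys = Builders<RawVexDocument>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending(x => x.Vendor)
            .Ascending(x => x.UpstreamId)
            .Ascending(x => x.ContentHash);

        return collection.Indexes.CreateOneAsync(
            new CreateIndexModel<RawVexDocument>(keys, new CreateIndexOptions { Unique = true }),
            cancellationToken: ct);
    }
}
```

Leading the index with the tenant field also keeps the tenant-isolation agreement cheap to honor: every scoped query can use the same index prefix.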

## Testing & Determinism
- Write deterministic tests: seeded clocks/GUIDs, stable ordering of collections, ISO-8601 UTC timestamps.
- Cover linkset extraction ordering, supersede chain construction, and duplicate prevention.
- Use Mongo in-memory/test harness fixtures; do not rely on live services.

## Boundaries
- Do not embed Policy Engine rules or Cartographer schemas here; expose contracts for consumers instead.
- Keep serialization shapes versioned; document breaking changes in `docs/modules/excititor/changes.md` if created.

## Ready-to-Start Checklist
- Required docs reviewed.
- Deterministic test fixtures in place.
- Feature flags/config options identified for any behavioral changes.
@@ -0,0 +1,154 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;

namespace StellaOps.Excititor.Core.Observations;

public static class VexLinksetUpdatedEventFactory
{
    public const string EventType = "vex.linkset.updated";

    public static VexLinksetUpdatedEvent Create(
        string tenant,
        string linksetId,
        string vulnerabilityId,
        string productKey,
        IEnumerable<VexObservation> observations,
        IEnumerable<VexObservationDisagreement> disagreements,
        DateTimeOffset createdAtUtc)
    {
        var normalizedTenant = Ensure(tenant, nameof(tenant)).ToLowerInvariant();
        var normalizedLinksetId = Ensure(linksetId, nameof(linksetId));
        var normalizedVulnerabilityId = Ensure(vulnerabilityId, nameof(vulnerabilityId));
        var normalizedProductKey = Ensure(productKey, nameof(productKey));

        var observationRefs = (observations ?? Enumerable.Empty<VexObservation>())
            .Where(obs => obs is not null)
            .SelectMany(obs => obs.Statements.Select(statement => new VexLinksetObservationRef(
                ObservationId: obs.ObservationId,
                ProviderId: obs.ProviderId,
                Status: statement.Status.ToString().ToLowerInvariant(),
                Confidence: statement.Signals?.Severity?.Score)))
            .Distinct(VexLinksetObservationRefComparer.Instance)
            .OrderBy(refItem => refItem.ProviderId, StringComparer.OrdinalIgnoreCase)
            .ThenBy(refItem => refItem.ObservationId, StringComparer.Ordinal)
            .ToImmutableArray();

        var disagreementList = (disagreements ?? Enumerable.Empty<VexObservationDisagreement>())
            .Where(d => d is not null)
            .Select(d => new VexObservationDisagreement(
                providerId: Normalize(d.ProviderId),
                status: Normalize(d.Status),
                justification: VexObservation.TrimToNull(d.Justification),
                confidence: d.Confidence is null ? null : Math.Clamp(d.Confidence.Value, 0.0, 1.0)))
            .Distinct(DisagreementComparer.Instance)
            .OrderBy(d => d.ProviderId, StringComparer.OrdinalIgnoreCase)
            .ThenBy(d => d.Status, StringComparer.OrdinalIgnoreCase)
            .ThenBy(d => d.Justification ?? string.Empty, StringComparer.OrdinalIgnoreCase)
            .ToImmutableArray();

        return new VexLinksetUpdatedEvent(
            EventType,
            normalizedTenant,
            normalizedLinksetId,
            normalizedVulnerabilityId,
            normalizedProductKey,
            observationRefs,
            disagreementList,
            createdAtUtc);
    }

    private static string Normalize(string value) => Ensure(value, nameof(value));

    private static string Ensure(string value, string name)
    {
        if (string.IsNullOrWhiteSpace(value))
        {
            throw new ArgumentException($"{name} cannot be null or whitespace", name);
        }

        return value.Trim();
    }

    private sealed class VexLinksetObservationRefComparer : IEqualityComparer<VexLinksetObservationRef>
    {
        public static readonly VexLinksetObservationRefComparer Instance = new();

        public bool Equals(VexLinksetObservationRef? x, VexLinksetObservationRef? y)
        {
            if (ReferenceEquals(x, y))
            {
                return true;
            }

            if (x is null || y is null)
            {
                return false;
            }

            return string.Equals(x.ObservationId, y.ObservationId, StringComparison.Ordinal)
                && string.Equals(x.ProviderId, y.ProviderId, StringComparison.OrdinalIgnoreCase)
                && string.Equals(x.Status, y.Status, StringComparison.OrdinalIgnoreCase)
                && Nullable.Equals(x.Confidence, y.Confidence);
        }

        public int GetHashCode(VexLinksetObservationRef obj)
        {
            var hash = new HashCode();
            hash.Add(obj.ObservationId, StringComparer.Ordinal);
            hash.Add(obj.ProviderId, StringComparer.OrdinalIgnoreCase);
            hash.Add(obj.Status, StringComparer.OrdinalIgnoreCase);
            hash.Add(obj.Confidence);
            return hash.ToHashCode();
        }
    }

    private sealed class DisagreementComparer : IEqualityComparer<VexObservationDisagreement>
    {
        public static readonly DisagreementComparer Instance = new();

        public bool Equals(VexObservationDisagreement? x, VexObservationDisagreement? y)
        {
            if (ReferenceEquals(x, y))
            {
                return true;
            }

            if (x is null || y is null)
            {
                return false;
            }

            return string.Equals(x.ProviderId, y.ProviderId, StringComparison.OrdinalIgnoreCase)
                && string.Equals(x.Status, y.Status, StringComparison.OrdinalIgnoreCase)
                && string.Equals(x.Justification, y.Justification, StringComparison.OrdinalIgnoreCase)
                && Nullable.Equals(x.Confidence, y.Confidence);
        }

        public int GetHashCode(VexObservationDisagreement obj)
        {
            var hash = new HashCode();
            hash.Add(obj.ProviderId, StringComparer.OrdinalIgnoreCase);
            hash.Add(obj.Status, StringComparer.OrdinalIgnoreCase);
            hash.Add(obj.Justification, StringComparer.OrdinalIgnoreCase);
            hash.Add(obj.Confidence);
            return hash.ToHashCode();
        }
    }
}

public sealed record VexLinksetObservationRef(
    string ObservationId,
    string ProviderId,
    string Status,
    double? Confidence);

public sealed record VexLinksetUpdatedEvent(
    string EventType,
    string Tenant,
    string LinksetId,
    string VulnerabilityId,
    string ProductKey,
    ImmutableArray<VexLinksetObservationRef> Observations,
    ImmutableArray<VexObservationDisagreement> Disagreements,
    DateTimeOffset CreatedAtUtc);
@@ -352,21 +352,23 @@ public sealed record VexObservationStatement
    }
}

public sealed record VexObservationLinkset
{
    public VexObservationLinkset(
        IEnumerable<string>? aliases,
        IEnumerable<string>? purls,
        IEnumerable<string>? cpes,
        IEnumerable<VexObservationReference>? references,
        IEnumerable<string>? reconciledFrom = null,
        IEnumerable<VexObservationDisagreement>? disagreements = null)
    {
        Aliases = NormalizeSet(aliases, toLower: true);
        Purls = NormalizeSet(purls, toLower: false);
        Cpes = NormalizeSet(cpes, toLower: false);
        References = NormalizeReferences(references);
        ReconciledFrom = NormalizeSet(reconciledFrom, toLower: false);
        Disagreements = NormalizeDisagreements(disagreements);
    }

    public ImmutableArray<string> Aliases { get; }
@@ -374,9 +376,11 @@ public sealed record VexObservationLinkset

    public ImmutableArray<string> Cpes { get; }

    public ImmutableArray<VexObservationReference> References { get; }

    public ImmutableArray<string> ReconciledFrom { get; }

    public ImmutableArray<VexObservationDisagreement> Disagreements { get; }

    private static ImmutableArray<string> NormalizeSet(IEnumerable<string>? values, bool toLower)
    {
@@ -419,19 +423,116 @@ public sealed record VexObservationLinkset
            set.Add(reference);
        }

        return set.Count == 0 ? ImmutableArray<VexObservationReference>.Empty : set.ToImmutableArray();
    }

    private static ImmutableArray<VexObservationDisagreement> NormalizeDisagreements(
        IEnumerable<VexObservationDisagreement>? disagreements)
    {
        if (disagreements is null)
        {
            return ImmutableArray<VexObservationDisagreement>.Empty;
        }

        // Deterministic ordering: provider, then status, then justification, then confidence.
        var set = new SortedSet<VexObservationDisagreement>(Comparer<VexObservationDisagreement>.Create((a, b) =>
        {
            var providerCompare = StringComparer.OrdinalIgnoreCase.Compare(a.ProviderId, b.ProviderId);
            if (providerCompare != 0)
            {
                return providerCompare;
            }

            var statusCompare = StringComparer.OrdinalIgnoreCase.Compare(a.Status, b.Status);
            if (statusCompare != 0)
            {
                return statusCompare;
            }

            var justificationCompare = StringComparer.OrdinalIgnoreCase.Compare(
                a.Justification ?? string.Empty,
                b.Justification ?? string.Empty);
            if (justificationCompare != 0)
            {
                return justificationCompare;
            }

            return Nullable.Compare(a.Confidence, b.Confidence);
        }));

        foreach (var disagreement in disagreements)
        {
            if (disagreement is null)
            {
                continue;
            }

            var normalizedProvider = VexObservation.TrimToNull(disagreement.ProviderId);
            var normalizedStatus = VexObservation.TrimToNull(disagreement.Status);

            if (normalizedProvider is null || normalizedStatus is null)
            {
                continue;
            }

            var normalizedJustification = VexObservation.TrimToNull(disagreement.Justification);
            double? clampedConfidence = disagreement.Confidence is null
                ? null
                : Math.Clamp(disagreement.Confidence.Value, 0.0, 1.0);

            set.Add(new VexObservationDisagreement(
                normalizedProvider,
                normalizedStatus,
                normalizedJustification,
                clampedConfidence));
        }

        return set.Count == 0 ? ImmutableArray<VexObservationDisagreement>.Empty : set.ToImmutableArray();
    }
}

public sealed record VexObservationReference
{
    public VexObservationReference(string type, string url)
    {
        Type = VexObservation.EnsureNotNullOrWhiteSpace(type, nameof(type));
        Url = VexObservation.EnsureNotNullOrWhiteSpace(url, nameof(url));
    }

    public string Type { get; }

    public string Url { get; }
}

public sealed record VexObservationDisagreement
{
    public VexObservationDisagreement(
        string providerId,
        string status,
        string? justification,
        double? confidence)
    {
        ProviderId = VexObservation.EnsureNotNullOrWhiteSpace(providerId, nameof(providerId));
        Status = VexObservation.EnsureNotNullOrWhiteSpace(status, nameof(status));
        Justification = justification;
        Confidence = confidence;
    }

    public string ProviderId { get; }

    public string Status { get; }

    public string? Justification { get; }

    public double? Confidence { get; }
}
@@ -1,35 +1,37 @@
# AGENTS
## Role
MongoDB persistence layer for Excititor raw documents, claims, consensus snapshots, exports, and cache metadata.
# Excititor Storage (Mongo) Charter

## Mission
Provide Mongo-backed persistence for Excititor ingestion, linksets, and observations with deterministic schemas, indexes, and migrations; keep aggregation-only semantics intact.

## Scope
- Collection schemas, Bson class maps, repositories, and transactional write patterns for ingest/export flows.
- GridFS integration for raw source documents and artifact metadata persistence.
- Migrations, index builders, and bootstrap routines aligned with offline-first deployments.
- Deterministic query helpers used by WebService, Worker, and Export modules.
## Participants
- WebService invokes repositories to store ingest runs, recompute consensus, and register exports.
- Worker relies on repositories for resume markers, retry queues, and cache GC flows.
- Export/Attestation modules pull stored claims/consensus data for snapshot building.
## Interfaces & contracts
- Repository abstractions (`IVexRawStore`, `IVexClaimStore`, `IVexConsensusStore`, `IVexExportStore`, `IVexCacheIndex`) and migration host interfaces.
- Diagnostics hooks providing collection health metrics and schema validation results.
## In/Out of scope
In: MongoDB data access, migrations, transactional semantics, schema documentation.
Out: domain modeling (Core), policy evaluation (Policy), HTTP surfaces (WebService).
## Observability & security expectations
- Emit structured logs for collection/migration events, including revision ids and elapsed timings.
- Expose health metrics (counts, queue backlog) and publish to OpenTelemetry when enabled.
- Ensure no raw secret material is logged; mask tokens/URLs in diagnostics.
## Tests
- Integration fixtures (Mongo runner) and schema regression tests will reside in `../StellaOps.Excititor.Storage.Mongo.Tests`.
- Working directory: `src/Excititor/__Libraries/StellaOps.Excititor.Storage.Mongo`
- Collections, indexes, migrations, repository abstractions, and data access helpers shared by WebService/Worker/Core.

## Required Reading
- `docs/modules/excititor/architecture.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/excititor/vex_observations.md`
- `docs/ingestion/aggregation-only-contract.md`
- `docs/modules/excititor/implementation_plan.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause the task without shipping changes; leave notes in commit/PR descriptions for context.

## Roles
- Backend/storage engineer (.NET 10, MongoDB driver ≥3.0).
- QA automation (repository + migration tests).

## Working Agreements
1. Maintain deterministic migrations; record new indexes and shapes in the sprint Execution Log and module docs if added.
2. Enforce tenant scope in all queries; include partition keys in indexes where applicable.
3. No consensus/weighting logic; store raw facts, provenance, and precedence pointers only.
4. Offline-first; no runtime external calls.

## Testing & Determinism
- Use Mongo test fixtures/in-memory harness with seeded data; assert index presence and sort stability.
- Keep timestamps UTC ISO-8601 and ordering explicit (e.g., vendor, upstreamId, version, createdUtc).
- Avoid nondeterministic ObjectId/GUID usage in tests; seed values.
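Asserting index presence, as the Testing & Determinism bullets require, can be sketched like this. The fixture resolution and collection name are assumptions for illustration; the assertion pattern (list indexes, match the key document) is the part that carries over.

```csharp
using System.Linq;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;
using Xunit;

// Hypothetical fixture-based test: after migrations run against a seeded test
// database, list the collection's indexes and assert the expected keys exist.
public sealed class ObservationIndexTests
{
    [Fact]
    public async Task Observations_collection_has_tenant_observation_index()
    {
        IMongoDatabase database = /* resolved from the seeded Mongo test fixture */ null!;
        var collection = database.GetCollection<BsonDocument>("vex.observations");

        using var cursor = await collection.Indexes.ListAsync();
        var indexes = await cursor.ToListAsync();

        // The migration should have created a (Tenant, ObservationId) compound index.
        Assert.Contains(indexes, index =>
            index["key"].AsBsonDocument.Names.SequenceEqual(new[] { "Tenant", "ObservationId" }));
    }
}
```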

## Boundaries
- Do not embed Policy Engine or Cartographer schemas; consume published contracts.
- Config via DI/appsettings; no hard-coded connection strings.

## Ready-to-Start Checklist
- Required docs reviewed.
- Test fixture database prepared; migrations scripted and reversible where possible.
@@ -0,0 +1,79 @@
using System;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;

namespace StellaOps.Excititor.Storage.Mongo.Migrations;

internal sealed class VexObservationCollectionsMigration : IVexMongoMigration
{
    public string Id => "20251117-observations-linksets";

    public async ValueTask ExecuteAsync(IMongoDatabase database, CancellationToken cancellationToken)
    {
        ArgumentNullException.ThrowIfNull(database);

        await EnsureObservationsIndexesAsync(database, cancellationToken).ConfigureAwait(false);
        await EnsureLinksetIndexesAsync(database, cancellationToken).ConfigureAwait(false);
    }

    private static Task EnsureObservationsIndexesAsync(IMongoDatabase database, CancellationToken cancellationToken)
    {
        var collection = database.GetCollection<VexObservationRecord>(VexMongoCollectionNames.Observations);

        var tenantObservationIndex = Builders<VexObservationRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending(x => x.ObservationId);

        var tenantVulnIndex = Builders<VexObservationRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending(x => x.VulnerabilityId);

        var tenantProductIndex = Builders<VexObservationRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending(x => x.ProductKey);

        var tenantDigestIndex = Builders<VexObservationRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending("Document.Digest");

        var tenantProviderStatusIndex = Builders<VexObservationRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending(x => x.ProviderId)
            .Ascending(x => x.Status);

        return Task.WhenAll(
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexObservationRecord>(tenantObservationIndex, new CreateIndexOptions { Unique = true }), cancellationToken: cancellationToken),
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexObservationRecord>(tenantVulnIndex), cancellationToken: cancellationToken),
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexObservationRecord>(tenantProductIndex), cancellationToken: cancellationToken),
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexObservationRecord>(tenantDigestIndex), cancellationToken: cancellationToken),
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexObservationRecord>(tenantProviderStatusIndex), cancellationToken: cancellationToken));
    }

    private static Task EnsureLinksetIndexesAsync(IMongoDatabase database, CancellationToken cancellationToken)
    {
        var collection = database.GetCollection<VexLinksetRecord>(VexMongoCollectionNames.Linksets);

        var tenantLinksetIndex = Builders<VexLinksetRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending(x => x.LinksetId);

        var tenantVulnIndex = Builders<VexLinksetRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending(x => x.VulnerabilityId);

        var tenantProductIndex = Builders<VexLinksetRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending(x => x.ProductKey);

        var tenantDisagreementProviderIndex = Builders<VexLinksetRecord>.IndexKeys
            .Ascending(x => x.Tenant)
            .Ascending("Disagreements.ProviderId")
            .Ascending("Disagreements.Status");

        return Task.WhenAll(
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexLinksetRecord>(tenantLinksetIndex, new CreateIndexOptions { Unique = true }), cancellationToken: cancellationToken),
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexLinksetRecord>(tenantVulnIndex), cancellationToken: cancellationToken),
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexLinksetRecord>(tenantProductIndex), cancellationToken: cancellationToken),
            collection.Indexes.CreateOneAsync(new CreateIndexModel<VexLinksetRecord>(tenantDisagreementProviderIndex), cancellationToken: cancellationToken));
    }
}
@@ -60,6 +60,7 @@ public static class VexMongoServiceCollectionExtensions
        services.AddSingleton<IVexMongoMigration, VexInitialIndexMigration>();
        services.AddSingleton<IVexMongoMigration, VexConsensusSignalsMigration>();
        services.AddSingleton<IVexMongoMigration, VexConsensusHoldMigration>();
        services.AddSingleton<IVexMongoMigration, VexObservationCollectionsMigration>();
        services.AddSingleton<VexMongoMigrationRunner>();
        services.AddHostedService<VexMongoMigrationHostedService>();
        return services;
@@ -64,12 +64,12 @@ public static class VexMongoMappingRegistry
    }
}

public static class VexMongoCollectionNames
{
    public const string Migrations = "vex.migrations";
    public const string Providers = "vex.providers";
    public const string Raw = "vex.raw";
    public const string Statements = "vex.statements";
    public const string Claims = Statements;
    public const string Consensus = "vex.consensus";
    public const string Exports = "vex.exports";
@@ -77,4 +77,6 @@ public static class VexMongoCollectionNames
    public const string ConnectorState = "vex.connector_state";
    public const string ConsensusHolds = "vex.consensus_holds";
    public const string Attestations = "vex.attestations";
    public const string Observations = "vex.observations";
    public const string Linksets = "vex.linksets";
}
@@ -63,8 +63,8 @@ internal sealed class VexRawDocumentRecord
}

[BsonIgnoreExtraElements]
internal sealed class VexExportManifestRecord
{
    [BsonId]
    public string Id { get; set; } = default!;

@@ -164,8 +164,8 @@ internal sealed class VexExportManifestRecord
        SizeBytes = manifest.SizeBytes,
    };

    public VexExportManifest ToDomain()
    {
        var signedAt = SignedAt.HasValue
            ? new DateTimeOffset(DateTime.SpecifyKind(SignedAt.Value, DateTimeKind.Utc))
            : (DateTimeOffset?)null;
@@ -276,6 +276,73 @@ internal sealed class VexQuietStatementRecord
    }
}

[BsonIgnoreExtraElements]
internal sealed class VexObservationRecord
{
    [BsonId]
    public string Id { get; set; } = default!; // observationId

    public string Tenant { get; set; } = default!;

    public string ObservationId { get; set; } = default!;

    public string VulnerabilityId { get; set; } = default!;

    public string ProductKey { get; set; } = default!;

    public string ProviderId { get; set; } = default!;

    public string Status { get; set; } = default!;

    public VexObservationDocumentRecord Document { get; set; } = new();

    public DateTime CreatedAt { get; set; } = DateTime.SpecifyKind(DateTime.UtcNow, DateTimeKind.Utc);

    public List<VexLinksetDisagreementRecord> Disagreements { get; set; } = new();
}

[BsonIgnoreExtraElements]
internal sealed class VexObservationDocumentRecord
{
    public string Digest { get; set; } = default!;

    public string? SourceUri { get; set; }
}

[BsonIgnoreExtraElements]
internal sealed class VexLinksetRecord
{
    [BsonId]
    public string Id { get; set; } = default!; // linksetId

    public string Tenant { get; set; } = default!;

    public string LinksetId { get; set; } = default!;

    public string VulnerabilityId { get; set; } = default!;

    public string ProductKey { get; set; } = default!;

    public DateTime CreatedAt { get; set; } = DateTime.SpecifyKind(DateTime.UtcNow, DateTimeKind.Utc);

    public List<VexLinksetDisagreementRecord> Disagreements { get; set; } = new();
}

[BsonIgnoreExtraElements]
internal sealed class VexLinksetDisagreementRecord
{
    public string ProviderId { get; set; } = default!;

    public string Status { get; set; } = default!;

    public string? Justification { get; set; }

    public double? Confidence { get; set; }
}

[BsonIgnoreExtraElements]
internal sealed class VexProviderRecord
{
@@ -0,0 +1,126 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using StellaOps.Excititor.Core.Observations;
using Xunit;

namespace StellaOps.Excititor.Core.Tests.Observations;

public sealed class VexLinksetUpdatedEventFactoryTests
{
    [Fact]
    public void Create_Normalizes_Sorts_And_Deduplicates()
    {
        var now = DateTimeOffset.UtcNow;

        var observations = new List<VexObservation>
        {
            CreateObservation("obs-2", "provider-b", VexClaimStatus.Affected, 0.8, now),
            CreateObservation("obs-1", "provider-a", VexClaimStatus.NotAffected, 0.1, now),
            CreateObservation("obs-1", "provider-a", VexClaimStatus.NotAffected, 0.1, now), // duplicate
        };

        var disagreements = new[]
        {
            new VexObservationDisagreement("provider-b", "affected", "reason", 1.2),
            new VexObservationDisagreement("provider-b", "affected", "reason", 1.2),
            new VexObservationDisagreement("provider-a", "not_affected", null, -0.5),
        };

        var evt = VexLinksetUpdatedEventFactory.Create(
            tenant: "TENANT",
            linksetId: "link-123",
            vulnerabilityId: "CVE-2025-0001",
            productKey: "pkg:demo/app",
            observations,
            disagreements,
            now);

        Assert.Equal(VexLinksetUpdatedEventFactory.EventType, evt.EventType);
        Assert.Equal("tenant", evt.Tenant);
        Assert.Equal(2, evt.Observations.Length);
        Assert.Collection(evt.Observations,
            first =>
            {
                Assert.Equal("obs-1", first.ObservationId);
                Assert.Equal("provider-a", first.ProviderId);
                Assert.Equal("not_affected", first.Status);
                Assert.Equal(0.1, first.Confidence);
            },
            second =>
            {
                Assert.Equal("obs-2", second.ObservationId);
                Assert.Equal("provider-b", second.ProviderId);
                Assert.Equal("affected", second.Status);
                Assert.Equal(0.8, second.Confidence);
            });

        Assert.Equal(2, evt.Disagreements.Length);
        Assert.Collection(evt.Disagreements,
            first =>
            {
                Assert.Equal("provider-a", first.ProviderId);
                Assert.Equal("not_affected", first.Status);
                Assert.Equal(0.0, first.Confidence); // clamped
            },
            second =>
            {
                Assert.Equal("provider-b", second.ProviderId);
                Assert.Equal("affected", second.Status);
                Assert.Equal(1.0, second.Confidence); // clamped
                Assert.Equal("reason", second.Justification);
            });
    }

    private static VexObservation CreateObservation(
        string observationId,
        string providerId,
        VexClaimStatus status,
        double? severity,
        DateTimeOffset createdAt)
    {
        var statement = new VexObservationStatement(
            vulnerabilityId: "CVE-2025-0001",
            productKey: "pkg:demo/app",
            status: status,
            lastObserved: createdAt,
            purl: "pkg:demo/app",
            cpe: null,
            evidence: ImmutableArray<System.Text.Json.Nodes.JsonNode>.Empty,
            signals: severity is null
                ? null
                : new VexSignalSnapshot(new VexSeveritySignal("cvss", severity, "n/a", vector: null), Kev: null, Epss: null));

        var upstream = new VexObservationUpstream(
            upstreamId: observationId,
            documentVersion: null,
            fetchedAt: createdAt,
            receivedAt: createdAt,
            contentHash: $"sha256:{observationId}",
            signature: new VexObservationSignature(true, "sub", "iss", createdAt));

        var linkset = new VexObservationLinkset(
            aliases: null,
            purls: new[] { "pkg:demo/app" },
            cpes: null,
            references: null);

        var content = new VexObservationContent(
            format: "csaf",
            specVersion: "2.0",
            raw: System.Text.Json.Nodes.JsonNode.Parse("{}")!);

        return new VexObservation(
            observationId,
            tenant: "tenant",
            providerId,
            streamId: "stream",
            upstream,
            statements: ImmutableArray.Create(statement),
            content,
            linkset,
            createdAt,
            supersedes: ImmutableArray<string>.Empty,
            attributes: ImmutableDictionary<string, string>.Empty);
    }
}
@@ -0,0 +1,64 @@
using System.Collections.Generic;
using System.Linq;
using StellaOps.Excititor.Core.Observations;
using Xunit;

namespace StellaOps.Excititor.Core.Tests.Observations;

public sealed class VexObservationLinksetTests
{
    [Fact]
    public void Disagreements_Normalize_SortsAndClamps()
    {
        var disagreements = new[]
        {
            new VexObservationDisagreement("Provider-B", "affected", "just", 1.2),
            new VexObservationDisagreement("provider-a", "not_affected", null, -0.1),
            new VexObservationDisagreement("provider-a", "not_affected", null, 0.5),
        };

        var linkset = new VexObservationLinkset(
            aliases: null,
            purls: null,
            cpes: null,
            references: null,
            reconciledFrom: null,
            disagreements: disagreements);

        Assert.Equal(2, linkset.Disagreements.Length);

        var first = linkset.Disagreements[0];
        Assert.Equal("provider-a", first.ProviderId);
        Assert.Equal("not_affected", first.Status);
        Assert.Null(first.Justification);
        Assert.Equal(0.0, first.Confidence); // clamped from -0.1

        var second = linkset.Disagreements[1];
        Assert.Equal("Provider-B", second.ProviderId);
        Assert.Equal("affected", second.Status);
        Assert.Equal("just", second.Justification);
        Assert.Equal(1.0, second.Confidence); // clamped from 1.2
    }

    [Fact]
    public void Disagreements_Deduplicates_ByProviderStatusJustificationConfidence()
    {
        var disagreements = new List<VexObservationDisagreement>
        {
            new("provider-a", "affected", null, 0.7),
            new("provider-a", "affected", null, 0.7),
            new("provider-a", "affected", null, 0.7),
        };

        var linkset = new VexObservationLinkset(
            aliases: null,
            purls: null,
            cpes: null,
            references: null,
            reconciledFrom: null,
            disagreements: disagreements);

        Assert.Single(linkset.Disagreements);
        Assert.Equal(0.7, linkset.Disagreements[0].Confidence);
    }
}
@@ -21,11 +21,12 @@ public sealed class VexMongoMigrationRunnerTests : IAsyncLifetime
    [Fact]
    public async Task RunAsync_AppliesInitialIndexesOnce()
    {
        var migrations = new IVexMongoMigration[]
        {
            new VexInitialIndexMigration(),
            new VexConsensusSignalsMigration(),
        };
        var migrations = new IVexMongoMigration[]
        {
            new VexInitialIndexMigration(),
            new VexConsensusSignalsMigration(),
            new VexObservationCollectionsMigration(),
        };
        var runner = new VexMongoMigrationRunner(_database, migrations, NullLogger<VexMongoMigrationRunner>.Instance);

        await runner.RunAsync(CancellationToken.None);
@@ -33,8 +34,8 @@ public sealed class VexMongoMigrationRunnerTests : IAsyncLifetime
        var appliedCollection = _database.GetCollection<VexMigrationRecord>(VexMongoCollectionNames.Migrations);
        var applied = await appliedCollection.Find(FilterDefinition<VexMigrationRecord>.Empty).ToListAsync();
        Assert.Equal(2, applied.Count);
        Assert.Equal(migrations.Select(m => m.Id).OrderBy(id => id, StringComparer.Ordinal), applied.Select(record => record.Id).OrderBy(id => id, StringComparer.Ordinal));
        Assert.Equal(3, applied.Count);
        Assert.Equal(migrations.Select(m => m.Id).OrderBy(id => id, StringComparer.Ordinal), applied.Select(record => record.Id).OrderBy(id => id, StringComparer.Ordinal));

        Assert.True(HasIndex(_database.GetCollection<VexRawDocumentRecord>(VexMongoCollectionNames.Raw), "ProviderId_1_Format_1_RetrievedAt_1"));
        Assert.True(HasIndex(_database.GetCollection<VexProviderRecord>(VexMongoCollectionNames.Providers), "Kind_1"));
@@ -43,11 +44,19 @@ public sealed class VexMongoMigrationRunnerTests : IAsyncLifetime
        Assert.True(HasIndex(_database.GetCollection<VexConsensusRecord>(VexMongoCollectionNames.Consensus), "PolicyRevisionId_1_CalculatedAt_-1"));
        Assert.True(HasIndex(_database.GetCollection<VexExportManifestRecord>(VexMongoCollectionNames.Exports), "QuerySignature_1_Format_1"));
        Assert.True(HasIndex(_database.GetCollection<VexCacheEntryRecord>(VexMongoCollectionNames.Cache), "QuerySignature_1_Format_1"));
        Assert.True(HasIndex(_database.GetCollection<VexCacheEntryRecord>(VexMongoCollectionNames.Cache), "ExpiresAt_1"));
        Assert.True(HasIndex(_database.GetCollection<VexStatementRecord>(VexMongoCollectionNames.Statements), "VulnerabilityId_1_Product.Key_1_InsertedAt_-1"));
        Assert.True(HasIndex(_database.GetCollection<VexStatementRecord>(VexMongoCollectionNames.Statements), "ProviderId_1_InsertedAt_-1"));
        Assert.True(HasIndex(_database.GetCollection<VexStatementRecord>(VexMongoCollectionNames.Statements), "Document.Digest_1"));
    }
        Assert.True(HasIndex(_database.GetCollection<VexCacheEntryRecord>(VexMongoCollectionNames.Cache), "ExpiresAt_1"));
        Assert.True(HasIndex(_database.GetCollection<VexStatementRecord>(VexMongoCollectionNames.Statements), "VulnerabilityId_1_Product.Key_1_InsertedAt_-1"));
        Assert.True(HasIndex(_database.GetCollection<VexStatementRecord>(VexMongoCollectionNames.Statements), "ProviderId_1_InsertedAt_-1"));
        Assert.True(HasIndex(_database.GetCollection<VexStatementRecord>(VexMongoCollectionNames.Statements), "Document.Digest_1"));
        Assert.True(HasIndex(_database.GetCollection<VexObservationRecord>(VexMongoCollectionNames.Observations), "Tenant_1_ObservationId_1"));
        Assert.True(HasIndex(_database.GetCollection<VexObservationRecord>(VexMongoCollectionNames.Observations), "Tenant_1_VulnerabilityId_1"));
        Assert.True(HasIndex(_database.GetCollection<VexObservationRecord>(VexMongoCollectionNames.Observations), "Tenant_1_ProductKey_1"));
        Assert.True(HasIndex(_database.GetCollection<VexObservationRecord>(VexMongoCollectionNames.Observations), "Tenant_1_Document.Digest_1"));
        Assert.True(HasIndex(_database.GetCollection<VexObservationRecord>(VexMongoCollectionNames.Observations), "Tenant_1_ProviderId_1_Status_1"));
        Assert.True(HasIndex(_database.GetCollection<VexLinksetRecord>(VexMongoCollectionNames.Linksets), "Tenant_1_LinksetId_1"));
        Assert.True(HasIndex(_database.GetCollection<VexLinksetRecord>(VexMongoCollectionNames.Linksets), "Tenant_1_VulnerabilityId_1"));
        Assert.True(HasIndex(_database.GetCollection<VexLinksetRecord>(VexMongoCollectionNames.Linksets), "Tenant_1_ProductKey_1"));
    }

    private static bool HasIndex<TDocument>(IMongoCollection<TDocument> collection, string name)
    {
@@ -0,0 +1,88 @@
using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;
using System.Linq;
using StellaOps.Excititor.WebService.Contracts;
using StellaOps.Excititor.WebService.Telemetry;
using Xunit;

namespace StellaOps.Excititor.WebService.Tests;

public sealed class EvidenceTelemetryTests
{
    [Fact]
    public void RecordChunkOutcome_EmitsCounterAndHistogram()
    {
        var measurements = new List<(string Name, double Value, IReadOnlyList<KeyValuePair<string, object?>> Tags)>();

        using var listener = CreateListener((instrument, value, tags) =>
        {
            // ReadOnlySpan<T> has no LINQ ToList(); copy the tags with ToArray() instead.
            measurements.Add((instrument.Name, value, tags.ToArray()));
        });

        EvidenceTelemetry.RecordChunkOutcome("tenant-a", "success", chunkCount: 3, truncated: true);

        Assert.Contains(measurements, m => m.Name == "excititor.vex.evidence.requests" && m.Value == 1);
        Assert.Contains(measurements, m => m.Name == "excititor.vex.evidence.chunk_count" && m.Value == 3);

        var requestTags = measurements.First(m => m.Name == "excititor.vex.evidence.requests").Tags.ToDictionary(kv => kv.Key, kv => kv.Value);
        Assert.Equal("tenant-a", requestTags["tenant"]);
        Assert.Equal("success", requestTags["outcome"]);
        Assert.Equal(true, requestTags["truncated"]);
    }

    [Fact]
    public void RecordChunkSignatureStatus_EmitsSignatureCounters()
    {
        var measurements = new List<(string Name, double Value, IReadOnlyList<KeyValuePair<string, object?>> Tags)>();

        using var listener = CreateListener((instrument, value, tags) =>
        {
            measurements.Add((instrument.Name, value, tags.ToArray()));
        });

        var now = DateTimeOffset.UtcNow;
        var scope = new VexEvidenceChunkScope("pkg:demo/app", "demo", "1.0.0", "pkg:demo/app@1.0.0", null, new[] { "component-a" });
        var document = new VexEvidenceChunkDocument("digest-1", "spdx", "https://example.test/vex.json", "r1");

        var chunks = new List<VexEvidenceChunkResponse>
        {
            new("obs-1", "link-1", "CVE-2025-0001", "pkg:demo/app", "provider-a", "Affected", "just", "detail", 0.9, now.AddMinutes(-10), now, scope, document, new VexEvidenceChunkSignature("cosign", "sub", "issuer", "kid", now, null), new Dictionary<string, string>()),
            new("obs-2", "link-2", "CVE-2025-0001", "pkg:demo/app", "provider-b", "NotAffected", null, null, null, now.AddMinutes(-8), now, scope, document, null, new Dictionary<string, string>()),
            new("obs-3", "link-3", "CVE-2025-0001", "pkg:demo/app", "provider-c", "Affected", null, null, null, now.AddMinutes(-6), now, scope, document, new VexEvidenceChunkSignature("cosign", "sub", "issuer", "kid", null, null), new Dictionary<string, string>()),
        };

        EvidenceTelemetry.RecordChunkSignatureStatus("tenant-b", chunks);

        AssertSignatureMeasurement(measurements, "unsigned", 1, "tenant-b");
        AssertSignatureMeasurement(measurements, "unverified", 1, "tenant-b");
        AssertSignatureMeasurement(measurements, "verified", 1, "tenant-b");
    }

    private static MeterListener CreateListener(Action<Instrument, double, ReadOnlySpan<KeyValuePair<string, object?>>> callback)
    {
        var listener = new MeterListener
        {
            InstrumentPublished = (instrument, l) =>
            {
                if (instrument.Meter.Name == EvidenceTelemetry.MeterName)
                {
                    l.EnableMeasurementEvents(instrument);
                }
            }
        };

        listener.SetMeasurementEventCallback<long>((instrument, measurement, tags, _) => callback(instrument, measurement, tags));
        listener.SetMeasurementEventCallback<int>((instrument, measurement, tags, _) => callback(instrument, measurement, tags));

        listener.Start();
        return listener;
    }

    private static void AssertSignatureMeasurement(IEnumerable<(string Name, double Value, IReadOnlyList<KeyValuePair<string, object?>> Tags)> measurements, string status, int expectedValue, string tenant)
    {
        var match = measurements.FirstOrDefault(m => m.Name == "excititor.vex.signature.status" && m.Tags.Any(t => t.Key == "status" && (string?)t.Value == status));
        Assert.Equal(expectedValue, match.Value);
        Assert.Contains(match.Tags, t => t.Key == "tenant" && (string?)t.Value == tenant);
    }
}
@@ -0,0 +1,54 @@
using System.Text.Json.Nodes;
using FluentAssertions;
using StellaOps.Findings.Ledger.Domain;
using StellaOps.Findings.Ledger.Infrastructure.InMemory;
using Xunit;

namespace StellaOps.Findings.Ledger.Tests.Infrastructure;

public class InMemoryLedgerEventRepositoryTests
{
    [Fact]
    public async Task GetEvidenceReferencesAsync_returns_only_events_with_refs()
    {
        var repo = new InMemoryLedgerEventRepository();
        var tenant = "tenant-1";
        var findingId = "finding-123";

        var withEvidence = CreateRecord(tenant, findingId, sequence: 1, evidenceRef: "bundle-1");
        var withoutEvidence = CreateRecord(tenant, findingId, sequence: 2, evidenceRef: null);
        await repo.AppendAsync(withEvidence, CancellationToken.None);
        await repo.AppendAsync(withoutEvidence, CancellationToken.None);

        var results = await repo.GetEvidenceReferencesAsync(tenant, findingId, CancellationToken.None);

        results.Should().HaveCount(1);
        results[0].EvidenceBundleRef.Should().Be("bundle-1");
        results[0].EventId.Should().Be(withEvidence.EventId);
    }

    private static LedgerEventRecord CreateRecord(string tenant, string findingId, long sequence, string? evidenceRef)
    {
        var body = new JsonObject { ["status"] = "affected" };
        return new LedgerEventRecord(
            tenant,
            Guid.NewGuid(),
            sequence,
            Guid.NewGuid(),
            "finding.status.changed",
            "policy-v1",
            findingId,
            "artifact-1",
            null,
            "actor-1",
            "operator",
            DateTimeOffset.UtcNow,
            DateTimeOffset.UtcNow,
            body,
            "hash-event",
            "hash-prev",
            "hash-leaf",
            "canon",
            evidenceRef);
    }
}
@@ -0,0 +1,63 @@
using System.Diagnostics.Metrics;
using System.Linq;
using FluentAssertions;
using StellaOps.Findings.Ledger.Observability;
using Xunit;

namespace StellaOps.Findings.Ledger.Tests.Observability;

public class LedgerMetricsTests
{
    [Fact]
    public void RecordProjectionApply_emits_histogram_and_counter_with_tags()
    {
        var histogramValues = new List<Measurement<double>>();
        var counterValues = new List<Measurement<long>>();

        using var listener = new MeterListener
        {
            InstrumentPublished = (instrument, l) =>
            {
                if (instrument.Meter.Name == "StellaOps.Findings.Ledger")
                {
                    l.EnableMeasurementEvents(instrument);
                }
            }
        };

        // MeasurementCallback<T> takes (instrument, value, tags, state); wrap the raw
        // value and tags in a Measurement<T> so they can be asserted below.
        listener.SetMeasurementEventCallback<double>((instrument, measurement, tags, _) =>
        {
            if (instrument.Name is "ledger_projection_apply_seconds" or "ledger_projection_lag_seconds")
            {
                histogramValues.Add(new Measurement<double>(measurement, tags));
            }
        });

        listener.SetMeasurementEventCallback<long>((instrument, measurement, tags, _) =>
        {
            if (instrument.Name == "ledger_projection_events_total")
            {
                counterValues.Add(new Measurement<long>(measurement, tags));
            }
        });

        listener.Start();

        LedgerMetrics.RecordProjectionApply(
            TimeSpan.FromMilliseconds(40),
            1.2,
            "tenant-x",
            "finding.status.changed",
            "v1.0",
            "affected");

        histogramValues.Should().NotBeEmpty();
        counterValues.Should().NotBeEmpty();

        // Measurement<T>.Tags is a ReadOnlySpan, so copy it to an array before using LINQ.
        var tags = histogramValues.First().Tags.ToArray().ToDictionary(kv => kv.Key, kv => kv.Value?.ToString());
        tags["tenant"].Should().Be("tenant-x");
        tags["event_type"].Should().Be("finding.status.changed");
        tags["policy_version"].Should().Be("v1.0");
        tags["evaluation_status"].Should().Be("affected");
    }
}
@@ -0,0 +1,117 @@
using System.Diagnostics;
using System.Text.Json.Nodes;
using FluentAssertions;
using StellaOps.Findings.Ledger.Domain;
using StellaOps.Findings.Ledger.Observability;
using Xunit;

namespace StellaOps.Findings.Ledger.Tests.Observability;

public class LedgerTelemetryTests
{
    [Fact]
    public void StartLedgerAppend_sets_core_tags()
    {
        using var listener = CreateListener();
        var draft = CreateDraft();

        using var activity = LedgerTelemetry.StartLedgerAppend(draft);

        activity.Should().NotBeNull();
        activity!.DisplayName.Should().Be("Ledger.Append");
        activity.GetTagItem("tenant").Should().Be(draft.TenantId);
        activity.GetTagItem("chain_id").Should().Be(draft.ChainId);
        activity.GetTagItem("event_type").Should().Be(draft.EventType);
        activity.GetTagItem("policy_version").Should().Be(draft.PolicyVersion);
    }

    [Fact]
    public void MarkAppendOutcome_sets_hashes_and_success_status()
    {
        using var listener = CreateListener();
        var draft = CreateDraft();
        var record = CreateRecord(draft);
        using var activity = LedgerTelemetry.StartLedgerAppend(draft);

        LedgerTelemetry.MarkAppendOutcome(activity, record, TimeSpan.FromMilliseconds(12));

        activity.Should().NotBeNull();
        activity!.Status.Should().Be(ActivityStatusCode.Ok);
        activity.GetTagItem("event_hash").Should().Be(record.EventHash);
        activity.GetTagItem("merkle_leaf_hash").Should().Be(record.MerkleLeafHash);
        activity.GetTagItem("duration_ms").Should().Be(12d);
    }

    [Fact]
    public void StartProjectionApply_sets_projection_tags()
    {
        using var listener = CreateListener();
        var record = CreateRecord(CreateDraft());

        using var activity = LedgerTelemetry.StartProjectionApply(record);

        activity.Should().NotBeNull();
        activity!.DisplayName.Should().Be("Ledger.Projection.Apply");
        activity.GetTagItem("tenant").Should().Be(record.TenantId);
        activity.GetTagItem("finding_id").Should().Be(record.FindingId);
    }

    private static ActivityListener CreateListener()
    {
        var listener = new ActivityListener
        {
            ShouldListenTo = source => source.Name == LedgerTelemetry.ActivitySourceName,
            Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllDataAndRecorded,
            SampleUsingParentId = (ref ActivityCreationOptions<string> _) => ActivitySamplingResult.AllDataAndRecorded
        };

        ActivitySource.AddActivityListener(listener);
        return listener;
    }

    private static LedgerEventDraft CreateDraft()
    {
        var payload = new JsonObject { ["payload"] = "value" };
        var envelope = new JsonObject { ["event"] = new JsonObject { ["id"] = Guid.NewGuid() } };
        return new LedgerEventDraft(
            "tenant-a",
            Guid.NewGuid(),
            1,
            Guid.NewGuid(),
            "finding.status.changed",
            "v1.0",
            "finding-123",
            "artifact-abc",
            null,
            "actor-1",
            "operator",
            DateTimeOffset.UtcNow,
            DateTimeOffset.UtcNow,
            payload,
            envelope,
            LedgerEventConstants.EmptyHash);
    }

    private static LedgerEventRecord CreateRecord(LedgerEventDraft draft)
    {
        return new LedgerEventRecord(
            draft.TenantId,
            draft.ChainId,
            draft.SequenceNumber,
            draft.EventId,
            draft.EventType,
            draft.PolicyVersion,
            draft.FindingId,
            draft.ArtifactId,
            draft.SourceRunId,
            draft.ActorId,
            draft.ActorType,
            draft.OccurredAt,
            draft.RecordedAt,
            draft.Payload,
            "hash-event",
            draft.ProvidedPreviousHash ?? LedgerEventConstants.EmptyHash,
            "hash-leaf",
            "canonical-json");
    }
}
@@ -0,0 +1,81 @@
using System.Diagnostics;
using System.Text.Json.Nodes;
using FluentAssertions;
using StellaOps.Findings.Ledger.Domain;
using StellaOps.Findings.Ledger.Observability;
using Xunit;

namespace StellaOps.Findings.Ledger.Tests.Observability;

public class LedgerTimelineTests
{
    [Fact]
    public void EmitLedgerAppended_writes_structured_log_with_event_id()
    {
        var logger = new TestLogger<LedgerTimelineTests>();
        using var activity = new Activity("test").Start();
        var record = CreateRecord();

        LedgerTimeline.EmitLedgerAppended(logger, record, "evidence-123");

        logger.Entries.Should().HaveCount(1);
        var entry = logger.Entries.First();
        entry.EventId.Name.Should().Be("ledger.event.appended");
        entry.EventId.Id.Should().Be(6101);

        var state = AsDictionary(entry.State);
        state["Tenant"].Should().Be(record.TenantId);
        state["EvidenceRef"].Should().Be("evidence-123");
    }

    [Fact]
    public void EmitProjectionUpdated_writes_structured_log_with_status()
    {
        var logger = new TestLogger<LedgerTimelineTests>();
        using var activity = new Activity("test").Start();
        var record = CreateRecord();

        LedgerTimeline.EmitProjectionUpdated(logger, record, "affected");

        var entry = logger.Entries.Single();
        entry.EventId.Name.Should().Be("ledger.projection.updated");
        entry.EventId.Id.Should().Be(6201);

        var state = AsDictionary(entry.State);
        state["Status"].Should().Be("affected");
    }

    private static LedgerEventRecord CreateRecord()
    {
        var payload = new JsonObject { ["status"] = "affected" };
        return new LedgerEventRecord(
            "tenant-a",
            Guid.NewGuid(),
            1,
            Guid.NewGuid(),
            "finding.status.changed",
            "v1",
            "finding-1",
            "artifact-1",
            null,
            "actor-1",
            "operator",
            DateTimeOffset.UtcNow,
            DateTimeOffset.UtcNow,
            payload,
            "hash-event",
            "hash-prev",
            "hash-leaf",
            "canonical-json");
    }

    private static IDictionary<string, object?> AsDictionary(object state)
    {
        if (state is not IEnumerable<KeyValuePair<string, object?>> pairs)
        {
            return new Dictionary<string, object?>();
        }

        return pairs.ToDictionary(k => k.Key, v => v.Value);
    }
}
@@ -0,0 +1,30 @@
using System.Collections.Concurrent;
using Microsoft.Extensions.Logging;

namespace StellaOps.Findings.Ledger.Tests.Observability;

internal sealed class TestLogger<T> : ILogger<T>
{
    private readonly ConcurrentBag<LogEntry> _entries = new();

    public IReadOnlyCollection<LogEntry> Entries => _entries;

    public IDisposable BeginScope<TState>(TState state) where TState : notnull => NullScope.Instance;

    public bool IsEnabled(LogLevel logLevel) => true;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception? exception, Func<TState, Exception?, string> formatter)
    {
        _entries.Add(new LogEntry(logLevel, eventId, state));
    }

    internal sealed record LogEntry(LogLevel Level, EventId EventId, object State);

    private sealed class NullScope : IDisposable
    {
        public static readonly NullScope Instance = new();
        public void Dispose()
        {
        }
    }
}
@@ -0,0 +1,22 @@
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="..\StellaOps.Findings.Ledger\StellaOps.Findings.Ledger.csproj" />
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.11.1" />
    <PackageReference Include="xunit" Version="2.8.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.8.1">
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="FluentAssertions" Version="6.12.0" />
    <PackageReference Include="coverlet.collector" Version="6.0.0">
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

</Project>
@@ -0,0 +1,12 @@
using System.Text.Json.Serialization;

namespace StellaOps.Findings.Ledger.Domain;

public sealed record EvidenceReference(
    Guid EventId,
    string EvidenceBundleRef,
    DateTimeOffset RecordedAt)
{
    [JsonIgnore]
    public string EvidenceBundleRefNormalized => EvidenceBundleRef.Trim();
}
@@ -18,7 +18,8 @@ public sealed record LedgerEventDraft(
    DateTimeOffset RecordedAt,
    JsonObject Payload,
    JsonObject CanonicalEnvelope,
    string? ProvidedPreviousHash);
    string? ProvidedPreviousHash,
    string? EvidenceBundleReference = null);

public sealed record LedgerEventRecord(
    string TenantId,
@@ -38,7 +39,8 @@ public sealed record LedgerEventRecord(
    string EventHash,
    string PreviousHash,
    string MerkleLeafHash,
    string CanonicalJson);
    string CanonicalJson,
    string? EvidenceBundleReference = null);

public sealed record LedgerChainHead(
    long SequenceNumber,
@@ -10,4 +10,6 @@ public interface ILedgerEventRepository
    Task<LedgerChainHead?> GetChainHeadAsync(string tenantId, Guid chainId, CancellationToken cancellationToken);

    Task AppendAsync(LedgerEventRecord record, CancellationToken cancellationToken);

    Task<IReadOnlyList<EvidenceReference>> GetEvidenceReferencesAsync(string tenantId, string findingId, CancellationToken cancellationToken);
}
@@ -42,6 +42,19 @@ public sealed class InMemoryLedgerEventRepository : ILedgerEventRepository
        return Task.CompletedTask;
    }

    public Task<IReadOnlyList<EvidenceReference>> GetEvidenceReferencesAsync(string tenantId, string findingId, CancellationToken cancellationToken)
    {
        var matches = _events.Values
            .Where(e => e.TenantId == tenantId
                && string.Equals(e.FindingId, findingId, StringComparison.Ordinal)
                && !string.IsNullOrWhiteSpace(e.EvidenceBundleReference))
            .OrderByDescending(e => e.RecordedAt)
            .Select(e => new EvidenceReference(e.EventId, e.EvidenceBundleReference!, e.RecordedAt))
            .ToList();

        return Task.FromResult<IReadOnlyList<EvidenceReference>>(matches);
    }

    private static LedgerEventRecord Clone(LedgerEventRecord record)
    {
        var clonedBody = (JsonObject)record.EventBody.DeepClone();
@@ -24,7 +24,8 @@ public sealed class PostgresLedgerEventRepository : ILedgerEventRepository
            event_body,
            event_hash,
            previous_hash,
            merkle_leaf_hash
            merkle_leaf_hash,
            evidence_bundle_ref
        FROM ledger_events
        WHERE tenant_id = @tenant_id
          AND event_id = @event_id
@@ -59,7 +60,8 @@ public sealed class PostgresLedgerEventRepository : ILedgerEventRepository
            event_body,
            event_hash,
            previous_hash,
            merkle_leaf_hash)
            merkle_leaf_hash,
            evidence_bundle_ref)
        VALUES (
            @tenant_id,
            @chain_id,
@@ -77,7 +79,8 @@ public sealed class PostgresLedgerEventRepository : ILedgerEventRepository
            @event_body,
            @event_hash,
            @previous_hash,
            @merkle_leaf_hash)
            @merkle_leaf_hash,
            @evidence_bundle_ref)
        """;

    private readonly LedgerDataSource _dataSource;
@@ -162,6 +165,7 @@ public sealed class PostgresLedgerEventRepository : ILedgerEventRepository
        command.Parameters.AddWithValue("event_hash", record.EventHash);
        command.Parameters.AddWithValue("previous_hash", record.PreviousHash);
        command.Parameters.AddWithValue("merkle_leaf_hash", record.MerkleLeafHash);
        command.Parameters.AddWithValue("evidence_bundle_ref", (object?)record.EvidenceBundleReference ?? DBNull.Value);

        try
        {
@@ -194,6 +198,7 @@ public sealed class PostgresLedgerEventRepository : ILedgerEventRepository
        var eventHash = reader.GetString(12);
        var previousHash = reader.GetString(13);
        var merkleLeafHash = reader.GetString(14);
        var evidenceBundleRef = reader.IsDBNull(15) ? null : reader.GetString(15);

        var canonicalEnvelope = LedgerCanonicalJsonSerializer.Canonicalize(eventBody);
        var canonicalJson = LedgerCanonicalJsonSerializer.Serialize(canonicalEnvelope);
@@ -216,6 +221,37 @@ public sealed class PostgresLedgerEventRepository : ILedgerEventRepository
|
||||
eventHash,
|
||||
previousHash,
|
||||
merkleLeafHash,
|
||||
canonicalJson);
|
||||
canonicalJson,
|
||||
evidenceBundleRef);
|
||||
}
|
||||
|
||||
public async Task<IReadOnlyList<EvidenceReference>> GetEvidenceReferencesAsync(string tenantId, string findingId, CancellationToken cancellationToken)
|
||||
{
|
||||
const string sql = """
|
||||
SELECT event_id, evidence_bundle_ref, recorded_at
|
||||
FROM ledger_events
|
||||
WHERE tenant_id = @tenant_id
|
||||
AND finding_id = @finding_id
|
||||
AND evidence_bundle_ref IS NOT NULL
|
||||
ORDER BY recorded_at DESC
|
||||
""";
|
||||
|
||||
await using var connection = await _dataSource.OpenConnectionAsync(tenantId, cancellationToken).ConfigureAwait(false);
|
||||
await using var command = new NpgsqlCommand(sql, connection);
|
||||
command.CommandTimeout = _dataSource.CommandTimeoutSeconds;
|
||||
command.Parameters.AddWithValue("tenant_id", tenantId);
|
||||
command.Parameters.AddWithValue("finding_id", findingId);
|
||||
|
||||
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
|
||||
var results = new List<EvidenceReference>();
|
||||
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
|
||||
{
|
||||
results.Add(new EvidenceReference(
|
||||
reader.GetGuid(0),
|
||||
reader.GetString(1),
|
||||
reader.GetFieldValue<DateTimeOffset>(2)));
|
||||
}
|
||||
|
||||
return results;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,3 +1,4 @@
+using System.Diagnostics;
 using Microsoft.Extensions.Hosting;
 using Microsoft.Extensions.Logging;
 using Microsoft.Extensions.Options;
@@ -5,6 +6,7 @@ using StellaOps.Findings.Ledger.Domain;
 using StellaOps.Findings.Ledger.Infrastructure;
 using StellaOps.Findings.Ledger.Infrastructure.Policy;
 using StellaOps.Findings.Ledger.Options;
+using StellaOps.Findings.Ledger.Observability;
 using StellaOps.Findings.Ledger.Services;

 namespace StellaOps.Findings.Ledger.Infrastructure.Projection;
@@ -74,9 +76,21 @@ public sealed class LedgerProjectionWorker : BackgroundService

        foreach (var record in batch)
        {
            using var scope = _logger.BeginScope(new Dictionary<string, object?>
            {
                ["tenant"] = record.TenantId,
                ["chainId"] = record.ChainId,
                ["eventId"] = record.EventId,
                ["eventType"] = record.EventType,
                ["policyVersion"] = record.PolicyVersion
            });
            using var activity = LedgerTelemetry.StartProjectionApply(record);
            var applyStopwatch = Stopwatch.StartNew();
            string? evaluationStatus = null;

            try
            {
-               await ApplyAsync(record, stoppingToken).ConfigureAwait(false);
+               evaluationStatus = await ApplyAsync(record, stoppingToken).ConfigureAwait(false);

                checkpoint = checkpoint with
                {
@@ -86,13 +100,36 @@ public sealed class LedgerProjectionWorker : BackgroundService
                };

                await _repository.SaveCheckpointAsync(checkpoint, stoppingToken).ConfigureAwait(false);

                _logger.LogInformation(
                    "Projected ledger event {EventId} for tenant {Tenant} chain {ChainId} seq {Sequence} finding {FindingId}.",
                    record.EventId,
                    record.TenantId,
                    record.ChainId,
                    record.SequenceNumber,
                    record.FindingId);
                activity?.SetStatus(System.Diagnostics.ActivityStatusCode.Ok);

                applyStopwatch.Stop();
                var now = _timeProvider.GetUtcNow();
                var lagSeconds = Math.Max(0, (now - record.RecordedAt).TotalSeconds);
                LedgerMetrics.RecordProjectionApply(
                    applyStopwatch.Elapsed,
                    lagSeconds,
                    record.TenantId,
                    record.EventType,
                    record.PolicyVersion,
                    evaluationStatus ?? string.Empty);
                LedgerTimeline.EmitProjectionUpdated(_logger, record, evaluationStatus, evidenceBundleRef: null);
            }
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
            {
                LedgerTelemetry.MarkError(activity, "projection_cancelled");
                return;
            }
            catch (Exception ex)
            {
                LedgerTelemetry.MarkError(activity, "projection_failed");
                _logger.LogError(ex, "Failed to project ledger event {EventId} for tenant {TenantId}.", record.EventId, record.TenantId);
                await DelayAsync(stoppingToken).ConfigureAwait(false);
                break;
@@ -101,7 +138,7 @@ public sealed class LedgerProjectionWorker : BackgroundService
        }
    }

-   private async Task ApplyAsync(LedgerEventRecord record, CancellationToken cancellationToken)
+   private async Task<string?> ApplyAsync(LedgerEventRecord record, CancellationToken cancellationToken)
    {
        var current = await _repository.GetAsync(record.TenantId, record.FindingId, record.PolicyVersion, cancellationToken).ConfigureAwait(false);
        var evaluation = await _policyEvaluationService.EvaluateAsync(record, current, cancellationToken).ConfigureAwait(false);
@@ -114,6 +151,8 @@ public sealed class LedgerProjectionWorker : BackgroundService
        {
            await _repository.InsertActionAsync(result.Action, cancellationToken).ConfigureAwait(false);
        }

        return evaluation.Status;
    }

    private async Task DelayAsync(CancellationToken cancellationToken)

@@ -15,6 +15,20 @@ internal static class LedgerMetrics
        "ledger_events_total",
        description: "Number of ledger events appended.");

    private static readonly Histogram<double> ProjectionApplySeconds = Meter.CreateHistogram<double>(
        "ledger_projection_apply_seconds",
        unit: "s",
        description: "Duration to apply a ledger event to the finding projection.");

    private static readonly Histogram<double> ProjectionLagSeconds = Meter.CreateHistogram<double>(
        "ledger_projection_lag_seconds",
        unit: "s",
        description: "Lag between ledger recorded_at and projection application time.");

    private static readonly Counter<long> ProjectionEventsTotal = Meter.CreateCounter<long>(
        "ledger_projection_events_total",
        description: "Number of ledger events applied to projections.");

    public static void RecordWriteSuccess(TimeSpan duration, string? tenantId, string? eventType, string? source)
    {
        var tags = new TagList
@@ -27,4 +41,25 @@ internal static class LedgerMetrics
        WriteLatencySeconds.Record(duration.TotalSeconds, tags);
        EventsTotal.Add(1, tags);
    }

    public static void RecordProjectionApply(
        TimeSpan duration,
        double lagSeconds,
        string? tenantId,
        string? eventType,
        string? policyVersion,
        string? evaluationStatus)
    {
        var tags = new TagList
        {
            { "tenant", tenantId ?? string.Empty },
            { "event_type", eventType ?? string.Empty },
            { "policy_version", policyVersion ?? string.Empty },
            { "evaluation_status", evaluationStatus ?? string.Empty }
        };

        ProjectionApplySeconds.Record(duration.TotalSeconds, tags);
        ProjectionLagSeconds.Record(lagSeconds, tags);
        ProjectionEventsTotal.Add(1, tags);
    }
}

@@ -0,0 +1,79 @@
using System.Diagnostics;
using StellaOps.Findings.Ledger.Domain;

namespace StellaOps.Findings.Ledger.Observability;

/// <summary>
/// Centralised ActivitySource and tagging helpers for ledger telemetry.
/// Keeps tags consistent across writer, projector, and query surfaces.
/// </summary>
internal static class LedgerTelemetry
{
    internal const string ActivitySourceName = "StellaOps.Findings.Ledger";

    private static readonly ActivitySource ActivitySource = new(ActivitySourceName);

    public static Activity? StartLedgerAppend(LedgerEventDraft draft)
    {
        var activity = ActivitySource.StartActivity("Ledger.Append", ActivityKind.Internal);
        if (activity is null)
        {
            return null;
        }

        activity.SetTag("tenant", draft.TenantId);
        activity.SetTag("chain_id", draft.ChainId);
        activity.SetTag("sequence", draft.SequenceNumber);
        activity.SetTag("event_id", draft.EventId);
        activity.SetTag("event_type", draft.EventType);
        activity.SetTag("actor_id", draft.ActorId);
        activity.SetTag("actor_type", draft.ActorType);
        activity.SetTag("policy_version", draft.PolicyVersion);
        activity.SetTag("source", draft.SourceRunId.HasValue ? "policy_run" : "workflow");

        return activity;
    }

    public static void MarkAppendOutcome(Activity? activity, LedgerEventRecord record, TimeSpan duration)
    {
        if (activity is null)
        {
            return;
        }

        activity.SetTag("event_hash", record.EventHash);
        activity.SetTag("previous_hash", record.PreviousHash);
        activity.SetTag("merkle_leaf_hash", record.MerkleLeafHash);
        activity.SetTag("duration_ms", duration.TotalMilliseconds);
        activity.SetStatus(ActivityStatusCode.Ok);
    }

    public static void MarkError(Activity? activity, string reason)
    {
        if (activity is null)
        {
            return;
        }

        activity.SetStatus(ActivityStatusCode.Error, reason);
    }

    public static Activity? StartProjectionApply(LedgerEventRecord record)
    {
        var activity = ActivitySource.StartActivity("Ledger.Projection.Apply", ActivityKind.Internal);
        if (activity is null)
        {
            return null;
        }

        activity.SetTag("tenant", record.TenantId);
        activity.SetTag("chain_id", record.ChainId);
        activity.SetTag("sequence", record.SequenceNumber);
        activity.SetTag("event_id", record.EventId);
        activity.SetTag("event_type", record.EventType);
        activity.SetTag("policy_version", record.PolicyVersion);
        activity.SetTag("finding_id", record.FindingId);

        return activity;
    }
}
@@ -0,0 +1,65 @@
using System.Diagnostics;
using Microsoft.Extensions.Logging;
using StellaOps.Findings.Ledger.Domain;

namespace StellaOps.Findings.Ledger.Observability;

/// <summary>
/// Emits structured timeline events for ledger operations.
/// Currently materialised as structured logs; can be swapped for an event sink later.
/// </summary>
internal static class LedgerTimeline
{
    private static readonly EventId LedgerAppended = new(6101, "ledger.event.appended");
    private static readonly EventId ProjectionUpdated = new(6201, "ledger.projection.updated");

    public static void EmitLedgerAppended(ILogger logger, LedgerEventRecord record, string? evidenceBundleRef = null)
    {
        if (logger is null)
        {
            return;
        }

        var traceId = Activity.Current?.TraceId.ToHexString() ?? string.Empty;

        logger.LogInformation(
            LedgerAppended,
            "timeline ledger.event.appended tenant={Tenant} chain={ChainId} seq={Sequence} event={EventId} type={EventType} policy={PolicyVersion} finding={FindingId} trace={TraceId} evidence_ref={EvidenceRef}",
            record.TenantId,
            record.ChainId,
            record.SequenceNumber,
            record.EventId,
            record.EventType,
            record.PolicyVersion,
            record.FindingId,
            traceId,
            evidenceBundleRef ?? record.EvidenceBundleReference ?? string.Empty);
    }

    public static void EmitProjectionUpdated(
        ILogger logger,
        LedgerEventRecord record,
        string? evaluationStatus,
        string? evidenceBundleRef = null)
    {
        if (logger is null)
        {
            return;
        }

        var traceId = Activity.Current?.TraceId.ToHexString() ?? string.Empty;

        logger.LogInformation(
            ProjectionUpdated,
            "timeline ledger.projection.updated tenant={Tenant} chain={ChainId} seq={Sequence} event={EventId} policy={PolicyVersion} finding={FindingId} status={Status} trace={TraceId} evidence_ref={EvidenceRef}",
            record.TenantId,
            record.ChainId,
            record.SequenceNumber,
            record.EventId,
            record.PolicyVersion,
            record.FindingId,
            evaluationStatus ?? string.Empty,
            traceId,
            evidenceBundleRef ?? record.EvidenceBundleReference ?? string.Empty);
    }
}
@@ -32,10 +32,21 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService
    public async Task<LedgerWriteResult> AppendAsync(LedgerEventDraft draft, CancellationToken cancellationToken)
    {
        var stopwatch = Stopwatch.StartNew();
        using var activity = LedgerTelemetry.StartLedgerAppend(draft);
        using var scope = _logger.BeginScope(new Dictionary<string, object?>
        {
            ["tenant"] = draft.TenantId,
            ["chainId"] = draft.ChainId,
            ["sequence"] = draft.SequenceNumber,
            ["eventId"] = draft.EventId,
            ["eventType"] = draft.EventType,
            ["policyVersion"] = draft.PolicyVersion
        });

        var validationErrors = ValidateDraft(draft);
        if (validationErrors.Count > 0)
        {
            LedgerTelemetry.MarkError(activity, "validation_failed");
            return LedgerWriteResult.ValidationFailed([.. validationErrors]);
        }

@@ -45,6 +56,7 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService
        var canonicalJson = LedgerCanonicalJsonSerializer.Serialize(draft.CanonicalEnvelope);
        if (!string.Equals(existing.CanonicalJson, canonicalJson, StringComparison.Ordinal))
        {
            LedgerTelemetry.MarkError(activity, "event_id_conflict");
            return LedgerWriteResult.Conflict(
                "event_id_conflict",
                $"Event '{draft.EventId}' already exists with a different payload.");
@@ -58,6 +70,7 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService
        var expectedSequence = chainHead is null ? 1 : chainHead.SequenceNumber + 1;
        if (draft.SequenceNumber != expectedSequence)
        {
            LedgerTelemetry.MarkError(activity, "sequence_mismatch");
            return LedgerWriteResult.Conflict(
                "sequence_mismatch",
                $"Sequence number '{draft.SequenceNumber}' does not match expected '{expectedSequence}'.");
@@ -66,6 +79,7 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService
        var previousHash = chainHead?.EventHash ?? LedgerEventConstants.EmptyHash;
        if (draft.ProvidedPreviousHash is not null && !string.Equals(draft.ProvidedPreviousHash, previousHash, StringComparison.OrdinalIgnoreCase))
        {
            LedgerTelemetry.MarkError(activity, "previous_hash_mismatch");
            return LedgerWriteResult.Conflict(
                "previous_hash_mismatch",
                $"Provided previous hash '{draft.ProvidedPreviousHash}' does not match chain head hash '{previousHash}'.");
@@ -93,7 +107,8 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService
            hashResult.EventHash,
            previousHash,
            hashResult.MerkleLeafHash,
-           hashResult.CanonicalJson);
+           hashResult.CanonicalJson,
+           draft.EvidenceBundleReference);

        try
        {
@@ -102,10 +117,29 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService

            stopwatch.Stop();
            LedgerMetrics.RecordWriteSuccess(stopwatch.Elapsed, draft.TenantId, draft.EventType, DetermineSource(draft));
            LedgerTelemetry.MarkAppendOutcome(activity, record, stopwatch.Elapsed);

            _logger.LogInformation(
                "Ledger append committed for tenant {Tenant} chain {ChainId} seq {Sequence} event {EventId} ({EventType}) hash {Hash} prev {PrevHash}.",
                record.TenantId,
                record.ChainId,
                record.SequenceNumber,
                record.EventId,
                record.EventType,
                record.EventHash,
                record.PreviousHash);
            LedgerTimeline.EmitLedgerAppended(_logger, record, evidenceBundleRef: null);
        }
        catch (Exception ex) when (IsDuplicateKeyException(ex))
        {
-           _logger.LogWarning(ex, "Ledger append detected concurrent duplicate for {EventId}", draft.EventId);
+           LedgerTelemetry.MarkError(activity, "duplicate_event");
+           _logger.LogWarning(
+               ex,
+               "Ledger append detected concurrent duplicate for tenant {Tenant} chain {ChainId} seq {Sequence} event {EventId}.",
+               draft.TenantId,
+               draft.ChainId,
+               draft.SequenceNumber,
+               draft.EventId);
            var persisted = await _repository.GetByEventIdAsync(draft.TenantId, draft.EventId, cancellationToken).ConfigureAwait(false);
            if (persisted is null)
            {

@@ -0,0 +1,8 @@
-- LEDGER-OBS-53-001: persist evidence bundle references alongside ledger entries.

ALTER TABLE ledger_events
    ADD COLUMN evidence_bundle_ref text NULL;

CREATE INDEX IF NOT EXISTS ix_ledger_events_finding_evidence_ref
    ON ledger_events (tenant_id, finding_id, recorded_at DESC)
    WHERE evidence_bundle_ref IS NOT NULL;
@@ -0,0 +1,9 @@
# Worker SDK (Go) — Task Tracker

| Task ID | Status | Notes | Updated (UTC) |
| --- | --- | --- | --- |
| WORKER-GO-32-001 | DONE | Initial Go SDK scaffold with config binding, auth headers, claim/ack client, smoke sample, and unit tests. | 2025-11-17 |
| WORKER-GO-32-002 | DONE | Heartbeat/progress helpers, logging hooks, metrics, and jittered retries. | 2025-11-17 |
| WORKER-GO-33-001 | DONE | Artifact publish helpers, checksum hashing, metadata payload, idempotency guard. | 2025-11-17 |
| WORKER-GO-33-002 | DONE | Error classification/backoff helpers and structured failure reporting. | 2025-11-17 |
| WORKER-GO-34-001 | DONE | Backfill range execution helpers, watermark handshake, artifact dedupe verification. | 2025-11-17 |
@@ -0,0 +1,45 @@
package main

import (
	"context"
	"log"
	"time"

	"git.stella-ops.org/stellaops/orchestrator/worker-sdk-go/pkg/workersdk"
)

func main() {
	client, err := workersdk.New(workersdk.Config{
		BaseURL:   "http://localhost:8080",
		APIKey:    "dev-token",
		TenantID:  "local-tenant",
		ProjectID: "demo-project",
	})
	if err != nil {
		log.Fatalf("configure client: %v", err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	claim, err := client.Claim(ctx, workersdk.ClaimJobRequest{WorkerID: "demo-worker", Capabilities: []string{"pack-run"}})
	if err != nil {
		log.Fatalf("claim job: %v", err)
	}
	if claim == nil {
		log.Println("no work available")
		return
	}

	// ... perform work using claim.Payload ...

	// heartbeat and progress
	_ = client.Heartbeat(ctx, claim.JobID, claim.LeaseID)
	_ = client.Progress(ctx, claim.JobID, claim.LeaseID, 50, "halfway")

	if err := client.Ack(ctx, workersdk.AckJobRequest{JobID: claim.JobID, LeaseID: claim.LeaseID, Status: "succeeded"}); err != nil {
		log.Fatalf("ack job: %v", err)
	}

	log.Printf("acknowledged job %s", claim.JobID)
}
@@ -0,0 +1,3 @@
module git.stella-ops.org/stellaops/orchestrator/worker-sdk-go

go 1.21
@@ -0,0 +1,31 @@
package transport

import (
	"context"
	"net/http"
)

// RoundTripper abstracts HTTP transport so we can stub in tests without
// depending on the default client.
type RoundTripper interface {
	RoundTrip(*http.Request) (*http.Response, error)
}

// Client wraps an http.Client-like implementation.
type Client interface {
	Do(req *http.Request) (*http.Response, error)
}

// DefaultClient returns an http.Client, using rt as the transport when provided.
func DefaultClient(rt RoundTripper) *http.Client {
	if rt == nil {
		return &http.Client{}
	}
	return &http.Client{Transport: rt}
}

// Do wraps an HTTP call using the provided Client.
func Do(ctx context.Context, c Client, req *http.Request) (*http.Response, error) {
	req = req.WithContext(ctx)
	return c.Do(req)
}
@@ -0,0 +1,66 @@
package workersdk

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
)

// StorageClient is a minimal interface for artifact storage backends.
type StorageClient interface {
	PutObject(ctx context.Context, key string, body io.Reader, metadata map[string]string) error
}

// ArtifactPublishRequest describes an artifact to upload.
type ArtifactPublishRequest struct {
	JobID          string
	LeaseID        string
	ObjectKey      string
	Content        io.Reader
	ContentLength  int64
	ContentType    string
	ArtifactType   string
	IdempotencyKey string
	Storage        StorageClient
}

// ArtifactPublishResponse returns checksum metadata.
type ArtifactPublishResponse struct {
	SHA256 string
	Size   int64
}

// PublishArtifact uploads artifact content with checksum metadata and idempotency guard.
func (c *Client) PublishArtifact(ctx context.Context, req ArtifactPublishRequest) (*ArtifactPublishResponse, error) {
	if req.JobID == "" || req.LeaseID == "" {
		return nil, fmt.Errorf("JobID and LeaseID are required")
	}
	if req.ObjectKey == "" {
		return nil, fmt.Errorf("ObjectKey is required")
	}
	if req.Storage == nil {
		return nil, fmt.Errorf("Storage client is required")
	}

	// Compute SHA256 while streaming.
	hasher := sha256.New()
	tee := io.TeeReader(req.Content, hasher)
	// ContentLength is advisory; storage backends may rely on the metadata instead of a known length.
	metadata := map[string]string{
		"x-stellaops-job-id": req.JobID,
		"x-stellaops-lease":  req.LeaseID,
		"x-stellaops-type":   req.ArtifactType,
		"x-stellaops-ct":     req.ContentType,
	}
	if req.IdempotencyKey != "" {
		metadata["x-idempotency-key"] = req.IdempotencyKey
	}

	if err := req.Storage.PutObject(ctx, req.ObjectKey, tee, metadata); err != nil {
		return nil, err
	}
	sum := hex.EncodeToString(hasher.Sum(nil))
	return &ArtifactPublishResponse{SHA256: sum, Size: req.ContentLength}, nil
}
@@ -0,0 +1,61 @@
package workersdk

import (
	"bytes"
	"context"
	"io"
	"testing"
)

type memStorage struct {
	key      string
	data     []byte
	metadata map[string]string
}

func (m *memStorage) PutObject(ctx context.Context, key string, body io.Reader, metadata map[string]string) error {
	b, err := io.ReadAll(body)
	if err != nil {
		return err
	}
	m.key = key
	m.data = b
	m.metadata = metadata
	return nil
}

func TestPublishArtifact(t *testing.T) {
	store := &memStorage{}
	client, err := New(Config{BaseURL: "https://example"})
	if err != nil {
		t.Fatalf("new client: %v", err)
	}

	content := []byte("hello")
	resp, err := client.PublishArtifact(context.Background(), ArtifactPublishRequest{
		JobID:          "job1",
		LeaseID:        "lease1",
		ObjectKey:      "artifacts/job1/output.txt",
		Content:        bytes.NewReader(content),
		ContentLength:  int64(len(content)),
		ContentType:    "text/plain",
		ArtifactType:   "log",
		IdempotencyKey: "idem-1",
		Storage:        store,
	})
	if err != nil {
		t.Fatalf("publish: %v", err)
	}
	if resp.SHA256 == "" || resp.Size != 5 {
		t.Fatalf("unexpected resp: %+v", resp)
	}
	if store.key != "artifacts/job1/output.txt" {
		t.Fatalf("key mismatch: %s", store.key)
	}
	if store.metadata["x-idempotency-key"] != "idem-1" {
		t.Fatalf("idempotency missing")
	}
	if store.metadata["x-stellaops-job-id"] != "job1" {
		t.Fatalf("job metadata missing")
	}
}
@@ -0,0 +1,86 @@
package workersdk

import (
	"context"
	"fmt"
	"time"
)

// Range represents an inclusive backfill window.
type Range struct {
	Start time.Time
	End   time.Time
}

// Validate ensures start <= end.
func (r Range) Validate() error {
	if r.End.Before(r.Start) {
		return fmt.Errorf("range end before start")
	}
	return nil
}

// WatermarkHandshake ensures the worker's view of the watermark matches the orchestrator-provided value.
type WatermarkHandshake struct {
	Expected string
	Current  string
}

func (w WatermarkHandshake) Validate() error {
	if w.Expected == "" {
		return fmt.Errorf("expected watermark required")
	}
	if w.Expected != w.Current {
		return fmt.Errorf("watermark mismatch")
	}
	return nil
}

// Deduper tracks processed artifact digests to prevent duplicate publication.
type Deduper struct {
	seen map[string]struct{}
}

// NewDeduper creates a deduper.
func NewDeduper() *Deduper {
	return &Deduper{seen: make(map[string]struct{})}
}

// Seen returns true if digest already processed; marks new digests.
func (d *Deduper) Seen(digest string) bool {
	if digest == "" {
		return false
	}
	if _, ok := d.seen[digest]; ok {
		return true
	}
	d.seen[digest] = struct{}{}
	return false
}

// ExecuteRange walks the inclusive window [start, end] in step increments, invoking fn at each timestamp.
func ExecuteRange(ctx context.Context, r Range, step time.Duration, fn func(context.Context, time.Time) error) error {
	if err := r.Validate(); err != nil {
		return err
	}
	if step <= 0 {
		return fmt.Errorf("step must be positive")
	}
	for ts := r.Start; !ts.After(r.End); ts = ts.Add(step) {
		if err := fn(ctx, ts); err != nil {
			return err
		}
	}
	return nil
}

// VerifyAndPublishArtifact wraps PublishArtifact with dedupe and watermark guard.
func (c *Client) VerifyAndPublishArtifact(ctx context.Context, wm WatermarkHandshake, dedupe *Deduper, req ArtifactPublishRequest) (*ArtifactPublishResponse, error) {
	if err := wm.Validate(); err != nil {
		return nil, err
	}
	if dedupe != nil && dedupe.Seen(req.IdempotencyKey) {
		return nil, fmt.Errorf("duplicate artifact idempotency key")
	}
	return c.PublishArtifact(ctx, req)
}
@@ -0,0 +1,85 @@
package workersdk

import (
	"bytes"
	"context"
	"io"
	"testing"
	"time"
)

type stubStorage struct{}

func (stubStorage) PutObject(ctx context.Context, key string, body io.Reader, metadata map[string]string) error {
	return nil
}

func TestRangeValidation(t *testing.T) {
	r := Range{Start: time.Now(), End: time.Now().Add(-time.Hour)}
	if err := r.Validate(); err == nil {
		t.Fatalf("expected error for invalid range")
	}
}

func TestExecuteRange(t *testing.T) {
	start := time.Date(2025, 11, 15, 0, 0, 0, 0, time.UTC)
	end := start.Add(48 * time.Hour)
	r := Range{Start: start, End: end}
	calls := 0
	err := ExecuteRange(context.Background(), r, 24*time.Hour, func(ctx context.Context, ts time.Time) error {
		calls++
		return nil
	})
	if err != nil {
		t.Fatalf("execute range: %v", err)
	}
	if calls != 3 {
		t.Fatalf("expected 3 calls, got %d", calls)
	}
}

func TestWatermarkMismatch(t *testing.T) {
	wm := WatermarkHandshake{Expected: "abc", Current: "def"}
	if err := wm.Validate(); err == nil {
		t.Fatal("expected mismatch error")
	}
}

func TestDeduper(t *testing.T) {
	d := NewDeduper()
	if d.Seen("sha") {
		t.Fatal("should be new")
	}
	if !d.Seen("sha") {
		t.Fatal("should detect duplicate")
	}
}

func TestVerifyAndPublishArtifactDuplicate(t *testing.T) {
	d := NewDeduper()
	c, _ := New(Config{BaseURL: "https://x"})
	d.Seen("idem1")
	_, err := c.VerifyAndPublishArtifact(
		context.Background(),
		WatermarkHandshake{Expected: "w", Current: "w"},
		d,
		ArtifactPublishRequest{IdempotencyKey: "idem1", Storage: stubStorage{}, JobID: "j", LeaseID: "l", ObjectKey: "k", Content: bytes.NewReader([]byte{}), ArtifactType: "log"},
	)
	if err == nil {
		t.Fatal("expected duplicate error")
	}
}

func TestVerifyAndPublishArtifactWatermark(t *testing.T) {
	d := NewDeduper()
	c, _ := New(Config{BaseURL: "https://x"})
	_, err := c.VerifyAndPublishArtifact(
		context.Background(),
		WatermarkHandshake{Expected: "w1", Current: "w2"},
		d,
		ArtifactPublishRequest{IdempotencyKey: "idem2", Storage: stubStorage{}, JobID: "j", LeaseID: "l", ObjectKey: "k", Content: bytes.NewReader([]byte{}), ArtifactType: "log"},
	)
	if err == nil {
		t.Fatal("expected watermark error")
	}
}
@@ -0,0 +1,243 @@
package workersdk

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"path"
	"time"

	"git.stella-ops.org/stellaops/orchestrator/worker-sdk-go/internal/transport"
)

// Client provides job claim/acknowledge operations.
type Client struct {
	baseURL   *url.URL
	apiKey    string
	tenantID  string
	projectID string
	userAgent string
	http      transport.Client
	logger    Logger
	metrics   MetricsSink
}

// New creates a configured Client.
func New(cfg Config) (*Client, error) {
	if err := cfg.validate(); err != nil {
		return nil, err
	}

	parsed, err := url.Parse(cfg.BaseURL)
	if err != nil {
		return nil, fmt.Errorf("invalid BaseURL: %w", err)
	}
	ua := cfg.UserAgent
	if ua == "" {
		ua = "stellaops-worker-sdk-go/0.1"
	}

	return &Client{
		baseURL:   parsed,
		apiKey:    cfg.APIKey,
		tenantID:  cfg.TenantID,
		projectID: cfg.ProjectID,
		userAgent: ua,
		http:      cfg.httpClient(),
		logger:    cfg.logger(),
		metrics:   cfg.metrics(),
	}, nil
}

// ClaimJobRequest represents a worker's request to lease a job.
type ClaimJobRequest struct {
	WorkerID     string   `json:"worker_id"`
	Capabilities []string `json:"capabilities,omitempty"`
}

// ClaimJobResponse returns the leased job payload.
type ClaimJobResponse struct {
	JobID      string          `json:"job_id"`
	LeaseID    string          `json:"lease_id"`
	ExpiresAt  time.Time       `json:"expires_at"`
	JobType    string          `json:"job_type"`
	Payload    json.RawMessage `json:"payload"`
	RetryAfter int             `json:"retry_after_seconds,omitempty"`
	NotBefore  *time.Time      `json:"not_before,omitempty"`
	TraceID    string          `json:"trace_id,omitempty"`
}

// AckJobRequest represents completion of a job.
type AckJobRequest struct {
	JobID    string `json:"job_id"`
	LeaseID  string `json:"lease_id"`
	Status   string `json:"status"`
	Message  string `json:"message,omitempty"`
	Rotating string `json:"rotating_token,omitempty"`
}

// Claim requests the next available job for the worker.
func (c *Client) Claim(ctx context.Context, req ClaimJobRequest) (*ClaimJobResponse, error) {
	if req.WorkerID == "" {
		return nil, fmt.Errorf("WorkerID is required")
	}
	endpoint := c.resolve("/api/jobs/lease")
	resp, err := c.doClaim(ctx, endpoint, req)
	if err == nil && resp != nil {
		// Count only leases actually granted; a 204 "no work" response is not a claim.
		c.metrics.IncClaimed()
	}
	c.logger.Info(ctx, "claim", map[string]any{"worker_id": req.WorkerID, "err": err})
	return resp, err
}

// Ack acknowledges job completion or failure.
func (c *Client) Ack(ctx context.Context, req AckJobRequest) error {
	if req.JobID == "" || req.LeaseID == "" || req.Status == "" {
		return fmt.Errorf("JobID, LeaseID, and Status are required")
	}
	endpoint := c.resolve(path.Join("/api/jobs", req.JobID, "ack"))
	payload, err := json.Marshal(req)
	if err != nil {
		return fmt.Errorf("marshal ack request: %w", err)
	}

	httpReq, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	c.applyHeaders(httpReq)
	httpReq.Header.Set("Content-Type", "application/json")

	resp, err := transport.Do(ctx, c.http, httpReq)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 300 {
		b, _ := io.ReadAll(io.LimitReader(resp.Body, 8<<10))
		return fmt.Errorf("ack failed: %s (%s)", resp.Status, string(b))
	}
	c.metrics.IncAck(req.Status)
	return nil
}

// Heartbeat reports liveness for a job lease.
func (c *Client) Heartbeat(ctx context.Context, jobID, leaseID string) error {
	if jobID == "" || leaseID == "" {
		return fmt.Errorf("JobID and LeaseID are required")
	}
	endpoint := c.resolve(path.Join("/api/jobs", jobID, "heartbeat"))
	payload, _ := json.Marshal(map[string]string{"lease_id": leaseID})
	httpReq, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	c.applyHeaders(httpReq)
	httpReq.Header.Set("Content-Type", "application/json")
	start := time.Now()
	resp, err := transport.Do(ctx, c.http, httpReq)
	if err != nil {
		c.metrics.IncHeartbeatFailures()
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		b, _ := io.ReadAll(io.LimitReader(resp.Body, 8<<10))
		c.metrics.IncHeartbeatFailures()
		return fmt.Errorf("heartbeat failed: %s (%s)", resp.Status, string(b))
	}
	c.metrics.ObserveHeartbeatLatency(time.Since(start).Seconds())
	return nil
}

// Progress reports worker progress (0-100) with an optional message.
func (c *Client) Progress(ctx context.Context, jobID, leaseID string, pct int, message string) error {
	if pct < 0 || pct > 100 {
		return fmt.Errorf("pct must be 0-100")
	}
	payload, _ := json.Marshal(map[string]any{
		"lease_id": leaseID,
		"progress": pct,
		"message":  message,
	})
	endpoint := c.resolve(path.Join("/api/jobs", jobID, "progress"))
	httpReq, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	c.applyHeaders(httpReq)
	httpReq.Header.Set("Content-Type", "application/json")
	resp, err := transport.Do(ctx, c.http, httpReq)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		b, _ := io.ReadAll(io.LimitReader(resp.Body, 8<<10))
		return fmt.Errorf("progress failed: %s (%s)", resp.Status, string(b))
	}
	return nil
}

func (c *Client) doClaim(ctx context.Context, endpoint string, req ClaimJobRequest) (*ClaimJobResponse, error) {
	payload, err := json.Marshal(req)
	if err != nil {
		return nil, fmt.Errorf("marshal claim request: %w", err)
	}

	httpReq, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	c.applyHeaders(httpReq)
	httpReq.Header.Set("Content-Type", "application/json")

	resp, err := transport.Do(ctx, c.http, httpReq)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNoContent {
		return nil, nil // no work available
	}
	if resp.StatusCode >= 300 {
		b, _ := io.ReadAll(io.LimitReader(resp.Body, 8<<10))
		return nil, fmt.Errorf("claim failed: %s (%s)", resp.Status, string(b))
	}

	var out ClaimJobResponse
	decoder := json.NewDecoder(resp.Body)
	decoder.DisallowUnknownFields()
	if err := decoder.Decode(&out); err != nil {
		return nil, fmt.Errorf("decode claim response: %w", err)
	}
	return &out, nil
}

func (c *Client) applyHeaders(r *http.Request) {
	if c.apiKey != "" {
		r.Header.Set("Authorization", "Bearer "+c.apiKey)
	}
	if c.tenantID != "" {
		r.Header.Set("X-StellaOps-Tenant", c.tenantID)
	}
	if c.projectID != "" {
		r.Header.Set("X-StellaOps-Project", c.projectID)
	}
	r.Header.Set("Accept", "application/json")
	if c.userAgent != "" {
		r.Header.Set("User-Agent", c.userAgent)
	}
}

func (c *Client) resolve(p string) string {
	clone := *c.baseURL
	clone.Path = path.Join(clone.Path, p)
	return clone.String()
}
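The `resolve` helper above joins the configured base URL's path with an endpoint path via Go's `path.Join`, which also collapses duplicate slashes. A minimal Python sketch of the same idea (the function name and behavior here are illustrative, not part of either SDK):

```python
import posixpath
from urllib.parse import urlsplit, urlunsplit


def resolve(base_url: str, endpoint: str) -> str:
    """Join an endpoint path onto a base URL, mirroring Go's path.Join semantics."""
    parts = urlsplit(base_url)
    joined = posixpath.join(parts.path or "/", endpoint.lstrip("/"))
    # normpath collapses duplicate slashes, as path.Join does in Go.
    return urlunsplit((parts.scheme, parts.netloc, posixpath.normpath(joined), "", ""))


print(resolve("http://localhost:8080", "/api/jobs/lease"))  # http://localhost:8080/api/jobs/lease
```

Note that, like `path.Join`, this drops the query and fragment; it is only meant for plain API paths under a base URL.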
@@ -0,0 +1,136 @@
package workersdk

import (
	"context"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"
)

type claimRecorded struct {
	Method  string
	Path    string
	Auth    string
	Tenant  string
	Project string
	Body    ClaimJobRequest
}

type ackRecorded struct {
	Method string
	Path   string
	Body   AckJobRequest
}

func TestClaimAndAck(t *testing.T) {
	var claimRec claimRecorded
	var ackRec ackRecorded

	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch r.URL.Path {
		case "/api/jobs/lease":
			claimRec.Method = r.Method
			claimRec.Path = r.URL.Path
			claimRec.Auth = r.Header.Get("Authorization")
			claimRec.Tenant = r.Header.Get("X-StellaOps-Tenant")
			claimRec.Project = r.Header.Get("X-StellaOps-Project")
			if err := json.NewDecoder(r.Body).Decode(&claimRec.Body); err != nil {
				// t.Fatalf must not run off the test goroutine; record the failure and bail.
				t.Errorf("decode claim: %v", err)
				return
			}
			resp := ClaimJobResponse{
				JobID:     "123",
				LeaseID:   "lease-1",
				ExpiresAt: time.Date(2025, 11, 17, 0, 0, 0, 0, time.UTC),
				JobType:   "demo",
				Payload:   json.RawMessage(`{"key":"value"}`),
			}
			w.Header().Set("Content-Type", "application/json")
			_ = json.NewEncoder(w).Encode(resp)
		case "/api/jobs/123/heartbeat":
			w.WriteHeader(http.StatusAccepted)
		case "/api/jobs/123/progress":
			w.WriteHeader(http.StatusAccepted)
		default:
			ackRec.Method = r.Method
			ackRec.Path = r.URL.Path
			if err := json.NewDecoder(r.Body).Decode(&ackRec.Body); err != nil {
				t.Errorf("decode ack: %v", err)
				return
			}
			w.WriteHeader(http.StatusAccepted)
		}
	}))
	defer srv.Close()

	client, err := New(Config{
		BaseURL:   srv.URL,
		APIKey:    "token-1",
		TenantID:  "tenant-a",
		ProjectID: "project-1",
	})
	if err != nil {
		t.Fatalf("new client: %v", err)
	}

	ctx := context.Background()
	claimResp, err := client.Claim(ctx, ClaimJobRequest{WorkerID: "worker-7", Capabilities: []string{"scan"}})
	if err != nil {
		t.Fatalf("claim: %v", err)
	}

	if claimRec.Method != http.MethodPost || claimRec.Path != "/api/jobs/lease" {
		t.Fatalf("unexpected claim method/path: %s %s", claimRec.Method, claimRec.Path)
	}
	if claimRec.Auth != "Bearer token-1" {
		t.Fatalf("auth header mismatch: %s", claimRec.Auth)
	}
	if claimRec.Tenant != "tenant-a" || claimRec.Project != "project-1" {
		t.Fatalf("tenant/project headers missing: %s %s", claimRec.Tenant, claimRec.Project)
	}
	if claimRec.Body.WorkerID != "worker-7" {
		t.Fatalf("worker id missing")
	}

	if claimResp == nil || claimResp.JobID != "123" || claimResp.LeaseID != "lease-1" {
		t.Fatalf("claim response mismatch: %+v", claimResp)
	}

	err = client.Ack(ctx, AckJobRequest{JobID: claimResp.JobID, LeaseID: claimResp.LeaseID, Status: "succeeded"})
	if err != nil {
		t.Fatalf("ack error: %v", err)
	}

	if err := client.Heartbeat(ctx, claimResp.JobID, claimResp.LeaseID); err != nil {
		t.Fatalf("heartbeat error: %v", err)
	}
	if err := client.Progress(ctx, claimResp.JobID, claimResp.LeaseID, 50, "halfway"); err != nil {
		t.Fatalf("progress error: %v", err)
	}

	if ackRec.Method != http.MethodPost {
		t.Fatalf("ack method mismatch: %s", ackRec.Method)
	}
	if ackRec.Path != "/api/jobs/123/ack" {
		t.Fatalf("ack path mismatch: %s", ackRec.Path)
	}
	if ackRec.Body.Status != "succeeded" || ackRec.Body.JobID != "123" {
		t.Fatalf("ack body mismatch: %+v", ackRec.Body)
	}
}

func TestClaimMissingWorker(t *testing.T) {
	client, err := New(Config{BaseURL: "https://example.invalid"})
	if err != nil {
		t.Fatalf("new client: %v", err)
	}
	if _, err := client.Claim(context.Background(), ClaimJobRequest{}); err == nil {
		t.Fatal("expected error for missing worker id")
	}
}

func TestConfigValidation(t *testing.T) {
	if _, err := New(Config{}); err == nil {
		t.Fatal("expected error for missing base url")
	}
}
@@ -0,0 +1,49 @@
package workersdk

import (
	"errors"
	"net/http"
	"strings"

	"git.stella-ops.org/stellaops/orchestrator/worker-sdk-go/internal/transport"
)

// Config holds SDK configuration.
type Config struct {
	BaseURL   string
	APIKey    string
	TenantID  string
	ProjectID string
	UserAgent string
	Client    transport.Client
	Logger    Logger
	Metrics   MetricsSink
}

func (c *Config) validate() error {
	if strings.TrimSpace(c.BaseURL) == "" {
		return errors.New("BaseURL is required")
	}
	return nil
}

func (c *Config) httpClient() transport.Client {
	if c.Client != nil {
		return c.Client
	}
	return transport.DefaultClient(http.DefaultTransport)
}

func (c *Config) logger() Logger {
	if c.Logger != nil {
		return c.Logger
	}
	return NoopLogger{}
}

func (c *Config) metrics() MetricsSink {
	if c.Metrics != nil {
		return c.Metrics
	}
	return NoopMetrics{}
}
@@ -0,0 +1,29 @@
package workersdk

// ErrorCode represents orchestrator error categories.
type ErrorCode string

const (
	ErrorCodeTemporary  ErrorCode = "temporary"
	ErrorCodePermanent  ErrorCode = "permanent"
	ErrorCodeFatal      ErrorCode = "fatal"
	ErrorCodeUnauth     ErrorCode = "unauthorized"
	ErrorCodeQuota      ErrorCode = "quota_exceeded"
	ErrorCodeValidation ErrorCode = "validation"
)

// ErrorClassification maps an HTTP status to an error code and whether the
// request is retryable. Cases are ordered so 401/403 and 429 take precedence
// over the generic 4xx bucket.
func ErrorClassification(status int) (ErrorCode, bool) {
	switch {
	case status == 401 || status == 403:
		return ErrorCodeUnauth, false
	case status >= 500 && status < 600:
		return ErrorCodeTemporary, true
	case status == 429:
		return ErrorCodeQuota, true
	case status >= 400 && status < 500:
		return ErrorCodePermanent, false
	default:
		return "", false
	}
}
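The Python SDK exports a `classify_status` helper (see the `__init__.py` below). Its implementation is not shown in this diff; a minimal version consistent with the Go table above might look like this (the exact codes and ordering are assumptions based on `ErrorClassification`):

```python
def classify_status(status: int) -> tuple[str, bool]:
    """Map an HTTP status to (error_code, retryable), mirroring Go's ErrorClassification."""
    if status in (401, 403):
        return "unauthorized", False
    if 500 <= status < 600:
        return "temporary", True
    if status == 429:          # checked before the generic 4xx bucket
        return "quota_exceeded", True
    if 400 <= status < 500:
        return "permanent", False
    return "", False           # 1xx/2xx/3xx carry no error classification
```

The order of checks matters: 429 must be classified before the generic 4xx range, otherwise rate-limited requests would be marked non-retryable.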
@@ -0,0 +1,24 @@
package workersdk

import "testing"

func TestErrorClassification(t *testing.T) {
	cases := []struct {
		status int
		code   ErrorCode
		retry  bool
	}{
		{500, ErrorCodeTemporary, true},
		{503, ErrorCodeTemporary, true},
		{429, ErrorCodeQuota, true},
		{401, ErrorCodeUnauth, false},
		{400, ErrorCodePermanent, false},
		{404, ErrorCodePermanent, false},
	}
	for _, c := range cases {
		code, retry := ErrorClassification(c.status)
		if code != c.code || retry != c.retry {
			t.Fatalf("status %d -> got %s retry %v", c.status, code, retry)
		}
	}
}
@@ -0,0 +1,15 @@
package workersdk

import "context"

// Logger is a minimal structured logger interface.
type Logger interface {
	Info(ctx context.Context, msg string, fields map[string]any)
	Error(ctx context.Context, msg string, fields map[string]any)
}

// NoopLogger is used when no logger is provided.
type NoopLogger struct{}

func (NoopLogger) Info(_ context.Context, _ string, _ map[string]any)  {}
func (NoopLogger) Error(_ context.Context, _ string, _ map[string]any) {}
@@ -0,0 +1,17 @@
package workersdk

// MetricsSink allows callers to wire Prometheus or other metrics systems.
type MetricsSink interface {
	IncClaimed()
	IncAck(status string)
	ObserveHeartbeatLatency(seconds float64)
	IncHeartbeatFailures()
}

// NoopMetrics is the default sink when none is provided.
type NoopMetrics struct{}

func (NoopMetrics) IncClaimed()                          {}
func (NoopMetrics) IncAck(_ string)                      {}
func (NoopMetrics) ObserveHeartbeatLatency(_ float64)    {}
func (NoopMetrics) IncHeartbeatFailures()                {}
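Because `MetricsSink` is a small interface with no-op defaults, wiring a backend means implementing four methods. A minimal in-memory sink, sketched in Python (method names mirror the Go interface; this is an illustration, not part of either SDK):

```python
from collections import Counter


class CountingMetrics:
    """In-memory MetricsSink analogue: claim count, acks by status, heartbeat stats."""

    def __init__(self):
        self.claimed = 0
        self.acks = Counter()
        self.heartbeat_failures = 0
        self.heartbeat_latencies = []

    def inc_claimed(self):
        self.claimed += 1

    def inc_ack(self, status):
        self.acks[status] += 1

    def observe_heartbeat_latency(self, seconds):
        self.heartbeat_latencies.append(seconds)

    def inc_heartbeat_failures(self):
        self.heartbeat_failures += 1


m = CountingMetrics()
m.inc_claimed()
m.inc_ack("succeeded")
m.inc_ack("succeeded")
m.observe_heartbeat_latency(0.05)
```

A Prometheus-backed sink would replace the counters with `Counter`/`Histogram` instruments but keep the same call sites.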
@@ -0,0 +1,53 @@
package workersdk

import (
	"context"
	"math/rand"
	"time"
)

// RetryPolicy defines retry behavior.
type RetryPolicy struct {
	MaxAttempts int
	BaseDelay   time.Duration
	MaxDelay    time.Duration
	Jitter      float64 // fraction of the delay (0-1) applied as random jitter.
}

// DefaultRetryPolicy returns exponential backoff with jitter suitable for worker I/O.
func DefaultRetryPolicy() RetryPolicy {
	return RetryPolicy{MaxAttempts: 5, BaseDelay: 200 * time.Millisecond, MaxDelay: 5 * time.Second, Jitter: 0.2}
}

// Retry executes fn with retries according to policy.
func Retry(ctx context.Context, policy RetryPolicy, fn func() error) error {
	if policy.MaxAttempts <= 0 {
		policy = DefaultRetryPolicy()
	}
	delay := policy.BaseDelay
	for attempt := 1; attempt <= policy.MaxAttempts; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if attempt == policy.MaxAttempts {
			return err
		}
		// Scale the delay by a factor uniform in [1-Jitter, 1+Jitter].
		jitter := 1 + (policy.Jitter * (rand.Float64()*2 - 1))
		sleepFor := time.Duration(float64(delay) * jitter)
		if sleepFor > policy.MaxDelay {
			sleepFor = policy.MaxDelay
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(sleepFor):
		}
		delay *= 2
		if delay > policy.MaxDelay {
			delay = policy.MaxDelay
		}
	}
	return nil // unreachable: the loop always returns on the final attempt.
}
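The retry loop above doubles the delay after each failed attempt, scales it by a uniform factor in [1-Jitter, 1+Jitter], and caps both the raw delay and the jittered sleep at MaxDelay. The resulting sleep schedule can be sketched standalone (delays in seconds; this reproduces the arithmetic, not the SDK's code):

```python
import random


def backoff_delays(base, max_delay, jitter, attempts, rng=None):
    """Sleep schedule between failed attempts: exponential, jittered, capped."""
    rng = rng or random.Random(0)  # seeded for a reproducible demo
    delays = []
    delay = base
    for _ in range(attempts - 1):  # no sleep after the final attempt
        factor = 1 + jitter * (rng.random() * 2 - 1)  # uniform in [1-j, 1+j]
        delays.append(min(delay * factor, max_delay))
        delay = min(delay * 2, max_delay)
    return delays


# Defaults from DefaultRetryPolicy: 5 attempts, 200 ms base, 5 s cap, 20% jitter.
schedule = backoff_delays(0.2, 5.0, 0.2, 5)
```

With a 20% jitter band and doubling, consecutive sleeps never overlap (the minimum of one step exceeds the maximum of the previous), so the schedule is strictly increasing until it hits the cap.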
@@ -0,0 +1,10 @@
# StellaOps Orchestrator Worker SDK (Python)

Async-friendly SDK for StellaOps workers: claim jobs, acknowledge results, and attach tenant-aware auth headers. The default transport is dependency-free and can be swapped for aiohttp/httpx as needed.

## Quick start
```bash
export ORCH_BASE_URL=http://localhost:8080
export ORCH_API_KEY=dev-token
python sample_worker.py
```
@@ -0,0 +1,9 @@
# Worker SDK (Python) — Task Tracker

| Task ID | Status | Notes | Updated (UTC) |
| --- | --- | --- | --- |
| WORKER-PY-32-001 | DONE | Async Python SDK scaffold with config/auth headers, claim/ack client, sample worker script, and unit tests using stub transport. | 2025-11-17 |
| WORKER-PY-32-002 | DONE | Heartbeat/progress helpers with logging/metrics and cancellation-safe retries. | 2025-11-17 |
| WORKER-PY-33-001 | DONE | Artifact publish/idempotency helpers with checksum hashing and storage adapters. | 2025-11-17 |
| WORKER-PY-33-002 | DONE | Error classification/backoff helper aligned to orchestrator codes and structured failure reports. | 2025-11-17 |
| WORKER-PY-34-001 | DONE | Backfill iteration, watermark handshake, and artifact dedupe verification utilities. | 2025-11-17 |
@@ -0,0 +1,11 @@
|
||||
[project]
|
||||
name = "stellaops-orchestrator-worker"
|
||||
version = "0.1.0"
|
||||
description = "Async worker SDK for StellaOps Orchestrator"
|
||||
authors = [{name = "StellaOps"}]
|
||||
readme = "README.md"
|
||||
requires-python = ">=3.10"
|
||||
|
||||
[build-system]
|
||||
requires = ["setuptools"]
|
||||
build-backend = "setuptools.build_meta"
|
||||
@@ -0,0 +1,41 @@
|
||||
import asyncio
|
||||
import os
|
||||
|
||||
from stellaops_orchestrator_worker import (
|
||||
AckJobRequest,
|
||||
ClaimJobRequest,
|
||||
Config,
|
||||
OrchestratorClient,
|
||||
)
|
||||
from stellaops_orchestrator_worker.retry import RetryPolicy, retry
|
||||
|
||||
|
||||
async def main():
|
||||
cfg = Config(
|
||||
base_url=os.environ.get("ORCH_BASE_URL", "http://localhost:8080"),
|
||||
api_key=os.environ.get("ORCH_API_KEY", "dev-token"),
|
||||
tenant_id=os.environ.get("ORCH_TENANT", "local-tenant"),
|
||||
project_id=os.environ.get("ORCH_PROJECT", "demo-project"),
|
||||
)
|
||||
client = OrchestratorClient(cfg)
|
||||
|
||||
claim = await client.claim(ClaimJobRequest(worker_id="py-worker", capabilities=["pack-run"]))
|
||||
if claim is None:
|
||||
print("no work available")
|
||||
return
|
||||
|
||||
# ... perform actual work described by claim.payload ...
|
||||
await client.heartbeat(job_id=claim.job_id, lease_id=claim.lease_id)
|
||||
await client.progress(job_id=claim.job_id, lease_id=claim.lease_id, pct=50, message="halfway")
|
||||
|
||||
async def _ack():
|
||||
await client.ack(
|
||||
AckJobRequest(job_id=claim.job_id, lease_id=claim.lease_id, status="succeeded"),
|
||||
)
|
||||
|
||||
await retry(RetryPolicy(), _ack)
|
||||
print(f"acknowledged job {claim.job_id}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
@@ -0,0 +1,37 @@
"""Async worker SDK for StellaOps Orchestrator."""

from .client import OrchestratorClient, ClaimJobRequest, AckJobRequest, ClaimJobResponse
from .config import Config
from .metrics import MetricsSink, NoopMetrics
from .transport import Transport, InMemoryTransport, TransportRequest, TransportResponse
from .retry import RetryPolicy, retry
from .storage import publish_artifact, InMemoryStorage, ArtifactPublishResult, Storage
from .errors import ErrorCode, classify_status
from .backfill import Range, WatermarkHandshake, Deduper, execute_range, verify_and_publish_artifact

__all__ = [
    "OrchestratorClient",
    "ClaimJobRequest",
    "ClaimJobResponse",
    "AckJobRequest",
    "Config",
    "MetricsSink",
    "NoopMetrics",
    "RetryPolicy",
    "retry",
    "Storage",
    "publish_artifact",
    "InMemoryStorage",
    "ArtifactPublishResult",
    "Range",
    "WatermarkHandshake",
    "Deduper",
    "execute_range",
    "verify_and_publish_artifact",
    "ErrorCode",
    "classify_status",
    "Transport",
    "InMemoryTransport",
    "TransportRequest",
    "TransportResponse",
]
Binary file not shown.
Binary file not shown.
Binary file not shown.
Some files were not shown because too many files have changed in this diff.