Add inline DSSE provenance documentation and Mongo schema

- Introduced a new document outlining the inline DSSE provenance for SBOM, VEX, scan, and derived events.
- Defined the Mongo schema for event patches, including key fields for provenance and trust verification.
- Documented the write path for ingesting provenance metadata and backfilling historical events.
- Created CI/CD snippets for uploading DSSE attestations and generating provenance metadata.
- Established Mongo indexes for efficient provenance queries and provided query recipes for various use cases.
- Outlined policy gates for managing VEX decisions based on provenance verification.
- Included UI nudges for displaying provenance information and implementation tasks for future enhancements.

---

Implement reachability lattice and scoring model

- Developed a comprehensive document detailing the reachability lattice and scoring model.
- Defined core types for reachability states, evidence, and mitigations with corresponding C# models.
- Established a scoring policy with base score contributions from various evidence classes.
- Mapped reachability states to VEX gates and provided a clear overview of evidence sources.
- Documented the event graph schema for persisting reachability data in MongoDB.
- Outlined the integration of runtime probes for evidence collection and defined a roadmap for future tasks.

---

Introduce uncertainty states and entropy scoring

- Created a draft document for tracking uncertainty states and their impact on risk scoring.
- Defined core uncertainty states with associated entropy values and evidence requirements.
- Established a schema for storing uncertainty states alongside findings.
- Documented the risk score calculation incorporating uncertainty and its effect on final risk assessments.
- Provided policy guidelines for handling uncertainty in decision-making processes.
- Outlined UI guidelines for displaying uncertainty information and suggested remediation actions.

---

Add Ruby package inventory management

- Implemented Ruby package inventory management with corresponding data models and storage mechanisms.
- Created C# records for Ruby package inventory, artifacts, provenance, and runtime details.
- Developed a repository for managing Ruby package inventory documents in MongoDB.
- Implemented a service for storing and retrieving Ruby package inventories.
- Added unit tests for the Ruby package inventory store to ensure functionality and data integrity.
On branch master · 2025-11-13 00:20:33 +02:00
commit 7040984215 (parent 86be324fc0)
41 changed files with 1955 additions and 76 deletions

bench/README.md

@@ -0,0 +1,30 @@
# StellaOps Bench Repository
> **Status:** Draft — aligns with `docs/benchmarks/vex-evidence-playbook.md` (Sprint401).
> **Purpose:** Host reproducible VEX decisions and comparison data that prove StellaOps signal quality vs. baseline scanners.
## Layout
```
bench/
  README.md                  # this file
  findings/                  # per CVE/product bundles
    CVE-YYYY-NNNNN/
      evidence/
        reachability.json
        sbom.cdx.json
      decision.openvex.json
      decision.dsse.json
      rekor.txt
      metadata.json
  tools/
    verify.sh                # DSSE + Rekor verifier
    verify.py                # offline verifier
    compare.py               # baseline comparison script
    replay.sh                # runs reachability replay manifests
  results/
    summary.csv
    runs/<date>/...          # raw outputs + replay manifests
```
Refer to `docs/benchmarks/vex-evidence-playbook.md` for artifact contracts and automation tasks. The `bench/` tree will be populated once `BENCH-AUTO-401-019` and `DOCS-VEX-401-012` land.


@@ -673,7 +673,7 @@ See `docs/dev/32_AUTH_CLIENT_GUIDE.md` for recommended profiles (online vs. air-
| `stellaops-cli scan run` | Execute scanner container against a directory (auto-upload) | `--target <directory>` (required)<br>`--runner <docker\|dotnet\|self>` (default from config)<br>`--entry <image-or-entrypoint>`<br>`[scanner-args...]` | Runs the scanner, writes results into `ResultsDirectory`, emits a structured `scan-run-*.json` metadata file, and automatically uploads the artefact when the exit code is `0`. |
| `stellaops-cli scan upload` | Re-upload existing scan artefact | `--file <path>` | Useful for retries when automatic upload fails or when operating offline. |
| `stellaops-cli ruby inspect` | Offline Ruby workspace inspection (Gemfile / lock + runtime signals) | `--root <directory>` (default current directory)<br>`--format <table\|json>` (default `table`) | Runs the bundled `RubyLanguageAnalyzer`, renders Observation summary (bundler/runtime/capabilities) plus Package/Version/Group/Source/Lockfile/Runtime columns, or emits JSON `{ packages: [...], observation: {...} }`. Exit codes: `0` success, `64` invalid format, `70` unexpected failure, `71` missing directory. |
| `stellaops-cli ruby resolve` | Fetch Ruby package inventory for a completed scan | `--image <registry-ref>` *or* `--scan-id <id>` (one required)<br>`--format <table\|json>` (default `table`) | Calls `GetRubyPackagesAsync` (`GET /api/scans/{scanId}/ruby-packages`) to download the canonical `RubyPackageInventory`. Table output mirrors `inspect` with groups/platform/runtime usage; JSON now returns `{ scanId, imageDigest, generatedAt, groups: [...] }`. Exit codes: `0` success, `64` invalid args, `70` backend failure, `0` with warning when the inventory hasn't been persisted yet. |
| `stellaops-cli db fetch` | Trigger connector jobs | `--source <id>` (e.g. `redhat`, `osv`)<br>`--stage <fetch\|parse\|map>` (default `fetch`)<br>`--mode <resume|init|cursor>` | Translates to `POST /jobs/source:{source}:{stage}` with `trigger=cli` |
| `stellaops-cli db merge` | Run canonical merge reconcile | — | Calls `POST /jobs/merge:reconcile`; exit code `0` on acceptance, `1` on failures/conflicts |
| `stellaops-cli db export` | Kick JSON / Trivy exports | `--format <json\|trivy-db>` (default `json`)<br>`--delta`<br>`--publish-full/--publish-delta`<br>`--bundle-full/--bundle-delta` | Sets `{ delta = true }` parameter when requested and can override ORAS/bundle toggles per run |
@@ -684,7 +684,7 @@ See `docs/dev/32_AUTH_CLIENT_GUIDE.md` for recommended profiles (online vs. air-
### Ruby dependency verbs (`stellaops-cli ruby …`)
`ruby inspect` runs the same deterministic `RubyLanguageAnalyzer` bundled with Scanner.Worker against the local working tree—no backend calls—so operators can sanity-check Gemfile / Gemfile.lock pairs before shipping. The command now renders an observation banner (bundler version, package/runtime counts, capability flags, scheduler names) before the package table so air-gapped users can prove what evidence was collected. `ruby resolve` reuses the persisted `RubyPackageInventory` (stored under Mongo `ruby.packages` and exposed via `GET /api/scans/{scanId}/ruby-packages`) so operators can reason about groups/platforms/runtime usage after Scanner or Offline Kits finish processing; the CLI surfaces `scanId`, `imageDigest`, and `generatedAt` metadata in JSON mode for downstream scripting.
**`ruby inspect` flags**
@@ -731,7 +731,7 @@ Successful runs exit `0`; invalid formats raise **64**, unexpected failures retu
| ---- | ------- | ----------- |
| `--image <registry/ref>` | — | Scanner artifact identifier (image digest/tag). Mutually exclusive with `--scan-id`; one is required. |
| `--scan-id <id>` | — | Explicit scan identifier returned by `scan run`. |
| `--format <table\|json>` | `table` | `json` writes the full `{ "scanId": "…", "imageDigest": "…", "generatedAt": "…", "groups": [{ "group": "default", "platform": "-", "packages": [...] }] }` payload for downstream automation. |
| `--verbose` / `-v` | `false` | Enables HTTP + resolver logging. |
Errors caused by missing identifiers return **64**; transient backend errors surface as **70** (with full context in logs). Table output groups packages by Gem/Bundle group + platform and shows runtime entrypoints or `[grey]-[/]` when unused. JSON payloads stay stable for downstream automation:
@@ -739,6 +739,8 @@ Errors caused by missing identifiers return **64**; transient backend errors sur
```json
{
  "scanId": "scan-ruby",
  "imageDigest": "sha256:4f7d...",
  "generatedAt": "2025-11-12T16:05:00Z",
  "groups": [
    {
      "group": "default",

docs/ci/dsse-build-flow.md

@@ -0,0 +1,279 @@
# Build-Time DSSE Attestation Walkthrough
> **Status:** Draft — aligns with the November 2025 advisory “Embed in-toto attestations (DSSE-wrapped) into .NET 10/C# builds.”
> **Owners:** Attestor Guild · DevOps Guild · Docs Guild.
This guide shows how to emit signed, in-toto compliant DSSE envelopes for every container build step (scan → package → push) using StellaOps Authority keys. The same primitives power our Signer/Attestor services, but this walkthrough targets developer pipelines (GitHub/GitLab, dotnet builds, container scanners).
---
## 1. Concepts refresher
| Term | Meaning |
|------|---------|
| **In-toto Statement** | JSON document describing what happened (predicate) to which artifact (subject). |
| **DSSE** | Dead Simple Signing Envelope: wraps the statement, base64 payload, and signatures. |
| **Authority Signer** | StellaOps client that signs data via file-based keys, HSM/KMS, or keyless Fulcio certs. |
| **PAE** | Pre-Authentication Encoding: canonical “DSSEv1 <len> <payloadType> <len> <payload>” byte layout that is signed. |
Requirements:
1. .NET 10 SDK (preview) for C# helper code.
2. Authority key material (dev: file-based Ed25519; prod: Authority/KMS signer).
3. Artifact digest (e.g., `pkg:docker/registry/app@sha256:...`) per step.
---
## 2. Helpers (drop-in library)
Create `src/StellaOps.Attestation` with:
```csharp
public sealed record InTotoStatement(
    string _type,
    IReadOnlyList<Subject> subject,
    string predicateType,
    object predicate);

public sealed record Subject(string name, IReadOnlyDictionary<string, string> digest);

public sealed record DsseEnvelope(
    string payloadType,
    string payload,
    IReadOnlyList<Signature> signatures);

public sealed record Signature(string keyid, string sig);

public interface IAuthoritySigner
{
    Task<string> GetKeyIdAsync(CancellationToken ct = default);
    Task<byte[]> SignAsync(ReadOnlyMemory<byte> pae, CancellationToken ct = default);
}
```
DSSE helper:
```csharp
using System.Globalization;
using System.Linq;
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;

public static class DsseHelper
{
    public static async Task<DsseEnvelope> WrapAsync(
        InTotoStatement statement,
        IAuthoritySigner signer,
        string payloadType = "application/vnd.in-toto+json",
        CancellationToken ct = default)
    {
        var payloadBytes = JsonSerializer.SerializeToUtf8Bytes(statement,
            new JsonSerializerOptions { DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull });
        var pae = PreAuthEncode(payloadType, payloadBytes);
        var signature = await signer.SignAsync(pae, ct).ConfigureAwait(false);
        var keyId = await signer.GetKeyIdAsync(ct).ConfigureAwait(false);
        return new DsseEnvelope(
            payloadType,
            Convert.ToBase64String(payloadBytes),
            new[] { new Signature(keyId, Convert.ToBase64String(signature)) });
    }

    // PAE layout: "DSSEv1 <len(payloadType)> <payloadType> <len(payload)> <payload>".
    private static byte[] PreAuthEncode(string payloadType, byte[] payload)
    {
        static byte[] Cat(params byte[][] parts)
        {
            var len = parts.Sum(p => p.Length);
            var buf = new byte[len];
            var offset = 0;
            foreach (var part in parts)
            {
                Buffer.BlockCopy(part, 0, buf, offset, part.Length);
                offset += part.Length;
            }
            return buf;
        }

        var header = Encoding.UTF8.GetBytes("DSSEv1");
        var pt = Encoding.UTF8.GetBytes(payloadType);
        var lenPt = Encoding.UTF8.GetBytes(pt.Length.ToString(CultureInfo.InvariantCulture));
        var lenPayload = Encoding.UTF8.GetBytes(payload.Length.ToString(CultureInfo.InvariantCulture));
        var space = Encoding.UTF8.GetBytes(" ");
        return Cat(header, space, lenPt, space, pt, space, lenPayload, space, payload);
    }
}
```
Authority signer examples:
Dev (file-based Ed25519):
```csharp
// Assumes the NSec.Cryptography package for Ed25519; the original snippet's
// `Ed25519` type is not a built-in .NET API.
using NSec.Cryptography;

public sealed class FileEd25519Signer : IAuthoritySigner, IDisposable
{
    private readonly Key _key;
    private readonly string _keyId;

    public FileEd25519Signer(byte[] privateKeySeed, string keyId)
    {
        // Import the 32-byte raw seed as an Ed25519 private key.
        _key = Key.Import(SignatureAlgorithm.Ed25519, privateKeySeed, KeyBlobFormat.RawPrivateKey);
        _keyId = keyId;
    }

    public Task<string> GetKeyIdAsync(CancellationToken ct = default) => Task.FromResult(_keyId);

    public Task<byte[]> SignAsync(ReadOnlyMemory<byte> pae, CancellationToken ct = default)
        => Task.FromResult(SignatureAlgorithm.Ed25519.Sign(_key, pae.Span));

    public void Dispose() => _key.Dispose();
}
```
Prod (Authority KMS):
Reuse the existing `StellaOps.Signer.KmsSigner` adapter—wrap it behind `IAuthoritySigner`.
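A minimal wrapper sketch, assuming the KMS client exposes asynchronous key-id and sign operations (the `ResolveKeyIdAsync`/`SignAsync` member names below are placeholders, not the actual `KmsSigner` surface):
```csharp
public sealed class KmsAuthoritySigner : IAuthoritySigner
{
    private readonly KmsSigner _inner; // existing StellaOps.Signer adapter

    public KmsAuthoritySigner(KmsSigner inner) => _inner = inner;

    // Placeholder delegation: substitute the real KmsSigner calls.
    public Task<string> GetKeyIdAsync(CancellationToken ct = default)
        => _inner.ResolveKeyIdAsync(ct);

    public Task<byte[]> SignAsync(ReadOnlyMemory<byte> pae, CancellationToken ct = default)
        => _inner.SignAsync(pae, ct);
}
```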
---
## 3. Emitting attestations per step
Subject helper:
```csharp
static Subject ImageSubject(string imageDigest) => new(
    name: imageDigest,
    digest: new Dictionary<string, string> { { "sha256", imageDigest.Replace("sha256:", "", StringComparison.Ordinal) } });
```
### 3.1 Scan
```csharp
var scanStmt = new InTotoStatement(
    _type: "https://in-toto.io/Statement/v1",
    subject: new[] { ImageSubject(imageDigest) },
    predicateType: "https://stella.ops/predicates/scanner-evidence/v1",
    predicate: new
    {
        scanner = "StellaOps.Scanner 0.9.0",
        findingsSha256 = scanResultsHash,
        startedAt = startedIso,
        finishedAt = finishedIso,
        rulePack = "lattice:default@2025-11-01"
    });
var scanEnvelope = await DsseHelper.WrapAsync(scanStmt, signer);
await File.WriteAllTextAsync("artifacts/attest-scan.dsse.json", JsonSerializer.Serialize(scanEnvelope));
```
### 3.2 Package (SLSA provenance)
```csharp
var pkgStmt = new InTotoStatement(
    "https://in-toto.io/Statement/v1",
    new[] { ImageSubject(imageDigest) },
    "https://slsa.dev/provenance/v1",
    new
    {
        builder = new { id = "stella://builder/dockerfile" },
        buildType = "dockerfile/v1",
        invocation = new { configSource = repoUrl, entryPoint = dockerfilePath },
        materials = new[] { new { uri = repoUrl, digest = new { git = gitSha } } }
    });
var pkgEnvelope = await DsseHelper.WrapAsync(pkgStmt, signer);
await File.WriteAllTextAsync("artifacts/attest-package.dsse.json", JsonSerializer.Serialize(pkgEnvelope));
```
```
### 3.3 Push
```csharp
var pushStmt = new InTotoStatement(
    "https://in-toto.io/Statement/v1",
    new[] { ImageSubject(imageDigest) },
    "https://stella.ops/predicates/push/v1",
    new { registry = registryUrl, repository = repoName, tags, pushedAt = DateTimeOffset.UtcNow });
var pushEnvelope = await DsseHelper.WrapAsync(pushStmt, signer);
await File.WriteAllTextAsync("artifacts/attest-push.dsse.json", JsonSerializer.Serialize(pushEnvelope));
```
```
---
## 4. CI integration
### 4.1 GitLab example
```yaml
.attest-template: &attest
  image: mcr.microsoft.com/dotnet/sdk:10.0-preview
  before_script:
    - dotnet build src/StellaOps.Attestation/StellaOps.Attestation.csproj
  variables:
    AUTHORITY_KEY_FILE: "$CI_PROJECT_DIR/secrets/ed25519.key"
    # IMAGE_DIGEST must be the pushed image's registry digest (sha256:...),
    # captured from the build/push step; a git commit SHA is not a valid OCI digest.
    IMAGE_DIGEST: "$CI_REGISTRY_IMAGE@$IMAGE_DIGEST_SHA256"

attest:scan:
  <<: *attest
  stage: scan
  script:
    - dotnet run --project tools/StellaOps.Attestor.Tool -- step scan --subject "$IMAGE_DIGEST" --out artifacts/attest-scan.dsse.json
  artifacts:
    paths: [artifacts/attest-scan.dsse.json]

attest:package:
  <<: *attest
  stage: package
  script:
    - dotnet run --project tools/StellaOps.Attestor.Tool -- step package --subject "$IMAGE_DIGEST" --out artifacts/attest-package.dsse.json

attest:push:
  <<: *attest
  stage: push
  script:
    - dotnet run --project tools/StellaOps.Attestor.Tool -- step push --subject "$IMAGE_DIGEST" --registry "$CI_REGISTRY" --tags "$CI_COMMIT_REF_NAME"
```
### 4.2 GitHub Actions snippet
```yaml
jobs:
  attest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with: { dotnet-version: '10.0.x' }
      - name: Build attestor helpers
        run: dotnet build src/StellaOps.Attestation/StellaOps.Attestation.csproj
      - name: Emit scan attestation
        run: dotnet run --project tools/StellaOps.Attestor.Tool -- step scan --subject "${{ env.IMAGE_DIGEST }}" --out artifacts/attest-scan.dsse.json
        env:
          AUTHORITY_KEY_REF: ${{ secrets.AUTHORITY_KEY_REF }}
```
---
## 5. Verification
* `stella attest verify --file artifacts/attest-scan.dsse.json` (CLI planned under `DSSE-CLI-401-021`).
* Manual validation:
1. Base64 decode payload → ensure `_type` = `https://in-toto.io/Statement/v1`, `subject[].digest.sha256` matches artifact.
2. Recompute PAE and verify signature with the Authority public key.
3. Attach envelope to Rekor (optional) via existing Attestor API.
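For step 2, a minimal offline sketch, assuming a raw Ed25519 Authority public key and the NSec.Cryptography package (the helper below is illustrative, not a shipped StellaOps API):
```csharp
using System.Linq;
using System.Text;
using System.Text.Json;
using NSec.Cryptography;

public static class DsseOfflineCheck
{
    // Returns true when the first signature verifies against the raw Ed25519 public key.
    public static bool Verify(string envelopeJson, byte[] publicKeyRaw)
    {
        using var doc = JsonDocument.Parse(envelopeJson);
        var root = doc.RootElement;
        var payloadType = root.GetProperty("payloadType").GetString()!;
        var payload = Convert.FromBase64String(root.GetProperty("payload").GetString()!);
        var sig = Convert.FromBase64String(
            root.GetProperty("signatures")[0].GetProperty("sig").GetString()!);

        // Recompute PAE: "DSSEv1 <len(type)> <type> <len(payload)> <payload>".
        var prefix = Encoding.UTF8.GetBytes(
            $"DSSEv1 {Encoding.UTF8.GetByteCount(payloadType)} {payloadType} {payload.Length} ");
        var pae = prefix.Concat(payload).ToArray();

        var key = PublicKey.Import(SignatureAlgorithm.Ed25519, publicKeyRaw, KeyBlobFormat.RawPublicKey);
        return SignatureAlgorithm.Ed25519.Verify(key, pae, sig);
    }
}
```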
---
## 6. Storage conventions
Store DSSE files next to build outputs:
```
artifacts/
  attest-scan.dsse.json
  attest-package.dsse.json
  attest-push.dsse.json
```
Include the SHA-256 digest of each envelope in promotion manifests (`docs/release/promotion-attestations.md`) so downstream verifiers can trace chain of custody.
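A small sketch of that digest step, assuming the envelopes sit under `artifacts/` as shown (the manifest format itself lives in `docs/release/promotion-attestations.md`):
```csharp
using System.Security.Cryptography;

// Emit sha256 digests for each DSSE envelope so they can be copied into the promotion manifest.
foreach (var path in Directory.EnumerateFiles("artifacts", "attest-*.dsse.json"))
{
    var digest = Convert.ToHexString(SHA256.HashData(File.ReadAllBytes(path))).ToLowerInvariant();
    Console.WriteLine($"{Path.GetFileName(path)} sha256:{digest}");
}
```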
---
## 7. References
- [In-toto Statement v1](https://in-toto.io/spec/v1)
- [DSSE specification](https://github.com/secure-systems-lab/dsse)
- `docs/modules/signer/architecture.md`
- `docs/modules/attestor/architecture.md`
- `docs/release/promotion-attestations.md`
Keep this file updated alongside `DSSE-LIB-401-020` and `DSSE-CLI-401-021`. When the bench repo publishes sample attestations, link them here.


@@ -30,3 +30,5 @@
- `SCANNER-ENG-0016`: 2025-11-10 — Completed Ruby lock collector and vendor ingestion work: honour `.bundle/config` overrides, fold workspace lockfiles, emit bundler groups, add Ruby analyzer fixtures/goldens (including new git/path offline kit mirror), and `dotnet test ... --filter Ruby` passes.
- `SCANNER-ENG-0009`: Emitted observation payload + `ruby-observation` component summarising packages, runtime edges, and capability flags for Policy/Surface exports; fixtures updated for determinism and Offline Kit now ships the observation JSON.
- `SCANNER-ENG-0009`: 2025-11-12 — Added bundler-version metadata to observation payloads, introduced the `complex-app` fixture to cover vendor caches/BUNDLE_PATH overrides, and taught `stellaops-cli ruby inspect` to print the observation banner (bundler/runtime/capabilities) alongside JSON `observation` blocks.
- `SCANNER-ENG-0009`: 2025-11-12 — Ruby package inventories now flow into `RubyPackageInventoryStore`; `SurfaceManifestStageExecutor` builds the package list, persists it via Mongo, and Scanner.WebService exposes the data through `GET /api/scans/{scanId}/ruby-packages` for CLI/Policy consumers.
- `SCANNER-ENG-0009`: 2025-11-12 — Ruby package inventory API now returns a typed envelope (scanId/imageDigest/generatedAt + packages) backed by `ruby.packages`; Worker/WebService DI registers the real store when Mongo is enabled, CLI `ruby resolve` consumes the new payload/warns when inventories are still warming, and docs/OpenAPI references were refreshed.


@@ -13,29 +13,149 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| 140.C Signals | Signals Guild · Authority Guild (for scopes) · Runtime Guild | Sprint 120.A AirGap; Sprint 130.A Scanner | DOING | API skeleton and callgraph ingestion are active; runtime facts endpoint still depends on the same shared prerequisites. |
| 140.D Zastava | Zastava Observer/Webhook Guilds · Security Guild | Sprint 120.A AirGap; Sprint 130.A Scanner | TODO | Surface.FS integration waits on Scanner surface caches; prep sealed-mode env helpers meanwhile. |
# Status snapshot (2025-11-12)
- **140.A Graph**: GRAPH-INDEX-28-007/008/009/010 remain TODO while Scanner surface artifacts and SBOM projection schemas are outstanding; clustering/backfill/fixture scaffolds are staged but cannot progress until analyzer payloads arrive.
- **140.B SbomService**: Advisory AI, console, and orchestrator tracks stay TODO; SBOM-SERVICE-21-001..004 remain BLOCKED waiting for Concelier Link-Not-Merge (`CONCELIER-GRAPH-21-001`) plus Cartographer schema (`CARTO-GRAPH-21-002`), and AirGap parity must be re-validated once schemas land. Teams are refining projection docs so the tasks can flip to DOING immediately after.
- **140.C Signals**: SIGNALS-24-001 shipped on 2025-11-09; SIGNALS-24-002 is DOING with callgraph retrieval live but CAS promotion + signed manifest tooling still pending; SIGNALS-24-003 is DOING after JSON/NDJSON ingestion merged, yet provenance/context enrichment and runtime feed reconciliation remain in flight. Scoring/cache work (SIGNALS-24-004/005) stays BLOCKED until runtime uploads publish consistently and scope propagation validation (post `AUTH-SIG-26-001`) completes.
- **140.D Zastava**: ZASTAVA-ENV/SECRETS/SURFACE tracks remain TODO because Surface.FS cache outputs from Scanner are still unavailable; guilds continue prepping Surface.Env helper adoption and sealed-mode scaffolding.
## Wave task tracker (refreshed 2025-11-12)
### 140.A Graph
| Task ID | State | Notes |
| --- | --- | --- |
| GRAPH-INDEX-28-007 | TODO | Clustering/centrality jobs queued behind Scanner surface analyzer artifacts; design work complete but implementation held. |
| GRAPH-INDEX-28-008 | TODO | Incremental update/backfill pipeline depends on 28-007 artifacts; retry/backoff plumbing sketched but blocked. |
| GRAPH-INDEX-28-009 | TODO | Test/fixture/chaos coverage waits on earlier jobs to exist so determinism checks have data. |
| GRAPH-INDEX-28-010 | TODO | Packaging/offline bundles paused until upstream graph jobs are available to embed. |
### 140.B SbomService
| Task ID | State | Notes |
| --- | --- | --- |
| SBOM-AIAI-31-001 | TODO | Advisory AI path/timeline endpoints specced; awaiting projection schema finalization. |
| SBOM-AIAI-31-002 | TODO | Metrics/dashboards tied to 31-001; blocked on the same schema availability. |
| SBOM-CONSOLE-23-001 | TODO | Console catalog API draft complete; depends on Concelier/Cartographer payload definitions. |
| SBOM-CONSOLE-23-002 | TODO | Global component lookup API needs 23-001 responses + cache hints before work can start. |
| SBOM-ORCH-32-001 | TODO | Orchestrator registration is sequenced after projection schema because payload shapes map into job metadata. |
| SBOM-ORCH-33-001 | TODO | Backpressure/telemetry features depend on 32-001 workers. |
| SBOM-ORCH-34-001 | TODO | Backfill + watermark logic requires the orchestrator integration from 33-001. |
| SBOM-SERVICE-21-001 | BLOCKED | Normalized SBOM projection schema cannot ship until Concelier (`CONCELIER-GRAPH-21-001`) delivers Link-Not-Merge definitions. |
| SBOM-SERVICE-21-002 | BLOCKED | Change events hinge on 21-001 response contract; no work underway. |
| SBOM-SERVICE-21-003 | BLOCKED | Entry point/service node management blocked behind 21-002 event outputs. |
| SBOM-SERVICE-21-004 | BLOCKED | Observability wiring follows projection + event pipelines; on hold. |
| SBOM-SERVICE-23-001 | TODO | Asset metadata extensions queued once 21-004 observability baseline exists. |
| SBOM-SERVICE-23-002 | TODO | Asset update events depend on 23-001 schema. |
| SBOM-VULN-29-001 | TODO | Inventory evidence feed deferred until projection schema + runtime align. |
| SBOM-VULN-29-002 | TODO | Resolver feed requires 29-001 event payloads. |
### 140.C Signals
| Task ID | State | Notes |
| --- | --- | --- |
| SIGNALS-24-001 | DONE (2025-11-09) | Host skeleton, RBAC, sealed-mode readiness, `/signals/facts/{subject}` retrieval, and readiness probes merged; serves as base for downstream ingestion. |
| SIGNALS-24-002 | DOING (2025-11-07) | Callgraph ingestion + retrieval APIs are live, but CAS promotion and signed manifest publication remain; cannot close until reachability jobs can trust stored graphs. |
| SIGNALS-24-003 | DOING (2025-11-09) | Runtime facts ingestion accepts JSON/NDJSON and gzip streams; provenance/context enrichment and NDJSON-to-AOC wiring still outstanding. |
| SIGNALS-24-004 | BLOCKED (2025-10-27) | Reachability scoring waits on complete ingestion feeds (24-002/003) plus Authority scope validation. |
| SIGNALS-24-005 | BLOCKED (2025-10-27) | Cache + `signals.fact.updated` events depend on scoring outputs; remains idle until 24-004 unblocks. |
### 140.D Zastava
| Task ID | State | Notes |
| --- | --- | --- |
| ZASTAVA-ENV-01 | TODO | Observer adoption of Surface.Env helpers paused while Surface.FS cache contract finalizes. |
| ZASTAVA-ENV-02 | TODO | Webhook helper migration follows ENV-01 completion. |
| ZASTAVA-SECRETS-01 | TODO | Surface.Secrets wiring for Observer pending published cache endpoints. |
| ZASTAVA-SECRETS-02 | TODO | Webhook secret retrieval cascades from SECRETS-01 work. |
| ZASTAVA-SURFACE-01 | TODO | Surface.FS client integration blocked on Scanner layer metadata; tests ready once packages mirror offline dependencies. |
| ZASTAVA-SURFACE-02 | TODO | Admission enforcement requires SURFACE-01 so webhook responses can gate on cache freshness. |
## In-flight focus (DOING items)
| Task ID | Remaining work | Target date | Owners |
| --- | --- | --- | --- |
| SIGNALS-24-002 | Promote callgraph CAS buckets to prod scopes, publish signed manifest metadata, document retention/GC policy, wire alerts for failed graph retrievals. | 2025-11-14 | Signals Guild, Platform Storage Guild |
| SIGNALS-24-003 | Finalize provenance/context enrichment (Authority scopes + runtime metadata), support NDJSON batch provenance, backfill existing facts, and validate AOC contract. | 2025-11-15 | Signals Guild, Runtime Guild, Authority Guild |
## Wave readiness checklist (2025-11-12)
| Wave | Entry criteria | Prep status | Next checkpoint |
| --- | --- | --- | --- |
| 140.A Graph | Scanner surface analyzer artifacts + SBOM projection schema for clustering inputs. | Job scaffolds and determinism harness drafted; waiting on artifact ETA. | 2025-11-13 cross-guild sync (Scanner ↔ Graph) to lock delivery window. |
| 140.B SbomService | Concelier Link-Not-Merge + Cartographer projection schema, plus AirGap parity review. | Projection doc redlines complete; schema doc ready for Concelier feedback. | 2025-11-14 schema review (Concelier, Cartographer, SBOM). |
| 140.C Signals | CAS promotion approval + runtime provenance contract + AUTH-SIG-26-001 sign-off. | HOST + callgraph retrieval merged; CAS/provenance work tracked in DOING table above. | 2025-11-13 runtime sync to approve CAS rollout + schema freeze. |
| 140.D Zastava | Surface.FS cache availability + Surface.Env helper specs published. | Env/secrets design notes ready; waiting for Scanner cache drop and Surface.FS API stubs. | 2025-11-15 Surface guild office hours to confirm helper adoption plan. |
### Signals DOING activity log
| Date | Update | Owners |
| --- | --- | --- |
| 2025-11-12 | Drafted CAS promotion checklist (bucket policies, signer config, GC guardrails) and circulated to Platform Storage for approval; added alert runbooks for failed graph retrievals. | Signals Guild, Platform Storage Guild |
| 2025-11-11 | Completed NDJSON ingestion soak test (JSON/NDJSON + gzip) and documented provenance enrichment mapping required from Authority scopes; open PR wiring AOC metadata pending review. | Signals Guild, Runtime Guild |
| 2025-11-09 | Runtime facts ingestion endpoint + streaming NDJSON support merged with sealed-mode gating; next tasks are provenance enrichment and scoring linkage. | Signals Guild, Runtime Guild |
## Dependency status watchlist (2025-11-12)
| Dependency | Status | Latest detail | Owner(s) / follow-up |
| --- | --- | --- | --- |
| AUTH-SIG-26-001 (Signals scopes + AOC) | DONE (2025-10-29) | Authority shipped scope + role templates; Signals is validating propagation + provenance enrichment before enabling scoring. | Authority Guild · Runtime Guild · Signals Guild |
| CONCELIER-GRAPH-21-001 (SBOM projection enrichment) | BLOCKED (2025-10-27) | Awaiting Cartographer schema + Link-Not-Merge contract; SBOM/Graph/Zastava work cannot proceed without enriched projections. | Concelier Core · Cartographer Guild |
| CONCELIER-GRAPH-21-002 / CARTO-GRAPH-21-002 (SBOM change events) | BLOCKED (2025-10-27) | Change event contract depends on 21-001; Cartographer has not provided webhook schema yet. | Concelier Core · Cartographer Guild · Platform Events Guild |
| Sprint 130 Scanner surface artifacts | ETA pending | Analyzer artifact publication schedule still outstanding; Graph/Zastava need cache outputs and manifests. | Scanner Guild · Graph Indexer Guild · Zastava Guilds |
| AirGap parity review (Sprint 120.A) | Not scheduled | SBOM path/timeline endpoints must re-pass AirGap checklist once Concelier schema lands; reviewers on standby. | AirGap Guild · SBOM Service Guild |
## Upcoming checkpoints
| Date | Session | Goal | Impacted wave(s) | Prep owner(s) |
| --- | --- | --- | --- | --- |
| 2025-11-13 | Scanner ↔ Graph readiness sync | Lock analyzer artifact ETA + cache publish plan so GRAPH-INDEX-28-007 can start immediately after delivery. | 140.A Graph · 140.D Zastava | Scanner Guild · Graph Indexer Guild |
| 2025-11-13 | Runtime/Signals CAS + provenance review | Approve CAS promotion checklist, freeze provenance schema, and green-light SIGNALS-24-002/003 close-out tasks. | 140.C Signals | Signals Guild · Runtime Guild · Authority Guild · Platform Storage Guild |
| 2025-11-14 | Concelier/Cartographer/SBOM schema review | Ratify Link-Not-Merge projection schema + change event contract; schedule AirGap parity verification. | 140.B SbomService · 140.A Graph · 140.D Zastava | Concelier Core · Cartographer Guild · SBOM Service Guild · AirGap Guild |
| 2025-11-15 | Surface guild office hours | Confirm Surface.Env helper adoption + Surface.FS cache drop timeline for Zastava. | 140.D Zastava | Surface Guild · Zastava Observer/Webhook Guilds |
### Meeting prep checklist
| Session | Pre-reads / artifacts | Open questions to resolve | Owners |
| --- | --- | --- | --- |
| Scanner ↔ Graph (2025-11-13) | Sprint 130 surface artifact roadmap draft, GRAPH-INDEX-28-007 scaffolds, ZASTAVA-SURFACE dependency list. | Exact drop date for analyzer artifacts? Will caches ship phased or all at once? Need mock payloads if delayed? | Scanner Guild · Graph Indexer Guild · Zastava Guilds |
| Runtime/Signals CAS review (2025-11-13) | CAS promotion checklist, signed manifest PR links, provenance schema draft, NDJSON ingestion soak results. | Storage approval on bucket policies/GC? Authority confirmation on scope propagation + AOC metadata? Backfill approach for existing runtime facts? | Signals Guild · Runtime Guild · Authority Guild · Platform Storage Guild |
| Concelier schema review (2025-11-14) | Link-Not-Merge schema redlines, Cartographer webhook contract, AirGap parity checklist, SBOM-SERVICE-21-001 scaffolding plan. | Final field list for relationships/scopes? Event payload metadata requirements? AirGap review schedule & owners? | Concelier Core · Cartographer Guild · SBOM Service Guild · AirGap Guild |
| Surface guild office hours (2025-11-15) | Surface.Env helper adoption notes, sealed-mode test harness outline, Surface.FS API stub timeline. | Can Surface.FS caches publish before Analyzer drop? Any additional sealed-mode requirements? Who owns Surface.Env rollout in Observer/Webhook repos? | Surface Guild · Zastava Observer/Webhook Guilds |
## Target outcomes (through 2025-11-15)
| Deliverable | Target date | Status | Dependencies / notes |
| --- | --- | --- | --- |
| SIGNALS-24-002 CAS promotion + signed manifests | 2025-11-14 | DOING | Needs Platform Storage sign-off from 2025-11-13 review; alerts/runbooks drafted. |
| SIGNALS-24-003 provenance enrichment + backfill | 2025-11-15 | DOING | Waiting on Runtime/Authority schema freeze + scope fixtures; NDJSON ingestion landed. |
| Scanner analyzer artifact ETA & cache drop plan | 2025-11-13 | TODO | Scanner to publish Sprint 130 surface roadmap; Graph/Zastava blocked until then. |
| Concelier Link-Not-Merge schema ratified | 2025-11-14 | BLOCKED | Requires `CONCELIER-GRAPH-21-001` + `CARTO-GRAPH-21-002` agreement; AirGap review scheduled after sign-off. |
| Surface.Env helper adoption checklist | 2025-11-15 | TODO | Zastava guild preparing sealed-mode test harness; depends on Surface guild office hours outcomes. |
# Blockers & coordination
- **Concelier Link-Not-Merge / Cartographer schemas**: SBOM-SERVICE-21-001..004 cannot start until `CONCELIER-GRAPH-21-001` and `CARTO-GRAPH-21-002` deliver the projection payloads.
- **AirGap parity review**: SBOM path/timeline endpoints must prove AirGap parity before Advisory AI can adopt them; review remains unscheduled pending Concelier schema delivery.
- **Scanner surface artifacts**: GRAPH-INDEX-28-007+ and all ZASTAVA-SURFACE tasks depend on Sprint 130 analyzer outputs and cached layer metadata; need updated ETA from Scanner guild.
- **Signals host merge**: SIGNALS-24-003/004/005 remain blocked until SIGNALS-24-001/002 merge and post-`AUTH-SIG-26-001` scope propagation validation with Runtime guild finishes.
- **CAS promotion + signed manifests**: SIGNALS-24-002 cannot close until Storage guild reviews the CAS promotion plan and manifest signing tooling; downstream scoring needs immutable graph IDs.
- **Runtime provenance wiring**: SIGNALS-24-003 still needs Authority scope propagation and NDJSON provenance mapping before runtime feeds can unblock scoring/cache layers.
# Next actions (target: 2025-11-14)
| Owner(s) | Action |
| --- | --- |
| Graph Indexer Guild | Use the 2025-11-13 Scanner sync to lock the analyzer artifact ETA; keep clustering/backfill scaffolds staged so GRAPH-INDEX-28-007 can flip to DOING immediately after feeds land. |
| SBOM Service Guild | Circulate the redlined projection schema to Concelier/Cartographer ahead of the 2025-11-14 review; scaffold the SBOM-SERVICE-21-001 PR so coding can start once the schema is approved. |
| Signals Guild | Merge CAS promotion + signed manifest PRs, then pivot to SIGNALS-24-003 provenance enrichment/backfill; prepare the scoring/cache kickoff deck for 24-004/005 owners. |
| Runtime & Authority Guilds | Use the delivered AUTH-SIG-26-001 scopes to finish propagation validation, freeze the provenance schema, and hand off fixtures to Signals before 2025-11-15. |
| Platform Storage Guild | Review CAS bucket policies/GC guardrails from the 2025-11-12 checklist and provide written sign-off before runtime sync on 2025-11-13. |
| Scanner Guild | Publish Sprint 130 surface artifact roadmap + Surface.FS cache drop timeline so Graph/Zastava can schedule start dates; provide mock datasets if slips extend past 2025-11-15. |
| Zastava Guilds | Convert Surface.Env helper adoption notes into a ready-to-execute checklist, align sealed-mode tests, and be prepared to start once Surface.FS caches are announced. |
# Downstream dependency rollup (snapshot: 2025-11-12)
| Track | Dependent sprint(s) | Impact if delayed |
| --- | --- | --- |
@@ -52,10 +172,14 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| Scanner surface artifact delay | GRAPH-INDEX-28-007+ and ZASTAVA-SURFACE-* cannot even start | Scanner guild to deliver analyzer artifact roadmap; Graph/Zastava teams to prepare mocks/tests in advance. |
| Signals host/callgraph merge misses 2025-11-09 | SIGNALS-24-003/004/005 remain blocked, pushing reachability scoring past sprint goals | Signals + Authority guilds to prioritize AUTH-SIG-26-001 review and merge SIGNALS-24-001/002 before 2025-11-10 standup. |
| Authority build regression (`PackApprovalFreshAuthWindow`) | Signals test suite cannot run in CI, delaying validation of new endpoints | Coordinate with Authority guild to restore missing constant in `StellaOps.Auth.ServerIntegration`; rerun Signals tests once fixed. |
| CAS promotion slips past 2025-11-14 | SIGNALS-24-002 cannot close; reachability scoring has no trusted graph artifacts | Signals + Platform Storage to co-own CAS rollout checklist, escalate blockers during 2025-11-13 runtime sync. |
| Runtime provenance schema churn | SIGNALS-24-003 enrichment delays scoring/cache unblock and risks double uploads | Runtime + Authority guilds to freeze schema by 2025-11-14 and publish contract appendix; Signals updates ingestion once finalized. |
# Coordination log
| Date | Notes |
| --- | --- |
| 2025-11-12 | Snapshot + wave tracker refreshed; pending dependencies captured for Graph/SBOM/Signals/Zastava while Signals DOING work progresses on callgraph CAS promotion + runtime ingestion wiring. |
| 2025-11-11 | Runtime + Signals ran NDJSON ingestion soak test; Authority flagged remaining provenance fields for schema freeze ahead of 2025-11-13 sync. |
| 2025-11-09 | Sprint 140 snapshot refreshed; awaiting Scanner surface artifact ETA, Concelier/CARTO schema delivery, and Signals host merge before any wave can advance to DOING. |
# Sprint 140 - Runtime & Signals


@@ -39,5 +39,21 @@ _Theme:_ Finish the provable reachability pipeline (graph CAS → replay → DSS
| DOCS-VEX-401-012 | TODO | Maintain the VEX Evidence Playbook, publish repo templates/README, and document verification workflows for operators. | Docs Guild (`docs/benchmarks/vex-evidence-playbook.md`, `bench/README.md`) |
| SYMS-BUNDLE-401-014 | TODO | Produce deterministic symbol bundles for air-gapped installs (`symbols bundle create|verify|load`), including DSSE manifests and Rekor checkpoints, and document offline workflows (`docs/specs/SYMBOL_MANIFEST_v1.md`). | Symbols Guild, Ops Guild (`src/Symbols/StellaOps.Symbols.Bundle`, `ops`) |
| DOCS-RUNBOOK-401-017 | TODO | Publish the reachability runtime ingestion runbook, link it from delivery guides, and keep Ops/Signals troubleshooting steps current. | Docs Guild · Ops Guild (`docs/runbooks/reachability-runtime.md`, `docs/reachability/DELIVERY_GUIDE.md`) |
| POLICY-LIB-401-001 | TODO | Extract the policy DSL parser/compiler into `StellaOps.PolicyDsl`, add the lightweight syntax (default action + inline rules), and expose `PolicyEngineFactory`/`SignalContext` APIs for reuse. | Policy Guild (`src/Policy/StellaOps.PolicyDsl`, `docs/policy/dsl.md`) |
| POLICY-LIB-401-002 | TODO | Ship unit-test harness + sample `policy/default.dsl` (table-driven cases) and wire `stella policy lint/simulate` to the shared library. | Policy Guild, CLI Guild (`tests/Policy/StellaOps.PolicyDsl.Tests`, `policy/default.dsl`, `docs/policy/lifecycle.md`) |
| POLICY-ENGINE-401-003 | TODO | Replace in-service DSL compilation with the shared library, support both legacy `stella-dsl@1` packs and the new inline syntax, and keep determinism hashes stable. | Policy Guild (`src/Policy/StellaOps.Policy.Engine`, `docs/modules/policy/architecture.md`) |
| CLI-EDITOR-401-004 | TODO | Enhance `stella policy` CLI verbs (edit/lint/simulate) to edit Git-backed `.dsl` files, run local coverage tests, and commit SemVer metadata. | CLI Guild (`src/Cli/StellaOps.Cli`, `docs/policy/lifecycle.md`) |
| DOCS-DSL-401-005 | TODO | Refresh `docs/policy/dsl.md` + lifecycle docs with the new syntax, signal dictionary (`trust_score`, `reachability`, etc.), authoring workflow, and safety rails (shadow mode, coverage tests). | Docs Guild (`docs/policy/dsl.md`, `docs/policy/lifecycle.md`) |
| DSSE-LIB-401-020 | TODO | Package `StellaOps.Attestor.Envelope` primitives into a reusable `StellaOps.Attestation` library with `InTotoStatement`, `IAuthoritySigner`, DSSE pre-auth helpers, and .NET-friendly APIs for build agents. | Attestor Guild · Platform Guild (`src/Attestor/StellaOps.Attestation`, `src/Attestor/StellaOps.Attestor.Envelope`) |
| DSSE-CLI-401-021 | TODO | Ship a `stella attest` CLI (or sample `StellaOps.Attestor.Tool`) plus GitLab/GitHub workflow snippets that emit DSSE per build step (scan/package/push) using the new library and Authority keys. | CLI Guild · DevOps Guild (`src/Cli/StellaOps.Cli`, `scripts/ci/attest-*`, `docs/modules/attestor/architecture.md`) |
| DSSE-DOCS-401-022 | TODO | Document the build-time attestation walkthrough (`docs/ci/dsse-build-flow.md`): models, helper usage, Authority integration, storage conventions, and verification commands, aligning with the advisory. | Docs Guild · Attestor Guild (`docs/ci/dsse-build-flow.md`, `docs/modules/attestor/architecture.md`) |
| REACH-LATTICE-401-023 | TODO | Define the reachability lattice model (`ReachState`, `EvidenceKind`, `MitigationKind`, scoring policy) in Scanner docs + code; ensure evidence joins write to the event graph schema. | Scanner Guild · Policy Guild (`docs/reachability/lattice.md`, `docs/modules/scanner/architecture.md`, `src/Scanner/StellaOps.Scanner.WebService`) |
| UNCERTAINTY-SCHEMA-401-024 | TODO | Extend Signals findings with `uncertainty.states[]`, entropy fields, and `riskScore`; emit `FindingUncertaintyUpdated` events and persist evidence per docs. | Signals Guild (`src/Signals/StellaOps.Signals`, `docs/uncertainty/README.md`) |
| UNCERTAINTY-SCORER-401-025 | TODO | Implement the entropy-aware risk scorer (`riskScore = base × reach × trust × (1 + entropyBoost)`) and wire it into finding writes. | Signals Guild (`src/Signals/StellaOps.Signals.Application`, `docs/uncertainty/README.md`) |
| UNCERTAINTY-POLICY-401-026 | TODO | Update policy guidance (Concelier/Excititor) with uncertainty gates (U1/U2/U3), sample YAML rules, and remediation actions. | Policy Guild · Concelier Guild (`docs/policy/dsl.md`, `docs/uncertainty/README.md`) |
| UNCERTAINTY-UI-401-027 | TODO | Surface uncertainty chips/tooltips in the Console (React UI) + CLI output (risk score + entropy states). | UI Guild · CLI Guild (`src/UI/StellaOps.UI`, `src/Cli/StellaOps.Cli`, `docs/uncertainty/README.md`) |
| PROV-INLINE-401-028 | DOING | Extend Authority/Feedser event writers to attach inline DSSE + Rekor references on every SBOM/VEX/scan event using `StellaOps.Provenance.Mongo`. | Authority Guild · Feedser Guild (`docs/provenance/inline-dsse.md`, `src/__Libraries/StellaOps.Provenance.Mongo`) |
| PROV-BACKFILL-401-029 | TODO | Backfill historical Mongo events with DSSE/Rekor metadata by resolving known attestations per subject digest. | Platform Guild (`docs/provenance/inline-dsse.md`, `scripts/publish_attestation_with_provenance.sh`) |
| PROV-INDEX-401-030 | TODO | Deploy provenance indexes (`events_by_subject_kind_provenance`, etc.) and expose compliance/replay queries. | Platform Guild · Ops Guild (`docs/provenance/inline-dsse.md`, `ops/mongo/indices/events_provenance_indices.js`) |
> Use `docs/reachability/DELIVERY_GUIDE.md` for architecture context, dependencies, and acceptance tests.


@@ -9,7 +9,6 @@ Each wave groups sprints that declare the same leading dependency. Start waves o
- Shared prerequisite(s): None (explicit)
- Parallelism guidance: No upstream sprint recorded; confirm module AGENTS and readiness gates before parallel execution.
- Sprints:
  - SPRINT_138_scanner_ruby_parity.md — Sprint 138 - Scanner & Surface. In progress.
  - SPRINT_140_runtime_signals.md — Sprint 140 - Runtime & Signals. In progress.
  - SPRINT_150_scheduling_automation.md — Sprint 150 - Scheduling & Automation


@@ -19,10 +19,11 @@ The service operates strictly downstream of the **Aggregation-Only Contract (AOC
- Join SBOM inventory, Concelier advisories, and Excititor VEX evidence via canonical linksets and equivalence tables.
- Materialise effective findings (`effective_finding_{policyId}`) with append-only history and produce explain traces.
- Emit per-finding OpenVEX decisions anchored to reachability evidence, forward them to Signer/Attestor for DSSE/Rekor, and publish the resulting artifacts for bench/verification consumers.
- Consume reachability lattice decisions (`ReachDecision`, `docs/reachability/lattice.md`) to drive confidence-based VEX gates (not_affected / under_investigation / affected) and record the policy hash used for each decision.
- Operate incrementally: react to change streams (advisory/vex/SBOM deltas) with ≤5min SLA.
- Provide simulations with diff summaries for UI/CLI workflows without modifying state.
- Enforce strict determinism guard (no wall-clock, RNG, network beyond allow-listed services) and RBAC + tenancy via Authority scopes.
- Support sealed/air-gapped deployments with offline bundles and sealed-mode hints.
Non-goals: policy authoring UI (handled by Console), ingestion or advisory normalisation (Concelier), VEX consensus (Excititor), runtime enforcement (Zastava).


@@ -134,8 +134,9 @@ No confidences. Either a fact is proven with listed mechanisms, or it is not cla
* `images { imageDigest, repo, tag?, arch, createdAt, lastSeen }`
* `layers { layerDigest, mediaType, size, createdAt, lastSeen }`
* `links { fromType, fromDigest, artifactId }` // image/layer -> artifact
* `jobs { _id, kind, args, state, startedAt, heartbeatAt, endedAt, error }`
* `lifecycleRules { ruleId, scope, ttlDays, retainIfReferenced, immutable }`
* `ruby.packages { _id: scanId, imageDigest, generatedAtUtc, packages[] }` // decoded `RubyPackageInventory` documents for CLI/Policy reuse
### 3.3 Object store layout (RustFS)
@@ -164,9 +165,10 @@ All under `/api/v1/scanner`. Auth: **OpTok** (DPoP/mTLS); RBAC scopes.
```
POST /scans { imageRef|digest, force?:bool } → { scanId }
GET  /scans/{id} → { status, imageDigest, artifacts[], rekor? }
GET  /sboms/{imageDigest} ?format=cdx-json|cdx-pb|spdx-json&view=inventory|usage → bytes
GET  /scans/{id}/ruby-packages → { scanId, imageDigest, generatedAt, packages[] }
GET  /diff?old=<digest>&new=<digest>&view=inventory|usage → diff.json
POST /exports { imageDigest, format, view, attest?:bool } → { artifactId, rekor? }
POST /reports { imageDigest, policyRevision? } → { reportId, rekor? }   # delegates to backend policy+vex
GET  /catalog/artifacts/{id} → { meta }
@@ -226,6 +228,7 @@ When `scanner.events.enabled = true`, the WebService serialises the signed repor
* The exported metadata (`stellaops.os.*` properties, license list, source package) feeds policy scoring and export pipelines directly: Policy evaluates quiet rules against package provenance while Exporters forward the enriched fields into downstream JSON/Trivy payloads.
* **Reachability lattice**: analyzers + runtime probes emit `Evidence`/`Mitigation` records (see `docs/reachability/lattice.md`). The lattice engine joins static path evidence, runtime hits (EventPipe/JFR), taint flows, environment gates, and mitigations into `ReachDecision` documents that feed VEX gating and event graph storage.
* Sprint401 introduces `StellaOps.Scanner.Symbols.Native` (DWARF/PDB reader + demangler) and `StellaOps.Scanner.CallGraph.Native` (function boundary detector + call-edge builder). These libraries feed `FuncNode`/`CallEdge` CAS bundles and enrich reachability graphs with `{code_id, confidence, evidence}` so Signals/Policy/UI can cite function-level justifications.


@@ -91,7 +91,9 @@
## 5. Data Contracts

| Artifact | Shape | Consumer |
|----------|-------|----------|
| `ruby_packages.json` | `RubyPackageInventory { scanId, imageDigest, generatedAt, packages[] }` where each package mirrors `{id, name, version, source, provenance, groups[], platform, runtime.*}` | SBOM Composer, Policy Engine |
| `ruby_runtime_edges.json` | Edges `{from, to, reason, confidence}` | EntryTrace overlay, Policy explain traces |
| `ruby_capabilities.json` | Capability `{kind, location, evidenceHash, params}` | Policy Engine (capability predicates) |
| `ruby_observation.json` | Summary document (packages, runtime edges, capability flags) | Surface manifest, Policy explain traces |

`ruby_packages.json` records are persisted in Mongo's `ruby.packages` collection via the `RubyPackageInventoryStore`. Scanner.WebService exposes the same payload through `GET /api/v1/scanner/scans/{scanId}/ruby-packages` so Policy, CLI, and Offline Kit consumers can reuse the canonical inventory without re-running the analyzer. Each document is keyed by `scanId` and includes the resolved `imageDigest` plus the UTC timestamp recorded by the Worker.
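For quick inspection, the persisted document can be read straight from `mongosh`; a sketch, with the collection name and key taken from the schema notes above:

```javascript
// The inventory is keyed by scanId (_id) and matches the API payload above.
db["ruby.packages"].findOne({ _id: "<scanId>" })
```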


@@ -8,11 +8,12 @@ This document specifies the `stella-dsl@1` grammar, semantics, and guardrails us
## 1·Design Goals

- **Deterministic:** Same policy + same inputs ⇒ identical findings on every machine.
- **Declarative:** No arbitrary loops, network calls, or clock access.
- **Explainable:** Every decision records the rule, inputs, and rationale in the explain trace.
- **Lean authoring:** Common precedence, severity, and suppression patterns are first-class.
- **Offline-friendly:** Grammar and built-ins avoid cloud dependencies and run the same in sealed deployments.
- **Reachability-aware:** Policies can consume reachability lattice states (`ReachState`) and evidence scores to drive VEX gates (`not_affected`, `under_investigation`, `affected`).
---
@@ -144,7 +145,7 @@ Within predicates and actions you may reference the following namespaces:
| `vex.any(...)`, `vex.all(...)`, `vex.count(...)` | Functions operating over all matching statements. |
| `run` | `policyId`, `policyVersion`, `tenant`, `timestamp` | Metadata for explain annotations. |
| `env` | Arbitrary key/value pairs injected per run (e.g., `environment`, `runtime`). |
| `telemetry` | Optional reachability signals. Example fields: `telemetry.reachability.state`, `telemetry.reachability.score`, `telemetry.reachability.policyVersion`. Missing fields evaluate to `unknown`. |
| `secret` | `findings`, `bundle`, helper predicates | Populated when the Secrets Analyzer runs. Exposes masked leak findings and bundle metadata for policy decisions. |
| `profile.<name>` | Values computed inside profile blocks (maps, scalars). |


@@ -0,0 +1,204 @@
# Inline DSSE Provenance
> **Status:** Draft — aligns with the November 2025 advisory “store DSSE attestation refs inline on every SBOM/VEX event node.”
> **Owners:** Authority Guild · Feedser Guild · Platform Guild · Docs Guild.
This document defines how StellaOps records provenance for SBOM, VEX, scan, and derived events: every event node in the Mongo event graph includes DSSE + Rekor references and verification metadata so audits and replay become first-class queries.
---
## 1. Event patch (Mongo schema)
```jsonc
{
"_id": "evt_...",
"kind": "SBOM|VEX|SCAN|INGEST|DERIVED",
"subject": {
"purl": "pkg:nuget/example@1.2.3",
"digest": { "sha256": "..." },
"version": "1.2.3"
},
"provenance": {
"dsse": {
"envelopeDigest": "sha256:...",
"payloadType": "application/vnd.in-toto+json",
"key": {
"keyId": "cosign:SHA256-PKIX:ABC...",
"issuer": "fulcio",
"algo": "ECDSA"
},
"rekor": {
"logIndex": 1234567,
"uuid": "b3f0...",
"integratedTime": 1731081600,
"mirrorSeq": 987654 // optional
},
"chain": [
{ "type": "build", "id": "att:build#...", "digest": "sha256:..." },
{ "type": "sbom", "id": "att:sbom#...", "digest": "sha256:..." }
]
}
},
"trust": {
"verified": true,
"verifier": "Authority@stella",
"witnesses": 1,
"policyScore": 0.92
},
"ts": "2025-11-11T12:00:00Z"
}
```
### Key fields
| Field | Description |
|-------|-------------|
| `provenance.dsse.envelopeDigest` | SHA-256 of the DSSE envelope (not payload). |
| `provenance.dsse.payloadType` | Usually `application/vnd.in-toto+json`. |
| `provenance.dsse.key` | Key fingerprint / issuer / algorithm. |
| `provenance.dsse.rekor` | Rekor transparency log metadata (index, UUID, integrated time). |
| `provenance.dsse.chain` | Optional chain of dependent attestations (build → sbom → scan). |
| `trust.*` | Result of local verification (DSSE signature, Rekor proof, policy). |
---
## 2. Write path (ingest flow)
1. **Obtain provenance metadata** for each attested artifact (build, SBOM, VEX, scan). The CI script (`scripts/publish_attestation_with_provenance.sh`) captures `envelopeDigest`, Rekor `logIndex`/`uuid`, and key info.
2. **Authority/Feedser** verify the DSSE + Rekor proof (local cosign/rekor libs or the Signer service) and set `trust.verified = true`, `trust.verifier = "Authority@stella"`, `trust.witnesses = 1`.
3. **Attach** the provenance block before appending the event to Mongo, using `StellaOps.Provenance.Mongo` helpers.
4. **Backfill** historical events by resolving known subjects → attestation digests and running an update script.
Reference helper: `src/__Libraries/StellaOps.Provenance.Mongo/ProvenanceMongoExtensions.cs`.
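A minimal sketch of steps 3–4 with the raw MongoDB C# driver; the production write path goes through `StellaOps.Provenance.Mongo`, whose exact API is not reproduced here, so treat the helper below as illustrative only:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

internal static class ProvenancePatchSketch
{
    // Attaches the §1 provenance/trust blocks to an existing event document.
    // Field paths mirror the schema above; the helper itself is hypothetical.
    public static Task AttachProvenanceAsync(
        IMongoCollection<BsonDocument> events,
        string eventId,
        string envelopeDigest,
        long logIndex,
        string uuid,
        long integratedTime,
        CancellationToken ct)
    {
        var update = Builders<BsonDocument>.Update
            .Set("provenance.dsse.envelopeDigest", envelopeDigest)
            .Set("provenance.dsse.payloadType", "application/vnd.in-toto+json")
            .Set("provenance.dsse.rekor", new BsonDocument
            {
                ["logIndex"] = logIndex,
                ["uuid"] = uuid,
                ["integratedTime"] = integratedTime,
            })
            .Set("trust.verified", true)
            .Set("trust.verifier", "Authority@stella")
            .Set("trust.witnesses", 1);

        return events.UpdateOneAsync(
            Builders<BsonDocument>.Filter.Eq("_id", eventId),
            update,
            cancellationToken: ct);
    }
}
```

The same update shape serves the backfill job: resolve each subject to its attestation digests, then apply the patch per event.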
---
## 3. CI/CD snippet
See `scripts/publish_attestation_with_provenance.sh`:
```bash
rekor-cli upload --rekor_server "$REKOR_URL" \
--artifact "$ATTEST_PATH" --type dsse --format json > rekor-upload.json
LOG_INDEX=$(jq '.LogIndex' rekor-upload.json)
UUID=$(jq -r '.UUID' rekor-upload.json)
ENVELOPE_SHA256=$(sha256sum "$ATTEST_PATH" | awk '{print $1}')
cat > provenance-meta.json <<EOF
{
"subject": { "imageRef": "$IMAGE_REF", "digest": { "sha256": "$IMAGE_DIGEST" } },
"dsse": {
"envelopeDigest": "sha256:$ENVELOPE_SHA256",
"payloadType": "application/vnd.in-toto+json",
"key": { "keyId": "$KEY_ID", "issuer": "$KEY_ISSUER", "algo": "$KEY_ALGO" },
"rekor": { "logIndex": $LOG_INDEX, "uuid": "$UUID", "integratedTime": $(jq '.IntegratedTime' rekor-upload.json) }
}
}
EOF
```
Feedser ingests this JSON and maps it to `DsseProvenance` + `TrustInfo`.
---
## 4. Mongo indexes
Create indexes to keep provenance queries fast (`mongosh`):
```javascript
db.events.createIndex(
{ "subject.digest.sha256": 1, "kind": 1, "provenance.dsse.rekor.logIndex": 1 },
{ name: "events_by_subject_kind_provenance" }
);
db.events.createIndex(
{ "kind": 1, "trust.verified": 1, "provenance.dsse.rekor.logIndex": 1 },
{ name: "events_unproven_by_kind" }
);
db.events.createIndex(
{ "provenance.dsse.rekor.logIndex": 1 },
{ name: "events_by_rekor_logindex" }
);
```
Corresponding C# helper: `MongoIndexes.EnsureEventIndexesAsync`.
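A sketch of what that helper can look like; only the key patterns and index names above are normative, the rest is illustrative:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

internal static class MongoIndexesSketch
{
    // Mirrors the three mongosh createIndex calls above.
    public static Task EnsureEventIndexesAsync(
        IMongoCollection<BsonDocument> events,
        CancellationToken ct = default)
    {
        var keys = Builders<BsonDocument>.IndexKeys;
        var models = new[]
        {
            new CreateIndexModel<BsonDocument>(
                keys.Ascending("subject.digest.sha256").Ascending("kind")
                    .Ascending("provenance.dsse.rekor.logIndex"),
                new CreateIndexOptions { Name = "events_by_subject_kind_provenance" }),
            new CreateIndexModel<BsonDocument>(
                keys.Ascending("kind").Ascending("trust.verified")
                    .Ascending("provenance.dsse.rekor.logIndex"),
                new CreateIndexOptions { Name = "events_unproven_by_kind" }),
            new CreateIndexModel<BsonDocument>(
                keys.Ascending("provenance.dsse.rekor.logIndex"),
                new CreateIndexOptions { Name = "events_by_rekor_logindex" }),
        };
        return events.Indexes.CreateManyAsync(models, ct);
    }
}
```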
---
## 5. Query recipes
* **All proven VEX for an image digest:**
```javascript
db.events.find({
kind: "VEX",
"subject.digest.sha256": "<digest>",
"provenance.dsse.rekor.logIndex": { $exists: true },
"trust.verified": true
})
```
* **Compliance gap (unverified data used for decisions):**
```javascript
db.events.aggregate([
{ $match: { kind: { $in: ["VEX","SBOM","SCAN"] } } },
{ $match: {
$or: [
{ "trust.verified": { $ne: true } },
{ "provenance.dsse.rekor.logIndex": { $exists: false } }
]
}
},
{ $group: { _id: "$kind", count: { $sum: 1 } } }
])
```
* **Replay slice:** filter for events where `provenance.dsse.chain` covers build → sbom → scan and export referenced attestation digests.
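One way to express the chain filter (a sketch; `$all` over the dotted `chain.type` path matches events whose chain includes all three stages):

```javascript
db.events.find(
  {
    "provenance.dsse.chain.type": { $all: ["build", "sbom", "scan"] },
    "trust.verified": true
  },
  { "provenance.dsse.chain.digest": 1 }
)
```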
---
## 6. Policy gates
Examples:
```yaml
rules:
- id: GATE-PROVEN-VEX
when:
all:
- kind: "VEX"
- trust.verified: true
- key.keyId in VendorAllowlist
- rekor.integratedTime <= releaseFreeze
then:
decision: ALLOW
- id: BLOCK-UNPROVEN
when:
any:
- trust.verified != true
- provenance.dsse.rekor.logIndex missing
then:
decision: FAIL
reason: "Unproven evidence influences decision; require Rekor-backed attestation."
```
---
## 7. UI nudges
* **Provenance chip** on findings/events: `Verified • Rekor#1234567 • KeyID:cosign:...` (click → inclusion proof & DSSE preview).
* Facet filter: `Provenance = Verified / Missing / Key-Policy-Mismatch`.
---
## 8. Implementation tasks
| Task ID | Scope |
|---------|-------|
| `PROV-INLINE-401-028` | Extend Authority/Feedser write-paths to attach `provenance.dsse` + `trust` blocks using `StellaOps.Provenance.Mongo`. |
| `PROV-BACKFILL-401-029` | Backfill historical events with DSSE/Rekor refs based on existing attestation digests. |
| `PROV-INDEX-401-030` | Create Mongo indexes and expose helper queries for audits. |
Keep this document updated when new attestation types or mirror/witness policies land.


@@ -108,6 +108,7 @@ Each sprint is two weeks; refer to `docs/implplan/SPRINT_401_reachability_eviden
- [Function-level evidence guide](function-level-evidence.md) captures the Nov 2025 advisory scope, task references, and schema expectations; keep it in lockstep with sprint status.
- [Reachability runtime runbook](../runbooks/reachability-runtime.md) now documents ingestion, CAS staging, air-gap handling, and troubleshooting—link every runtime feature PR to this guide.
- [VEX Evidence Playbook](../benchmarks/vex-evidence-playbook.md) defines the bench repo layout, artifact shapes, verifier tooling, and metrics; keep it updated when Policy/Signer/CLI features land.
- [Reachability lattice](lattice.md) describes the confidence states, evidence/mitigation kinds, scoring policy, event graph schema, and VEX gates; update it when lattices or probes change.
- Update module dossiers (Scanner, Signals, Replay, Authority, Policy, UI) once each guild lands work.

---


@@ -0,0 +1,198 @@
# Reachability Lattice & Scoring Model
> **Status:** Draft — mirrors the November 2025 advisory on confidence-based reachability.
> **Owners:** Scanner Guild · Policy Guild · Signals Guild.
This document defines the confidence lattice, evidence types, mitigation scoring, and policy gates used to turn static/runtime signals into reproducible reachability decisions and VEX statuses.
---
## 1. Overview
Classic “reachable: true/false” answers are too brittle. StellaOps models reachability as an **ordered lattice** with explicit states and scores. Each analyzer/runtime probe emits `Evidence` documents; mitigations add `Mitigation` entries. The lattice engine joins both inputs into a `ReachDecision`:
```
UNOBSERVED (0–9)
  < POSSIBLE (10–29)
  < STATIC_PATH (30–59)
  < DYNAMIC_SEEN (60–79)
  < DYNAMIC_USER_TAINTED (80–99)
  < EXPLOIT_CONSTRAINTS_REMOVED (100)
```
Each state corresponds to increasing confidence that a vulnerability can execute. Mitigations reduce scores; policy gates map scores to VEX statuses (`not_affected`, `under_investigation`, `affected`).
---
## 2. Core types
```csharp
public enum ReachState { Unobserved, Possible, StaticPath, DynamicSeen, DynamicUserTainted, ExploitConstraintsRemoved }
public enum EvidenceKind {
StaticCallEdge, StaticEntryPointProximity, StaticPackageDeclaredOnly,
RuntimeMethodHit, RuntimeStackSample, RuntimeHttpRouteHit,
UserInputSource, DataTaintFlow, ConfigFlagOn, ConfigFlagOff,
ContainerNetworkRestricted, ContainerNetworkOpen,
WafRulePresent, PatchLevel, VendorVexNotAffected, VendorVexAffected,
ManualOverride
}
public sealed record Evidence(
string Id,
EvidenceKind Kind,
double Weight,
string Source,
DateTimeOffset Timestamp,
string? ArtifactRef,
string? Details);
public enum MitigationKind { WafRule, FeatureFlagDisabled, AuthZEnforced, InputValidation, PatchedVersion, ContainerNetworkIsolation, RuntimeGuard, KillSwitch, Other }
public sealed record Mitigation(
string Id,
MitigationKind Kind,
double Strength,
string Source,
DateTimeOffset Timestamp,
string? ConfigHash,
string? Details);
public sealed record ReachDecision(
string VulnerabilityId,
string ComponentPurl,
ReachState State,
int Score,
string PolicyVersion,
IReadOnlyList<Evidence> Evidence,
IReadOnlyList<Mitigation> Mitigations,
IReadOnlyDictionary<string,string> Metadata);
```
---
## 3. Scoring policy (default)
| Evidence class | Base score contribution |
|--------------------------|-------------------------|
| Static path (call graph) | ≥30 |
| Runtime hit | ≥60 |
| User-tainted flow | ≥80 |
| “Constraints removed” | =100 |
| Lockfile-only evidence | 10 (if no other signals)|
Mitigations subtract up to 40 points (configurable):
```
effectiveScore = baseScore - min(sum(m.Strength), 1.0) * MaxMitigationDelta
```
Clamp final scores to 0–100.
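A sketch of that rule over the §2 types, assuming the default `MaxMitigationDelta` of 40:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

internal static class ReachScoring
{
    // effectiveScore = baseScore - min(sum(m.Strength), 1.0) * MaxMitigationDelta,
    // clamped to 0–100.
    public static int EffectiveScore(
        int baseScore,
        IReadOnlyList<Mitigation> mitigations,
        double maxMitigationDelta = 40.0)
    {
        var strength = Math.Min(mitigations.Sum(m => m.Strength), 1.0);
        return (int)Math.Clamp(Math.Round(baseScore - strength * maxMitigationDelta), 0, 100);
    }
}
```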
---
## 4. State & VEX gates
Default thresholds (edit in `reachability.policy.yml`):
| State | Score range |
|----------------------------|-------------|
| UNOBSERVED | 0–9 |
| POSSIBLE | 10–29 |
| STATIC_PATH | 30–59 |
| DYNAMIC_SEEN | 60–79 |
| DYNAMIC_USER_TAINTED | 80–99 |
| EXPLOIT_CONSTRAINTS_REMOVED| 100 |
VEX mapping:
* **not_affected**: score ≤25 or mitigations dominate (score reduced below threshold).
* **affected**: score ≥60 (dynamic evidence without sufficient mitigation).
* **under_investigation**: everything between.
Each decision records `reachability.policy.version`, analyzer versions, policy hash, and config snapshot so downstream verifiers can replay the exact logic.
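A sketch of both mappings using the `ReachState` enum from §2; the literal thresholds below restate the defaults and would be loaded from `reachability.policy.yml` in practice:

```csharp
internal static class ReachGates
{
    // Default score → state thresholds from the table above.
    public static ReachState ToState(int score) => score switch
    {
        >= 100 => ReachState.ExploitConstraintsRemoved,
        >= 80 => ReachState.DynamicUserTainted,
        >= 60 => ReachState.DynamicSeen,
        >= 30 => ReachState.StaticPath,
        >= 10 => ReachState.Possible,
        _ => ReachState.Unobserved,
    };

    // Default VEX gate: ≤25 → not_affected, ≥60 → affected, else under_investigation.
    public static string ToVexStatus(int score) =>
        score <= 25 ? "not_affected"
        : score >= 60 ? "affected"
        : "under_investigation";
}
```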
---
## 5. Evidence sources
| Signal group | Producers | EvidenceKind |
|--------------|-----------|--------------|
| Static call graph | Roslyn/IL walkers, ASP.NET routing models, JVM/JIT analyzers | `StaticCallEdge`, `StaticEntryPointProximity`, `StaticFrameworkRouteEdge` |
| Runtime sampling | .NET EventPipe, JFR, Node inspector, Go/Rust probes | `RuntimeMethodHit`, `RuntimeStackSample`, `RuntimeHttpRouteHit` |
| Data/taint | Taint analyzers, user-input detectors | `UserInputSource`, `DataTaintFlow` |
| Environment | Config snapshot, container args, network policy | `ConfigFlagOn/Off`, `ContainerNetworkRestricted/Open` |
| Mitigations | WAF connectors, patch diff, kill switches | `MitigationKind.*` via `Mitigation` records |
| Trust | Vendor VEX statements, manual overrides | `VendorVexNotAffected/Affected`, `ManualOverride` |
Each evidence object **must** log `Source`, timestamps, and references (function IDs, config hashes) so auditors can trace it in the event graph.
---
## 6. Event graph schema
Persist function-level edges and evidence in Mongo (or your event store) under:
* `reach_functions` — documents keyed by `FunctionId`.
* `reach_call_sites` — `CallSite` edges (`caller`, `callee`, `frameworkEdge`).
* `reach_evidence` — array of `Evidence` per `(scanId, vulnId, component)`.
* `reach_mitigations` — array of `Mitigation` entries with config hashes.
* `reach_decisions` — final `ReachDecision` document; references the evidence/mitigation IDs above.
All collections are tenant-scoped and include analyzer/policy version metadata.
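For example, pulling one decision together with its cited evidence might look like this in `mongosh` (collection names follow the list above; the `evidenceIds` field is an assumption about how the references are stored):

```javascript
// Sketch only: a ReachDecision references evidence/mitigation IDs.
const decision = db.reach_decisions.findOne({
  vulnerabilityId: "CVE-2025-1234",
  componentPurl: "pkg:nuget/Example@1.2.3"
});
db.reach_evidence.find({ _id: { $in: decision.evidenceIds } })
```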
---
## 7. Policy gates → VEX decisions
VEXer consumes `ReachDecision` and `reachability.policy.yml` to emit:
```json
{
"vulnerability": "CVE-2025-1234",
"products": ["pkg:nuget/Example@1.2.3"],
"status": "not_affected|under_investigation|affected",
"status_notes": "Reachability score 22 (Possible) with WAF rule mitigation.",
"justification": "component_not_present|vulnerable_code_not_present|... or custom reason",
"action_statement": "Monitor config ABC",
"impact_statement": "Runtime probes observed 0 hits; static call graph absent.",
"timestamp": "...",
"custom": {
"reachability": {
"state": "POSSIBLE",
"score": 22,
"policyVersion": "reach-1",
"evidenceRefs": ["evidence:123", "mitigation:456"]
}
}
}
```
Justifications cite specific evidence/mitigation IDs so replay bundles (`docs/replay/DETERMINISTIC_REPLAY.md`) can prove the decision.
---
## 8. Runtime probes (overview)
* .NET: EventPipe session watching `Microsoft-Windows-DotNETRuntime/Loader,JIT` → `RuntimeMethodHit`.
* JVM: JFR recording with `MethodProfilingSample` events.
* Node/TS: Inspector or `--trace-event-categories node.async_hooks,node.perf` sample.
* Go/Rust: `pprof`/probe instrumentation.
All runtime probes write evidence via `IRuntimeEvidenceSink`, which deduplicates hits, enriches them with `FunctionId`, and stores them in `reach_evidence`.
See `src/Scanner/StellaOps.Scanner.WebService/Reachability/Runtime/DotNetRuntimeProbe.cs` (once implemented) for reference.
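A minimal sketch of the .NET path, built on the `Microsoft.Diagnostics.NETCore.Client` and TraceEvent packages; the `emitRuntimeMethodHit` callback stands in for `IRuntimeEvidenceSink`, whose real shape is not defined here:

```csharp
using System;
using Microsoft.Diagnostics.NETCore.Client;   // DiagnosticsClient, EventPipeProvider
using Microsoft.Diagnostics.Tracing;          // EventPipeEventSource (TraceEvent package)
using Microsoft.Diagnostics.Tracing.Parsers;  // ClrTraceEventParser

internal static class DotNetProbeSketch
{
    // Streams Loader/JIT events from a target process and reports each
    // JIT-loaded method as a candidate RuntimeMethodHit.
    public static void WatchMethodHits(int pid, Action<string> emitRuntimeMethodHit)
    {
        var client = new DiagnosticsClient(pid);
        var providers = new[]
        {
            new EventPipeProvider(
                "Microsoft-Windows-DotNETRuntime",
                System.Diagnostics.Tracing.EventLevel.Informational,
                (long)(ClrTraceEventParser.Keywords.Jit | ClrTraceEventParser.Keywords.Loader)),
        };

        using var session = client.StartEventPipeSession(providers, requestRundown: false);
        using var source = new EventPipeEventSource(session.EventStream);

        var clr = new ClrTraceEventParser(source);
        clr.MethodLoadVerbose += data =>
            emitRuntimeMethodHit($"{data.MethodNamespace}.{data.MethodName}");

        source.Process(); // blocks until the session ends
    }
}
```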
---
## 9. Roadmap
| Task | Description |
|------|-------------|
| `REACH-LATTICE-401-023` | Initial lattice types + scoring engine + event graph schema. |
| `REACH-RUNTIME-402-024` | Productionize runtime probes (EventPipe/JFR) with opt-in config and telemetry. |
| `REACH-VEX-402-025` | Wire `ReachDecision` into VEX generator; ensure OpenVEX/CSAF cite reachability evidence. |
| `REACH-POLICY-402-026` | Expose reachability gates in Policy DSL & CLI (edit/lint/test). |
Keep this doc updated as the lattice evolves or new signals/mitigations are added.

docs/uncertainty/README.md

@@ -0,0 +1,149 @@
# Uncertainty States & Entropy Scoring
> **Status:** Draft — aligns with the November 2025 advisory on explicit uncertainty tracking.
> **Owners:** Signals Guild · Concelier Guild · UI Guild.
StellaOps treats missing data and untrusted evidence as **first-class uncertainty states**, not silent false negatives. Each finding stores a list of `UncertaintyState` entries plus supporting evidence; the risk scorer uses their entropy to adjust final risk. Policy and UI surfaces reveal uncertainty to operators rather than hiding it.
---
## 1. Core states (extensible)
| Code | Name | Meaning |
|------|------------------------|---------------------------------------------------------------------------|
| `U1` | MissingSymbolResolution| Vulnerability → function mapping unresolved (no PDB/IL map, missing dSYMs). |
| `U2` | MissingPurl | Package identity/version ambiguous (lockfile absent, heuristics only). |
| `U3` | UntrustedAdvisory | Advisory source lacks DSSE/Sigstore provenance or corroboration. |
| `U4+`| (future) | e.g. partial SBOM coverage, missing container layers, unresolved transitives. |
Each state records `entropy` (0–1) and an evidence list pointing to analyzers, heuristics, or advisory sources that asserted the uncertainty.
---
## 2. Schema
```jsonc
{
"uncertainty": {
"states": [
{
"code": "U1",
"name": "MissingSymbolResolution",
"entropy": 0.72,
"evidence": [
{
"type": "AnalyzerProbe",
"sourceId": "dotnet.symbolizer",
"detail": "No PDB/IL map for Foo.Bar::DoWork"
}
],
"timestamp": "2025-11-12T14:12:00Z"
},
{
"code": "U2",
"name": "MissingPurl",
"entropy": 0.55,
"evidence": [
{
"type": "PackageHeuristic",
"sourceId": "jar.manifest",
"detail": "Guessed groupId=com.example, version ~= 1.9.x"
}
]
}
]
}
}
```
### C# models
```csharp
public sealed record UncertaintyEvidence(string Type, string SourceId, string Detail);
public sealed record UncertaintyState(
string Code,
string Name,
double Entropy,
IReadOnlyList<UncertaintyEvidence> Evidence);
```
Store them alongside `FindingDocument` in Signals and expose via APIs/CLI/GraphQL so downstream services can display them or enforce policies.
---
## 3. Risk score math
```
riskScore = baseScore
× reachabilityFactor (0..1)
× trustFactor (0..1)
× (1 + entropyBoost)
entropyBoost = clamp(avg(uncertainty[i].entropy) × k, 0 .. 0.5)
```
* `k` defaults to `0.5`. With mean entropy = 0.8, boost = 0.4 → risk increases 40% to highlight unknowns.
* If no uncertainty states exist, entropy boost = 0 and the previous scoring remains.
Persist both `uncertainty.states` and `riskScore` so policies, dashboards, and APIs stay deterministic.
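A sketch of the formula over the §2 models; `k` and the 0.5 cap come from this section, the rest is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

internal static class UncertaintyScoring
{
    // entropyBoost = clamp(avg(entropy) × k, 0..0.5); zero when no states exist.
    public static double EntropyBoost(IReadOnlyList<UncertaintyState> states, double k = 0.5)
        => states.Count == 0 ? 0.0 : Math.Clamp(states.Average(s => s.Entropy) * k, 0.0, 0.5);

    public static double RiskScore(
        double baseScore,
        double reachabilityFactor,
        double trustFactor,
        IReadOnlyList<UncertaintyState> states)
        => baseScore * reachabilityFactor * trustFactor * (1 + EntropyBoost(states));
}
```

With mean entropy 0.8 and the default `k`, the boost is 0.4, matching the worked example above.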
---
## 4. Policy + actions
Use uncertainty in Concelier/Excititor policies:

* **Block release** if a critical CVE has `U1` with entropy ≥0.70 until symbols or runtime probes are provided.
* **Warn** when only `U3` exists: allow deployment but require corroboration (OSV/GHSA, CSAF).
* **Auto-create tasks** for `U2` to fix SBOM/purl data quality.
Recommended policy predicates:
```yaml
when:
all:
- uncertaintyCodesAny: ["U1"]
- maxEntropyGte: 0.7
```
Excititor can suggest remediation actions (upload PDBs, add lockfiles, fetch signed CSAF) based on state codes.
---
## 5. UI guidelines
* Display chips `U1`, `U2`, … on each finding. Tooltip: entropy level + evidence bullets (“AnalyzerProbe/dotnet.symbolizer: …”).
* Provide “How to reduce entropy” hints: symbol uploads, EventPipe probes, purl overrides, advisory verification.
* Show entropy in filters (e.g., “entropy ≥ 0.5”) so teams can prioritise closing uncertainty gaps.
See `components/UncertaintyChipStack` (planned) for a reference implementation.
---
## 6. Event sourcing / audit
Emit `FindingUncertaintyUpdated` events whenever the set changes:
```json
{
"type": "FindingUncertaintyUpdated",
"findingId": "finding:service:prod:CVE-2023-12345",
"updatedAt": "2025-11-12T14:21:33Z",
"uncertainty": [ ...states... ]
}
```
Projections recompute `riskScore` deterministically, and the event log provides an audit trail showing when/why entropy changed.
---
## 7. Action hints (per state)
| Code | Suggested remediation |
|------|-----------------------|
| `U1` | Upload PDBs/dSYM files, enable symbolizer connectors, attach runtime probes (EventPipe/JFR). |
| `U2` | Provide package overrides, ingest lockfiles, fix SBOM generator metadata. |
| `U3` | Obtain signed CSAF/OSV evidence, verify via Excititor connectors, or mark trust overrides in policy. |
Keep this file updated as new states (U4+) or tooling hooks land. Link additional guides (symbol upload, purl overrides) once available.


@@ -6888,14 +6888,23 @@ internal static class CommandHandlers
logger.LogInformation("Resolving Ruby packages for scan {ScanId}.", identifier);
activity?.SetTag("stellaops.cli.scan_id", identifier);

var inventory = await client.GetRubyPackagesAsync(identifier, cancellationToken).ConfigureAwait(false);
if (inventory is null)
{
    outcome = "empty";
    Environment.ExitCode = 0;
    AnsiConsole.MarkupLine("[yellow]Ruby package inventory is not available for scan {0}.[/]", Markup.Escape(identifier));
    return;
}

var report = RubyResolveReport.Create(inventory);
if (!report.HasPackages)
{
    outcome = "empty";
    Environment.ExitCode = 0;
    var displayScanId = string.IsNullOrWhiteSpace(report.ScanId) ? identifier : report.ScanId;
    AnsiConsole.MarkupLine("[yellow]No Ruby packages found for scan {0}.[/]", Markup.Escape(displayScanId));
    return;
}
@@ -7225,6 +7234,12 @@ internal static class CommandHandlers
[JsonPropertyName("scanId")] [JsonPropertyName("scanId")]
public string ScanId { get; } public string ScanId { get; }
[JsonPropertyName("imageDigest")]
public string ImageDigest { get; }
[JsonPropertyName("generatedAt")]
public DateTimeOffset GeneratedAt { get; }
[JsonPropertyName("groups")] [JsonPropertyName("groups")]
public IReadOnlyList<RubyResolveGroup> Groups { get; } public IReadOnlyList<RubyResolveGroup> Groups { get; }
@@ -7234,15 +7249,17 @@ internal static class CommandHandlers
[JsonIgnore]
public int TotalPackages => Groups.Sum(static group => group.Packages.Count);

private RubyResolveReport(string scanId, string imageDigest, DateTimeOffset generatedAt, IReadOnlyList<RubyResolveGroup> groups)
{
    ScanId = scanId;
    ImageDigest = imageDigest;
    GeneratedAt = generatedAt;
    Groups = groups;
}

public static RubyResolveReport Create(RubyPackageInventoryModel inventory)
{
    var resolved = (inventory.Packages ?? Array.Empty<RubyPackageArtifactModel>())
        .Select(RubyResolvePackage.FromModel)
        .ToArray();
@@ -7272,7 +7289,9 @@ internal static class CommandHandlers
            .ToArray()))
        .ToArray();

    var normalizedScanId = inventory.ScanId ?? string.Empty;
    var normalizedDigest = inventory.ImageDigest ?? string.Empty;
    return new RubyResolveReport(normalizedScanId, normalizedDigest, inventory.GeneratedAt, grouped);
    }
}


@@ -892,7 +892,7 @@ internal sealed class BackendOperationsClient : IBackendOperationsClient
    return result;
}

public async Task<RubyPackageInventoryModel?> GetRubyPackagesAsync(string scanId, CancellationToken cancellationToken)
{
    EnsureBackendConfigured();
@@ -907,7 +907,7 @@ internal sealed class BackendOperationsClient : IBackendOperationsClient
using var response = await _httpClient.SendAsync(request, cancellationToken).ConfigureAwait(false);
if (response.StatusCode == HttpStatusCode.NotFound)
{
    return null;
}

if (!response.IsSuccessStatusCode)
@@ -916,11 +916,25 @@ internal sealed class BackendOperationsClient : IBackendOperationsClient
    throw new InvalidOperationException(failure);
}

var inventory = await response.Content
    .ReadFromJsonAsync<RubyPackageInventoryModel>(SerializerOptions, cancellationToken)
    .ConfigureAwait(false);

if (inventory is null)
{
    throw new InvalidOperationException("Ruby package response payload was empty.");
}

var normalizedScanId = string.IsNullOrWhiteSpace(inventory.ScanId) ? scanId : inventory.ScanId;
var normalizedDigest = inventory.ImageDigest ?? string.Empty;
var packages = inventory.Packages ?? Array.Empty<RubyPackageArtifactModel>();

return inventory with
{
    ScanId = normalizedScanId,
    ImageDigest = normalizedDigest,
    Packages = packages
};
}

public async Task<AdvisoryPipelinePlanResponseModel> CreateAdvisoryPipelinePlanAsync(


@@ -49,7 +49,7 @@ internal interface IBackendOperationsClient
Task<EntryTraceResponseModel?> GetEntryTraceAsync(string scanId, CancellationToken cancellationToken);

Task<RubyPackageInventoryModel?> GetRubyPackagesAsync(string scanId, CancellationToken cancellationToken);

Task<AdvisoryPipelinePlanResponseModel> CreateAdvisoryPipelinePlanAsync(AdvisoryAiTaskType taskType, AdvisoryPipelinePlanRequestModel request, CancellationToken cancellationToken);


@@ -1,3 +1,4 @@
using System;
using System.Collections.Generic;
using System.Text.Json.Serialization;
@@ -26,3 +27,8 @@ internal sealed record RubyPackageRuntime(
[property: JsonPropertyName("files")] IReadOnlyList<string>? Files, [property: JsonPropertyName("files")] IReadOnlyList<string>? Files,
[property: JsonPropertyName("reasons")] IReadOnlyList<string>? Reasons); [property: JsonPropertyName("reasons")] IReadOnlyList<string>? Reasons);
internal sealed record RubyPackageInventoryModel(
[property: JsonPropertyName("scanId")] string ScanId,
[property: JsonPropertyName("imageDigest")] string ImageDigest,
[property: JsonPropertyName("generatedAt")] DateTimeOffset GeneratedAt,
[property: JsonPropertyName("packages")] IReadOnlyList<RubyPackageArtifactModel> Packages);


@@ -2,5 +2,4 @@
| Task ID | State | Notes |
| --- | --- | --- |
| `SCANNER-CLI-0001` | DONE (2025-11-12) | Ruby verbs now consume the persisted `RubyPackageInventory`, warn when inventories are missing, and docs/tests were refreshed per Sprint 138. |


@@ -515,16 +515,18 @@ public sealed class CommandHandlersTests
var originalExit = Environment.ExitCode;
var backend = new StubBackendClient(new JobTriggerResult(true, "ok", null, null))
{
    RubyInventory = CreateRubyInventory(
        "scan-ruby",
        new[]
        {
            CreateRubyPackageArtifact("pkg-rack", "rack", "3.1.0", new[] { "default", "web" }, runtimeUsed: true),
            CreateRubyPackageArtifact("pkg-sidekiq", "sidekiq", "7.2.1", groups: null, runtimeUsed: false, metadataOverrides: new Dictionary<string, string?>
            {
                ["groups"] = "jobs",
                ["runtime.entrypoints"] = "config/jobs.rb",
                ["runtime.files"] = "config/jobs.rb"
            })
        })
};
var provider = BuildServiceProvider(backend);
@@ -557,15 +559,17 @@ public sealed class CommandHandlersTests
public async Task HandleRubyResolveAsync_WritesJson()
{
    var originalExit = Environment.ExitCode;
    const string identifier = "ruby-scan-json";
    var backend = new StubBackendClient(new JobTriggerResult(true, "ok", null, null))
    {
        RubyInventory = CreateRubyInventory(
            identifier,
            new[]
            {
                CreateRubyPackageArtifact("pkg-rack-json", "rack", "3.1.0", new[] { "default" }, runtimeUsed: true)
            })
    };
    var provider = BuildServiceProvider(backend);

    try
    {
@@ -608,6 +612,35 @@ public sealed class CommandHandlersTests
    }
}
[Fact]
public async Task HandleRubyResolveAsync_NotifiesWhenInventoryMissing()
{
var originalExit = Environment.ExitCode;
var backend = new StubBackendClient(new JobTriggerResult(true, "ok", null, null));
var provider = BuildServiceProvider(backend);
try
{
var output = await CaptureTestConsoleAsync(async _ =>
{
await CommandHandlers.HandleRubyResolveAsync(
provider,
imageReference: null,
scanId: "scan-missing",
format: "table",
verbose: false,
cancellationToken: CancellationToken.None);
});
Assert.Equal(0, Environment.ExitCode);
Assert.Contains("not available", output.Combined, StringComparison.OrdinalIgnoreCase);
}
finally
{
Environment.ExitCode = originalExit;
}
}
[Fact]
public async Task HandleAdviseRunAsync_WritesOutputAndSetsExitCode()
{
@@ -3520,6 +3553,18 @@ spec:
        mergedMetadata);
}
private static RubyPackageInventoryModel CreateRubyInventory(
string scanId,
IReadOnlyList<RubyPackageArtifactModel> packages,
string? imageDigest = null)
{
return new RubyPackageInventoryModel(
scanId,
imageDigest ?? "sha256:inventory",
DateTimeOffset.UtcNow,
packages);
}
private static string ComputeSha256Base64(string path)
{
@@ -3601,8 +3646,8 @@ spec:
public string? LastExcititorRoute { get; private set; }
public HttpMethod? LastExcititorMethod { get; private set; }
public object? LastExcititorPayload { get; private set; }
public RubyPackageInventoryModel? RubyInventory { get; set; }
public Exception? RubyInventoryException { get; set; }
public string? LastRubyPackagesScanId { get; private set; }
public List<(string ExportId, string DestinationPath, string? Algorithm, string? Digest)> ExportDownloads { get; } = new();
public ExcititorOperationResult? ExcititorResult { get; set; } = new ExcititorOperationResult(true, "ok", null, null);
@@ -3830,15 +3875,15 @@ spec:
    return Task.FromResult(EntryTraceResponse);
}

public Task<RubyPackageInventoryModel?> GetRubyPackagesAsync(string scanId, CancellationToken cancellationToken)
{
    LastRubyPackagesScanId = scanId;
    if (RubyInventoryException is not null)
    {
        throw RubyInventoryException;
    }

    return Task.FromResult(RubyInventory);
}

public Task<AdvisoryPipelinePlanResponseModel> CreateAdvisoryPipelinePlanAsync(AdvisoryAiTaskType taskType, AdvisoryPipelinePlanRequestModel request, CancellationToken cancellationToken)


@@ -0,0 +1,21 @@
using System.Text.Json.Serialization;
using StellaOps.Scanner.Core.Contracts;
namespace StellaOps.Scanner.WebService.Contracts;
public sealed record RubyPackagesResponse
{
[JsonPropertyName("scanId")]
public string ScanId { get; init; } = string.Empty;
[JsonPropertyName("imageDigest")]
public string ImageDigest { get; init; } = string.Empty;
[JsonPropertyName("generatedAt")]
public DateTimeOffset GeneratedAt { get; init; }
= DateTimeOffset.UtcNow;
[JsonPropertyName("packages")]
public IReadOnlyList<RubyPackageArtifact> Packages { get; init; }
= Array.Empty<RubyPackageArtifact>();
}


@@ -13,6 +13,8 @@ using StellaOps.Scanner.WebService.Domain;
using StellaOps.Scanner.WebService.Infrastructure;
using StellaOps.Scanner.WebService.Security;
using StellaOps.Scanner.WebService.Services;
using DomainScanProgressEvent = StellaOps.Scanner.WebService.Domain.ScanProgressEvent;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.EntryTrace;

namespace StellaOps.Scanner.WebService.Endpoints;
@@ -54,6 +56,12 @@ internal static class ScanEndpoints
    .Produces<EntryTraceResponse>(StatusCodes.Status200OK)
    .Produces(StatusCodes.Status404NotFound)
    .RequireAuthorization(ScannerPolicies.ScansRead);
scans.MapGet("/{scanId}/ruby-packages", HandleRubyPackagesAsync)
.WithName("scanner.scans.ruby-packages")
.Produces<RubyPackagesResponse>(StatusCodes.Status200OK)
.Produces(StatusCodes.Status404NotFound)
.RequireAuthorization(ScannerPolicies.ScansRead);
}

private static async Task<IResult> HandleSubmitAsync(
@@ -311,6 +319,46 @@ internal static class ScanEndpoints
    return Json(response, StatusCodes.Status200OK);
}
private static async Task<IResult> HandleRubyPackagesAsync(
string scanId,
IRubyPackageInventoryStore inventoryStore,
HttpContext context,
CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(inventoryStore);
if (!ScanId.TryParse(scanId, out var parsed))
{
return ProblemResultFactory.Create(
context,
ProblemTypes.Validation,
"Invalid scan identifier",
StatusCodes.Status400BadRequest,
detail: "Scan identifier is required.");
}
var inventory = await inventoryStore.GetAsync(parsed.Value, cancellationToken).ConfigureAwait(false);
if (inventory is null)
{
return ProblemResultFactory.Create(
context,
ProblemTypes.NotFound,
"Ruby packages not found",
StatusCodes.Status404NotFound,
detail: "Ruby package inventory is not available for the requested scan.");
}
var response = new RubyPackagesResponse
{
ScanId = inventory.ScanId,
ImageDigest = inventory.ImageDigest,
GeneratedAt = inventory.GeneratedAtUtc,
Packages = inventory.Packages
};
return Json(response, StatusCodes.Status200OK);
}
private static IReadOnlyDictionary<string, string> NormalizeMetadata(IDictionary<string, string> metadata)
{
    if (metadata is null || metadata.Count == 0)
@@ -342,7 +390,7 @@ internal static class ScanEndpoints
    await writer.WriteAsync(new[] { (byte)'\n' }, cancellationToken).ConfigureAwait(false);
}
private static async Task WriteSseAsync(PipeWriter writer, object payload, DomainScanProgressEvent progressEvent, CancellationToken cancellationToken)
{
    var json = JsonSerializer.Serialize(payload, SerializerOptions);
    var eventName = progressEvent.State.ToLowerInvariant();


@@ -19,6 +19,7 @@ using StellaOps.Cryptography.DependencyInjection;
using StellaOps.Cryptography.Plugin.BouncyCastle;
using StellaOps.Policy;
using StellaOps.Scanner.Cache;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.Surface.Env;
using StellaOps.Scanner.Surface.FS;
using StellaOps.Scanner.Surface.Secrets;


@@ -13,6 +13,7 @@ using StellaOps.Scanner.Storage.Catalog;
using StellaOps.Scanner.Storage.ObjectStore;
using StellaOps.Scanner.Storage.Repositories;
using StellaOps.Scanner.Surface.Env;
using StellaOps.Scanner.Surface.FS;
using StellaOps.Scanner.WebService.Contracts;
using StellaOps.Scanner.WebService.Options;


@@ -0,0 +1,112 @@
using System;
using System.Collections.Generic;
using System.Linq;
using StellaOps.Scanner.Analyzers.Lang;
using StellaOps.Scanner.Core.Contracts;
namespace StellaOps.Scanner.Worker.Processing.Surface;
internal static class RubyPackageInventoryBuilder
{
private const string AnalyzerId = "ruby";
public static IReadOnlyList<RubyPackageArtifact> Build(LanguageAnalyzerResult result)
{
ArgumentNullException.ThrowIfNull(result);
var artifacts = new List<RubyPackageArtifact>();
foreach (var component in result.Components)
{
if (!component.AnalyzerId.Equals(AnalyzerId, StringComparison.OrdinalIgnoreCase))
{
continue;
}
if (!string.Equals(component.Type, "gem", StringComparison.OrdinalIgnoreCase))
{
continue;
}
var metadata = component.Metadata ?? new Dictionary<string, string?>(StringComparer.OrdinalIgnoreCase);
var metadataCopy = new Dictionary<string, string?>(metadata, StringComparer.OrdinalIgnoreCase);
var groups = SplitList(metadataCopy, "groups");
var entrypoints = SplitList(metadataCopy, "runtime.entrypoints");
var runtimeFiles = SplitList(metadataCopy, "runtime.files");
var runtimeReasons = SplitList(metadataCopy, "runtime.reasons");
var declaredOnly = TryParseBool(metadataCopy, "declaredOnly");
var runtimeUsed = TryParseBool(metadataCopy, "runtime.used") ?? component.UsedByEntrypoint;
var source = GetString(metadataCopy, "source");
var platform = GetString(metadataCopy, "platform");
var lockfile = GetString(metadataCopy, "lockfile");
var artifactLocator = GetString(metadataCopy, "artifact");
var provenance = (source is not null || lockfile is not null || artifactLocator is not null)
? new RubyPackageProvenance(source, lockfile, artifactLocator ?? lockfile)
: null;
RubyPackageRuntime? runtime = null;
if (entrypoints is { Count: > 0 } || runtimeFiles is { Count: > 0 } || runtimeReasons is { Count: > 0 })
{
runtime = new RubyPackageRuntime(entrypoints, runtimeFiles, runtimeReasons);
}
artifacts.Add(new RubyPackageArtifact(
component.ComponentKey,
component.Name,
component.Version,
source,
platform,
groups,
declaredOnly,
runtimeUsed,
provenance,
runtime,
metadataCopy));
}
return artifacts;
}
private static IReadOnlyList<string>? SplitList(IReadOnlyDictionary<string, string?> metadata, string key)
{
if (!metadata.TryGetValue(key, out var raw) || string.IsNullOrWhiteSpace(raw))
{
return Array.Empty<string>();
}
var values = raw
.Split(';', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
.Where(static value => !string.IsNullOrWhiteSpace(value))
.Distinct(StringComparer.OrdinalIgnoreCase)
.ToArray();
return values.Length == 0 ? Array.Empty<string>() : values;
}
private static bool? TryParseBool(IReadOnlyDictionary<string, string?> metadata, string key)
{
if (!metadata.TryGetValue(key, out var value) || string.IsNullOrWhiteSpace(value))
{
return null;
}
if (bool.TryParse(value, out var parsed))
{
return parsed;
}
return null;
}
private static string? GetString(IReadOnlyDictionary<string, string?> metadata, string key)
{
if (!metadata.TryGetValue(key, out var value) || string.IsNullOrWhiteSpace(value))
{
return null;
}
return value.Trim();
}
}


@@ -1,5 +1,6 @@
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Collections.ObjectModel;
using System.Diagnostics;
using System.Globalization;
using System.Reflection;
@@ -7,6 +8,7 @@ using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;
using Microsoft.Extensions.Logging;
using StellaOps.Scanner.Analyzers.Lang;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.EntryTrace;
using StellaOps.Scanner.EntryTrace.Serialization;
@@ -38,6 +40,7 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
private readonly ScannerWorkerMetrics _metrics;
private readonly ILogger<SurfaceManifestStageExecutor> _logger;
private readonly ICryptoHash _hash;
private readonly IRubyPackageInventoryStore _rubyPackageStore;
private readonly string _componentVersion;

public SurfaceManifestStageExecutor(
@@ -46,7 +49,8 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
    ISurfaceEnvironment surfaceEnvironment,
    ScannerWorkerMetrics metrics,
    ILogger<SurfaceManifestStageExecutor> logger,
    ICryptoHash hash,
IRubyPackageInventoryStore rubyPackageStore)
{
    _publisher = publisher ?? throw new ArgumentNullException(nameof(publisher));
    _surfaceCache = surfaceCache ?? throw new ArgumentNullException(nameof(surfaceCache));
@@ -54,6 +58,7 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
    _metrics = metrics ?? throw new ArgumentNullException(nameof(metrics));
    _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    _hash = hash ?? throw new ArgumentNullException(nameof(hash));
_rubyPackageStore = rubyPackageStore ?? throw new ArgumentNullException(nameof(rubyPackageStore));
    _componentVersion = Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "unknown";
}
@@ -64,6 +69,7 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
ArgumentNullException.ThrowIfNull(context);

var payloads = CollectPayloads(context);
await PersistRubyPackagesAsync(context, cancellationToken).ConfigureAwait(false);
if (payloads.Count == 0)
{
    _metrics.RecordSurfaceManifestSkipped(context);
@@ -182,6 +188,33 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
    return payloads;
}
private async Task PersistRubyPackagesAsync(ScanJobContext context, CancellationToken cancellationToken)
{
if (!context.Analysis.TryGet<ReadOnlyDictionary<string, LanguageAnalyzerResult>>(ScanAnalysisKeys.LanguageAnalyzerResults, out var results))
{
return;
}
if (!results.TryGetValue("ruby", out var rubyResult) || rubyResult is null)
{
return;
}
var packages = RubyPackageInventoryBuilder.Build(rubyResult);
if (packages.Count == 0)
{
return;
}
var inventory = new RubyPackageInventory(
context.ScanId,
ResolveImageDigest(context),
context.TimeProvider.GetUtcNow(),
packages);
await _rubyPackageStore.StoreAsync(inventory, cancellationToken).ConfigureAwait(false);
}
private async Task PersistPayloadsToSurfaceCacheAsync(
    ScanJobContext context,
    string tenant,


@@ -1,15 +1,16 @@
using System.Diagnostics;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using StellaOps.Auth.Client;
using StellaOps.Configuration;
using StellaOps.Scanner.Cache;
using StellaOps.Scanner.Analyzers.OS.Plugin;
using StellaOps.Scanner.Analyzers.Lang.Plugin;
using StellaOps.Scanner.EntryTrace;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.Core.Security;
using StellaOps.Scanner.Surface.Env;
using StellaOps.Scanner.Surface.FS;
@@ -59,6 +60,10 @@ if (!string.IsNullOrWhiteSpace(connectionString))
    builder.Services.AddSingleton<ISurfaceManifestPublisher, SurfaceManifestPublisher>();
    builder.Services.AddSingleton<IScanStageExecutor, SurfaceManifestStageExecutor>();
}
else
{
builder.Services.TryAddSingleton<IRubyPackageInventoryStore, NullRubyPackageInventoryStore>();
}
builder.Services.TryAddSingleton<IScanJobSource, NullScanJobSource>();
builder.Services.TryAddSingleton<IPluginCatalogGuard, RestartOnlyPluginGuard>();


@@ -0,0 +1,54 @@
using System.Text.Json.Serialization;
namespace StellaOps.Scanner.Core.Contracts;
public sealed record RubyPackageInventory(
string ScanId,
string ImageDigest,
DateTimeOffset GeneratedAtUtc,
IReadOnlyList<RubyPackageArtifact> Packages);
public sealed record RubyPackageArtifact(
[property: JsonPropertyName("id")] string Id,
[property: JsonPropertyName("name")] string Name,
[property: JsonPropertyName("version")] string? Version,
[property: JsonPropertyName("source")] string? Source,
[property: JsonPropertyName("platform")] string? Platform,
[property: JsonPropertyName("groups")] IReadOnlyList<string>? Groups,
[property: JsonPropertyName("declaredOnly")] bool? DeclaredOnly,
[property: JsonPropertyName("runtimeUsed")] bool? RuntimeUsed,
[property: JsonPropertyName("provenance")] RubyPackageProvenance? Provenance,
[property: JsonPropertyName("runtime")] RubyPackageRuntime? Runtime,
[property: JsonPropertyName("metadata")] IReadOnlyDictionary<string, string?>? Metadata);
public sealed record RubyPackageProvenance(
[property: JsonPropertyName("source")] string? Source,
[property: JsonPropertyName("lockfile")] string? Lockfile,
[property: JsonPropertyName("locator")] string? Locator);
public sealed record RubyPackageRuntime(
[property: JsonPropertyName("entrypoints")] IReadOnlyList<string>? Entrypoints,
[property: JsonPropertyName("files")] IReadOnlyList<string>? Files,
[property: JsonPropertyName("reasons")] IReadOnlyList<string>? Reasons);
public interface IRubyPackageInventoryStore
{
Task StoreAsync(RubyPackageInventory inventory, CancellationToken cancellationToken);
Task<RubyPackageInventory?> GetAsync(string scanId, CancellationToken cancellationToken);
}
public sealed class NullRubyPackageInventoryStore : IRubyPackageInventoryStore
{
public Task StoreAsync(RubyPackageInventory inventory, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(inventory);
return Task.CompletedTask;
}
public Task<RubyPackageInventory?> GetAsync(string scanId, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(scanId);
return Task.FromResult<RubyPackageInventory?>(null);
}
}


@@ -0,0 +1,79 @@
using MongoDB.Bson.Serialization.Attributes;
using StellaOps.Scanner.Core.Contracts;
namespace StellaOps.Scanner.Storage.Catalog;
[BsonIgnoreExtraElements]
public sealed class RubyPackageInventoryDocument
{
[BsonId]
public string ScanId { get; set; } = string.Empty;
[BsonElement("imageDigest")]
[BsonIgnoreIfNull]
public string? ImageDigest { get; set; }
[BsonElement("generatedAtUtc")]
public DateTime GeneratedAtUtc { get; set; } = DateTime.UtcNow;
[BsonElement("packages")]
public List<RubyPackageDocument> Packages { get; set; } = new();
}

[BsonIgnoreExtraElements]
public sealed class RubyPackageDocument
{
[BsonElement("id")]
public string Id { get; set; } = string.Empty;
[BsonElement("name")]
public string Name { get; set; } = string.Empty;
[BsonElement("version")]
[BsonIgnoreIfNull]
public string? Version { get; set; }
[BsonElement("source")]
[BsonIgnoreIfNull]
public string? Source { get; set; }
[BsonElement("platform")]
[BsonIgnoreIfNull]
public string? Platform { get; set; }
[BsonElement("groups")]
[BsonIgnoreIfNull]
public List<string>? Groups { get; set; }
[BsonElement("declaredOnly")]
[BsonIgnoreIfNull]
public bool? DeclaredOnly { get; set; }
[BsonElement("runtimeUsed")]
[BsonIgnoreIfNull]
public bool? RuntimeUsed { get; set; }
[BsonElement("provenance")]
[BsonIgnoreIfNull]
public RubyPackageProvenance? Provenance { get; set; }
[BsonElement("runtime")]
[BsonIgnoreIfNull]
public RubyPackageRuntime? Runtime { get; set; }
[BsonElement("metadata")]
[BsonIgnoreIfNull]
public Dictionary<string, string?>? Metadata { get; set; }
}

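Because every optional member carries `[BsonIgnoreIfNull]`, absent fields are dropped from the stored document rather than written as nulls, and `ScanId` maps to `_id`. A quick way to eyeball the persisted shape (a sketch; the id is hypothetical and the output comment is abbreviated):

```
using MongoDB.Bson;
using MongoDB.Bson.IO;

var doc = new RubyPackageInventoryDocument
{
    ScanId = "scan-0001", // hypothetical id
    Packages = { new RubyPackageDocument { Id = "pkg:gem/rack@3.1.2", Name = "rack", Version = "3.1.2" } }
};

// Roughly: { "_id" : "scan-0001", "generatedAtUtc" : ISODate("..."), "packages" : [ ... ] }
// Note that imageDigest is absent entirely, which is what lets its sparse index stay small.
Console.WriteLine(doc.ToBsonDocument().ToJson(new JsonWriterSettings { Indent = true }));
```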

@@ -9,6 +9,7 @@ using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using MongoDB.Driver;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.EntryTrace;
using StellaOps.Scanner.Storage.Migrations;
using StellaOps.Scanner.Storage.Mongo;
@@ -67,7 +68,9 @@ public static class ServiceCollectionExtensions
services.TryAddSingleton<LifecycleRuleRepository>();
services.TryAddSingleton<RuntimeEventRepository>();
services.TryAddSingleton<EntryTraceRepository>();
services.TryAddSingleton<RubyPackageInventoryRepository>();
services.AddSingleton<IEntryTraceResultStore, EntryTraceResultStore>();
services.AddSingleton<IRubyPackageInventoryStore, RubyPackageInventoryStore>();
services.AddHttpClient(RustFsArtifactObjectStore.HttpClientName)
.ConfigureHttpClient((sp, client) =>

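The registration style matters here: this storage module uses `AddSingleton` for the real store, while hosts (see the first hunk above) use `TryAddSingleton` for the `Null*` fallbacks. A small sketch of that interplay, assuming both registrations run:

```
using System.Linq;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;

var services = new ServiceCollection();

// Storage module: registers the Mongo-backed store unconditionally.
services.AddSingleton<IRubyPackageInventoryStore, RubyPackageInventoryStore>();

// Host fallback: TryAddSingleton is a no-op once the service type is present,
// so the null store never shadows the real implementation.
services.TryAddSingleton<IRubyPackageInventoryStore, NullRubyPackageInventoryStore>();

var count = services.Count(d => d.ServiceType == typeof(IRubyPackageInventoryStore)); // 1
var winner = services.Single(d => d.ServiceType == typeof(IRubyPackageInventoryStore)).ImplementationType;
// winner == typeof(RubyPackageInventoryStore). If the fallback ran first instead,
// AddSingleton would still append the real store, and the last registration wins at resolve time.
```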

@@ -37,15 +37,16 @@ public sealed class MongoBootstrapper
private async Task EnsureCollectionsAsync(CancellationToken cancellationToken)
{
var targetCollections = new[]
{
ScannerStorageDefaults.Collections.Artifacts,
ScannerStorageDefaults.Collections.Images,
ScannerStorageDefaults.Collections.Layers,
ScannerStorageDefaults.Collections.Links,
ScannerStorageDefaults.Collections.Jobs,
ScannerStorageDefaults.Collections.LifecycleRules,
ScannerStorageDefaults.Collections.RuntimeEvents,
ScannerStorageDefaults.Collections.RubyPackages,
ScannerStorageDefaults.Collections.Migrations,
};
@@ -66,13 +67,14 @@ public sealed class MongoBootstrapper
private async Task EnsureIndexesAsync(CancellationToken cancellationToken)
{
await EnsureArtifactIndexesAsync(cancellationToken).ConfigureAwait(false);
await EnsureImageIndexesAsync(cancellationToken).ConfigureAwait(false);
await EnsureLayerIndexesAsync(cancellationToken).ConfigureAwait(false);
await EnsureLinkIndexesAsync(cancellationToken).ConfigureAwait(false);
await EnsureJobIndexesAsync(cancellationToken).ConfigureAwait(false);
await EnsureLifecycleIndexesAsync(cancellationToken).ConfigureAwait(false);
await EnsureRuntimeEventIndexesAsync(cancellationToken).ConfigureAwait(false);
await EnsureRubyPackageIndexesAsync(cancellationToken).ConfigureAwait(false);
}
private Task EnsureArtifactIndexesAsync(CancellationToken cancellationToken)
@@ -216,4 +218,20 @@ public sealed class MongoBootstrapper
return collection.Indexes.CreateManyAsync(models, cancellationToken);
}
private Task EnsureRubyPackageIndexesAsync(CancellationToken cancellationToken)
{
var collection = _database.GetCollection<RubyPackageInventoryDocument>(ScannerStorageDefaults.Collections.RubyPackages);
var models = new List<CreateIndexModel<RubyPackageInventoryDocument>>
{
new(
Builders<RubyPackageInventoryDocument>.IndexKeys.Ascending(x => x.ImageDigest),
new CreateIndexOptions { Name = "rubyPackages_imageDigest", Sparse = true }),
new(
Builders<RubyPackageInventoryDocument>.IndexKeys.Ascending(x => x.GeneratedAtUtc),
new CreateIndexOptions { Name = "rubyPackages_generatedAt" })
};
return collection.Indexes.CreateManyAsync(models, cancellationToken);
}
}

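These two indexes back the expected read patterns: the sparse `rubyPackages_imageDigest` index skips inventories that have no digest, and `rubyPackages_generatedAt` supports time-ordered listing and retention sweeps. A sketch of a query they would serve; `FindByDigestAsync` is a hypothetical helper, not part of this change:

```
using MongoDB.Driver;
using StellaOps.Scanner.Storage.Catalog;

public static class RubyPackageQueries
{
    // Finds all inventories recorded for an image digest, newest first.
    // Equality on ImageDigest uses the sparse index; the sort matches the generatedAt index.
    public static Task<List<RubyPackageInventoryDocument>> FindByDigestAsync(
        IMongoCollection<RubyPackageInventoryDocument> collection,
        string imageDigest,
        CancellationToken cancellationToken)
        => collection
            .Find(x => x.ImageDigest == imageDigest)
            .SortByDescending(x => x.GeneratedAtUtc)
            .ToListAsync(cancellationToken);
}
```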

@@ -23,6 +23,7 @@ public sealed class MongoCollectionProvider
public IMongoCollection<LifecycleRuleDocument> LifecycleRules => GetCollection<LifecycleRuleDocument>(ScannerStorageDefaults.Collections.LifecycleRules);
public IMongoCollection<RuntimeEventDocument> RuntimeEvents => GetCollection<RuntimeEventDocument>(ScannerStorageDefaults.Collections.RuntimeEvents);
public IMongoCollection<EntryTraceDocument> EntryTrace => GetCollection<EntryTraceDocument>(ScannerStorageDefaults.Collections.EntryTrace);
public IMongoCollection<RubyPackageInventoryDocument> RubyPackages => GetCollection<RubyPackageInventoryDocument>(ScannerStorageDefaults.Collections.RubyPackages);
private IMongoCollection<TDocument> GetCollection<TDocument>(string name)
{


@@ -0,0 +1,33 @@
using MongoDB.Driver;
using StellaOps.Scanner.Storage.Catalog;
using StellaOps.Scanner.Storage.Mongo;
namespace StellaOps.Scanner.Storage.Repositories;
public sealed class RubyPackageInventoryRepository
{
private readonly MongoCollectionProvider _collections;
public RubyPackageInventoryRepository(MongoCollectionProvider collections)
{
_collections = collections ?? throw new ArgumentNullException(nameof(collections));
}
public async Task<RubyPackageInventoryDocument?> GetAsync(string scanId, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(scanId);
return await _collections.RubyPackages
.Find(x => x.ScanId == scanId)
.FirstOrDefaultAsync(cancellationToken)
.ConfigureAwait(false);
}
public async Task UpsertAsync(RubyPackageInventoryDocument document, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(document);
var options = new ReplaceOptions { IsUpsert = true };
await _collections.RubyPackages
.ReplaceOneAsync(x => x.ScanId == document.ScanId, document, options, cancellationToken)
.ConfigureAwait(false);
}
}

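`UpsertAsync` replaces the whole document keyed on `ScanId`, so a re-scan overwrites the previous inventory rather than accumulating duplicates. A sketch of that behaviour, assuming `repo` is a constructed `RubyPackageInventoryRepository` and the digests are illustrative:

```
var first = new RubyPackageInventoryDocument { ScanId = "scan-0001", ImageDigest = "sha256:aaa" };
var second = new RubyPackageInventoryDocument { ScanId = "scan-0001", ImageDigest = "sha256:bbb" };

await repo.UpsertAsync(first, CancellationToken.None);
await repo.UpsertAsync(second, CancellationToken.None); // replaces, does not duplicate

var current = await repo.GetAsync("scan-0001", CancellationToken.None);
// current!.ImageDigest == "sha256:bbb"; exactly one document exists for this scan id.
```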

@@ -23,6 +23,7 @@ public static class ScannerStorageDefaults
public const string LifecycleRules = "lifecycle_rules";
public const string RuntimeEvents = "runtime.events";
public const string EntryTrace = "entrytrace";
public const string RubyPackages = "ruby.packages";
public const string Migrations = "schema_migrations";
}


@@ -0,0 +1,92 @@
using System.Collections.Generic;
using System.Collections.Immutable;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.Storage.Catalog;
using StellaOps.Scanner.Storage.Repositories;
namespace StellaOps.Scanner.Storage.Services;
public sealed class RubyPackageInventoryStore : IRubyPackageInventoryStore
{
private readonly RubyPackageInventoryRepository _repository;
public RubyPackageInventoryStore(RubyPackageInventoryRepository repository)
{
_repository = repository ?? throw new ArgumentNullException(nameof(repository));
}
public async Task StoreAsync(RubyPackageInventory inventory, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(inventory);
var document = new RubyPackageInventoryDocument
{
ScanId = inventory.ScanId,
ImageDigest = inventory.ImageDigest,
GeneratedAtUtc = inventory.GeneratedAtUtc.UtcDateTime,
Packages = inventory.Packages.Select(ToDocument).ToList()
};
await _repository.UpsertAsync(document, cancellationToken).ConfigureAwait(false);
}
public async Task<RubyPackageInventory?> GetAsync(string scanId, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(scanId);
var document = await _repository.GetAsync(scanId, cancellationToken).ConfigureAwait(false);
if (document is null)
{
return null;
}
var generatedAt = DateTime.SpecifyKind(document.GeneratedAtUtc, DateTimeKind.Utc);
var packages = document.Packages?.Select(FromDocument).ToImmutableArray()
?? ImmutableArray<RubyPackageArtifact>.Empty;
return new RubyPackageInventory(
document.ScanId,
document.ImageDigest ?? string.Empty,
new DateTimeOffset(generatedAt),
packages);
}
private static RubyPackageDocument ToDocument(RubyPackageArtifact artifact)
{
var doc = new RubyPackageDocument
{
Id = artifact.Id,
Name = artifact.Name,
Version = artifact.Version,
Source = artifact.Source,
Platform = artifact.Platform,
Groups = artifact.Groups?.ToList(),
DeclaredOnly = artifact.DeclaredOnly,
RuntimeUsed = artifact.RuntimeUsed,
Provenance = artifact.Provenance,
Runtime = artifact.Runtime,
Metadata = artifact.Metadata is null ? null : new Dictionary<string, string?>(artifact.Metadata, StringComparer.OrdinalIgnoreCase)
};
return doc;
}
private static RubyPackageArtifact FromDocument(RubyPackageDocument document)
{
IReadOnlyList<string>? groups = document.Groups;
IReadOnlyDictionary<string, string?>? metadata = document.Metadata;
return new RubyPackageArtifact(
document.Id,
document.Name,
document.Version,
document.Source,
document.Platform,
groups,
document.DeclaredOnly,
document.RuntimeUsed,
document.Provenance,
document.Runtime,
metadata);
}
}


@@ -17,5 +17,6 @@
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\StellaOps.Scanner.EntryTrace\StellaOps.Scanner.EntryTrace.csproj" />
<ProjectReference Include="..\StellaOps.Scanner.Core\StellaOps.Scanner.Core.csproj" />
</ItemGroup>
</Project>


@@ -0,0 +1,121 @@
using System.Collections.Generic;
using Microsoft.Extensions.Options;
using MongoDB.Driver;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.Storage;
using StellaOps.Scanner.Storage.Mongo;
using StellaOps.Scanner.Storage.Repositories;
using StellaOps.Scanner.Storage.Services;
using StellaOps.Scanner.Storage.Catalog;
using Xunit;
namespace StellaOps.Scanner.Storage.Tests;
public sealed class RubyPackageInventoryStoreTests : IClassFixture<ScannerMongoFixture>
{
private readonly ScannerMongoFixture _fixture;
public RubyPackageInventoryStoreTests(ScannerMongoFixture fixture)
{
_fixture = fixture;
}
[Fact]
public async Task StoreAsync_ThrowsWhenInventoryNull()
{
var store = CreateStore();
await Assert.ThrowsAsync<ArgumentNullException>(async () =>
{
RubyPackageInventory? inventory = null;
await store.StoreAsync(inventory!, CancellationToken.None);
});
}
[Fact]
public async Task GetAsync_ReturnsNullWhenMissing()
{
await ClearCollectionAsync();
var store = CreateStore();
var inventory = await store.GetAsync("scan-missing", CancellationToken.None);
Assert.Null(inventory);
}
[Fact]
public async Task StoreAsync_RoundTripsInventory()
{
await ClearCollectionAsync();
var store = CreateStore();
var scanId = $"scan-{Guid.NewGuid():n}";
var generatedAt = new DateTimeOffset(2025, 11, 12, 16, 10, 0, TimeSpan.Zero);
var packages = new[]
{
new RubyPackageArtifact(
Id: "purl::pkg:gem/rack@3.1.2",
Name: "rack",
Version: "3.1.2",
Source: "rubygems",
Platform: "ruby",
Groups: new[] {"default"},
DeclaredOnly: true,
RuntimeUsed: true,
Provenance: new RubyPackageProvenance("rubygems", "Gemfile.lock", "Gemfile.lock"),
Runtime: new RubyPackageRuntime(
new[] { "config.ru" },
new[] { "config.ru" },
new[] { "require-static" }),
Metadata: new Dictionary<string, string?>(StringComparer.OrdinalIgnoreCase)
{
["source"] = "rubygems",
["lockfile"] = "Gemfile.lock",
["groups"] = "default"
})
};
var inventory = new RubyPackageInventory(scanId, "sha256:image", generatedAt, packages);
await store.StoreAsync(inventory, CancellationToken.None);
var stored = await store.GetAsync(scanId, CancellationToken.None);
Assert.NotNull(stored);
Assert.Equal(scanId, stored!.ScanId);
Assert.Equal("sha256:image", stored.ImageDigest);
Assert.Equal(generatedAt, stored.GeneratedAtUtc);
Assert.Single(stored.Packages);
Assert.Equal("rack", stored.Packages[0].Name);
Assert.Equal("rubygems", stored.Packages[0].Source);
}
private async Task ClearCollectionAsync()
{
var provider = CreateProvider();
await provider.RubyPackages.DeleteManyAsync(Builders<RubyPackageInventoryDocument>.Filter.Empty);
}
private RubyPackageInventoryStore CreateStore()
{
var provider = CreateProvider();
var repository = new RubyPackageInventoryRepository(provider);
return new RubyPackageInventoryStore(repository);
}
private MongoCollectionProvider CreateProvider()
{
var options = Options.Create(new ScannerStorageOptions
{
Mongo = new MongoOptions
{
ConnectionString = _fixture.Runner.ConnectionString,
DatabaseName = _fixture.Database.DatabaseNamespace.DatabaseName,
UseMajorityReadConcern = false,
UseMajorityWriteConcern = false
}
});
return new MongoCollectionProvider(_fixture.Database, options);
}
}


@@ -13,6 +13,7 @@ using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.EntryTrace;
using StellaOps.Scanner.EntryTrace.Serialization;
using StellaOps.Scanner.Storage.Catalog;
@@ -365,6 +366,66 @@ public sealed class ScansEndpointsTests
Assert.Equal(ndjson, payload.Ndjson);
}
[Fact]
public async Task RubyPackagesEndpointReturnsNotFoundWhenMissing()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var response = await client.GetAsync("/api/v1/scans/scan-ruby-missing/ruby-packages");
Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
}
[Fact]
public async Task RubyPackagesEndpointReturnsInventory()
{
const string scanId = "scan-ruby-existing";
const string digest = "sha256:feedfacefeedfacefeedfacefeedfacefeedfacefeedfacefeedfacefeedface";
var generatedAt = DateTime.UtcNow.AddMinutes(-10);
using var factory = new ScannerApplicationFactory();
using (var scope = factory.Services.CreateScope())
{
var repository = scope.ServiceProvider.GetRequiredService<RubyPackageInventoryRepository>();
var document = new RubyPackageInventoryDocument
{
ScanId = scanId,
ImageDigest = digest,
GeneratedAtUtc = generatedAt,
Packages = new List<RubyPackageDocument>
{
new()
{
Id = "pkg:gem/rack@3.1.0",
Name = "rack",
Version = "3.1.0",
Source = "rubygems",
Platform = "ruby",
Groups = new List<string> { "default" },
RuntimeUsed = true,
Provenance = new RubyPackageProvenance("rubygems", "Gemfile.lock", "Gemfile.lock")
}
}
};
await repository.UpsertAsync(document, CancellationToken.None).ConfigureAwait(false);
}
using var client = factory.CreateClient();
var response = await client.GetAsync($"/api/v1/scans/{scanId}/ruby-packages");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<RubyPackagesResponse>();
Assert.NotNull(payload);
Assert.Equal(scanId, payload!.ScanId);
Assert.Equal(digest, payload.ImageDigest);
Assert.Single(payload.Packages);
Assert.Equal("rack", payload.Packages[0].Name);
Assert.Equal("rubygems", payload.Packages[0].Source);
}
private sealed class RecordingCoordinator : IScanCoordinator
{
private readonly IHttpContextAccessor accessor;

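The route handler these tests exercise is not part of this diff. A hypothetical minimal-API wiring consistent with the tests is sketched below; `app` is assumed to be the `WebApplication`, and `RubyPackagesResponse` is inferred from the assertions (`ScanId`, `ImageDigest`, `Packages`) rather than taken from source:

```
app.MapGet("/api/v1/scans/{scanId}/ruby-packages",
    async (string scanId, IRubyPackageInventoryStore store, CancellationToken cancellationToken) =>
    {
        var inventory = await store.GetAsync(scanId, cancellationToken);
        return inventory is null
            ? Results.NotFound()
            : Results.Ok(new RubyPackagesResponse(inventory.ScanId, inventory.ImageDigest, inventory.Packages));
    });

// Assumed response shape; the real contract may differ.
public sealed record RubyPackagesResponse(
    string ScanId,
    string? ImageDigest,
    IReadOnlyList<RubyPackageArtifact> Packages);
```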

@@ -10,5 +10,6 @@
<ItemGroup>
<ProjectReference Include="../../StellaOps.Scanner.Worker/StellaOps.Scanner.Worker.csproj" />
<ProjectReference Include="../../__Libraries/StellaOps.Scanner.Queue/StellaOps.Scanner.Queue.csproj" />
<ProjectReference Include="../../__Libraries/StellaOps.Scanner.Analyzers.Lang.Ruby/StellaOps.Scanner.Analyzers.Lang.Ruby.csproj" />
</ItemGroup>
</Project>


@@ -1,6 +1,7 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Collections.ObjectModel;
using System.IO;
using System.Linq;
using System.Text;
@@ -11,6 +12,8 @@ using System.Threading.Tasks;
using System.Security.Cryptography;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Options;
using StellaOps.Scanner.Analyzers.Lang;
using StellaOps.Scanner.Analyzers.Lang.Ruby;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.EntryTrace;
using StellaOps.Scanner.Surface.Env;
@@ -44,7 +47,8 @@ public sealed class SurfaceManifestStageExecutorTests
environment,
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
new NullRubyPackageInventoryStore());
var context = CreateContext();
@@ -80,7 +84,8 @@ public sealed class SurfaceManifestStageExecutorTests
environment,
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
new NullRubyPackageInventoryStore());
var context = CreateContext();
PopulateAnalysis(context);
@@ -158,6 +163,69 @@ public sealed class SurfaceManifestStageExecutorTests
context.Analysis.Set(ScanAnalysisKeys.LayerComponentFragments, ImmutableArray.Create(fragment));
}
[Fact]
public async Task ExecuteAsync_PersistsRubyPackageInventoryWhenResultsExist()
{
var metrics = new ScannerWorkerMetrics();
var publisher = new TestSurfaceManifestPublisher("tenant-a");
var cache = new RecordingSurfaceCache();
var environment = new TestSurfaceEnvironment("tenant-a");
var hash = CreateCryptoHash();
var packageStore = new RecordingRubyPackageStore();
var executor = new SurfaceManifestStageExecutor(
publisher,
cache,
environment,
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
packageStore);
var context = CreateContext();
PopulateAnalysis(context);
await PopulateRubyAnalyzerResultsAsync(context);
await executor.ExecuteAsync(context, CancellationToken.None);
Assert.NotNull(packageStore.LastInventory);
Assert.Equal(context.ScanId, packageStore.LastInventory!.ScanId);
Assert.NotEmpty(packageStore.LastInventory!.Packages);
}
private static async Task PopulateRubyAnalyzerResultsAsync(ScanJobContext context)
{
var fixturePath = Path.Combine(
ResolveRepositoryRoot(),
"src",
"Scanner",
"__Tests",
"StellaOps.Scanner.Analyzers.Lang.Ruby.Tests",
"Fixtures",
"lang",
"ruby",
"simple-app");
var analyzer = new RubyLanguageAnalyzer();
var engine = new LanguageAnalyzerEngine(new ILanguageAnalyzer[] { analyzer });
var analyzerContext = new LanguageAnalyzerContext(
fixturePath,
TimeProvider.System,
usageHints: null,
services: null,
analysisStore: context.Analysis);
var result = await engine.AnalyzeAsync(analyzerContext, CancellationToken.None);
var dictionary = new Dictionary<string, LanguageAnalyzerResult>(StringComparer.OrdinalIgnoreCase)
{
["ruby"] = result
};
context.Analysis.Set(
ScanAnalysisKeys.LanguageAnalyzerResults,
new ReadOnlyDictionary<string, LanguageAnalyzerResult>(dictionary));
}
[Fact]
public async Task ExecuteAsync_IncludesDenoObservationPayloadWhenPresent()
{
@@ -172,7 +240,8 @@ public sealed class SurfaceManifestStageExecutorTests
environment,
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
new NullRubyPackageInventoryStore());
var context = CreateContext();
var observationBytes = Encoding.UTF8.GetBytes("{\"entrypoints\":[\"mod.ts\"]}");
@@ -390,6 +459,36 @@ public sealed class SurfaceManifestStageExecutorTests
}
}
private sealed class RecordingRubyPackageStore : IRubyPackageInventoryStore
{
public RubyPackageInventory? LastInventory { get; private set; }
public Task StoreAsync(RubyPackageInventory inventory, CancellationToken cancellationToken)
{
LastInventory = inventory;
return Task.CompletedTask;
}
public Task<RubyPackageInventory?> GetAsync(string scanId, CancellationToken cancellationToken)
=> Task.FromResult(LastInventory);
}
private static string ResolveRepositoryRoot()
{
var directory = AppContext.BaseDirectory;
while (!string.IsNullOrWhiteSpace(directory))
{
if (Directory.Exists(Path.Combine(directory, ".git")))
{
return directory;
}
directory = Path.GetDirectoryName(directory) ?? string.Empty;
}
throw new InvalidOperationException("Repository root not found.");
}
private sealed class FakeJobLease : IScanJobLease
{
private readonly Dictionary<string, string> _metadata = new()