git.stella-ops.org/docs/modules/attestor/architecture.md
2026-03-10 00:25:34 +02:00

component_architecture_attestor.md — Stella Ops Attestor (2025Q4)

Derived from Epic 19 – Attestor Console with provenance hooks aligned to the Export Center bundle workflows scoped in Epic 10.

Scope. Implementation-ready architecture for the Attestor: the service that submits DSSE envelopes to Rekor v2, retrieves/validates inclusion proofs, caches results, and exposes verification APIs. It accepts DSSE only from the Signer over mTLS, enforces chain-of-trust to Stella Ops roots, and returns {uuid, index, proof, logURL} to calling services (Scanner.WebService for SBOMs; backend for final reports; Excititor exports when configured).


0) Mission & boundaries

Mission. Turn a signed DSSE envelope from the Signer into a transparency-logged, verifiable fact with a durable, replayable proof (Merkle inclusion plus optional checkpoint anchoring). Provide fast verification for downstream consumers and a stable retrieval interface for UI/CLI.

Boundaries.

  • Attestor does not sign; it must not accept unsigned or third-party-signed bundles.
  • Attestor does not decide PASS/FAIL; it logs attestations for SBOMs, reports, and export artifacts.
  • Rekor v2 backends may be local (self-hosted) or remote; Attestor handles both with retries, backoff, and idempotency.

1) Topology & dependencies

Process shape: single stateless service stellaops/attestor behind mTLS.

Dependencies:

  • Signer (caller) — authenticated via mTLS and Authority OpToks.
  • Rekor v2 — tile-backed transparency log endpoint(s).
  • RustFS (S3-compatible) — optional archive store for DSSE envelopes & verification bundles.
  • PostgreSQL — local cache of {uuid, index, proof, artifactSha256, bundleSha256}; job state; audit.
  • Valkey — dedupe/idempotency keys and short-lived rate-limit buckets.
  • Licensing Service (optional) — “endorse” call for cross-log publishing when the customer opts in.

Trust boundary: Only the Signer is allowed to call submission endpoints; enforced by mTLS peer cert allowlist + aud=attestor OpTok.


Roles, identities & scopes

  • Subjects — immutable digests for artifacts (container images, SBOMs, reports) referenced in DSSE envelopes.
  • Issuers — authenticated builders/scanners/policy engines signing evidence; tracked with mode (keyless, kms, hsm, fido2) and tenant scope.
  • Consumers — Scanner, Export Center, CLI, Console, Policy Engine that verify proofs using Attestor APIs.
  • Authority scopes — attestor.write, attestor.verify, attestor.read, and administrative scopes for key management; all calls mTLS/DPoP-bound.

Supported predicate types

  • StellaOps.BuildProvenance@1
  • StellaOps.SBOMAttestation@1
  • StellaOps.ScanResults@1
  • StellaOps.PolicyEvaluation@1
  • StellaOps.VEXAttestation@1
  • StellaOps.RiskProfileEvidence@1
  • StellaOps.SignedException@1

Each predicate embeds subject digests, issuer metadata, policy context, materials, and optional transparency hints. Unsupported predicates return 422 predicate_unsupported.

Golden fixtures: Deterministic JSON statements for each predicate live in src/Attestor/StellaOps.Attestor.Types/samples. They are kept stable by the StellaOps.Attestor.Types.Tests project so downstream docs and contracts can rely on them without drifting.

Envelope & signature model

  • DSSE envelopes canonicalised (stable JSON ordering) prior to hashing.
  • Signature modes: keyless (Fulcio cert chain), keyful (KMS/HSM), hardware (FIDO2/WebAuthn). Multiple signatures allowed.
  • Rekor entry stores bundle hash, certificate chain, and optional witness endorsements.
  • Archive CAS retains original envelope plus metadata for offline verification.
  • Envelope serializer emits compact (canonical, minified) and expanded (annotated, indented) JSON variants off the same canonical byte stream so hashing stays deterministic while humans get context.
  • Payload handling supports optional compression (gzip, brotli) with compression metadata recorded in the expanded view and digesting always performed over the uncompressed bytes.
  • Expanded envelopes surface detached payload references (URI, digest, media type, size) so large artifacts can live in CAS/object storage while the canonical payload remains embedded for verification.
  • Payload previews auto-render JSON or UTF-8 text in the expanded output to simplify triage in air-gapped and offline review flows.
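Because digests are always computed over the uncompressed bytes, compression can be toggled without changing any content-addressed ID. A minimal sketch of that rule (illustrative only; `package_payload`/`unpack_and_verify` are hypothetical names, not the Attestor serializer API):

```python
import gzip
import hashlib
import json

def package_payload(payload: bytes, compress: bool = True) -> dict:
    # Digest first, over the *uncompressed* bytes; compression is recorded
    # as metadata and never participates in hashing.
    digest = hashlib.sha256(payload).hexdigest()
    stored = gzip.compress(payload) if compress else payload
    return {"payloadSha256": digest,
            "compression": "gzip" if compress else "none",
            "originalSize": len(payload),
            "stored": stored}

def unpack_and_verify(record: dict) -> bytes:
    data = record["stored"]
    if record["compression"] == "gzip":
        data = gzip.decompress(data)
    if hashlib.sha256(data).hexdigest() != record["payloadSha256"]:
        raise ValueError("payload digest mismatch")
    return data

payload = json.dumps({"predicateType": "StellaOps.SBOMAttestation@1"}).encode()
assert unpack_and_verify(package_payload(payload)) == payload
```

The same property is what lets the expanded view carry compression metadata while the canonical byte stream, and therefore every derived hash, stays deterministic.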

Verification pipeline overview

  1. Fetch envelope (from request, cache, or storage) and validate DSSE structure.
  2. Verify signature(s) against configured trust roots; evaluate issuer policy.
  3. Retrieve or acquire inclusion proof from Rekor (primary + optional mirror).
  4. Validate Merkle proof against checkpoint; optionally verify witness endorsement.
  5. Return cached verification bundle including policy verdict and timestamps.

Rekor Inclusion Proof Verification (SPRINT_3000_0001_0001)

The Attestor implements RFC 6962-compliant Merkle inclusion proof verification for Rekor transparency log entries:

Components:

  • MerkleProofVerifier — Verifies Merkle audit paths per RFC 6962 Section 2.1.1
  • CheckpointSignatureVerifier — Parses and verifies Rekor checkpoint signatures (ECDSA/Ed25519)
  • RekorVerificationOptions — Configuration for public keys, offline mode, and checkpoint caching

Verification Flow:

  1. Parse checkpoint body (origin, tree size, root hash)
  2. Verify checkpoint signature against Rekor public key
  3. Compute leaf hash from canonicalized entry
  4. Walk Merkle path from leaf to root using RFC 6962 interior node hashing
  5. Compare computed root with checkpoint root hash (constant-time)
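The five steps above follow the standard RFC 6962 (and RFC 9162) inclusion-proof check. A self-contained sketch, assuming SHA-256 and the usual 0x00/0x01 leaf/interior domain separators (illustrative; not the MerkleProofVerifier source):

```python
import hashlib
import hmac

def leaf_hash(entry: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + entry).digest()        # RFC 6962 leaf prefix

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest() # interior prefix

def verify_inclusion(index: int, tree_size: int, entry: bytes,
                     path: list, root: bytes) -> bool:
    """Walk the audit path from leaf to root (RFC 9162 §2.1.3.2)."""
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    r = leaf_hash(entry)
    for p in path:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = node_hash(p, r)
            if fn % 2 == 0:               # rightmost node: skip missing levels
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)
        fn >>= 1
        sn >>= 1
    # Constant-time comparison of computed root vs checkpoint root.
    return sn == 0 and hmac.compare_digest(r, root)

la, lb = leaf_hash(b"entry-a"), leaf_hash(b"entry-b")
assert verify_inclusion(0, 2, b"entry-a", [lb], node_hash(la, lb))
```

The final `hmac.compare_digest` mirrors step 5's constant-time root comparison.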

Offline Mode:

  • Bundled checkpoints can be used in air-gapped environments
  • EnableOfflineMode and OfflineCheckpointBundlePath configuration options
  • AllowOfflineWithoutSignature for fully disconnected scenarios (reduced security)

Metrics:

  • attestor.rekor_inclusion_verify_total — Verification attempts by result
  • attestor.rekor_checkpoint_verify_total — Checkpoint signature verifications
  • attestor.rekor_offline_verify_total — Offline mode verifications
  • attestor.rekor_checkpoint_cache_hits/misses — Checkpoint cache performance

UI & CLI touchpoints

  • Console: Evidence browser, verification report, chain-of-custody graph, issuer/key management, attestation workbench, bulk verification views.
  • CLI: stella attest sign|verify|list|fetch|key with offline verification and export bundle support.
  • SDKs expose sign/verify primitives for build pipelines.

Performance & observability targets

  • Throughput goal: ≥ 1,000 envelopes/minute per worker with cached verification.
  • Metrics: attestor_submission_total, attestor_verify_seconds, attestor_rekor_latency_seconds, attestor_cache_hit_ratio.
  • Logs include tenant, issuer, subjectDigest, rekorUuid, proofStatus; traces cover submission → Rekor → cache → response path.

2) Data model (PostgreSQL)

Database: attestor

Tables & schemas

  • entries table

    CREATE TABLE attestor.entries (
      id UUID PRIMARY KEY,                                -- rekor-uuid
      artifact_sha256 TEXT NOT NULL,
      artifact_kind TEXT NOT NULL,                        -- sbom|report|vex-export
      artifact_image_digest TEXT,
      artifact_subject_uri TEXT,
      bundle_sha256 TEXT NOT NULL,                        -- canonicalized DSSE
      log_index INTEGER,                                  -- log index/sequence if provided by backend
      proof_checkpoint JSONB,                             -- { origin, size, rootHash, timestamp }
      proof_inclusion JSONB,                              -- { leafHash, path[] } Merkle path (tiles)
      log_url TEXT,
      log_id TEXT,
      created_at TIMESTAMPTZ DEFAULT NOW(),
      status TEXT NOT NULL,                               -- included|pending|failed
      signer_identity JSONB                               -- { mode, issuer, san?, kid? }
    );
    
  • dedupe table

    CREATE TABLE attestor.dedupe (
      key TEXT PRIMARY KEY,                               -- bundle:<sha256> idempotency key
      rekor_uuid UUID NOT NULL,
      created_at TIMESTAMPTZ DEFAULT NOW(),
      ttl_at TIMESTAMPTZ NOT NULL                         -- for scheduled cleanup
    );
    
  • audit table

    CREATE TABLE attestor.audit (
      id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
      ts TIMESTAMPTZ DEFAULT NOW(),
      caller_cn TEXT,
      caller_mtls_thumbprint TEXT,
      caller_sub TEXT,
      caller_aud TEXT,
      action TEXT NOT NULL,                               -- submit|verify|fetch
      artifact_sha256 TEXT,
      bundle_sha256 TEXT,
      rekor_uuid UUID,
      log_index INTEGER,
      result TEXT NOT NULL,
      latency_ms INTEGER,
      backend TEXT
    );
    

Indexes:

  • entries: indexes on artifact_sha256, bundle_sha256, created_at, and composite (status, created_at DESC).
  • dedupe: unique index on key; scheduled job cleans rows where ttl_at < NOW() (24–48h retention).
  • audit: index on ts for time-range queries.

2.1) Content-Addressed Identifier Formats

The ProofChain library (StellaOps.Attestor.ProofChain) defines canonical content-addressed identifiers for all proof chain components. These IDs ensure determinism, tamper-evidence, and reproducibility.

Identifier Types

| ID Type | Format | Source | Example |
|---|---|---|---|
| ArtifactID | `sha256:<64-hex>` | Container manifest or binary hash | `sha256:a1b2c3d4e5f6...` |
| SBOMEntryID | `<sbomDigest>:<purl>[@<version>]` | SBOM hash + component PURL | `sha256:91f2ab3c:pkg:npm/lodash@4.17.21` |
| EvidenceID | `sha256:<hash>` | Canonical evidence JSON | `sha256:e7f8a9b0c1d2...` |
| ReasoningID | `sha256:<hash>` | Canonical reasoning JSON | `sha256:f0e1d2c3b4a5...` |
| VEXVerdictID | `sha256:<hash>` | Canonical VEX verdict JSON | `sha256:d4c5b6a7e8f9...` |
| ProofBundleID | `sha256:<merkle_root>` | Merkle root of bundle components | `sha256:1a2b3c4d5e6f...` |
| GraphRevisionID | `grv_sha256:<hash>` | Merkle root of graph state | `grv_sha256:9f8e7d6c5b4a...` |

Canonicalization (RFC 8785)

All JSON-based IDs use RFC 8785 (JCS) canonicalization:

  • UTF-8 encoding
  • Lexicographically sorted keys
  • No whitespace (minified)
  • No volatile fields (timestamps, random values excluded)

Implementation: StellaOps.Attestor.ProofChain.Json.Rfc8785JsonCanonicalizer
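A rough Python equivalent of these rules (sorted keys, minified, UTF-8) can be built on `json.dumps`; note this matches full RFC 8785 only for strings, integers, booleans, and nulls, since JCS additionally mandates ECMAScript number serialization. `evidence_id` is a hypothetical helper, not the Rfc8785JsonCanonicalizer API:

```python
import hashlib
import json

def canonicalize(obj) -> bytes:
    # Sorted keys, no whitespace, UTF-8; volatile fields (timestamps, random
    # values) must already be excluded from `obj` by the caller.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def evidence_id(predicate: dict) -> str:
    # Content-addressed ID over the canonical form (illustrative helper).
    return "sha256:" + hashlib.sha256(canonicalize(predicate)).hexdigest()

a = {"source": "scanner-x", "vulnerabilityId": "CVE-2025-0001"}
b = {"vulnerabilityId": "CVE-2025-0001", "source": "scanner-x"}
assert evidence_id(a) == evidence_id(b)  # key order never changes the ID
```

The assertion at the end is the property the proof chain depends on: two producers serializing the same predicate in different key orders derive the same ID.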

Merkle Tree Construction

ProofBundleID and GraphRevisionID use deterministic binary Merkle trees:

  • SHA-256 hash function
  • Lexicographically sorted leaf inputs
  • Standard binary tree construction (pair-wise hashing)
  • Odd leaves promoted to next level

Implementation: StellaOps.Attestor.ProofChain.Merkle.DeterministicMerkleTreeBuilder
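Under those rules the root is independent of input order, which is what makes ProofBundleID reproducible. A sketch (the leaf encoding and the exact leaf set of `proof_bundle_id` are illustrative assumptions, not the DeterministicMerkleTreeBuilder contract):

```python
import hashlib

def merkle_root(leaf_inputs: list) -> bytes:
    if not leaf_inputs:
        raise ValueError("at least one leaf required")
    # Leaves sorted lexicographically, so insertion order cannot change the root.
    level = [hashlib.sha256(leaf).digest() for leaf in sorted(leaf_inputs)]
    while len(level) > 1:
        nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])  # odd node promoted unchanged
        level = nxt
    return level[0]

def proof_bundle_id(sbom_entry_id: str, evidence_ids: list,
                    reasoning_id: str, vex_verdict_id: str) -> str:
    # Assumed leaf set: UTF-8 bytes of each component ID (illustrative).
    leaves = [sbom_entry_id, *evidence_ids, reasoning_id, vex_verdict_id]
    return "sha256:" + merkle_root([x.encode() for x in leaves]).hex()
```

Sorting before pairing means a bundle assembled from the same components always yields the same `sha256:<merkle_root>`, no matter how workers ordered the evidence.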

ID Generation Interface

// Core interface for ID generation
public interface IContentAddressedIdGenerator
{
    EvidenceId GenerateEvidenceId(EvidencePredicate predicate);
    ReasoningId GenerateReasoningId(ReasoningPredicate predicate);
    VexVerdictId GenerateVexVerdictId(VexPredicate predicate);
    ProofBundleId GenerateProofBundleId(SbomEntryId sbom, EvidenceId[] evidence, 
        ReasoningId reasoning, VexVerdictId verdict);
    GraphRevisionId GenerateGraphRevisionId(GraphState state);
}

Predicate Types

The ProofChain library defines DSSE predicates for proof chain attestations. All predicates follow the in-toto Statement/v1 format.

Predicate Type Registry

| Predicate | Type URI | Purpose | Signer Role |
|---|---|---|---|
| Evidence | `evidence.stella/v1` | Raw evidence from scanner/ingestor (findings, reachability data) | Scanner/Ingestor key |
| Reasoning | `reasoning.stella/v1` | Policy evaluation trace with inputs and intermediate findings | Policy/Authority key |
| VEX Verdict | `cdx-vex.stella/v1` | VEX verdict with status, justification, and provenance | VEXer/Vendor key |
| Proof Spine | `proofspine.stella/v1` | Merkle-aggregated proof spine linking evidence to verdict | Authority key |
| Verdict Receipt | `verdict.stella/v1` | Final surfaced decision receipt with policy rule reference | Authority key |
| SBOM Linkage | `https://stella-ops.org/predicates/sbom-linkage/v1` | SBOM-to-component linkage metadata | Generator key |
| Signed Exception | `https://stellaops.io/attestation/v1/signed-exception` | DSSE-signed budget exception with recheck policy | Authority key |

Evidence Statement (evidence.stella/v1)

Captures raw evidence collected from scanners or vulnerability feeds.

| Field | Type | Description |
|---|---|---|
| source | string | Scanner or feed name that produced this evidence |
| sourceVersion | string | Version of the source tool |
| collectionTime | DateTimeOffset | UTC timestamp when evidence was collected |
| sbomEntryId | string | Reference to the SBOM entry this evidence relates to |
| vulnerabilityId | string? | CVE or vulnerability identifier if applicable |
| rawFinding | object | Pointer to or inline representation of raw finding data |
| evidenceId | string | Content-addressed ID (`sha256:<hash>`) |

Reasoning Statement (reasoning.stella/v1)

Captures policy evaluation traces linking evidence to decisions.

| Field | Type | Description |
|---|---|---|
| sbomEntryId | string | SBOM entry this reasoning applies to |
| evidenceIds | string[] | Evidence IDs considered in this reasoning |
| policyVersion | string | Version of the policy used for evaluation |
| inputs | object | Inputs to the reasoning process (evaluation time, thresholds, lattice rules) |
| intermediateFindings | object? | Intermediate findings from the evaluation |
| reasoningId | string | Content-addressed ID (`sha256:<hash>`) |

VEX Verdict Statement (cdx-vex.stella/v1)

Captures VEX status determinations with provenance.

| Field | Type | Description |
|---|---|---|
| sbomEntryId | string | SBOM entry this verdict applies to |
| vulnerabilityId | string | CVE, GHSA, or other vulnerability identifier |
| status | string | VEX status: not_affected, affected, fixed, under_investigation |
| justification | string | Justification for the VEX status |
| policyVersion | string | Version of the policy used |
| reasoningId | string | Reference to the reasoning that led to this verdict |
| vexVerdictId | string | Content-addressed ID (`sha256:<hash>`) |

Proof Spine Statement (proofspine.stella/v1)

Merkle-aggregated proof bundle linking all chain components.

| Field | Type | Description |
|---|---|---|
| sbomEntryId | string | SBOM entry this proof spine covers |
| evidenceIds | string[] | Sorted list of evidence IDs included in this proof bundle |
| reasoningId | string | Reasoning ID linking evidence to verdict |
| vexVerdictId | string | VEX verdict ID for this entry |
| policyVersion | string | Version of the policy used |
| proofBundleId | string | Content-addressed ID (`sha256:<merkle_root>`) |

Verdict Receipt Statement (verdict.stella/v1)

Final surfaced decision receipt with full provenance.

| Field | Type | Description |
|---|---|---|
| graphRevisionId | string | Graph revision ID this verdict was computed from |
| findingKey | object | Finding key (sbomEntryId + vulnerabilityId) |
| rule | object | Policy rule that produced this verdict |
| decision | object | Decision made by the rule |
| inputs | object | Inputs used to compute this verdict |
| outputs | object | Outputs/references from this verdict |
| createdAt | DateTimeOffset | UTC timestamp when verdict was created |

SBOM Linkage Statement (sbom-linkage/v1)

SBOM-to-component linkage metadata.

| Field | Type | Description |
|---|---|---|
| sbom | object | SBOM descriptor (id, format, specVersion, mediaType, sha256, location) |
| generator | object | Generator tool descriptor |
| generatedAt | DateTimeOffset | UTC timestamp when linkage was generated |
| incompleteSubjects | object[]? | Subjects that could not be fully resolved |
| tags | object? | Arbitrary tags for classification or filtering |

Reference: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Statements/

Signed Exception Statement (signed-exception/v1)

DSSE-signed exception objects with recheck policy for independent verification and automated re-approval workflows.

| Field | Type | Description |
|---|---|---|
| schemaVersion | string | Schema version (current: "1.0") |
| exception | object | The wrapped BudgetExceptionEntry |
| exceptionContentId | string | Content-addressed ID (`sha256:<hash>`) for deduplication |
| signedAt | DateTimeOffset | UTC timestamp when the exception was signed |
| recheckPolicy | object | Recheck policy configuration |
| environments | string[]? | Environments this exception applies to (dev, staging, prod) |
| coveredViolationIds | string[]? | IDs of violations this exception covers |
| approvalPolicyDigest | string? | Digest of the policy bundle that approved this exception |
| renewsExceptionId | string? | Previous exception ID for renewal chains |
| status | string | Status: Active, PendingRecheck, Expired, Revoked, PendingApproval |

Recheck Policy Schema

| Field | Type | Description |
|---|---|---|
| recheckIntervalDays | int | Interval in days between rechecks (default: 30) |
| autoRecheckEnabled | bool | Whether automatic recheck scheduling is enabled |
| maxRenewalCount | int? | Maximum renewals before escalated approval is required |
| renewalCount | int | Current renewal count |
| nextRecheckAt | DateTimeOffset? | Next scheduled recheck timestamp |
| lastRecheckAt | DateTimeOffset? | Last completed recheck timestamp |
| requiresReapprovalOnExpiry | bool | Whether re-approval is required after expiry |
| approvalRoles | string[]? | Roles required for approval |

Exception Signing API

The exception signing service provides endpoints for signing, verifying, and renewing exceptions:

| Endpoint | Method | Description |
|---|---|---|
| /internal/api/v1/exceptions/sign | POST | Sign an exception and wrap it in a DSSE envelope |
| /internal/api/v1/exceptions/verify | POST | Verify a signed exception envelope |
| /internal/api/v1/exceptions/recheck-status | POST | Check whether an exception requires recheck |
| /internal/api/v1/exceptions/renew | POST | Renew an expired or expiring exception |

Reference: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Statements/DsseSignedExceptionPayload.cs


3) Input contract (from Signer)

Attestor accepts only DSSE envelopes that satisfy all of:

  1. mTLS peer certificate maps to the signer service (CA-pinned).
  2. Authority OpTok with aud=attestor, scope=attestor.write, DPoP- or mTLS-bound.
  3. DSSE envelope is signed by the Signer’s key (or includes a Fulcio-issued cert chain) and chains to configured roots (Fulcio/KMS).
  4. Predicate type is one of the Stella Ops types (sbom/report/vex-export) with a valid schema.
  5. subject[*].digest.sha256 is present and canonicalized.

Wire shape (JSON):

{
  "bundle": { "dsse": { "payloadType": "application/vnd.in-toto+json", "payload": "<b64>", "signatures": [ ... ] },
              "certificateChain": [ "-----BEGIN CERTIFICATE-----..." ],
              "mode": "keyless" },
  "meta": {
    "artifact": { "sha256": "<subject sha256>", "kind": "sbom|report|vex-export", "imageDigest": "sha256:..." },
    "bundleSha256": "<sha256 of canonical dsse>",
    "logPreference": "primary",               // "primary" | "mirror" | "both"
    "archive": true                           // whether Attestor should archive bundle to S3
  }
}
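The bundleSha256 in meta doubles as the idempotency key stored in attestor.dedupe (`bundle:<sha256>`). A sketch of how such a key could be derived from the canonicalized bundle (`canonical_bytes` approximates the canonical JSON writer; both names are illustrative):

```python
import hashlib
import json

def canonical_bytes(obj) -> bytes:
    # Approximates the canonical DSSE writer: stable key order, no whitespace.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def dedupe_key(bundle: dict) -> str:
    # attestor.dedupe primary key: "bundle:<sha256>" over the canonical bundle,
    # so resubmitting the same envelope resolves to the existing rekorUuid.
    return "bundle:" + hashlib.sha256(canonical_bytes(bundle)).hexdigest()

bundle = {"dsse": {"payloadType": "application/vnd.in-toto+json",
                   "payload": "e30=", "signatures": []},
          "mode": "keyless"}
assert dedupe_key(bundle) == dedupe_key(json.loads(json.dumps(bundle)))
```

Because the key is content-addressed, two callers racing to submit the same envelope collide on the same dedupe row rather than producing duplicate Rekor entries.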

4) APIs

4.1 Signing

POST /api/v1/attestations:sign (mTLS + OpTok required)

  • Purpose: Deterministically wrap Stella Ops payloads in DSSE envelopes before Rekor submission. Reuses the submission rate limiter and honours caller tenancy/audience scopes.

  • Body:

    {
      "keyId": "signing-key-id",
      "payloadType": "application/vnd.in-toto+json",
      "payload": "<base64 payload>",
      "mode": "keyless|keyful|kms",
      "certificateChain": ["-----BEGIN CERTIFICATE-----..."],
      "artifact": {
        "sha256": "<subject sha256>",
        "kind": "sbom|report|vex-export",
        "imageDigest": "sha256:...",
        "subjectUri": "oci://..."
      },
      "logPreference": "primary|mirror|both",
      "archive": true
    }
    
  • Behaviour:

    • Resolve the signing key from attestor.signing.keys[] (includes algorithm, provider, and optional KMS version).
    • Compute the DSSE pre-authentication encoding, sign with the resolved provider (default EC, BouncyCastle Ed25519, or File-KMS ES256), and add static + request certificate chains.
    • Canonicalise the resulting bundle, derive bundleSha256, and mirror the request meta shape used by /api/v1/rekor/entries.
    • Emit attestor.sign_total{result,algorithm,provider} and attestor.sign_latency_seconds{algorithm,provider} metrics and append an audit row (action=sign).
  • Response 200:

    {
      "bundle": { "dsse": { "payloadType": "...", "payload": "...", "signatures": [{ "keyid": "signing-key-id", "sig": "..." }] }, "certificateChain": ["..."], "mode": "kms" },
      "meta": { "artifact": { "sha256": "...", "kind": "sbom" }, "bundleSha256": "...", "logPreference": "primary", "archive": true },
      "key": { "keyId": "signing-key-id", "algorithm": "ES256", "mode": "kms", "provider": "kms", "signedAt": "2025-11-01T12:34:56Z" }
    }
    
  • Errors: 400 key_not_found, 400 payload_missing|payload_invalid_base64|artifact_sha_missing, 400 mode_not_allowed, 403 client_certificate_required, 401 invalid_token, 500 signing_failed.
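The pre-authentication encoding signed in the behaviour above is defined by the DSSE specification: the signature covers PAE(payloadType, payload) rather than the raw payload, binding the declared type to the signed bytes. A minimal sketch:

```python
def pae(payload_type: str, payload: bytes) -> bytes:
    # DSSE pre-authentication encoding (the bytes that actually get signed):
    # PAE = "DSSEv1" SP LEN(type) SP type SP LEN(payload) SP payload
    t = payload_type.encode("utf-8")
    return (b" ".join([b"DSSEv1", str(len(t)).encode(), t,
                       str(len(payload)).encode()]) + b" " + payload)

# Matches the well-known DSSE specification test vector.
assert pae("http://example.com/HelloWorld", b"hello world") == \
    b"DSSEv1 29 http://example.com/HelloWorld 11 hello world"
```

Length-prefixing both fields makes the encoding injective, so no (type, payload) pair can collide with another after concatenation.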

4.2 Submission

POST /api/v1/rekor/entries (mTLS + OpTok required)

  • Body: as above.

  • Behavior:

    • Verify caller (mTLS + OpTok).
    • Validate DSSE bundle (signature, cert chain to Fulcio/KMS; DSSE structure; payloadType allowed).
    • Idempotency: compute bundleSha256; check dedupe. If present, return existing rekorUuid.
    • Rekor pre-check: call Rekor index lookup (/api/v2/index/retrieve with v1 fallback) by bundle hash before submit; if a UUID is found, fetch and reuse existing entry metadata instead of creating a duplicate.
    • Submit canonicalized bundle to Rekor v2 (primary or mirror according to logPreference).
    • Retrieve inclusion proof (blocking until inclusion or up to proofTimeoutMs); if backend returns promise only, return status=pending and retry asynchronously.
    • Persist entries record; archive DSSE to S3 if archive=true.
  • Response 200:

    {
      "uuid": "…",
      "index": 123456,
      "proof": {
        "checkpoint": { "origin": "rekor@site", "size": 987654, "rootHash": "…", "timestamp": "…" },
        "inclusion": { "leafHash": "…", "path": ["…","…"] }
      },
      "logURL": "https://rekor…/api/v2/log/…/entries/…",
      "status": "included"
    }
    
  • Errors: 401 invalid_token, 403 not_signer|chain_untrusted, 409 duplicate_bundle (with existing uuid), 502 rekor_unavailable, 504 proof_timeout.

4.3 Proof retrieval

GET /api/v1/rekor/entries/{uuid}

  • Returns entries row (refreshes proof from Rekor if stale/missing).
  • Accepts ?refresh=true to force backend query.

4.4 Verification (third-party or internal)

POST /api/v1/rekor/verify

  • Body (one of):

    • { "uuid": "…" }
    • { "bundle": { …DSSE… } }
    • { "artifactSha256": "…" } (looks up most recent entry)
  • Checks:

    1. Bundle signature → cert chain to Fulcio/KMS roots configured.
    2. Inclusion proof → recompute leaf hash; verify Merkle path against checkpoint root.
    3. Optionally verify checkpoint against local trust anchors (if Rekor signs checkpoints).
    4. Confirm subject.digest matches callerâ€provided hash (when given).
    5. Fetch transparency witness statement when enabled; cache results and downgrade status to WARN when endorsements are missing or mismatched.
  • Response:

    { "ok": true, "uuid": "…", "index": 123, "logURL": "…", "checkedAt": "…" }
    

4.5 Bulk verification

POST /api/v1/rekor/verify:bulk enqueues a verification job containing up to quotas.bulk.maxItemsPerJob items. Each item mirrors the single verification payload (uuid | artifactSha256 | subject+envelopeId, optional policyVersion/refreshProof). The handler persists a PostgreSQL job record (bulk_jobs table) and returns 202 Accepted with a job descriptor and polling URL.

GET /api/v1/rekor/verify:bulk/{jobId} returns progress and per-item results (subject/uuid, status, issues, cached verification report if available). Jobs are tenant- and subject-scoped; only the initiating principal can read their progress.

Worker path: BulkVerificationWorker claims queued jobs (status=queued → running), executes items sequentially through the cached verification service, updates progress counters, and records metrics:

  • attestor.bulk_jobs_total{status} – completed/failed jobs
  • attestor.bulk_job_duration_seconds{status} – job runtime
  • attestor.bulk_items_total{status} – per-item outcomes (succeeded, verification_failed, exception)

The worker honours bulkVerification.itemDelayMilliseconds for throttling and reschedules persistence conflicts with optimistic version checks. Results hydrate the verification cache; failed items record the error reason without aborting the overall job.


5) Rekor v2 driver (backend)

  • Canonicalization: DSSE envelopes are normalized (stable JSON ordering, no insignificant whitespace) before hashing and submission.

  • Transport: HTTP/2 with retries (exponential backoff, jitter), budgeted timeouts.

  • Idempotency: if backend returns “already exists,” map to existing uuid.

  • Proof acquisition:

    • In synchronous mode, poll the log for inclusion up to proofTimeoutMs.
    • In asynchronous mode, return pending and schedule a proof fetcher job (PostgreSQL job record + backoff).
  • Mirrors/dual logs:

    • When logPreference="both", submit to primary and mirror; store both UUIDs (primary canonical).
    • Optional cloud endorsement: POST to the Stella Ops cloud /attest/endorse with {uuid, artifactSha256}; store returned endorsement id.
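The retry behaviour above (exponential backoff with jitter under a budgeted timeout) can be sketched as a delay-schedule generator; parameter names echo the rekor.primary configuration, but the function itself is illustrative:

```python
import random

def backoff_schedule(max_attempts: int, base_ms: int = 250,
                     cap_ms: int = 15_000, rng=random.random):
    # Full jitter: each delay is drawn uniformly from [0, min(cap, base * 2^n)],
    # keeping retries inside the proofTimeoutMs budget while spreading
    # concurrent callers so the Rekor backend avoids thundering herds.
    for attempt in range(max_attempts):
        ceiling = min(cap_ms, base_ms * (2 ** attempt))
        yield rng() * ceiling

# With jitter pinned to its maximum, the ceilings double until they hit the cap.
delays = list(backoff_schedule(6, rng=lambda: 1.0))
assert delays == [250, 500, 1000, 2000, 4000, 8000]
```

A driver would sleep for each yielded delay between attempts and classify the final failure as retryable (502 rekor_unavailable) or a proof timeout (504).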

6) Security model

  • mTLS required for submission from Signer (CA-pinned).

  • Authority token with aud=attestor and DPoP/mTLS binding must be presented; Attestor verifies both.

  • Bundle acceptance policy:

    • DSSE signature must chain to the configured Fulcio (keyless) or KMS/HSM roots.
    • SAN (Subject Alternative Name) must match Signer identity policy (e.g., urn:stellaops:signer or pinned OIDC issuer).
    • Predicate predicateType must be on allowlist (sbom/report/vex-export).
    • subject.digest.sha256 values must be present and well-formed (hex).
  • No public submission path. Never accept bundles from untrusted clients.

  • Client certificate allowlists: optional security.mtls.allowedSubjects / allowedThumbprints tighten peer identity checks beyond CA pinning.

  • Rate limits: token-bucket per caller derived from quotas.perCaller (QPS/burst) returns 429 + Retry-After when exceeded.

  • Scope enforcement: API separates attestor.write, attestor.verify, and attestor.read policies; verification/list endpoints accept read or verify scopes while submission endpoints remain write-only.

  • Request hygiene: JSON content-type is mandatory (415 returned otherwise); DSSE payloads are capped (default 2 MiB), certificate chains limited to six entries, and signatures to six per envelope to mitigate parsing abuse.

  • Redaction: Attestor never logs secret material; DSSE payloads should be public by design (SBOMs/reports). If customers require redaction, enforce policy at Signer (predicate minimization) before Attestor.
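The per-caller rate limit noted above (a token bucket derived from quotas.perCaller, returning 429 + Retry-After when exhausted) can be sketched as follows; the class is illustrative, not the Attestor implementation:

```python
import time

class TokenBucket:
    """Per-caller bucket: refills at `qps` tokens/second up to `burst`;
    a request that cannot take a token maps to 429 + Retry-After."""

    def __init__(self, qps: float, burst: float, now=time.monotonic):
        self.qps, self.burst, self.now = qps, burst, now
        self.tokens, self.last = burst, now()

    def try_acquire(self) -> bool:
        t = self.now()
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.qps)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

    def retry_after_seconds(self) -> float:
        # Time until one full token is available again (Retry-After hint).
        return max(0.0, (1 - self.tokens) / self.qps)

clock = [0.0]
bucket = TokenBucket(qps=50, burst=2, now=lambda: clock[0])
assert bucket.try_acquire() and bucket.try_acquire()  # burst drained
assert not bucket.try_acquire()                       # would return 429
clock[0] += 0.02                                      # 1/qps seconds later
assert bucket.try_acquire()
```

Keyed by caller identity in Valkey, the same arithmetic gives the short-lived rate-limit buckets listed under dependencies.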


7) Storage & archival

  • Entries in PostgreSQL provide a local ledger keyed by rekorUuid and artifact sha256 for quick reverse lookups.

  • S3 archival (if enabled):

    s3://stellaops/attest/
      dsse/<bundleSha256>.json
      proof/<rekorUuid>.json
      bundle/<artifactSha256>.zip               # optional verification bundle
    
  • Verification bundles (zip):

    • DSSE (*.dsse.json), proof (*.proof.json), chain.pem (certs), README.txt with verification steps & hashes.

8) Observability & audit

Metrics (Prometheus):

  • attestor.sign_total{result,algorithm,provider}
  • attestor.sign_latency_seconds{algorithm,provider}
  • attestor.submit_total{result,backend}
  • attestor.submit_latency_seconds{backend}
  • attestor.proof_fetch_total{subject,issuer,policy,result,attestor.log.backend}
  • attestor.verify_total{subject,issuer,policy,result}
  • attestor.verify_latency_seconds{subject,issuer,policy,result}
  • attestor.dedupe_hits_total
  • attestor.errors_total{type}

SLO guardrails:

  • attestor.verify_latency_seconds P95 ≤ 2 s per policy.
  • attestor.verify_total{result="failed"} ≤ 1 % of attestor.verify_total over 30 min rolling windows.

Correlation:

  • HTTP callers may supply X-Correlation-Id; Attestor will echo the header and push CorrelationId into the log scope for cross-service tracing.

Tracing:

  • Spans: attestor.sign, validate, rekor.submit, rekor.poll, persist, archive, attestor.verify, attestor.verify.refresh_proof.

Audit:

  • Immutable audit rows (ts, caller, action, hashes, uuid, index, backend, result, latency).

9) Configuration (YAML)

attestor:
  listen: "https://0.0.0.0:8444"
  security:
    mtls:
      caBundle: /etc/ssl/signer-ca.pem
      requireClientCert: true
    authority:
      issuer: "https://authority.internal"
      jwksUrl: "https://authority.internal/jwks"
      requireSenderConstraint: "dpop"   # or "mtls"
    signerIdentity:
      mode: ["keyless","kms"]
      fulcioRoots: ["/etc/fulcio/root.pem"]
      allowedSANs: ["urn:stellaops:signer"]
      kmsKeys: ["kms://cluster-kms/stellaops-signer"]
    submissionLimits:
      maxPayloadBytes: 2097152
      maxCertificateChainEntries: 6
      maxSignatures: 6
  signing:
    preferredProviders: ["kms","bouncycastle.ed25519","default"]
    kms:
      enabled: true
      rootPath: "/var/lib/stellaops/kms"
      password: "${ATTESTOR_KMS_PASSWORD}"
    keys:
      - keyId: "kms-primary"
        algorithm: ES256
        mode: kms
        provider: "kms"
        providerKeyId: "kms-primary"
        kmsVersionId: "v1"
      - keyId: "ed25519-offline"
        algorithm: Ed25519
        mode: keyful
        provider: "bouncycastle.ed25519"
        materialFormat: base64
        materialPath: "/etc/stellaops/keys/ed25519.key"
        certificateChain:
          - "-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----"
  rekor:
    primary:
      url: "https://rekor-v2.internal"
      proofTimeoutMs: 15000
      pollIntervalMs: 250
      maxAttempts: 60
    mirror:
      enabled: false
      url: "https://rekor-v2.mirror"
  postgres:
    connectionString: "Host=postgres;Port=5432;Database=attestor;Username=stellaops;Password=secret"
  s3:
    enabled: true
    endpoint: "http://rustfs:8080"
    bucket: "stellaops"
    prefix: "attest/"
    objectLock: "governance"
  valkey:
    url: "valkey://valkey:6379/2"
  quotas:
    perCaller:
      qps: 50
      burst: 100

Notes:

  • signing.preferredProviders defines the resolution order when multiple providers support the requested algorithm. Omit to fall back to registration order.
  • File-backed KMS (signing.kms) is required when at least one key uses mode: kms; the password should be injected via secret store or environment.
  • For keyful providers, supply inline material or materialPath plus materialFormat (pem (default), base64, or hex). KMS keys ignore these fields and require kmsVersionId.
  • certificateChain entries are appended to returned bundles so offline verifiers do not need to dereference external stores.

10) End-to-end sequences

A) Submit & include (happy path)

sequenceDiagram
  autonumber
  participant SW as Scanner.WebService
  participant SG as Signer
  participant AT as Attestor
  participant RK as Rekor v2

  SW->>SG: POST /sign/dsse (OpTok+PoE)
  SG-->>SW: DSSE bundle (+certs)
  SW->>AT: POST /rekor/entries (mTLS + OpTok)
  AT->>AT: Validate DSSE (chain to Fulcio/KMS; signer identity)
  AT->>RK: submit(bundle)
  RK-->>AT: {uuid, index?}
  AT->>RK: poll inclusion until proof or timeout
  RK-->>AT: inclusion proof (checkpoint + path)
  AT-->>SW: {uuid, index, proof, logURL}

B) Verify by artifact digest (CLI)

sequenceDiagram
  autonumber
  participant CLI as stellaops verify
  participant SW as Scanner.WebService
  participant AT as Attestor

  CLI->>SW: GET /catalog/artifacts/{id}
  SW-->>CLI: {artifactSha256, rekor: {uuid}}
  CLI->>AT: POST /rekor/verify { uuid }
  AT-->>CLI: { ok: true, index, logURL }

11) Failure modes & responses

| Condition | Return | Details |
|---|---|---|
| mTLS/OpTok invalid | 401 invalid_token | Include WWW-Authenticate DPoP challenge when applicable |
| Bundle not signed by trusted identity | 403 chain_untrusted | DSSE accepted only from Signer identities |
| Duplicate bundle | 409 duplicate_bundle | Return existing uuid (idempotent) |
| Rekor unreachable/timeout | 502 rekor_unavailable | Retry with backoff; surface Retry-After |
| Inclusion proof timeout | 202 accepted | status=pending; background job continues to fetch proof |
| Archive failure | 207 multi-status | Entry recorded; archive will retry asynchronously |
| Verification mismatch | 400 verify_failed | Include failure reason (chain, leafHash, rootMismatch) |

12) Performance & scale

  • Stateless; scale horizontally.

  • Targets:

    • Submit+proof P95 ≤ 300 ms (warm log; local Rekor).
    • Verify P95 ≤ 30 ms from cache; ≤ 120 ms with live proof fetch.
    • 1k submissions/minute per replica sustained.
  • Hot caches: dedupe (bundle hash → uuid), recent entries by artifact sha256.


13) Testing matrix

  • Happy path: valid DSSE, inclusion within timeout.
  • Idempotency: resubmit same bundleSha256 → same uuid.
  • Security: reject non-Signer mTLS, wrong aud, DPoP replay, untrusted cert chain, forbidden predicateType.
  • Rekor variants: promise-then-proof, proof delayed, mirror dual-submit, mirror failure.
  • Verification: corrupt leaf path, wrong root, tampered bundle.
  • Throughput: soak test with 10k submissions; latency SLOs, zero drops.

14) Implementation notes

  • Language: .NET 10 minimal API; HttpClient with sockets handler tuned for HTTP/2.
  • JSON: canonical writer for DSSE payload hashing.
  • Crypto: use BouncyCastle/System.Security.Cryptography; PEM parsing for cert chains.
  • Rekor client: pluggable driver; treat backend errors as retryable/non-retryable with granular mapping.
  • Safety: size caps on bundles; decompression bombs guarded; strict UTF-8.
  • CLI integration: stellaops verify attestation <uuid|bundle|artifact> calls /rekor/verify.

15) Optional features

  • Dual-log write (primary + mirror) and cross-log proof packaging.
  • Cloud endorsement: send {uuid, artifactSha256} to Stella Ops cloud; store returned endorsement id for marketing/chain-of-custody.
  • Checkpoint pinning: periodically pin latest Rekor checkpoints to an external audit store for independent monitoring.

16) Observability (stub)

  • Runbook + dashboard placeholder for offline import: operations/observability.md, operations/dashboards/attestor-observability.json.
  • Metrics to surface: signing latency p95/p99, verification failure rate, transparency log submission lag, key rotation age, queue backlog, attestation bundle size histogram.
  • Health endpoints: /health/liveness, /health/readiness, /status; verification probe /api/attestations/verify once demo bundle is available (see runbook).
  • Alert hints: signing latency > 1s p99, verification failure spikes, tlog submission lag >10s, key rotation age over policy threshold, backlog above configured threshold.

17) Rekor Entry Events

Sprint: SPRINT_20260112_007_ATTESTOR_rekor_entry_events

Attestor emits deterministic events when DSSE bundles are logged to Rekor and inclusion proofs become available. These events drive policy reanalysis.

Event Types

| Event Type | Constant | Description |
| --- | --- | --- |
| rekor.entry.logged | RekorEventTypes.EntryLogged | Bundle successfully logged with inclusion proof |
| rekor.entry.queued | RekorEventTypes.EntryQueued | Bundle queued for logging (async mode) |
| rekor.entry.inclusion_verified | RekorEventTypes.InclusionVerified | Inclusion proof independently verified |
| rekor.entry.failed | RekorEventTypes.EntryFailed | Logging or verification failed |

RekorEntryEvent Schema

```json
{
  "eventId": "rekor-evt-sha256:...",
  "eventType": "rekor.entry.logged",
  "tenant": "default",
  "bundleDigest": "sha256:abc123...",
  "artifactDigest": "sha256:def456...",
  "predicateType": "StellaOps.ScanResults@1",
  "rekorEntry": {
    "uuid": "24296fb24b8ad77a...",
    "logIndex": 123456789,
    "logUrl": "https://rekor.sigstore.dev",
    "integratedTime": "2026-01-15T10:30:02Z"
  },
  "reanalysisHints": {
    "cveIds": ["CVE-2026-1234"],
    "productKeys": ["pkg:npm/lodash@4.17.21"],
    "mayAffectDecision": true,
    "reanalysisScope": "immediate"
  },
  "occurredAtUtc": "2026-01-15T10:30:05Z"
}
```

Offline Mode Behavior

When operating in offline/air-gapped mode:

  1. Events are not emitted when Rekor is unreachable
  2. Bundles are queued locally for later submission
  3. Verification uses bundled checkpoints
  4. Events are generated when connectivity is restored

Snapshot Export/Import for Air-Gap Transfer

Sprint: SPRINT_20260208_021_Attestor_snapshot_export_import_for_air_gap

The Offline library provides snapshot export and import for transferring attestation state to air-gapped systems via portable archives.

Snapshot Levels:

| Level | Contents | Use Case |
| --- | --- | --- |
| A | Attestation bundles only | Online verification still available |
| B | Evidence + verification material (Fulcio roots, Rekor keys) | Standard air-gap transfer |
| C | Full state: policies, trust anchors, org keys | Fully disconnected deployment |

Key Types:

  • SnapshotManifest — Content-addressed manifest with SHA-256 digests per entry
  • SnapshotManifestEntry — Individual artifact with RelativePath, Digest, SizeBytes, Category
  • ISnapshotExporter — Produces portable JSON archives at the requested level
  • ISnapshotImporter — Validates archive integrity and ingests entries into local stores
  • SnapshotExportRequest/Result, SnapshotImportRequest/Result — Request/response models

Integrity:

  • Each entry carries a SHA-256 digest; the manifest digest is computed from sorted path:digest pairs plus the creation timestamp.
  • Import verifies all entry digests before ingestion (configurable via VerifyIntegrity).
  • Existing entries can be skipped during import (SkipExisting).
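The manifest-digest rule above can be sketched as follows. This is an illustrative Python sketch, not the shipped C# implementation; the exact byte layout (separator characters, timestamp encoding) is an assumption.

```python
import hashlib

def manifest_digest(entries: dict[str, str], created_at: str) -> str:
    """Digest over sorted path:digest pairs plus the creation timestamp.
    entries maps relative path -> per-entry SHA-256 digest."""
    lines = [f"{path}:{digest}" for path, digest in sorted(entries.items())]
    lines.append(created_at)  # creation timestamp participates in the digest
    payload = "\n".join(lines).encode("utf-8")
    return "sha256:" + hashlib.sha256(payload).hexdigest()
```

Because the pairs are sorted by path before hashing, the manifest digest is independent of the order in which entries were added to the archive.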

DI Registration:

services.AddAttestorOffline(); // registers ISnapshotExporter, ISnapshotImporter

18) Identity Watchlist & Monitoring

Sprint: SPRINT_0129_001_ATTESTOR_identity_watchlist_alerting

The Attestor provides proactive monitoring for signing identities appearing in transparency logs. Organizations can define watchlists to receive alerts when specific identities sign artifacts.

Purpose

  • Credential compromise detection: Alert when your signing identity appears unexpectedly
  • Third-party monitoring: Watch for specific vendors or dependencies signing artifacts
  • Compliance auditing: Track all signing activity for specific issuers

Watchlist Entry Model

```jsonc
{
  "id": "uuid",
  "tenantId": "tenant-123",
  "scope": "tenant",           // tenant | global | system
  "displayName": "GitHub Actions Signer",
  "description": "Watch for GitHub Actions OIDC tokens",

  // Identity fields (at least one required)
  "issuer": "https://token.actions.githubusercontent.com",
  "subjectAlternativeName": "repo:org/repo:*",  // glob pattern
  "keyId": null,

  "matchMode": "glob",         // exact | prefix | glob | regex

  // Alert configuration
  "severity": "warning",       // info | warning | critical
  "enabled": true,
  "channelOverrides": ["slack-security"],
  "suppressDuplicatesMinutes": 60,

  "tags": ["github", "ci-cd"],
  "createdAt": "2026-01-29T10:00:00Z",
  "createdBy": "admin@example.com"
}
```

Matching Modes

| Mode | Behavior | Example Pattern | Matches |
| --- | --- | --- | --- |
| exact | Case-insensitive equality | alice@example.com | Alice@example.com |
| prefix | Starts-with match | https://accounts.google.com/ | Any Google OIDC issuer |
| glob | Glob pattern (*, ?) | *@example.com | alice@example.com, bob@example.com |
| regex | Full regex (with timeout) | repo:org/(frontend\|backend):.* | repo:org/frontend:ref:main |
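The four matching modes can be sketched in Python. This is an illustrative approximation of the documented semantics (the real matcher is C# and additionally enforces the configured regex evaluation timeout, which plain `re` cannot express):

```python
import fnmatch
import re

def matches(mode: str, pattern: str, value: str) -> bool:
    """Approximate semantics of the four watchlist match modes."""
    if mode == "exact":
        return value.lower() == pattern.lower()      # case-insensitive equality
    if mode == "prefix":
        return value.startswith(pattern)             # starts-with match
    if mode == "glob":
        return fnmatch.fnmatchcase(value, pattern)   # * and ? wildcards
    if mode == "regex":
        return re.fullmatch(pattern, value) is not None
    raise ValueError(f"unknown match mode: {mode}")
```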

Scope Hierarchy

| Scope | Visibility | Who Can Create |
| --- | --- | --- |
| tenant | Owning tenant only | Operators with trust:write |
| global | All tenants | Platform admins with trust:admin |
| system | All tenants (read-only) | System bootstrap |

Authorization for the live watchlist surface follows the canonical trust scope family (trust:read, trust:write, trust:admin). The service still accepts legacy watchlist:* aliases for backward compatibility, but new clients and UI sessions should rely on the trust scopes.

Event Flow

```
New AttestorEntry persisted
    → SignerIdentityDescriptor extracted
    → IIdentityMatcher.MatchAsync()
    → For each match:
        → Check dedup window (default 60 min)
        → Emit attestor.identity.matched event
        → Route via Notifier rules → Slack/Email/Webhook
```
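The dedup-window step can be sketched as follows. This is a minimal Python sketch, assuming alerts are keyed per (watchlist entry, matched identity); the production keying is an assumption for illustration:

```python
from datetime import datetime, timedelta, timezone

class AlertDeduper:
    """Suppress duplicate alerts inside a rolling window (default 60 min)."""

    def __init__(self, window_minutes: int = 60):
        self.window = timedelta(minutes=window_minutes)
        self._last_emitted: dict[tuple[str, str], datetime] = {}

    def should_emit(self, entry_id: str, identity: str, now: datetime) -> bool:
        key = (entry_id, identity)
        last = self._last_emitted.get(key)
        if last is not None and now - last < self.window:
            return False  # suppressed; reflected in suppressedCount instead
        self._last_emitted[key] = now
        return True
```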

Event Schema (IdentityAlertEvent)

```json
{
  "eventId": "uuid",
  "eventKind": "attestor.identity.matched",
  "tenantId": "tenant-123",
  "watchlistEntryId": "uuid",
  "watchlistEntryName": "GitHub Actions Signer",
  "matchedIdentity": {
    "issuer": "https://token.actions.githubusercontent.com",
    "subjectAlternativeName": "repo:org/repo:ref:refs/heads/main",
    "keyId": null
  },
  "rekorEntry": {
    "uuid": "24296fb24b8ad77a...",
    "logIndex": 123456789,
    "artifactSha256": "sha256:abc123...",
    "integratedTimeUtc": "2026-01-29T10:30:00Z"
  },
  "severity": "warning",
  "occurredAtUtc": "2026-01-29T10:30:05Z",
  "suppressedCount": 0
}
```

API Endpoints

| Method | Path | Description |
| --- | --- | --- |
| POST | /api/v1/watchlist | Create watchlist entry |
| GET | /api/v1/watchlist | List entries (tenant + optional global) |
| GET | /api/v1/watchlist/{id} | Get single entry |
| PUT | /api/v1/watchlist/{id} | Update entry |
| DELETE | /api/v1/watchlist/{id} | Delete entry |
| POST | /api/v1/watchlist/{id}/test | Test pattern against sample identity |
| GET | /api/v1/watchlist/alerts | List recent alerts (paginated) |

CLI Commands

```bash
# Add a watchlist entry
stella watchlist add --issuer "https://token.actions.githubusercontent.com" \
    --san "repo:org/*" --match-mode glob --severity warning

# List entries
stella watchlist list --include-global

# Test a pattern
stella watchlist test <id> --issuer "https://..." --san "repo:org/repo:ref:main"

# View recent alerts
stella watchlist alerts --since 24h --severity warning
```

Metrics

| Metric | Description |
| --- | --- |
| attestor.watchlist.entries_scanned_total | Entries processed by monitor |
| attestor.watchlist.matches_total{severity} | Pattern matches by severity |
| attestor.watchlist.alerts_emitted_total | Alerts sent to notification system |
| attestor.watchlist.alerts_suppressed_total | Alerts deduplicated |
| attestor.watchlist.scan_latency_seconds | Per-entry scan duration |

Configuration

```yaml
attestor:
  watchlist:
    enabled: true
    monitorMode: "changefeed"    # changefeed | polling
    pollingIntervalSeconds: 5    # only for polling mode
    maxEventsPerSecond: 100      # rate limit
    defaultDedupWindowMinutes: 60
    regexTimeoutMs: 100          # safety limit
    maxWatchlistEntriesPerTenant: 1000
```

Offline Mode

In air-gapped environments:

  • Polling mode used instead of Postgres NOTIFY
  • Alerts queued locally if notification channels unavailable
  • Alerts delivered when connectivity restored

Unknowns Five-Dimensional Triage Scoring (P/E/U/C/S)

Sprint: SPRINT_20260208_022_Attestor_unknowns_five_dimensional_triage_scoring

Overview

The triage scorer extends the existing IUnknownsAggregator pipeline with a five-dimensional scoring model for unknowns, enabling prioritized triage and temperature-band classification.

Scoring Dimensions

| Dimension | Code | Range | Description |
| --- | --- | --- | --- |
| Probability | P | [0,1] | Likelihood of exploitability or relevance |
| Exposure | E | [0,1] | Attack surface exposure (internal → internet-facing) |
| Uncertainty | U | [0,1] | Confidence deficit (fully understood → unknown) |
| Consequence | C | [0,1] | Impact severity (negligible → catastrophic) |
| Signal Freshness | S | [0,1] | Recency of intelligence (stale → just reported) |

Composite Score

Composite = Σ(dimension × weight) / Σ(weights), clamped to [0, 1].

Default weights: P=0.30, E=0.25, U=0.20, C=0.15, S=0.10 (configurable via TriageDimensionWeights).

Temperature Bands

| Band | Threshold | Action |
| --- | --- | --- |
| Hot | ≥ 0.70 | Immediate triage required |
| Warm | ≥ 0.40 | Scheduled review |
| Cold | < 0.40 | Archive / low priority |

Thresholds are configurable via TriageBandThresholds.
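The composite formula and band classification can be sketched as follows. This is an illustrative Python sketch of the documented math (the shipped scorer is the C# UnknownsTriageScorer); default weights and thresholds are taken from the text above:

```python
DEFAULT_WEIGHTS = {"P": 0.30, "E": 0.25, "U": 0.20, "C": 0.15, "S": 0.10}

def composite(scores: dict[str, float], weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted average of the five dimensions, clamped to [0, 1]."""
    total = sum(scores[dim] * w for dim, w in weights.items())
    return min(max(total / sum(weights.values()), 0.0), 1.0)

def classify(score: float, hot: float = 0.70, warm: float = 0.40) -> str:
    """Temperature band: Hot >= 0.70, Warm >= 0.40, else Cold."""
    if score >= hot:
        return "Hot"
    return "Warm" if score >= warm else "Cold"
```

For example, P=0.8, E=0.5, U=0.3, C=0.9, S=0.2 with default weights yields 0.24 + 0.125 + 0.06 + 0.135 + 0.02 = 0.58, which falls in the Warm band.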

Key Types

  • IUnknownsTriageScorer — Interface: Score(), ComputeComposite(), Classify()
  • UnknownsTriageScorer — Implementation with OTel counters
  • TriageScore — Five-dimensional score vector
  • TriageDimensionWeights — Configurable weights with static Default
  • TriageBandThresholds — Configurable Hot/Warm thresholds with static Default
  • TriageScoredItem — Scored unknown with composite score and band
  • TriageScoringRequest/Result — Batch scoring request/response

OTel Metrics

| Metric | Description |
| --- | --- |
| triage.scored.total | Total unknowns scored |
| triage.band.hot.total | Unknowns classified as Hot |
| triage.band.warm.total | Unknowns classified as Warm |
| triage.band.cold.total | Unknowns classified as Cold |

DI Registration

```csharp
services.AddAttestorProofChain(); // registers IUnknownsTriageScorer
```

VEX Findings API with Proof Artifacts

Sprint: SPRINT_20260208_023_Attestor_vex_findings_api_with_proof_artifacts

Overview

The VEX Findings API provides a query and resolution service for VEX findings (CVE + component combinations) with their associated proof artifacts. Each finding carries DSSE signatures, Rekor receipts, Merkle proofs, and policy decision attestations that prove how the VEX status was determined.

Key Types

  • VexFinding — A finding with FindingId, VulnerabilityId, ComponentPurl, Status, Justification, ProofArtifacts, DeterminedAt
  • ProofArtifact — Proof material: Kind (DsseSignature/RekorReceipt/MerkleProof/PolicyDecision/VexDelta/ReachabilityWitness), Digest, Payload, ProducedAt
  • VexFindingStatus — NotAffected | Affected | Fixed | UnderInvestigation
  • IVexFindingsService — GetByIdAsync, QueryAsync, ResolveProofsAsync, UpsertAsync
  • VexFindingQuery — Filters: VulnerabilityId, ComponentPurlPrefix, Status, TenantId, Limit, Offset

Proof Resolution

ResolveProofsAsync() merges new proof artifacts into a finding, deduplicating by digest. This allows incremental proof collection as new evidence is produced.

Finding IDs

Finding IDs are deterministic: SHA-256(vulnId:componentPurl) prefixed with finding:. This ensures the same CVE + component always maps to the same ID.
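The derivation can be sketched in Python (an illustrative sketch of the rule above; the hex encoding of the digest is an assumption):

```python
import hashlib

def finding_id(vuln_id: str, component_purl: str) -> str:
    """Deterministic finding ID: SHA-256 over 'vulnId:componentPurl',
    prefixed with 'finding:'."""
    digest = hashlib.sha256(f"{vuln_id}:{component_purl}".encode("utf-8")).hexdigest()
    return f"finding:{digest}"
```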

OTel Metrics

| Metric | Description |
| --- | --- |
| findings.get.total | Findings retrieved by ID |
| findings.query.total | Finding queries executed |
| findings.upsert.total | Findings upserted |
| findings.resolve.total | Proof resolution requests |
| findings.proofs.total | Proof artifacts resolved |

DI Registration

```csharp
services.AddAttestorProofChain(); // registers IVexFindingsService
```

Binary Fingerprint Store & Trust Scoring

Overview

The Binary Fingerprint Store is a content-addressed repository for section-level binary hashes (ELF .text/.rodata, PE sections) with golden-set management and trust scoring. It enables:

  • Content-addressed lookup: Fingerprints identified by fp:sha256:… computed from (format, architecture, sectionHashes).
  • Section-level matching: Find closest match by comparing individual section hashes with a similarity score.
  • Golden-set management: Define named sets of known-good fingerprints for baseline comparison.
  • Trust scoring: Multi-factor score (0.0–0.99) based on golden membership, Build-ID, section coverage, evidence, and package provenance.

Library: StellaOps.Attestor.ProofChain Namespace: StellaOps.Attestor.ProofChain.FingerprintStore

Models

| Type | Purpose |
| --- | --- |
| BinaryFingerprintRecord | Stored fingerprint: ID, format, architecture, file SHA-256, Build-ID, section hashes, package PURL, golden-set flag, trust score, evidence digests, timestamps. |
| FingerprintRegistration | Input for RegisterAsync: format, architecture, file hash, section hashes, optional PURL/Build-ID/evidence. |
| FingerprintLookupResult | Match result: found flag, matched record, golden match, section similarity (0.0–1.0), matched/differing section lists. |
| TrustScoreBreakdown | Decomposed score: golden bonus, Build-ID score, section coverage, evidence score, provenance score. |
| GoldenSet | Named golden set with count and timestamps. |
| FingerprintQuery | Filters: format, architecture, PURL prefix, golden flag, golden set name, min trust score, limit/offset. |

Service Interface (IBinaryFingerprintStore)

| Method | Description |
| --- | --- |
| RegisterAsync(registration) | Register fingerprint (idempotent by content-addressed ID). |
| GetByIdAsync(fingerprintId) | Look up by content-addressed ID. |
| GetByFileSha256Async(fileSha256) | Look up by whole-file hash. |
| FindBySectionHashesAsync(sectionHashes, minSimilarity) | Best-match search by section hashes. |
| ComputeTrustScoreAsync(fingerprintId) | Detailed trust-score breakdown. |
| ListAsync(query) | Filtered + paginated listing. |
| AddToGoldenSetAsync(fingerprintId, goldenSetName) | Mark fingerprint as golden (recalculates trust score). |
| RemoveFromGoldenSetAsync(fingerprintId) | Remove golden flag. |
| CreateGoldenSetAsync(name, description) | Create a named golden set. |
| ListGoldenSetsAsync() | List all golden sets. |
| GetGoldenSetMembersAsync(goldenSetName) | List members of a golden set. |
| DeleteAsync(fingerprintId) | Remove fingerprint from store. |

Trust Score Computation

| Factor | Weight | Raw value |
| --- | --- | --- |
| Golden-set membership | 0.30 | 1.0 if golden, 0.0 otherwise |
| Build-ID present | 0.20 | 1.0 if Build-ID exists, 0.0 otherwise |
| Section coverage | 0.25 | Ratio of key sections (.text, .rodata, .data, .bss) present |
| Evidence count | 0.15 | min(count/5, 1.0) |
| Package provenance | 0.10 | 1.0 if PURL present, 0.0 otherwise |

Final score is capped at 0.99.
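The weighted computation above can be sketched in Python (an illustrative sketch of the documented weights and cap, not the C# implementation):

```python
def trust_score(is_golden: bool, has_build_id: bool,
                sections: set[str], evidence_count: int,
                has_purl: bool) -> float:
    """Weighted trust score per the factor table; capped at 0.99."""
    key_sections = {".text", ".rodata", ".data", ".bss"}
    coverage = len(sections & key_sections) / len(key_sections)
    score = (0.30 * (1.0 if is_golden else 0.0)
             + 0.20 * (1.0 if has_build_id else 0.0)
             + 0.25 * coverage
             + 0.15 * min(evidence_count / 5, 1.0)
             + 0.10 * (1.0 if has_purl else 0.0))
    return min(score, 0.99)  # never reaches 1.0 even with all factors present
```

A fingerprint with every factor maxed sums to 1.00 and is capped to 0.99; one with only a Build-ID and a single key section scores 0.20 + 0.25 × 0.25 = 0.2625.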

DI Registration

AddProofChainServices() registers IBinaryFingerprintStore → BinaryFingerprintStore (singleton, via TryAddSingleton).

Observability (OTel Metrics)

Meter: StellaOps.Attestor.ProofChain.FingerprintStore

| Metric | Type | Description |
| --- | --- | --- |
| fingerprint.store.registered | Counter | Fingerprints registered |
| fingerprint.store.lookups | Counter | Store lookups performed |
| fingerprint.store.golden_added | Counter | Fingerprints added to golden sets |
| fingerprint.store.deleted | Counter | Fingerprints deleted |

Test Coverage

30 tests in StellaOps.Attestor.ProofChain.Tests/FingerprintStore/BinaryFingerprintStoreTests.cs:

  • Registration (new, idempotent, different sections → different IDs, validation)
  • Lookup (by ID, by file SHA-256, not-found cases)
  • Section-hash matching (exact, partial, below threshold, empty)
  • Trust scoring (with/without Build-ID/PURL, minimal, golden bonus, cap at 0.99, determinism)
  • Golden-set management (create, add, remove, list members, list sets)
  • List/query with filters (format, min trust score)
  • Delete (existing, non-existent)
  • Content-addressed ID determinism

Content-Addressed Store (CAS) for SBOM/VEX/Attestation Artifacts

Overview

The CAS provides a unified content-addressed storage service for all artifact types (SBOM, VEX, attestation, proof bundles, evidence packs, binary fingerprints). All blobs are keyed by SHA-256 digest of their raw content. Puts are idempotent: storing the same content twice returns the existing record with a dedup flag.

Library: StellaOps.Attestor.ProofChain Namespace: StellaOps.Attestor.ProofChain.Cas

Artifact Types

| Type | Description |
| --- | --- |
| Sbom | Software Bill of Materials |
| Vex | VEX (Vulnerability Exploitability Exchange) document |
| Attestation | DSSE-signed attestation envelope |
| ProofBundle | Proof chain bundle |
| EvidencePack | Evidence pack manifest |
| BinaryFingerprint | Binary fingerprint record |
| Other | Generic/other artifact |

Models

| Type | Purpose |
| --- | --- |
| CasArtifact | Stored artifact metadata: digest, type, media type, size, tags, related digests, timestamps, dedup flag. |
| CasPutRequest | Input: raw content bytes, artifact type, media type, optional tags and related digests. |
| CasPutResult | Output: stored artifact + dedup flag. |
| CasGetResult | Retrieved artifact with content bytes. |
| CasQuery | Filters: artifact type, media type, tag key/value, limit/offset. |
| CasStatistics | Store metrics: total artifacts, bytes, dedup count, type breakdown. |

Service Interface (IContentAddressedStore)

| Method | Description |
| --- | --- |
| PutAsync(request) | Store artifact (idempotent by SHA-256 digest). Returns dedup flag. |
| GetAsync(digest) | Retrieve artifact + content by digest. |
| ExistsAsync(digest) | Check existence by digest. |
| DeleteAsync(digest) | Remove artifact. |
| ListAsync(query) | Filtered + paginated listing. |
| GetStatisticsAsync() | Total artifacts, bytes, dedup savings, type breakdown. |

Deduplication

When PutAsync receives content whose SHA-256 digest already exists in the store:

  1. The existing artifact metadata is returned (no duplicate storage).
  2. CasPutResult.Deduplicated is set to true.
  3. An OTel counter is incremented for audit.

DI Registration

AddProofChainServices() registers IContentAddressedStore → InMemoryContentAddressedStore (singleton, via TryAddSingleton).

Observability (OTel Metrics)

Meter: StellaOps.Attestor.ProofChain.Cas

| Metric | Type | Description |
| --- | --- | --- |
| cas.puts | Counter | CAS put operations |
| cas.deduplications | Counter | Deduplicated puts |
| cas.gets | Counter | CAS get operations |
| cas.deletes | Counter | CAS delete operations |

Test Coverage

24 tests in StellaOps.Attestor.ProofChain.Tests/Cas/InMemoryContentAddressedStoreTests.cs:

  • Put (new, dedup, different content, validation, tags, related digests)
  • Get (existing, non-existent)
  • Exists (stored, not stored)
  • Delete (existing, non-existent)
  • List with filters (artifact type, media type, tags, pagination)
  • Statistics (counts, bytes, dedup tracking)
  • Digest determinism

Crypto-Sovereign Design (eIDAS/FIPS/GOST/SM/PQC)

Overview

The crypto-sovereign subsystem bridges the Attestor's role-based SigningKeyProfile (Evidence, Reasoning, VexVerdict, Authority, Generator, Exception) to algorithm-specific crypto profiles governed by regional compliance constraints. This enables a single Attestor deployment to enforce eIDAS qualified signatures, FIPS-approved algorithms, GOST, SM2, or Post-Quantum Cryptography depending on the configured region.

Library: StellaOps.Attestor.ProofChain Namespace: StellaOps.Attestor.ProofChain.Signing

Algorithm Profiles

| Profile | Algorithm ID | Standard |
| --- | --- | --- |
| Ed25519 | ED25519 | RFC 8032 |
| EcdsaP256 | ES256 | NIST FIPS 186-4 |
| EcdsaP384 | ES384 | NIST FIPS 186-4 |
| RsaPss | PS256 | PKCS#1 v2.1 |
| Gost2012_256 | GOST-R34.10-2012-256 | Russian Federation |
| Gost2012_512 | GOST-R34.10-2012-512 | Russian Federation |
| Sm2 | SM2 | Chinese GB/T 32918 |
| Dilithium3 | DILITHIUM3 | NIST FIPS 204 (ML-DSA) |
| Falcon512 | FALCON512 | NIST PQC Round 3 |
| EidasRsaSha256 | eIDAS-RSA-SHA256 | EU eIDAS + CAdES |
| EidasEcdsaSha256 | eIDAS-ECDSA-SHA256 | EU eIDAS + CAdES |

Sovereign Regions

| Region | Default Algorithm | Requirements |
| --- | --- | --- |
| International | Ed25519 | None |
| EuEidas | eIDAS-RSA-SHA256 | Qualified timestamp (Article 42), CAdES-T minimum |
| UsFips | ECDSA-P256 | HSM-backed keys |
| RuGost | GOST-2012-256 | GOST algorithms only |
| CnSm | SM2 | SM national standards only |
| PostQuantum | Dilithium3 | PQC finalist algorithms only |

Service Interface (ICryptoProfileResolver)

| Method | Description |
| --- | --- |
| ResolveAsync(keyProfile) | Resolve key profile using active region. |
| ResolveAsync(keyProfile, region) | Resolve key profile with explicit region override. |
| ActiveRegion | Get the configured sovereign region. |
| GetPolicy(region) | Get the sovereign policy for a region. |
| ValidateQualifiedTimestampAsync(...) | eIDAS Article 42 timestamp validation. |

Resolution Flow

  1. SigningKeyProfile (role: Evidence/Reasoning/etc.) arrives at ICryptoProfileResolver
  2. Active CryptoSovereignRegion determines the CryptoSovereignPolicy
  3. Policy's DefaultAlgorithm produces a CryptoProfileBinding
  4. Binding carries: algorithm ID, region, CAdES level, HSM/timestamp requirements
  5. Caller (or composition root) uses binding to resolve key material from ICryptoProviderRegistry

eIDAS Article 42 Qualified Timestamp Validation

ValidateQualifiedTimestampAsync performs structural validation of RFC 3161 timestamp tokens:

  • Non-eIDAS regions return IsQualified = false immediately
  • Empty tokens or signed data are rejected
  • ASN.1 SEQUENCE tag (0x30) is verified as structural check
  • Full TSA certificate chain and EU Trusted List validation deferred to eIDAS plugin integration

CAdES Levels

| Level | Description |
| --- | --- |
| CadesB | Basic Electronic Signature |
| CadesT | With Timestamp (Article 42 minimum) |
| CadesLT | With Long-Term validation data |
| CadesLTA | With Long-Term Archival validation data |

DI Registration

AddProofChainServices() registers ICryptoProfileResolver → DefaultCryptoProfileResolver (singleton, via TryAddSingleton). The Attestor Infrastructure layer can pre-register a registry-aware implementation that bridges ICryptoProviderRegistry before this fallback applies.

Observability (OTel Metrics)

Meter: StellaOps.Attestor.ProofChain.CryptoSovereign

| Metric | Type | Description |
| --- | --- | --- |
| crypto_sovereign.resolves | Counter | Profile resolution operations (tagged by region) |
| crypto_sovereign.timestamp_validations | Counter | Qualified timestamp validations |

Test Coverage

27 tests in StellaOps.Attestor.ProofChain.Tests/Signing/DefaultCryptoProfileResolverTests.cs:

  • Region-based resolution (International/eIDAS/FIPS/GOST/SM/PQC default algorithms)
  • Explicit region override
  • All key profiles resolve for all regions
  • Active region property
  • Policy access and validation (all regions, eIDAS timestamp requirement, FIPS HSM requirement)
  • Algorithm ID mapping (all 11 profiles)
  • Qualified timestamp validation (non-eIDAS, empty token, empty data, invalid ASN.1, valid structure)
  • Cancellation handling
  • Determinism (same inputs → identical bindings)
  • Policy consistency (default in allowed list, non-empty allowed lists)

DSSE Envelope Size Management (Guardrails, Chunking, Gateway Awareness)

Overview

Pre-submission size guard for DSSE envelopes submitted to Rekor transparency logs. Validates envelope size against a configurable policy and determines the submission mode: full envelope (under soft limit), hash-only fallback, chunked with manifest, or rejected.

Library: StellaOps.Attestor.ProofChain Namespace: StellaOps.Attestor.ProofChain.Rekor

Submission Modes

| Mode | Trigger | Behavior |
| --- | --- | --- |
| FullEnvelope | Size ≤ soft limit | Envelope submitted to Rekor as-is |
| HashOnly | Soft limit < size ≤ hard limit, hash-only enabled | Only SHA-256 payload digest submitted |
| Chunked | Soft limit < size ≤ hard limit, chunking enabled | Envelope split into chunks with manifest |
| Rejected | Size > hard limit, or no fallback available | Submission blocked |

Size Policy (DsseEnvelopeSizePolicy)

| Property | Default | Description |
| --- | --- | --- |
| SoftLimitBytes | 102,400 (100 KB) | Threshold for hash-only/chunked fallback |
| HardLimitBytes | 1,048,576 (1 MB) | Absolute rejection threshold |
| ChunkSizeBytes | 65,536 (64 KB) | Maximum size per chunk |
| EnableHashOnlyFallback | true | Allow hash-only submission for oversized envelopes |
| EnableChunking | false | Allow chunked submission (takes priority over hash-only) |
| HashAlgorithm | "SHA-256" | Hash algorithm for digest computation |

Service Interface (IDsseEnvelopeSizeGuard)

| Method | Description |
| --- | --- |
| ValidateAsync(DsseEnvelope) | Validate a typed DSSE envelope against size policy |
| `ValidateAsync(ReadOnlyMemory<byte>)` | Validate raw serialized envelope bytes |
| Policy | Get the active size policy |

Chunk Manifest

When chunking is enabled and an envelope exceeds the soft limit, the guard produces an EnvelopeChunkManifest containing:

  • TotalSizeBytes: original envelope size
  • ChunkCount: number of chunks
  • OriginalDigest: SHA-256 digest of the complete original envelope
  • Chunks: ordered array of ChunkDescriptor (index, size, digest, offset)

Each chunk is content-addressed by its SHA-256 digest for integrity verification.
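The mode-selection logic from the tables above can be sketched as follows. This is an illustrative Python sketch using the documented default limits; the real guard is the C# DsseEnvelopeSizeGuard:

```python
def submission_mode(size: int, soft: int = 102_400, hard: int = 1_048_576,
                    hash_only: bool = True, chunking: bool = False) -> str:
    """Pick the Rekor submission mode for an envelope of `size` bytes."""
    if size <= soft:
        return "FullEnvelope"
    if size > hard:
        return "Rejected"        # absolute rejection threshold
    if chunking:
        return "Chunked"         # chunking takes priority over hash-only
    if hash_only:
        return "HashOnly"
    return "Rejected"            # oversized with no fallback available
```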

DI Registration

AddProofChainServices() registers IDsseEnvelopeSizeGuard → DsseEnvelopeSizeGuard (singleton, via TryAddSingleton). Default policy uses 100 KB soft / 1 MB hard limits.

Observability (OTel Metrics)

Meter: StellaOps.Attestor.ProofChain.EnvelopeSize

| Metric | Type | Description |
| --- | --- | --- |
| envelope_size.validations | Counter | Total envelope size validations |
| envelope_size.hash_only_fallbacks | Counter | Hash-only fallback activations |
| envelope_size.chunked | Counter | Chunked submission activations |
| envelope_size.rejections | Counter | Envelope rejections |

Test Coverage

28 tests in StellaOps.Attestor.ProofChain.Tests/Rekor/DsseEnvelopeSizeGuardTests.cs:

  • Full envelope (small, exact soft limit)
  • Hash-only fallback (activation, digest determinism)
  • Chunked mode (activation, correct chunk count, priority over hash-only)
  • Hard limit rejection
  • Both fallbacks disabled rejection
  • Raw bytes validation (under limit, empty rejection)
  • Policy validation (negative soft, hard < soft, zero chunk size, defaults)
  • Cancellation handling
  • Digest determinism (same/different input)
  • Chunk manifest determinism
  • Size tracking

DSSE-Wrapped Reach-Maps

Purpose

Reach-maps are standalone in-toto attestation artifacts that capture the full reachability graph for a scanned artifact. Unlike micro-witnesses (which capture individual vulnerability reachability paths), a reach-map aggregates the entire graph — all nodes, edges, findings, and analysis metadata — into a single DSSE-wrapped statement that can be stored, transmitted, and verified independently.

Predicate Type

URI: reach-map.stella/v1

The reach-map predicate follows Pattern B (predicate model in Predicates/, statement delegates PredicateType).

Data Model

ReachMapPredicate

Top-level predicate record containing:

| Field | Type | Description |
| --- | --- | --- |
| SchemaVersion | string | Always "1.0.0" |
| GraphDigest | string | Deterministic SHA-256 digest of sorted graph content |
| GraphCasUri | string? | Optional CAS URI for externalized graph storage |
| ScanId | string | Identifier of the originating scan |
| ArtifactRef | string | Package URL or image reference of the scanned artifact |
| Nodes | `ImmutableArray<ReachMapNode>` | All nodes in the reachability graph |
| Edges | `ImmutableArray<ReachMapEdge>` | All edges (call relationships) |
| Findings | `ImmutableArray<ReachMapFinding>` | Vulnerability findings with reachability status |
| AggregatedWitnessIds | `ImmutableArray<string>` | Deduplicated witness IDs from findings + explicit additions |
| Analysis | ReachMapAnalysis | Analyzer metadata (tool, version, confidence, completeness) |
| Summary | ReachMapSummary | Computed statistics (counts of nodes, edges, entry points, sinks) |

ReachMapNode

| Field | Type | Description |
| --- | --- | --- |
| NodeId | string | Unique identifier for the node |
| QualifiedName | string | Fully qualified name (e.g., class.method) |
| Module | string | Module or assembly containing the node |
| IsEntryPoint | bool | Whether this node is a graph entry point |
| IsSink | bool | Whether this node is a vulnerability sink |
| ReachabilityState | string | One of the 8-state lattice values |

ReachMapEdge

| Field | Type | Description |
| --- | --- | --- |
| SourceNodeId | string | Origin node of the call edge |
| TargetNodeId | string | Destination node of the call edge |
| CallType | string | Edge type (direct, virtual, reflection, etc.) |
| Confidence | double | Edge confidence score (0.0–1.0), default 1.0 |

ReachMapFinding

| Field | Type | Description |
| --- | --- | --- |
| VulnId | string | Vulnerability identifier |
| CveId | string? | Optional CVE identifier |
| Purl | string? | Optional package URL |
| IsReachable | bool | Whether the vulnerability is reachable |
| ConfidenceScore | double | Reachability confidence (0.0–1.0) |
| SinkNodeIds | `ImmutableArray<string>` | Nodes where the vulnerability manifests |
| ReachableEntryPointIds | `ImmutableArray<string>` | Entry points that can reach sinks |
| WitnessId | string? | Optional micro-witness identifier |

ReachMapBuilder (Fluent API)

ReachMapBuilder provides a fluent interface for constructing reach-map predicates:

```csharp
var predicate = new ReachMapBuilder()
    .WithScanId("scan-001")
    .WithArtifactRef("pkg:docker/myapp@sha256:abc123")
    .WithAnalyzer("stella-reach", "2.0.0", 0.95, "full")
    .WithGeneratedAt(DateTimeOffset.UtcNow)
    .AddNodes(nodes)
    .AddEdges(edges)
    .AddFindings(findings)
    .Build();
```

Deterministic Graph Digest

The builder computes a deterministic SHA-256 digest over the graph content:

  1. Nodes are sorted by NodeId, each contributing NodeId|QualifiedName|ReachabilityState
  2. Edges are sorted by SourceNodeId then TargetNodeId, each contributing Source→Target|CallType
  3. Findings are sorted by VulnId, each contributing VulnId|IsReachable|ConfidenceScore
  4. All contributions are concatenated with newlines and hashed

This ensures identical graphs always produce the same digest regardless of insertion order.
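The four steps can be sketched in Python. This is an illustrative sketch following the documented sort keys and field separators; the input tuple shapes are assumptions for demonstration, and the shipped builder is C#:

```python
import hashlib

def graph_digest(nodes, edges, findings) -> str:
    """Deterministic digest over sorted graph content.
    nodes: (node_id, qualified_name, reachability_state)
    edges: (source_id, target_id, call_type)
    findings: (vuln_id, is_reachable, confidence_score)"""
    lines = []
    for n in sorted(nodes, key=lambda n: n[0]):            # sort by NodeId
        lines.append(f"{n[0]}|{n[1]}|{n[2]}")
    for e in sorted(edges, key=lambda e: (e[0], e[1])):    # sort by Source, Target
        lines.append(f"{e[0]}→{e[1]}|{e[2]}")
    for f in sorted(findings, key=lambda f: f[0]):         # sort by VulnId
        lines.append(f"{f[0]}|{f[1]}|{f[2]}")
    return hashlib.sha256("\n".join(lines).encode("utf-8")).hexdigest()
```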

Witness Aggregation

Witness IDs are collected from two sources:

  • WitnessId fields on individual ReachMapFinding records
  • Explicit AddWitnessId() calls on the builder

All witness IDs are deduplicated in the final predicate.

Schema Validation

The reach-map predicate type is registered in PredicateSchemaValidator:

  • HasSchema("reach-map.stella/v1") → true
  • ValidateByPredicateType routes to ValidateReachMapPredicate
  • Required JSON properties: graph_digest, scan_id, artifact_ref, nodes, edges, analysis, summary

Statement Integration

ReachMapStatement extends InTotoStatement with:

  • PredicateType → "reach-map.stella/v1" (from ReachMapPredicate.PredicateTypeUri)
  • Type → "https://in-toto.io/Statement/v1" (inherited)

Source Files

  • Predicate: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Predicates/ReachMapPredicate.cs
  • Statement: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Statements/ReachMapStatement.cs
  • Builder: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Rekor/ReachMapBuilder.cs
  • Validator: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Json/PredicateSchemaValidator.DeltaValidators.cs

Test Coverage (25 tests)

  • Build validation (missing ScanId, ArtifactRef, Analyzer)
  • Minimal build, full build with summary statistics
  • Graph digest determinism (same input, different order, different content)
  • Witness aggregation (from findings, explicit, deduplication)
  • Bulk add operations (AddNodes, AddEdges, AddFindings)
  • CAS URI inclusion
  • Statement integration (predicate type, statement type)
  • Null argument protection (5 tests)

Evidence Coverage Score for AI Gating

Purpose

The Evidence Coverage Scorer provides a deterministic, multi-dimensional assessment of how thoroughly an artifact's evidence base covers the key verification axes. This score directly gates AI auto-processing decisions: AI-generated artifacts (explanations, remediation plans, VEX drafts, policy drafts) can only be promoted to verdicts when evidence coverage meets a configurable threshold.

Evidence Dimensions

The scorer evaluates five independent dimensions:

| Dimension | Default Weight | Description |
|---|---|---|
| Reachability | 0.25 | Call graph analysis, micro-witnesses, reach-maps |
| BinaryAnalysis | 0.20 | Binary fingerprints, build-id verification, section hashes |
| SbomCompleteness | 0.25 | Component inventory, dependency resolution completeness |
| VexCoverage | 0.20 | Vulnerability status decisions (affected/not_affected/fixed) |
| Provenance | 0.10 | Build provenance, source attestation, supply chain evidence |

Scoring Algorithm

  1. For each dimension, the scorer receives a list of evidence identifiers
  2. Each identifier is checked against an evidence resolver (Func<string, bool>) — the same pattern used by AIAuthorityClassifier
  3. Dimension score = (resolvable count) / (total count), producing a 0.0-1.0 value
  4. Overall score = weighted average across all dimensions (normalized by total weight)
  5. Missing dimensions receive a score of 0.0
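The weighted-average computation can be sketched as follows (a Python illustration of the algorithm above; the data shapes are simplified and not the C# interface):

```python
def coverage_score(dimension_evidence, weights, resolver):
    """Weighted evidence coverage across dimensions.

    dimension_evidence: dict dimension -> list of evidence identifiers
    weights:            dict dimension -> non-negative weight
    resolver:           callable(identifier) -> bool (is the evidence resolvable?)
    """
    total_weight = sum(weights.values())
    weighted = 0.0
    for dim, weight in weights.items():
        ids = dimension_evidence.get(dim, [])
        # Missing dimensions (no evidence supplied) score 0.0
        dim_score = (sum(1 for i in ids if resolver(i)) / len(ids)) if ids else 0.0
        weighted += weight * dim_score
    # Normalize by total weight
    return weighted / total_weight if total_weight else 0.0
```

With the default weights, fully resolvable evidence in only Reachability and SbomCompleteness yields 0.25 + 0.25 = 0.50, which falls in the Yellow band and below the default 0.80 AI gating threshold.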

Coverage Levels (Badge Rendering)

| Level | Threshold | Meaning |
|---|---|---|
| Green | >= 80% (configurable) | Full evidence coverage, auto-processing eligible |
| Yellow | >= 50% (configurable) | Partial coverage, manual review recommended |
| Red | < 50% | Insufficient evidence, gating blocks promotion |

AI Gating Policy

The EvidenceCoveragePolicy record controls:

  • Per-dimension weights (must be non-negative)
  • AI gating threshold (default 0.80) — minimum overall score for auto-processing
  • Green/yellow badge thresholds

When MeetsAiGatingThreshold is false, the AIAuthorityClassifier's CanAutoProcess path should be blocked.

DI Registration

Registered via ProofChainServiceCollectionExtensions.AddProofChainServices():

  • IEvidenceCoverageScorer -> EvidenceCoverageScorer (TryAddSingleton)
  • Default evidence resolver returns false (no evidence resolvable) — Infrastructure layer overrides with a persistence-backed resolver

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.EvidenceCoverage

| Counter | Description |
|---|---|
| coverage.evaluations | Total coverage evaluations performed |
| coverage.gating.pass | Evaluations that met AI gating threshold |
| coverage.gating.fail | Evaluations that failed AI gating threshold |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Predicates/AI/EvidenceCoverageModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Predicates/AI/IEvidenceCoverageScorer.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Predicates/AI/EvidenceCoverageScorer.cs
  • DI: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/ProofChainServiceCollectionExtensions.cs

Test Coverage (24 tests)

  • Full coverage (all dimensions resolvable, Green level)
  • No evidence (empty inputs, Red, zero score)
  • Partial coverage (weighted score calculation)
  • Per-dimension breakdown (counts, reasons)
  • Missing dimensions (zero score)
  • Gating threshold (at threshold, below threshold)
  • Custom thresholds (coverage level boundaries)
  • Policy validation (negative weight, invalid threshold, green < yellow)
  • Null argument protection (policy, resolver, meter factory, subject ref, inputs, result)
  • Cancellation handling
  • Determinism (same inputs produce same results)
  • Default policy values
  • Reason text verification

Evidence Subgraph UI Visualization

Purpose

The Subgraph Visualization Service renders proof graph subgraphs into multiple visualization formats suitable for interactive frontend rendering. It bridges the existing IProofGraphService.GetArtifactSubgraphAsync() BFS traversal with UI-ready output in Mermaid, Graphviz DOT, and structured JSON formats.

Render Formats

| Format | Use Case | Output |
|---|---|---|
| Mermaid | Browser-side rendering via Mermaid.js | graph TD markup with class definitions |
| Dot | Static/server-side rendering via Graphviz | digraph markup with color/shape attributes |
| Json | Custom frontend rendering (D3.js, Cytoscape.js) | Structured {nodes, edges} JSON |

Visualization Models

VisualizationNode

| Field | Type | Description |
|---|---|---|
| Id | string | Unique node identifier |
| Label | string | Formatted display label (type + truncated digest) |
| Type | string | Node type string for icon/color selection |
| ContentDigest | string? | Full content digest for provenance verification |
| IsRoot | bool | Whether this is the subgraph root |
| Depth | int | BFS depth from root (for layout layering) |
| Metadata | ImmutableDictionary? | Optional key-value pairs for tooltips |

VisualizationEdge

| Field | Type | Description |
|---|---|---|
| Source | string | Source node ID |
| Target | string | Target node ID |
| Label | string | Human-readable edge type label |
| Type | string | Edge type string for styling |

Depth Computation

The service computes BFS depth from the root node bidirectionally through all edges, enabling hierarchical layout rendering. Unreachable nodes receive the maximum depth value.
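The bidirectional BFS can be sketched as follows (a Python illustration; here unreachable nodes receive one more than the maximum reachable depth, one plausible reading of "maximum depth value"):

```python
from collections import deque

def compute_depths(root, edges, node_ids):
    """BFS depth from the root, traversing edges in both directions."""
    # Build an undirected adjacency map so depth ignores edge direction
    adjacency = {n: [] for n in node_ids}
    for src, dst in edges:
        adjacency[src].append(dst)
        adjacency[dst].append(src)
    depths = {root: 0}
    queue = deque([root])
    while queue:
        current = queue.popleft()
        for neighbor in adjacency[current]:
            if neighbor not in depths:
                depths[neighbor] = depths[current] + 1
                queue.append(neighbor)
    # Nodes unreachable from the root get the maximum depth value
    unreachable_depth = max(depths.values(), default=0) + 1
    return {n: depths.get(n, unreachable_depth) for n in node_ids}
```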

Node Type Styling

| Node Type | Mermaid Shape | DOT Color |
|---|---|---|
| Artifact / Subject | [box] | #4CAF50 (green) |
| SbomDocument | ([stadium]) | #2196F3 (blue) |
| InTotoStatement / DsseEnvelope | [[subroutine]] | #FF9800 (orange) |
| VexStatement | ([stadium]) | #9C27B0 (purple) |
| RekorEntry | [(cylinder)] | #795548 (brown) |
| SigningKey / TrustAnchor | ((circle)) | #607D8B (blue-grey) |

DI Registration

ISubgraphVisualizationService -> SubgraphVisualizationService (TryAddSingleton)

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Graph/SubgraphVisualizationModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Graph/ISubgraphVisualizationService.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Graph/SubgraphVisualizationService.cs

Test Coverage (22 tests)

  • Empty subgraph rendering
  • Single node with root detection and depth
  • Multi-node depth computation (root=0, child=1, grandchild=2)
  • Mermaid format (graph directive, node/edge content, class definitions)
  • DOT format (digraph directive, node colors)
  • JSON format (valid JSON output)
  • Edge type labels (5 inline data tests)
  • Node type preservation (4 inline data tests)
  • Content digest truncation in labels
  • Cancellation handling
  • Null argument protection
  • Determinism (same input produces same output)
  • All three formats produce non-empty content (3 inline data tests)

Field-Level Ownership Map for Receipts and Bundles

Purpose

The Field-Level Ownership Map provides a machine-readable and human-readable document that maps each field in VerificationReceipt and VerificationCheck to the responsible module. This enables automated validation that fields are populated by their designated owner module, supporting audit trails and cross-module accountability.

Owner Modules

| Module | Responsibility |
|---|---|
| Core | Fundamental identifiers, timestamps, versions, tool digests |
| Signing | Key identifiers and signature-related fields |
| Rekor | Transparency log indices and inclusion proofs |
| Verification | Trust anchors, verification results, check details |
| SbomVex | SBOM/VEX document references |
| Provenance | Provenance and build attestation fields |
| Policy | Policy evaluation results |
| External | Fields populated by external integrations |

Ownership Map Structure

The FieldOwnershipMap record contains:

  • DocumentType — the document being mapped (e.g., "VerificationReceipt")
  • SchemaVersion — version of the ownership schema (default "1.0")
  • Entries — immutable list of FieldOwnershipEntry records

Each FieldOwnershipEntry declares:

  • FieldPath — dot-path or array-path (e.g., proofBundleId, checks[].keyId)
  • Owner — the OwnerModule responsible for populating the field
  • IsRequired — whether the field must be populated for validity
  • Description — human-readable purpose of the field

Default Receipt Ownership Map (14 entries)

| Field Path | Owner | Required |
|---|---|---|
| proofBundleId | Core | Yes |
| verifiedAt | Core | Yes |
| verifierVersion | Core | Yes |
| anchorId | Verification | Yes |
| result | Verification | Yes |
| checks | Verification | Yes |
| checks[].check | Verification | Yes |
| checks[].status | Verification | Yes |
| checks[].keyId | Signing | No |
| checks[].logIndex | Rekor | No |
| checks[].expected | Verification | No |
| checks[].actual | Verification | No |
| checks[].details | Verification | No |
| toolDigests | Core | No |

Validation

ValidateReceiptOwnershipAsync checks a VerificationReceipt against the ownership map:

  1. Iterates top-level fields, recording population status
  2. Expands per-check fields for each VerificationCheck entry
  3. Counts missing required fields
  4. Returns FieldOwnershipValidationResult with computed properties:
    • IsValid — true when MissingRequiredCount == 0
    • TotalFields — total field population records
    • PopulatedCount — fields that have values
    • ValidCount — fields with valid ownership
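The validation walk can be sketched in Python (an illustration of the steps above with simplified dict-based receipts; the entry shape and result keys mirror, but are not, the C# records):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OwnershipEntry:
    field_path: str
    owner: str
    is_required: bool

def validate_receipt_ownership(receipt: dict, entries: list[OwnershipEntry]):
    """Record population status of receipt fields against the ownership map."""
    missing_required = 0
    populated = 0
    total = 0
    for entry in entries:
        if entry.field_path.startswith("checks[]."):
            # Expand the per-check field across every VerificationCheck entry
            key = entry.field_path.split(".", 1)[1]
            values = [check.get(key) for check in receipt.get("checks", [])]
        else:
            values = [receipt.get(entry.field_path)]
        for value in values:
            total += 1
            if value not in (None, "", []):
                populated += 1
            elif entry.is_required:
                missing_required += 1
    return {"isValid": missing_required == 0,
            "totalFields": total,
            "populatedCount": populated,
            "missingRequiredCount": missing_required}
```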

DI Registration

IFieldOwnershipValidator -> FieldOwnershipValidator (TryAddSingleton)

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Receipts/FieldOwnershipModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Receipts/IFieldOwnershipValidator.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Receipts/FieldOwnershipValidator.cs

Test Coverage (24 tests)

  • Ownership map structure (document type, entry count, top-level fields, check fields)
  • Owner assignment theories (7 top-level + 4 check-level field-to-owner mappings)
  • Description completeness (all entries have descriptions)
  • Full receipt validation (valid, all populated, correct counts)
  • Minimal receipt validation (valid, optional fields not populated)
  • Empty checks validation (missing required → invalid)
  • Multi-check field expansion (fields per check entry)
  • Ownership validity (all fields valid in static map)
  • ValidatedAt propagation
  • Null receipt protection
  • Cancellation token handling
  • Determinism (same inputs produce same results)
  • Static map required/optional field markers
  • Computed property correctness

Idempotent SBOM/Attestation APIs

Purpose

The Idempotent Ingest Service provides content-hash-based deduplication for SBOM ingest and attestation verification operations. Duplicate submissions return the original result without creating duplicate records, ensuring safe retries and deterministic outcomes.

Architecture

The service builds on the existing IContentAddressedStore (CAS), which already provides SHA-256-based deduplication at the storage layer. The idempotent service adds:

  1. SBOM Ingest — wraps CAS PutAsync with SBOM-specific metadata (media type, tags, artifact type) and returns a typed SbomEntryId
  2. Attestation Verify — stores attestation in CAS, performs verification checks, and caches results by content hash in a ConcurrentDictionary
  3. Idempotency Key Support — optional client-provided keys that map to content digests, enabling safe retries even when content bytes differ

Idempotency Guarantees

| Scenario | Behavior |
|---|---|
| Same content, no key | CAS deduplicates by SHA-256 hash, returns Deduplicated = true |
| Same content, same key | Returns cached result via key lookup |
| Different content, same key | Returns original result mapped to the key |
| Same content, different key | Both keys map to the same digest |

Verification Checks

The baseline attestation verification performs three deterministic checks:

| Check | Description |
|---|---|
| content_present | Content is non-empty |
| digest_format | Valid SHA-256 digest format (71 chars) |
| json_structure | Content starts with { and ends with } |

Infrastructure layer may override with full DSSE/Rekor verification.
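The three baseline checks can be sketched as follows (a Python illustration of the documented checks; the 71-character format is the "sha256:" prefix plus 64 hex digits):

```python
import re

# "sha256:" (7 chars) + 64 lowercase hex digits = 71 chars total
SHA256_DIGEST = re.compile(r"^sha256:[0-9a-f]{64}$")

def baseline_verify(content: bytes, digest: str):
    """Run the three deterministic baseline checks, yielding (check, passed) pairs."""
    text = content.decode("utf-8", errors="replace").strip()
    return [
        ("content_present", len(content) > 0),
        ("digest_format", SHA256_DIGEST.fullmatch(digest) is not None),
        ("json_structure", text.startswith("{") and text.endswith("}")),
    ]
```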

Models

| Type | Description |
|---|---|
| SbomIngestRequest | Content, MediaType, Tags, optional IdempotencyKey |
| SbomIngestResult | Digest, Deduplicated, Artifact, SbomEntryId |
| AttestationVerifyRequest | Content, MediaType, optional IdempotencyKey |
| AttestationVerifyResult | Digest, CacheHit, Verified, Summary, Checks, VerifiedAt |
| AttestationCheckResult | Check, Passed, Details |
| IdempotencyKeyEntry | Key, Digest, CreatedAt, OperationType |

DI Registration

IIdempotentIngestService -> IdempotentIngestService (TryAddSingleton factory)

  • Resolves: IContentAddressedStore, optional TimeProvider, IMeterFactory

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.Idempotency

| Counter | Description |
|---|---|
| idempotent.sbom.ingests | Total SBOM ingest operations |
| idempotent.sbom.deduplications | SBOM submissions that were deduplicated |
| idempotent.attest.verifications | Total attestation verifications (non-cached) |
| idempotent.attest.cache_hits | Attestation verifications served from cache |
| idempotent.key.hits | Idempotency key lookups that found existing entries |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Idempotency/IdempotentIngestModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Idempotency/IIdempotentIngestService.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Idempotency/IdempotentIngestService.cs

Test Coverage (30 tests)

  • SBOM ingest: first submission, duplicate dedup, different content, tags, idempotency key retry, empty content, empty media type, null request, cancellation, artifact type
  • Attestation verify: first submission, duplicate cache hit, JSON structure pass/fail, content check, digest check, idempotency key, null request, empty content, cancellation, determinism, summary text
  • Idempotency key lookup: unknown key, after ingest, after verify, null key
  • Constructor validation: null store, null meter factory, null time provider

Regulatory Compliance Report Generator (NIS2/DORA/ISO-27001/EU CRA)

Purpose

The Compliance Report Generator provides a static registry of regulatory controls and maps evidence artifacts to regulatory requirements. It generates compliance reports that identify which controls are satisfied by available evidence and which have gaps, enabling auditable regulatory alignment for release decisions.

Supported Frameworks

| Framework | Controls | Description |
|---|---|---|
| NIS2 | 5 | EU Network and Information Security Directive 2 |
| DORA | 5 | EU Digital Operational Resilience Act |
| ISO-27001 | 6 | ISO/IEC 27001 Information Security Management |
| EU CRA | 4 | EU Cyber Resilience Act |

Evidence Artifact Types

| Type | Description |
|---|---|
| Sbom | Software Bill of Materials |
| VexStatement | Vulnerability Exploitability Exchange statement |
| SignedAttestation | Signed attestation envelope |
| TransparencyLogEntry | Rekor transparency log entry |
| VerificationReceipt | Proof of verification |
| ProofBundle | Bundled evidence pack |
| ReachabilityAnalysis | Binary fingerprint or reachability analysis |
| PolicyEvaluation | Policy evaluation result |
| ProvenanceAttestation | Build origin proof |
| IncidentReport | Incident response documentation |

Control Registry (20 controls)

NIS2 Controls

| ID | Category | Satisfied By |
|---|---|---|
| NIS2-Art21.2d | Supply Chain Security | SBOM, VEX, Provenance |
| NIS2-Art21.2e | Supply Chain Security | VEX, Reachability |
| NIS2-Art21.2a | Risk Management | Policy, Attestation |
| NIS2-Art21.2g | Risk Management | Receipt, ProofBundle |
| NIS2-Art23 | Incident Management | Incident, Transparency |

DORA Controls

| ID | Category | Satisfied By |
|---|---|---|
| DORA-Art6.1 | ICT Risk Management | Policy, Attestation |
| DORA-Art9.1 | ICT Risk Management | Attestation, Receipt, ProofBundle |
| DORA-Art17 | Incident Classification | Incident, VEX |
| DORA-Art28 | Third-Party Risk | SBOM, Provenance, Reachability |
| DORA-Art11 | ICT Risk Management (optional) | ProofBundle, Transparency |

ISO-27001 Controls

| ID | Category | Satisfied By |
|---|---|---|
| A.8.28 | Application Security | SBOM, Reachability, Provenance |
| A.8.9 | Configuration Management | Policy, Attestation |
| A.8.8 | Vulnerability Management | VEX, Reachability, SBOM |
| A.5.23 | Cloud Security (optional) | Provenance, ProofBundle |
| A.5.37 | Operations Security | Receipt, Transparency |
| A.5.21 | Supply Chain Security | SBOM, VEX, Provenance |

EU CRA Controls

| ID | Category | Satisfied By |
|---|---|---|
| CRA-AnnexI.2.1 | Product Security | SBOM |
| CRA-AnnexI.2.5 | Vulnerability Management | VEX, Reachability |
| CRA-Art11 | Vulnerability Management | VEX, Incident, Transparency |
| CRA-AnnexI.1.2 | Product Security | Policy, Attestation, Receipt |

Report Structure

ComplianceReport computed properties:

  • CompliancePercentage — ratio of satisfied to total controls
  • MandatoryGapCount — mandatory controls not satisfied
  • MeetsMinimumCompliance — true when all mandatory controls satisfied
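The computed properties can be sketched in Python (an illustration of the arithmetic; control tuples stand in for the C# records):

```python
def summarize_compliance(controls):
    """controls: list of (control_id, is_mandatory, is_satisfied) tuples."""
    total = len(controls)
    satisfied = sum(1 for _, _, ok in controls if ok)
    # Gaps only count toward minimum compliance when the control is mandatory
    mandatory_gaps = sum(1 for _, mandatory, ok in controls if mandatory and not ok)
    return {
        "compliancePercentage": (satisfied / total * 100.0) if total else 0.0,
        "mandatoryGapCount": mandatory_gaps,
        "meetsMinimumCompliance": mandatory_gaps == 0,
    }
```

An unsatisfied optional control (such as DORA-Art11) lowers the percentage but does not block minimum compliance.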

DI Registration

IComplianceReportGenerator -> ComplianceReportGenerator (TryAddSingleton factory)

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.Compliance

| Counter | Description |
|---|---|
| compliance.reports.generated | Total compliance reports generated |
| compliance.controls.evaluated | Total individual controls evaluated |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Compliance/RegulatoryComplianceModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Compliance/IComplianceReportGenerator.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Compliance/ComplianceReportGenerator.cs

Test Coverage (26 tests)

  • Supported frameworks (count and membership)
  • Control counts per framework (4 theories)
  • Control ID presence per framework (4 theories)
  • Framework assignment and required field validation
  • Full evidence → 100% compliance (4 theories)
  • No evidence → 0% compliance (4 theories)
  • Partial evidence → partial compliance
  • Subject ref and framework recording
  • Generated timestamp
  • Artifact ref tracing
  • Gap descriptions (present for unsatisfied, absent for satisfied)
  • Null subject/evidence protection
  • Cancellation token
  • Determinism
  • Constructor validation
  • Mandatory vs optional controls
  • NIS2 control categories (5 theories)

The LinkCapture subsystem provides in-toto link attestation capture for supply chain step recording. It captures materials (inputs) and products (outputs) with content-addressed deduplication, enabling CI pipeline step evidence collection.

Domain Model

| Record | Purpose |
|---|---|
| CapturedMaterial | Input artifact (URI + digest map) |
| CapturedProduct | Output artifact (URI + digest map) |
| CapturedEnvironment | Execution context (hostname, OS, variables) |
| LinkCaptureRequest | Capture request with step, functionary, command, materials, products, env, byproducts, pipeline/step IDs |
| LinkCaptureResult | Result with content-addressed digest, dedup flag, stored record |
| CapturedLinkRecord | Stored link with all fields + CapturedAt timestamp |
| LinkCaptureQuery | Query filter: step name, functionary, pipeline ID, limit |

Deduplication

Content-addressed deduplication uses canonical hashing:

  • Canonical form: step name + functionary + command + sorted materials + sorted products
  • Environment and byproducts are excluded from the digest to ensure deterministic deduplication across different execution contexts
  • SHA-256 digest with sha256: prefix
  • Materials and products sorted by URI (ordinal) before hashing
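The canonical digest can be sketched as follows (a Python illustration of the rules above; the separator and pair encoding are simplifications, not the exact C# canonical form):

```python
import hashlib

def link_digest(step, functionary, command, materials, products):
    """Canonical content digest for a link capture.

    materials/products: lists of (uri, digest) pairs. Environment and
    byproducts are deliberately excluded so the digest stays stable
    across different execution contexts.
    """
    parts = [step, functionary, command]
    # Materials and products sorted by URI (ordinal) before hashing
    parts += [f"{uri}={d}" for uri, d in sorted(materials)]
    parts += [f"{uri}={d}" for uri, d in sorted(products)]
    raw = "\n".join(parts).encode("utf-8")
    return "sha256:" + hashlib.sha256(raw).hexdigest()
```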

Service Interface

ILinkCaptureService:

  • CaptureAsync(LinkCaptureRequest) → LinkCaptureResult — idempotent capture
  • GetByDigestAsync(string digest) → CapturedLinkRecord? — lookup by content digest
  • QueryAsync(LinkCaptureQuery) → ImmutableArray<CapturedLinkRecord> — filtered query (case-insensitive, ordered by descending timestamp)

DI Registration

ILinkCaptureService -> LinkCaptureService (TryAddSingleton factory)

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.LinkCapture

| Counter | Description |
|---|---|
| link.captures | Total new link attestations captured |
| link.deduplications | Total deduplicated captures |
| link.queries | Total query operations |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/LinkCapture/LinkCaptureModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/LinkCapture/ILinkCaptureService.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/LinkCapture/LinkCaptureService.cs

Test Coverage (30 tests)

  • Basic capture with digest, step, functionary verification
  • Timestamp from TimeProvider
  • Materials and products recording
  • Environment and byproducts recording
  • Pipeline/step ID recording
  • Deduplication (same request returns deduplicated=true)
  • Different step/functionary/materials produce different digests
  • Deterministic digest (material order invariance)
  • Environment excluded from digest
  • Null/empty validation (request, step, functionary)
  • Cancellation token handling
  • GetByDigest (found, not found, null, cancelled)
  • Query by step name, functionary, pipeline ID
  • Case-insensitive query filtering
  • Empty store query
  • No-filter returns all
  • Limit enforcement
  • Descending timestamp ordering
  • Constructor validation

Monthly Bundle Rotation and Re-Signing (Sprint 016)

The BundleRotation subsystem provides scheduled key rotation for DSSE-signed bundles. It verifies bundles with the old key, re-signs them with a new key, and records a transition attestation for audit trail.

Domain Model

| Record | Purpose |
|---|---|
| RotationStatus | Enum: Pending, Verified, ReSigned, Completed, Failed, Skipped |
| RotationCadence | Enum: Monthly, Quarterly, OnDemand |
| KeyTransition | Old/new key IDs, algorithm, effective date, grace period |
| BundleRotationRequest | Rotation cycle request with transition, bundle digests, cadence, tenant |
| BundleRotationEntry | Per-bundle result (original/new digest, status, error) |
| BundleRotationResult | Full cycle result with computed SuccessCount/FailureCount/SkippedCount |
| TransitionAttestation | Audit record: attestation ID, rotation ID, result digest, counts |
| RotationScheduleEntry | Schedule config: cadence, next/last rotation, current key, enabled |
| RotationHistoryQuery | Query filter: tenant, key ID, status, limit |

Re-Signing Workflow

  1. Validate request (rotation ID, key IDs, bundle digests)
  2. Verify old key and new key exist in IProofChainKeyStore
  3. For each bundle: verify with old key → compute re-signed digest → record entry
  4. Determine overall status from individual entries
  5. Create TransitionAttestation with result digest for integrity verification
  6. Store in rotation history

Service Interface

IBundleRotationService:

  • RotateAsync(BundleRotationRequest) → BundleRotationResult — execute rotation cycle
  • GetTransitionAttestationAsync(string rotationId) → TransitionAttestation? — get audit attestation
  • QueryHistoryAsync(RotationHistoryQuery) → ImmutableArray<BundleRotationResult> — query history
  • ComputeNextRotationDate(RotationCadence, DateTimeOffset?) → DateTimeOffset — schedule computation
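Schedule computation can be sketched in Python (an illustration of the documented cadence arithmetic: monthly adds one month, quarterly adds three, on-demand rotates immediately; end-of-month day clamping is ignored in this sketch):

```python
from datetime import datetime, timezone

def next_rotation_date(cadence, last, now):
    """Compute the next rotation date from the cadence and the last rotation.

    When no last rotation exists, the current time is used as the base.
    """
    base = last or now
    if cadence == "Monthly":
        months = base.month + 1
    elif cadence == "Quarterly":
        months = base.month + 3
    else:  # OnDemand rotates immediately
        return now
    # Carry month overflow into the year
    year = base.year + (months - 1) // 12
    month = (months - 1) % 12 + 1
    return base.replace(year=year, month=month)
```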

DI Registration

IBundleRotationService -> BundleRotationService (TryAddSingleton factory, requires IProofChainKeyStore)

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.Signing.Rotation

| Counter | Description |
|---|---|
| rotation.cycles.started | Total rotation cycles initiated |
| rotation.cycles.completed | Total rotation cycles completed |
| rotation.bundles.resigned | Total bundles successfully re-signed |
| rotation.bundles.skipped | Total bundles skipped |
| rotation.bundles.failed | Total bundles that failed rotation |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Signing/BundleRotationModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Signing/IBundleRotationService.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Signing/BundleRotationService.cs

Test Coverage (35 tests)

  • Basic rotation (completed result, success count, new digests, transition, timestamps)
  • Key validation (old key missing, new key missing → all fail)
  • Empty bundle digest → entry fails
  • Argument validation (null request, empty rotation ID, empty bundles, empty key IDs, cancellation)
  • Transition attestation (created after rotation, has result digest, records transition, not found for unknown, null/cancel)
  • Query history (empty, after rotation, filter by key ID, filter by status, limit, null/cancel)
  • Schedule computation (monthly +1 month, quarterly +3 months, on-demand immediate, null last uses current time)
  • Determinism (same inputs → same re-signed digests)
  • Constructor validation (null key store, null meter factory, null time provider OK)

Noise Ledger — Audit Log of Suppressions (Sprint 017)

The NoiseLedger subsystem provides an auditable, queryable log of all suppression decisions in the attestation pipeline. It records VEX overrides, alert deduplications, policy-based suppressions, operator acknowledgments, and false positive determinations.

Domain Model

| Type | Purpose |
|---|---|
| SuppressionCategory | Enum: VexOverride, AlertDedup, PolicyRule, OperatorAck, SeverityFilter, ComponentExclusion, FalsePositive |
| FindingSeverity | Enum: None, Low, Medium, High, Critical |
| NoiseLedgerEntry | Immutable record with digest, finding, category, severity, component, justification, suppressor, timestamps, expiry, evidence |
| RecordSuppressionRequest | Request to log a suppression |
| RecordSuppressionResult | Result with digest, dedup flag, entry |
| NoiseLedgerQuery | Query filter: finding, category, severity, component, suppressor, tenant, active-only, limit |
| SuppressionStatistics | Aggregated counts by category, severity, active/expired |

Deduplication

Entries are content-addressed using a SHA-256 digest of the canonical form: findingId + category + severity + componentRef + suppressedBy + justification.

Service Interface

INoiseLedgerService:

  • RecordAsync(RecordSuppressionRequest) → RecordSuppressionResult — idempotent record
  • GetByDigestAsync(string) → NoiseLedgerEntry? — lookup by digest
  • QueryAsync(NoiseLedgerQuery) → ImmutableArray<NoiseLedgerEntry> — filtered query
  • GetStatisticsAsync(string? tenantId) → SuppressionStatistics — aggregated stats

DI Registration

INoiseLedgerService -> NoiseLedgerService (TryAddSingleton factory)

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.Audit.NoiseLedger

| Counter | Description |
|---|---|
| noise.suppressions.recorded | New suppression entries |
| noise.suppressions.deduplicated | Deduplicated entries |
| noise.queries.executed | Query operations |
| noise.statistics.computed | Statistics computations |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Audit/NoiseLedgerModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Audit/INoiseLedgerService.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Audit/NoiseLedgerService.cs

Test Coverage (34 tests)

  • Basic recording (digest, timestamp, all fields, evidence, correlation)
  • Deduplication (same request, different finding/category)
  • Validation (null, empty findingId/componentRef/justification/suppressedBy, cancellation)
  • GetByDigest (found, not found, null)
  • Query by findingId, category, severity, componentRef, active-only
  • No-filter returns all, limit enforcement
  • Statistics: empty, by category, by severity, active/expired tracking
  • IsExpired model method (expired, no expiration)
  • Constructor validation
  • Determinism (same inputs → same digest)

PostgreSQL Persistence Layer — Schema Isolation, RLS, Temporal Tables

Sprint: SPRINT_20260208_018_Attestor_postgresql_persistence_layer

Purpose

The Schema Isolation Service manages per-module PostgreSQL schema isolation, Row-Level Security (RLS) policy scaffolding, and temporal table configuration for Attestor persistence modules. It generates SQL statements for schema provisioning, tenant isolation, and history tracking without modifying the existing ProofChainDbContext or entity classes.

Schema Registry

Five schema assignments covering all Attestor persistence modules:

| Schema | PostgreSQL Name | Tables |
|---|---|---|
| ProofChain | proofchain | sbom_entries, dsse_envelopes, spines, trust_anchors, rekor_entries, audit_log |
| Attestor | attestor | rekor_submission_queue, submission_state |
| Verdict | verdict | verdict_ledger, verdict_policies |
| Watchlist | watchlist | watched_identities, identity_alerts, alert_dedup |
| Audit | audit | noise_ledger, hash_audit_log, suppression_stats |

RLS Policy Coverage

Tenant isolation policies are defined for schemas that contain tenant-scoped data:

  • Verdict: verdict_ledger, verdict_policies
  • Watchlist: watched_identities, identity_alerts
  • Attestor: rekor_submission_queue
  • Audit: noise_ledger
  • ProofChain: No RLS (shared read-only reference data)

All policies use tenant_id column with current_setting('app.current_tenant') expression.

Temporal Table Configuration

Three tables configured for system-versioned history tracking:

| Table | History Table | Retention |
|---|---|---|
| verdict.verdict_ledger | verdict.verdict_ledger_history | 7 years |
| watchlist.watched_identities | watchlist.watched_identities_history | 1 year |
| audit.noise_ledger | audit.noise_ledger_history | 7 years |

Temporal tables use PostgreSQL trigger-based versioning with sys_period_start/sys_period_end period columns.

SQL Generation (Not Execution)

The service generates SQL statements for operators to review and execute:

  • Provisioning: CREATE SCHEMA IF NOT EXISTS, GRANT USAGE, default privileges, documentation comments
  • RLS: ENABLE ROW LEVEL SECURITY, FORCE ROW LEVEL SECURITY, CREATE POLICY with tenant isolation
  • Temporal: Period column addition, history table creation, trigger functions, trigger attachment
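The RLS portion of the generator can be sketched as follows (a Python illustration that emits, but does not execute, tenant-isolation statements for one table; the policy name is a hypothetical placeholder):

```python
def rls_statements(schema: str, table: str, policy: str = "tenant_isolation") -> list[str]:
    """Generate tenant-isolation RLS statements for operator review."""
    qualified = f"{schema}.{table}"
    # All policies filter on tenant_id against the session's current tenant
    using = "tenant_id = current_setting('app.current_tenant')"
    return [
        f"ALTER TABLE {qualified} ENABLE ROW LEVEL SECURITY;",
        f"ALTER TABLE {qualified} FORCE ROW LEVEL SECURITY;",
        f"CREATE POLICY {policy} ON {qualified} USING ({using});",
    ]
```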

DI Registration

PersistenceServiceCollectionExtensions.AddAttestorPersistence() registers ISchemaIsolationService as a singleton with TimeProvider and IMeterFactory.

OTel Metrics

Meter: StellaOps.Attestor.Persistence.SchemaIsolation

| Counter | Description |
|---|---|
| schema.provisioning.operations | Schema provisioning SQL generations |
| schema.rls.operations | RLS policy SQL generations |
| schema.temporal.operations | Temporal table SQL generations |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.Persistence/SchemaIsolationModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.Persistence/ISchemaIsolationService.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.Persistence/SchemaIsolationService.cs
  • DI: src/Attestor/__Libraries/StellaOps.Attestor.Persistence/PersistenceServiceCollectionExtensions.cs

Test Coverage (40 tests)

  • GetAssignment per schema (5 schemas, correct names, table counts)
  • Invalid schema throws ArgumentException
  • GetAllAssignments returns all five, all have tables
  • Provisioning SQL: CREATE SCHEMA, GRANT, default privileges, comment, timestamp, statement count
  • RLS policies per schema (Verdict has policies, ProofChain empty, all have tenant_id, UsingExpression)
  • RLS SQL: ENABLE/FORCE/CREATE POLICY, permissive mode, empty for ProofChain, multiple for Watchlist
  • Temporal tables: count, retention values per table, history table names
  • Temporal SQL: period columns, history table, trigger function, trigger, retention comment, statement count
  • GetSummary: complete data, ProvisionedCount, RlsEnabledCount, timestamp
  • Constructor validation (null TimeProvider fallback, null MeterFactory throws)
  • Cross-schema consistency (RLS references valid schemas, temporal references valid schemas)
  • Determinism (provisioning, RLS, temporal SQL produce identical output)

S3/MinIO/GCS Object Storage for Tiles

Sprint: SPRINT_20260208_019_Attestor_s3_minio_gcs_object_storage_for_tiles

Purpose

The object storage subsystem provides a pluggable abstraction for the Content-Addressed Store (CAS), enabling durable blob storage via S3-compatible backends (AWS S3, MinIO, Wasabi), Google Cloud Storage, or the local filesystem. The existing InMemoryContentAddressedStore is complemented by ObjectStorageContentAddressedStore, which delegates to an IObjectStorageProvider for persistence.

Architecture

IContentAddressedStore (existing interface)
├── InMemoryContentAddressedStore (existing, for tests)
└── ObjectStorageContentAddressedStore (new, durable)
        └── delegates to IObjectStorageProvider
                ├── FileSystemObjectStorageProvider (offline/air-gap)
                ├── S3-compatible (AWS/MinIO/Wasabi) — future
                └── GCS — future

Provider Interface

IObjectStorageProvider defines five low-level operations:

  • PutAsync — Store a blob by key, idempotent with write-once support
  • GetAsync — Retrieve blob content and metadata by key
  • ExistsAsync — Check blob existence
  • DeleteAsync — Remove a blob (blocked in WORM mode)
  • ListAsync — List blobs with prefix filtering and pagination
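The five operations form a small storage contract. As a language-neutral sketch (the real contract is the C# IObjectStorageProvider with async signatures, metadata records, and cancellation tokens; the Python names below are illustrative analogues), a Protocol plus a minimal dict-backed fake shows the shape:

```python
from typing import Optional, Protocol


class ObjectStorageProvider(Protocol):
    """Python analogue of the five-operation provider surface (illustrative)."""

    def put(self, key: str, content: bytes) -> bool: ...    # idempotent; write-once aware
    def get(self, key: str) -> Optional[bytes]: ...
    def exists(self, key: str) -> bool: ...
    def delete(self, key: str) -> bool: ...                 # blocked in WORM mode
    def list(self, prefix: str = "") -> list[str]: ...      # prefix filtering


class InMemoryProvider:
    """Minimal dict-backed provider satisfying the Protocol (write-once)."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, content: bytes) -> bool:
        if key in self._blobs:
            return False  # idempotent: the existing blob is left untouched
        self._blobs[key] = content
        return True

    def get(self, key: str) -> Optional[bytes]:
        return self._blobs.get(key)

    def exists(self, key: str) -> bool:
        return key in self._blobs

    def delete(self, key: str) -> bool:
        return self._blobs.pop(key, None) is not None

    def list(self, prefix: str = "") -> list[str]:
        return sorted(k for k in self._blobs if k.startswith(prefix))
```

Any backend (filesystem, S3, GCS) that honors these five semantics can back the CAS without the store layer changing.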

Storage Layout

  • Content blobs: blobs/sha256:<hex> — raw content
  • Metadata sidecars: meta/sha256:<hex>.json — JSON with artifact type, tags, timestamps
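Because the layout is content-addressed, both keys derive deterministically from the blob's SHA-256. A minimal Python sketch (helper name and prefix handling are illustrative, not the shipped API):

```python
import hashlib


def cas_keys(content: bytes, prefix: str = "") -> tuple[str, str]:
    """Derive the blob key and metadata-sidecar key for a content blob.

    Mirrors the documented layout: blobs/sha256:<hex> and
    meta/sha256:<hex>.json. Same bytes always yield the same keys.
    """
    hex_digest = hashlib.sha256(content).hexdigest()
    blob_key = f"{prefix}blobs/sha256:{hex_digest}"
    meta_key = f"{prefix}meta/sha256:{hex_digest}.json"
    return blob_key, meta_key
```

Deduplication falls out of this scheme: a second put of identical bytes targets an already-existing key, so the provider can skip the write and count it as a deduplication.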

Configuration

ObjectStorageConfig selects the backend and connection details:

| Property | Description |
| --- | --- |
| Provider | FileSystem, S3Compatible, or Gcs |
| RootPath | Root directory (FileSystem only) |
| BucketName | S3/GCS bucket name |
| EndpointUrl | Custom endpoint (MinIO, localstack) |
| Region | AWS/GCS region |
| Prefix | Key prefix for namespace isolation |
| EnforceWriteOnce | WORM mode (prevents deletes and overwrites) |
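Bound to configuration, a MinIO-backed deployment might look like the fragment below. This is an illustrative sketch only: the section name, bucket, and endpoint are hypothetical values, not a documented contract.

```json
{
  "ObjectStorage": {
    "Provider": "S3Compatible",
    "BucketName": "stella-attestor-tiles",
    "EndpointUrl": "http://minio.internal:9000",
    "Region": "us-east-1",
    "Prefix": "cas/",
    "EnforceWriteOnce": true
  }
}
```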

FileSystem Provider

  • Atomic writes via temp file + rename
  • Metadata stored as .meta sidecar files
  • WORM enforcement: skips overwrite, blocks delete
  • Offset-based pagination for listing
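The write path (temp file + rename, WORM skip-on-overwrite) can be sketched as follows. This is a simplified Python analogue of the behavior described above, not the shipped C# code; key escaping and error handling are assumptions.

```python
import os
import tempfile


def put_write_once(root: str, key: str, content: bytes, enforce_worm: bool = True) -> bool:
    """Store a blob atomically; returns False when an existing blob is kept.

    The ':' in keys like 'blobs/sha256:<hex>' is escaped here for
    portability; the real provider's escaping may differ.
    """
    path = os.path.join(root, key.replace(":", "_"))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    if enforce_worm and os.path.exists(path):
        return False  # WORM: skip the overwrite, keep the original bytes
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        with os.fdopen(fd, "wb") as handle:
            handle.write(content)
        os.replace(tmp, path)  # atomic rename: readers never see a partial blob
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)  # clean up only if the rename did not happen
    return True
```

Writing to a temp file in the same directory and renaming into place guarantees readers observe either the old state or the complete new blob, never a torn write.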

DI Registration

IObjectStorageProvider → FileSystemObjectStorageProvider is registered via TryAddSingleton in ProofChainServiceCollectionExtensions. Override with an S3/GCS provider for cloud deployments.

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.Cas.FileSystem

| Counter | Description |
| --- | --- |
| objectstorage.fs.puts | Filesystem put operations |
| objectstorage.fs.gets | Filesystem get operations |
| objectstorage.fs.deletes | Filesystem delete operations |

Meter: StellaOps.Attestor.ProofChain.Cas.ObjectStorage

| Counter | Description |
| --- | --- |
| cas.objectstorage.puts | CAS put via object storage |
| cas.objectstorage.deduplications | Deduplicated puts |
| cas.objectstorage.gets | CAS get via object storage |
| cas.objectstorage.deletes | CAS delete via object storage |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Cas/ObjectStorageModels.cs
  • Provider interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Cas/IObjectStorageProvider.cs
  • Filesystem provider: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Cas/FileSystemObjectStorageProvider.cs
  • CAS bridge: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Cas/ObjectStorageContentAddressedStore.cs

Test Coverage (42 tests)

ObjectStorageContentAddressedStore (27 tests):

  • Put: store, dedup, null/empty-media-type throws, tags, related digests, timestamp
  • Get: retrieves, missing returns null, null/empty throws
  • Exists: true for stored, false for missing
  • Delete: removes, false for missing
  • List: returns all, filters by type, respects limit
  • Statistics: accurate counts, dedup tracking
  • Constructor validation (null provider/meterFactory, null timeProvider fallback)
  • Determinism: same content → same digest

FileSystemObjectStorageProvider (13 tests):

  • Put: store and retrieve, write-once enforcement
  • Exists: true/false
  • Delete: removes, false for missing, blocked in WORM mode
  • List: returns stored, empty directory
  • Metadata preservation
  • Constructor validation (null config, empty root, null meterFactory)

ObjectStorageModels (5 tests):

  • Default values for config, put request, get result, list query
  • Provider kind enum count
  • Determinism

Score Replay and Verification

Sprint: SPRINT_20260208_020_Attestor_score_replay_and_verification

Purpose

Enables deterministic replay of verdict scores by re-executing scoring computations with captured inputs, comparing original and replayed scores to quantify divergence, and producing DSSE-ready attestations with payload type application/vnd.stella.score+json.

Architecture

The score replay service sits alongside the existing AI artifact replay infrastructure in ProofChain/Replay/ and provides:

  1. Score Replay — Re-executes deterministic scoring from captured inputs (policy weights, coverage data, severity), computing a replayed score and determinism hash
  2. Score Comparison — Compares two replay results, quantifying divergence and identifying specific differences (score, hash, status)
  3. DSSE Attestation — Produces JSON-encoded attestation payloads ready for DSSE signing with application/vnd.stella.score+json payload type

Deterministic Scoring

  • Inputs sorted by key (ordinal) for canonical ordering
  • Weighted average of numeric values, normalized to [0, 1]
  • Weight inputs identified by key containing "weight"
  • Non-numeric inputs silently ignored
  • Determinism hash computed from canonical key=value\n format
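A minimal Python sketch of these rules follows. How weights pair with values is an assumption here; the shipped logic lives in ScoreReplayService.cs and may differ in detail.

```python
import hashlib


def compute_score(inputs: dict[str, str]) -> float:
    """Weighted average of numeric inputs, clamped to [0, 1] (illustrative)."""
    items = sorted(inputs.items())  # ordinal key sort for canonical ordering
    values: list[float] = []
    weights: list[float] = []
    for key, raw in items:
        try:
            num = float(raw)
        except ValueError:
            continue  # non-numeric inputs are silently ignored
        if "weight" in key:
            weights.append(num)  # weight inputs identified by key substring
        else:
            values.append(num)
    if not values:
        return 0.0
    # Assumption: pair values with weights in order, defaulting to 1.0.
    w = weights + [1.0] * (len(values) - len(weights))
    total = sum(v * wi for v, wi in zip(values, w))
    score = total / sum(w[: len(values)])
    return min(max(score, 0.0), 1.0)  # clamp to [0, 1]


def determinism_hash(inputs: dict[str, str]) -> str:
    """SHA-256 over the canonical key=value\\n serialization."""
    canonical = "".join(f"{k}={v}\n" for k, v in sorted(inputs.items()))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because both functions depend only on the sorted input set, replaying the same captured inputs reproduces the same score and hash, which is exactly what Matched/Diverged status detection relies on.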

Models

| Type | Description |
| --- | --- |
| ScoreReplayRequest | Replay request with verdict ID, original score, scoring inputs |
| ScoreReplayResult | Result with replay digest, status, replayed/original scores, divergence, determinism hash |
| ScoreReplayStatus | Matched, Diverged, FailedMissingInputs, FailedError |
| ScoreComparisonRequest | Request to compare two replays by digest |
| ScoreComparisonResult | Comparison with divergence, determinism flag, difference details |
| ScoreReplayAttestation | DSSE-ready attestation with JSON payload and signing key slot |
| ScoreReplayQuery | Query with verdict ID, tenant, status, limit filters |

DI Registration

IScoreReplayService → ScoreReplayService registered via TryAddSingleton in ProofChainServiceCollectionExtensions.

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.Replay.Score

| Counter | Description |
| --- | --- |
| score.replays.executed | Total replay executions |
| score.replays.matched | Replays matching original score |
| score.replays.diverged | Replays diverging from original |
| score.comparisons.executed | Comparison operations |
| score.attestations.created | Attestation productions |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/ScoreReplayModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/IScoreReplayService.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/ScoreReplayService.cs

Test Coverage (37 tests)

  • ReplayAsync: produces digest, matched/diverged status, duration, determinism hash match/mismatch, null original hash, empty inputs, validation (null request, empty verdictId, cancellation)
  • CompareAsync: identical results deterministic, divergent reports differences, null validation
  • CreateAttestationAsync: payload type, valid JSON, null signing key, null validation
  • GetByDigestAsync: stored result, missing returns null, null throws
  • QueryAsync: no filter, verdict ID filter, status filter, limit enforcement, null throws
  • ComputeScore: empty inputs, non-numeric ignored, deterministic, clamped [0,1]
  • ComputeDeterminismHash: same inputs same hash, different inputs different hash
  • Constructor validation (null meterFactory throws, null timeProvider fallback)

VEX Receipt Sidebar

Converts VerificationReceipt domain objects into sidebar-ready DTOs for the UI, providing a formatted view of DSSE signature verification, Rekor inclusion proofs, and per-check results.

Architecture

  1. FormatReceipt — Converts a VerificationReceipt into a ReceiptSidebarDetail: maps ProofBundleId.Digest → string, TrustAnchorId.Value → string, iterates checks to build the ReceiptCheckDetail list, derives the overall ReceiptVerificationStatus from pass/fail counts, and sets DsseVerified and RekorInclusionVerified by scanning check names for DSSE/Rekor keywords
  2. GetDetailAsync — Looks up registered receipt by bundle ID, returns ReceiptSidebarDetail with optional check and tool digest exclusion
  3. GetContextAsync — Returns VexReceiptSidebarContext combining receipt detail with VEX decision, justification, evidence refs, and finding metadata; falls back to receipt-only context when no explicit context is registered

Verification Status Derivation

| Condition | Status |
| --- | --- |
| No checks present | Unverified |
| All checks pass | Verified |
| Some pass, some fail | PartiallyVerified |
| All checks fail | Failed |
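The derivation reduces to a pass/fail count over the check list, sketched here in Python (enum and function names mirror, but are not, the C# implementation):

```python
from enum import Enum


class ReceiptVerificationStatus(Enum):
    VERIFIED = "Verified"
    PARTIALLY_VERIFIED = "PartiallyVerified"
    UNVERIFIED = "Unverified"
    FAILED = "Failed"


def derive_status(check_results: list[bool]) -> ReceiptVerificationStatus:
    """Fold per-check pass/fail flags into the overall sidebar status."""
    if not check_results:
        return ReceiptVerificationStatus.UNVERIFIED  # no checks present
    passed = sum(check_results)
    if passed == len(check_results):
        return ReceiptVerificationStatus.VERIFIED    # all checks pass
    if passed == 0:
        return ReceiptVerificationStatus.FAILED      # all checks fail
    return ReceiptVerificationStatus.PARTIALLY_VERIFIED
```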

Models

| Type | Description |
| --- | --- |
| ReceiptVerificationStatus | Verified, PartiallyVerified, Unverified, Failed |
| ReceiptCheckDetail | Single check formatted for sidebar (Name, Passed, KeyId?, LogIndex?, Detail?) |
| ReceiptSidebarDetail | Full receipt DTO with computed TotalChecks/PassedChecks/FailedChecks, DsseVerified, RekorInclusionVerified |
| VexReceiptSidebarContext | Receipt + Decision + Justification + EvidenceRefs + finding metadata |
| ReceiptSidebarRequest | Query by BundleId with IncludeChecks/IncludeToolDigests flags |

DI Registration

IReceiptSidebarService → ReceiptSidebarService registered via TryAddSingleton in ProofChainServiceCollectionExtensions.

OTel Metrics

Meter: StellaOps.Attestor.ProofChain.Receipts.Sidebar

| Counter | Description |
| --- | --- |
| sidebar.detail.total | Sidebar detail requests |
| sidebar.context.total | Sidebar context requests |
| sidebar.format.total | Receipts formatted for sidebar |

Source Files

  • Models: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Receipts/ReceiptSidebarModels.cs
  • Interface: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Receipts/IReceiptSidebarService.cs
  • Implementation: src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Receipts/ReceiptSidebarService.cs

Test Coverage (35 tests)

  • ReceiptVerificationStatus: 4 enum values
  • ReceiptCheckDetail: property roundtrips, optional defaults
  • ReceiptSidebarDetail: computed check counts, empty checks
  • VexReceiptSidebarContext: defaults, full roundtrip
  • ReceiptSidebarRequest: defaults
  • FormatReceipt: bundle/anchor/version mapping, all-pass/mixed/all-fail/no-checks status, DSSE verified/not-verified, Rekor verified/absent, check detail mapping, expected/actual formatting, tool digests mapping, null tool digests, null throws
  • GetDetailAsync: unknown returns null, registered returns detail, exclude checks, exclude tool digests, null throws
  • GetContextAsync: unknown returns null, registered context, fallback receipt-only, null/empty/whitespace throws
  • DeriveVerificationStatus: single pass, single fail
  • Register: null throws
  • RegisterContext: null/empty/whitespace bundleId throws

Advisory Commitments (2026-02-26 Batch)

  • SPRINT_20260226_225_Attestor_signature_trust_and_verdict_api_hardening governs:

    • DSSE signature verifier trust behavior (including deterministic failure reasons).
    • Authority roster validation for verdict creation.
    • Authenticated tenant context enforcement over header-only spoofable inputs.
    • Deterministic verdict retrieval APIs for hash-based lookup.
  • Rekor/tile verification commitments from "Deterministic tile verification with Rekor v2" are coordinated with the Symbols sprint SPRINT_20260226_226_Symbols_dsse_rekor_merkle_and_hash_integrity.


Trust Domain Model (Sprint 204 -- 2026-03-04)

Overview

As of Sprint 204, the Attestor module directory (src/Attestor/) is the trust domain owner for three runtime services and their supporting libraries:

  1. Attestor -- transparency log submission, inclusion proof verification, evidence caching
  2. Signer -- DSSE envelope creation, cryptographic signing (keyless/keyful/HSM), entitlement enforcement
  3. Provenance -- SLSA/DSSE attestation generation, Merkle tree construction, verification tooling

Source consolidation places all trust-domain code under a single directory for ownership clarity, while preserving runtime service identities and security boundaries.

Trust Data Classification

| Data Category | Owner Service | Storage | Sensitivity |
| --- | --- | --- | --- |
| Attestation evidence (proofchain, inclusion proofs, Rekor entries) | Attestor | attestor PostgreSQL schema | High -- tamper-evident, integrity-critical |
| Provenance evidence (SLSA predicates, build attestations, Merkle trees) | Provenance (library) | Consumed by Attestor/EvidenceLocker | High -- deterministic, reproducible |
| Signer metadata (audit events, signing ceremony state, rate limits) | Signer | signer PostgreSQL schema | High -- operational security |
| Signer key material (KMS/HSM refs, Fulcio certs, trust anchors, rotation state) | Signer (KeyManagement) | key_management PostgreSQL schema | Critical -- cryptographic trust root |

PostgreSQL Schema Ownership

Each trust-domain service retains its own DbContext and dedicated PostgreSQL schema:

  • attestor schema -- Owned by the Attestor service. Contains entries, dedupe, audit tables for transparency log state.
  • signer schema -- Owned by the Signer service. Contains signing ceremony audit, rate limit state, and operational metadata.
  • key_management schema -- Owned by the Signer KeyManagement library. Contains key rotation records, trust anchor configurations, and HSM/KMS binding metadata.

There is no cross-schema merge. Each service connects with its own connection string scoped to its own schema.

Security Boundary: No-Merge Decision (ADR)

Decision: Signer key-material isolation from attestation evidence is a deliberate security boundary. The schemas will NOT be merged into a unified DbContext.

Rationale:

  • A merged DbContext would require a single connection string with access to both key material (signing keys, HSM/KMS bindings, trust anchors) and evidence stores (proofchain entries, Rekor logs).
  • This widens the blast radius of any credential compromise: an attacker gaining the Attestor database credential would also gain access to key rotation state and trust anchor configurations.
  • Schema isolation is a defense-in-depth measure. Each service authenticates to PostgreSQL independently, with schema-level GRANT restrictions.
  • The Signer's KeyManagement database contains material that, if compromised, could allow forging of signatures. This material must be isolated from the higher-volume, lower-privilege evidence store.
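Schema-level GRANT restrictions of the kind described above might look like the following. This is a hypothetical sketch: the role name and exact grants are illustrative, not the project's actual DDL.

```sql
-- Illustrative: the Attestor service credential reaches evidence tables only.
GRANT USAGE ON SCHEMA attestor TO attestor_svc;
GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA attestor TO attestor_svc;

-- The same role is explicitly denied the key-material schema.
REVOKE ALL ON SCHEMA key_management FROM attestor_svc;
REVOKE ALL ON ALL TABLES IN SCHEMA key_management FROM attestor_svc;
```

With grants scoped this way, a compromised Attestor credential cannot read or alter key rotation state even though both schemas live in the same PostgreSQL cluster.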

Implications:

  • No shared EF Core DbContext across trust services.
  • Each service manages its own migrations independently (src/Attestor/__Libraries/StellaOps.Attestor.Persistence/ for Attestor; src/Attestor/__Libraries/StellaOps.Signer.KeyManagement/ for Signer key management).
  • Cross-service queries (e.g., "find the signing identity for a given attestation entry") use API calls, not database joins.

Source Layout (post-Sprint 204)

src/Attestor/
  StellaOps.Attestation/              # DSSE envelope model library
  StellaOps.Attestation.Tests/
  StellaOps.Attestor/                 # Attestor service (Core, Infrastructure, WebService, Tests)
  StellaOps.Attestor.Envelope/        # Envelope serialization
  StellaOps.Attestor.TileProxy/       # Rekor tile proxy
  StellaOps.Attestor.Types/           # Shared predicate types
  StellaOps.Attestor.Verify/          # Verification pipeline
  StellaOps.Signer/                   # Signer service (Core, Infrastructure, WebService, Tests)
  StellaOps.Provenance.Attestation/   # Provenance attestation library
  StellaOps.Provenance.Attestation.Tool/  # Forensic verification CLI tool
  __Libraries/
    StellaOps.Attestor.*/             # Attestor domain libraries
    StellaOps.Signer.KeyManagement/   # Key rotation and trust anchor management
    StellaOps.Signer.Keyless/         # Keyless (Fulcio/Sigstore) signing support
  __Tests/
    StellaOps.Attestor.*/             # Attestor test projects
    StellaOps.Provenance.Attestation.Tests/  # Provenance test project

What Did NOT Change

  • Namespaces -- All StellaOps.Signer.* and StellaOps.Provenance.* namespaces are preserved.
  • Runtime service identities -- Docker image names (stellaops/signer), container names, network aliases, and API base paths (/api/v1/signer/) are unchanged.
  • Database schemas -- No schema changes, no migrations, no data movement.
  • API contracts -- All endpoints including /api/v1/signer/sign/dsse remain stable.