component_architecture_zastava.md — StellaOps Zastava (2025Q4)

Scope. Implementation-ready architecture for Zastava: the runtime inspector/enforcer that watches real workloads, detects drift from the scanned baseline, verifies image/SBOM/attestation posture, and (optionally) admits/blocks deployments. Includes Kubernetes & plain Docker topologies, data contracts, APIs, security posture, performance targets, test matrices, and failure modes.


0) Mission & boundaries

Mission. Give operators ground truth from running environments and a fast guardrail before workloads land:

  • Observer: inventory containers, entrypoints actually executed, and DSOs actually loaded; verify image signature, SBOM referrers, and attestation presence; detect drift (unexpected processes/paths) and policy violations; publish runtime events to Scanner.WebService.
  • Admission (optional): Kubernetes ValidatingAdmissionWebhook that enforces minimal posture (signed images, SBOM availability, known base images, policy PASS) pre-flight.

Boundaries.

  • Zastava does not compute SBOMs and does not sign; it consumes Scanner.WebService outputs and enforces backend policy verdicts.
  • Zastava can request a delta scan when the baseline is missing/stale, but scanning is done by Scanner.Worker.
  • On non-K8s Docker hosts, Zastava runs as a host service with observer-only features.

1) Topology & processes

1.1 Components (Kubernetes)

stellaops/zastava-observer    # DaemonSet on every node (read-only host mounts)
stellaops/zastava-webhook     # ValidatingAdmissionWebhook (Deployment, 2+ replicas)

1.2 Components (Docker/VM)

stellaops/zastava-agent       # System service; watch Docker events; observer only

1.3 Dependencies

  • Authority (OIDC): short OpToks (DPoP/mTLS) for API calls to Scanner.WebService.
  • Scanner.WebService: /runtime/events ingestion; /policy/runtime fetch.
  • OCI Registry (optional): for direct referrers/sig checks if not delegated to backend.
  • Container runtime: containerd/CRI-O/Docker (read interfaces only).
  • Kubernetes API (watch Pods in cluster; validating webhook).
  • Host mounts (K8s DaemonSet): /proc, /var/lib/containerd (or CRI-O), /run/containerd/containerd.sock (optional, read-only).

2) Data contracts

2.1 Runtime event (observer → Scanner.WebService)

{
  "eventId": "9f6a…",
  "when": "2025-10-17T12:34:56Z",
  "kind": "CONTAINER_START|CONTAINER_STOP|DRIFT|POLICY_VIOLATION|ATTESTATION_STATUS",
  "tenant": "tenant-01",
  "node": "ip-10-0-1-23",
  "runtime": { "engine": "containerd", "version": "1.7.19" },
  "workload": {
    "platform": "kubernetes",
    "namespace": "payments",
    "pod": "api-7c9fbbd8b7-ktd84",
    "container": "api",
    "containerId": "containerd://...",
    "imageRef": "ghcr.io/acme/api@sha256:abcd…",
    "owner": { "kind": "Deployment", "name": "api" }
  },
  "process": {
    "pid": 12345,
    "entrypoint": ["/entrypoint.sh", "--serve"],
    "entryTrace": [
      {"file":"/entrypoint.sh","line":3,"op":"exec","target":"/usr/bin/python3"},
      {"file":"<argv>","op":"python","target":"/opt/app/server.py"}
    ],
    "buildId": "9f3a1cd4c0b7adfe91c0e3b51d2f45fb0f76a4c1"
  },
  "loadedLibs": [
    { "path": "/lib/x86_64-linux-gnu/libssl.so.3", "inode": 123456, "sha256": "…"},
    { "path": "/usr/lib/x86_64-linux-gnu/libcrypto.so.3", "inode": 123457, "sha256": "…"}
  ],
  "posture": {
    "imageSigned": true,
    "sbomReferrer": "present|missing",
    "attestation": { "uuid": "rekor-uuid", "verified": true }
  },
  "delta": {
    "baselineImageDigest": "sha256:abcd…",
    "changedFiles": ["/opt/app/server.py"],           // optional quick signal
    "newBinaries": [{ "path":"/usr/local/bin/helper","sha256":"…" }]
  },
  "evidence": [
    {"signal":"procfs.maps","value":"/lib/.../libssl.so.3@0x7f..."},
    {"signal":"cri.task.inspect","value":"pid=12345"},
    {"signal":"registry.referrers","value":"sbom: application/vnd.cyclonedx+json"}
  ]
}

2.2 Admission decision (webhook → API server)

{
  "admissionId": "…",
  "namespace": "payments",
  "podSpecDigest": "sha256:…",
  "images": [
    {
      "name": "ghcr.io/acme/api:1.2.3",
      "resolved": "ghcr.io/acme/api@sha256:abcd…",
      "signed": true,
      "hasSbomReferrers": true,
      "policyVerdict": "pass|warn|fail",
      "reasons": ["unsigned base image", "missing SBOM"]
    }
  ],
  "decision": "Allow|Deny",
  "ttlSeconds": 300
}

2.3 Schema negotiation & hashing guarantees

  • Every payload is wrapped in an envelope with schemaVersion set to "<schema>@v<major>.<minor>". Version negotiation keeps the major line in lockstep (zastava.runtime.event@v1.x, zastava.admission.decision@v1.x) and selects the highest mutually supported minor. If no overlap exists, the local default (@v1.0) is used.
  • Components use the shared ZastavaContractVersions helper for parsing/negotiation and the canonical JSON serializer to guarantee identical byte sequences prior to hashing, ensuring multihash IDs such as sha256-<base64url> are reproducible across observers, webhooks, and backend jobs.
  • Schema evolution rules: backwards-compatible fields append to the end of the canonical property order; breaking changes bump the major and require dual-writer/reader rollout per deployment playbook.
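
A minimal sketch of the minor-version negotiation described above (Python, illustrative only; the shipped ZastavaContractVersions helper may expose a different API):

# Illustrative sketch of minor-version negotiation, assuming versions of the
# form "<schema>@v<major>.<minor>"; names and fallback handling are assumptions.
def negotiate(local: list[str], remote: list[str], default: str = "@v1.0") -> str:
    def parse(v: str) -> tuple[str, int, int]:
        schema, _, ver = v.partition("@v")
        major, _, minor = ver.partition(".")
        return schema, int(major), int(minor)

    common = {parse(v) for v in local} & {parse(v) for v in remote}
    if not common:
        # No overlap in supported versions: fall back to the local default (@v1.0).
        return parse(local[0])[0] + default
    schema, major, minor = max(common, key=lambda t: (t[1], t[2]))
    return f"{schema}@v{major}.{minor}"

# negotiate(["zastava.runtime.event@v1.3", "zastava.runtime.event@v1.2"],
#           ["zastava.runtime.event@v1.2"])  -> "zastava.runtime.event@v1.2"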

3) Observer — node agent (DaemonSet)

3.1 Responsibilities

  • Watch container lifecycle (start/stop) via CRI (/run/containerd/containerd.sock gRPC, read-only) or /var/log/containers/*.log tail fallback.

  • Resolve each container to its image digest and rootfs mount point.

  • Trace entrypoint: attach a short-lived nsenter/exec to PID 1 in the container, parse the shell for the exec chain (bounded depth), and record the terminal program.

  • Sample loaded libs: read /proc/<pid>/maps and the exe symlink to collect the DSOs actually loaded; compute sha256 for each mapped file (bounded count/size). A simplified sketch appears after this list.

  • Record GNU build-id: parse NT_GNU_BUILD_ID from /proc/<pid>/exe and attach the normalized hex to runtime events for symbol/debug-store correlation.

  • Posture check (cheap):

    • Image signature presence (if cosign policies are local; else ask backend).
    • SBOM referrers presence (HEAD to registry, optional).
    • Rekor UUID known (query Scanner.WebService by image digest).
  • Publish runtime events to Scanner.WebService /runtime/events (batch & compress).

  • Request delta scan if: no SBOM in catalog OR base differs from known baseline.
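
A simplified sketch of the loaded-library sampling step (Python; the caps mirror the zastava.runtime.collect limits in section 6, but paths, field names, and error handling are illustrative):

import hashlib, os

def sample_loaded_libs(pid: int, procfs: str = "/host/proc",
                       max_libs: int = 256,
                       max_hash_bytes: int = 64_000_000) -> list[dict]:
    """Read /proc/<pid>/maps and hash each distinct mapped file, within caps."""
    libs, hashed_bytes, seen = [], 0, set()
    with open(f"{procfs}/{pid}/maps", encoding="utf-8", errors="replace") as maps:
        for line in maps:
            parts = line.split()
            if len(parts) < 6 or not parts[5].startswith("/"):
                continue                       # anonymous or special mapping
            path = parts[5]
            if path in seen or len(libs) >= max_libs:
                continue
            seen.add(path)
            host_path = f"{procfs}/{pid}/root{path}"   # resolve inside the container rootfs
            try:
                info = os.stat(host_path)
                if hashed_bytes + info.st_size > max_hash_bytes:
                    continue                   # hash budget exhausted; skip this file
                digest = hashlib.sha256()
                with open(host_path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        digest.update(chunk)
                hashed_bytes += info.st_size
                libs.append({"path": path, "inode": info.st_ino,
                             "sha256": digest.hexdigest()})
            except OSError:
                continue                       # file vanished or unreadable; skip
    return libs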

3.2 Privileges & mounts (K8s)

  • SecurityContext: runAsUser: 0, readOnlyRootFilesystem: true, allowPrivilegeEscalation: false.

  • Capabilities: CAP_SYS_PTRACE (optional if using nsenter trace), CAP_DAC_READ_SEARCH.

  • Host mounts (read-only):

    • /proc (host) → /host/proc
    • /run/containerd/containerd.sock (or CRIO socket)
    • /var/lib/containerd/io.containerd.runtime.v2.task (rootfs paths & pids)
  • Networking: cluster-internal egress to Scanner.WebService only.

  • Rate limits: hard caps for bytes hashed and file count per container to avoid noisy tenants.

3.3 Event batching

  • Buffer NDJSON; flush by N events or 2s.
  • Backpressure: local disk ring buffer (50MB default) if Scanner is temporarily unavailable; drop oldest after cap with metrics and warning event.
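
A compact sketch of the flush-by-count-or-age logic (Python, illustrative; the real observer spills to the on-disk ring buffer described above rather than holding the backlog in memory):

import collections, json, time

class EventBatcher:
    """Illustrative NDJSON batcher: flush by count or age, bounded backlog."""
    def __init__(self, post, max_events: int = 100, max_age_s: float = 2.0,
                 max_backlog: int = 10_000):
        self.post = post                        # callable that ships one NDJSON payload
        self.max_events, self.max_age_s = max_events, max_age_s
        self.buffer = collections.deque(maxlen=max_backlog)   # oldest dropped at cap
        self.last_flush = time.monotonic()

    def add(self, event: dict) -> None:
        self.buffer.append(event)
        if len(self.buffer) >= self.max_events or \
           time.monotonic() - self.last_flush >= self.max_age_s:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        payload = "\n".join(json.dumps(e, separators=(",", ":")) for e in self.buffer)
        try:
            self.post(payload)                  # e.g. POST to the runtime events endpoint
            self.buffer.clear()
        except Exception:
            pass                                # keep events; a real agent spills to disk
        self.last_flush = time.monotonic()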

3.4 Build-id capture & validation workflow

  1. When Observer sees a CONTAINER_START it dereferences /proc/<pid>/exe, extracts the NT_GNU_BUILD_ID note, normalises it to lower-case hex, and sends it as process.buildId in the runtime envelope.
  2. Scanner.WebService persists the observation and propagates the most recent hashes into /policy/runtime responses (buildIds list) and policy caches consumed by the webhook/CLI.
  3. Release engineering copies the matching .debug files into the bundle (debug/.build-id/<aa>/<rest>.debug) and publishes debug/debug-manifest.json with per-hash digests. Offline Kit packaging reuses those artefacts verbatim (see ops/offline-kit/mirror_debug_store.py).
  4. Operators resolve symbols by either:
    • calling stellaops-cli runtime policy test --image <digest> to read the current buildIds and then fetching the corresponding .debug file from the bundle/offline mirror, or
    • piping the hash into debuginfod-find debuginfo <buildId> when a debuginfod service is wired against the mirrored tree.
  5. Missing hashes indicate stripped binaries without GNU notes; operators should trigger a rebuild with -Wl,--build-id or register a fallback symbol package as described in the runtime operations runbook.
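
Step 1 as a sketch (Python, assuming pyelftools is available; the observer's actual note parser may be hand-rolled for performance):

# Assumes pyelftools; its note parsing exposes NT_GNU_BUILD_ID descriptors as hex.
from elftools.elf.elffile import ELFFile

def read_gnu_build_id(pid: int, procfs: str = "/host/proc") -> str | None:
    """Extract NT_GNU_BUILD_ID from /proc/<pid>/exe and return lower-case hex."""
    with open(f"{procfs}/{pid}/exe", "rb") as f:
        elf = ELFFile(f)
        for section in elf.iter_sections():
            if not hasattr(section, "iter_notes"):
                continue                       # not a note section
            for note in section.iter_notes():
                if note["n_type"] == "NT_GNU_BUILD_ID":
                    return str(note["n_desc"]).lower()
    return None                                # stripped binary without a GNU note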

4) Admission Webhook (Kubernetes)

4.1 Gate criteria

Configurable policy (fetched from backend and cached):

  • Image signature: must be cosign-verifiable to configured key(s) or keyless identities.
  • SBOM availability: at least one CycloneDX referrer or Scanner.WebService catalog entry.
  • Scanner policy verdict: backend PASS required for namespaces/labels matching rules; allow WARN if configured.
  • Registry allowlists/denylists.
  • Tag bans (e.g., :latest).
  • Base image allowlists (by digest).

4.2 Flow

sequenceDiagram
  autonumber
  participant K8s as API Server
  participant WH as Zastava Webhook
  participant SW as Scanner.WebService

  K8s->>WH: AdmissionReview(Pod)
  WH->>WH: Resolve images to digests (remote HEAD/pull if needed)
  WH->>SW: POST /policy/runtime { digests, namespace, labels }
  SW-->>WH: { per-image: {signed, hasSbom, verdict, reasons}, ttl }
  alt All pass
    WH-->>K8s: AdmissionResponse(Allow, ttl)
  else Any fail (enforce=true)
    WH-->>K8s: AdmissionResponse(Deny, message)
  end

Caching: per-digest results are cached for ttlSeconds (default 300 s). Fail-open or fail-closed behavior is configurable per namespace.
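
A per-digest TTL cache for the webhook might look like this sketch (Python, illustrative; fail-open/fail-closed selection on backend errors is handled separately):

import time

class DecisionCache:
    """Per-digest admission results with per-entry expiry (default 300 s)."""
    def __init__(self, default_ttl_seconds: int = 300):
        self.default_ttl = default_ttl_seconds
        self._entries: dict[str, tuple[float, dict]] = {}

    def get(self, digest: str) -> dict | None:
        entry = self._entries.get(digest)
        if entry is None:
            return None
        expires_at, result = entry
        if time.monotonic() > expires_at:
            del self._entries[digest]           # expired: force a fresh backend lookup
            return None
        return result

    def put(self, digest: str, result: dict, ttl_seconds: int | None = None) -> None:
        # The backend response's ttlSeconds overrides the default when present.
        expires_at = time.monotonic() + (ttl_seconds or self.default_ttl)
        self._entries[digest] = (expires_at, result)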

4.3 TLS & HA

  • Webhook has its own serving cert signed by the cluster CA (or a custom cert + CA bundle supplied via configuration).
  • Deployment ≥ 2 replicas; leaderless; stateless.

5) Backend integration (Scanner.WebService)

5.1 Ingestion endpoint

POST /api/v1/scanner/runtime/events (OpTok + DPoP/mTLS)

  • Validates event schema; enforces rate caps by tenant/node; persists to PostgreSQL (runtime.events table with TTL-based retention).

  • Performs correlation:

    • Attach nearest image SBOM (inventory/usage) and BOMIndex if known.
    • If unknown/missing, schedule delta scan and return 202 Accepted.
  • Emits derived signals (usedByEntrypoint per component based on /proc/<pid>/maps).
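
From the observer's side, shipping a batch to this endpoint might look like the sketch below (Python; the content type, compression, and bearer-token handling are simplifying assumptions and omit the DPoP proof):

import gzip, urllib.request

def post_runtime_events(ndjson: str, base: str, token: str) -> int:
    """POST a compressed NDJSON batch; 202 means a delta scan was scheduled."""
    body = gzip.compress(ndjson.encode("utf-8"))
    req = urllib.request.Request(
        f"{base}/api/v1/scanner/runtime/events",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",   # real callers also attach a DPoP proof
            "Content-Type": "application/x-ndjson",
            "Content-Encoding": "gzip",
        },
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status                        # 200 accepted; 202 delta scan queued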

5.2 Policy decision API (for webhook)

POST /api/v1/scanner/policy/runtime

The webhook reuses the shared runtime stack (AddZastavaRuntimeCore + IZastavaAuthorityTokenProvider) so OpTok caching, DPoP enforcement, and telemetry behave identically to the observer plane.

Request:

{
  "namespace": "payments",
  "labels": { "app": "api", "env": "prod" },
  "images": ["ghcr.io/acme/api@sha256:...", "ghcr.io/acme/nginx@sha256:..."]
}

Response:

{
  "ttlSeconds": 300,
  "results": {
    "ghcr.io/acme/api@sha256:...": {
      "signed": true,
      "hasSbom": true,
      "policyVerdict": "pass",
      "reasons": [],
      "rekor": { "uuid": "..." }
    },
    "ghcr.io/acme/nginx@sha256:...": {
      "signed": false,
      "hasSbom": false,
      "policyVerdict": "fail",
      "reasons": ["unsigned", "missing SBOM"]
    }
  }
}

6) Configuration (YAML)

zastava:
  mode:
    observer: true
    webhook: true
  backend:
    baseAddress: "https://scanner-web.internal"
    policyPath: "/api/v1/scanner/policy/runtime"
    requestTimeoutSeconds: 5
    allowInsecureHttp: false
  runtime:
    authority:
      issuer: "https://authority.internal"
      clientId: "zastava-observer"
      audience: ["scanner","zastava"]
      scopes:
        - "api:scanner.runtime.write"
      refreshSkewSeconds: 120
      requireDpop: true
      requireMutualTls: true
      allowStaticTokenFallback: false
      staticTokenPath: null      # Optional bootstrap secret
    tenant: "tenant-01"
    environment: "prod"
    deployment: "cluster-a"
    logging:
      includeScopes: true
      includeActivityTracking: true
      staticScope:
        plane: "runtime"
    metrics:
      meterName: "StellaOps.Zastava"
      meterVersion: "1.0.0"
      commonTags:
        cluster: "prod-cluster"
    engine: "auto"    # containerd|cri-o|docker|auto
    procfs: "/host/proc"
    collect:
      entryTrace: true
      loadedLibs: true
      maxLibs: 256
      maxHashBytesPerContainer: 64_000_000
      maxDepth: 48
  admission:
    enforce: true
    failOpenNamespaces: ["dev", "test"]
    verify:
      imageSignature: true
      sbomReferrer: true
      scannerPolicyPass: true
    cacheTtlSeconds: 300
    resolveTags: true          # do remote digest resolution for tag-only images
  limits:
    eventsPerSecond: 50
    burst: 200
    perNodeQueue: 10_000
  security:
    mounts:
      containerdSock: "/run/containerd/containerd.sock:ro"
      proc: "/proc:/host/proc:ro"
      runtimeState: "/var/lib/containerd:ro"

Implementation note: both zastava-observer and zastava-webhook call services.AddZastavaRuntimeCore(configuration, "<component>") during start-up to bind the zastava:runtime section, enforce validation, and register canonical log scopes + meters.


7) Security posture

  • AuthN/Z: Authority OpToks (DPoP preferred) to the backend; the webhook does not require client auth from the API server (Kubernetes authenticates that connection).
  • Least privilege: read-only host mounts; optional CAP_SYS_PTRACE; no host networking; no write mounts.
  • Isolation: never exec untrusted code; nsenter only to read /proc/<pid>.
  • Data minimization: do not exfiltrate env vars or command arguments unless policy explicitly enables diagnostic mode.
  • Rate limiting: per-node caps; per-tenant caps at the backend.
  • Hard caps: bytes hashed, files inspected, depth of shell parsing.
  • Authority guardrails: AddZastavaRuntimeCore binds zastava.runtime.authority and refuses tokens without aud:<tenant> scope; optional knobs (requireDpop, requireMutualTls, allowStaticTokenFallback) emit structured warnings when relaxed.

8) Metrics, logs, tracing

Observer

  • zastava.runtime.events.total{kind}
  • zastava.runtime.backend.latency.ms{endpoint="events"}
  • zastava.proc_maps.samples.total{result}
  • zastava.entrytrace.depth{p99}
  • zastava.hash.bytes.total
  • zastava.buffer.drops.total

Webhook

  • zastava.admission.decisions.total{decision}
  • zastava.runtime.backend.latency.ms{endpoint="policy"}
  • zastava.admission.cache.hits.total
  • zastava.backend.failures.total

Logs (structured): node, pod, image digest, decision, reasons. Tracing: spans for observe→batch→post; webhook request→resolve→respond.


9) Performance & scale targets

  • Observer: ≤ 30ms to sample /proc/<pid>/maps and compute quick hashes for ≤ 64 files; ≤ 200ms for full library set (256 libs).
  • Webhook: P95 ≤ 8ms with warm cache; ≤ 50ms with one backend roundtrip.
  • Throughput: 1k admission requests/min/replica; 5k runtime events/min/node with batching.

10) Drift detection model

Signals

  • Process drift: terminal program differs from EntryTrace baseline.
  • Library drift: loaded DSOs not present in Usage SBOM view.
  • Filesystem drift: new executable files under /usr/local/bin, /opt, /app with mtime after image creation.
  • Network drift (optional): listening sockets on unexpected ports (from policy).

Action

  • Emit DRIFT event with evidence; backend can auto-queue a delta scan; policy may escalate to alert/block (Admission cannot block already-running pods; rely on K8s policies/PodSecurity or operator action).
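
For library drift specifically, detection reduces to a set difference between observed DSOs and the Usage SBOM view, as in this sketch (Python; field names mirror section 2.1 but are illustrative):

def detect_library_drift(loaded_libs: list[dict], usage_paths: set[str]) -> dict | None:
    """Return a DRIFT event fragment if any loaded DSO is absent from the Usage view."""
    unexpected = [lib for lib in loaded_libs if lib["path"] not in usage_paths]
    if not unexpected:
        return None
    return {
        "kind": "DRIFT",
        "delta": {"newBinaries": [{"path": l["path"], "sha256": l["sha256"]}
                                  for l in unexpected]},
        "evidence": [{"signal": "procfs.maps", "value": l["path"]} for l in unexpected],
    }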

11) Test matrix

  • Engines: containerd, CRI-O, Docker; ensure PID resolution and rootfs mapping.
  • EntryTrace: bash features (case, if, run-parts, source/. inclusion), language launchers (python/node/java).
  • Procfs: multiple arches, musl/glibc images; static binaries (maps minimal).
  • Admission: unsigned images, missing SBOM referrers, tag-only images, digest resolution, backend latency, cache TTL.
  • Perf/soak: 500 Pods/node churn; webhook under HPA growth.
  • Security: privilege-escalation attempts blocked, read-only mounts enforced, rate-limit abuse handled.
  • Failure injection: backend down (observer buffers, webhook fail-open/closed), registry throttling, containerd socket unavailable.

12) Failure modes & responses

| Condition | Observer behavior | Webhook behavior |
| --- | --- | --- |
| Backend unreachable | Buffer to disk; drop after cap; emit metric | Fail-open/closed per namespace config |
| PID vanished mid-sample | Retry once; emit partial evidence | N/A |
| CRI socket missing | Fall back to K8s events only (reduced fidelity) | N/A |
| Registry digest resolve blocked | Defer to backend; mark resolve=unknown | Deny or allow per resolveTags & failOpenNamespaces |
| Excessive events | Apply local rate limit, coalesce | N/A |

13) Deployment notes (K8s)

DaemonSet (snippet):

apiVersion: apps/v1
kind: DaemonSet
metadata: { name: zastava-observer, namespace: stellaops }
spec:
  template:
    spec:
      serviceAccountName: zastava
      hostPID: true
      containers:
      - name: observer
        image: stellaops/zastava-observer:2.3
        securityContext:
          runAsUser: 0
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          capabilities: { add: ["SYS_PTRACE","DAC_READ_SEARCH"] }
        volumeMounts:
        - { name: proc, mountPath: /host/proc, readOnly: true }
        - { name: containerd-sock, mountPath: /run/containerd/containerd.sock, readOnly: true }
        - { name: containerd-state, mountPath: /var/lib/containerd, readOnly: true }
      volumes:
      - { name: proc, hostPath: { path: /proc } }
      - { name: containerd-sock, hostPath: { path: /run/containerd/containerd.sock } }
      - { name: containerd-state, hostPath: { path: /var/lib/containerd } }

Webhook (snippet):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: gate.zastava.stella-ops.org
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore   # or Fail
  rules:
  - operations: ["CREATE","UPDATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: stellaops
      name: zastava-webhook
      path: /admit
    caBundle: <base64 CA>

14) Implementation notes

  • Language: Rust (observer) for low-latency /proc parsing; Go/.NET are viable too. Webhook can be .NET 10 for parity with the backend.
  • CRI drivers: pluggable (containerd, cri-o, docker). Prefer CRI over parsing logs.
  • Shell parser: reuse Scanner.EntryTrace grammar for consistent results (compile to WASM if observer is Rust/Go).
  • Hashing: BLAKE3 locally for fast change detection and dedup caches; compute sha256 for reported digests when the budget allows (one digest cannot be derived from the other).
  • Resilience: never block container start; observer is passive; only webhook decides allow/deny.

15) Roadmap

  • eBPF option for syscall/library load tracing (kernel-level, opt-in).
  • Windows containers support (ETW providers, loaded modules).
  • Network posture checks: listening ports vs policy.
  • Live used-by-entrypoint synthesis: send compact bitset diff to backend to tighten Usage view.
  • Admission dry-run dashboards (simulate block lists before enforcing).

16) Observability (stub)

  • Runbook + dashboard placeholder for offline import: operations/observability.md, operations/dashboards/zastava-observability.json.
  • Metrics to surface: admission latency p95/p99, allow/deny counts, Surface.Env miss rate, Surface.Secrets failures, Surface.FS cache freshness, drift events.
  • Health endpoints: /health/liveness, /health/readiness, /status, /surface/fs/cache/status (see runbook).
  • Alert hints: deny spikes, latency > 800ms p99, cache freshness lag > 10m, any secrets failure.

17) Offline Witness Verification

Sprint: SPRINT_20260122_038_Scanner_ebpf_probe_type (EBPF-004)

This section documents the deterministic replay verification algorithm for runtime witnesses, enabling air-gapped environments to independently verify witness attestations.

17.1 Input Canonicalization (RFC 8785 JCS)

All witness payloads MUST be canonicalized before hashing or signing using JSON Canonicalization Scheme (JCS) per RFC 8785:

  1. Property ordering: Object properties are sorted lexicographically by key name (Unicode code point order).
  2. Number serialization: Numbers are serialized without unnecessary precision; integers as integers, decimals with minimal representation.
  3. String encoding: UTF-8 with no BOM; escape sequences normalized to \uXXXX form for control characters.
  4. Whitespace: No insignificant whitespace between tokens.
  5. Null handling: Explicit null values are preserved; absent keys are omitted.

Canonicalization algorithm:

function canonicalize(json_object):
    if json_object is null:
        return "null"
    if json_object is boolean:
        return "true" if json_object is true else "false"
    if json_object is number:
        return serialize_number(json_object)  # RFC 8785 §3.2.2.3
    if json_object is string:
        return quote(escape(json_object))
    if json_object is array:
        return "[" + join(",", [canonicalize(elem) for elem in json_object]) + "]"
    if json_object is object:
        keys = sorted(json_object.keys(), key=unicode_codepoint_order)
        pairs = [quote(key) + ":" + canonicalize(json_object[key]) for key in keys]
        return "{" + join(",", pairs) + "}"

17.2 Observation Ordering Rules

When a witness contains multiple observations (e.g., from eBPF probes), they MUST be ordered deterministically before hashing:

  1. Primary sort: By observedAt timestamp (UTC, ascending)
  2. Secondary sort: By nodeHash (lexicographic ascending)
  3. Tertiary sort: By observationId (lexicographic ascending, for tie-breaking)

Observation hash computation:

function compute_observations_hash(observations):
    sorted_observations = sort(observations,
        key=lambda o: (o.observedAt, o.nodeHash, o.observationId))

    canonical_array = []
    for obs in sorted_observations:
        canonical_array.append({
            "observedAt": obs.observedAt.toISOString(),
            "nodeHash": obs.nodeHash,
            "functionName": obs.functionName,
            "probeType": obs.probeType,  # EBPF-001: kprobe|uprobe|tracepoint|usdt|fentry|fexit
            "containerHash": sha256(obs.containerId + obs.podName + obs.namespace)
        })

    return sha256(canonicalize(canonical_array))

17.3 Signature Verification Sequence

Offline verification MUST follow this exact sequence to ensure deterministic results:

  1. Parse DSSE envelope: Extract payloadType, payload (base64-decoded), and signatures[].

  2. Verify payload hash:

    expected_hash = sha256(payload_bytes)
    assert envelope.payload_sha256 == expected_hash
    
  3. Verify DSSE signature(s): For each signature in signatures[]:

    pae_string = "DSSEv1 " + len(payloadType) + " " + payloadType + " " + len(payload) + " " + payload
    verify_signature(signature.sig, pae_string, get_public_key(signature.keyid))
    
  4. Verify Rekor inclusion (if present):

    fetch_or_load_checkpoint(rekor_log_id)
    verify_merkle_inclusion(entry_hash, inclusion_proof, checkpoint.root_hash)
    verify_checkpoint_signature(checkpoint, rekor_public_key)
    
  5. Verify timestamp (if RFC 3161 TST present):

    verify_tst_signature(tst, tsa_certificate)
    assert tst.timestamp <= now() + allowed_skew
    
  6. Verify witness content:

    witness = parse_json(payload)
    recomputed_observations_hash = compute_observations_hash(witness.observations)
    assert witness.observationsDigest == recomputed_observations_hash
    
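Steps 1-3 in runnable form (Python sketch; verify_signature is a hypothetical helper because key resolution depends on the trust root in section 17.4, and the payload_sha256 check only applies when the envelope carries that field):

import base64, hashlib, json

def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE pre-authentication encoding (PAE) for v1 envelopes."""
    type_bytes = payload_type.encode("utf-8")
    return b" ".join([
        b"DSSEv1",
        str(len(type_bytes)).encode(), type_bytes,
        str(len(payload)).encode(), payload,
    ])

def verify_envelope(envelope: dict, verify_signature) -> dict:
    """Steps 1-3 of 17.3: decode payload, check its hash, verify each signature."""
    payload = base64.b64decode(envelope["payload"])
    expected = hashlib.sha256(payload).hexdigest()
    if envelope.get("payload_sha256") not in (None, expected):
        raise ValueError("payload hash mismatch")
    message = pae(envelope["payloadType"], payload)
    for sig in envelope["signatures"]:
        # verify_signature(raw_sig, message, keyid) is assumed to raise on failure.
        verify_signature(base64.b64decode(sig["sig"]), message, sig.get("keyid"))
    return json.loads(payload)                 # the witness statement, used in step 6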

17.4 Offline Bundle Structure Requirements

A StellaBundle for offline witness verification MUST include:

bundle/
├── manifest.json                    # Bundle manifest v2.0.0
├── witnesses/
│   └── <claim_id>.witness.dsse.json # DSSE-signed witness
├── proofs/
│   ├── rekor-inclusion.json         # Rekor inclusion proof
│   ├── checkpoint.json              # Rekor checkpoint (signed)
│   └── rfc3161-tst.der             # Optional RFC 3161 timestamp
├── observations/
│   └── observations.ndjson          # Raw observations (for replay)
├── keys/
│   ├── signing-key.pub              # Public key for DSSE verification
│   └── rekor-key.pub                # Rekor log public key
└── trust/
    └── trust-root.json              # Trust anchors for key verification

Manifest schema (witnesses section):

{
  "schemaVersion": "2.0.0",
  "artifacts": [
    {
      "type": "witness",
      "path": "witnesses/<claim_id>.witness.dsse.json",
      "digest": "sha256:...",
      "predicateType": "https://stella.ops/predicates/runtime-witness/v1",
      "proofs": {
        "rekor": "proofs/rekor-inclusion.json",
        "checkpoint": "proofs/checkpoint.json",
        "tst": "proofs/rfc3161-tst.der"
      },
      "observationsRef": "observations/observations.ndjson"
    }
  ]
}
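
A minimal offline check of the manifest's artifact digests (Python sketch; the sha256:<hex> digest format and the layout above are assumed):

import hashlib, json, pathlib

def verify_bundle_digests(bundle_dir: str) -> list[str]:
    """Recompute each artifact digest listed in manifest.json; return mismatches."""
    root = pathlib.Path(bundle_dir)
    manifest = json.loads((root / "manifest.json").read_text(encoding="utf-8"))
    failures = []
    for artifact in manifest.get("artifacts", []):
        data = (root / artifact["path"]).read_bytes()
        actual = "sha256:" + hashlib.sha256(data).hexdigest()
        if actual != artifact["digest"]:
            failures.append(f'{artifact["path"]}: expected {artifact["digest"]}, got {actual}')
    return failures     # an empty list means every listed artifact matches its digest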

17.5 Verification CLI Commands

# Verify a witness bundle offline
stella bundle verify --bundle witness-bundle.tar.gz --offline

# Verify with replay (recompute observations hash)
stella bundle verify --bundle witness-bundle.tar.gz --offline --replay

# Verify specific witness from bundle
stella witness verify --bundle witness-bundle.tar.gz --witness-id wit:sha256:abc123 --offline

# Export verification report
stella witness verify --bundle witness-bundle.tar.gz --offline --output report.json

17.6 Determinism Guarantees

The verification algorithm guarantees:

  1. Idempotent: Running verification N times produces identical results.
  2. Reproducible: Different systems with the same bundle produce identical verification outcomes.
  3. Isolated: Verification requires no network access (fully air-gapped).
  4. Auditable: Every step produces evidence that can be independently checked.

Test criteria (per advisory):

  • Offline verifier reproduces the same mapping on 3 separate air-gapped runs.
  • No randomness in canonicalization, ordering, or hash computation.
  • Timestamps use UTC with fixed precision (milliseconds).