docs consolidation
CLAUDE.md (43 changed lines)
@@ -72,17 +72,46 @@ The codebase follows a monorepo pattern with modules under `src/`:
| Module | Path | Purpose |
|--------|------|---------|
| **Core Platform** | | |
| Authority | `src/Authority/` | Authentication, authorization, OAuth/OIDC, DPoP |
| Gateway | `src/Gateway/` | API gateway with routing and transport abstraction |
| Router | `src/__Libraries/StellaOps.Router.*` | Transport-agnostic messaging (TCP/TLS/UDP/RabbitMQ/Valkey) |
| **Data Ingestion** | | |
| Concelier | `src/Concelier/` | Vulnerability advisory ingestion and merge engine |
| Excititor | `src/Excititor/` | VEX document ingestion and export |
| VexLens | `src/VexLens/` | VEX consensus computation across issuers |
| IssuerDirectory | `src/IssuerDirectory/` | Issuer trust registry (CSAF publishers) |
| **Scanning & Analysis** | | |
| Scanner | `src/Scanner/` | Container scanning with SBOM generation (11 language analyzers) |
| BinaryIndex | `src/BinaryIndex/` | Binary identity extraction and fingerprinting |
| AdvisoryAI | `src/AdvisoryAI/` | AI-assisted advisory analysis |
| **Artifacts & Evidence** | | |
| Attestor | `src/Attestor/` | in-toto/DSSE attestation generation |
| Signer | `src/Signer/` | Cryptographic signing operations |
| SbomService | `src/SbomService/` | SBOM storage, versioning, and lineage ledger |
| EvidenceLocker | `src/EvidenceLocker/` | Sealed evidence storage and export |
| ExportCenter | `src/ExportCenter/` | Batch export and report generation |
| VexHub | `src/VexHub/` | VEX distribution and exchange hub |
| **Policy & Risk** | | |
| Policy | `src/Policy/` | Policy engine with K4 lattice logic |
| VulnExplorer | `src/VulnExplorer/` | Vulnerability exploration and triage UI backend |
| **Operations** | | |
| Scheduler | `src/Scheduler/` | Job scheduling and queue management |
| Orchestrator | `src/Orchestrator/` | Workflow orchestration and task coordination |
| TaskRunner | `src/TaskRunner/` | Task pack execution engine |
| Notify | `src/Notify/` | Notification delivery (Email, Slack, Teams, Webhooks) |
| **Integration** | | |
| CLI | `src/Cli/` | Command-line interface (Native AOT) |
| Zastava | `src/Zastava/` | Container registry webhook observer |
| Web | `src/Web/` | Angular 17 frontend SPA |
| **Infrastructure** | | |
| Cryptography | `src/Cryptography/` | Crypto plugins (FIPS, eIDAS, GOST, SM, PQ) |
| Telemetry | `src/Telemetry/` | OpenTelemetry traces, metrics, logging |
| Graph | `src/Graph/` | Call graph and reachability data structures |
| Signals | `src/Signals/` | Runtime signal collection and correlation |
| Replay | `src/Replay/` | Deterministic replay engine |

> **Note:** See `docs/modules/<module>/architecture.md` for detailed module dossiers.

### Code Organization Patterns
@@ -181,7 +181,7 @@ Authorization: Bearer <token>
* Hardware reference: 8 vCPU, 8 GB RAM, NVMe SSD.
* PostgreSQL and Valkey run co-located unless horizontal scaling is enabled.
* All Docker images tagged `latest` are immutable (the CI process locks digests).
* Policy evaluation uses the native `stella-dsl@1` DSL implemented in .NET; OPA/Rego integration is available for the Enterprise tier via an external adapter.

---
@@ -417,8 +417,8 @@ Authority now understands two flavours of sender-constrained OAuth clients:
  enabled: true
  ttl: "00:10:00"
  maxIssuancePerMinute: 120
  store: "redis" # Uses Valkey (Redis-compatible)
  redisConnectionString: "valkey:6379"
  requiredAudiences:
    - "signer"
    - "attestor"
@@ -25,7 +25,7 @@
| Asset | Threats | Mitigations |
| -------------------- | --------------------- | ----------------------------------------------------------------------- |
| SBOMs & scan results | Disclosure, tamper | TLS‑in‑transit, read‑only Valkey volume, RBAC, Cosign‑verified plug‑ins |
| Backend container | RCE, code‑injection | Distroless image, non‑root UID, read‑only FS, seccomp + `CAP_DROP:ALL` |
| Update artefacts | Supply‑chain attack | Cosign‑signed images & SBOMs, enforced by admission controller |
| Admin credentials | Phishing, brute force | OAuth 2.0 with 12‑h token TTL, optional mTLS |
@@ -72,10 +72,10 @@ services:
      timeout: 5s
      retries: 5

  valkey:
    image: valkey/valkey:8.0-alpine
    command: ["valkey-server", "--requirepass", "${VALKEY_PASS}", "--rename-command", "FLUSHALL", ""]
    user: "valkey"
    read_only: true
    cap_drop: [ALL]
    tmpfs:
@@ -133,7 +133,7 @@ docker compose -f docker-compose.dev.yaml up -d
- Attestor (port 8442) - in-toto attestation generation
- Scanner.Web (port 8444) - Scan API
- Concelier (port 8445) - Advisory ingestion
- Plus additional services (Scheduler, Excititor, AdvisoryAI, IssuerDirectory, etc.)

### Step 4: Verify Services Are Running
@@ -72,7 +72,7 @@ evidence/
### Step 2: Evidence Collection

**Parsers:**
- `CycloneDxParser` - Parses CycloneDX 1.4–1.7 formats
- `SpdxParser` - Parses SPDX 2.3 format
- `DsseAttestationParser` - Parses DSSE envelopes
@@ -26,7 +26,7 @@
* **Signer** (caller) — authenticated via **mTLS** and **Authority** OpToks.
* **Rekor v2** — tile‑backed transparency log endpoint(s).
* **RustFS (S3-compatible)** — optional archive store for DSSE envelopes & verification bundles.
* **PostgreSQL** — local cache of `{uuid, index, proof, artifactSha256, bundleSha256}`; job state; audit.
* **Valkey** — dedupe/idempotency keys and short‑lived rate‑limit buckets.
* **Licensing Service (optional)** — “endorse” call for cross‑log publishing when the customer opts in.
@@ -615,7 +615,7 @@ attestor:
  connectionString: "Host=postgres;Port=5432;Database=attestor;Username=stellaops;Password=secret"
  s3:
    enabled: true
    endpoint: "http://rustfs:8080"
    bucket: "stellaops"
    prefix: "attest/"
    objectLock: "governance"
docs/modules/attestor/bundle-format.md (new file, 239 lines)
@@ -0,0 +1,239 @@
# Sigstore Bundle Format

This document describes the Sigstore Bundle v0.3 format implementation in StellaOps for offline DSSE envelope verification.

## Overview

A Sigstore bundle is a self-contained package that includes all of the verification material needed to verify a DSSE envelope without network access. This enables the offline verification scenarios critical for air-gapped environments.

## Bundle Structure

```json
{
  "mediaType": "application/vnd.dev.sigstore.bundle.v0.3+json",
  "verificationMaterial": {
    "certificate": { ... },
    "tlogEntries": [ ... ],
    "timestampVerificationData": { ... }
  },
  "dsseEnvelope": {
    "payloadType": "application/vnd.in-toto+json",
    "payload": "<base64-encoded-payload>",
    "signatures": [
      {
        "sig": "<base64-encoded-signature>",
        "keyid": "<optional-key-id>"
      }
    ]
  }
}
```
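Because the bundle is plain JSON, it can be consumed without any StellaOps tooling. As an illustration (not the production API — the function name here is hypothetical), a few lines of Python suffice to validate the media type and decode the embedded in-toto statement:

```python
import base64
import json


def load_bundle(text: str) -> dict:
    """Parse a Sigstore bundle and return its decoded DSSE payload."""
    bundle = json.loads(text)
    media_type = bundle.get("mediaType", "")
    if not media_type.startswith("application/vnd.dev.sigstore.bundle."):
        raise ValueError(f"unexpected mediaType: {media_type!r}")
    envelope = bundle["dsseEnvelope"]
    payload = base64.b64decode(envelope["payload"])
    return {
        "payloadType": envelope["payloadType"],
        "statement": json.loads(payload),   # the in-toto statement
        "signatures": envelope["signatures"],
    }
```

The point is that everything a verifier needs — payload, signatures, certificate, and log entries — travels inside the one document.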

## Components

### Media Type

- **v0.3**: `application/vnd.dev.sigstore.bundle.v0.3+json` (current)
- **v0.2**: `application/vnd.dev.sigstore.bundle.v0.2+json` (legacy)

### Verification Material

Contains the cryptographic material needed for verification:

| Field | Type | Description |
|-------|------|-------------|
| `certificate` | Object | X.509 signing certificate (keyless signing) |
| `publicKey` | Object | Public key (keyful signing, alternative to certificate) |
| `tlogEntries` | Array | Transparency log entries from Rekor |
| `timestampVerificationData` | Object | RFC 3161 timestamps |

#### Certificate

```json
{
  "certificate": {
    "rawBytes": "<base64-DER-certificate>"
  }
}
```

#### Public Key (alternative to certificate)

```json
{
  "publicKey": {
    "hint": "<optional-key-hint>",
    "rawBytes": "<base64-public-key>"
  }
}
```

### Transparency Log Entry

```json
{
  "logIndex": "12345",
  "logId": {
    "keyId": "<base64-log-key-id>"
  },
  "kindVersion": {
    "kind": "dsse",
    "version": "0.0.1"
  },
  "integratedTime": "1703500000",
  "inclusionPromise": {
    "signedEntryTimestamp": "<base64-SET>"
  },
  "inclusionProof": {
    "logIndex": "12345",
    "rootHash": "<base64-merkle-root>",
    "treeSize": "100000",
    "hashes": ["<base64-hash>", ...],
    "checkpoint": {
      "envelope": "<checkpoint-note>"
    }
  },
  "canonicalizedBody": "<base64-canonical-body>"
}
```

### DSSE Envelope

The standard DSSE envelope format:

```json
{
  "payloadType": "application/vnd.in-toto+json",
  "payload": "<base64-encoded-attestation>",
  "signatures": [
    {
      "sig": "<base64-signature>",
      "keyid": ""
    }
  ]
}
```

## Usage

### Building a Bundle

```csharp
using StellaOps.Attestor.Bundle.Builder;
using StellaOps.Attestor.Bundle.Models;

var bundle = new SigstoreBundleBuilder()
    .WithDsseEnvelope(payloadType, payload, signatures)
    .WithCertificate(certificateBytes)
    .WithRekorEntry(
        logIndex: "12345",
        logIdKeyId: logKeyId,
        integratedTime: "1703500000",
        canonicalizedBody: body)
    .WithInclusionProof(inclusionProof)
    .Build();

// Serialize to JSON
var json = bundle.BuildJson();
File.WriteAllText("attestation.bundle", json);
```

### Deserializing a Bundle

```csharp
using StellaOps.Attestor.Bundle.Serialization;

var json = File.ReadAllText("attestation.bundle");
var bundle = SigstoreBundleSerializer.Deserialize(json);

// Or with error handling (note the distinct variable name)
if (SigstoreBundleSerializer.TryDeserialize(json, out var parsedBundle))
{
    // Use parsedBundle
}
```

### Verifying a Bundle

```csharp
using StellaOps.Attestor.Bundle.Verification;

var verifier = new SigstoreBundleVerifier();
var result = await verifier.VerifyAsync(bundle);

if (result.IsValid)
{
    Console.WriteLine("Bundle verified successfully");
    Console.WriteLine($"DSSE Signature: {result.Checks.DsseSignature}");
    Console.WriteLine($"Certificate Chain: {result.Checks.CertificateChain}");
    Console.WriteLine($"Inclusion Proof: {result.Checks.InclusionProof}");
}
else
{
    foreach (var error in result.Errors)
    {
        Console.WriteLine($"Error: {error.Code} - {error.Message}");
    }
}
```

### Verification Options

```csharp
var options = new BundleVerificationOptions
{
    VerifyInclusionProof = true,
    VerifyTimestamps = false,
    VerificationTime = DateTimeOffset.UtcNow,
    TrustedRoots = trustedCertificates
};

var result = await verifier.VerifyAsync(bundle, options);
```

## Verification Checks

| Check | Description |
|-------|-------------|
| `DsseSignature` | Verifies the DSSE signature using PAE encoding |
| `CertificateChain` | Validates the certificate validity period |
| `InclusionProof` | Verifies the Merkle inclusion proof (RFC 6962) |
| `TransparencyLog` | Validates the transparency log entry |
| `Timestamp` | Verifies RFC 3161 timestamps (optional) |
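The `DsseSignature` check never signs or verifies the raw payload; per the DSSE specification it operates on the Pre-Authentication Encoding (PAE) of the payload type and payload. A minimal sketch:

```python
def pae(payload_type: bytes, payload: bytes) -> bytes:
    # DSSE spec: PAE(type, body) = "DSSEv1" SP LEN(type) SP type SP LEN(body) SP body
    return b"DSSEv1 %d %s %d %s" % (len(payload_type), payload_type,
                                    len(payload), payload)


# Signer and verifier both operate on this encoding, never on the bare payload:
assert pae(b"application/vnd.in-toto+json", b"hello") == \
    b"DSSEv1 28 application/vnd.in-toto+json 5 hello"
```

The length prefixes make the encoding unambiguous, so a payload cannot be reinterpreted under a different payload type.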

## Error Codes

| Code | Description |
|------|-------------|
| `InvalidBundleStructure` | Bundle JSON structure is invalid |
| `MissingDsseEnvelope` | DSSE envelope is required |
| `DsseSignatureInvalid` | Signature verification failed |
| `MissingCertificate` | No certificate or public key present |
| `CertificateChainInvalid` | Certificate chain validation failed |
| `CertificateExpired` | Certificate has expired |
| `CertificateNotYetValid` | Certificate is not yet valid |
| `InclusionProofInvalid` | Merkle proof verification failed |
| `RootHashMismatch` | Computed root doesn't match the expected root |
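`InclusionProofInvalid` and `RootHashMismatch` both fall out of the same computation: rebuilding the Merkle root from the leaf and the audit path. A sketch of the RFC 6962 hashing rules (`0x00` domain prefix for leaves, `0x01` for interior nodes), with the index-walking logic following the RFC 9162 inclusion-proof algorithm; this is illustrative, not the StellaOps implementation:

```python
import hashlib


def leaf_hash(data: bytes) -> bytes:
    # RFC 6962: leaves are hashed with a 0x00 domain-separation prefix
    return hashlib.sha256(b"\x00" + data).digest()


def node_hash(left: bytes, right: bytes) -> bytes:
    # ...and interior nodes with a 0x01 prefix
    return hashlib.sha256(b"\x01" + left + right).digest()


def verify_inclusion(leaf_index: int, tree_size: int, leaf: bytes,
                     proof: list, expected_root: bytes) -> bool:
    """Recompute the root from the leaf and audit path, compare to the expected root."""
    if leaf_index >= tree_size:
        return False
    fn, sn = leaf_index, tree_size - 1
    r = leaf_hash(leaf)
    for p in proof:
        if sn == 0:
            return False  # proof longer than the path to the root
        if fn % 2 == 1 or fn == sn:
            r = node_hash(p, r)          # sibling is on the left
            if fn % 2 == 0:
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)          # sibling is on the right
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == expected_root
```

For a three-leaf tree the proof for leaf 2 is just the hash of the left subtree; a wrong leaf, a wrong path, or a wrong root all fail the final comparison.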

## Cosign Compatibility

Bundles created by StellaOps are compatible with cosign verification:

```bash
# Verify bundle with cosign
cosign verify-attestation \
  --bundle attestation.bundle \
  --certificate-identity="subject@example.com" \
  --certificate-oidc-issuer="https://issuer.example.com" \
  --type=slsaprovenance \
  registry.example.com/image:tag
```

See [Cosign Verification Examples](./cosign-verification-examples.md) for more details.

## References

- [Sigstore Bundle Specification](https://github.com/sigstore/cosign/blob/main/specs/BUNDLE_SPEC.md)
- [Sigstore Protobuf Specs](https://github.com/sigstore/protobuf-specs)
- [DSSE Specification](https://github.com/secure-systems-lab/dsse)
- [RFC 6962 - Certificate Transparency](https://www.rfc-editor.org/rfc/rfc6962)
docs/modules/attestor/cosign-verification-examples.md (new file, 374 lines)
@@ -0,0 +1,374 @@
# Cosign Verification Examples

This document provides examples for verifying StellaOps DSSE attestations using Sigstore cosign.

## Prerequisites

### Install Cosign

```bash
# macOS
brew install cosign

# Linux (download the latest release)
curl -sSfL https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64 -o cosign
chmod +x cosign
sudo mv cosign /usr/local/bin/

# Windows (download from the releases page)
# https://github.com/sigstore/cosign/releases

# Verify the installation
cosign version
```

### Required Files

| File | Description |
|------|-------------|
| `attestation.json` | DSSE envelope exported from StellaOps |
| `public.key` | Public key for keyful verification |
| `trusted_root.json` | Sigstore TUF root for keyless verification |

## Export Attestation from StellaOps

```bash
# Export the attestation for a specific artifact
stellaops attestation export \
  --artifact sha256:abc123... \
  --output attestation.json

# Export with the certificate chain
stellaops attestation export \
  --artifact sha256:abc123... \
  --include-certificate-chain \
  --output attestation-bundle.json

# Export as a Sigstore bundle
stellaops attestation export \
  --artifact sha256:abc123... \
  --format sigstore-bundle \
  --output attestation.sigstore.json
```

## Keyful Verification (KMS/HSM Keys)

### Verify with a Public Key

```bash
# Basic verification
cosign verify-attestation \
  --key public.key \
  --type custom \
  sha256:abc123...

# Verify from an exported attestation file
cosign verify-attestation \
  --key public.key \
  --type custom \
  --attestation attestation.json \
  sha256:abc123...
```

### Verify with a KMS Key

```bash
# AWS KMS
cosign verify-attestation \
  --key awskms:///arn:aws:kms:us-east-1:123456789:key/abc-123 \
  --type custom \
  sha256:abc123...

# GCP KMS
cosign verify-attestation \
  --key gcpkms://projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key \
  --type custom \
  sha256:abc123...

# Azure Key Vault
cosign verify-attestation \
  --key azurekms://mykeyvault.vault.azure.net/keys/mykey \
  --type custom \
  sha256:abc123...

# HashiCorp Vault
cosign verify-attestation \
  --key hashivault://transit/keys/my-key \
  --type custom \
  sha256:abc123...
```

## Keyless Verification (Fulcio/OIDC)

### Verify with Certificate Identity

```bash
# Verify with issuer and subject
cosign verify-attestation \
  --certificate-identity "signer@example.com" \
  --certificate-oidc-issuer "https://accounts.google.com" \
  --type custom \
  sha256:abc123...

# Verify with an identity regex
cosign verify-attestation \
  --certificate-identity-regexp ".*@stellaops\.io" \
  --certificate-oidc-issuer "https://github.com/login/oauth" \
  --type custom \
  sha256:abc123...
```

### Verify GitHub Actions Workload Identity

```bash
cosign verify-attestation \
  --certificate-identity "https://github.com/org/repo/.github/workflows/build.yml@refs/heads/main" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  --type custom \
  sha256:abc123...
```

## Verify Specific Predicate Types

### StellaOps Attestation Types

```bash
# Verify an SBOM attestation
cosign verify-attestation \
  --key public.key \
  --type "https://spdx.dev/Document" \
  sha256:abc123...

# Verify SLSA provenance
cosign verify-attestation \
  --key public.key \
  --type "https://slsa.dev/provenance/v1" \
  sha256:abc123...

# Verify StellaOps scan results
cosign verify-attestation \
  --key public.key \
  --type "https://stella-ops.org/attestation/scan-results/v1" \
  sha256:abc123...

# Verify a StellaOps policy evaluation
cosign verify-attestation \
  --key public.key \
  --type "https://stella-ops.org/attestation/policy-evaluation/v1" \
  sha256:abc123...

# Verify a graph root attestation
cosign verify-attestation \
  --key public.key \
  --type "https://stella-ops.org/attestation/graph-root/v1" \
  sha256:abc123...
```

## Offline Verification

### Verify with a Cached Bundle

```bash
# Verify using a Sigstore bundle (includes the certificate and Rekor entry)
cosign verify-attestation \
  --bundle attestation.sigstore.json \
  --certificate-identity "signer@example.com" \
  --certificate-oidc-issuer "https://accounts.google.com" \
  sha256:abc123...
```

### Verify with a Local TUF Root

```bash
# Initialize the TUF root (run once)
cosign initialize --mirror https://tuf-repo.sigstore.dev --root root.json

# Verify using local TUF data
SIGSTORE_ROOT_FILE=trusted_root.json \
cosign verify-attestation \
  --certificate-identity "signer@example.com" \
  --certificate-oidc-issuer "https://accounts.google.com" \
  sha256:abc123...
```

### Air-Gapped Verification

```bash
# 1. On a connected machine: download the required artifacts
cosign download attestation sha256:abc123... > attestation.json
cosign download signature sha256:abc123... > signature.sig

# 2. Transfer the files to the air-gapped environment

# 3. On the air-gapped machine: verify with the public key
cosign verify-attestation \
  --key public.key \
  --offline \
  --type custom \
  --attestation attestation.json \
  sha256:abc123...
```

## Verify with Policy

### CUE Policy

```cue
// policy.cue
package attestation

predicateType: "https://stella-ops.org/attestation/scan-results/v1"
predicate: {
    severity: *"low" | "medium" | "high" | "critical"
    vulnerabilities: [...{
        id:       =~"^CVE-"
        severity: !="critical"
    }]
}
```

```bash
cosign verify-attestation \
  --key public.key \
  --type custom \
  --policy policy.cue \
  sha256:abc123...
```

### Rego Policy

```rego
# policy.rego
package attestation

default allow = false

allow {
    input.predicateType == "https://stella-ops.org/attestation/policy-evaluation/v1"
    input.predicate.verdict == "PASS"
    input.predicate.score >= 7.0
}
```

```bash
cosign verify-attestation \
  --key public.key \
  --type custom \
  --policy policy.rego \
  sha256:abc123...
```

## Multi-Signature Verification

```bash
# Verify that multiple signatures are present
cosign verify-attestation \
  --key builder.pub \
  --type custom \
  sha256:abc123... && \
cosign verify-attestation \
  --key witness.pub \
  --type custom \
  sha256:abc123...
```

## Output Formats

### JSON Output

```bash
cosign verify-attestation \
  --key public.key \
  --type custom \
  --output-file verification-result.json \
  sha256:abc123...
```

### Text Output with Details

```bash
cosign verify-attestation \
  --key public.key \
  --type custom \
  -v \
  sha256:abc123...
```

## Troubleshooting

### Common Errors

| Error | Cause | Solution |
|-------|-------|----------|
| `no matching attestation found` | No attestation attached to the image | Verify that the attestation was uploaded |
| `key verification failed` | Wrong key or corrupted signature | Check that the key matches the signer |
| `certificate expired` | Signing certificate past its validity | Use Rekor timestamp verification |
| `OIDC issuer mismatch` | Wrong issuer in the verify command | Check the certificate's issuer field |
| `predicate type mismatch` | Wrong `--type` argument | Use the correct predicate URI |

### Debug Commands

```bash
# List all attestations on an image
cosign tree sha256:abc123...

# Download and inspect an attestation
cosign download attestation sha256:abc123... | jq .

# Verify with verbose output
cosign verify-attestation \
  --key public.key \
  --type custom \
  -v \
  sha256:abc123... 2>&1 | tee verify.log

# Inspect the subject inside the attestation payload
cosign download attestation sha256:abc123... | \
  jq -r '.payload' | base64 -d | jq -r '.subject'
```

### Verify Certificate Details

```bash
# Extract and inspect the signing certificate
cosign download attestation sha256:abc123... | \
  jq -r '.signatures[0].cert' | base64 -d | \
  openssl x509 -noout -text
```

## Integration with CI/CD

### GitHub Actions

```yaml
- name: Install cosign
  uses: sigstore/cosign-installer@main

- name: Verify StellaOps attestation
  run: |
    cosign verify-attestation \
      --certificate-identity "https://github.com/${{ github.repository }}/.github/workflows/build.yml@${{ github.ref }}" \
      --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
      --type "https://stella-ops.org/attestation/scan-results/v1" \
      ${{ env.IMAGE_DIGEST }}
```

### GitLab CI

```yaml
verify-attestation:
  image: bitnami/cosign:latest
  script:
    - cosign verify-attestation
      --certificate-identity "https://gitlab.com/${CI_PROJECT_PATH}/.gitlab-ci.yml@${CI_COMMIT_REF_NAME}"
      --certificate-oidc-issuer "https://gitlab.com"
      --type "https://stella-ops.org/attestation/scan-results/v1"
      ${IMAGE_DIGEST}
```

## Related Documentation

- [DSSE Round-Trip Verification](./dsse-roundtrip-verification.md)
- [Transparency Log Integration](./transparency.md)
- [Air-Gap Operation](./airgap.md)
- [Sigstore Documentation](https://docs.sigstore.dev)
docs/modules/attestor/dsse-roundtrip-verification.md (new file, 292 lines)
@@ -0,0 +1,292 @@
# DSSE Round-Trip Verification

This document describes the DSSE (Dead Simple Signing Envelope) round-trip verification process in StellaOps, including bundling, offline verification, and cosign compatibility.

## Overview

DSSE round-trip verification ensures that attestations can be:

1. Created and signed
2. Serialized to persistent storage
3. Deserialized and rebundled
4. Verified offline without network access
5. Verified by external tools (cosign)

## Round-Trip Process

### Standard Flow

```
┌─────────────┐      ┌──────────────┐      ┌─────────────┐
│   Payload   │─────>│     Sign     │─────>│    DSSE     │
│  (in-toto)  │      │              │      │  Envelope   │
└─────────────┘      └──────────────┘      └──────┬──────┘
                                                  │
                                                  ▼
┌─────────────┐      ┌──────────────┐      ┌─────────────┐
│   Verify    │<─────│ Deserialize  │<─────│   Bundle    │
│   (Pass)    │      │              │      │   (JSON)    │
└──────┬──────┘      └──────────────┘      └─────────────┘
       │
       ▼
┌─────────────┐      ┌──────────────┐      ┌─────────────┐
│  Re-bundle  │─────>│  Serialize   │─────>│   Archive   │
│             │      │              │      │  (.tar.gz)  │
└──────┬──────┘      └──────────────┘      └─────────────┘
       │
       ▼
┌─────────────┐
│  Re-verify  │
│   (Pass)    │
└─────────────┘
```

### Steps

1. **Create Payload**: Build an in-toto statement with subject digests and a predicate
2. **Sign**: Create a DSSE envelope with one or more signatures
3. **Bundle**: Wrap the envelope in a Sigstore-compatible bundle format
4. **Serialize**: Convert to JSON bytes
5. **Deserialize**: Parse the JSON back into a bundle structure
6. **Extract**: Get the DSSE envelope from the bundle
7. **Re-bundle**: Create a new bundle from the extracted envelope
8. **Re-verify**: Confirm signature validity
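The steps above can be modeled end to end in a few lines. This is an illustrative sketch only — it uses HMAC-SHA256 as a stand-in signer instead of the production key material — but the PAE construction and envelope shape follow the DSSE specification:

```python
import base64
import hashlib
import hmac
import json

KEY = b"demo-key"  # stand-in for a real signing key
PTYPE = "application/vnd.in-toto+json"


def pae(ptype: bytes, body: bytes) -> bytes:
    # DSSE Pre-Authentication Encoding: what actually gets signed
    return b"DSSEv1 %d %s %d %s" % (len(ptype), ptype, len(body), body)


def sign_envelope(statement: dict) -> dict:
    payload = json.dumps(statement, sort_keys=True).encode()
    sig = hmac.new(KEY, pae(PTYPE.encode(), payload), hashlib.sha256).digest()
    return {
        "payloadType": PTYPE,
        "payload": base64.b64encode(payload).decode(),
        "signatures": [{"sig": base64.b64encode(sig).decode(), "keyid": ""}],
    }


def verify_envelope(envelope: dict) -> bool:
    payload = base64.b64decode(envelope["payload"])
    expected = hmac.new(KEY, pae(envelope["payloadType"].encode(), payload),
                        hashlib.sha256).digest()
    return any(hmac.compare_digest(expected, base64.b64decode(s["sig"]))
               for s in envelope["signatures"])


# Round trip: sign -> serialize -> deserialize -> re-verify
env = sign_envelope({"_type": "https://in-toto.io/Statement/v1"})
assert verify_envelope(json.loads(json.dumps(env)))
```

Because the envelope is carried as base64 inside JSON, serialize/deserialize cycles cannot disturb the signed bytes — which is exactly the property the round trip exercises.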

## Verification Types

### Signature Verification

Verifies the cryptographic signature against the payload:

```csharp
var result = await signatureService.VerifyAsync(envelope, cancellationToken);
if (!result.IsValid)
{
    throw new VerificationException(result.Error);
}
```

### Payload Integrity

Verifies that the payload hash matches the signed content:

```csharp
var computedHash = SHA256.HashData(envelope.Payload.Span);
var declaredHash = ParseDigest(envelope.PayloadDigest);
if (!computedHash.SequenceEqual(declaredHash))
{
    throw new IntegrityException("Payload hash mismatch");
}
```

### Certificate Chain Validation

For keyless (Fulcio) signatures:

```csharp
var chain = envelope.Signatures[0].CertificateChain;
var result = await certificateValidator.ValidateAsync(chain, options);
// Checks: expiry, revocation, issuer, extended key usage
```

## Determinism Requirements

Round-trip verification requires deterministic serialization:

| Property | Requirement |
|----------|-------------|
| Key Order | Alphabetical |
| Whitespace | No trailing whitespace, single space after colons |
| Encoding | UTF-8, no BOM |
| Numbers | No unnecessary trailing zeros |
| Arrays | Stable ordering |
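A minimal illustration of what these rules buy: serialize, parse, and re-serialize must be byte-identical. (Sketch only — it demonstrates the alphabetical-key and colon-spacing rules; the real canonicalizer enforces the full set.)

```python
import json


def canonicalize(obj) -> bytes:
    # Alphabetical key order, single space after colons, UTF-8 without BOM
    return json.dumps(obj, sort_keys=True, separators=(",", ": "),
                      ensure_ascii=False).encode("utf-8")


a = canonicalize({"b": 1, "a": [2, 3]})
b = canonicalize(json.loads(a))
assert a == b  # serialize -> parse -> serialize is byte-identical
```

Without a fixed key order and separator convention, two semantically equal envelopes could hash differently, and the round-trip comparison below would fail spuriously.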
|
||||
|
||||
### Verification
|
||||
|
||||
```csharp
|
||||
var bytes1 = CanonJson.CanonicalizeVersioned(envelope);
|
||||
var envelope2 = JsonSerializer.Deserialize<DsseEnvelope>(bytes1);
|
||||
var bytes2 = CanonJson.CanonicalizeVersioned(envelope2);
|
||||
|
||||
// bytes1 and bytes2 must be identical
|
||||
Assert.Equal(bytes1.ToArray(), bytes2.ToArray());
|
||||
```
|
||||
|
||||
## Multi-Signature Support
|
||||
|
||||
DSSE envelopes can contain multiple signatures from different signers:
|
||||
|
||||
```csharp
|
||||
var envelope = new DsseEnvelope(
|
||||
payloadType: "application/vnd.in-toto+json",
|
||||
payload: canonicalPayload,
|
||||
signatures:
|
||||
[
|
||||
signerA.Sign(canonicalPayload), // Builder signature
|
||||
signerB.Sign(canonicalPayload), // Witness signature
|
||||
signerC.Sign(canonicalPayload) // Approver signature
|
||||
]);
|
||||
```
|
||||
|
||||
Verification checks all signatures:
|
||||
|
||||
```csharp
|
||||
foreach (var signature in envelope.Signatures)
|
||||
{
|
||||
var result = await VerifySignatureAsync(envelope.Payload, signature);
|
||||
if (!result.IsValid)
|
||||
{
|
||||
return VerificationResult.Failure(
|
||||
$"Signature {signature.KeyId} failed: {result.Error}");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Cosign Compatibility
|
||||
|
||||
StellaOps DSSE envelopes are compatible with Sigstore cosign for verification.
|
||||
|
||||
### Envelope Format
|
||||
|
||||
```json
|
||||
{
|
||||
"payloadType": "application/vnd.in-toto+json",
|
||||
"payload": "<base64-encoded-payload>",
|
||||
"signatures": [
|
||||
{
|
||||
"keyid": "SHA256:abc123...",
|
||||
"sig": "<base64-encoded-signature>"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Cosign Verification Commands
|
||||
|
||||
See [Cosign Verification Examples](./cosign-verification-examples.md) for detailed commands.
|
||||
|
||||
## Offline Verification

For air-gapped environments, verification can proceed without network access:

### Prerequisites

1. **Trusted Root**: Pre-downloaded Sigstore TUF root
2. **Cached Certificates**: Fulcio certificate chain
3. **Rekor Entry** (optional): Cached transparency log entry

### Offline Flow

```csharp
var verifier = new OfflineVerifier(
    trustedRoot: TrustedRoot.LoadFromFile("trusted_root.json"),
    rekorEntries: RekorCache.LoadFromDirectory("rekor-cache/"));

var result = await verifier.VerifyAsync(envelope);
```

### Bundle for Offline Use

```bash
# Export attestation bundle with all dependencies
stellaops export attestation-bundle \
  --artifact sha256:abc123... \
  --include-certificate-chain \
  --include-rekor-entry \
  --output bundle.tar.gz
```

## Error Handling

### Common Verification Failures

| Error | Cause | Resolution |
|-------|-------|------------|
| `SignatureInvalid` | Signature doesn't match payload | Re-sign with correct key |
| `CertificateExpired` | Signing certificate expired | Use Rekor entry timestamp |
| `PayloadTampered` | Payload modified after signing | Restore original payload |
| `KeyNotTrusted` | Key not in trusted set | Add key to trust policy |
| `ParseError` | Malformed envelope JSON | Validate envelope format |

### Example Error Handling

```csharp
try
{
    var result = await verifier.VerifyAsync(envelope);
    if (!result.IsValid)
    {
        logger.LogWarning("Verification failed: {Reason}", result.FailureReason);
        // Handle policy violation
    }
}
catch (CertificateExpiredException ex)
{
    // Fall back to Rekor timestamp verification
    var result = await verifier.VerifyWithRekorTimestampAsync(
        envelope, ex.Certificate, rekorEntry);
}
catch (JsonException ex)
{
    logger.LogError(ex, "Failed to parse envelope");
    throw new VerificationException("Malformed envelope", ex);
}
```

## Testing Round-Trip Verification

### Unit Test Example

```csharp
[Fact]
public async Task RoundTrip_SignVerifyRebundle_Succeeds()
{
    // Arrange
    var payload = CreateInTotoStatement();
    var signer = CreateTestSigner();

    // Act - Sign
    var envelope = await signer.SignAsync(payload);

    // Act - Serialize and deserialize
    var json = JsonSerializer.Serialize(envelope);
    var restored = JsonSerializer.Deserialize<DsseEnvelope>(json);

    // Act - Verify restored envelope
    var result = await signer.VerifyAsync(restored);

    // Assert
    result.IsValid.Should().BeTrue();
    restored.Payload.ToArray().Should().Equal(envelope.Payload.ToArray());
}
```

### Integration Test Example

```csharp
[Fact]
public async Task RoundTrip_ArchiveExtractVerify_Succeeds()
{
    // Arrange
    var envelope = await CreateSignedEnvelope();
    var archive = new AttestationArchive();

    // Act - Archive
    var archivePath = Path.GetTempFileName() + ".tar.gz";
    await archive.WriteAsync(envelope, archivePath);

    // Act - Extract and verify
    var extracted = await archive.ReadAsync(archivePath);
    var result = await verifier.VerifyAsync(extracted);

    // Assert
    result.IsValid.Should().BeTrue();
}
```

## Related Documentation

- [Proof Chain Specification](./proof-chain-specification.md)
- [Graph Root Attestation](./graph-root-attestation.md)
- [Transparency Log Integration](./transparency.md)
- [Air-Gap Operation](./airgap.md)
- [Cosign Verification Examples](./cosign-verification-examples.md)

concelier:
  postgres:
    connectionString: "Host=postgres;Port=5432;Database=concelier;Username=stellaops;Password=stellaops"
  s3:
    endpoint: "http://rustfs:8080"
    bucket: "stellaops-concelier"
  scheduler:
    windowSeconds: 30

helm install stella stellaops/platform \
  --version 2.4.0 \
  --set global.channel=stable \
  --set authority.issuer=https://authority.stella.local \
  --set scanner.rustfs.endpoint=http://rustfs.stella.local:8080 \
  --set global.postgres.connectionString="Host=postgres.stella.local;Database=stellaops_platform;Username=stellaops;Password=<secret>"
```

* Post‑install job registers **Authority clients** (Scanner, Signer, Attestor, UI) and prints **bootstrap** URLs and client credentials (sealed secrets).
services:
  scanner-web:
    image: registry.stella-ops.org/stellaops/scanner-web@sha256:...
    environment:
      - SCANNER__ARTIFACTSTORE__ENDPOINT=http://rustfs:8080
  scanner-worker:
    image: registry.stella-ops.org/stellaops/scanner-worker@sha256:...
    deploy: { replicas: 4 }
  excititor:
    image: registry.stella-ops.org/stellaops/excititor@sha256:...
  web-ui:
    image: registry.stella-ops.org/stellaops/web-ui@sha256:...
  postgres:
    image: postgres:16
  valkey:
    image: valkey/valkey:8.0
  rustfs:
    image: registry.stella-ops.org/stellaops/rustfs:2025.10.0-edge
```

---

   Archive the resulting `out/offline-kit/metadata/debug-store.json` alongside the kit bundle.

5. **Review compatibility matrix**
   Confirm PostgreSQL, Valkey, and RustFS versions in the release manifest match platform SLOs. The default targets are `postgres:16-alpine`, `valkey:8.0`, and `rustfs:2025.10.0-edge`.

6. **Create a rollback bookmark**
   Record the current Helm revision (`helm history stellaops -n stellaops`) and compose tag (`git describe --tags`) before applying changes.

excititor:
  postgres:
    connectionString: "Host=postgres;Port=5432;Database=excititor;Username=stellaops;Password=stellaops"
  s3:
    endpoint: http://rustfs:8080
    bucket: stellaops
  policy:
    weights:
* **Scaling:**

  * WebService handles control APIs; **Worker** background services (same image) execute fetch/normalize in parallel with rate‑limits; PostgreSQL writes batched; upserts by natural keys.
  * Exports stream straight to S3 (RustFS) with rolling buffers.

* **Caching:**


---

## 6) Future alignment

* Replace manual export definitions with generated mirror bundle manifests once `EXCITITOR-EXPORT-01-007` ships.
* Extend `/index` payload with quiet-provenance when `EXCITITOR-EXPORT-01-006` adds that metadata.
* Integrate domain manifests with DevOps mirror profiles (`DEVOPS-MIRROR-08-001`) so helm/compose overlays can enable or disable domains declaratively.

---

## 7) Runbook & observability checklist (Sprint 22 demo refresh · 2025-11-07)

### Daily / on-call checks
1. **Index freshness** – watch `excitor_mirror_export_latency_seconds` (p95 < 180) grouped by `domainId`. If latency grows past 10 minutes, verify the export worker queue (`stellaops-export-worker` logs) and ensure PostgreSQL `vex.exports` has entries newer than `now()-10m`.
2. **Quota exhaustion** – alert on `excitor_mirror_quota_exhausted_total{scope="download"}` increases. When triggered, inspect structured logs (`MirrorDomainId`, `QuotaScope`, `RemoteIp`) and either raise limits or throttle abusive clients.
3. **Bundle signature health** – the metric `excitor_mirror_bundle_signature_verified_total` should match download counts when signing is enabled. Deltas indicate missing `.jws` files; rebuild the bundle via the export job or copy artefacts from the authority mirror cache.
4. **HTTP errors** – dashboards should track 4xx/5xx rates split by route; repeated `503` statuses imply misconfigured exports. Check `mirror/index` logs for `status=misconfigured`.

### Incident steps
1. Use `GET /excititor/mirror/domains/{id}/index` to capture current manifests. Attach the response to the incident log for reproducibility.
2. For quota incidents, temporarily raise `maxIndexRequestsPerHour`/`maxDownloadRequestsPerHour` via the `Excititor:Mirror:Domains` config override, redeploy, then work with the consuming team on caching.
3. For stale exports, trigger the export job (`Excititor.ExportRunner`) and confirm the artefacts are written to `outputRoot/<domain>`.
4. Validate DSSE artefacts by running `cosign verify-blob --certificate-rekor-url=<rekor> --bundle <domain>/bundle.json --signature <domain>/bundle.json.jws`.

### Logging fields (structured)
| Field | Description |
| --- | --- |
| `MirrorDomainId` | Domain handling the request (matches `id` in config). |
| `QuotaScope` | `index` / `download`, useful when alerting on quota events. |
| `ExportKey` | Included in download logs to pinpoint misconfigured exports. |
| `BundleDigest` | SHA-256 of the artefact; compare with the index payload when debugging corruption. |

### OTEL signals
- **Counters:** `excitor.mirror.requests`, `excitor.mirror.quota_blocked`, `excitor.mirror.signature.failures`.
- **Histograms:** `excitor.mirror.download.duration`, `excitor.mirror.export.latency`.
- **Spans:** `mirror.index`, `mirror.download` include attributes `mirror.domain`, `mirror.export.key`, and `mirror.quota.remaining`.

Add these instruments via the `MirrorEndpoints` middleware; see `StellaOps.Excititor.WebService/Telemetry/MirrorMetrics.cs`.

      { "type": "string", "contentEncoding": "base64" },
      { "type": "string" }
    ],
    "description": "Inline payload if below size threshold; may be empty when stored in RustFS (legacy: GridFS prior to Sprint 4400)."
  },
  "gridFsObjectId": {
    "anyOf": [

| Component | Requirement |
| --- | --- |
| Database | PostgreSQL 16+ with `citext`, `uuid-ossp`, `pgcrypto`, and `pg_partman`. Provision dedicated database/user per environment. |
| Storage | Minimum 200 GB SSD per production environment (ledger + projection + Merkle tables). |
| TLS & identity | Authority reachable for service-to-service JWTs; mTLS optional but recommended. |
| Secrets | Store DB connection string, encryption keys (`LEDGER__ATTACHMENTS__ENCRYPTIONKEY`), and signing credentials for Merkle anchoring in a secrets manager. |

| Concern | Decision | Notes |
|---------|----------|-------|
| Engine | PostgreSQL 16+ with UTF-8, `jsonb`, and partitioning support | Aligns with shared data plane; deterministic ordering enforced via primary keys. |
| Tenancy | Range/list partition on `tenant_id` for ledger + projection tables | Simplifies retention and cross-tenant anchoring. |
| Time zone | All timestamps stored as `timestamptz` UTC | Canonical JSON uses ISO-8601 (`yyyy-MM-ddTHH:mm:ss.fffZ`). |
| Hashing | SHA-256 (lower-case hex) over canonical JSON | Implemented client-side and verified by DB constraint. |

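The hashing decision above (SHA-256, lower-case hex, over canonical JSON) can be sketched as follows. This is an illustration only: the canonical form shown here (sorted keys, compact separators) is a stand-in for the ledger's actual canonicalizer, which also fixes number and timestamp formatting:

```python
import hashlib
import json

def canonical_hash(document: dict) -> str:
    # Illustrative canonical form: sorted keys, no whitespace. The real
    # canonicalizer additionally normalizes numbers and timestamps.
    canonical = json.dumps(document, sort_keys=True, separators=(",", ":"))
    # SHA-256 over the UTF-8 canonical bytes; hexdigest() is lower-case hex.
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

digest = canonical_hash({"tenant_id": "t1", "created_at": "2025-01-01T00:00:00.000Z"})
print(len(digest))  # 64 hex characters
```

Because key order is normalized before hashing, two logically identical documents always hash to the same digest, which is what the DB constraint verifies.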
## 1 · Prerequisites

- Authority must be running and reachable at the issuer URL you configure (default Compose host: `https://authority:8440`).
- PostgreSQL 16+ with credentials for the `issuer_directory` database (Compose defaults to the user defined in `.env`).
- Network access to Authority, PostgreSQL, and (optionally) Prometheus if you scrape metrics.
- Issuer Directory configuration file `etc/issuer-directory.yaml` checked and customised for your environment (tenant header, audiences, telemetry level, CSAF seed path).

Include the following artefacts in your Offline Update Kit staging tree:

| `images/issuer-directory-web.tar` | `registry.stella-ops.org/stellaops/issuer-directory-web` (digest from `deploy/releases/<channel>.yaml`) | Export with `crane pull --format=tar` or `skopeo copy docker://... oci:...`. |
| `config/issuer-directory/issuer-directory.yaml` | `etc/issuer-directory.yaml` (customised) | Replace Authority issuer, tenant header, and log level as required. |
| `config/issuer-directory/csaf-publishers.json` | `src/IssuerDirectory/StellaOps.IssuerDirectory/data/csaf-publishers.json` or regional override | Operators can edit before import to add private publishers. |
| `secrets/issuer-directory/connection.env` | Secure secret store export (`ISSUER_DIRECTORY_POSTGRES_CONNECTION_STRING=`) | Encrypt at rest; Offline Kit importer places it in the Compose/Helm secret. |
| `env/issuer-directory.env` (optional) | Curated `.env` snippet (for example `ISSUER_DIRECTORY_SEED_CSAF=false`) | Helps operators disable reseeding after their first import without editing the main profile. |
| `docs/issuer-directory/deployment.md` | `docs/modules/issuer-directory/operations/deployment.md` | Ship alongside kit documentation for operators. |

- **Security:** RBAC tests, tenant isolation, secret reference validation, DSSE signature verification.
- **Offline:** export/import round-trips, Offline Kit deployment, manual delivery replay.

## Definition of done
- Notify service, workers, connectors, Console/CLI, observability, and Offline Kit assets shipped with documentation and runbooks.
- Compliance checklist appended to docs; ./TASKS.md and ../../TASKS.md updated with progress.

## Sprint alignment (2025-11-30)
- Docs sprint: `docs/implplan/SPRINT_322_docs_modules_notify.md`; statuses mirrored in `docs/modules/notify/TASKS.md`.
- Observability evidence stub: `operations/observability.md` and `operations/dashboards/notify-observability.json` (to be populated after next demo outputs).
- NOTIFY-DOCS-0002 remains blocked pending NOTIFY-SVC-39-001..004 (correlation/digests/simulation/quiet hours); keep sprint/TASKS synced when those land.

---

## Sprint readiness tracker

> Last updated: 2025-11-27 (NOTIFY-ENG-0001)

This section maps delivery phases to implementation sprints and tracks readiness.

| Attestor payload localization | NOTIFY-ATTEST-74-002 | Freeze pending |
| POLICY-RISK-40-002 export | NOTIFY-RISK-66/67/68 | BLOCKED |
| Sprint 0172 tenancy model | NOTIFY-TEN-48-001 | In progress |
| Telemetry SLO webhook schema | NOTIFY-OBS-51-001 | ✅ Published (`docs/modules/notify/slo-webhook-schema.md`) |

### Next actions
1. Complete NOTIFY-SVC-37-003 dispatch/rendering wiring (Sprint 0172).

  "executed_at": "2025-12-04T00:00:00Z",
  "tenant_id": "test-tenant",
  "fixtures_used": [
    "docs/modules/notify/fixtures/rendering/tmpl-incident-start.email.en-US.json"
  ],
  "channels_tested": ["email", "slack"],
  "results": {

  "approval_required": true,
  "approval_status": "pending",
  "evidence_links": [
    "docs/modules/notify/fixtures/rendering/index.ndjson"
  ]
}

This document outlines the implementation plan for delivering **Explainable Triage**.

| Capability | Implementation | Completeness |
|------------|----------------|--------------|
| Reachability analysis | 11 language analyzers, binary, runtime | 95% |
| VEX processing | OpenVEX, CSAF, CycloneDX with lattice | 90% |
| Explainability | ExplainTrace with rule steps | 95% |
| Evidence generation | Path witnesses, rich graphs | 90% |


---

`docs/modules/policy/design/confidence-to-ews-migration.md` (new file, 422 lines)

# Confidence to Evidence-Weighted Score Migration Guide

> **Version:** 1.0
> **Status:** Active
> **Last Updated:** 2025-12-31
> **Sprint:** 8200.0012.0003 (Policy Engine Integration)

## Overview

This document describes the migration path from the legacy **Confidence** scoring system to the new **Evidence-Weighted Score (EWS)** system. The migration is designed to be gradual, low-risk, and fully reversible at each stage.

### Key Differences

| Aspect | Confidence (Legacy) | Evidence-Weighted Score |
|--------|---------------------|------------------------|
| **Score Range** | 0.0–1.0 (decimal) | 0–100 (integer) |
| **Direction** | Higher = more confident | Higher = higher risk/priority |
| **Basis** | Heuristic confidence in finding | Evidence-backed exploitability |
| **Breakdown** | Single value | 6 dimensions (Rch, Rts, Bkp, Xpl, Src, Mit) |
| **Determinism** | Limited | Fully deterministic with proofs |
| **Attestation** | Not attested | Included in verdict attestation |

### Semantic Inversion

The most important difference is **semantic inversion**:

- **Confidence**: Higher values indicate higher confidence that a finding is accurate
- **EWS**: Higher values indicate stronger exploitability evidence (more urgent to fix)

A high-confidence finding may have a low EWS if evidence shows it's mitigated. Conversely, a low-confidence finding may have a high EWS if runtime signals indicate active exploitation.

---

## Migration Phases

### Phase 1: Feature Flag (Opt-In)

**Duration:** Immediate → 2 weeks
**Risk:** None (off by default)

Enable EWS calculation without changing behavior:

```json
{
  "Policy": {
    "EvidenceWeightedScore": {
      "Enabled": false,
      "DualEmitMode": false,
      "UseAsPrimaryScore": false
    }
  }
}
```

**What happens:**
- EWS infrastructure is loaded but dormant
- No performance impact
- No output changes

**When to proceed:** After infrastructure validation

---

### Phase 2: Dual-Emit Mode (Parallel Calculation)

**Duration:** 2–4 weeks
**Risk:** Low (additive only)

Enable both scoring systems in parallel:

```json
{
  "Policy": {
    "EvidenceWeightedScore": {
      "Enabled": true,
      "DualEmitMode": true,
      "UseAsPrimaryScore": false
    }
  }
}
```

**What happens:**
- Both Confidence AND EWS are calculated
- Both appear in verdicts and attestations
- Telemetry compares rankings
- Existing rules use Confidence (unchanged behavior)

**Verdict output example:**
```json
{
  "findingId": "CVE-2024-1234:pkg:npm/lodash@4.17.0",
  "status": "block",
  "confidence": {
    "value": 0.85,
    "tier": "High"
  },
  "evidenceWeightedScore": {
    "score": 72,
    "bucket": "ScheduleNext",
    "breakdown": {
      "rch": { "weighted": 18, "weight": 0.25, "raw": 0.72 },
      "rts": { "weighted": 24, "weight": 0.30, "raw": 0.80 },
      "bkp": { "weighted": 0, "weight": 0.10, "raw": 0.00 },
      "xpl": { "weighted": 10, "weight": 0.15, "raw": 0.67 },
      "src": { "weighted": 12, "weight": 0.15, "raw": 0.80 },
      "mit": { "weighted": 8, "weight": 0.05, "raw": 1.60 }
    },
    "flags": ["live-signal"],
    "explanations": ["Runtime signal detected (score +8)", "Reachable via call graph"]
  }
}
```
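The breakdown in the verdict example composes deterministically. A quick arithmetic sketch (Python; the formula `weighted = round(weight × raw × 100)` is inferred from the sample values above, not a normative statement of the engine's implementation):

```python
# (weight, raw) per dimension, copied from the verdict example above.
breakdown = {
    "rch": (0.25, 0.72),
    "rts": (0.30, 0.80),
    "bkp": (0.10, 0.00),
    "xpl": (0.15, 0.67),
    "src": (0.15, 0.80),
    "mit": (0.05, 1.60),
}

# Assumed composition: each weighted term is round(weight * raw * 100),
# and the overall score is their sum.
weighted = {dim: round(weight * raw * 100) for dim, (weight, raw) in breakdown.items()}
score = sum(weighted.values())
print(score)  # 72, matching the verdict above
```

Reproducing the published breakdown from the raw inputs like this is a cheap sanity check when debugging divergent verdicts.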
**Monitoring during this phase:**
- Use `IMigrationTelemetryService` to track alignment
- Review `MigrationTelemetryStats.AlignmentRate`
- Investigate samples where rankings diverge significantly

**When to proceed:** When alignment rate > 80% or divergences are understood

---

### Phase 3: EWS as Primary (Shadow Confidence)

**Duration:** 2–4 weeks
**Risk:** Medium (behavior change possible)

Switch primary scoring while keeping Confidence for validation:

```json
{
  "Policy": {
    "EvidenceWeightedScore": {
      "Enabled": true,
      "DualEmitMode": true,
      "UseAsPrimaryScore": true
    }
  }
}
```

**What happens:**
- EWS is used for policy rule evaluation
- Confidence is still calculated and emitted (deprecated field)
- Policy rules should be migrated to use `score` instead of `confidence`

**Rule migration example:**

Before (Confidence-based):
```yaml
rules:
  - name: block-high-confidence
    when: confidence >= 0.9
    then: block
```

After (EWS-based):
```yaml
rules:
  - name: block-high-evidence
    when: score >= 85
    then: block

  # Or use bucket-based rules for clearer semantics:
  - name: block-act-now
    when: score.bucket == "ActNow"
    then: block
```

**Recommended rule patterns:**

| Confidence Rule | EWS Equivalent | Notes |
|----------------|----------------|-------|
| `confidence >= 0.9` | `score >= 85` or `score.bucket == "ActNow"` | Very high certainty |
| `confidence >= 0.7` | `score >= 60` or `score.bucket in ["ActNow", "ScheduleNext"]` | High certainty |
| `confidence >= 0.5` | `score >= 40` | Medium certainty |
| `confidence < 0.3` | `score < 25` | Low evidence |
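For bulk rule migration, the mapping table above can be encoded as a small lookup helper. This is a hypothetical convenience written for this guide (the thresholds come straight from the table; the nearest-neighbor fallback for in-between values is an assumption, not engine behavior):

```python
# Confidence threshold -> equivalent EWS threshold, per the mapping table above.
CONFIDENCE_TO_EWS = {0.9: 85, 0.7: 60, 0.5: 40, 0.3: 25}

def ews_threshold(confidence_threshold: float) -> int:
    """Return the EWS threshold equivalent to a legacy confidence threshold."""
    try:
        return CONFIDENCE_TO_EWS[confidence_threshold]
    except KeyError:
        # Assumed fallback: snap to the nearest tabulated threshold.
        nearest = min(CONFIDENCE_TO_EWS, key=lambda k: abs(k - confidence_threshold))
        return CONFIDENCE_TO_EWS[nearest]

print(ews_threshold(0.9))   # 85
print(ews_threshold(0.75))  # 60 (nearest tabulated threshold is 0.7)
```

Remember the semantic inversion described earlier: a converted threshold compares evidence strength, not confidence, so review each rewritten rule rather than converting mechanically.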
**When to proceed:** After rule migration and 2+ weeks of stable operation

---

### Phase 4: EWS-Only (Deprecation Complete)

**Duration:** Permanent
**Risk:** Low (rollback path exists)

Disable legacy Confidence scoring:

```json
{
  "Policy": {
    "EvidenceWeightedScore": {
      "Enabled": true,
      "DualEmitMode": false,
      "UseAsPrimaryScore": true
    }
  }
}
```

**What happens:**
- Only EWS is calculated
- The Confidence field is null in verdicts
- Performance improvement (single calculation)
- Consumers must use EWS fields

**Breaking changes to document:**
- `Verdict.Confidence` returns null
- The `ConfidenceScore` type is deprecated (will be removed in v3.0)
- Rules referencing `confidence` will fail validation

---

## Configuration Reference

### Full Configuration Schema

```json
{
  "Policy": {
    "EvidenceWeightedScore": {
      "Enabled": true,
      "DualEmitMode": true,
      "UseAsPrimaryScore": false,
      "EnableCaching": true,
      "CacheDurationSeconds": 300,
      "Weights": {
        "Reachability": 0.25,
        "RuntimeSignal": 0.30,
        "BackportStatus": 0.10,
        "ExploitMaturity": 0.15,
        "SourceTrust": 0.15,
        "MitigationStatus": 0.05
      },
      "BucketThresholds": {
        "ActNow": 85,
        "ScheduleNext": 60,
        "Investigate": 40
      },
      "Telemetry": {
        "EnableMigrationMetrics": true,
        "SampleRate": 0.1,
        "MaxSamples": 1000
      }
    }
  }
}
```
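Given the `BucketThresholds` block above, bucket assignment reduces to an ordered comparison. A minimal sketch (Python; the name of the bucket below `Investigate` is not given in this document, so `"Monitor"` is a placeholder):

```python
# Thresholds mirror the BucketThresholds configuration above,
# ordered from highest to lowest minimum score.
THRESHOLDS = [("ActNow", 85), ("ScheduleNext", 60), ("Investigate", 40)]

def bucket_for(score: int) -> str:
    """Assign a score to the first bucket whose minimum it meets."""
    for name, minimum in THRESHOLDS:
        if score >= minimum:
            return name
    return "Monitor"  # placeholder: the lowest bucket's name is not specified here

print(bucket_for(72))  # ScheduleNext, matching the Phase 2 verdict example
```

Because the check walks thresholds from highest to lowest, raising a threshold in configuration can only move findings into lower-urgency buckets, never higher ones.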
### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `POLICY_EWS_ENABLED` | `false` | Enable EWS calculation |
| `POLICY_EWS_DUAL_EMIT` | `false` | Emit both scores |
| `POLICY_EWS_PRIMARY` | `false` | Use EWS as primary score |
| `POLICY_EWS_CACHE_ENABLED` | `true` | Enable score caching |

---

## Telemetry & Monitoring

### Metrics

The migration telemetry service exposes these metrics:

| Metric | Type | Description |
|--------|------|-------------|
| `stellaops.policy.migration.comparisons_total` | Counter | Total comparisons made |
| `stellaops.policy.migration.aligned_total` | Counter | Comparisons where rankings aligned |
| `stellaops.policy.migration.score_difference` | Histogram | Distribution of score differences |
| `stellaops.policy.migration.tier_bucket_match_total` | Counter | Tier/bucket matches |
| `stellaops.policy.dual_emit.verdicts_total` | Counter | Dual-emit verdicts produced |

### Dashboard Queries

**Alignment rate over time:**
```promql
rate(stellaops_policy_migration_aligned_total[5m])
  / rate(stellaops_policy_migration_comparisons_total[5m])
```

**Score difference distribution:**
```promql
histogram_quantile(0.95, stellaops_policy_migration_score_difference_bucket)
```
### Sample Analysis
|
||||
|
||||
Use `IMigrationTelemetryService.GetRecentSamples()` to retrieve divergent samples:

```csharp
var telemetry = serviceProvider.GetRequiredService<IMigrationTelemetryService>();
var stats = telemetry.GetStats();

if (stats.AlignmentRate < 0.8m)
{
    var samples = telemetry.GetRecentSamples(50)
        .Where(s => !s.IsAligned)
        .OrderByDescending(s => Math.Abs(s.ScoreDifference));

    foreach (var sample in samples)
    {
        Console.WriteLine($"{sample.FindingId}: Conf={sample.ConfidenceValue:F2} → EWS={sample.EwsScore} (Δ={sample.ScoreDifference})");
    }
}
```

---

## Rollback Procedures

### Phase 4 → Phase 3 (Re-enable Dual-Emit)

```json
{
  "Policy": {
    "EvidenceWeightedScore": {
      "DualEmitMode": true
    }
  }
}
```

Restart services; Confidence scores will be calculated and emitted again.

### Phase 3 → Phase 2 (Revert to Confidence Primary)

```json
{
  "Policy": {
    "EvidenceWeightedScore": {
      "UseAsPrimaryScore": false
    }
  }
}
```

Rules using `confidence` work again; rules using `score` continue to work.

### Phase 2 → Phase 1 (Disable EWS)

```json
{
  "Policy": {
    "EvidenceWeightedScore": {
      "Enabled": false
    }
  }
}
```

No EWS calculation is performed, so there is no performance impact.

### Emergency Rollback

Set the environment variable for immediate effect without a restart (if hot-reload is enabled):

```bash
export POLICY_EWS_ENABLED=false
```

---

## Rule Migration Checklist

- [ ] Inventory all policies using the `confidence` field
- [ ] Map confidence thresholds to EWS thresholds (see table above)
- [ ] Update rules to use `score` syntax
- [ ] Consider bucket-based rules for clearer semantics
- [ ] Test rules in dual-emit mode before switching the primary score
- [ ] Update documentation and runbooks
- [ ] Train operators on the new score interpretation
- [ ] Update alerting thresholds

---

## FAQ

### Q: Will existing rules break?

**A:** Not during dual-emit mode; rules using `confidence` continue to work. Once `UseAsPrimaryScore: true` is set, new rules should use `score`. Old `confidence` rules emit deprecation warnings and fail validation in Phase 4.

### Q: How do I interpret the score difference?

**A:** The ConfidenceToEwsAdapter maps Confidence (0–1) to an approximate EWS (0–100) with semantic inversion. A difference of ±15 points is normal because the underlying models differ. Investigate differences greater than 30 points.
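As an illustration only, a naive linear inversion shows why moderate differences are expected and how the 30-point review threshold could be applied. The formula and helper names below are our assumptions, not the adapter's actual model:

```python
def approximate_ews(confidence: float) -> float:
    """Hypothetical linear inversion: high confidence maps to a low
    evidence-weighted score, and vice versa. The real
    ConfidenceToEwsAdapter uses a richer model than this."""
    return (1.0 - confidence) * 100.0

def needs_review(confidence: float, actual_ews: float) -> bool:
    """Flag findings whose EWS diverges from the naive mapping by > 30 points."""
    return abs(approximate_ews(confidence) - actual_ews) > 30.0

print(needs_review(0.9, 12.0))  # small delta from the naive mapping
print(needs_review(0.9, 55.0))  # large delta, worth a manual look
```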

### Q: What if my rankings diverge significantly?

**A:** This is expected for findings where:
- Runtime signals (Rts) differ from static analysis
- Vendor VEX overrides traditional severity
- Reachability analysis shows the code is unreachable

Review these cases manually. EWS is likely more accurate because it integrates this evidence.

### Q: Can I customize the EWS weights?

**A:** Yes, via the `Weights` configuration. However, changing weights affects determinism proofs; document any changes and bump the policy version.

### Q: What about attestations?

**A:** During dual-emit, attestations include both scores. After Phase 4, only EWS is attested. Old attestations remain verifiable with their original scores.

---

## Related Documents

- [Evidence-Weighted Score Architecture](../../signals/architecture.md)
- [Policy DSL Reference](../contracts/policy-dsl.md)
- [Verdict Attestation](../verdict-attestation.md)
- [Sprint 8200.0012.0003](../../../../implplan/SPRINT_8200_0012_0003_policy_engine_integration.md)

---

## Revision History

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | 2025-12-31 | Implementer | Initial migration guide |

@@ -1,6 +1,6 @@
# Provcache Module

> **Status: Planned** — This module is documented for upcoming implementation in Sprint 8200. The design is finalized but source code does not yet exist.
> **Status: Implemented** — Core library shipped in Sprint 8200.0001.0001. API endpoints, caching, invalidation and write-behind queue are operational. Policy Engine integration pending architectural review.

> Provenance Cache — Maximizing Trust Evidence Density

@@ -54,11 +54,17 @@ A composite hash that uniquely identifies a provenance decision context:

```
VeriKey = SHA256(
    "v1|" ||              // Version prefix for compatibility
    source_hash ||        // Image/artifact digest
    "|" ||
    sbom_hash ||          // Canonical SBOM hash
    "|" ||
    vex_hash_set_hash ||  // Sorted VEX statement hashes
    "|" ||
    merge_policy_hash ||  // PolicyBundle hash
    "|" ||
    signer_set_hash ||    // Signer certificate hashes
    "|" ||
    time_window           // Epoch bucket
)
```

@@ -74,6 +80,31 @@
| `signer_set_hash` | Key rotation → new key (security) |
| `time_window` | Temporal bucketing → controlled expiry |

#### VeriKey Composition Rules

1. **Hash Normalization**: All input hashes are normalized to lowercase, with any `sha256:` prefix stripped
2. **Set Hash Computation**: For VEX statements and signer certificates:
   - Individual hashes are sorted lexicographically (ordinal)
   - Sorted hashes are concatenated with the `|` delimiter
   - The result is SHA256-hashed
   - Empty sets use well-known sentinels (`"empty-vex-set"`, `"empty-signer-set"`)
3. **Time Window Computation**: `floor(timestamp.Ticks / bucket.Ticks) * bucket.Ticks`, rendered in UTC ISO-8601 format
4. **Output Format**: `sha256:<64-char-lowercase-hex>`
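The rules above can be sketched in a few lines. This Python illustration is ours (not the shipped C# `VeriKeyBuilder`) and shows normalization, set hashing, and time bucketing under the stated rules:

```python
import hashlib
from datetime import datetime, timezone

def normalize(h: str) -> str:
    # Rule 1: lowercase, strip a leading "sha256:" prefix if present
    h = h.lower()
    return h[len("sha256:"):] if h.startswith("sha256:") else h

def set_hash(hashes: list[str], sentinel: str) -> str:
    # Rule 2: sort (ordinal), join with "|", SHA256; empty sets use a sentinel
    data = "|".join(sorted(normalize(h) for h in hashes)) if hashes else sentinel
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def time_window(ts: datetime, bucket_seconds: int) -> str:
    # Rule 3: floor the timestamp to the bucket boundary, UTC ISO-8601
    epoch = int(ts.timestamp()) // bucket_seconds * bucket_seconds
    return datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat()

vex = set_hash(["sha256:B2", "sha256:a1"], "empty-vex-set")
print(vex)  # 64-char lowercase hex digest
print(time_window(datetime(2025, 6, 15, 12, 34, tzinfo=timezone.utc), 3600))
```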

#### Code Example

```csharp
var veriKey = new VeriKeyBuilder(options)
    .WithSourceHash("sha256:abc123...")                 // Image digest
    .WithSbomHash("sha256:def456...")                   // SBOM digest
    .WithVexStatementHashes(["sha256:v1", "sha256:v2"]) // Sorted automatically
    .WithMergePolicyHash("sha256:policy...")            // Policy bundle
    .WithCertificateHashes(["sha256:cert1"])            // Signer certs
    .WithTimeWindow(DateTimeOffset.UtcNow)              // Auto-bucketed
    .Build();
// Returns: "sha256:789abc..."
```

### DecisionDigest

Canonicalized representation of an evaluation result:
@@ -231,6 +262,56 @@ stella prov import --input proof-lite.json --lazy-fetch --backend https://api.st

## Configuration

### C# Configuration Class

The `ProvcacheOptions` class (configuration section name: `"Provcache"`) exposes the following settings:

| Property | Type | Default | Validation | Description |
|----------|------|---------|------------|-------------|
| `DefaultTtl` | `TimeSpan` | 24h | 1min–7d | Default time-to-live for cache entries |
| `MaxTtl` | `TimeSpan` | 7d | 1min–30d | Maximum allowed TTL regardless of request |
| `TimeWindowBucket` | `TimeSpan` | 1h | 1min–24h | Time window bucket for VeriKey computation |
| `ValkeyKeyPrefix` | `string` | `"stellaops:prov:"` | — | Key prefix for Valkey storage |
| `EnableWriteBehind` | `bool` | `true` | — | Enable async Postgres persistence |
| `WriteBehindFlushInterval` | `TimeSpan` | 5s | 1s–5min | Interval for flushing the write-behind queue |
| `WriteBehindMaxBatchSize` | `int` | 100 | 1–10000 | Maximum batch size per flush |
| `WriteBehindQueueCapacity` | `int` | 10000 | 100–1M | Maximum queue capacity (blocks when full) |
| `WriteBehindMaxRetries` | `int` | 3 | 0–10 | Retry attempts for failed writes |
| `ChunkSize` | `int` | 65536 | 1KB–1MB | Evidence chunk size in bytes |
| `MaxChunksPerEntry` | `int` | 1000 | 1–100000 | Maximum chunks per cache entry |
| `AllowCacheBypass` | `bool` | `true` | — | Allow clients to force re-evaluation |
| `DigestVersion` | `string` | `"v1"` | — | Serialization version for digests |
| `HashAlgorithm` | `string` | `"SHA256"` | — | Hash algorithm for VeriKey/digest |
| `EnableValkeyCache` | `bool` | `true` | — | Enable the Valkey layer (false = Postgres only) |
| `SlidingExpiration` | `bool` | `false` | — | Refresh TTL on cache hits |
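To make `ChunkSize` and `MaxChunksPerEntry` concrete, here is a small Python sketch of how an evidence blob could be split under these limits. It is illustrative only; the function name is ours, not the library's:

```python
def chunk_evidence(blob: bytes, chunk_size: int = 65536, max_chunks: int = 1000) -> list[bytes]:
    """Split an evidence blob into fixed-size chunks, enforcing the
    MaxChunksPerEntry limit from ProvcacheOptions."""
    chunks = [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]
    if len(chunks) > max_chunks:
        raise ValueError(f"entry needs {len(chunks)} chunks, limit is {max_chunks}")
    return chunks

blob = b"x" * 200_000                                 # ~195 KiB of evidence
chunks = chunk_evidence(blob)
print(len(chunks), len(chunks[0]), len(chunks[-1]))   # 3 full chunks + 1 partial
```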

### appsettings.json Example

```json
{
  "Provcache": {
    "DefaultTtl": "24:00:00",
    "MaxTtl": "7.00:00:00",
    "TimeWindowBucket": "01:00:00",
    "ValkeyKeyPrefix": "stellaops:prov:",
    "EnableWriteBehind": true,
    "WriteBehindFlushInterval": "00:00:05",
    "WriteBehindMaxBatchSize": 100,
    "WriteBehindQueueCapacity": 10000,
    "WriteBehindMaxRetries": 3,
    "ChunkSize": 65536,
    "MaxChunksPerEntry": 1000,
    "AllowCacheBypass": true,
    "DigestVersion": "v1",
    "HashAlgorithm": "SHA256",
    "EnableValkeyCache": true,
    "SlidingExpiration": false
  }
}
```

### YAML Example (Helm/Kubernetes)

```yaml
provcache:
  # TTL configuration
@@ -253,6 +334,21 @@ provcache:
  digestVersion: "v1"
```

### Dependency Injection Registration

```csharp
// In Program.cs or Startup.cs
services.AddProvcache(configuration);

// Or with explicit configuration
services.AddProvcache(options =>
{
    options.DefaultTtl = TimeSpan.FromHours(12);
    options.EnableWriteBehind = true;
    options.WriteBehindMaxBatchSize = 200;
});
```

## Observability

### Metrics
@@ -367,6 +463,35 @@ CREATE TABLE provcache.prov_revocations (
);
```

## Implementation Status

### Completed (Sprint 8200.0001.0001)

| Component | Path | Status |
|-----------|------|--------|
| Core Models | `src/__Libraries/StellaOps.Provcache/Models/` | ✅ Done |
| VeriKeyBuilder | `src/__Libraries/StellaOps.Provcache/VeriKeyBuilder.cs` | ✅ Done |
| DecisionDigest | `src/__Libraries/StellaOps.Provcache/DecisionDigest.cs` | ✅ Done |
| Caching Layer | `src/__Libraries/StellaOps.Provcache/Caching/` | ✅ Done |
| WriteBehindQueue | `src/__Libraries/StellaOps.Provcache/Persistence/` | ✅ Done |
| API Endpoints | `src/__Libraries/StellaOps.Provcache.Api/` | ✅ Done |
| Unit Tests (53) | `src/__Libraries/__Tests/StellaOps.Provcache.Tests/` | ✅ Done |

### Blocked

| Component | Reason |
|-----------|--------|
| Policy Engine Integration | `PolicyEvaluator` is `internal sealed`; requires architectural review to expose injection points for `IProvcacheService` |

### Pending

| Component | Sprint |
|-----------|--------|
| Signer Revocation Events | 8200.0001.0002 |
| CLI Export/Import | 8200.0001.0002 |
| UI Badges & Proof Tree | 8200.0001.0003 |
| Grafana Dashboards | 8200.0001.0003 |

## Implementation Sprints

| Sprint | Focus | Key Deliverables |

@@ -44,7 +44,7 @@ Operational rules:
## 3) APIs (first wave)
- `GET /sbom/paths?purl=...&artifact=...&scope=...&env=...` — returns ordered paths with runtime_flag/blast_radius and a nearest-safe-version hint; supports `cursor` pagination.
- `GET /sbom/versions?artifact=...` — time-ordered SBOM version timeline for Advisory AI; includes provenance and the source bundle hash.
- `POST /sbom/upload` — BYOS upload endpoint; validates/normalizes SPDX 2.3/3.0 or CycloneDX 1.4–1.6 and registers a ledger version.
- `POST /sbom/upload` — BYOS upload endpoint; validates/normalizes SPDX 2.3/3.0 or CycloneDX 1.4–1.7 and registers a ledger version.
- `GET /sbom/ledger/history` — list version history for an artifact (cursor pagination).
- `GET /sbom/ledger/point` — resolve the SBOM version at a specific timestamp.
- `GET /sbom/ledger/range` — query versions within a time range.

@@ -29,5 +29,5 @@ Example:
## Troubleshooting
- **"sbom or sbomBase64 is required"**: include an SBOM payload in the request.
- **"Unable to detect SBOM format"**: set `format` explicitly or include the required root fields.
- **Unsupported SBOM format/version**: ensure CycloneDX 1.4–1.6 or SPDX 2.3/3.0.
- **Unsupported SBOM format/version**: ensure CycloneDX 1.4–1.7 or SPDX 2.3/3.0.
- **Low quality scores**: include PURLs, versions, and license declarations where possible.

@@ -395,7 +395,7 @@ scanner:
  postgres:
    connectionString: "Host=postgres;Port=5432;Database=scanner;Username=stellaops;Password=stellaops"
  s3:
    endpoint: "http://minio:9000"
    endpoint: "http://rustfs:8080"
    bucket: "stellaops"
    objectLock: "governance" # or 'compliance'
  analyzers:

@@ -11,7 +11,7 @@
| Component | Requirement | Notes |
|-----------|-------------|-------|
| Runtime | .NET 10.0+ | LTS recommended |
| Database | PostgreSQL 15.0+ | For projections and issuer directory |
| Database | PostgreSQL 16+ | For projections and issuer directory |
| Cache | Valkey 8.0+ (optional) | For caching consensus results |
| Memory | 512MB minimum | 2GB recommended for production |
| CPU | 2 cores minimum | 4 cores for high throughput |

@@ -411,8 +411,8 @@ vexlens:

  caching:
    enabled: true
    redis:
      connectionString: redis://redis:6379
    redis: # Valkey (Redis-compatible)
      connectionString: redis://valkey:6379
    consensusTtlMinutes: 5
    issuerTtlMinutes: 60
```

@@ -43,12 +43,10 @@ Configure via `vexlens.yaml` or environment variables with `VEXLENS_` prefix:
```yaml
VexLens:
  Storage:
    Driver: mongo # "memory" for testing, "mongo" for production
    ConnectionString: "mongodb://..."
    Database: stellaops
    ProjectionsCollection: vex_consensus
    HistoryCollection: vex_consensus_history
    MaxHistoryEntries: 100
    Driver: postgres # "memory" for testing, "postgres" for production
    PostgresConnectionString: "Host=postgres;Database=stellaops_platform;Username=stellaops;Password=..."
    ProjectionRetentionDays: 365
    EventRetentionDays: 90
    CommandTimeoutSeconds: 30

  Trust:
@@ -98,8 +96,8 @@ VexLens:
### 3.2 Environment variable overrides

```bash
VEXLENS_STORAGE__DRIVER=mongo
VEXLENS_STORAGE__CONNECTIONSTRING=mongodb://localhost:27017
VEXLENS_STORAGE__DRIVER=postgres
VEXLENS_STORAGE__POSTGRESCONNECTIONSTRING="Host=localhost;Database=stellaops_platform;Username=stellaops;Password=..."
VEXLENS_CONSENSUS__DEFAULTMODE=WeightedVote
VEXLENS_AIRGAP__SEALEDMODE=true
```

@@ -10,7 +10,7 @@ This guide covers PostgreSQL operations for StellaOps, including setup, performa

## 1. Overview

StellaOps uses PostgreSQL (≥16) as the primary control-plane database with per-module schema isolation. MongoDB is retained only for legacy modules not yet converted.
StellaOps uses PostgreSQL (≥16) as the sole control-plane database with per-module schema isolation. MongoDB was fully removed in Sprint 4400.

### 1.1 Schema Topology

@@ -223,7 +223,7 @@
    "examples": [
      {
        "config": {
          "endpoint": "http://minio.stellaops.local:9000",
          "endpoint": "http://rustfs.stellaops.local:8080",
          "region": "us-east-1",
          "usePathStyle": true,
          "bucketPrefix": "stellaops-",

354 docs/testing/e2e-reproducibility.md Normal file
@@ -0,0 +1,354 @@
# End-to-End Reproducibility Testing Guide

> **Sprint:** SPRINT_8200_0001_0004_e2e_reproducibility_test
> **Tasks:** E2E-8200-025, E2E-8200-026
> **Last Updated:** 2025-06-15

## Overview

StellaOps implements comprehensive end-to-end (E2E) reproducibility testing to ensure that identical inputs always produce identical outputs across:

- Sequential pipeline runs
- Parallel pipeline runs
- Different execution environments (Ubuntu, Windows, macOS)
- Different points in time (using frozen timestamps)

This document describes the E2E test structure, how to run tests, and how to troubleshoot reproducibility failures.

## Test Architecture

### Pipeline Stages

The E2E reproducibility tests cover the full security scanning pipeline:

```
Ingest          Normalize       Diff             Decide          Attest
Advisory  ───▶  Merge &   ───▶  SBOM vs    ───▶  Policy    ───▶  DSSE
Feeds           Dedup           Advisories       Verdict         Envelope
                                                                    │
                                                                    ▼
                                                                 Bundle
                                                                 Package
```

### Key Components

| Component | File | Purpose |
|-----------|------|---------|
| Test Project | `StellaOps.Integration.E2E.csproj` | MSBuild project for E2E tests |
| Test Fixture | `E2EReproducibilityTestFixture.cs` | Pipeline composition and execution |
| Tests | `E2EReproducibilityTests.cs` | Reproducibility verification tests |
| Comparer | `ManifestComparer.cs` | Byte-for-byte manifest comparison |
| CI Workflow | `.gitea/workflows/e2e-reproducibility.yml` | Cross-platform CI pipeline |

## Running E2E Tests

### Prerequisites

- .NET 10.0 SDK
- Docker (for the PostgreSQL container)
- At least 4GB RAM available

### Local Execution

```bash
# Run all E2E reproducibility tests
dotnet test tests/integration/StellaOps.Integration.E2E/ \
  --logger "console;verbosity=detailed"

# Run a specific test category
dotnet test tests/integration/StellaOps.Integration.E2E/ \
  --filter "Category=Integration" \
  --logger "console;verbosity=detailed"

# Run with code coverage
dotnet test tests/integration/StellaOps.Integration.E2E/ \
  --collect:"XPlat Code Coverage" \
  --results-directory ./TestResults
```

### CI Execution

E2E tests run automatically on:

- Pull requests affecting `src/**` or `tests/integration/**`
- Pushes to `main` and `develop` branches
- Nightly at 2:00 AM UTC (full cross-platform suite)
- Manual trigger with an optional cross-platform flag

## Test Categories

### 1. Sequential Reproducibility (Tasks 11-14)

Tests that the pipeline produces identical results when run multiple times:

```csharp
[Fact]
public async Task FullPipeline_ProducesIdenticalVerdictHash_AcrossRuns()
{
    // Arrange
    var inputs = await _fixture.SnapshotInputsAsync();

    // Act - Run twice
    var result1 = await _fixture.RunFullPipelineAsync(inputs);
    var result2 = await _fixture.RunFullPipelineAsync(inputs);

    // Assert
    result1.VerdictId.Should().Be(result2.VerdictId);
    result1.BundleManifestHash.Should().Be(result2.BundleManifestHash);
}
```

### 2. Parallel Reproducibility (Task 14)

Tests that concurrent execution produces identical results:

```csharp
[Fact]
public async Task FullPipeline_ParallelExecution_10Concurrent_AllIdentical()
{
    var inputs = await _fixture.SnapshotInputsAsync();
    const int concurrentRuns = 10;

    var tasks = Enumerable.Range(0, concurrentRuns)
        .Select(_ => _fixture.RunFullPipelineAsync(inputs));

    var results = await Task.WhenAll(tasks);
    var comparison = ManifestComparer.CompareMultiple(results.ToList());

    comparison.AllMatch.Should().BeTrue();
}
```

### 3. Cross-Platform Reproducibility (Tasks 15-18)

Tests that identical inputs produce identical outputs on different operating systems:

| Platform | Runner | Status |
|----------|--------|--------|
| Ubuntu | `ubuntu-latest` | Primary (runs on every PR) |
| Windows | `windows-latest` | Nightly / On-demand |
| macOS | `macos-latest` | Nightly / On-demand |

### 4. Golden Baseline Verification (Tasks 19-21)

Tests that current results match a pre-approved baseline:

```json
// bench/determinism/golden-baseline/e2e-hashes.json
{
  "verdict_hash": "sha256:abc123...",
  "manifest_hash": "sha256:def456...",
  "envelope_hash": "sha256:ghi789...",
  "updated_at": "2025-06-15T12:00:00Z",
  "updated_by": "ci",
  "commit": "abc123def456"
}
```
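A baseline check amounts to comparing the recorded hashes against freshly computed ones. This Python sketch is our illustration of the idea, not the CI job's actual code:

```python
import json

def verify_baseline(baseline_json: str, current: dict[str, str]) -> list[str]:
    """Return the names of hashes that drifted from the golden baseline."""
    baseline = json.loads(baseline_json)
    tracked = ("verdict_hash", "manifest_hash", "envelope_hash")
    return [k for k in tracked if baseline[k] != current.get(k)]

baseline = '{"verdict_hash": "sha256:aaa", "manifest_hash": "sha256:bbb", "envelope_hash": "sha256:ccc"}'
drifted = verify_baseline(baseline, {
    "verdict_hash": "sha256:aaa",
    "manifest_hash": "sha256:XXX",   # intentional mismatch
    "envelope_hash": "sha256:ccc",
})
print(drifted)   # ['manifest_hash']
```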

## Troubleshooting Reproducibility Failures

### Common Causes

#### 1. Non-Deterministic Ordering

**Symptom:** Different verdict hashes despite identical inputs.

**Diagnosis:**
```csharp
// Check whether collections are being ordered
var comparison = ManifestComparer.Compare(result1, result2);
var report = ManifestComparer.GenerateDiffReport(comparison);
Console.WriteLine(report);
```

**Solution:** Ensure all collections are sorted before hashing:
```csharp
// Bad - non-deterministic
var findings = results.ToList();

// Good - deterministic
var findings = results.OrderBy(f => f.CveId, StringComparer.Ordinal)
    .ThenBy(f => f.Purl, StringComparer.Ordinal)
    .ToList();
```

#### 2. Timestamp Drift

**Symptom:** Bundle manifests differ in the `createdAt` field.

**Diagnosis:**
```csharp
var jsonComparison = ManifestComparer.CompareJson(
    result1.BundleManifest,
    result2.BundleManifest);
```

**Solution:** Use frozen timestamps in tests:
```csharp
// In the test fixture
public DateTimeOffset FrozenTimestamp { get; } =
    new DateTimeOffset(2025, 6, 15, 12, 0, 0, TimeSpan.Zero);
```

#### 3. Platform-Specific Behavior

**Symptom:** Tests pass on Ubuntu but fail on Windows/macOS.

**Common causes:**
- Line ending differences (`\n` vs `\r\n`)
- Path separator differences (`/` vs `\`)
- Unicode normalization differences
- Floating-point representation differences

**Diagnosis:**
```bash
# Download artifacts from all platforms
# Compare hex dumps
xxd ubuntu-manifest.bin > ubuntu.hex
xxd windows-manifest.bin > windows.hex
diff ubuntu.hex windows.hex
```

**Solution:** Use platform-agnostic serialization:
```csharp
// Use canonical JSON
var json = CanonJson.Serialize(data);

// Normalize line endings
var normalized = content.Replace("\r\n", "\n");
```

#### 4. Key/Signature Differences

**Symptom:** Envelope hashes differ despite identical payloads.

**Diagnosis:**
```csharp
// Compare envelope structure
var envelope1 = JsonSerializer.Deserialize<DsseEnvelope>(result1.EnvelopeBytes);
var envelope2 = JsonSerializer.Deserialize<DsseEnvelope>(result2.EnvelopeBytes);

// Check whether the payloads match
envelope1.Payload.SequenceEqual(envelope2.Payload).Should().BeTrue();
```

**Solution:** Use deterministic key generation:
```csharp
// Generate a key from a fixed seed for reproducibility
private static ECDsa GenerateDeterministicKey(int seed)
{
    var rng = new DeterministicRng(seed);
    var keyBytes = new byte[32];
    rng.GetBytes(keyBytes);
    // ... create key from bytes
}
```

### Debugging Tools

#### ManifestComparer

```csharp
// Full comparison
var comparison = ManifestComparer.Compare(expected, actual);

// Multiple results
var multiComparison = ManifestComparer.CompareMultiple(results);

// Detailed report
var report = ManifestComparer.GenerateDiffReport(comparison);

// Hex dump for byte-level debugging
var hexDump = ManifestComparer.GenerateHexDump(expected.BundleManifest, actual.BundleManifest);
```

#### JSON Comparison

```csharp
var jsonComparison = ManifestComparer.CompareJson(
    expected.BundleManifest,
    actual.BundleManifest);

foreach (var diff in jsonComparison.Differences)
{
    Console.WriteLine($"Path: {diff.Path}");
    Console.WriteLine($"Expected: {diff.Expected}");
    Console.WriteLine($"Actual: {diff.Actual}");
}
```

## Updating the Golden Baseline

When intentional changes affect reproducibility (e.g., new fields, algorithm changes):

### 1. Manual Update

```bash
# Run tests and capture new hashes
dotnet test tests/integration/StellaOps.Integration.E2E/ \
  --results-directory ./TestResults

# Update the baseline
cp ./TestResults/verdict_hash.txt ./bench/determinism/golden-baseline/
# ... update e2e-hashes.json
```

### 2. CI Update (Recommended)

```bash
# Trigger the workflow with the update flag
# Via Gitea UI: Actions → E2E Reproducibility → Run workflow
# Set update_baseline = true
```

### 3. Approval Process

1. Create a PR with the baseline update
2. Explain why the change is intentional
3. Verify all platforms produce consistent results
4. Get approval from the Platform Guild lead
5. Merge after CI passes

## CI Workflow Reference

### Jobs

| Job | Frequency | Trigger | Purpose |
|-----|-----------|---------|---------|
| `reproducibility-ubuntu` | Every PR | PR/Push | Primary reproducibility check |
| `reproducibility-windows` | Nightly | Schedule/Manual | Cross-platform Windows |
| `reproducibility-macos` | Nightly | Schedule/Manual | Cross-platform macOS |
| `cross-platform-compare` | After platform jobs | Schedule/Manual | Compare hashes |
| `golden-baseline` | After Ubuntu | Always | Baseline verification |
| `reproducibility-gate` | After all | Always | Final status check |

### Artifacts

| Artifact | Retention | Contents |
|----------|-----------|----------|
| `e2e-results-{platform}` | 14 days | Test results (.trx), logs |
| `hashes-{platform}` | 14 days | Hash files for comparison |
| `cross-platform-report` | 30 days | Markdown comparison report |

## Related Documentation

- [Reproducibility Architecture](../reproducibility.md)
- [VerdictId Content-Addressing](../modules/policy/architecture.md#verdictid)
- [DSSE Envelope Format](../modules/attestor/architecture.md#dsse)
- [Determinism Testing](./determinism-verification.md)

## Sprint History

- **8200.0001.0004** - Initial E2E reproducibility test implementation
- **8200.0001.0001** - VerdictId content-addressing (dependency)
- **8200.0001.0002** - DSSE round-trip testing (dependency)

202 docs/testing/schema-validation.md Normal file
@@ -0,0 +1,202 @@
# SBOM Schema Validation

This document describes the schema validation system for SBOM (Software Bill of Materials) fixtures in StellaOps.

## Overview

StellaOps validates all SBOM fixtures against the official JSON schemas to detect schema drift before runtime. This ensures:

- CycloneDX 1.6 fixtures comply with the official schema
- SPDX 3.0.1 fixtures meet specification requirements
- OpenVEX fixtures follow the 0.2.0 specification
- Invalid fixtures are detected early in the CI pipeline

## Supported Formats

| Format | Version | Schema Location | Validator |
|--------|---------|-----------------|-----------|
| CycloneDX | 1.6 | `docs/schemas/cyclonedx-bom-1.6.schema.json` | sbom-utility |
| SPDX | 3.0.1 | `docs/schemas/spdx-jsonld-3.0.1.schema.json` | pyspdxtools / check-jsonschema |
| OpenVEX | 0.2.0 | `docs/schemas/openvex-0.2.0.schema.json` | ajv-cli |

## CI Workflows

### Schema Validation Workflow

**File:** `.gitea/workflows/schema-validation.yml`

Runs on:
- Pull requests touching `bench/golden-corpus/**`, `src/Scanner/**`, `docs/schemas/**`, or `scripts/validate-*.sh`
- Pushes to the `main` branch

Jobs:
1. **validate-cyclonedx** - Validates all CycloneDX 1.6 fixtures
2. **validate-spdx** - Validates all SPDX 3.0.1 fixtures
3. **validate-vex** - Validates all OpenVEX 0.2.0 fixtures
4. **validate-negative** - Verifies that invalid fixtures are correctly rejected
5. **summary** - Aggregates results

### Determinism Gate Integration

**File:** `.gitea/workflows/determinism-gate.yml`

The determinism gate includes schema validation as a prerequisite step. If schema validation fails, determinism checks are blocked.

To skip schema validation (e.g., during debugging):
```bash
# Via workflow_dispatch
skip_schema_validation: true
```

## Fixture Directories

Validation scans these directories for SBOM fixtures:

| Directory | Purpose |
|-----------|---------|
| `bench/golden-corpus/` | Golden reference fixtures for reproducibility testing |
| `tests/fixtures/` | Test fixtures for unit and integration tests |
| `seed-data/` | Initial seed data for development environments |
| `tests/fixtures/invalid/` | **Excluded** - Contains intentionally invalid fixtures for negative testing |

## Local Validation

### Using the Validation Scripts

```bash
# Validate a single CycloneDX file
./scripts/validate-sbom.sh path/to/sbom.json

# Validate all CycloneDX files in a directory
./scripts/validate-sbom.sh --all path/to/directory

# Validate an SPDX file
./scripts/validate-spdx.sh path/to/sbom.spdx.json

# Validate an OpenVEX file
./scripts/validate-vex.sh path/to/vex.openvex.json
```

### Using sbom-utility Directly

```bash
# Install sbom-utility
curl -sSfL "https://github.com/CycloneDX/sbom-utility/releases/download/v0.16.0/sbom-utility-v0.16.0-linux-amd64.tar.gz" | tar xz
sudo mv sbom-utility /usr/local/bin/

# Validate
sbom-utility validate --input-file sbom.json --schema docs/schemas/cyclonedx-bom-1.6.schema.json
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Validation Errors
|
||||
|
||||
#### 1. Invalid specVersion
|
||||
|
||||
**Error:** `enum: must be equal to one of the allowed values`
|
||||
|
||||
**Cause:** The `specVersion` field contains an invalid or unsupported version.
|
||||
|
||||
**Solution:**
|
||||
```json
|
||||
// Invalid
|
||||
"specVersion": "2.0"
|
||||
|
||||
// Valid
|
||||
"specVersion": "1.6"
|
||||
```
|
||||
|
||||
#### 2. Missing Required Fields
|
||||
|
||||
**Error:** `required: must have required property 'name'`
|
||||
|
||||
**Cause:** A component is missing required fields.
|
||||
|
||||
**Solution:** Ensure all components have required fields:
|
||||
```json
|
||||
{
|
||||
"type": "library",
|
||||
"name": "example-package",
|
||||
"version": "1.0.0"
|
||||
}
|
||||
```
|
||||
|
||||
#### 3. Invalid Component Type
|
||||
|
||||
**Error:** `enum: type must be equal to one of the allowed values`
|
||||
|
||||
**Cause:** The component type is not a valid CycloneDX type.
|
||||
|
||||
**Solution:** Use valid types: `application`, `framework`, `library`, `container`, `operating-system`, `device`, `firmware`, `file`, `data`
|
||||
|
||||
#### 4. Invalid PURL Format
|
||||
|
||||
**Error:** `format: must match format "purl"`
|
||||
|
||||
**Cause:** The package URL (purl) is malformed.
|
||||
|
||||
**Solution:** Use correct purl format:
|
||||
```json
|
||||
// Invalid
|
||||
"purl": "npm:example@1.0.0"
|
||||
|
||||
// Valid
|
||||
"purl": "pkg:npm/example@1.0.0"
|
||||
```
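
As a quick local guard before running full schema validation, a shell helper can catch the most common purl malformation, a missing `pkg:` scheme. This is a hypothetical pre-check, not part of the repository's scripts, and not a full purl parser:

```shell
#!/bin/sh
# Hypothetical pre-check: flag purls that lack the "pkg:" scheme or an
# obvious type/name@version shape. Real validation is done by the schema.
check_purl() {
  case "$1" in
    pkg:*/*@*) echo "ok: $1" ;;
    *)         echo "bad: $1" ;;
  esac
}

good="$(check_purl 'pkg:npm/example@1.0.0')"
bad="$(check_purl 'npm:example@1.0.0')"
echo "$good"
echo "$bad"
```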

### CI Failure Recovery

1. **Identify the failing fixture:** Check the CI logs for the specific file
2. **Inspect the fixture:** `cat path/to/failing-fixture.json`
3. **Run local validation:** `./scripts/validate-sbom.sh path/to/failing-fixture.json`
4. **Fix the schema issues:** Use the error messages to guide corrections
5. **Verify the fix:** Re-run local validation
6. **Push and confirm CI passes**

### Negative Test Failures

If negative tests fail with "UNEXPECTED PASS", an invalid fixture in `tests/fixtures/invalid/` passed validation when it should have been rejected:

1. Review the fixture to ensure it contains actual schema violations
2. Update the fixture to include more obvious violations
3. Document the expected error in `tests/fixtures/invalid/README.md`

## Adding New Fixtures

### Valid Fixtures

1. Create the fixture in an appropriate directory (`bench/golden-corpus/`, `tests/fixtures/`)
2. Ensure it contains the format marker:
   - CycloneDX: `"bomFormat": "CycloneDX"`
   - SPDX: `"spdxVersion"` or an SPDX `"@context"`
   - OpenVEX: an openvex `"@context"`
3. Run local validation before committing
4. CI will validate automatically on PR
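
The format markers in step 2 are what routes a fixture to the right schema. A rough grep-based classifier sketches the idea; this is illustrative only, and the `validate-*.sh` scripts' actual detection logic may be stricter:

```shell
#!/bin/sh
# Sketch: classify a fixture by the format markers listed in step 2.
# Illustrative only; not the repository's actual detection logic.
detect_format() {
  if grep -q '"bomFormat"[[:space:]]*:[[:space:]]*"CycloneDX"' "$1"; then
    echo cyclonedx
  elif grep -q '"spdxVersion"' "$1"; then
    echo spdx
  elif grep -q 'openvex' "$1"; then
    echo openvex
  else
    echo unknown
  fi
}

f="$(mktemp)"
printf '{ "bomFormat": "CycloneDX", "specVersion": "1.6" }\n' > "$f"
fmt="$(detect_format "$f")"
echo "$fmt"
```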

### Invalid Fixtures (Negative Testing)

1. Create the fixture in `tests/fixtures/invalid/`
2. Add a `$comment` field explaining the defect
3. Update `tests/fixtures/invalid/README.md` with the expected error
4. Ensure the fixture has the correct format marker
5. CI will verify that it fails validation

## Schema Updates

When updating schema versions:

1. Download the new schema to `docs/schemas/`
2. Update `SBOM_UTILITY_VERSION` in the workflows if needed
3. Run full validation to check for new violations
4. Update this documentation with the new version
5. Update `docs/reproducibility.md` with the schema version changes

## References

- [CycloneDX Specification](https://cyclonedx.org/specification/overview/)
- [CycloneDX sbom-utility](https://github.com/CycloneDX/sbom-utility)
- [SPDX Specification](https://spdx.github.io/spdx-spec/v3.0.1/)
- [SPDX Python Tools](https://github.com/spdx/tools-python)
- [OpenVEX Specification](https://github.com/openvex/spec)