docs consolidation
@@ -28,7 +28,7 @@
* **Rekor v2** — tile‑backed transparency log endpoint(s).
* **MinIO (S3)** — optional archive store for DSSE envelopes & verification bundles.
* **PostgreSQL** — local cache of `{uuid, index, proof, artifactSha256, bundleSha256}`; job state; audit.
* **Redis** — dedupe/idempotency keys and short‑lived rate‑limit buckets.
* **Valkey** — dedupe/idempotency keys and short‑lived rate‑limit buckets.
* **Licensing Service (optional)** — “endorse” call for cross‑log publishing when customer opts‑in.

Trust boundary: **Only the Signer** is allowed to call submission endpoints; enforced by **mTLS peer cert allowlist** + `aud=attestor` OpTok.
@@ -619,8 +619,8 @@ attestor:
    bucket: "stellaops"
    prefix: "attest/"
    objectLock: "governance"
  redis:
    url: "redis://redis:6379/2"
  valkey:
    url: "valkey://valkey:6379/2"
  quotas:
    perCaller:
      qps: 50
238
docs/modules/attestor/graph-root-attestation.md
Normal file
@@ -0,0 +1,238 @@
# Graph Root Attestation

## Overview

Graph root attestation is a mechanism for creating cryptographically signed, content-addressed proofs of graph state. It enables offline verification that replayed graphs match the original attested state by computing a Merkle root from sorted node/edge IDs and input digests, then wrapping it in a DSSE envelope with an in-toto statement.

## Purpose

Graph root attestations solve the problem of proving graph authenticity without reconstructing the entire proof chain. They enable:

- **Offline Verification**: Download an attestation, recompute the root from stored nodes/edges, compare
- **Audit Snapshots**: Point-in-time proof of graph state for compliance
- **Evidence Linking**: Reference attested roots (not transient IDs) in evidence chains
- **Transparency**: Optional Rekor publication for public auditability
## Architecture
### Components

```
┌─────────────────────────────────────────────────────────────────┐
│                       GraphRootAttestor                         │
├─────────────────────────────────────────────────────────────────┤
│ AttestAsync(request) → GraphRootAttestationResult               │
│ VerifyAsync(envelope, nodes, edges) → VerificationResult        │
└─────────────────┬───────────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────────┐
│                      IMerkleRootComputer                        │
├─────────────────────────────────────────────────────────────────┤
│ ComputeRoot(leaves) → byte[]                                    │
│ Algorithm → "sha256"                                            │
└─────────────────┬───────────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────────┐
│                    EnvelopeSignatureService                     │
├─────────────────────────────────────────────────────────────────┤
│ Sign(payload, key) → EnvelopeSignature                          │
└─────────────────────────────────────────────────────────────────┘
```
### Data Flow

1. **Request** → Sorted node/edge IDs + input digests
2. **Merkle Tree** → Compute SHA-256 root from leaves
3. **In-Toto Statement** → Build attestation with predicate
4. **Canonicalize** → JCS (RFC 8785) with version marker
5. **Sign** → DSSE envelope with Ed25519/ECDSA
6. **Store/Publish** → CAS storage + optional Rekor
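Step 5 signs the statement as a DSSE envelope. As a rough illustration of what actually gets signed, DSSE's pre-authentication encoding (PAE) binds the payload type to the payload bytes before signing. This is a language-agnostic sketch of the standard DSSE v1 PAE; the service's real signing path is the C# `EnvelopeSignatureService`:

```python
def dsse_pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE v1 pre-authentication encoding: the exact byte string that is signed."""
    t = payload_type.encode("utf-8")
    return b" ".join([
        b"DSSEv1",
        str(len(t)).encode(), t,
        str(len(payload)).encode(), payload,
    ])

# Signatures cover the PAE string, not the raw payload, so the payload type
# cannot be swapped without invalidating the signature.
pae = dsse_pae("application/vnd.in-toto+json", b"{}")
print(pae)  # b'DSSEv1 28 application/vnd.in-toto+json 2 {}'
```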
## Graph Types
The `GraphType` enum identifies what kind of graph is being attested:

| GraphType | Description |
|-----------|-------------|
| `Unknown` | Unspecified graph type |
| `CallGraph` | Function/method call relationships |
| `DependencyGraph` | Package/library dependencies (SBOM) |
| `SbomGraph` | SBOM component graph |
| `EvidenceGraph` | Linked evidence records |
| `PolicyGraph` | Policy decision trees |
| `ProofSpine` | Proof chain spine segments |
| `ReachabilityGraph` | Code reachability analysis |
| `VexLinkageGraph` | VEX statement linkages |
| `Custom` | Application-specific graph |
## Models
### GraphRootAttestationRequest

Input to the attestation service:

```csharp
public sealed record GraphRootAttestationRequest
{
    public required GraphType GraphType { get; init; }
    public required IReadOnlyList<string> NodeIds { get; init; }
    public required IReadOnlyList<string> EdgeIds { get; init; }
    public required string PolicyDigest { get; init; }
    public required string FeedsDigest { get; init; }
    public required string ToolchainDigest { get; init; }
    public required string ParamsDigest { get; init; }
    public required string ArtifactDigest { get; init; }
    public IReadOnlyList<string> EvidenceIds { get; init; } = [];
    public bool PublishToRekor { get; init; } = false;
    public string? SigningKeyId { get; init; }
}
```
### GraphRootAttestation (In-Toto Statement)

The attestation follows the in-toto Statement/v1 format:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "sha256:abc123...",
      "digest": { "sha256": "abc123..." }
    },
    {
      "name": "sha256:artifact...",
      "digest": { "sha256": "artifact..." }
    }
  ],
  "predicateType": "https://stella-ops.org/attestation/graph-root/v1",
  "predicate": {
    "graphType": "DependencyGraph",
    "rootHash": "sha256:abc123...",
    "rootAlgorithm": "sha256",
    "nodeCount": 1247,
    "edgeCount": 3891,
    "nodeIds": ["node-a", "node-b", ...],
    "edgeIds": ["edge-1", "edge-2", ...],
    "inputs": {
      "policyDigest": "sha256:policy...",
      "feedsDigest": "sha256:feeds...",
      "toolchainDigest": "sha256:tools...",
      "paramsDigest": "sha256:params..."
    },
    "evidenceIds": ["ev-1", "ev-2"],
    "canonVersion": "stella:canon:v1",
    "computedAt": "2025-12-26T10:30:00Z",
    "computedBy": "stellaops/attestor/graph-root",
    "computedByVersion": "1.0.0"
  }
}
```
## Merkle Root Computation

The root is computed from leaves in this deterministic order:

1. **Sorted node IDs** (lexicographic, ordinal)
2. **Sorted edge IDs** (lexicographic, ordinal)
3. **Policy digest**
4. **Feeds digest**
5. **Toolchain digest**
6. **Params digest**

Each leaf is SHA-256 hashed, then combined pairwise until a single root remains.

```
              ROOT
             /    \
        H(L12)    H(R12)
        /    \     /    \
    H(n1)  H(n2) H(e1) H(policy)
```
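The leaf ordering and pairwise combination above can be sketched as follows. This is an illustrative model, not the service's C# `IMerkleRootComputer`; details such as leaf encoding and domain separation may differ in the real implementation:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(node_ids, edge_ids, input_digests):
    # Steps 1-2: sort node and edge IDs with ordinal (code-point) comparison;
    # steps 3-6: append the input digests in their fixed order.
    leaves = sorted(node_ids) + sorted(edge_ids) + list(input_digests)
    # Hash each leaf, then combine pairwise until one root remains,
    # duplicating the last hash when a level has an odd count.
    level = [sha256(leaf.encode("utf-8")) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return "sha256:" + level[0].hex()

root = merkle_root(
    ["node-b", "node-a"],
    ["edge-1"],
    ["sha256:policy...", "sha256:feeds...", "sha256:tools...", "sha256:params..."],
)
print(root)
```

Because the node and edge lists are sorted before hashing, the input order in which the graph was traversed has no effect on the root.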
## Usage
### Creating an Attestation

```csharp
var services = new ServiceCollection();
services.AddGraphRootAttestation(sp => keyId => GetSigningKey(keyId));
var provider = services.BuildServiceProvider();

var attestor = provider.GetRequiredService<IGraphRootAttestor>();

var request = new GraphRootAttestationRequest
{
    GraphType = GraphType.DependencyGraph,
    NodeIds = graph.Nodes.Select(n => n.Id).ToList(),
    EdgeIds = graph.Edges.Select(e => e.Id).ToList(),
    PolicyDigest = "sha256:abc...",
    FeedsDigest = "sha256:def...",
    ToolchainDigest = "sha256:ghi...",
    ParamsDigest = "sha256:jkl...",
    ArtifactDigest = imageDigest
};

var result = await attestor.AttestAsync(request);
Console.WriteLine($"Root: {result.RootHash}");
Console.WriteLine($"Nodes: {result.NodeCount}");
Console.WriteLine($"Edges: {result.EdgeCount}");
```
### Verifying an Attestation

```csharp
var envelope = LoadEnvelope("attestation.dsse.json");
var nodes = LoadNodes("nodes.ndjson");
var edges = LoadEdges("edges.ndjson");

var result = await attestor.VerifyAsync(envelope, nodes, edges);

if (result.IsValid)
{
    Console.WriteLine($"✓ Verified: {result.ComputedRoot}");
    Console.WriteLine($"  Nodes: {result.NodeCount}");
    Console.WriteLine($"  Edges: {result.EdgeCount}");
}
else
{
    Console.WriteLine($"✗ Failed: {result.FailureReason}");
    Console.WriteLine($"  Expected: {result.ExpectedRoot}");
    Console.WriteLine($"  Computed: {result.ComputedRoot}");
}
```
## Offline Verification Workflow

1. **Obtain attestation**: Download DSSE envelope from storage or transparency log
2. **Verify signature**: Check envelope signature against trusted public keys
3. **Extract predicate**: Parse `GraphRootPredicate` from the payload
4. **Fetch graph data**: Download nodes and edges by ID from CAS
5. **Recompute root**: Apply Merkle tree algorithm to node/edge IDs + input digests
6. **Compare**: Computed root must match `predicate.RootHash`
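Steps 5 and 6 above can be sketched end-to-end. Signature verification and CAS fetches are stubbed out here, and the predicate field names follow the JSON example earlier in this document; the real verifier is the C# `VerifyAsync`:

```python
import hashlib

def compute_root(leaves):
    """Simplified stand-in for the Merkle computation described earlier."""
    level = [hashlib.sha256(leaf.encode("utf-8")).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return "sha256:" + level[0].hex()

def verify_offline(predicate: dict, node_ids, edge_ids) -> bool:
    # Step 5: recompute the root from the fetched graph data plus the
    # input digests recorded in the predicate; step 6: compare.
    inputs = predicate["inputs"]
    leaves = (sorted(node_ids) + sorted(edge_ids) +
              [inputs["policyDigest"], inputs["feedsDigest"],
               inputs["toolchainDigest"], inputs["paramsDigest"]])
    return compute_root(leaves) == predicate["rootHash"]
```

Any tampering with a node, an edge, or one of the input digests changes the recomputed root and fails the comparison.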
## Determinism Guarantees
Graph root attestations are fully deterministic:

- **Sorting**: All IDs sorted lexicographically (ordinal comparison)
- **Canonicalization**: RFC 8785 JCS with `stella:canon:v1` version marker
- **Hashing**: SHA-256 only
- **Timestamps**: UTC ISO-8601 (not included in root computation)

Same inputs always produce the same root hash, enabling replay verification.

## Security Considerations

- **Signature verification**: Always verify DSSE envelope signatures before trusting attestations
- **Key management**: Use short-lived signing keys; rotate regularly
- **Transparency**: Publish to Rekor for tamper-evident audit trail
- **Input validation**: Validate all digests are properly formatted before attestation

## Related Documentation

- [DSSE Envelopes](./dsse-envelopes.md) - Envelope format and signing
- [Proof Chain](./proof-chain.md) - Overall proof chain architecture
- [Canonical JSON](../../modules/platform/canonical-json.md) - Canonicalization scheme
@@ -1,6 +0,0 @@
# Keys and Issuers (DOCS-ATTEST-74-001)

- Maintain issuer registry (KMS IDs, key IDs, allowed predicates).
- Rotate keys with overlap; publish fingerprints and validity in registry file.
- Offline operation: bundle registry with bootstrap; no remote fetch.
- Each attestation must include issuer ID and key ID; verify against registry.
@@ -1,9 +0,0 @@
# Attestor Overview (DOCS-ATTEST-73-001)

High-level description of the Attestor service and its contracts.

- Purpose: verify DSSE/attestations, supply transparency info, and expose attestation APIs without deriving verdicts.
- Components: WebService, Worker, KMS integration, Transparency log (optional), Evidence links.
- Rule banner: aggregation-only; no policy decisions.
- Tenancy: all attestations scoped per tenant; cross-tenant reads forbidden.
- Offline posture: allow offline verification using bundled trust roots and Rekor checkpoints when available.
@@ -1,12 +0,0 @@
# Attestor Policies (DOCS-ATTEST-73-003)

Guidance on verification policies applied by Attestor.

- Scope: DSSE envelope validation, subject hash matching, optional transparency checks.
- Policy fields:
  - allowed issuers / key IDs
  - required predicates (e.g., `stella.ops/vexObservation@v1`)
  - transparency requirements (allow/require/skip)
  - freshness window for attestations
- Determinism: policies must be pure; no external lookups in sealed mode.
- Versioning: include `policyVersion` and hash; store alongside attestation records.
@@ -90,6 +90,68 @@ This specification defines the implementation of a cryptographically verifiable
5. **Numbers in shortest form**
6. **Deterministic array ordering** (by semantic key: bom-ref, purl)

### Canonicalization Versioning

Content-addressed identifiers embed a canonicalization version marker to prevent hash collisions when the canonicalization algorithm evolves. This ensures that:

- **Forward compatibility**: Future algorithm changes won't invalidate existing hashes.
- **Verifier clarity**: Verifiers know exactly which algorithm to use.
- **Auditability**: Hash provenance is cryptographically bound to algorithm version.

**Version Marker Format:**

```json
{
  "_canonVersion": "stella:canon:v1",
  "sbomEntryId": "...",
  "vulnerabilityId": "..."
}
```

| Field | Description |
|-------|-------------|
| `_canonVersion` | Underscore prefix sorts before lowercase ASCII keys, keeping the marker first after sorting |
| Value format | `stella:canon:v<N>` where N is the version number |
| Current version | `stella:canon:v1` (RFC 8785 JSON canonicalization) |
**V1 Algorithm Specification:**

| Property | Behavior |
|----------|----------|
| Standard | RFC 8785 (JSON Canonicalization Scheme) |
| Key sorting | Ordinal string comparison |
| Whitespace | None (compact JSON) |
| Encoding | UTF-8 without BOM |
| Numbers | IEEE 754, shortest representation |
| Escaping | Minimal (only required characters) |
**Version Detection:**

```csharp
// Detect if canonical JSON includes version marker
public static bool IsVersioned(ReadOnlySpan<byte> canonicalJson)
{
    return canonicalJson.Length > 20 &&
           canonicalJson.StartsWith("{\"_canonVersion\":"u8);
}

// Extract version from versioned canonical JSON, or null if not versioned.
// Uses System.Text.Json's Utf8JsonReader; the marker is always the first property.
public static string? ExtractVersion(ReadOnlySpan<byte> canonicalJson)
{
    if (!IsVersioned(canonicalJson))
    {
        return null;
    }

    var reader = new Utf8JsonReader(canonicalJson);
    reader.Read(); // StartObject
    reader.Read(); // PropertyName "_canonVersion"
    reader.Read(); // String value
    return reader.GetString();
}
```
**Migration Strategy:**

| Phase | Behavior | Timeline |
|-------|----------|----------|
| Phase 1 (Current) | Generate v1 hashes; accept both legacy and v1 for verification | Now |
| Phase 2 | Log deprecation warnings for legacy hashes | +6 months |
| Phase 3 | Reject legacy hashes; require v1 | +12 months |

See also: [Canonicalization Migration Guide](../../operations/canon-version-migration.md)

## DSSE Predicate Types

### 1. Evidence Statement (`evidence.stella/v1`)
@@ -194,6 +256,101 @@ This specification defines the implementation of a cryptographically verifiable
**Signer**: Generator key

### 7. Graph Root Statement (`graph-root.stella/v1`)

The Graph Root attestation provides tamper-evident commitment to graph analysis results (dependency graphs, call graphs, reachability graphs) by computing a Merkle root over canonicalized node and edge identifiers.

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "graph-root://<graphType>/<merkleRoot>",
      "digest": {
        "sha256": "<merkle-root-hex>"
      }
    }
  ],
  "predicateType": "https://stella-ops.org/predicates/graph-root/v1",
  "predicate": {
    "graphType": "DependencyGraph|CallGraph|ReachabilityGraph|...",
    "merkleRoot": "sha256:<hex>",
    "nodeCount": 1234,
    "edgeCount": 5678,
    "canonVersion": "stella:canon:v1",
    "inputs": {
      "sbomDigest": "sha256:<hex>",
      "analyzerDigest": "sha256:<hex>",
      "configDigest": "sha256:<hex>"
    },
    "createdAt": "2025-01-12T10:30:00Z"
  }
}
```
**Signer**: Graph Analyzer key

#### Supported Graph Types

| Graph Type | Use Case |
|------------|----------|
| `DependencyGraph` | Package/library dependency analysis |
| `CallGraph` | Function-level call relationships |
| `ReachabilityGraph` | Vulnerability reachability analysis |
| `DataFlowGraph` | Data flow and taint tracking |
| `ControlFlowGraph` | Code execution paths |
| `InheritanceGraph` | OOP class hierarchies |
| `ModuleGraph` | Module/namespace dependencies |
| `BuildGraph` | Build system dependencies |
| `ContainerLayerGraph` | Container layer relationships |
#### Merkle Root Computation
The Merkle root is computed deterministically:

1. **Canonicalize Node IDs**: Sort all node identifiers lexicographically
2. **Canonicalize Edge IDs**: Sort all edge identifiers (format: `{source}->{target}`)
3. **Combine**: Concatenate sorted nodes + sorted edges
4. **Binary Tree**: Build SHA-256 Merkle tree with odd-node duplication
5. **Root**: Extract 32-byte root as `sha256:<hex>`

```
Merkle Tree Structure:
        [root]
        /    \
    [h01]    [h23]
    /  \      /  \
 [n0] [n1] [n2] [n3]
```
#### Integration with Proof Spine

Graph root attestations can be referenced in proof spines:

```json
{
  "predicateType": "proofspine.stella/v1",
  "predicate": {
    "sbomEntryId": "<SBOMEntryID>",
    "evidenceIds": ["<ID1>", "<ID2>"],
    "reasoningId": "<ID>",
    "vexVerdictId": "<ID>",
    "graphRootIds": ["<GraphRootID1>"],
    "policyVersion": "v2.3.1",
    "proofBundleId": "<ProofBundleID>"
  }
}
```

#### Verification Steps

1. Parse DSSE envelope and verify signature against allowed keys
2. Extract predicate and Merkle root
3. Re-canonicalize provided node/edge IDs using `stella:canon:v1`
4. Recompute Merkle root from canonicalized inputs
5. Compare computed root to claimed root
6. If Rekor entry exists, verify transparency log inclusion
## Database Schema
### Tables

@@ -205,6 +362,7 @@ This specification defines the implementation of a cryptographically verifiable

| `proofchain.spines` | Proof spine aggregations linking evidence to verdicts |
| `proofchain.trust_anchors` | Trust anchor configurations for verification |
| `proofchain.rekor_entries` | Rekor transparency log entries |
| `proofchain.graph_roots` | Graph root attestations with Merkle roots |
| `proofchain.key_history` | Key lifecycle history for rotation |
| `proofchain.key_audit_log` | Audit log for key operations |
@@ -282,6 +440,7 @@ The 13-step verification algorithm:
- [Database Schema Sprint](../../implplan/SPRINT_0501_0006_0001_proof_chain_database_schema.md)
- [CLI Integration Sprint](../../implplan/SPRINT_0501_0007_0001_proof_chain_cli_integration.md)
- [Key Rotation Sprint](../../implplan/SPRINT_0501_0008_0001_proof_chain_key_rotation.md)
- [Graph Root Attestation](./graph-root-attestation.md)
- [Attestor Architecture](./architecture.md)
- [Signer Architecture](../signer/architecture.md)
- [Database Specification](../../db/SPECIFICATION.md)
@@ -1,49 +0,0 @@
# Attestor TTL Validation Runbook

> **DEPRECATED:** This runbook tests MongoDB TTL indexes, which are no longer used. Attestor now uses PostgreSQL for persistence (Sprint 4400). See `docs/db/SPECIFICATION.md` for current database schema.

> **Purpose:** confirm MongoDB TTL indexes and Redis expirations for the attestation dedupe store behave as expected on a production-like stack.

## Prerequisites

- Docker Desktop or compatible daemon with the Compose plugin enabled.
- Local ports `27017` and `6379` free.
- `dotnet` SDK 10.0 preview (same as repo toolchain).
- Network access to pull `mongo:7` and `redis:7` images.

## Quickstart

1. From the repo root, export any required proxy settings, then run:

   ```bash
   scripts/run-attestor-ttl-validation.sh
   ```

   The helper script:
   - Spins up `mongo:7` and `redis:7` containers.
   - Sets `ATTESTOR_LIVE_MONGO_URI` / `ATTESTOR_LIVE_REDIS_URI`.
   - Executes the live TTL test suite (`Category=LiveTTL`) in `StellaOps.Attestor.Tests`.
   - Tears the stack down automatically.

2. Capture the test output (`ttl-validation-<timestamp>.log`) and attach it to the sprint evidence folder (`docs/modules/attestor/evidence/`).

## Result handling

- **Success:** Tests complete in ~3–4 minutes with `Total tests: 2, Passed: 2`. Store the log and note the run in `docs/implplan/archived/SPRINT_0100_0001_0001_identity_signing.md` under ATTESTOR-72-003.
- **Failure:** Preserve:
  - `docker compose logs` for both services.
  - `mongosh` output of `db.dedupe.getIndexes()` and sample documents.
  - `redis-cli --raw ttl attestor:ttl:live:bundle:<id>`.

  File an incident in the Attestor Guild channel and link the captured artifacts.

## Manual verification (optional)

If the helper script cannot be used:

1. Start MongoDB and Redis manually with equivalent configuration.
2. Set `ATTESTOR_LIVE_MONGO_URI` and `ATTESTOR_LIVE_REDIS_URI`.
3. Run `dotnet test src/Attestor/StellaOps.Attestor.sln --no-build --filter "Category=LiveTTL"`.
4. Follow the evidence handling steps above.

## Ownership

- Primary: Attestor Service Guild.
- Partner: QA Guild (observes TTL metrics, confirms evidence archiving).

## 2025-11-03 validation summary

- **Stack:** `mongod` 7.0.5 (tarball) + `mongosh` 2.0.2, `redis-server` 7.2.4 (source build) running on localhost without Docker.
- **Mongo results:** `dedupe` TTL index (`ttlAt`, `expireAfterSeconds: 0`) confirmed; a document inserted with a 20 s TTL expired automatically after ~80 s (expected, since the TTL monitor sweeps roughly once a minute). Evidence: `docs/modules/attestor/evidence/2025-11-03-mongo-ttl-validation.txt`.
- **Redis results:** Key `attestor:ttl:live:bundle:validation` set with a 45 s TTL reached `TTL=-2` after ~47 s, confirming expiry propagation. Evidence: `docs/modules/attestor/evidence/2025-11-03-redis-ttl-validation.txt`.
- **Notes:** Local binaries were built and run to accommodate a sandbox without Docker; services were shut down after validation.
@@ -1,9 +0,0 @@
# Attestor Workflows (DOCS-ATTEST-73-004)

Sequence of ingest, verify, and bulk operations.

1. **Ingest**: receive DSSE, validate schema, hash subjects, store envelope + metadata.
2. **Verify**: run policy checks (issuer, predicate, transparency optional), compute verification record.
3. **Persist**: store verification result with `verificationId`, `attestationId`, `policyVersion`, timestamps.
4. **Bulk ops**: batch verify envelopes; export results to timeline/audit logs.
5. **Audit**: expose read API for verification records; include determinism hash of inputs.
@@ -228,7 +228,7 @@ Services **must** verify `aud` and **sender constraint** (DPoP/mTLS) per their p
## 5) Storage & state

* **Configuration DB** (PostgreSQL/MySQL): clients, audiences, role→scope maps, tenant/installation registry, device code grants, persistent consents (if any).
* **Cache** (Redis):
* **Cache** (Valkey):

  * DPoP **jti** replay cache (short TTL)
  * **Nonce** store (per resource server, if they demand nonce)
@@ -375,8 +375,8 @@ authority:
      enabled: true
      ttl: "00:10:00"
      maxIssuancePerMinute: 120
      store: "redis"
      redisConnectionString: "redis://authority-redis:6379?ssl=false"
      store: "valkey" # uses redis:// protocol
      valkeyConnectionString: "redis://authority-valkey:6379?ssl=false"
      requiredAudiences:
        - "signer"
        - "attestor"
@@ -428,7 +428,7 @@ authority:
* **RBAC**: scope enforcement per audience; over‑privileged client denied.
* **Rotation**: JWKS rotation while load‑testing; zero‑downtime verification.
* **HA**: kill one Authority instance; verify issuance continues; JWKS served by peers.
* **Performance**: 1k token issuance/sec on 2 cores with Redis enabled for jti caching.
* **Performance**: 1k token issuance/sec on 2 cores with Valkey enabled for jti caching.

---
@@ -448,9 +448,9 @@ authority:
## 17) Deployment & HA

* **Stateless** microservice, containerized; run ≥ 2 replicas behind LB.
* **DB**: HA Postgres (or MySQL) for clients/roles; **Redis** for device codes, DPoP nonces/jtis.
* **DB**: HA Postgres (or MySQL) for clients/roles; **Valkey** for device codes, DPoP nonces/jtis.
* **Secrets**: mount client JWKs via K8s Secrets/HashiCorp Vault; signing keys via KMS.
* **Backups**: DB daily; Redis not critical (ephemeral).
* **Backups**: DB daily; Valkey not critical (ephemeral).
* **Disaster recovery**: export/import of client registry; JWKS rehydrate from KMS.
* **Compliance**: TLS audit; penetration testing for OIDC flows.

@@ -459,7 +459,7 @@ authority:

## 18) Implementation notes

* Reference stack: **.NET 10** + **OpenIddict 6** (or IdentityServer if licensed) with custom DPoP validator and mTLS binding middleware.
* Keep the DPoP/JTI cache pluggable; allow Redis/Memcached.
* Keep the DPoP/JTI cache pluggable; allow Valkey/Memcached.
* Provide **client SDKs** for C# and Go: DPoP key mgmt, proof generation, nonce handling, token refresh helper.

---
329
docs/modules/cli/guides/commands/evidence-bundle-format.md
Normal file
@@ -0,0 +1,329 @@
# Evidence Bundle Format Specification

Version: 1.0
Status: Stable
Sprint: SPRINT_9200_0001_0003

## Overview

Evidence bundles are downloadable archives containing complete evidence packages for findings or scan runs. They enable:

- **Offline verification**: All evidence is self-contained
- **Deterministic replay**: Includes scripts and hashes for verdict reproduction
- **Audit compliance**: Provides cryptographic verification of all evidence
- **Human readability**: Includes README and manifest for easy inspection

## Archive Formats

Evidence bundles are available in two formats:

| Format | Extension | MIME Type | Use Case |
|--------|-----------|-----------|----------|
| ZIP | `.zip` | `application/zip` | General use, Windows compatible |
| TAR.GZ | `.tar.gz` | `application/gzip` | Unix systems, better compression |
## Endpoints
### Single Finding Bundle

```
GET /v1/triage/findings/{findingId}/evidence/export?format=zip
```

Response headers:
- `Content-Type: application/zip`
- `Content-Disposition: attachment; filename="evidence-{findingId}.zip"`
- `X-Archive-Digest: sha256:{digest}`

### Scan Run Bundle

```
GET /v1/triage/scans/{scanId}/evidence/export?format=zip
```

Response headers:
- `Content-Type: application/zip`
- `Content-Disposition: attachment; filename="evidence-run-{scanId}.zip"`
- `X-Archive-Digest: sha256:{digest}`
## Finding Bundle Structure
```
evidence-{findingId}/
├── manifest.json          # Archive manifest with file hashes
├── README.md              # Human-readable documentation
├── sbom.cdx.json          # CycloneDX SBOM slice
├── reachability.json      # Reachability analysis data
├── vex/
│   ├── vendor.json        # Vendor VEX statements
│   ├── nvd.json           # NVD VEX data
│   └── cisa-kev.json      # CISA KEV data
├── attestations/
│   ├── sbom.dsse.json     # SBOM DSSE envelope
│   └── scan.dsse.json     # Scan DSSE envelope
├── policy/
│   └── evaluation.json    # Policy evaluation result
├── delta.json             # Delta comparison (if available)
├── replay-command.txt     # Copy-ready replay command
├── replay.sh              # Bash replay script
└── replay.ps1             # PowerShell replay script
```

## Scan Run Bundle Structure

```
evidence-run-{scanId}/
├── MANIFEST.json          # Run-level manifest
├── README.md              # Run-level documentation
└── findings/
    ├── {findingId1}/
    │   ├── manifest.json
    │   ├── README.md
    │   ├── sbom.cdx.json
    │   ├── reachability.json
    │   ├── vex/
    │   ├── attestations/
    │   ├── policy/
    │   ├── delta.json
    │   ├── replay-command.txt
    │   ├── replay.sh
    │   └── replay.ps1
    ├── {findingId2}/
    │   └── ...
    └── ...
```
## Manifest Schema
### Finding Manifest (manifest.json)

```json
{
  "schemaVersion": "1.0",
  "findingId": "f-abc123",
  "generatedAt": "2025-01-15T10:30:00Z",
  "cacheKey": "sha256:abc123...",
  "scannerVersion": "10.1.3",
  "files": [
    {
      "path": "sbom.cdx.json",
      "sha256": "abc123def456...",
      "size": 12345,
      "contentType": "application/json"
    },
    {
      "path": "reachability.json",
      "sha256": "789xyz...",
      "size": 5678,
      "contentType": "application/json"
    }
  ]
}
```
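Verifying a downloaded bundle against its manifest amounts to re-hashing each listed file and comparing digests. A sketch, with paths taken relative to the extracted bundle root (the helper name is illustrative):

```python
import hashlib
import json
from pathlib import Path

def verify_bundle(bundle_dir: Path) -> list[str]:
    """Return the manifest paths whose on-disk SHA-256 does not match."""
    manifest = json.loads((bundle_dir / "manifest.json").read_text())
    mismatches = []
    for entry in manifest["files"]:
        data = (bundle_dir / entry["path"]).read_bytes()
        if hashlib.sha256(data).hexdigest() != entry["sha256"]:
            mismatches.append(entry["path"])
    return mismatches
```

An empty result means every file listed in `manifest.json` is intact; any returned path indicates tampering or corruption.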
### Run Manifest (MANIFEST.json)
```json
{
  "schemaVersion": "1.0",
  "scanId": "scan-xyz789",
  "generatedAt": "2025-01-15T10:30:00Z",
  "totalFiles": 42,
  "scannerVersion": "10.1.3",
  "findings": [
    {
      "findingId": "f-abc123",
      "generatedAt": "2025-01-15T10:30:00Z",
      "cacheKey": "sha256:abc123...",
      "files": [...]
    },
    {
      "findingId": "f-def456",
      "generatedAt": "2025-01-15T10:30:00Z",
      "cacheKey": "sha256:def456...",
      "files": [...]
    }
  ]
}
```
## Replay Scripts
### Bash Script (replay.sh)

```bash
#!/usr/bin/env bash
# StellaOps Evidence Bundle Replay Script
# Generated: 2025-01-15T10:30:00Z
# Finding: f-abc123
# CVE: CVE-2024-1234

set -euo pipefail

# Input hashes for deterministic replay
ARTIFACT_DIGEST="sha256:a1b2c3d4e5f6..."
MANIFEST_HASH="sha256:abc123def456..."
FEED_HASH="sha256:feed789feed..."
POLICY_HASH="sha256:policy321..."

# Verify prerequisites
if ! command -v stella &> /dev/null; then
  echo "Error: stella CLI not found. Install from https://stellaops.org/install"
  exit 1
fi

echo "Replaying verdict for finding: ${ARTIFACT_DIGEST}"
echo "Using manifest: ${MANIFEST_HASH}"

# Execute replay
stella scan replay \
  --artifact "${ARTIFACT_DIGEST}" \
  --manifest "${MANIFEST_HASH}" \
  --feeds "${FEED_HASH}" \
  --policy "${POLICY_HASH}"

echo "Replay complete. Verify verdict matches original."
```
### PowerShell Script (replay.ps1)

```powershell
# StellaOps Evidence Bundle Replay Script
# Generated: 2025-01-15T10:30:00Z
# Finding: f-abc123
# CVE: CVE-2024-1234

$ErrorActionPreference = 'Stop'

# Input hashes for deterministic replay
$ArtifactDigest = "sha256:a1b2c3d4e5f6..."
$ManifestHash = "sha256:abc123def456..."
$FeedHash = "sha256:feed789feed..."
$PolicyHash = "sha256:policy321..."

# Verify prerequisites
if (-not (Get-Command stella -ErrorAction SilentlyContinue)) {
    Write-Error "stella CLI not found. Install from https://stellaops.org/install"
    exit 1
}

Write-Host "Replaying verdict for finding: $ArtifactDigest"
Write-Host "Using manifest: $ManifestHash"

# Execute replay
stella scan replay `
    --artifact $ArtifactDigest `
    --manifest $ManifestHash `
    --feeds $FeedHash `
    --policy $PolicyHash

Write-Host "Replay complete. Verify verdict matches original."
```
## README Format

### Finding README (README.md)

```markdown
# StellaOps Evidence Bundle

## Overview

- **Finding ID:** `f-abc123`
- **CVE:** `CVE-2024-1234`
- **Component:** `pkg:npm/lodash@4.17.15`
- **Generated:** 2025-01-15T10:30:00Z

## Input Hashes for Deterministic Replay

| Input | Hash |
|-------|------|
| Artifact Digest | `sha256:a1b2c3d4e5f6...` |
| Run Manifest | `sha256:abc123def456...` |
| Feed Snapshot | `sha256:feed789feed...` |
| Policy | `sha256:policy321...` |

## Replay Instructions

### Using Bash
```bash
chmod +x replay.sh
./replay.sh
```

### Using PowerShell
```powershell
.\replay.ps1
```

## Bundle Contents

| File | SHA-256 | Size |
|------|---------|------|
| `sbom.cdx.json` | `abc123...` | 12.3 KB |
| `reachability.json` | `789xyz...` | 5.6 KB |
| ... | ... | ... |

## Verification Status

- **Status:** verified
- **Hashes Verified:** ✓
- **Attestations Verified:** ✓
- **Evidence Complete:** ✓

---

*Generated by StellaOps Scanner*
```
## Integrity Verification

To verify bundle integrity:

1. **Download with digest header**: The `X-Archive-Digest` response header contains the archive's SHA-256 hash
2. **Verify archive hash**: `sha256sum evidence-{findingId}.zip`
3. **Verify file hashes**: Compare each file's SHA-256 against `manifest.json`

Example verification:

```bash
# Verify archive integrity
EXPECTED_HASH="abc123..."
ACTUAL_HASH=$(sha256sum evidence-f-abc123.zip | cut -d' ' -f1)
if [ "$EXPECTED_HASH" = "$ACTUAL_HASH" ]; then
  echo "Archive integrity verified"
else
  echo "Archive integrity check FAILED"
  exit 1
fi

# Verify individual files (read paths line by line so names with spaces survive)
cd evidence-f-abc123
jq -r '.files[].path' manifest.json | while read -r file; do
  expected=$(jq -r --arg p "$file" '.files[] | select(.path==$p) | .sha256' manifest.json)
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$expected" = "$actual" ]; then
    echo "✓ $file"
  else
    echo "✗ $file"
  fi
done
```
## Content Types

| File Type | Content-Type | Description |
|-----------|--------------|-------------|
| `.json` | `application/json` | JSON data files |
| `.cdx.json` | `application/json` | CycloneDX SBOM |
| `.dsse.json` | `application/json` | DSSE envelope |
| `.sh` | `text/x-shellscript` | Bash script |
| `.ps1` | `text/plain` | PowerShell script |
| `.md` | `text/markdown` | Markdown documentation |
| `.txt` | `text/plain` | Plain text |

## See Also

- [stella scan replay Command Reference](../cli/guides/commands/scan-replay.md)
- [Deterministic Replay Specification](../replay/DETERMINISTIC_REPLAY.md)
- [Unified Evidence Endpoint API](./unified-evidence-endpoint.md)
162
docs/modules/cli/guides/commands/scan-replay.md
Normal file
@@ -0,0 +1,162 @@
# scan replay Command Reference

The `stella scan replay` command performs deterministic verdict reproduction using explicit input hashes.

## Synopsis

```bash
stella scan replay [options]
```

## Description

Replays a scan with explicit hashes for **deterministic verdict reproduction**. This command enables:

- **Reproducibility**: Re-execute a scan with the same inputs to verify identical results
- **Audit compliance**: Prove historical decisions can be recreated
- **Offline verification**: Replay verdicts in air-gapped environments

Unlike `stella replay --manifest <file>`, which reads a manifest file, `stella scan replay` accepts individual hash parameters directly, making it suitable for:

- Commands copied from evidence bundles
- CI/CD pipeline integration
- Backend-generated replay commands

## Options

### Required Parameters

| Option | Description |
|--------|-------------|
| `--artifact <digest>` | Artifact digest to replay (e.g., `sha256:abc123...`) |
| `--manifest <hash>` | Run manifest hash for configuration |
| `--feeds <hash>` | Feed snapshot hash at time of scan |
| `--policy <hash>` | Policy ruleset hash |

### Optional Parameters

| Option | Description |
|--------|-------------|
| `--snapshot <id>` | Knowledge snapshot ID for offline replay |
| `--offline` | Run in offline/air-gapped mode. Requires all inputs to be locally cached |
| `--verify-inputs` | Verify all input hashes before starting replay |
| `-o, --output <path>` | Output file path for verdict JSON (defaults to stdout) |
| `--verbose` | Enable verbose output with hash confirmation |
## Usage Examples

### Basic Replay

```bash
stella scan replay \
  --artifact sha256:a1b2c3d4e5f6... \
  --manifest sha256:abc123def456... \
  --feeds sha256:feed789feed... \
  --policy sha256:policy321...
```

### Replay with Knowledge Snapshot

```bash
stella scan replay \
  --artifact sha256:a1b2c3d4e5f6... \
  --manifest sha256:abc123def456... \
  --feeds sha256:feed789feed... \
  --policy sha256:policy321... \
  --snapshot KS-2025-01-15-001
```

### Offline Replay with Verification

```bash
stella scan replay \
  --artifact sha256:a1b2c3d4e5f6... \
  --manifest sha256:abc123def456... \
  --feeds sha256:feed789feed... \
  --policy sha256:policy321... \
  --offline \
  --verify-inputs \
  --verbose
```

### Save Output to File

```bash
stella scan replay \
  --artifact sha256:a1b2c3d4e5f6... \
  --manifest sha256:abc123def456... \
  --feeds sha256:feed789feed... \
  --policy sha256:policy321... \
  --output replay-result.json
```

## Input Hash Verification

When `--verify-inputs` is specified, the command validates:

1. **Artifact digest format**: Must start with `sha256:` or `sha512:`
2. **Hash lengths**: SHA-256 = 64 hex characters, SHA-512 = 128 hex characters
3. **Local availability** (in offline mode): Verifies cached inputs exist
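The first two checks amount to a simple format rule. A small Python sketch of that rule (the function name is illustrative; the CLI's actual validator is not shown here):

```python
import re

# sha256: prefix + 64 hex chars, or sha512: prefix + 128 hex chars
_HASH_RE = re.compile(r"^(sha256:[0-9a-f]{64}|sha512:[0-9a-f]{128})$")

def is_valid_digest(value: str) -> bool:
    """Check the algorithm-prefix and hex-length rules described above."""
    return bool(_HASH_RE.match(value.lower()))
```
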

## Offline Mode

The `--offline` flag enables air-gapped replay:

- All inputs must be pre-cached locally
- No network calls are made
- Use `stella offline prepare` to pre-fetch required data

## Output Format

```json
{
  "status": "success",
  "artifactDigest": "sha256:a1b2c3d4e5f6...",
  "manifestHash": "sha256:abc123def456...",
  "feedSnapshotHash": "sha256:feed789feed...",
  "policyHash": "sha256:policy321...",
  "knowledgeSnapshotId": "KS-2025-01-15-001",
  "offlineMode": false,
  "startedAt": "2025-01-15T10:30:00Z",
  "completedAt": "2025-01-15T10:30:45Z",
  "verdict": {
    "findingId": "f-abc123",
    "status": "affected",
    "confidence": 0.95
  }
}
```
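Scripts can consume this result document directly. A minimal Python sketch that pulls out the fields a pipeline gate typically needs (field names taken from the example above; the helper name is illustrative):

```python
import json

def replay_verdict(result_json: str):
    """Extract (run status, per-finding verdict status) from a replay result."""
    result = json.loads(result_json)
    return result["status"], result["verdict"]["status"]
```
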

## Integration with Evidence Bundles

Evidence bundles generated by the `/v1/triage/findings/{id}/evidence/export` endpoint include ready-to-run replay scripts:

- `replay.sh` - Bash script for Linux/macOS
- `replay.ps1` - PowerShell script for Windows
- `replay-command.txt` - Raw command for copy-paste

Example from evidence bundle:

```bash
# From evidence bundle replay.sh
stella scan replay \
  --artifact "sha256:a1b2c3d4e5f6..." \
  --manifest "sha256:abc123def456..." \
  --feeds "sha256:feed789feed..." \
  --policy "sha256:policy321..."
```

## Related Commands

| Command | Description |
|---------|-------------|
| `stella replay --manifest <file>` | Replay using a manifest file |
| `stella replay verify` | Verify determinism by replaying twice |
| `stella replay snapshot` | Replay using knowledge snapshot ID |
| `stella offline prepare` | Pre-fetch data for offline replay |

## See Also

- [Deterministic Replay Specification](../../replay/DETERMINISTIC_REPLAY.md)
- [Offline Kit Documentation](../../24_OFFLINE_KIT.md)
- [Evidence Bundle Format](./evidence-bundle-format.md)
@@ -317,7 +317,7 @@ public interface IFeedConnector {
| `advisory.observation.updated@1` | `events/advisory.observation.updated@1.json` | Fired on new or superseded observations. Includes `observationId`, source metadata, `linksetSummary` (aliases/purls), supersedes pointer (if any), SHA-256 hash, and `traceId`. |
| `advisory.linkset.updated@1` | `events/advisory.linkset.updated@1.json` | Fired when correlation changes. Includes `linksetId`, `key{vulnerabilityId, productKey, confidence}`, observation deltas, conflicts, `updatedAt`, and canonical hash. |

Events are emitted via NATS (primary) and Redis Stream (fallback). Consumers acknowledge idempotently using the hash; duplicates are safe. Offline Kit captures both topics during bundle creation for air-gapped replay.
Events are emitted via NATS (primary) and Valkey Stream (fallback). Consumers acknowledge idempotently using the hash; duplicates are safe. Offline Kit captures both topics during bundle creation for air-gapped replay.

---

@@ -42,7 +42,7 @@ Purpose: unblock CONCELIER-LNM-21-005 by freezing the platform event shape for l
- No judgments: only raw facts, delta descriptions, and provenance pointers; any derived severity/merge content is forbidden.

### Error contracts for Scheduler
- Retryable NATS/Redis failures use backoff capped at 30s; after 5 attempts, emit `concelier.events.dlq` with the same envelope and `error` field describing transport failure.
- Retryable NATS/Valkey failures use backoff capped at 30s; after 5 attempts, emit `concelier.events.dlq` with the same envelope and `error` field describing transport failure.
- Consumers must NACK on schema validation failure; publisher logs `ERR_EVENT_SCHEMA` and quarantines the offending linkset id.

## Sample payload

@@ -31,7 +31,7 @@ Purpose: unblock CONCELIER-GRAPH-21-002 by freezing the platform event shape for
- No judgments: only raw facts and hash pointers; any derived severity/merge content is forbidden.

### Error contracts for Scheduler
- Retryable NATS/Redis failures use backoff capped at 30s; after 5 attempts, emit `concelier.events.dlq` with the same envelope and `error` field describing transport failure.
- Retryable NATS/Valkey failures use backoff capped at 30s; after 5 attempts, emit `concelier.events.dlq` with the same envelope and `error` field describing transport failure.
- Consumers must NACK on schema validation failure; publisher logs `ERR_EVENT_SCHEMA` and quarantines the offending observation id.

## Sample payload

@@ -56,7 +56,7 @@ concelier:
- `histogram_quantile(0.95, rate(apple_map_affected_count_bucket[1h]))` to watch affected-package fan-out
- `increase(apple_parse_failures_total[6h])` to catch parser drift (alerts at `>0`)
- **Alerts** – Page if `rate(apple_fetch_items_total[2h]) == 0` during business hours while other connectors are active. This often indicates lookup feed failures or misconfigured allow-lists.
- **Logs** – Surface warnings `Apple document {DocumentId} missing GridFS payload` or `Apple parse failed`—repeated hits imply storage issues or HTML regressions.
- **Logs** – Surface warnings `Apple document {DocumentId} missing document payload` or `Apple parse failed`—repeated hits imply storage issues or HTML regressions.
- **Telemetry pipeline** – `StellaOps.Concelier.WebService` now exports `StellaOps.Concelier.Connector.Vndr.Apple` alongside existing Concelier meters; ensure your OTEL collector or Prometheus scraper includes it.

## 4. Fixture Maintenance

@@ -40,7 +40,7 @@ concelier:
- `CCCS fetch completed feeds=… items=… newDocuments=… pendingDocuments=…`
- `CCCS parse completed parsed=… failures=…`
- `CCCS map completed mapped=… failures=…`
- Warnings fire when GridFS payloads/DTOs go missing or parser sanitisation fails.
- Warnings fire when document payloads/DTOs go missing or parser sanitisation fails.

Suggested Grafana alerts:
- `increase(cccs.fetch.failures_total[15m]) > 0`
@@ -53,7 +53,7 @@ Suggested Grafana alerts:
2. **Stage ingestion**:
   - Temporarily raise `maxEntriesPerFetch` (e.g. 500) and restart Concelier workers.
   - Run chained jobs until `pendingDocuments` drains:
     Run `stella db fetch --source cccs --stage fetch`, then `--stage parse`, then `--stage map`.
   - Monitor `cccs.fetch.unchanged` growth; once it approaches dataset size the backfill is complete.
3. **Optional pagination sweep** – for incremental mirrors, iterate `page=<n>` (0…N) while `response.Count == 50`, persisting JSON to disk. Store alongside metadata (`language`, `page`, SHA256) so repeated runs detect drift.
4. **Language split** – keep EN/FR payloads separate to preserve canonical language fields. The connector emits `Language` directly from the feed entry, so mixed ingestion simply produces parallel advisories keyed by the same serial number.

@@ -33,7 +33,7 @@ This runbook describes how Ops provisions, rotates, and distributes Cisco PSIRT
   - Update `concelier:sources:cisco:auth` (or the module-specific secret template) with the stored credentials.
   - For Offline Kit delivery, export encrypted secrets into `offline-kit/secrets/cisco-openvuln.json` using the platform’s sealed secret format.
4. **Connectivity validation**
   - From the Concelier control plane, run `stella db fetch --source vndr-cisco --stage fetch` (use staging or a controlled window).
   - Ensure the Source HTTP diagnostics record `Bearer` authorization headers and no 401/403 responses.

## 4. Rotation SOP
@@ -78,7 +78,7 @@ This runbook describes how Ops provisions, rotates, and distributes Cisco PSIRT
- `Cisco fetch completed date=… pages=… added=…` (info)
- `Cisco parse completed parsed=… failures=…` (info)
- `Cisco map completed mapped=… failures=…` (info)
- Warnings surface when DTO serialization fails or GridFS payload is missing.
- Warnings surface when DTO serialization fails or document payload is missing.
- Suggested alerts: non-zero `cisco.fetch.failures` in 15m, or `cisco.map.success` flatlines while fetch continues.

## 8. Incident response

@@ -37,7 +37,7 @@ concelier:
- `KISA feed returned {ItemCount}`
- `KISA fetched detail for {Idx} … category={Category}`
- `KISA mapped advisory {AdvisoryId} (severity={Severity})`
- Absence of warnings such as `document missing GridFS payload`.
- Absence of warnings such as `document missing payload`.
5. Validate PostgreSQL state (schema `vuln`):
   - `raw_documents` table metadata has `kisa.idx`, `kisa.category`, `kisa.title`.
   - `dtos` table contains `schemaVersion="kisa.detail.v1"`.

@@ -43,6 +43,6 @@ For large migrations, seed caches with archived zip bundles, then run fetch/pars

- Listing failures mark the source state with exponential backoff while attempting cache replay.
- Bulletin fetches fall back to cached copies before surfacing an error.
- Mongo integration tests rely on bundled OpenSSL 1.1 libraries (`src/Tools/openssl/linux-x64`) to keep `Mongo2Go` operational on modern distros.
- Integration tests use Testcontainers with PostgreSQL for connector verification.

Refer to `ru-nkcki` entries in `src/Concelier/StellaOps.Concelier.PluginBinaries/StellaOps.Concelier.Connector.Ru.Nkcki/TASKS.md` for outstanding items.

@@ -314,9 +314,9 @@ rustfs://stellaops/

### 7.5 PostgreSQL server baseline

* **Minimum supported server:** MongoDB **4.2+**. Driver 3.5.0 removes compatibility shims for 4.0; upstream has already announced 4.0 support will be dropped in upcoming C# driver releases.
* **Deploy images:** Compose/Helm defaults stay on `postgres:16`. For air-gapped installs, refresh Offline Kit bundles so the packaged `postgres` matches ≥4.2.
* **Upgrade guard:** During rollout, verify replica sets reach FCV `4.2` or above before swapping binaries; automation should hard-stop if FCV is <4.2.
* **Minimum supported server:** PostgreSQL **16+**. Earlier versions lack required features (e.g., enhanced JSON functions, performance improvements).
* **Deploy images:** Compose/Helm defaults stay on `postgres:16`. For air-gapped installs, refresh Offline Kit bundles so the packaged PostgreSQL image matches ≥16.
* **Upgrade guard:** During rollout, verify PostgreSQL major version ≥16 before applying schema migrations; automation should hard-stop if version check fails.

---

@@ -351,7 +351,7 @@ Prometheus + OTLP; Grafana dashboards ship in the charts.
* **Vulnerability response**:

  * Concelier red-flag advisories trigger accelerated **stable** patch rollout; UI/CLI “security patch available” notice.
  * 2025-10: Pinned `MongoDB.Driver` **3.5.0** and `SharpCompress` **0.41.0** across services (DEVOPS-SEC-10-301) to eliminate NU1902/NU1903 warnings surfaced during scanner cache/worker test runs; repacked the local `Mongo2Go` feed so test fixtures inherit the patched dependencies; future bumps follow the same central override pattern.
  * 2025-10: Pinned `SharpCompress` **0.41.0** across services (DEVOPS-SEC-10-301) to eliminate NU1903 warnings; future bumps follow the central override pattern. MongoDB dependencies were removed in Sprint 4400 (all persistence now uses PostgreSQL).

* **Backups/DR**:

@@ -2,6 +2,8 @@

_Updated: 2025-10-26 (UTC)_

> **Note (2025-12):** This document reflects the state at initial launch. Since then, MongoDB has been fully removed (Sprint 4400) and replaced with PostgreSQL. Redis references now use Valkey. See current deployment docs in `deploy/` for up-to-date configuration.

This document captures production launch sign-offs, deployment readiness checkpoints, and any open risks that must be tracked before GA cutover.

## 1. Sign-off Summary
@@ -13,11 +15,11 @@ This document captures production launch sign-offs, deployment readiness checkpo
| Attestor | Attestor Guild | `ATTESTOR-API-11-201` / `ATTESTOR-VERIFY-11-202` / `ATTESTOR-OBS-11-203` (DONE 2025-10-19) | READY | 2025-10-26T14:10Z | Rekor submission/verification pipeline green; telemetry pack published. |
| Scanner Web + Worker | Scanner WebService Guild | `SCANNER-WEB-09-10x`, `SCANNER-RUNTIME-12-30x` (DONE 2025-10-18 -> 2025-10-24) | READY* | 2025-10-26T14:20Z | Orchestrator envelope work (`SCANNER-EVENTS-16-301/302`) still open; see gaps. |
| Concelier Core & Connectors | Concelier Core / Ops Guild | Ops runbook sign-off in `docs/modules/concelier/operations/conflict-resolution.md` (2025-10-16) | READY | 2025-10-26T14:25Z | Conflict resolution & connector coverage accepted; Mongo schema hardening pending (see gaps). |
| Excititor API | Excititor Core Guild | Wave 0 connector ingest sign-offs (Sprint backlog reference) | READY | 2025-10-26T14:28Z | VEX linkset publishing complete for launch datasets. |
| Notify Web (legacy) | Notify Guild | Existing stack carried forward; Notifier program tracked separately (Sprint 38-40) | PENDING | 2025-10-26T14:32Z | Legacy notify web remains operational; migration to Notifier blocked on `SCANNER-EVENTS-16-301`. |
| Web UI | UI Guild | Stable build `registry.stella-ops.org/.../web-ui@sha256:10d9248...` deployed in stage and smoke-tested | READY | 2025-10-26T14:35Z | Policy editor GA items (Sprint 20) outside launch scope. |
| DevOps / Release | DevOps Guild | `deploy/tools/validate-profiles.sh` run (2025-10-26) covering dev/stage/prod/airgap/mirror | READY | 2025-10-26T15:02Z | Compose/Helm lint + docker compose config validated; see Section 2 for details. |
| Offline Kit | Offline Kit Guild | `DEVOPS-OFFLINE-18-004` (Go analyzer) and `DEVOPS-OFFLINE-18-005` (Python analyzer) complete; debug-store mirror pending (`DEVOPS-OFFLINE-17-004`). | PENDING | 2025-11-23T15:05Z | Release workflow now ships `out/release/debug`; run `mirror_debug_store.py` on next release artefact and commit `metadata/debug-store.json`. |

_\* READY with caveat - remaining work noted in Section 3._

@@ -38,7 +40,7 @@ _\* READY with caveat - remaining work noted in Section 3._
| Tenant scope propagation and audit coverage | Authority Core Guild | `AUTH-AOC-19-002` (DOING 2025-10-26) | Land enforcement + audit fixtures by Sprint 19 freeze | Medium - required for multi-tenant GA but does not block initial cutover if tenants scoped manually. |
| Orchestrator event envelopes + Notifier handshake | Scanner WebService Guild | `SCANNER-EVENTS-16-301` (BLOCKED), `SCANNER-EVENTS-16-302` (DOING) | Coordinate with Gateway/Notifier owners on preview package replacement or binding redirects; rerun `dotnet test` once patch lands and refresh schema docs. Share envelope samples in `docs/events/` after tests pass. | High — gating Notifier migration; legacy notify path remains functional meanwhile. |
| Offline Kit Python analyzer bundle | Offline Kit Guild + Scanner Guild | `DEVOPS-OFFLINE-18-005` (DONE 2025-10-26) | Monitor for follow-up manifest updates and rerun smoke script when analyzers change. | Medium - ensures language analyzer coverage stays current for offline installs. |
| Offline Kit debug store mirror | Offline Kit Guild + DevOps Guild | `DEVOPS-OFFLINE-17-004` (TODO 2025-11-23) | Release pipeline now publishes `out/release/debug`; run `mirror_debug_store.py`, verify hashes, and commit `metadata/debug-store.json`. | Low - symbol lookup remains accessible from staging assets but required before next Offline Kit tag. |
| Mongo schema validators for advisory ingestion | Concelier Storage Guild | `CONCELIER-STORE-AOC-19-001` (TODO) | Finalize JSON schema + migration toggles; coordinate with Ops for rollout window | Low - current validation handled in app layer; schema guard adds defense-in-depth. |
| Authority plugin telemetry alignment | Security Guild | `SEC2.PLG`, `SEC3.PLG`, `SEC5.PLG` (BLOCKED pending AUTH DPoP/MTLS tasks) | Resume once upstream auth surfacing stabilises | Low - plugin remains optional; launch uses default Authority configuration. |

360
docs/modules/evidence/unified-model.md
Normal file
@@ -0,0 +1,360 @@
# Unified Evidence Model

> **Module:** `StellaOps.Evidence.Core`
> **Status:** Production
> **Owner:** Platform Guild

## Overview

The Unified Evidence Model provides a standardized interface (`IEvidence`) and implementation (`EvidenceRecord`) for representing evidence across all StellaOps modules. This enables:

- **Cross-module evidence linking**: Evidence from Scanner, Attestor, Excititor, and Policy modules share a common contract.
- **Content-addressed verification**: Evidence records are immutable and verifiable via deterministic hashing.
- **Unified storage**: A single `IEvidenceStore` interface abstracts persistence across modules.
- **Cryptographic attestation**: Multiple signatures from different signers (internal, vendor, CI, operator) can vouch for evidence.

## Core Types

### IEvidence Interface

```csharp
public interface IEvidence
{
    string SubjectNodeId { get; }                        // Content-addressed subject
    EvidenceType EvidenceType { get; }                   // Type discriminator
    string EvidenceId { get; }                           // Computed hash identifier
    ReadOnlyMemory<byte> Payload { get; }                // Canonical JSON payload
    IReadOnlyList<EvidenceSignature> Signatures { get; }
    EvidenceProvenance Provenance { get; }
    string? ExternalPayloadCid { get; }                  // For large payloads
    string PayloadSchemaVersion { get; }
}
```
### EvidenceType Enum

The platform supports these evidence types:

| Type | Value | Description | Example Payload |
|------|-------|-------------|-----------------|
| `Reachability` | 1 | Call graph analysis | Paths, confidence, graph digest |
| `Scan` | 2 | Vulnerability finding | CVE, severity, affected package |
| `Policy` | 3 | Policy evaluation | Rule ID, verdict, inputs |
| `Artifact` | 4 | SBOM entry metadata | PURL, digest, build info |
| `Vex` | 5 | VEX statement | Status, justification, impact |
| `Epss` | 6 | EPSS score | Score, percentile, model date |
| `Runtime` | 7 | Runtime observation | eBPF/ETW traces, call frames |
| `Provenance` | 8 | Build provenance | SLSA attestation, builder info |
| `Exception` | 9 | Applied exception | Exception ID, reason, expiry |
| `Guard` | 10 | Guard/gate analysis | Gate type, condition, bypass |
| `Kev` | 11 | KEV status | In-KEV flag, added date |
| `License` | 12 | License analysis | SPDX ID, compliance status |
| `Dependency` | 13 | Dependency metadata | Graph edge, version range |
| `Custom` | 100 | User-defined | Schema-versioned custom payload |
### EvidenceRecord

The concrete implementation with deterministic identity:

```csharp
public sealed record EvidenceRecord : IEvidence
{
    public static EvidenceRecord Create(
        string subjectNodeId,
        EvidenceType evidenceType,
        ReadOnlyMemory<byte> payload,
        EvidenceProvenance provenance,
        string payloadSchemaVersion,
        IReadOnlyList<EvidenceSignature>? signatures = null,
        string? externalPayloadCid = null);

    public bool VerifyIntegrity();
}
```

**EvidenceId Computation:**

The `EvidenceId` is a SHA-256 hash of the canonicalized fields using versioned prefixing:

```
EvidenceId = "evidence:" + CanonJson.HashVersionedPrefixed("IEvidence", "v1", {
    SubjectNodeId,
    EvidenceType,
    PayloadHash,
    Provenance.GeneratorId,
    Provenance.GeneratorVersion,
    Provenance.GeneratedAt (ISO 8601)
})
```
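The shape of this scheme can be illustrated in a few lines of Python. Note this is a sketch only: the exact canonicalization that `CanonJson.HashVersionedPrefixed` performs (field ordering, separators, how the type name and version are folded in) is an assumption here, not the library's documented algorithm. What the sketch does show accurately is the design point: mixing the type name and schema version into the hashed material means records produced under a future `v2` layout can never collide with `v1` identifiers.

```python
import hashlib
import json

def hash_versioned_prefixed(type_name: str, version: str, fields: dict) -> str:
    """Versioned canonical-JSON hash (illustrative canonicalization:
    sorted keys, compact separators, type|version prefix)."""
    canonical = json.dumps(fields, sort_keys=True, separators=(",", ":"))
    material = f"{type_name}|{version}|{canonical}".encode()
    return hashlib.sha256(material).hexdigest()

# Hypothetical field values; the real inputs come from the record itself.
evidence_id = "evidence:" + hash_versioned_prefixed(
    "IEvidence", "v1",
    {
        "SubjectNodeId": "sha256:abc123",
        "EvidenceType": 2,
        "PayloadHash": "deadbeef",
        "GeneratorId": "scanner",
        "GeneratorVersion": "1.0.0",
        "GeneratedAt": "2025-01-15T10:30:00Z",
    },
)
```

Because the inputs are canonicalized before hashing, recomputing the hash from a stored record is deterministic, which is what makes `VerifyIntegrity()` possible.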

### EvidenceSignature

Cryptographic attestation by a signer:

```csharp
public sealed record EvidenceSignature
{
    public required string SignerId { get; init; }
    public required string Algorithm { get; init; }      // ES256, RS256, EdDSA
    public required string SignatureBase64 { get; init; }
    public required DateTimeOffset SignedAt { get; init; }
    public SignerType SignerType { get; init; }
    public IReadOnlyList<string>? CertificateChain { get; init; }
}
```

**SignerType Values:**

- `Internal` (0): StellaOps service
- `Vendor` (1): External vendor/supplier
- `CI` (2): CI/CD pipeline
- `Operator` (3): Human operator
- `TransparencyLog` (4): Rekor/transparency log
- `Scanner` (5): Security scanner
- `PolicyEngine` (6): Policy engine
- `Unknown` (255): Unclassified

### EvidenceProvenance

Generation context:

```csharp
public sealed record EvidenceProvenance
{
    public required string GeneratorId { get; init; }
    public required string GeneratorVersion { get; init; }
    public required DateTimeOffset GeneratedAt { get; init; }
    public string? CorrelationId { get; init; }
    public Guid? TenantId { get; init; }
    // ... additional fields
}
```
## Adapters
|
||||
|
||||
Adapters convert module-specific evidence types to the unified `IEvidence` interface:
|
||||
|
||||
### Available Adapters
|
||||
|
||||
| Adapter | Source Module | Source Type | Target Evidence Types |
|
||||
|---------|---------------|-------------|----------------------|
|
||||
| `EvidenceBundleAdapter` | Scanner | `EvidenceBundle` | Reachability, Vex, Provenance, Scan |
|
||||
| `EvidenceStatementAdapter` | Attestor | `EvidenceStatement` (in-toto) | Scan |
|
||||
| `ProofSegmentAdapter` | Scanner | `ProofSegment` | Varies by segment type |
|
||||
| `VexObservationAdapter` | Excititor | `VexObservation` | Vex, Provenance |
|
||||
| `ExceptionApplicationAdapter` | Policy | `ExceptionApplication` | Exception |
|
||||
|
||||
### Adapter Interface
|
||||
|
||||
```csharp
|
||||
public interface IEvidenceAdapter<TSource>
|
||||
{
|
||||
IReadOnlyList<IEvidence> Convert(
|
||||
TSource source,
|
||||
string subjectNodeId,
|
||||
EvidenceProvenance provenance);
|
||||
|
||||
bool CanConvert(TSource source);
|
||||
}
|
||||
```
|
||||
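
Implementing the interface for a new source type is mechanical. The following is a hypothetical minimal adapter; the `MyScanResult` type and its fields are illustrative and not part of the library:

```csharp
// Hypothetical source type, for illustration only.
public sealed record MyScanResult(string Target, byte[] ReportJson);

public sealed class MyScanResultAdapter : IEvidenceAdapter<MyScanResult>
{
    public bool CanConvert(MyScanResult source) =>
        source.ReportJson is { Length: > 0 };

    public IReadOnlyList<IEvidence> Convert(
        MyScanResult source,
        string subjectNodeId,
        EvidenceProvenance provenance)
    {
        // One source object may yield several evidence records;
        // this sketch produces a single Scan record.
        var record = EvidenceRecord.Create(
            subjectNodeId: subjectNodeId,
            evidenceType: EvidenceType.Scan,
            payload: source.ReportJson,
            provenance: provenance,
            payloadSchemaVersion: "scan/v1");

        return new[] { record };
    }
}
```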

### Using Adapters

Adapters use **input DTOs** to avoid circular dependencies:

```csharp
// Using VexObservationAdapter
var adapter = new VexObservationAdapter();

var input = new VexObservationInput
{
    ObservationId = "obs-001",
    ProviderId = "nvd",
    StreamId = "cve-feed",
    // ... other fields from VexObservation
};

var provenance = new EvidenceProvenance
{
    GeneratorId = "excititor-ingestor",
    GeneratorVersion = "1.0.0",
    GeneratedAt = DateTimeOffset.UtcNow
};

if (adapter.CanConvert(input))
{
    IReadOnlyList<IEvidence> records = adapter.Convert(
        input,
        subjectNodeId: "sha256:abc123",
        provenance);
}
```
## Evidence Store

### IEvidenceStore Interface

```csharp
public interface IEvidenceStore
{
    Task<EvidenceRecord> StoreAsync(
        EvidenceRecord record,
        CancellationToken ct = default);

    Task<IReadOnlyList<EvidenceRecord>> StoreBatchAsync(
        IEnumerable<EvidenceRecord> records,
        CancellationToken ct = default);

    Task<EvidenceRecord?> GetByIdAsync(
        string evidenceId,
        CancellationToken ct = default);

    Task<IReadOnlyList<EvidenceRecord>> GetBySubjectAsync(
        string subjectNodeId,
        EvidenceType? evidenceType = null,
        CancellationToken ct = default);

    Task<IReadOnlyList<EvidenceRecord>> GetByTypeAsync(
        EvidenceType evidenceType,
        int limit = 100,
        CancellationToken ct = default);

    Task<bool> ExistsAsync(
        string evidenceId,
        CancellationToken ct = default);

    Task<bool> DeleteAsync(
        string evidenceId,
        CancellationToken ct = default);
}
```

### Implementations

- **`InMemoryEvidenceStore`**: Thread-safe in-memory store for testing and development.
- **`PostgresEvidenceStore`** (planned): Production store with tenant isolation and indexing.

## Usage Examples

### Creating Evidence

```csharp
var provenance = new EvidenceProvenance
{
    GeneratorId = "scanner-service",
    GeneratorVersion = "2.1.0",
    GeneratedAt = DateTimeOffset.UtcNow,
    TenantId = tenantId
};

// Serialize payload to canonical JSON
var payloadBytes = CanonJson.Canonicalize(new
{
    cveId = "CVE-2024-1234",
    severity = "HIGH",
    affectedPackage = "pkg:npm/lodash@4.17.20"
});

var evidence = EvidenceRecord.Create(
    subjectNodeId: "sha256:abc123def456...",
    evidenceType: EvidenceType.Scan,
    payload: payloadBytes,
    provenance: provenance,
    payloadSchemaVersion: "scan/v1");
```

### Storing and Retrieving

```csharp
var store = new InMemoryEvidenceStore();

// Store
await store.StoreAsync(evidence);

// Retrieve by ID
var retrieved = await store.GetByIdAsync(evidence.EvidenceId);

// Retrieve all evidence for a subject
var allForSubject = await store.GetBySubjectAsync(
    "sha256:abc123def456...",
    evidenceType: EvidenceType.Scan);

// Verify integrity
bool isValid = retrieved!.VerifyIntegrity();
```

### Cross-Module Evidence Linking

```csharp
// Scanner produces evidence bundle
var bundle = scanner.ProduceEvidenceBundle(target);

// Convert to unified evidence
var adapter = new EvidenceBundleAdapter();
var evidenceRecords = adapter.Convert(bundle, subjectNodeId, provenance);

// Store all records
await store.StoreBatchAsync(evidenceRecords);

// Later, any module can query by subject
var allEvidence = await store.GetBySubjectAsync(subjectNodeId);

// Filter by type
var reachabilityEvidence = allEvidence
    .Where(e => e.EvidenceType == EvidenceType.Reachability);
var vexEvidence = allEvidence
    .Where(e => e.EvidenceType == EvidenceType.Vex);
```

## Schema Versioning

Each evidence type payload has a schema version (`PayloadSchemaVersion`) for forward compatibility:

- `scan/v1`: Initial scan evidence schema
- `reachability/v1`: Reachability evidence schema
- `vex-statement/v1`: VEX statement evidence schema
- `proof-segment/v1`: Proof segment evidence schema
- `exception-application/v1`: Exception application schema

Consumers should check `PayloadSchemaVersion` before deserializing payloads to handle schema evolution.
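
A defensive consumer check might look like the following sketch. The `ScanFinding` DTO and the `PayloadSchemaVersion`/`Payload` property names are assumptions inferred from the `EvidenceRecord.Create` parameters; only versions the consumer understands are parsed:

```csharp
// Dispatch on the schema version before touching the payload bytes.
// Uses System.Text.Json; ScanFinding is a hypothetical consumer DTO.
static ScanFinding? ReadScanPayload(EvidenceRecord record)
{
    return record.PayloadSchemaVersion switch
    {
        "scan/v1" => JsonSerializer.Deserialize<ScanFinding>(record.Payload),
        // Unknown or newer versions are skipped rather than mis-parsed.
        _ => null,
    };
}
```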

## Integration Patterns

### Module Integration

Each module that produces evidence should:

1. Create an adapter if converting from module-specific types
2. Use `EvidenceRecord.Create()` for new evidence
3. Store evidence via `IEvidenceStore`
4. Include provenance with generator identification

### Verification Flow

```
1. Retrieve evidence by SubjectNodeId
2. Call VerifyIntegrity() to check EvidenceId
3. Verify signatures against known trust roots
4. Deserialize and validate payload against schema
```
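
The four steps above can be sketched as one loop over a subject's records. `VerifySignatures`, `ValidateSchema`, and the `Signatures` property are illustrative placeholders, not library APIs:

```csharp
// Sketch of the verification flow; trust-root and schema checks are
// delegated to hypothetical helpers.
async Task<bool> VerifyEvidenceAsync(IEvidenceStore store, string subjectNodeId)
{
    var records = await store.GetBySubjectAsync(subjectNodeId);   // step 1
    foreach (var record in records)
    {
        if (!record.VerifyIntegrity())                            // step 2
            return false;
        if (!VerifySignatures(record.Signatures))                 // step 3
            return false;
        if (!ValidateSchema(record))                              // step 4
            return false;
    }
    return true;
}
```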

## Testing

The `StellaOps.Evidence.Core.Tests` project includes:

- **111 unit tests** covering:
  - EvidenceRecord creation and hash computation
  - InMemoryEvidenceStore CRUD operations
  - All adapter conversions (VexObservation, ExceptionApplication, ProofSegment)
  - Edge cases and error handling

Run tests:

```bash
dotnet test src/__Libraries/StellaOps.Evidence.Core.Tests/
```

## Related Documentation

- [Proof Chain Architecture](../attestor/proof-chain.md)
- [Evidence Bundle Design](../scanner/evidence-bundle.md)
- [VEX Observation Model](../excititor/vex-observation.md)
- [Policy Exceptions](../policy/exceptions.md)
@@ -39,5 +39,5 @@ Defines the event envelope for evidence timelines emitted by Excititor. All fiel
- Emit at-most-once per storage write; idempotent consumers rely on `(eventId, tenant)`.

## Transport
- Default topic: `excititor.timeline.v1` (NATS/Redis). Subject includes tenant: `excititor.timeline.v1.<tenant>`.
- Default topic: `excititor.timeline.v1` (NATS/Valkey). Subject includes tenant: `excititor.timeline.v1.<tenant>`.
- Payload size should stay <32 KiB; truncate conflict arrays with `truncated=true` flag if needed (keep hash counts deterministic).
@@ -1,33 +0,0 @@
# Excititor · VEX Raw Collection Validator (AOC-19-001/002)

> **DEPRECATED:** This document describes MongoDB validators which are no longer used. Excititor now uses PostgreSQL for persistence (Sprint 4400). Schema validation is now performed via PostgreSQL constraints and check constraints. See `docs/db/SPECIFICATION.md` for current database schema.

- **Date:** 2025-11-25
- **Scope:** EXCITITOR-STORE-AOC-19-001 / 19-002
- **Working directory:** ~~`src/Excititor/__Libraries/StellaOps.Excititor.Storage.Mongo`~~ (deprecated)

## What shipped (historical)
- `$jsonSchema` validator applied to `vex_raw` (migration `20251125-vex-raw-json-schema`) with `validationAction=warn`, `validationLevel=moderate` to surface contract violations without impacting ingestion.
- Schema lives at `docs/modules/excititor/schemas/vex_raw.schema.json` (mirrors Mongo validator fields: digest/id, providerId, format, sourceUri, retrievedAt, optional content/GridFS object id, metadata strings).
- Migration is auto-registered in DI; hosted migration runner applies it on service start. New collections created with the validator if missing.

## How to run (online/offline)
1) Ensure Excititor WebService/Worker starts with Mongo credentials that allow `collMod`.
2) Validator applies automatically via migration runner. To force manually:
```bash
mongosh "$MONGO_URI" --eval 'db.runCommand({collMod:"vex_raw", validator:'$(cat docs/modules/excititor/schemas/vex_raw.schema.json)', validationAction:"warn", validationLevel:"moderate"})'
```
3) Offline kit: bundle `docs/modules/excititor/schemas/vex_raw.schema.json` with release artifacts; ops can apply via `mongosh` or `mongo` offline against snapshots.

## Rollback / relax
- To relax validation (e.g., hotfix window): `db.runCommand({collMod:"vex_raw", validator:{}, validationAction:"warn", validationLevel:"off"})`.
- Reapplying the migration restores the schema.

## Compatibility notes
- Validator keeps `additionalProperties=true` to avoid blocking future fields; required set is minimal to guarantee provenance + content hash presence.
- Action is `warn` to avoid breaking existing feeds; flip to `error` once downstream datasets are clean.

## Acceptance
- Contract + schema captured.
- Migration in code and auto-applied.
- Rollback path documented.
@@ -1,7 +1,13 @@

# Findings Ledger

Start here for ledger docs.
Immutable, append-only event ledger for tracking vulnerability findings, policy decisions, and workflow state changes across the StellaOps platform.

## Purpose

- **Audit trail**: Every finding state change (open, triage, suppress, resolve) is recorded with cryptographic hashes and actor metadata.
- **Deterministic replay**: Events can be replayed to reconstruct finding states at any point in time.
- **Merkle anchoring**: Event chains are Merkle-linked for tamper-evident verification.
- **Tenant isolation**: All events are partitioned by tenant with cross-tenant access forbidden.

## Quick links
- FL1–FL10 remediation tracker: `gaps-FL1-FL10.md`
@@ -37,7 +37,7 @@ Status for these items is tracked in `src/Notifier/StellaOps.Notifier/TASKS.md`

## Integrations & dependencies
- **Storage:** PostgreSQL (schema `notify`) for rules, channels, deliveries, digests, and throttles; Valkey for worker coordination.
- **Queues:** Redis Streams or NATS JetStream for ingestion, throttling, and DLQs (`notify.dlq`).
- **Queues:** Valkey Streams or NATS JetStream for ingestion, throttling, and DLQs (`notify.dlq`).
- **Authority:** OpTok-protected APIs, DPoP-backed CLI/UI scopes (`notify.viewer`, `notify.operator`, `notify.admin`), and secret references for channel credentials.
- **Observability:** Prometheus metrics (`notify.sent_total`, `notify.failed_total`, `notify.digest_coalesced_total`, etc.), OTEL traces, and dashboards documented in `docs/notifications/architecture.md#12-observability-prometheus--otel`.
@@ -26,7 +26,7 @@ src/
├─ StellaOps.Notify.Engine/             # rules engine, templates, idempotency, digests, throttles
├─ StellaOps.Notify.Models/             # DTOs (Rule, Channel, Event, Delivery, Template)
├─ StellaOps.Notify.Storage.Postgres/   # canonical persistence (notify schema)
├─ StellaOps.Notify.Queue/              # bus client (Redis Streams/NATS JetStream)
├─ StellaOps.Notify.Queue/              # bus client (Valkey Streams/NATS JetStream)
└─ StellaOps.Notify.Tests.*             # unit/integration/e2e
```

@@ -35,7 +35,7 @@ src/
* **Notify.WebService** (stateless API)
* **Notify.Worker** (horizontal scale)

**Dependencies**: Authority (OpToks; DPoP/mTLS), **PostgreSQL** (notify schema), Redis/NATS (bus), HTTP egress to Slack/Teams/Webhooks, SMTP relay for Email.
**Dependencies**: Authority (OpToks; DPoP/mTLS), **PostgreSQL** (notify schema), Valkey/NATS (bus), HTTP egress to Slack/Teams/Webhooks, SMTP relay for Email.

> **Configuration.** Notify.WebService bootstraps from `notify.yaml` (see `etc/notify.yaml.sample`). Use `storage.driver: postgres` and provide `postgres.notify` options (`connectionString`, `schemaName`, pool sizing, timeouts). Authority settings follow the platform defaults—when running locally without Authority, set `authority.enabled: false` and supply `developmentSigningKey` so JWTs can be validated offline.
>
@@ -277,7 +277,7 @@ Canonical JSON Schemas for rules/channels/events live in `docs/modules/notify/re
* `throttles`

```
{ key:"idem:<hash>", ttlAt }   // short-lived, also cached in Redis
{ key:"idem:<hash>", ttlAt }   // short-lived, also cached in Valkey
```

**Indexes**: rules by `{tenantId, enabled}`, deliveries by `{tenantId, sentAt desc}`, digests by `{tenantId, actionKey}`.
@@ -346,12 +346,12 @@ Authority signs ack tokens using keys configured under `notifications.ackTokens`

* **Ingestor**: N consumers with per‑key ordering (key = tenant|digest|namespace).
* **RuleMatcher**: loads active rules snapshot for tenant into memory; vectorized predicate check.
* **Throttle/Dedupe**: consult Redis + PostgreSQL `throttles`; if hit → record `status=throttled`.
* **Throttle/Dedupe**: consult Valkey + PostgreSQL `throttles`; if hit → record `status=throttled`.
* **DigestCoalescer**: append to open digest window or flush when timer expires.
* **Renderer**: select template (channel+locale), inject variables, enforce length limits, compute `bodyHash`.
* **Connector**: send; handle provider‑specific rate limits and backoffs; `maxAttempts` with exponential jitter; overflow → DLQ (dead‑letter topic) + UI surfacing.

**Idempotency**: per action **idempotency key** stored in Redis (TTL = `throttle window` or `digest window`). Connectors also respect **provider** idempotency where available (e.g., Slack `client_msg_id`).
**Idempotency**: per action **idempotency key** stored in Valkey (TTL = `throttle window` or `digest window`). Connectors also respect **provider** idempotency where available (e.g., Slack `client_msg_id`).

---
@@ -359,7 +359,7 @@ Authority signs ack tokens using keys configured under `notifications.ackTokens`

* **Per‑tenant** RPM caps (default 600/min) + **per‑channel** concurrency (Slack 1–4, Teams 1–2, Email 8–32 based on relay).
* **Backoff** map: Slack 429 → respect `Retry‑After`; SMTP 4xx → retry; 5xx → retry with jitter; permanent rejects → drop with status recorded.
* **DLQ**: NATS/Redis stream `notify.dlq` with `{event, rule, action, error}` for operator inspection; UI shows DLQ items.
* **DLQ**: NATS/Valkey stream `notify.dlq` with `{event, rule, action, error}` for operator inspection; UI shows DLQ items.

---
@@ -402,7 +402,7 @@ notify:
    issuer: "https://authority.internal"
    require: "dpop"          # or "mtls"
  bus:
    kind: "redis"            # or "nats"
    kind: "valkey"           # or "nats" (valkey uses redis:// protocol)
    streams:
      - "scanner.events"
      - "scheduler.events"
@@ -455,7 +455,7 @@ notify:
| Invalid channel secret | Mark channel unhealthy; suppress sends; surface in UI |
| Rule explosion (matches everything) | Safety valve: per‑tenant RPM caps; auto‑pause rule after X drops; UI alert |
| Bus outage | Buffer to local queue (bounded); resume consuming when healthy |
| PostgreSQL slowness | Fall back to Redis throttles; batch write deliveries; shed low‑priority notifications |
| PostgreSQL slowness | Fall back to Valkey throttles; batch write deliveries; shed low‑priority notifications |

---
|
||||
@@ -466,7 +466,7 @@ notify:
|
||||
* **Integration**: synthetic event storm (10k/min), ensure p95 latency & duplicate rate.
|
||||
* **Security**: DPoP/mTLS on APIs; secretRef resolution; webhook signing & replay windows.
|
||||
* **i18n**: localized templates render deterministically.
|
||||
* **Chaos**: Slack/Teams API flaps; SMTP greylisting; Redis hiccups; ensure graceful degradation.
|
||||
* **Chaos**: Slack/Teams API flaps; SMTP greylisting; Valkey hiccups; ensure graceful degradation.
|
||||
|
||||
---
|
||||
|
||||
@@ -514,7 +514,7 @@ sequenceDiagram
## 18) Implementation notes

* **Language**: .NET 10; minimal API; `System.Text.Json` with canonical writer for body hashing; Channels for pipelines.
* **Bus**: Redis Streams (**XGROUP** consumers) or NATS JetStream for at‑least‑once with ack; per‑tenant consumer groups to localize backpressure.
* **Bus**: Valkey Streams (**XGROUP** consumers) or NATS JetStream for at‑least‑once with ack; per‑tenant consumer groups to localize backpressure.
* **Templates**: compile and cache per rule+channel+locale; version with rule `updatedAt` to invalidate.
* **Rules**: store raw YAML + parsed AST; validate with schema + static checks (e.g., nonsensical combos).
* **Secrets**: pluggable secret resolver (Authority Secret proxy, K8s, Vault).
@@ -1,5 +1,5 @@
# StellaOps Source & Job Orchestrator

# StellaOps Source & Job Orchestrator

The Orchestrator schedules, observes, and recovers ingestion and analysis jobs across the StellaOps platform.

## Latest updates (2025-11-30)
@@ -15,24 +15,24 @@ The Orchestrator schedules, observes, and recovers ingestion and analysis jobs a
- Enforce rate-limits, concurrency and dependency chains across queues.
- Stream structured events and audit logs for incident response.
- Provide Task Runner bridge semantics (claim/ack, heartbeats, progress, artifacts, backfills) for Go/Python SDKs.

## Key components
- Orchestrator WebService (control plane).
- Queue adapters (Redis/NATS) and job ledger.
- Console dashboard module and CLI integration for operators.

## Integrations & dependencies
- Authority for authN/Z on operational actions.
- Telemetry stack for job metrics and alerts.
- Scheduler/Concelier/Excititor workers for job lifecycle.
- Offline Kit for state export/import during air-gap refreshes.

## Key components
- Orchestrator WebService (control plane).
- Queue adapters (Valkey/NATS) and job ledger.
- Console dashboard module and CLI integration for operators.

## Integrations & dependencies
- Authority for authN/Z on operational actions.
- Telemetry stack for job metrics and alerts.
- Scheduler/Concelier/Excititor workers for job lifecycle.
- Offline Kit for state export/import during air-gap refreshes.

## Operational notes
- Job recovery runbooks and dashboard JSON as described in Epic 9.
- Rate-limit and lease reconfiguration guidelines; keep lease defaults aligned across runners and SDKs (Go/Python).
- Log streaming: SSE/WS endpoints carry correlationId + tenant/project; buffer size and retention must be documented in runbooks.
- When using `orch:quota` / `orch:backfill` scopes, capture reason/ticket fields in runbooks and audit checklists.

## Epic alignment
- Epic 9: Source & Job Orchestrator Dashboard.
- ORCH stories in ../../TASKS.md.

## Epic alignment
- Epic 9: Source & Job Orchestrator Dashboard.
- ORCH stories in ../../TASKS.md.
@@ -26,7 +26,7 @@ src/
├─ StellaOps.Scanner.Worker/            # queue consumer; executes analyzers
├─ StellaOps.Scanner.Models/            # DTOs, evidence, graph nodes, CDX/SPDX adapters
├─ StellaOps.Scanner.Storage/           # PostgreSQL repositories; RustFS object client (default) + S3 fallback; ILM/GC
├─ StellaOps.Scanner.Queue/             # queue abstraction (Redis/NATS/RabbitMQ)
├─ StellaOps.Scanner.Queue/             # queue abstraction (Valkey/NATS/RabbitMQ)
├─ StellaOps.Scanner.Cache/             # layer cache; file CAS; bloom/bitmap indexes
├─ StellaOps.Scanner.EntryTrace/        # ENTRYPOINT/CMD → terminal program resolver (shell AST)
├─ StellaOps.Scanner.Analyzers.OS.[Apk|Dpkg|Rpm]/
@@ -92,20 +92,20 @@ CLI usage: `stella scan --semantic <image>` enables semantic analysis in output.
- **Hybrid attestation**: emit **graph-level DSSE** for every `richgraph-v1` (mandatory) and optional **edge-bundle DSSE** (≤512 edges) for runtime/init-root/contested edges or third-party provenance. Publish graph DSSE digests to Rekor by default; edge-bundle Rekor publish is policy-driven. CAS layout: `cas://reachability/graphs/{blake3}` for graph body, `.../{blake3}.dsse` for envelope, and `cas://reachability/edges/{graph_hash}/{bundle_id}[.dsse]` for bundles. Deterministic ordering before hashing/signing is required.
- **Deterministic call-graph manifest**: capture analyzer versions, feed hashes, toolchain digests, and flags in a manifest stored alongside `richgraph-v1`; replaying with the same manifest MUST yield identical node/edge sets and hashes (see `docs/reachability/lead.md`).

### 1.1 Queue backbone (Redis / NATS)
### 1.1 Queue backbone (Valkey / NATS)

`StellaOps.Scanner.Queue` exposes a transport-agnostic contract (`IScanQueue`/`IScanQueueLease`) used by the WebService producer and Worker consumers. Sprint 9 introduces two first-party transports:

- **Redis Streams** (default). Uses consumer groups, deterministic idempotency keys (`scanner:jobs:idemp:*`), and supports lease claim (`XCLAIM`), renewal, exponential-backoff retries, and a `scanner:jobs:dead` stream for exhausted attempts.
- **Valkey Streams** (default). Uses consumer groups, deterministic idempotency keys (`scanner:jobs:idemp:*`), and supports lease claim (`XCLAIM`), renewal, exponential-backoff retries, and a `scanner:jobs:dead` stream for exhausted attempts.
- **NATS JetStream**. Provisions the `SCANNER_JOBS` work-queue stream + durable consumer `scanner-workers`, publishes with `MsgId` for dedupe, applies backoff via `NAK` delays, and routes dead-lettered jobs to `SCANNER_JOBS_DEAD`.

Metrics are emitted via `Meter` counters (`scanner_queue_enqueued_total`, `scanner_queue_retry_total`, `scanner_queue_deadletter_total`), and `ScannerQueueHealthCheck` pings the active backend (Redis `PING`, NATS `PING`). Configuration is bound from `scanner.queue`:
Metrics are emitted via `Meter` counters (`scanner_queue_enqueued_total`, `scanner_queue_retry_total`, `scanner_queue_deadletter_total`), and `ScannerQueueHealthCheck` pings the active backend (Valkey `PING`, NATS `PING`). Configuration is bound from `scanner.queue`:

```yaml
scanner:
  queue:
    kind: redis              # or nats
    redis:
    kind: valkey             # or nats (valkey uses redis:// protocol)
    valkey:
      connectionString: "redis://queue:6379/0"
      streamName: "scanner:jobs"
    nats:
@@ -133,7 +133,7 @@ The DI extension (`AddScannerQueue`) wires the selected transport, so future add
* **OCI registry** with **Referrers API** (discover attached SBOMs/signatures).
* **RustFS** (default, offline-first) for SBOM artifacts; optional S3/MinIO compatibility retained for migration; **Object Lock** semantics emulated via retention headers; **ILM** for TTL.
* **PostgreSQL** for catalog, job state, diffs, ILM rules.
* **Queue** (Redis Streams/NATS/RabbitMQ).
* **Queue** (Valkey Streams/NATS/RabbitMQ).
* **Authority** (on‑prem OIDC) for **OpToks** (DPoP/mTLS).
* **Signer** + **Attestor** (+ **Fulcio/KMS** + **Rekor v2**) for DSSE + transparency.
@@ -390,7 +390,7 @@ Diffs are stored as artifacts and feed **UI** and **CLI**.
```yaml
scanner:
  queue:
    kind: redis
    kind: valkey             # uses redis:// protocol
    url: "redis://queue:6379/0"
  postgres:
    connectionString: "Host=postgres;Port=5432;Database=scanner;Username=stellaops;Password=stellaops"
@@ -2,7 +2,7 @@

## Delivery phases
- **Phase 1 – Control plane & job queue**
  Finalise Scanner WebService, queue abstraction (Redis/NATS), job leasing, CAS layer cache, artifact catalog, and API endpoints.
  Finalise Scanner WebService, queue abstraction (Valkey/NATS), job leasing, CAS layer cache, artifact catalog, and API endpoints.
- **Phase 2 – Analyzer parity & SBOM assembly**
  Implement OS/Lang/Native analyzers, inventory/usage SBOM views, entry trace resolution, deterministic component identity.
- **Phase 3 – Diff & attestations**
@@ -15,7 +15,7 @@ Scheduler detects advisory/VEX deltas, computes impact windows, and orchestrates

## Integrations & dependencies
- PostgreSQL (schema `scheduler`) for impact models.
- Redis/NATS for queueing.
- Valkey/NATS for queueing.
- Policy Engine, Scanner, Notify.

## Operational notes
@@ -27,7 +27,7 @@ src/
├─ StellaOps.Scheduler.ImpactIndex/       # purl→images inverted index (roaring bitmaps)
├─ StellaOps.Scheduler.Models/            # DTOs (Schedule, Run, ImpactSet, Deltas)
├─ StellaOps.Scheduler.Storage.Postgres/  # schedules, runs, cursors, locks
├─ StellaOps.Scheduler.Queue/             # Redis Streams / NATS abstraction
├─ StellaOps.Scheduler.Queue/             # Valkey Streams / NATS abstraction
├─ StellaOps.Scheduler.Tests.*            # unit/integration/e2e
```

@@ -36,7 +36,7 @@ src/
* **Scheduler.WebService** (stateless)
* **Scheduler.Worker** (scale‑out; planners + executors)

**Dependencies**: Authority (OpTok + DPoP/mTLS), Scanner.WebService, Concelier, Excititor, PostgreSQL, Redis/NATS, (optional) Notify.
**Dependencies**: Authority (OpTok + DPoP/mTLS), Scanner.WebService, Concelier, Excititor, PostgreSQL, Valkey/NATS, (optional) Notify.

---
@@ -111,7 +111,7 @@ Goal: translate **change keys** → **image sets** in **milliseconds**.
* `Contains[purl] → bitmap(imageIds)`
* `UsedBy[purl] → bitmap(imageIds)` (subset of Contains)
* Optionally keep **Owner maps**: `{imageId → {tenantId, namespaces[], repos[]}}` for selection filters.
* Persist in RocksDB/LMDB or Redis‑modules; cache hot shards in memory; snapshot to PostgreSQL for cold start.
* Persist in RocksDB/LMDB or Valkey‑modules; cache hot shards in memory; snapshot to PostgreSQL for cold start.

**Update paths**:
@@ -296,12 +296,12 @@ scheduler:
    issuer: "https://authority.internal"
    require: "dpop"            # or "mtls"
  queue:
    kind: "redis"              # or "nats"
    url: "redis://redis:6379/4"
    kind: "valkey"             # or "nats" (valkey uses redis:// protocol)
    url: "redis://valkey:6379/4"
  postgres:
    connectionString: "Host=postgres;Port=5432;Database=scheduler;Username=stellaops;Password=stellaops"
  impactIndex:
    storage: "rocksdb"         # "rocksdb" | "redis" | "memory"
    storage: "rocksdb"         # "rocksdb" | "valkey" | "memory"
    warmOnStart: true
    usageOnlyDefault: true
  limits:
@@ -41,7 +41,7 @@
* **Fulcio** (Sigstore) *or* **KMS/HSM**: to obtain certs or perform signatures.
* **OCI Registry (Referrers API)**: to verify **scanner** image release signature.
* **Attestor**: downstream service that writes DSSE bundles to **Rekor v2**.
* **Config/state stores**: Redis (caches, rate buckets), PostgreSQL (audit log).
* **Config/state stores**: Valkey (caches, rate buckets), PostgreSQL (audit log).

---
@@ -191,7 +191,7 @@ sequenceDiagram
**DPoP nonce dance (when enabled for high‑value ops):**

* If DPoP proof lacks a valid nonce, Signer replies `401` with `WWW-Authenticate: DPoP error="use_dpop_nonce", dpop_nonce="<nonce>"`.
* Client retries with new proof including the nonce; Signer validates nonce and `jti` uniqueness (Redis TTL cache).
* Client retries with new proof including the nonce; Signer validates nonce and `jti` uniqueness (Valkey TTL cache).

---
|
||||
@@ -210,7 +210,7 @@ sequenceDiagram
|
||||
* **Enforcements**:
|
||||
|
||||
* Reject if **revoked**, **expired**, **plan mismatch** or **release outside window** (`stellaops_version` in predicate exceeds `max_version` or release date beyond `valid_release_year`).
|
||||
* Apply plan **throttles** (QPS/concurrency/artifact bytes) via token‑bucket in Redis keyed by `license_id`.
|
||||
* Apply plan **throttles** (QPS/concurrency/artifact bytes) via token‑bucket in Valkey keyed by `license_id`.
|
||||
|
||||
---
|
||||
|
||||
@@ -277,7 +277,7 @@ Per `license_id` (from PoE):

## 10) Storage & caches

* **Redis**:
* **Valkey**:

  * DPoP nonce & `jti` replay cache (TTL ≤ 10 min).
  * PoE introspection cache (short TTL, e.g., 60–120 s).
@@ -399,7 +399,7 @@ signer:
## 16) Deployment & HA

* Run ≥ 2 replicas; front with L7 LB; **sticky** not required.
* Redis for replay/quota caches (HA).
* Valkey for replay/quota caches (HA).
* Audit sink (PostgreSQL) in primary region; asynchronous write with local fallback buffer.
* Fulcio/KMS clients configured with retries/backoff; circuit breakers.
@@ -12,7 +12,7 @@
|-----------|-------------|-------|
| Runtime | .NET 10.0+ | LTS recommended |
| Database | PostgreSQL 15.0+ | For projections and issuer directory |
| Cache | Redis 7.0+ (optional) | For caching consensus results |
| Cache | Valkey 8.0+ (optional) | For caching consensus results |
| Memory | 512MB minimum | 2GB recommended for production |
| CPU | 2 cores minimum | 4 cores for high throughput |