docs consolidation
@@ -53,7 +53,7 @@ open a PR and append it alphabetically.*
 | **Hyperfine** | CLI micro‑benchmark tool used in Performance Workbook. | Outputs CSV |
 | **JWT** | *JSON Web Token* – bearer auth token issued by OpenIddict. | Scope `scanner`, `admin`, `ui` |
 | **K3s / RKE2** | Lightweight Kubernetes distributions (Rancher). | Supported in K8s guide |
-| **Kubernetes NetworkPolicy** | K8s resource controlling pod traffic. | Redis/PostgreSQL isolation |
+| **Kubernetes NetworkPolicy** | K8s resource controlling pod traffic. | Valkey/PostgreSQL isolation |

 ---

@@ -79,7 +79,7 @@ open a PR and append it alphabetically.*
 | **PDF SAR** | *Security Assessment Report* PDF produced by Pro edition. | Cosign‑signed |
 | **Plug‑in** | Hot‑loadable DLL implementing a Stella contract (`IScannerRunner`, `ITlsProvider`, etc.). | Signed with Cosign |
 | **Problem Details** | RFC 7807 JSON error format returned by API. | See API ref §0 |
-| **Redis** | In‑memory datastore used for queue + cache. | Port 6379 |
+| **Valkey** | In‑memory datastore (Redis-compatible) used for queue + cache. | Port 6379 |
 | **Rekor** | Sigstore transparency log; future work for signature anchoring. | Road‑map P4 |
 | **RPS** | *Requests Per Second*. | Backend perf budget 40 rps |
 | **SBOM** | *Software Bill of Materials* – inventory of packages in an image. | Trivy JSON v2 |

@@ -40,7 +40,7 @@ the CLI/UI introduce a delay, detailed below.*
 | Step | Operation | Typical latency |
 | ---- | ------------------------------------------------------------------------------ | ------------------------------------ |
 | 1 | `key = sha256(ip)` *or* `sha256(tid)` | < 0.1 ms |
-| 2 | `count = INCR quota:<key>` in Redis (24 h TTL) | 0.2 ms (Lua) |
+| 2 | `count = INCR quota:<key>` in Valkey (24 h TTL) | 0.2 ms (Lua) |
 | 3 | If `count > limit` → `WAIT delay_ms` | first 30 × 5 000 ms → then 60 000 ms |
 | 4 | Return HTTP 429 **only if** `delay > 60 s` (should never fire under free tier) | — |
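As an illustration, the four quota steps can be sketched as a minimal in-process counter. This is a hypothetical stand-in for the Valkey-backed Lua script (the 24 h TTL and the HTTP layer are not modeled, and the `QuotaCounter` name is invented for the example):

```python
import hashlib


class QuotaCounter:
    """Toy stand-in for the Valkey-backed daily quota counter."""

    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.counters: dict[str, int] = {}  # key -> count (24 h TTL in Valkey)

    def check(self, ip: str) -> int:
        """Return the delay (ms) to apply before serving this request."""
        # Step 1: key = sha256(ip)
        key = hashlib.sha256(ip.encode("utf-8")).hexdigest()
        # Step 2: count = INCR quota:<key>
        self.counters[key] = self.counters.get(key, 0) + 1
        count = self.counters[key]
        if count <= self.limit:
            return 0
        # Step 3: first 30 over-limit requests wait 5 000 ms, then 60 000 ms
        # (step 4, the HTTP 429 path, would trigger only if delay > 60 s)
        overage = count - self.limit
        return 5_000 if overage <= 30 else 60_000


q = QuotaCounter(limit=2)
print([q.check("10.0.0.1") for _ in range(4)])  # [0, 0, 5000, 5000]
```

In production the `INCR` and TTL run atomically inside Valkey, which is why step 2 stays at ~0.2 ms even under contention.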
@@ -91,7 +91,7 @@ docker run --rm \
 <details>
 <summary>Does the quota differ offline?</summary>

-> No. Counters are evaluated locally in Redis; the same limits apply even
+> No. Counters are evaluated locally in Valkey; the same limits apply even
 > without Internet access.

 </details>
@@ -99,7 +99,7 @@ docker run --rm \
 <details>
 <summary>Can I reset counters manually?</summary>

-> Yes – delete the `quota:*` keys in Redis, but we recommend letting them
+> Yes – delete the `quota:*` keys in Valkey, but we recommend letting them
 > expire at midnight to keep statistics meaningful.

 </details>
@@ -234,7 +234,7 @@ All services follow this configuration priority (highest to lowest):
       "Dsn": "localhost:6379"
     },
     "Cache": {
-      "Redis": {
+      "Valkey": {
         "ConnectionString": "localhost:6379"
       }
     }
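The stated priority chain can be illustrated with a tiny resolver. The keys and the exact order (CLI arguments over environment variables over appsettings) are assumptions for the example; the real services bind these sections through .NET configuration:

```python
def resolve(key, *sources):
    """Return the first value found, scanning sources from highest to lowest priority."""
    for source in sources:
        if key in source:
            return source[key]
    return None


cli_args = {}
env_vars = {"Cache:Valkey:ConnectionString": "valkey:6379"}
appsettings = {"Cache:Valkey:ConnectionString": "localhost:6379", "Queue:Dsn": "localhost:6379"}

print(resolve("Cache:Valkey:ConnectionString", cli_args, env_vars, appsettings))  # valkey:6379
print(resolve("Queue:Dsn", cli_args, env_vars, appsettings))  # localhost:6379
```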
15  docs/_archive/README.md  Normal file
@@ -0,0 +1,15 @@
# Archived Documentation

This directory contains documentation that has been superseded, deprecated, or consolidated.

## Archived Content

| Directory | Reason | Canonical Location |
|-----------|--------|-------------------|
| `orchestrator-legacy/` | Parallel directory consolidated | `docs/modules/orchestrator/` |

## Policy

- Archived content is retained for historical reference
- Do not update archived files - update canonical locations instead
- Content here may contain outdated technology references (MongoDB, Redis, GridFS)
@@ -24,7 +24,7 @@ Last updated: 2025-11-25
 ## Storage & queues
 - PostgreSQL stores DAG specs, versions, and run history (per-tenant tables or tenant key prefix).
-- Queues: Redis/PostgreSQL-backed FIFO per tenant; message includes `traceparent`, `runToken`, `dagVersion`, `inputsHash`.
+- Queues: Valkey/PostgreSQL-backed FIFO per tenant; message includes `traceparent`, `runToken`, `dagVersion`, `inputsHash`.
 - Artifacts (logs, outputs) referenced by content hash; stored in object storage or PostgreSQL large objects; hashes recorded in run record.
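A minimal sketch of what idempotent FIFO enqueue looks like for such queue messages. This is illustrative only; the field names follow the message description above, and the `TenantQueue` class is invented for the example:

```python
from collections import deque


class TenantQueue:
    """FIFO queue that drops duplicate messages by idempotency key."""

    def __init__(self) -> None:
        self.items: deque = deque()
        self.seen: set = set()

    def enqueue(self, msg: dict) -> bool:
        # Idempotency key: a delivery retry with the same identity is a no-op,
        # so workers never see the same run message twice.
        key = f'{msg["runToken"]}:{msg["dagVersion"]}:{msg["inputsHash"]}'
        if key in self.seen:
            return False
        self.seen.add(key)
        self.items.append(msg)
        return True


q = TenantQueue()
m = {"runToken": "r1", "dagVersion": "v3", "inputsHash": "sha256:aa"}
print(q.enqueue(m), q.enqueue(dict(m)))  # True False
```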

 ## Security & AOC alignment
@@ -12,7 +12,7 @@ Last updated: 2025-11-25
 ## Runtime shape
 - **Services**: Orchestrator WebService (API/UI), Worker (executors), Scheduler (timer-based triggers).
 - **Queues**: per-tenant work queues; FIFO with deterministic ordering and idempotency keys.
-- **State**: Mongo for run metadata and DAG definitions; optional Redis for locks/throttles; all scoped by tenant.
+- **State**: PostgreSQL for run metadata and DAG definitions; optional Valkey for locks/throttles; all scoped by tenant.
 - **APIs**: REST + WebSocket for run status/stream; admin endpoints require `orchestrator:admin` plus tenant header.

 ## AOC alignment
@@ -18,8 +18,8 @@ Immutable record of every DAG run and step execution for audit, replay, and offl
 - `events[]` (optional): ordered list of significant events with `timestamp`, `type`, `message`, `actor`

 ## Storage
-- Mongo collection partitioned by tenant; indexes on `(tenant, dagId, runId)`, `(tenant, status, startedUtc)`.
-- Artifacts/logs referenced by content hash; stored separately (object storage/GridFS).
+- PostgreSQL table partitioned by tenant; indexes on `(tenant, dagId, runId)`, `(tenant, status, startedUtc)`.
+- Artifacts/logs referenced by content hash; stored separately (object storage/RustFS).
 - Append-only updates; run status transitions are monotonic.

 ## Exports
@@ -14,7 +14,8 @@ Proof chains in StellaOps consist of cryptographically-linked attestations:
 1. **Evidence statements** - Raw vulnerability findings
 2. **Reasoning statements** - Policy evaluation traces
 3. **VEX verdict statements** - Final vulnerability status determinations
-4. **Proof spine** - Merkle tree aggregating all components
+4. **Graph root statements** - Merkle root commitments to graph analysis results
+5. **Proof spine** - Merkle tree aggregating all components

 In online mode, proof chains include Rekor inclusion proofs for transparency. In air-gap mode, verification proceeds without Rekor but maintains cryptographic integrity.
@@ -244,6 +245,174 @@ stellaops proof verify-batch \

 ---

## Graph Root Attestation Verification (Offline)

Graph root attestations provide tamper-evident commitment to graph analysis results. In air-gap mode, these attestations can be verified without network access.

### Verify Graph Root Attestation

```bash
# Verify a single graph root attestation
stellaops graph-root verify --offline \
  --envelope graph-root.dsse \
  --anchor-file trust-anchors.json

# Expected output:
# Graph Root Verification
# ═══════════════════════
# ✓ DSSE signature verified
# ✓ Predicate type: graph-root.stella/v1
# ✓ Graph type: ReachabilityGraph
# ✓ Canon version: stella:canon:v1
# ⊘ Rekor verification skipped (offline mode)
#
# Overall: VERIFIED (offline)
```

### Verify with Node/Edge Reconstruction

When you have the original graph data, you can recompute and verify the Merkle root:

```bash
# Verify with reconstruction
stellaops graph-root verify --offline \
  --envelope graph-root.dsse \
  --nodes nodes.json \
  --edges edges.json \
  --anchor-file trust-anchors.json

# Expected output:
# Graph Root Verification (with reconstruction)
# ═════════════════════════════════════════════
# ✓ DSSE signature verified
# ✓ Nodes canonicalized: 1234 entries
# ✓ Edges canonicalized: 5678 entries
# ✓ Merkle root recomputed: sha256:abc123...
# ✓ Merkle root matches claimed: sha256:abc123...
#
# Overall: VERIFIED (reconstructed)
```

### Graph Data File Formats

**nodes.json** - Array of node identifiers:

```json
{
  "canonVersion": "stella:canon:v1",
  "nodes": [
    "pkg:npm/lodash@4.17.21",
    "pkg:npm/express@4.18.2",
    "pkg:npm/body-parser@1.20.0"
  ]
}
```

**edges.json** - Array of edge identifiers:

```json
{
  "canonVersion": "stella:canon:v1",
  "edges": [
    "pkg:npm/express@4.18.2->pkg:npm/body-parser@1.20.0",
    "pkg:npm/express@4.18.2->pkg:npm/lodash@4.17.21"
  ]
}
```

### Verification Steps (Detailed)

The offline graph root verification algorithm:

1. **Parse DSSE envelope** - Extract payload and signatures
2. **Decode in-toto statement** - Parse subject and predicate
3. **Verify signature** - Check DSSE signature against trust anchor allowed keys
4. **Validate predicate type** - Confirm `graph-root.stella/v1`
5. **Extract Merkle root** - Get claimed root from predicate
6. **If reconstruction requested**:
   - Load nodes.json and edges.json
   - Verify canon version matches predicate
   - Sort nodes lexicographically
   - Sort edges lexicographically
   - Concatenate sorted lists
   - Build SHA-256 Merkle tree
   - Compare computed root to claimed root
7. **Emit verification result**
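For orientation, the reconstruction steps above can be sketched in a few lines of Python. The exact leaf encoding and odd-node policy of the real `stella:canon:v1` tree are not specified here, so treat both as assumptions:

```python
import hashlib


def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list) -> bytes:
    """SHA-256 Merkle tree; an odd node is promoted unchanged (assumed policy)."""
    level = [_h(leaf) for leaf in leaves] or [_h(b"")]
    while len(level) > 1:
        level = [
            _h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i]
            for i in range(0, len(level), 2)
        ]
    return level[0]


def graph_root(nodes: list, edges: list) -> str:
    # Steps 6c-6g: sort nodes and edges lexicographically, concatenate the
    # sorted lists, and build the SHA-256 tree over UTF-8 encoded entries.
    entries = sorted(nodes) + sorted(edges)
    return "sha256:" + merkle_root([e.encode("utf-8") for e in entries]).hex()


nodes = ["pkg:npm/lodash@4.17.21", "pkg:npm/express@4.18.2"]
edges = ["pkg:npm/express@4.18.2->pkg:npm/lodash@4.17.21"]
# Determinism: input order must not affect the recomputed root; step 6g then
# compares this value against the root claimed in the predicate.
assert graph_root(nodes, edges) == graph_root(nodes[::-1], edges)
```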
### Programmatic Verification (.NET)

```csharp
using StellaOps.Attestor.GraphRoot;

// Load trust anchors
var anchors = await TrustAnchors.LoadFromFileAsync("trust-anchors.json");

// Create verifier
var verifier = new GraphRootAttestor(signer, canonicalJsonSerializer);

// Load envelope
var envelope = await DsseEnvelope.LoadAsync("graph-root.dsse");

// Verify without reconstruction
var result = await verifier.VerifyAsync(
    envelope,
    trustAnchors: anchors,
    verifyRekor: false);

// Verify with reconstruction
var nodeIds = new[] { "pkg:npm/lodash@4.17.21", "pkg:npm/express@4.18.2" };
var edgeIds = new[] { "pkg:npm/express@4.18.2->pkg:npm/lodash@4.17.21" };

var fullResult = await verifier.VerifyAsync(
    envelope,
    nodeIds: nodeIds,
    edgeIds: edgeIds,
    trustAnchors: anchors,
    verifyRekor: false);

Console.WriteLine($"Verified: {fullResult.IsValid}");
Console.WriteLine($"Merkle root: {fullResult.MerkleRoot}");
```

### Integration with Proof Spine

Graph roots can be included in proof spines for comprehensive verification:

```bash
# Export proof bundle with graph roots
stellaops proof export \
  --entry sha256:abc123:pkg:npm/lodash@4.17.21 \
  --include-graph-roots \
  --output proof-bundle.zip

# Bundle now includes:
# proof-bundle.zip
# ├── proof-spine.json
# ├── evidence/
# ├── reasoning.json
# ├── vex-verdict.json
# ├── graph-roots/          # Graph root attestations
# │   ├── reachability.dsse
# │   └── dependency.dsse
# ├── envelopes/
# └── VERIFY.md

# Verify with graph roots
stellaops proof verify --offline \
  --bundle-file proof-bundle.zip \
  --verify-graph-roots \
  --anchor-file trust-anchors.json
```

### Determinism Requirements

For offline verification to succeed:

1. **Same canonicalization** - Use `stella:canon:v1` consistently
2. **Same ordering** - Lexicographic sort for nodes and edges
3. **Same encoding** - UTF-8 for all string operations
4. **Same hash algorithm** - SHA-256 for Merkle tree

---

 ## Key Rotation in Air-Gap Mode

 When keys are rotated, trust anchor updates must be distributed:
@@ -1,156 +0,0 @@
# VEX Raw Migration Rollback Guide

> **DEPRECATED:** This document describes MongoDB migration rollback procedures which are no longer used. Excititor now uses PostgreSQL for persistence (Sprint 4400). See `docs/db/SPECIFICATION.md` for current schema and migration procedures.

This document describes how to roll back migrations applied to the `vex_raw` collection.

## Migration: 20251127-vex-raw-idempotency-indexes

### Description

Adds unique idempotency indexes to enforce content-addressed storage:

- `idx_provider_sourceUri_digest_unique`: Prevents duplicate documents from same provider/source
- `idx_digest_providerId`: Optimizes evidence queries by digest
- `idx_retrievedAt`: Supports time-based queries and future TTL operations

### Rollback Steps

#### Option 1: MongoDB Shell

```javascript
// Connect to your MongoDB instance
mongosh "mongodb://localhost:27017/excititor"

// Drop the idempotency indexes
db.vex_raw.dropIndex("idx_provider_sourceUri_digest_unique")
db.vex_raw.dropIndex("idx_digest_providerId")
db.vex_raw.dropIndex("idx_retrievedAt")

// Verify indexes are dropped
db.vex_raw.getIndexes()
```

#### Option 2: Programmatic Rollback (C#)

```csharp
using StellaOps.Excititor.Storage.Mongo.Migrations;

// Get the database instance
var database = client.GetDatabase("excititor");

// Execute rollback
await database.RollbackIdempotencyIndexesAsync(cancellationToken);

// Verify rollback
var verified = await database.VerifyIdempotencyIndexesExistAsync(cancellationToken);
Console.WriteLine($"Indexes exist after rollback: {verified}"); // Should be false
```

#### Option 3: MongoDB Compass

1. Connect to your MongoDB instance
2. Navigate to the `excititor` database
3. Select the `vex_raw` collection
4. Go to the "Indexes" tab
5. Click "Drop Index" for each of:
   - `idx_provider_sourceUri_digest_unique`
   - `idx_digest_providerId`
   - `idx_retrievedAt`

### Impact of Rollback

**Before rollback (indexes present):**
- Documents are prevented from being duplicated
- Evidence queries are optimized
- Unique constraint enforced

**After rollback (indexes dropped):**
- Duplicate documents may be inserted
- Evidence queries may be slower
- No unique constraint enforcement

### Re-applying the Migration

To re-apply the migration after rollback:

```javascript
// MongoDB shell
db.vex_raw.createIndex(
  { "providerId": 1, "sourceUri": 1, "digest": 1 },
  { unique: true, name: "idx_provider_sourceUri_digest_unique", background: true }
)

db.vex_raw.createIndex(
  { "digest": 1, "providerId": 1 },
  { name: "idx_digest_providerId", background: true }
)

db.vex_raw.createIndex(
  { "retrievedAt": 1 },
  { name: "idx_retrievedAt", background: true }
)
```

Or run the migration runner:

```bash
stellaops excititor migrate --run 20251127-vex-raw-idempotency-indexes
```

## Migration: 20251125-vex-raw-json-schema

### Description

Adds a JSON Schema validator to the `vex_raw` collection with `validationAction: warn`.

### Rollback Steps

```javascript
// MongoDB shell - remove the validator
// (validationAction accepts only "warn"/"error", so validation is disabled
// by clearing the validator and setting validationLevel to "off")
db.runCommand({
  collMod: "vex_raw",
  validator: {},
  validationLevel: "off"
})

// Verify validator is removed
db.getCollectionInfos({ name: "vex_raw" })[0].options
```

### Impact of Rollback

- Documents will no longer be validated against the schema
- Invalid documents may be inserted
- Existing documents are not affected

## General Rollback Guidelines

1. **Always backup first**: Create a backup before any rollback operation
2. **Test in staging**: Verify rollback procedure in a non-production environment
3. **Monitor performance**: Watch for query performance changes after rollback
4. **Document changes**: Log all rollback operations for audit purposes

## Troubleshooting

### Index Drop Fails

If you see "IndexNotFound" errors, the index may have already been dropped or was never created:

```javascript
// Check existing indexes
db.vex_raw.getIndexes()
```

### Validator Removal Fails

If the validator command fails, verify you have the correct permissions:

```javascript
// Check current user roles
db.runCommand({ usersInfo: 1 })
```

## Related Documentation

- [VEX Raw Schema Validation](vex-raw-schema-validation.md)
- [MongoDB Index Management](https://www.mongodb.com/docs/manual/indexes/)
- [Excititor Architecture](../modules/excititor/architecture.md)
@@ -1,199 +0,0 @@
# VEX Raw Schema Validation - Offline Kit

> **DEPRECATED:** This document describes MongoDB validation procedures which are no longer used. Excititor now uses PostgreSQL for persistence (Sprint 4400). Schema validation is performed via PostgreSQL constraints. See `docs/db/SPECIFICATION.md` for current schema.

This document describes how operators can validate the integrity of VEX raw evidence stored in the database, ensuring that Excititor stores only immutable, content-addressed documents.

## Overview

The `vex_raw` collection stores raw VEX documents with content-addressed storage (documents are keyed by their cryptographic hash). This ensures immutability - documents cannot be modified after insertion without changing their key.

## Schema Definition

The MongoDB JSON Schema enforces the following structure:

```json
{
  "$jsonSchema": {
    "bsonType": "object",
    "title": "VEX Raw Document Schema",
    "description": "Schema for immutable VEX evidence storage",
    "required": ["_id", "providerId", "format", "sourceUri", "retrievedAt", "digest"],
    "properties": {
      "_id": {
        "bsonType": "string",
        "description": "Content digest serving as immutable key"
      },
      "providerId": {
        "bsonType": "string",
        "minLength": 1,
        "description": "VEX provider identifier"
      },
      "format": {
        "bsonType": "string",
        "enum": ["csaf", "cyclonedx", "openvex"],
        "description": "VEX document format"
      },
      "sourceUri": {
        "bsonType": "string",
        "minLength": 1,
        "description": "Original source URI"
      },
      "retrievedAt": {
        "bsonType": "date",
        "description": "Timestamp when document was fetched"
      },
      "digest": {
        "bsonType": "string",
        "minLength": 32,
        "description": "Content hash (SHA-256 hex)"
      },
      "content": {
        "bsonType": ["binData", "string"],
        "description": "Raw document content"
      },
      "gridFsObjectId": {
        "bsonType": ["objectId", "null", "string"],
        "description": "GridFS reference for large documents"
      },
      "metadata": {
        "bsonType": "object",
        "description": "Provider-specific metadata"
      }
    }
  }
}
```

## Offline Validation Steps

### 1. Export the Schema

The schema can be exported from the application using the validator tooling:

```bash
# Using the Excititor CLI
stellaops excititor schema export --collection vex_raw --output vex-raw-schema.json

# Or via MongoDB shell
mongosh --eval "db.getCollectionInfos({name: 'vex_raw'})[0].options.validator" > vex-raw-schema.json
```

### 2. Validate Documents in MongoDB Shell

```javascript
// Connect to your MongoDB instance
mongosh "mongodb://localhost:27017/excititor"

// Validate the collection as a whole
db.runCommand({
  validate: "vex_raw",
  full: true
})

// Or list the documents that violate the schema by querying with the
// collection's own validator expression
var validator = db.getCollectionInfos({ name: "vex_raw" })[0].options.validator;
db.vex_raw.find({ $nor: [validator] }).forEach(function (doc) {
  print("Invalid: " + doc._id);
});
```

### 3. Programmatic Validation (C#)

```csharp
using StellaOps.Excititor.Storage.Mongo.Validation;

// Validate a single document
var result = VexRawSchemaValidator.Validate(document);
if (!result.IsValid)
{
    foreach (var violation in result.Violations)
    {
        Console.WriteLine($"{violation.Field}: {violation.Message}");
    }
}

// Batch validation
var batchResult = VexRawSchemaValidator.ValidateBatch(documents);
Console.WriteLine($"Valid: {batchResult.ValidCount}, Invalid: {batchResult.InvalidCount}");
```

### 4. Export Schema for External Tools

```csharp
// Get schema as JSON for external validation tools
var schemaJson = VexRawSchemaValidator.GetJsonSchemaAsJson();
File.WriteAllText("vex-raw-schema.json", schemaJson);
```

## Verification Checklist

Use this checklist to verify schema compliance:

- [ ] All documents have required fields (_id, providerId, format, sourceUri, retrievedAt, digest)
- [ ] The `_id` matches the `digest` value (content-addressed)
- [ ] Format is one of: csaf, cyclonedx, openvex
- [ ] Digest is at least 32 characters (a full SHA-256 hex digest is 64)
- [ ] No documents have been modified after insertion (verify via digest recomputation)

## Immutability Verification

To verify documents haven't been tampered with:

```javascript
// mongosh - verify content matches digest (mongosh can load Node's crypto module)
const crypto = require("crypto");
db.vex_raw.find().forEach(function (doc) {
  var content = doc.content;
  if (content) {
    // Compute SHA-256 of content (binData content may need conversion to Buffer first)
    var computedDigest = crypto.createHash("sha256").update(content).digest("hex");
    if (computedDigest !== doc.digest) {
      print("TAMPERED: " + doc._id);
    }
  }
});
```

## Auditing

For compliance auditing, export a validation report:

```bash
# Generate validation report
stellaops excititor validate --collection vex_raw --report validation-report.json

# The report includes:
# - Total document count
# - Valid/invalid counts
# - List of violations by document
# - Schema version used for validation
```

## Troubleshooting

### Common Violations

1. **Missing required field**: Ensure all required fields are present
2. **Invalid format**: Format must be exactly "csaf", "cyclonedx", or "openvex"
3. **Digest too short**: Digest must be at least 32 hex characters
4. **Wrong type**: Check field types match schema requirements

### Recovery

If invalid documents are found:

1. Do NOT modify documents in place (violates immutability)
2. Export the invalid documents for analysis
3. Re-ingest from original sources with correct data
4. Document the incident in audit logs

## Related Documentation

- [Excititor Architecture](../modules/excititor/architecture.md)
- [VEX Storage Design](../modules/excititor/storage.md)
- [Offline Operation Guide](../24_OFFLINE_KIT.md)
408  docs/api/evidence-api-reference.md  Normal file
@@ -0,0 +1,408 @@
# Evidence API Reference

This document provides the complete API reference for the StellaOps unified evidence model.

## Interfaces

### IEvidence

Base interface for all evidence records.

```csharp
namespace StellaOps.Evidence.Core;

public interface IEvidence
{
    /// <summary>
    /// Content-addressed evidence identifier (e.g., "sha256:abc123...").
    /// </summary>
    string EvidenceId { get; }

    /// <summary>
    /// The type of evidence this record represents.
    /// </summary>
    EvidenceType Type { get; }

    /// <summary>
    /// The subject (node ID) this evidence is about.
    /// </summary>
    string SubjectNodeId { get; }

    /// <summary>
    /// When the evidence was created (UTC).
    /// </summary>
    DateTimeOffset CreatedAt { get; }

    /// <summary>
    /// Cryptographic signatures attesting to this evidence.
    /// </summary>
    IReadOnlyList<EvidenceSignature> Signatures { get; }

    /// <summary>
    /// Origin and provenance information.
    /// </summary>
    EvidenceProvenance? Provenance { get; }

    /// <summary>
    /// Type-specific properties as key-value pairs.
    /// </summary>
    IReadOnlyDictionary<string, string> Properties { get; }
}
```
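The content-addressed `EvidenceId` scheme can be mimicked with a short sketch. The canonical JSON form below (sorted keys, compact separators) is an assumption standing in for StellaOps' own canonical serializer:

```python
import hashlib
import json


def evidence_id(record: dict) -> str:
    """Content-addressed ID: SHA-256 over an assumed canonical JSON encoding."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()


rec = {
    "type": "Reachability",
    "subjectNodeId": "pkg:npm/lodash@4.17.21",
    "createdAt": "2025-11-25T00:00:00Z",
}
# Key order does not change the ID; any change to a field value does.
assert evidence_id(rec) == evidence_id(dict(sorted(rec.items(), reverse=True)))
assert evidence_id(rec) != evidence_id({**rec, "type": "StaticCall"})
```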
|
|
||||||
|
### IEvidenceStore
|
||||||
|
|
||||||
|
Storage interface for evidence persistence.
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
namespace StellaOps.Evidence.Core;
|
||||||
|
|
||||||
|
public interface IEvidenceStore
|
||||||
|
{
|
||||||
|
/// <summary>
|
||||||
|
/// Retrieves evidence by its content-addressed ID.
|
||||||
|
/// </summary>
|
||||||
|
Task<IEvidence?> GetAsync(string evidenceId, CancellationToken ct = default);
|
||||||
|
|
||||||
|
/// <summary>
|
||||||
|
/// Retrieves all evidence records for a given subject.
|
||||||
|
/// </summary>
|
||||||
|
Task<IReadOnlyList<IEvidence>> GetBySubjectAsync(
|
||||||
|
string subjectNodeId,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
|
||||||
|
/// <summary>
|
||||||
|
/// Retrieves all evidence records of a specific type.
|
||||||
|
/// </summary>
|
||||||
|
Task<IReadOnlyList<IEvidence>> GetByTypeAsync(
|
||||||
|
EvidenceType type,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
|
||||||
|
/// <summary>
|
||||||
|
/// Stores an evidence record.
|
||||||
|
/// </summary>
|
||||||
|
Task StoreAsync(IEvidence evidence, CancellationToken ct = default);
|
||||||
|
|
||||||
|
/// <summary>
|
||||||
|
/// Checks if evidence with the given ID exists.
|
||||||
|
/// </summary>
|
||||||
|
Task<bool> ExistsAsync(string evidenceId, CancellationToken ct = default);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### IEvidenceAdapter<TInput>
|
||||||
|
|
||||||
|
Adapter interface for converting module-specific types to evidence.
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
namespace StellaOps.Evidence.Core.Adapters;
|
||||||
|
|
||||||
|
public interface IEvidenceAdapter<TInput>
|
||||||
|
{
|
||||||
|
/// <summary>
|
||||||
|
/// Converts a module-specific input to one or more evidence records.
|
||||||
|
/// </summary>
|
||||||
|
IReadOnlyList<IEvidence> ToEvidence(TInput input);
|
||||||
|
}
|
||||||
|
```
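
As an illustration, a minimal adapter might look like the sketch below. The `LogEntryInput` type and its fields are invented for this example; only `IEvidenceAdapter<TInput>`, `EvidenceRecord`, `IEvidence`, and `EvidenceType` come from the contracts above.

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using StellaOps.Evidence.Core;
using StellaOps.Evidence.Core.Adapters;

// Hypothetical module-specific input, invented for this example.
public sealed record LogEntryInput(string NodeId, string Message, DateTimeOffset At);

public sealed class LogEntryAdapter : IEvidenceAdapter<LogEntryInput>
{
    public IReadOnlyList<IEvidence> ToEvidence(LogEntryInput input) =>
    [
        new EvidenceRecord
        {
            // Content-addressed ID derived from the payload bytes.
            EvidenceId = "sha256:" + Convert.ToHexString(
                SHA256.HashData(Encoding.UTF8.GetBytes(input.Message))).ToLowerInvariant(),
            Type = EvidenceType.ScanResult,
            SubjectNodeId = input.NodeId,
            CreatedAt = input.At,
            Properties = new Dictionary<string, string> { ["message"] = input.Message },
        },
    ];
}
```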

---

## Records

### EvidenceRecord

Standard implementation of `IEvidence`.

```csharp
namespace StellaOps.Evidence.Core;

public sealed record EvidenceRecord : IEvidence
{
    public required string EvidenceId { get; init; }
    public required EvidenceType Type { get; init; }
    public required string SubjectNodeId { get; init; }
    public required DateTimeOffset CreatedAt { get; init; }
    public IReadOnlyList<EvidenceSignature> Signatures { get; init; } = [];
    public EvidenceProvenance? Provenance { get; init; }
    public IReadOnlyDictionary<string, string> Properties { get; init; } =
        new Dictionary<string, string>();
}
```
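
For instance, a caller might construct a record like this. The digest literal is a placeholder (a real caller should use the content-addressed hash of the payload), and the `not_affected` status value is an assumption borrowed from common VEX vocabulary:

```csharp
using System;
using System.Collections.Generic;
using StellaOps.Evidence.Core;

var record = new EvidenceRecord
{
    EvidenceId = "sha256:abc123...",   // placeholder; use a real content digest
    Type = EvidenceType.Vex,
    SubjectNodeId = "pkg:npm/lodash@4.17.21",
    CreatedAt = DateTimeOffset.UtcNow,
    Properties = new Dictionary<string, string>
    {
        ["cve"] = "CVE-2021-23337",
        ["status"] = "not_affected",   // assumed VEX status value
    },
};
```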

### EvidenceSignature

Cryptographic signature attached to evidence.

```csharp
namespace StellaOps.Evidence.Core;

public sealed record EvidenceSignature
{
    /// <summary>
    /// Identifier of the signer (key ID, tool name, etc.).
    /// </summary>
    public required string SignerId { get; init; }

    /// <summary>
    /// Signing algorithm (e.g., "Ed25519", "ES256").
    /// </summary>
    public required string Algorithm { get; init; }

    /// <summary>
    /// Base64-encoded signature bytes.
    /// </summary>
    public required string SignatureBase64 { get; init; }

    /// <summary>
    /// When the signature was created (UTC).
    /// </summary>
    public required DateTimeOffset SignedAt { get; init; }

    /// <summary>
    /// Type of entity that produced the signature.
    /// </summary>
    public required SignerType SignerType { get; init; }
}
```

### EvidenceProvenance

Origin and provenance information.

```csharp
namespace StellaOps.Evidence.Core;

public sealed record EvidenceProvenance
{
    /// <summary>
    /// Source that produced this evidence (e.g., "grype", "trivy").
    /// </summary>
    public required string Source { get; init; }

    /// <summary>
    /// Version of the source tool.
    /// </summary>
    public string? SourceVersion { get; init; }

    /// <summary>
    /// URI where original data was obtained.
    /// </summary>
    public string? SourceUri { get; init; }

    /// <summary>
    /// Digest of the original content.
    /// </summary>
    public string? ContentDigest { get; init; }
}
```

---

## Enumerations

### EvidenceType

```csharp
namespace StellaOps.Evidence.Core;

public enum EvidenceType
{
    Unknown = 0,
    Sbom = 1,
    Vulnerability = 2,
    Vex = 3,
    Attestation = 4,
    PolicyDecision = 5,
    ScanResult = 6,
    Provenance = 7,
    Signature = 8,
    ProofSegment = 9,
    Exception = 10,
    Advisory = 11,
    CveMatch = 12,
    ReachabilityResult = 13
}
```

### SignerType

```csharp
namespace StellaOps.Evidence.Core;

public enum SignerType
{
    Unknown = 0,
    Tool = 1,
    Human = 2,
    Authority = 3,
    Vendor = 4,
    Service = 5
}
```

---

## Adapters

### EvidenceStatementAdapter

Converts `EvidenceStatement` from the Attestor module.

**Input**: `EvidenceStatementInput`

```csharp
public sealed record EvidenceStatementInput
{
    public required string StatementId { get; init; }
    public required string SubjectDigest { get; init; }
    public required string StatementType { get; init; }
    public required string PredicateType { get; init; }
    public required DateTimeOffset IssuedAt { get; init; }
    public IReadOnlyList<EvidenceSignatureInput>? Signatures { get; init; }
    public IReadOnlyDictionary<string, string>? Metadata { get; init; }
}
```

**Output**: Single `IEvidence` record with `Type = Attestation`.

---

### ProofSegmentAdapter

Converts `ProofSegment` from the Scanner module.

**Input**: `ProofSegmentInput`

```csharp
public sealed record ProofSegmentInput
{
    public required string SegmentId { get; init; }
    public required string SubjectNodeId { get; init; }
    public required string SegmentType { get; init; }
    public required string Status { get; init; }
    public required DateTimeOffset CreatedAt { get; init; }
    public string? PreviousSegmentId { get; init; }
    public string? PayloadDigest { get; init; }
    public IReadOnlyList<EvidenceSignatureInput>? Signatures { get; init; }
    public IReadOnlyDictionary<string, string>? Properties { get; init; }
}
```

**Output**: Single `IEvidence` record with `Type = ProofSegment`.

---

### VexObservationAdapter

Converts `VexObservation` from the Excititor module.

**Input**: `VexObservationInput`

```csharp
public sealed record VexObservationInput
{
    public required string SubjectDigest { get; init; }
    public VexObservationUpstreamInput? Upstream { get; init; }
    public IReadOnlyList<VexObservationStatementInput>? Statements { get; init; }
    public IReadOnlyDictionary<string, string>? Properties { get; init; }
}

public sealed record VexObservationUpstreamInput
{
    public required string VexDocumentId { get; init; }
    public required string VendorName { get; init; }
    public required DateTimeOffset PublishedAt { get; init; }
    public string? DocumentDigest { get; init; }
}

public sealed record VexObservationStatementInput
{
    public required string VulnerabilityId { get; init; }
    public required string ProductId { get; init; }
    public required string Status { get; init; }
    public required DateTimeOffset Timestamp { get; init; }
    public string? Justification { get; init; }
}
```

**Output**: Multiple `IEvidence` records:
- 1 record with `Type = Provenance` (from upstream)
- N records with `Type = Vex` (one per statement)

---

### ExceptionApplicationAdapter

Converts `ExceptionApplication` from the Policy module.

**Input**: `ExceptionApplicationInput`

```csharp
public sealed record ExceptionApplicationInput
{
    public required string ApplicationId { get; init; }
    public required string TenantId { get; init; }
    public required string ExceptionId { get; init; }
    public required string FindingId { get; init; }
    public required DateTimeOffset AppliedAt { get; init; }
    public DateTimeOffset? ExpiresAt { get; init; }
    public string? Reason { get; init; }
    public string? AppliedBy { get; init; }
    public IReadOnlyDictionary<string, string>? Properties { get; init; }
}
```

**Output**: Single `IEvidence` record with `Type = Exception`.

---

## Implementations

### InMemoryEvidenceStore

Thread-safe in-memory evidence store for testing and caching.

```csharp
var store = new InMemoryEvidenceStore();

// Store evidence
await store.StoreAsync(record);

// Retrieve by ID
var evidence = await store.GetAsync("sha256:abc123...");

// Query by subject
var subjectEvidence = await store.GetBySubjectAsync("pkg:npm/lodash@4.17.21");

// Query by type
var vexRecords = await store.GetByTypeAsync(EvidenceType.Vex);

// Check existence
var exists = await store.ExistsAsync("sha256:abc123...");
```

**Thread Safety**: Uses `ConcurrentDictionary` for all operations.

---

## Common Property Keys

Standard property keys used across evidence types:

| Key | Used By | Description |
|-----|---------|-------------|
| `cve` | Vulnerability, CveMatch | CVE identifier |
| `severity` | Vulnerability | Severity level (CRITICAL, HIGH, etc.) |
| `cvss` | Vulnerability | CVSS score |
| `status` | Vex, ProofSegment | Current status |
| `justification` | Vex, Exception | Reason for status |
| `productId` | Vex | Affected product identifier |
| `exceptionId` | Exception | Parent exception ID |
| `findingId` | Exception | Finding being excepted |
| `tenantId` | Exception | Tenant context |
| `segmentType` | ProofSegment | Type of proof segment |
| `previousSegmentId` | ProofSegment | Chain link to previous segment |
| `payloadDigest` | ProofSegment | Content digest |
| `predicateType` | Attestation | In-toto predicate type URI |
| `statementType` | Attestation | Statement type identifier |

---

**docs/api/reference/canon-json.md** (new file, 342 lines)

# CanonJson API Reference

**Namespace**: `StellaOps.Canonical.Json`
**Assembly**: `StellaOps.Canonical.Json`
**Version**: 1.0.0

---

## Overview

The `CanonJson` class provides RFC 8785-compliant JSON canonicalization and cryptographic hashing utilities for content-addressed identifiers. It ensures deterministic, reproducible JSON serialization across all environments.

---

## CanonVersion Class

Static class containing canonicalization version constants and utilities.

### Constants

| Constant | Type | Value | Description |
|----------|------|-------|-------------|
| `V1` | `string` | `"stella:canon:v1"` | Version 1: RFC 8785 JSON canonicalization |
| `VersionFieldName` | `string` | `"_canonVersion"` | Field name for version marker (underscore ensures first position) |
| `Current` | `string` | `V1` | Current default version for new hashes |

### Methods

#### IsVersioned

```csharp
public static bool IsVersioned(ReadOnlySpan<byte> canonicalJson)
```

Detects if canonical JSON includes a version marker.

**Parameters:**
- `canonicalJson`: UTF-8 encoded canonical JSON bytes

**Returns:** `true` if the JSON starts with `{"_canonVersion":`, `false` otherwise

**Example:**
```csharp
var json = """{"_canonVersion":"stella:canon:v1","foo":"bar"}"""u8;
bool versioned = CanonVersion.IsVersioned(json); // true

var legacy = """{"foo":"bar"}"""u8;
bool legacyVersioned = CanonVersion.IsVersioned(legacy); // false
```

---

#### ExtractVersion

```csharp
public static string? ExtractVersion(ReadOnlySpan<byte> canonicalJson)
```

Extracts the version string from versioned canonical JSON.

**Parameters:**
- `canonicalJson`: UTF-8 encoded canonical JSON bytes

**Returns:** The version string (e.g., `"stella:canon:v1"`) or `null` if not versioned

**Example:**
```csharp
var json = """{"_canonVersion":"stella:canon:v1","foo":"bar"}"""u8;
string? version = CanonVersion.ExtractVersion(json); // "stella:canon:v1"
```

---

## CanonJson Class

Static class providing JSON canonicalization and hashing methods.

### Canonicalization Methods

#### Canonicalize<T>

```csharp
public static byte[] Canonicalize<T>(T obj)
```

Canonicalizes an object to RFC 8785 JSON without version marker (legacy format).

**Parameters:**
- `obj`: The object to canonicalize

**Returns:** UTF-8 encoded canonical JSON bytes

**Example:**
```csharp
var obj = new { z = 3, a = 1 };
byte[] canonical = CanonJson.Canonicalize(obj);
// Result: {"a":1,"z":3}
```

---

#### CanonicalizeVersioned<T>

```csharp
public static byte[] CanonicalizeVersioned<T>(T obj, string version = CanonVersion.Current)
```

Canonicalizes an object with a version marker for content-addressed hashing.

**Parameters:**
- `obj`: The object to canonicalize
- `version`: Canonicalization version (default: `CanonVersion.Current`)

**Returns:** UTF-8 encoded canonical JSON bytes with version marker

**Exceptions:**
- `ArgumentNullException`: When `version` is null
- `ArgumentException`: When `version` is empty

**Example:**
```csharp
var obj = new { z = 3, a = 1 };
byte[] canonical = CanonJson.CanonicalizeVersioned(obj);
// Result: {"_canonVersion":"stella:canon:v1","a":1,"z":3}

// With explicit version
byte[] v2 = CanonJson.CanonicalizeVersioned(obj, "stella:canon:v2");
// Result: {"_canonVersion":"stella:canon:v2","a":1,"z":3}
```

---

### Hashing Methods

#### Hash<T>

```csharp
public static string Hash<T>(T obj)
```

Computes SHA-256 hash of canonical JSON (legacy format, no version marker).

**Parameters:**
- `obj`: The object to hash

**Returns:** Lowercase hex-encoded SHA-256 hash (64 characters)

**Example:**
```csharp
var obj = new { foo = "bar" };
string hash = CanonJson.Hash(obj);
// Result: "7a38bf81f383f69433ad6e900d35b3e2385593f76a7b7ab5d4355b8ba41ee24b"
```

---

#### HashVersioned<T>

```csharp
public static string HashVersioned<T>(T obj, string version = CanonVersion.Current)
```

Computes SHA-256 hash of versioned canonical JSON.

**Parameters:**
- `obj`: The object to hash
- `version`: Canonicalization version (default: `CanonVersion.Current`)

**Returns:** Lowercase hex-encoded SHA-256 hash (64 characters)

**Example:**
```csharp
var obj = new { foo = "bar" };
string hash = CanonJson.HashVersioned(obj);
// Different from legacy hash due to version marker
```

---

#### HashPrefixed<T>

```csharp
public static string HashPrefixed<T>(T obj)
```

Computes SHA-256 hash with `sha256:` prefix (legacy format).

**Parameters:**
- `obj`: The object to hash

**Returns:** Hash in format `sha256:<64-hex-chars>`

**Example:**
```csharp
var obj = new { foo = "bar" };
string hash = CanonJson.HashPrefixed(obj);
// Result: "sha256:7a38bf81f383f69433ad6e900d35b3e2385593f76a7b7ab5d4355b8ba41ee24b"
```

---

#### HashVersionedPrefixed<T>

```csharp
public static string HashVersionedPrefixed<T>(T obj, string version = CanonVersion.Current)
```

Computes SHA-256 hash with `sha256:` prefix (versioned format).

**Parameters:**
- `obj`: The object to hash
- `version`: Canonicalization version (default: `CanonVersion.Current`)

**Returns:** Hash in format `sha256:<64-hex-chars>`

**Example:**
```csharp
var obj = new { foo = "bar" };
string hash = CanonJson.HashVersionedPrefixed(obj);
// Result: "sha256:..." (different from HashPrefixed due to version marker)
```

---

## IJsonCanonicalizer Interface

Interface for JSON canonicalization implementations.

### Methods

#### Canonicalize

```csharp
byte[] Canonicalize(ReadOnlySpan<byte> json)
```

Canonicalizes UTF-8 JSON bytes per RFC 8785.

**Parameters:**
- `json`: Raw UTF-8 JSON bytes to canonicalize

**Returns:** Canonical UTF-8 JSON bytes

---

#### CanonicalizeWithVersion

```csharp
byte[] CanonicalizeWithVersion(ReadOnlySpan<byte> json, string version)
```

Canonicalizes UTF-8 JSON bytes with version marker prepended.

**Parameters:**
- `json`: Raw UTF-8 JSON bytes to canonicalize
- `version`: Version string to embed

**Returns:** Canonical UTF-8 JSON bytes with `_canonVersion` field

---

## Usage Examples

### Computing Content-Addressed IDs

```csharp
using StellaOps.Canonical.Json;

// Evidence predicate hashing
var evidence = new EvidencePredicate
{
    Source = "scanner/trivy",
    SbomEntryId = "sha256:91f2ab3c:pkg:npm/lodash@4.17.21",
    VulnerabilityId = "CVE-2021-23337"
};

// Compute versioned hash (recommended)
string evidenceId = CanonJson.HashVersionedPrefixed(evidence);
// Result: "sha256:..."
```

### Verifying Attestations

```csharp
public bool VerifyAttestation(byte[] payload, string expectedHash)
{
    // Detect format and verify accordingly
    if (CanonVersion.IsVersioned(payload))
    {
        var version = CanonVersion.ExtractVersion(payload);
        // Re-canonicalize with same version and compare
        var computed = CanonJson.HashVersioned(payload, version!);
        return computed == expectedHash;
    }

    // Legacy format
    var legacyHash = CanonJson.Hash(payload);
    return legacyHash == expectedHash;
}
```

### Migration from Legacy to Versioned

```csharp
// Old code (legacy)
var hash = CanonJson.Hash(predicate);

// New code (versioned) - just add "Versioned"
var hash = CanonJson.HashVersioned(predicate);
```

---

## Algorithm Details

### RFC 8785 Compliance

| Requirement | Implementation |
|-------------|----------------|
| Key ordering | Ordinal string comparison (case-sensitive, ASCII) |
| Number format | IEEE 754, shortest representation |
| String escaping | Minimal (only `"`, `\`, control characters) |
| Whitespace | None (compact output) |
| Encoding | UTF-8 without BOM |
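
The key-ordering rule can be illustrated with a standalone sketch. This uses only `System.Text.Json`, handles flat objects, and is not the library's implementation (which also normalizes numbers and escaping, and recurses into nested values):

```csharp
using System;
using System.Linq;
using System.Text.Json;

// Naive key-sorting pass over a flat JSON object, using ordinal comparison.
static string SortKeys(string json)
{
    using var doc = JsonDocument.Parse(json);
    var ordered = doc.RootElement.EnumerateObject()
        .OrderBy(p => p.Name, StringComparer.Ordinal);
    return "{" + string.Join(",", ordered.Select(p =>
        JsonSerializer.Serialize(p.Name) + ":" + p.Value.GetRawText())) + "}";
}

Console.WriteLine(SortKeys("""{"z":3,"_canonVersion":"stella:canon:v1","a":1}"""));
// {"_canonVersion":"stella:canon:v1","a":1,"z":3}
```

Because `_` (0x5F) sorts before every ASCII letter, the version marker lands first without any special casing.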

### Version Marker Position

The `_canonVersion` field is **always first** in the output due to:
1. Underscore (`_`) sorts before all letters in ASCII
2. After injecting the version, remaining keys are sorted normally

```json
{"_canonVersion":"stella:canon:v1","aaa":1,"bbb":2,"zzz":3}
```

---

## Related Documentation

- [Proof Chain Specification](../modules/attestor/proof-chain-specification.md)
- [Canonicalization Migration Guide](../operations/canon-version-migration.md)
- [RFC 8785 - JSON Canonicalization Scheme](https://datatracker.ietf.org/doc/html/rfc8785)

---

**docs/api/triage-export-api-reference.md** (new file, 289 lines)

# Triage Evidence Export API Reference

Version: 1.0
Sprint: SPRINT_9200_0001_0002, SPRINT_9200_0001_0003
Status: Stable

## Overview

The Triage Evidence Export API provides endpoints for downloading complete evidence packages as archives. These endpoints support both individual finding exports and batch exports for entire scan runs.

## Base URL

```
/api/v1/triage
```

## Endpoints

### Export Finding Evidence Bundle

Downloads a complete evidence bundle for a single finding as a ZIP or TAR.GZ archive.

```
GET /findings/{findingId}/evidence/export
```

#### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `findingId` | string | Yes | Finding identifier |

#### Query Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `format` | string | `zip` | Archive format: `zip`, `tar.gz`, `targz`, `tgz` |

#### Response Headers

| Header | Description |
|--------|-------------|
| `Content-Type` | `application/zip` or `application/gzip` |
| `Content-Disposition` | `attachment; filename="evidence-{findingId}.zip"` |
| `X-Archive-Digest` | SHA-256 digest of the archive: `sha256:{digest}` |

#### Response Codes

| Code | Description |
|------|-------------|
| 200 | Success - archive stream returned |
| 400 | Invalid format specified |
| 404 | Finding not found |

#### Example Request

```bash
curl -X GET \
  "https://api.stellaops.example/api/v1/triage/findings/f-abc123/evidence/export?format=zip" \
  -H "Authorization: Bearer <token>" \
  -o evidence-f-abc123.zip
```

#### Example Response

Binary stream of the archive file.
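
After downloading, the `X-Archive-Digest` header can be checked against a locally computed digest. A sketch (the local file here is a stand-in for the real download; in practice, capture the header with `curl -D headers.txt` and take the value it carries):

```shell
# Stand-in for the downloaded archive; a real run would use the curl output above.
printf 'demo archive bytes' > evidence-f-abc123.zip

# Value the server would send in X-Archive-Digest (sha256:<hex>).
expected="sha256:$(sha256sum evidence-f-abc123.zip | awk '{print $1}')"

# Recompute locally and compare.
actual="sha256:$(sha256sum evidence-f-abc123.zip | awk '{print $1}')"
if [ "$expected" = "$actual" ]; then echo "digest ok"; else echo "digest mismatch"; fi
```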
|
||||||
|
|
||||||
|
### Get Unified Evidence
|
||||||
|
|
||||||
|
Retrieves the unified evidence package as JSON (not downloadable archive).
|
||||||
|
|
||||||
|
```
|
||||||
|
GET /findings/{findingId}/evidence
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Path Parameters
|
||||||
|
|
||||||
|
| Parameter | Type | Required | Description |
|
||||||
|
|-----------|------|----------|-------------|
|
||||||
|
| `findingId` | string | Yes | Finding identifier |
|
||||||
|
|
||||||
|
#### Query Parameters
|
||||||
|
|
||||||
|
| Parameter | Type | Default | Description |
|
||||||
|
|-----------|------|---------|-------------|
|
||||||
|
| `includeSbom` | boolean | `true` | Include SBOM evidence |
|
||||||
|
| `includeReachability` | boolean | `true` | Include reachability evidence |
|
||||||
|
| `includeVex` | boolean | `true` | Include VEX claims |
|
||||||
|
| `includeAttestations` | boolean | `true` | Include attestations |
|
||||||
|
| `includeDeltas` | boolean | `true` | Include delta evidence |
|
||||||
|
| `includePolicy` | boolean | `true` | Include policy evidence |
|
||||||
|
| `includeReplayCommand` | boolean | `true` | Include replay command |
|
||||||
|
|
||||||
|
#### Response Headers
|
||||||
|
|
||||||
|
| Header | Description |
|
||||||
|
|--------|-------------|
|
||||||
|
| `ETag` | Content-addressed cache key: `"{cacheKey}"` |
|
||||||
|
| `Cache-Control` | `private, max-age=300` |
|
||||||
|
|
||||||
|
#### Response Codes
|
||||||
|
|
||||||
|
| Code | Description |
|
||||||
|
|------|-------------|
|
||||||
|
| 200 | Success - evidence returned |
|
||||||
|
| 304 | Not Modified (ETag match) |
|
||||||
|
| 404 | Finding not found |
|
||||||
|
|
||||||
|
#### Example Request
|
||||||
|
|
||||||
|
```bash
|
||||||
|
curl -X GET \
|
||||||
|
"https://api.stellaops.example/api/v1/triage/findings/f-abc123/evidence" \
|
||||||
|
-H "Authorization: Bearer <token>" \
|
||||||
|
-H "If-None-Match: \"sha256:abc123...\""
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Example Response (200 OK)
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"findingId": "f-abc123",
|
||||||
|
"cveId": "CVE-2024-1234",
|
||||||
|
"componentPurl": "pkg:npm/lodash@4.17.15",
|
||||||
|
"sbom": {
|
||||||
|
"format": "cyclonedx",
|
||||||
|
"version": "1.5",
|
||||||
|
"documentUri": "/sboms/sha256:abc123",
|
||||||
|
"digest": "sha256:abc123...",
|
||||||
|
"component": {
|
||||||
|
"purl": "pkg:npm/lodash@4.17.15",
|
||||||
|
"name": "lodash",
|
||||||
|
"version": "4.17.15",
|
||||||
|
"ecosystem": "npm"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"reachability": {
|
||||||
|
"subgraphId": "sg-xyz789",
|
||||||
|
"status": "reachable",
|
||||||
|
"confidence": 0.95,
|
||||||
|
"method": "static",
|
||||||
|
"entryPoints": [...]
|
||||||
|
},
|
||||||
|
"vexClaims": [...],
|
||||||
|
"attestations": [...],
|
||||||
|
"deltas": {...},
|
||||||
|
"policy": {...},
|
||||||
|
"manifests": {
|
||||||
|
"artifactDigest": "sha256:a1b2c3...",
|
||||||
|
"manifestHash": "sha256:def456...",
|
||||||
|
"feedSnapshotHash": "sha256:feed789...",
|
||||||
|
"policyHash": "sha256:policy321..."
|
||||||
|
},
|
||||||
|
"verification": {
|
||||||
|
"status": "verified",
|
||||||
|
"hashesVerified": true,
|
||||||
|
"attestationsVerified": true,
|
||||||
|
"evidenceComplete": true
|
||||||
|
},
|
||||||
|
"replayCommand": "stella scan replay --artifact sha256:a1b2c3... --manifest sha256:def456... --feeds sha256:feed789... --policy sha256:policy321...",
|
||||||
|
"shortReplayCommand": "stella replay snapshot --verdict V-12345",
|
||||||
|
"evidenceBundleUrl": "/v1/triage/findings/f-abc123/evidence/export",
|
||||||
|
"generatedAt": "2025-01-15T10:30:00Z",
|
||||||
|
"cacheKey": "sha256:unique123..."
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Get Replay Command

Retrieves the replay command for a finding.

```
GET /findings/{findingId}/replay-command
```

#### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `findingId` | string | Yes | Finding identifier |

#### Query Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `shells` | string[] | `["bash"]` | Target shells: `bash`, `powershell`, `cmd` |
| `includeOffline` | boolean | `false` | Include offline replay variant |
| `generateBundle` | boolean | `false` | Generate evidence bundle |

#### Response Codes

| Code | Description |
|------|-------------|
| 200 | Success - replay command returned |
| 404 | Finding not found |

#### Example Response

```json
{
  "findingId": "f-abc123",
  "commands": {
    "bash": "stella scan replay --artifact sha256:a1b2c3... --manifest sha256:def456... --feeds sha256:feed789... --policy sha256:policy321...",
    "powershell": "stella scan replay --artifact sha256:a1b2c3... --manifest sha256:def456... --feeds sha256:feed789... --policy sha256:policy321..."
  },
  "shortCommand": "stella replay snapshot --verdict V-12345",
  "inputHashes": {
    "artifactDigest": "sha256:a1b2c3...",
    "manifestHash": "sha256:def456...",
    "feedSnapshotHash": "sha256:feed789...",
    "policyHash": "sha256:policy321..."
  },
  "bundleUrl": "/v1/triage/findings/f-abc123/evidence/export",
  "generatedAt": "2025-01-15T10:30:00Z"
}
```
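For quick scripting, the endpoint can be called with any HTTP client. A minimal Python sketch of the request URL construction (the base URL and the `build_replay_command_url` helper are illustrative, not part of the API contract; repeated `shells` query keys are an assumption consistent with the `string[]` type above):

```python
import urllib.parse

def build_replay_command_url(base_url: str, finding_id: str,
                             shells=("bash",), include_offline=False,
                             generate_bundle=False) -> str:
    """Build GET /findings/{findingId}/replay-command with the query
    parameters documented above; `shells` repeats once per value."""
    query = urllib.parse.urlencode(
        [("shells", s) for s in shells]
        + [("includeOffline", str(include_offline).lower()),
           ("generateBundle", str(generate_bundle).lower())]
    )
    return (f"{base_url}/findings/{urllib.parse.quote(finding_id)}"
            f"/replay-command?{query}")
```

Fetching the resulting URL with `urllib.request.urlopen` (or `curl`) yields the JSON body shown in the example response.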

### Get Scan Replay Command

Retrieves the replay command for an entire scan.

```
GET /scans/{scanId}/replay-command
```

#### Path Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `scanId` | string | Yes | Scan identifier |

#### Query Parameters

Same as the finding replay command endpoint.

#### Response Codes

| Code | Description |
|------|-------------|
| 200 | Success - replay command returned |
| 404 | Scan not found |

## ETag Caching

The unified evidence endpoint supports HTTP caching via ETag/If-None-Match:

1. **Initial request**: Returns evidence with `ETag` header
2. **Subsequent requests**: Include `If-None-Match: "{etag}"` header
3. **If unchanged**: Returns `304 Not Modified` (no body)
4. **If changed**: Returns `200 OK` with new evidence and ETag

Example flow:

```bash
# Initial request
curl -i "https://api.stellaops.example/api/v1/triage/findings/f-abc123/evidence"
# Response: 200 OK, ETag: "sha256:abc123..."

# Conditional request
curl -i "https://api.stellaops.example/api/v1/triage/findings/f-abc123/evidence" \
  -H 'If-None-Match: "sha256:abc123..."'
# Response: 304 Not Modified (if unchanged)
```
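Client-side, the cache logic amounts to storing the last ETag and body per URL. A small transport-agnostic Python sketch (function names and the cache shape are illustrative):

```python
def conditional_headers(cache: dict, url: str) -> dict:
    """Headers for a follow-up request: send If-None-Match when we hold an ETag."""
    etag = cache.get(("etag", url))
    return {"If-None-Match": etag} if etag else {}

def handle_response(cache: dict, url: str, status: int, etag, body):
    """Apply the 200/304 contract described above and return the usable body."""
    if status == 304:                  # unchanged: reuse the cached body
        return cache[("body", url)]
    if status == 200:                  # changed: remember the new ETag and body
        if etag:
            cache[("etag", url)] = etag
        cache[("body", url)] = body
        return body
    raise RuntimeError(f"unexpected status {status}")
```

On a `304`, the server sends no body, so the previously cached evidence is returned unchanged.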

## Archive Integrity

To verify downloaded archives:

```bash
# Get expected digest from header (strip the header name, spaces, and CR)
EXPECTED=$(curl -sI ".../evidence/export" | grep -i x-archive-digest | cut -d: -f2- | tr -d ' \r')

# Download and verify
curl -o evidence.zip ".../evidence/export"
ACTUAL=$(sha256sum evidence.zip | cut -d' ' -f1)

if [ "sha256:$ACTUAL" = "$EXPECTED" ]; then
  echo "Archive verified"
else
  echo "Verification failed!"
  exit 1
fi
```

## See Also

- [Evidence Bundle Format Specification](../modules/cli/guides/commands/evidence-bundle-format.md)
- [stella scan replay Command Reference](../modules/cli/guides/commands/scan-replay.md)
- [Unified Evidence Model](./evidence-api-reference.md)

@@ -1 +0,0 @@
# ADVISORY assets (hash before publish)

@@ -1 +0,0 @@
# API assets (hash before publish)

@@ -1 +0,0 @@
# CLI assets (hash before publish)

@@ -1 +0,0 @@
# CONSOLE assets (hash before publish)

@@ -1 +0,0 @@
# LEDGER assets (hash before publish)

@@ -1 +0,0 @@
# RBAC assets (hash before publish)

@@ -1 +0,0 @@
# RUNBOOK assets (hash before publish)

@@ -1 +0,0 @@
# SBOM assets (hash before publish)

@@ -1 +0,0 @@
# TELEMETRY assets (hash before publish)

@@ -1 +0,0 @@
# VEX assets (hash before publish)

@@ -1,47 +0,0 @@
# CONCELIER-CORE-AOC-19-004 · Backfill prerequisites

> **DEPRECATED:** This document references MongoDB backfill procedures which are no longer used. Concelier now uses PostgreSQL (Sprint 4400). See `docs/db/SPECIFICATION.md` for current schema.

Purpose: prep safety rails so CONCELIER-STORE-AOC-19-005 can execute the raw-linkset backfill and rollback without risk to offline kits or prod PostgreSQL.

## Inputs
- Dataset: `out/concelier/backfill/linksets-m0.ndjson` (deterministic export, compressed with `gzip`), hash: `TBD` (publish after staging upload).
- Target database: `concelier` (Mongo), collections `advisory_linksets` and `advisory_observations`.
- Offline kit bundle: `out/offline/concelier-linksets-m0.tar.gz` (mirrors the NDJSON used for Mongo ingest).

## Execution checklist
1) **Dry-run import** in staging:
   - `scripts/concelier/import_linksets.sh --input out/concelier/backfill/linksets-m0.ndjson.gz --dry-run`
   - Verify no merge counters / no inferred severity fields.
2) **Backup** prod collections:
   - `mongodump -d concelier -c advisory_linksets -o backups/2025-11-19-pre-aoc19-004/`
   - `mongodump -d concelier -c advisory_observations -o backups/2025-11-19-pre-aoc19-004/`
3) **Rollback script staged**:
   - `scripts/concelier/rollback_aoc19_004.sh` restores both collections from the above dump, then runs `db.advisory_linksets.createIndex` to re-seat deterministic indexes.
4) **Gate flags**:
   - Ensure `LinkNotMerge.Enabled=true` and `AggregationOnly.Enabled=false` in Concelier WebService/appsettings for the rehearsal window.
5) **Observability hooks**:
   - Enable structured logs `Concelier:Backfill:*` and the SLO timer for import duration.
6) **Determinism probe** (post-import):
   - Run `dotnet test src/Concelier/__Tests/StellaOps.Concelier.WebService.Tests --filter BackfillDeterminism` in CI; expect zero diff versus golden hashes in `src/Concelier/seed-data/backfill-det-golden.json`.

## Rollback procedure
```
scripts/concelier/rollback_aoc19_004.sh \
  --dump backups/2025-11-19-pre-aoc19-004 \
  --db concelier
```
Post-rollback verification: rerun the determinism probe and confirm `AggregationOnly.Enabled=false`.

## Evidence to attach after execution
- Mongo dump hash (SHA256 of archive).
- Import log excerpt showing counts and zero merge counters.
- Determinism test TRX.
- Offline kit bundle hash.

## Owners & contacts
- Concelier Storage Guild (primary)
- DevOps Guild (rollback + backups)

## Notes
- No schema changes; pure data backfill. If newer Link-Not-Merge fixtures arrive, refresh dataset/hash before scheduling.

@@ -1,16 +0,0 @@
# linksets-m0 dataset plan (CONCELIER-CORE-AOC-19-004)

> **DEPRECATED:** This document references MongoDB export procedures which are no longer used. Concelier now uses PostgreSQL (Sprint 4400). See `docs/db/SPECIFICATION.md` for current schema.

Purpose: produce a deterministic dataset for the STORE-AOC-19-005 rehearsal.

Generated artefacts:
- `out/concelier/backfill/linksets-m0.ndjson.gz` — placeholder deterministic NDJSON (stub until real export).
- `out/concelier/backfill/linksets-m0.ndjson.gz.sha256` — SHA256 `21df438c534eca99225a31b6dd488f9ea91cda25745f5ab330f9499dbea7d64e`.

Generation instructions (replace the stub when the real export is ready):
1) Export from staging Mongo using `scripts/concelier/export_linksets.sh --tenant default --output out/concelier/backfill/linksets-m0.ndjson.gz --gzip`.
2) Verify determinism: `python3 scripts/hash_ndjson.py out/concelier/backfill/linksets-m0.ndjson.gz` and compare across two runs (hashes must match).
3) Update the `.sha256` file with the new hash.

Status: stub dataset and hash published 2025-11-19 to unblock rehearsal scheduling; replace with the real export when available.

@@ -1,88 +0,0 @@
# Excititor Statement Backfill Runbook

> **DEPRECATED:** This runbook describes MongoDB-based backfill procedures which are no longer used. Excititor now uses PostgreSQL for persistence (Sprint 4400). See `docs/db/SPECIFICATION.md` for current schema and `docs/operations/postgresql-guide.md` for database operations.

Last updated: 2025-10-19

## Overview

Use this runbook when you need to rebuild the `vex.statements` collection from historical raw documents. Typical scenarios:

- Upgrading the statement schema (e.g., adding severity/KEV/EPSS signals).
- Recovering from a partial ingest outage where statements were never persisted.
- Seeding a freshly provisioned Excititor deployment from an existing raw archive.

Backfill operates server-side via the Excititor WebService and reuses the same pipeline that powers the `/excititor/statements` ingestion endpoint. Each raw document is normalized, signed metadata is preserved, and duplicate statements are skipped unless the run is forced.

## Prerequisites

1. **Connectivity to Excititor WebService** – the CLI uses the backend URL configured in `stellaops.yml` or the `--backend-url` argument.
2. **Authority credentials** – the CLI honours the existing Authority client configuration; ensure the caller has permission to invoke admin endpoints.
3. **Mongo replica set** (recommended) – causal consistency guarantees rely on majority read/write concerns. A standalone deployment works but skips cross-document transactions.

## CLI command

```
stellaops excititor backfill-statements \
  [--retrieved-since <ISO8601>] \
  [--force] \
  [--batch-size <int>] \
  [--max-documents <int>]
```

| Option | Description |
| ------ | ----------- |
| `--retrieved-since` | Only process raw documents fetched on or after the specified timestamp (UTC by default). |
| `--force` | Reprocess documents even if matching statements already exist (useful after schema upgrades). |
| `--batch-size` | Number of raw documents pulled per batch (default `100`). |
| `--max-documents` | Optional hard limit on the number of raw documents to evaluate. |

Example – replay the last 48 hours of Red Hat ingest while keeping existing statements:

```
stellaops excititor backfill-statements \
  --retrieved-since "$(date -u -d '48 hours ago' +%Y-%m-%dT%H:%M:%SZ)"
```

Example – full replay with forced overwrites, capped at 2,000 documents:

```
stellaops excititor backfill-statements --force --max-documents 2000
```

The command returns a summary similar to:

```
Backfill completed: evaluated 450, backfilled 180, claims written 320, skipped 270, failures 0.
```

## Behaviour

- Raw documents are streamed in ascending `retrievedAt` order.
- Each document is normalized using the registered VEX normalizers (CSAF, CycloneDX, OpenVEX).
- Statements are appended through the same `IVexClaimStore.AppendAsync` path that powers `/excititor/statements`.
- Duplicate detection compares `Document.Digest`; duplicates are skipped unless `--force` is specified.
- Failures are logged with the offending digest, and processing continues with the next document.

## Observability

- The CLI logs aggregate counts, and the backend logs per-digest warnings or errors.
- Mongo writes carry majority write concern; expect backfill throughput to match ingest baselines (≈5 seconds warm, 30 seconds cold).
- Monitor the `excititor.storage.backfill` log scope for detailed telemetry.

## Post-run verification

1. Inspect the `vex.statements` collection for the targeted window (check `InsertedAt`).
2. Re-run the Excititor storage test suite if possible:
   ```
   dotnet test src/Excititor/__Tests/StellaOps.Excititor.Storage.Mongo.Tests/StellaOps.Excititor.Storage.Mongo.Tests.csproj
   ```
3. Optionally, call `/excititor/statements/{vulnerabilityId}/{productKey}` to confirm the expected statements exist.

## Rollback

If a forced run produced incorrect statements, use the standard Mongo rollback procedure:

1. Identify the `InsertedAt` window for the backfill run.
2. Delete affected records from `vex.statements` (and any downstream exports if applicable).
3. Rerun the backfill command with corrected parameters.

@@ -32,7 +32,7 @@ Each card below pairs the headline capability with the evidence that backs it an
- **Why it matters:** A CVE found 6 months ago can be re-verified today by running `stella replay srm.yaml`, yielding an identical result—an audit trail no other scanner provides. This is why Stella decisions survive auditors, regulators, and supply-chain propagation.

## 5. Transparent Quotas & Offline Operations
- **What it is:** Valkey-backed counters surface `{{ quota_token }}` scans/day via headers, UI banners, and `/quota` API; Offline Update Kits mirror feeds.
- **Evidence:** Quota tokens verify locally using bundled public keys, and Offline Update Kits include mirrored advisories, SBOM feeds, and VEX sources.
- **Why it matters:** You stay within predictable limits, avoid surprise throttling, and operate entirely offline when needed.

@@ -28,7 +28,7 @@
* **Rekor v2** — tile‑backed transparency log endpoint(s).
* **MinIO (S3)** — optional archive store for DSSE envelopes & verification bundles.
* **PostgreSQL** — local cache of `{uuid, index, proof, artifactSha256, bundleSha256}`; job state; audit.
* **Valkey** — dedupe/idempotency keys and short‑lived rate‑limit buckets.
* **Licensing Service (optional)** — “endorse” call for cross‑log publishing when customer opts‑in.

Trust boundary: **Only the Signer** is allowed to call submission endpoints; enforced by **mTLS peer cert allowlist** + `aud=attestor` OpTok.

@@ -619,8 +619,8 @@ attestor:
    bucket: "stellaops"
    prefix: "attest/"
    objectLock: "governance"
  valkey:
    url: "valkey://valkey:6379/2"
  quotas:
    perCaller:
      qps: 50

238
docs/modules/attestor/graph-root-attestation.md
Normal file
@@ -0,0 +1,238 @@
# Graph Root Attestation

## Overview

Graph root attestation is a mechanism for creating cryptographically signed, content-addressed proofs of graph state. It enables offline verification that replayed graphs match the original attested state by computing a Merkle root from sorted node/edge IDs and input digests, then wrapping it in a DSSE envelope with an in-toto statement.

## Purpose

Graph root attestations solve the problem of proving graph authenticity without reconstructing the entire proof chain. They enable:

- **Offline Verification**: Download an attestation, recompute the root from stored nodes/edges, compare
- **Audit Snapshots**: Point-in-time proof of graph state for compliance
- **Evidence Linking**: Reference attested roots (not transient IDs) in evidence chains
- **Transparency**: Optional Rekor publication for public auditability

## Architecture

### Components

```
┌─────────────────────────────────────────────────────────────────┐
│ GraphRootAttestor                                               │
├─────────────────────────────────────────────────────────────────┤
│ AttestAsync(request) → GraphRootAttestationResult               │
│ VerifyAsync(envelope, nodes, edges) → VerificationResult        │
└─────────────────┬───────────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────────┐
│ IMerkleRootComputer                                             │
├─────────────────────────────────────────────────────────────────┤
│ ComputeRoot(leaves) → byte[]                                    │
│ Algorithm → "sha256"                                            │
└─────────────────┬───────────────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────────────────────┐
│ EnvelopeSignatureService                                        │
├─────────────────────────────────────────────────────────────────┤
│ Sign(payload, key) → EnvelopeSignature                          │
└─────────────────────────────────────────────────────────────────┘
```

### Data Flow

1. **Request** → Sorted node/edge IDs + input digests
2. **Merkle Tree** → Compute SHA-256 root from leaves
3. **In-Toto Statement** → Build attestation with predicate
4. **Canonicalize** → JCS (RFC 8785) with version marker
5. **Sign** → DSSE envelope with Ed25519/ECDSA
6. **Store/Publish** → CAS storage + optional Rekor

## Graph Types

The `GraphType` enum identifies what kind of graph is being attested:

| GraphType | Description |
|-----------|-------------|
| `Unknown` | Unspecified graph type |
| `CallGraph` | Function/method call relationships |
| `DependencyGraph` | Package/library dependencies (SBOM) |
| `SbomGraph` | SBOM component graph |
| `EvidenceGraph` | Linked evidence records |
| `PolicyGraph` | Policy decision trees |
| `ProofSpine` | Proof chain spine segments |
| `ReachabilityGraph` | Code reachability analysis |
| `VexLinkageGraph` | VEX statement linkages |
| `Custom` | Application-specific graph |

## Models

### GraphRootAttestationRequest

Input to the attestation service:

```csharp
public sealed record GraphRootAttestationRequest
{
    public required GraphType GraphType { get; init; }
    public required IReadOnlyList<string> NodeIds { get; init; }
    public required IReadOnlyList<string> EdgeIds { get; init; }
    public required string PolicyDigest { get; init; }
    public required string FeedsDigest { get; init; }
    public required string ToolchainDigest { get; init; }
    public required string ParamsDigest { get; init; }
    public required string ArtifactDigest { get; init; }
    public IReadOnlyList<string> EvidenceIds { get; init; } = [];
    public bool PublishToRekor { get; init; } = false;
    public string? SigningKeyId { get; init; }
}
```

### GraphRootAttestation (In-Toto Statement)

The attestation follows the in-toto Statement/v1 format:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "sha256:abc123...",
      "digest": { "sha256": "abc123..." }
    },
    {
      "name": "sha256:artifact...",
      "digest": { "sha256": "artifact..." }
    }
  ],
  "predicateType": "https://stella-ops.org/attestation/graph-root/v1",
  "predicate": {
    "graphType": "DependencyGraph",
    "rootHash": "sha256:abc123...",
    "rootAlgorithm": "sha256",
    "nodeCount": 1247,
    "edgeCount": 3891,
    "nodeIds": ["node-a", "node-b", ...],
    "edgeIds": ["edge-1", "edge-2", ...],
    "inputs": {
      "policyDigest": "sha256:policy...",
      "feedsDigest": "sha256:feeds...",
      "toolchainDigest": "sha256:tools...",
      "paramsDigest": "sha256:params..."
    },
    "evidenceIds": ["ev-1", "ev-2"],
    "canonVersion": "stella:canon:v1",
    "computedAt": "2025-12-26T10:30:00Z",
    "computedBy": "stellaops/attestor/graph-root",
    "computedByVersion": "1.0.0"
  }
}
```

## Merkle Root Computation

The root is computed from leaves in this deterministic order:

1. **Sorted node IDs** (lexicographic, ordinal)
2. **Sorted edge IDs** (lexicographic, ordinal)
3. **Policy digest**
4. **Feeds digest**
5. **Toolchain digest**
6. **Params digest**

Each leaf is SHA-256 hashed, then combined pairwise until a single root remains.

```
                ROOT
               /    \
          H(L12)    H(R12)
          /   \      /    \
      H(n1) H(n2) H(e1) H(policy)
```
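The computation can be sketched in Python. Note that the description above does not state how an unpaired node at a level is handled; this sketch promotes it unchanged, which is an assumption (duplicating the last node is another common convention):

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """SHA-256 each leaf, then hash pairs level by level until one root remains.
    Assumption: an unpaired node is promoted to the next level unchanged."""
    if not leaves:
        raise ValueError("at least one leaf required")
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:        # odd node: promote unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

def graph_root(node_ids, edge_ids, policy, feeds, toolchain, params) -> str:
    """Assemble leaves in the deterministic order listed above."""
    leaves = [s.encode("utf-8")
              for s in sorted(node_ids) + sorted(edge_ids)
              + [policy, feeds, toolchain, params]]
    return "sha256:" + merkle_root(leaves).hex()
```

Because the node and edge IDs are sorted before hashing, the same graph always yields the same root regardless of traversal order.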

## Usage

### Creating an Attestation

```csharp
var services = new ServiceCollection();
services.AddGraphRootAttestation(sp => keyId => GetSigningKey(keyId));
var provider = services.BuildServiceProvider();

var attestor = provider.GetRequiredService<IGraphRootAttestor>();

var request = new GraphRootAttestationRequest
{
    GraphType = GraphType.DependencyGraph,
    NodeIds = graph.Nodes.Select(n => n.Id).ToList(),
    EdgeIds = graph.Edges.Select(e => e.Id).ToList(),
    PolicyDigest = "sha256:abc...",
    FeedsDigest = "sha256:def...",
    ToolchainDigest = "sha256:ghi...",
    ParamsDigest = "sha256:jkl...",
    ArtifactDigest = imageDigest
};

var result = await attestor.AttestAsync(request);
Console.WriteLine($"Root: {result.RootHash}");
Console.WriteLine($"Nodes: {result.NodeCount}");
Console.WriteLine($"Edges: {result.EdgeCount}");
```

### Verifying an Attestation

```csharp
var envelope = LoadEnvelope("attestation.dsse.json");
var nodes = LoadNodes("nodes.ndjson");
var edges = LoadEdges("edges.ndjson");

var result = await attestor.VerifyAsync(envelope, nodes, edges);

if (result.IsValid)
{
    Console.WriteLine($"✓ Verified: {result.ComputedRoot}");
    Console.WriteLine($"  Nodes: {result.NodeCount}");
    Console.WriteLine($"  Edges: {result.EdgeCount}");
}
else
{
    Console.WriteLine($"✗ Failed: {result.FailureReason}");
    Console.WriteLine($"  Expected: {result.ExpectedRoot}");
    Console.WriteLine($"  Computed: {result.ComputedRoot}");
}
```

## Offline Verification Workflow

1. **Obtain attestation**: Download the DSSE envelope from storage or a transparency log
2. **Verify signature**: Check the envelope signature against trusted public keys
3. **Extract predicate**: Parse `GraphRootPredicate` from the payload
4. **Fetch graph data**: Download nodes and edges by ID from CAS
5. **Recompute root**: Apply the Merkle tree algorithm to node/edge IDs + input digests
6. **Compare**: The computed root must match `predicate.RootHash`
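Steps 5-6 reduce to a small comparison helper. A Python sketch (the `compute_root` callable stands in for whatever trusted implementation of the Merkle scheme above is used; signature verification from step 2 is out of scope here):

```python
from typing import Callable, Iterable, Tuple

def verify_graph_root(predicate: dict,
                      node_ids: Iterable[str],
                      edge_ids: Iterable[str],
                      compute_root: Callable[..., str]) -> Tuple[bool, str]:
    """Recompute the root from the fetched graph data plus the predicate's
    input digests, then compare it against predicate['rootHash']."""
    inputs = predicate["inputs"]
    computed = compute_root(sorted(node_ids), sorted(edge_ids),
                            inputs["policyDigest"], inputs["feedsDigest"],
                            inputs["toolchainDigest"], inputs["paramsDigest"])
    return computed == predicate["rootHash"], computed
```

A mismatch means the stored nodes/edges or input digests no longer match what was attested.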

## Determinism Guarantees

Graph root attestations are fully deterministic:

- **Sorting**: All IDs sorted lexicographically (ordinal comparison)
- **Canonicalization**: RFC 8785 JCS with the `stella:canon:v1` version marker
- **Hashing**: SHA-256 only
- **Timestamps**: UTC ISO-8601 (not included in root computation)

The same inputs always produce the same root hash, enabling replay verification.

## Security Considerations

- **Signature verification**: Always verify DSSE envelope signatures before trusting attestations
- **Key management**: Use short-lived signing keys; rotate regularly
- **Transparency**: Publish to Rekor for a tamper-evident audit trail
- **Input validation**: Validate that all digests are properly formatted before attestation

## Related Documentation

- [DSSE Envelopes](./dsse-envelopes.md) - Envelope format and signing
- [Proof Chain](./proof-chain.md) - Overall proof chain architecture
- [Canonical JSON](../../modules/platform/canonical-json.md) - Canonicalization scheme

@@ -1,6 +0,0 @@
# Keys and Issuers (DOCS-ATTEST-74-001)

- Maintain issuer registry (KMS IDs, key IDs, allowed predicates).
- Rotate keys with overlap; publish fingerprints and validity in registry file.
- Offline operation: bundle registry with bootstrap; no remote fetch.
- Each attestation must include issuer ID and key ID; verify against registry.

@@ -1,9 +0,0 @@
# Attestor Overview (DOCS-ATTEST-73-001)

High-level description of the Attestor service and its contracts.

- Purpose: verify DSSE/attestations, supply transparency info, and expose attestation APIs without deriving verdicts.
- Components: WebService, Worker, KMS integration, Transparency log (optional), Evidence links.
- Rule banner: aggregation-only; no policy decisions.
- Tenancy: all attestations scoped per tenant; cross-tenant reads forbidden.
- Offline posture: allow offline verification using bundled trust roots and Rekor checkpoints when available.

@@ -1,12 +0,0 @@
# Attestor Policies (DOCS-ATTEST-73-003)

Guidance on verification policies applied by Attestor.

- Scope: DSSE envelope validation, subject hash matching, optional transparency checks.
- Policy fields:
  - allowed issuers / key IDs
  - required predicates (e.g., `stella.ops/vexObservation@v1`)
  - transparency requirements (allow/require/skip)
  - freshness window for attestations
- Determinism: policies must be pure; no external lookups in sealed mode.
- Versioning: include `policyVersion` and hash; store alongside attestation records.

@@ -90,6 +90,68 @@ This specification defines the implementation of a cryptographically verifiable
5. **Numbers in shortest form**
6. **Deterministic array ordering** (by semantic key: bom-ref, purl)

### Canonicalization Versioning

Content-addressed identifiers embed a canonicalization version marker to prevent hash collisions when the canonicalization algorithm evolves. This ensures that:

- **Forward compatibility**: Future algorithm changes won't invalidate existing hashes.
- **Verifier clarity**: Verifiers know exactly which algorithm to use.
- **Auditability**: Hash provenance is cryptographically bound to the algorithm version.

**Version Marker Format:**

```json
{
  "_canonVersion": "stella:canon:v1",
  "sbomEntryId": "...",
  "vulnerabilityId": "..."
}
```

| Field | Description |
|-------|-------------|
| `_canonVersion` | Underscore prefix ensures lexicographic first position after sorting |
| Value format | `stella:canon:v<N>` where N is the version number |
| Current version | `stella:canon:v1` (RFC 8785 JSON canonicalization) |

**V1 Algorithm Specification:**

| Property | Behavior |
|----------|----------|
| Standard | RFC 8785 (JSON Canonicalization Scheme) |
| Key sorting | Ordinal string comparison |
| Whitespace | None (compact JSON) |
| Encoding | UTF-8 without BOM |
| Numbers | IEEE 754, shortest representation |
| Escaping | Minimal (only required characters) |

**Version Detection:**

```csharp
// Detect if canonical JSON includes version marker
public static bool IsVersioned(ReadOnlySpan<byte> canonicalJson)
{
    return canonicalJson.Length > 20 &&
           canonicalJson.StartsWith("{\"_canonVersion\":"u8);
}

// Extract version from versioned canonical JSON
public static string? ExtractVersion(ReadOnlySpan<byte> canonicalJson)
{
    // Parse and return the _canonVersion value, or null if not versioned
}
```
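The marker behaviour can be illustrated in Python. This is an approximation, not a full RFC 8785 implementation: `json.dumps` with sorted keys and compact separators matches JCS for plain string/integer payloads but not for every number or escaping case:

```python
import json

CANON_VERSION = "stella:canon:v1"

def canonicalize_v1(payload: dict) -> bytes:
    """Inject the version marker, sort keys ordinally, emit compact UTF-8 JSON."""
    doc = {"_canonVersion": CANON_VERSION, **payload}
    return json.dumps(doc, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def extract_version(canonical: bytes):
    """Counterpart of ExtractVersion: the marker string, or None for legacy blobs."""
    if not canonical.startswith(b'{"_canonVersion":'):
        return None
    return json.loads(canonical)["_canonVersion"]
```

For the lowercase camelCase keys used here, the underscore sorts first, so the marker lands at the start of the blob and the prefix check above suffices.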
|
||||||
|
|
||||||
|
**Migration Strategy:**
|
||||||
|
|
||||||
|
| Phase | Behavior | Timeline |
|
||||||
|
|-------|----------|----------|
|
||||||
|
| Phase 1 (Current) | Generate v1 hashes; accept both legacy and v1 for verification | Now |
|
||||||
|
| Phase 2 | Log deprecation warnings for legacy hashes | +6 months |
|
||||||
|
| Phase 3 | Reject legacy hashes; require v1 | +12 months |
|
||||||
|
|
||||||
|
See also: [Canonicalization Migration Guide](../../operations/canon-version-migration.md)
|
||||||
|
|

## DSSE Predicate Types

### 1. Evidence Statement (`evidence.stella/v1`)

@@ -194,6 +256,101 @@ This specification defines the implementation of a cryptographically verifiable

**Signer**: Generator key

### 7. Graph Root Statement (`graph-root.stella/v1`)

The Graph Root attestation provides a tamper-evident commitment to graph analysis results (dependency graphs, call graphs, reachability graphs) by computing a Merkle root over canonicalized node and edge identifiers.

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "graph-root://<graphType>/<merkleRoot>",
      "digest": {
        "sha256": "<merkle-root-hex>"
      }
    }
  ],
  "predicateType": "https://stella-ops.org/predicates/graph-root/v1",
  "predicate": {
    "graphType": "DependencyGraph|CallGraph|ReachabilityGraph|...",
    "merkleRoot": "sha256:<hex>",
    "nodeCount": 1234,
    "edgeCount": 5678,
    "canonVersion": "stella:canon:v1",
    "inputs": {
      "sbomDigest": "sha256:<hex>",
      "analyzerDigest": "sha256:<hex>",
      "configDigest": "sha256:<hex>"
    },
    "createdAt": "2025-01-12T10:30:00Z"
  }
}
```

**Signer**: Graph Analyzer key

#### Supported Graph Types

| Graph Type | Use Case |
|------------|----------|
| `DependencyGraph` | Package/library dependency analysis |
| `CallGraph` | Function-level call relationships |
| `ReachabilityGraph` | Vulnerability reachability analysis |
| `DataFlowGraph` | Data flow and taint tracking |
| `ControlFlowGraph` | Code execution paths |
| `InheritanceGraph` | OOP class hierarchies |
| `ModuleGraph` | Module/namespace dependencies |
| `BuildGraph` | Build system dependencies |
| `ContainerLayerGraph` | Container layer relationships |

#### Merkle Root Computation

The Merkle root is computed deterministically:

1. **Canonicalize Node IDs**: Sort all node identifiers lexicographically
2. **Canonicalize Edge IDs**: Sort all edge identifiers (format: `{source}->{target}`)
3. **Combine**: Concatenate sorted nodes + sorted edges
4. **Binary Tree**: Build SHA-256 Merkle tree with odd-node duplication
5. **Root**: Extract 32-byte root as `sha256:<hex>`

```
Merkle Tree Structure:

        [root]
       /      \
   [h01]      [h23]
   /   \      /   \
 [n0] [n1] [n2] [n3]
```
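The five steps can be sketched as follows. This is an illustrative Python version that assumes each identifier is UTF-8 encoded before leaf hashing; the production canonicalization (`stella:canon:v1`) may encode leaves differently:

```python
import hashlib

def merkle_root(node_ids, edge_ids):
    # Steps 1-3: sort node IDs and edge IDs, then concatenate
    leaves = sorted(node_ids) + sorted(edge_ids)
    if not leaves:
        raise ValueError("graph has no nodes or edges")
    level = [hashlib.sha256(x.encode("utf-8")).digest() for x in leaves]
    # Step 4: binary SHA-256 tree with odd-node duplication
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the odd node
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    # Step 5: 32-byte root rendered as sha256:<hex>
    return "sha256:" + level[0].hex()
```

Because the leaves are sorted before hashing, the root is independent of the order in which nodes and edges were discovered, which is what makes the verification comparison below meaningful.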

#### Integration with Proof Spine

Graph root attestations can be referenced in proof spines:

```json
{
  "predicateType": "proofspine.stella/v1",
  "predicate": {
    "sbomEntryId": "<SBOMEntryID>",
    "evidenceIds": ["<ID1>", "<ID2>"],
    "reasoningId": "<ID>",
    "vexVerdictId": "<ID>",
    "graphRootIds": ["<GraphRootID1>"],
    "policyVersion": "v2.3.1",
    "proofBundleId": "<ProofBundleID>"
  }
}
```
#### Verification Steps

1. Parse DSSE envelope and verify signature against allowed keys
2. Extract predicate and Merkle root
3. Re-canonicalize provided node/edge IDs using `stella:canon:v1`
4. Recompute Merkle root from canonicalized inputs
5. Compare computed root to claimed root
6. If Rekor entry exists, verify transparency log inclusion

## Database Schema

### Tables

@@ -205,6 +362,7 @@ This specification defines the implementation of a cryptographically verifiable
| `proofchain.spines` | Proof spine aggregations linking evidence to verdicts |
| `proofchain.trust_anchors` | Trust anchor configurations for verification |
| `proofchain.rekor_entries` | Rekor transparency log entries |
| `proofchain.graph_roots` | Graph root attestations with Merkle roots |
| `proofchain.key_history` | Key lifecycle history for rotation |
| `proofchain.key_audit_log` | Audit log for key operations |

@@ -282,6 +440,7 @@ The 13-step verification algorithm:
- [Database Schema Sprint](../../implplan/SPRINT_0501_0006_0001_proof_chain_database_schema.md)
- [CLI Integration Sprint](../../implplan/SPRINT_0501_0007_0001_proof_chain_cli_integration.md)
- [Key Rotation Sprint](../../implplan/SPRINT_0501_0008_0001_proof_chain_key_rotation.md)
- [Graph Root Attestation](./graph-root-attestation.md)
- [Attestor Architecture](./architecture.md)
- [Signer Architecture](../signer/architecture.md)
- [Database Specification](../../db/SPECIFICATION.md)

@@ -1,49 +0,0 @@
# Attestor TTL Validation Runbook

> **DEPRECATED:** This runbook tests MongoDB TTL indexes, which are no longer used. Attestor now uses PostgreSQL for persistence (Sprint 4400). See `docs/db/SPECIFICATION.md` for the current database schema.

> **Purpose:** confirm that MongoDB TTL indexes and Redis expirations for the attestation dedupe store behave as expected on a production-like stack.

## Prerequisites

- Docker Desktop or a compatible daemon with the Compose plugin enabled.
- Local ports `27017` and `6379` free.
- `dotnet` SDK 10.0 preview (same as the repo toolchain).
- Network access to pull the `mongo:7` and `redis:7` images.

## Quickstart

1. From the repo root, export any required proxy settings, then run
   ```bash
   scripts/run-attestor-ttl-validation.sh
   ```
   The helper script:
   - Spins up `mongo:7` and `redis:7` containers.
   - Sets `ATTESTOR_LIVE_MONGO_URI` / `ATTESTOR_LIVE_REDIS_URI`.
   - Executes the live TTL test suite (`Category=LiveTTL`) in `StellaOps.Attestor.Tests`.
   - Tears the stack down automatically.
2. Capture the test output (`ttl-validation-<timestamp>.log`) and attach it to the sprint evidence folder (`docs/modules/attestor/evidence/`).

## Result handling

- **Success:** Tests complete in ~3–4 minutes with `Total tests: 2, Passed: 2`. Store the log and note the run in `docs/implplan/archived/SPRINT_0100_0001_0001_identity_signing.md` under ATTESTOR-72-003.
- **Failure:** Preserve:
  - `docker compose logs` for both services.
  - `mongosh` output of `db.dedupe.getIndexes()` and sample documents.
  - `redis-cli --raw ttl attestor:ttl:live:bundle:<id>`.

  File an incident in the Attestor Guild channel and link the captured artifacts.

## Manual verification (optional)

If the helper script cannot be used:

1. Start MongoDB and Redis manually with equivalent configuration.
2. Set `ATTESTOR_LIVE_MONGO_URI` and `ATTESTOR_LIVE_REDIS_URI`.
3. Run `dotnet test src/Attestor/StellaOps.Attestor.sln --no-build --filter "Category=LiveTTL"`.
4. Follow the evidence handling steps above.

## Ownership

- Primary: Attestor Service Guild.
- Partner: QA Guild (observes TTL metrics, confirms evidence archiving).

## 2025-11-03 validation summary

- **Stack:** `mongod` 7.0.5 (tarball) + `mongosh` 2.0.2, `redis-server` 7.2.4 (source build) running on localhost without Docker.
- **Mongo results:** `dedupe` TTL index (`ttlAt`, `expireAfterSeconds: 0`) confirmed; a document inserted with a 20 s TTL expired automatically after ~80 s (expected allocator sweep). Evidence: `docs/modules/attestor/evidence/2025-11-03-mongo-ttl-validation.txt`.
- **Redis results:** Key `attestor:ttl:live:bundle:validation` set with a 45 s TTL reached `TTL=-2` after ~47 s, confirming expiry propagation. Evidence: `docs/modules/attestor/evidence/2025-11-03-redis-ttl-validation.txt`.
- **Notes:** Local binaries were built and run to accommodate a sandbox without Docker; services were shut down after validation.

@@ -1,9 +0,0 @@
# Attestor Workflows (DOCS-ATTEST-73-004)

Sequence of ingest, verify, and bulk operations.

1. **Ingest**: receive DSSE, validate schema, hash subjects, store envelope + metadata.
2. **Verify**: run policy checks (issuer, predicate, transparency optional), compute verification record.
3. **Persist**: store verification result with `verificationId`, `attestationId`, `policyVersion`, timestamps.
4. **Bulk ops**: batch verify envelopes; export results to timeline/audit logs.
5. **Audit**: expose read API for verification records; include determinism hash of inputs.

@@ -228,7 +228,7 @@ Services **must** verify `aud` and **sender constraint** (DPoP/mTLS) per their p
## 5) Storage & state

* **Configuration DB** (PostgreSQL/MySQL): clients, audiences, role→scope maps, tenant/installation registry, device code grants, persistent consents (if any).
* **Cache** (Valkey):
  * DPoP **jti** replay cache (short TTL)
  * **Nonce** store (per resource server, if they demand nonce)

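The jti replay cache boils down to a set-if-absent with expiry. A minimal in-memory sketch is shown below; in production this would be a single Valkey `SET key NX EX ttl`, and the class name and injected clock are illustrative:

```python
import time

class JtiReplayCache:
    """Reject a DPoP proof whose jti was already seen within the TTL."""

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._seen = {}  # jti -> expiry timestamp

    def check_and_store(self, jti: str) -> bool:
        """Return True if jti is fresh (and record it), False on replay."""
        now = self.clock()
        # drop expired entries before checking
        self._seen = {k: exp for k, exp in self._seen.items() if exp > now}
        if jti in self._seen:
            return False
        self._seen[jti] = now + self.ttl
        return True
```

The short TTL keeps the working set small: an attacker replaying an old proof after expiry is instead rejected by the proof's own `iat` freshness check.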
@@ -375,8 +375,8 @@ authority:
enabled: true
ttl: "00:10:00"
maxIssuancePerMinute: 120
store: "valkey" # uses redis:// protocol
valkeyConnectionString: "redis://authority-valkey:6379?ssl=false"
requiredAudiences:
- "signer"
- "attestor"

@@ -428,7 +428,7 @@ authority:
* **RBAC**: scope enforcement per audience; over‑privileged client denied.
* **Rotation**: JWKS rotation while load‑testing; zero‑downtime verification.
* **HA**: kill one Authority instance; verify issuance continues; JWKS served by peers.
* **Performance**: 1k token issuance/sec on 2 cores with Valkey enabled for jti caching.

---

@@ -448,9 +448,9 @@ authority:
## 17) Deployment & HA

* **Stateless** microservice, containerized; run ≥ 2 replicas behind LB.
* **DB**: HA Postgres (or MySQL) for clients/roles; **Valkey** for device codes, DPoP nonces/jtis.
* **Secrets**: mount client JWKs via K8s Secrets/HashiCorp Vault; signing keys via KMS.
* **Backups**: DB daily; Valkey not critical (ephemeral).
* **Disaster recovery**: export/import of client registry; JWKS rehydrate from KMS.
* **Compliance**: TLS audit; penetration testing for OIDC flows.

@@ -459,7 +459,7 @@ authority:
## 18) Implementation notes

* Reference stack: **.NET 10** + **OpenIddict 6** (or IdentityServer if licensed) with custom DPoP validator and mTLS binding middleware.
* Keep the DPoP/JTI cache pluggable; allow Valkey/Memcached.
* Provide **client SDKs** for C# and Go: DPoP key mgmt, proof generation, nonce handling, token refresh helper.

---

docs/modules/cli/guides/commands/evidence-bundle-format.md (new file, 329 lines)
@@ -0,0 +1,329 @@
# Evidence Bundle Format Specification

Version: 1.0
Status: Stable
Sprint: SPRINT_9200_0001_0003

## Overview

Evidence bundles are downloadable archives containing complete evidence packages for findings or scan runs. They enable:

- **Offline verification**: All evidence is self-contained
- **Deterministic replay**: Includes scripts and hashes for verdict reproduction
- **Audit compliance**: Provides cryptographic verification of all evidence
- **Human readability**: Includes README and manifest for easy inspection

## Archive Formats

Evidence bundles are available in two formats:

| Format | Extension | MIME Type | Use Case |
|--------|-----------|-----------|----------|
| ZIP | `.zip` | `application/zip` | General use, Windows compatible |
| TAR.GZ | `.tar.gz` | `application/gzip` | Unix systems, better compression |

## Endpoints

### Single Finding Bundle

```
GET /v1/triage/findings/{findingId}/evidence/export?format=zip
```

Response headers:

- `Content-Type: application/zip`
- `Content-Disposition: attachment; filename="evidence-{findingId}.zip"`
- `X-Archive-Digest: sha256:{digest}`

### Scan Run Bundle

```
GET /v1/triage/scans/{scanId}/evidence/export?format=zip
```

Response headers:

- `Content-Type: application/zip`
- `Content-Disposition: attachment; filename="evidence-run-{scanId}.zip"`
- `X-Archive-Digest: sha256:{digest}`

## Finding Bundle Structure

```
evidence-{findingId}/
├── manifest.json          # Archive manifest with file hashes
├── README.md              # Human-readable documentation
├── sbom.cdx.json          # CycloneDX SBOM slice
├── reachability.json      # Reachability analysis data
├── vex/
│   ├── vendor.json        # Vendor VEX statements
│   ├── nvd.json           # NVD VEX data
│   └── cisa-kev.json      # CISA KEV data
├── attestations/
│   ├── sbom.dsse.json     # SBOM DSSE envelope
│   └── scan.dsse.json     # Scan DSSE envelope
├── policy/
│   └── evaluation.json    # Policy evaluation result
├── delta.json             # Delta comparison (if available)
├── replay-command.txt     # Copy-ready replay command
├── replay.sh              # Bash replay script
└── replay.ps1             # PowerShell replay script
```

## Scan Run Bundle Structure

```
evidence-run-{scanId}/
├── MANIFEST.json          # Run-level manifest
├── README.md              # Run-level documentation
└── findings/
    ├── {findingId1}/
    │   ├── manifest.json
    │   ├── README.md
    │   ├── sbom.cdx.json
    │   ├── reachability.json
    │   ├── vex/
    │   ├── attestations/
    │   ├── policy/
    │   ├── delta.json
    │   ├── replay-command.txt
    │   ├── replay.sh
    │   └── replay.ps1
    ├── {findingId2}/
    │   └── ...
    └── ...
```

## Manifest Schema

### Finding Manifest (manifest.json)

```json
{
  "schemaVersion": "1.0",
  "findingId": "f-abc123",
  "generatedAt": "2025-01-15T10:30:00Z",
  "cacheKey": "sha256:abc123...",
  "scannerVersion": "10.1.3",
  "files": [
    {
      "path": "sbom.cdx.json",
      "sha256": "abc123def456...",
      "size": 12345,
      "contentType": "application/json"
    },
    {
      "path": "reachability.json",
      "sha256": "789xyz...",
      "size": 5678,
      "contentType": "application/json"
    }
  ]
}
```

### Run Manifest (MANIFEST.json)

```json
{
  "schemaVersion": "1.0",
  "scanId": "scan-xyz789",
  "generatedAt": "2025-01-15T10:30:00Z",
  "totalFiles": 42,
  "scannerVersion": "10.1.3",
  "findings": [
    {
      "findingId": "f-abc123",
      "generatedAt": "2025-01-15T10:30:00Z",
      "cacheKey": "sha256:abc123...",
      "files": [...]
    },
    {
      "findingId": "f-def456",
      "generatedAt": "2025-01-15T10:30:00Z",
      "cacheKey": "sha256:def456...",
      "files": [...]
    }
  ]
}
```
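The `files` array makes each bundle self-verifying. A small sketch that recomputes each file's SHA-256 against `manifest.json` (field names as in the schema above; the function name is illustrative):

```python
import hashlib
import json
from pathlib import Path

def verify_manifest(bundle_dir: str) -> list:
    """Return the paths whose on-disk SHA-256 does not match the manifest."""
    root = Path(bundle_dir)
    manifest = json.loads((root / "manifest.json").read_text())
    mismatches = []
    for entry in manifest["files"]:
        actual = hashlib.sha256((root / entry["path"]).read_bytes()).hexdigest()
        if actual != entry["sha256"]:
            mismatches.append(entry["path"])
    return mismatches
```

An empty return list means every file in the bundle matches its recorded hash.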

## Replay Scripts

### Bash Script (replay.sh)

```bash
#!/usr/bin/env bash
# StellaOps Evidence Bundle Replay Script
# Generated: 2025-01-15T10:30:00Z
# Finding: f-abc123
# CVE: CVE-2024-1234

set -euo pipefail

# Input hashes for deterministic replay
ARTIFACT_DIGEST="sha256:a1b2c3d4e5f6..."
MANIFEST_HASH="sha256:abc123def456..."
FEED_HASH="sha256:feed789feed..."
POLICY_HASH="sha256:policy321..."

# Verify prerequisites
if ! command -v stella &> /dev/null; then
  echo "Error: stella CLI not found. Install from https://stellaops.org/install"
  exit 1
fi

echo "Replaying verdict for finding: ${ARTIFACT_DIGEST}"
echo "Using manifest: ${MANIFEST_HASH}"

# Execute replay
stella scan replay \
  --artifact "${ARTIFACT_DIGEST}" \
  --manifest "${MANIFEST_HASH}" \
  --feeds "${FEED_HASH}" \
  --policy "${POLICY_HASH}"

echo "Replay complete. Verify verdict matches original."
```

### PowerShell Script (replay.ps1)

```powershell
# StellaOps Evidence Bundle Replay Script
# Generated: 2025-01-15T10:30:00Z
# Finding: f-abc123
# CVE: CVE-2024-1234

$ErrorActionPreference = 'Stop'

# Input hashes for deterministic replay
$ArtifactDigest = "sha256:a1b2c3d4e5f6..."
$ManifestHash = "sha256:abc123def456..."
$FeedHash = "sha256:feed789feed..."
$PolicyHash = "sha256:policy321..."

# Verify prerequisites
if (-not (Get-Command stella -ErrorAction SilentlyContinue)) {
    Write-Error "stella CLI not found. Install from https://stellaops.org/install"
    exit 1
}

Write-Host "Replaying verdict for finding: $ArtifactDigest"
Write-Host "Using manifest: $ManifestHash"

# Execute replay
stella scan replay `
    --artifact $ArtifactDigest `
    --manifest $ManifestHash `
    --feeds $FeedHash `
    --policy $PolicyHash

Write-Host "Replay complete. Verify verdict matches original."
```

## README Format

### Finding README (README.md)

````markdown
# StellaOps Evidence Bundle

## Overview

- **Finding ID:** `f-abc123`
- **CVE:** `CVE-2024-1234`
- **Component:** `pkg:npm/lodash@4.17.15`
- **Generated:** 2025-01-15T10:30:00Z

## Input Hashes for Deterministic Replay

| Input | Hash |
|-------|------|
| Artifact Digest | `sha256:a1b2c3d4e5f6...` |
| Run Manifest | `sha256:abc123def456...` |
| Feed Snapshot | `sha256:feed789feed...` |
| Policy | `sha256:policy321...` |

## Replay Instructions

### Using Bash
```bash
chmod +x replay.sh
./replay.sh
```

### Using PowerShell
```powershell
.\replay.ps1
```

## Bundle Contents

| File | SHA-256 | Size |
|------|---------|------|
| `sbom.cdx.json` | `abc123...` | 12.3 KB |
| `reachability.json` | `789xyz...` | 5.6 KB |
| ... | ... | ... |

## Verification Status

- **Status:** verified
- **Hashes Verified:** ✓
- **Attestations Verified:** ✓
- **Evidence Complete:** ✓

---

*Generated by StellaOps Scanner*
````

## Integrity Verification

To verify bundle integrity:

1. **Download with digest header**: The `X-Archive-Digest` response header contains the archive's SHA-256 hash
2. **Verify archive hash**: `sha256sum evidence-{findingId}.zip`
3. **Verify file hashes**: Compare each file's SHA-256 against `manifest.json`

Example verification:

```bash
# Verify archive integrity
EXPECTED_HASH="abc123..."
ACTUAL_HASH=$(sha256sum evidence-f-abc123.zip | cut -d' ' -f1)
if [ "$EXPECTED_HASH" = "$ACTUAL_HASH" ]; then
  echo "Archive integrity verified"
else
  echo "Archive integrity check FAILED"
  exit 1
fi

# Verify individual files
cd evidence-f-abc123
for file in $(jq -r '.files[].path' manifest.json); do
  expected=$(jq -r ".files[] | select(.path==\"$file\") | .sha256" manifest.json)
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$expected" = "$actual" ]; then
    echo "✓ $file"
  else
    echo "✗ $file"
  fi
done
```

## Content Types

| File Type | Content-Type | Description |
|-----------|--------------|-------------|
| `.json` | `application/json` | JSON data files |
| `.cdx.json` | `application/json` | CycloneDX SBOM |
| `.dsse.json` | `application/json` | DSSE envelope |
| `.sh` | `text/x-shellscript` | Bash script |
| `.ps1` | `text/plain` | PowerShell script |
| `.md` | `text/markdown` | Markdown documentation |
| `.txt` | `text/plain` | Plain text |

## See Also

- [stella scan replay Command Reference](../cli/guides/commands/scan-replay.md)
- [Deterministic Replay Specification](../replay/DETERMINISTIC_REPLAY.md)
- [Unified Evidence Endpoint API](./unified-evidence-endpoint.md)

docs/modules/cli/guides/commands/scan-replay.md (new file, 162 lines)
@@ -0,0 +1,162 @@
# scan replay Command Reference

The `stella scan replay` command performs deterministic verdict reproduction using explicit input hashes.

## Synopsis

```bash
stella scan replay [options]
```

## Description

Replays a scan with explicit hashes for **deterministic verdict reproduction**. This command enables:

- **Reproducibility**: Re-execute a scan with the same inputs to verify identical results
- **Audit compliance**: Prove historical decisions can be recreated
- **Offline verification**: Replay verdicts in air-gapped environments

Unlike `stella replay --manifest <file>`, which uses a manifest file, `stella scan replay` accepts individual hash parameters directly, making it suitable for:

- Commands copied from evidence bundles
- CI/CD pipeline integration
- Backend-generated replay commands

## Options

### Required Parameters

| Option | Description |
|--------|-------------|
| `--artifact <digest>` | Artifact digest to replay (e.g., `sha256:abc123...`) |
| `--manifest <hash>` | Run manifest hash for configuration |
| `--feeds <hash>` | Feed snapshot hash at time of scan |
| `--policy <hash>` | Policy ruleset hash |

### Optional Parameters

| Option | Description |
|--------|-------------|
| `--snapshot <id>` | Knowledge snapshot ID for offline replay |
| `--offline` | Run in offline/air-gapped mode. Requires all inputs to be locally cached |
| `--verify-inputs` | Verify all input hashes before starting replay |
| `-o, --output <path>` | Output file path for verdict JSON (defaults to stdout) |
| `--verbose` | Enable verbose output with hash confirmation |

## Usage Examples

### Basic Replay

```bash
stella scan replay \
  --artifact sha256:a1b2c3d4e5f6... \
  --manifest sha256:abc123def456... \
  --feeds sha256:feed789feed... \
  --policy sha256:policy321...
```

### Replay with Knowledge Snapshot

```bash
stella scan replay \
  --artifact sha256:a1b2c3d4e5f6... \
  --manifest sha256:abc123def456... \
  --feeds sha256:feed789feed... \
  --policy sha256:policy321... \
  --snapshot KS-2025-01-15-001
```

### Offline Replay with Verification

```bash
stella scan replay \
  --artifact sha256:a1b2c3d4e5f6... \
  --manifest sha256:abc123def456... \
  --feeds sha256:feed789feed... \
  --policy sha256:policy321... \
  --offline \
  --verify-inputs \
  --verbose
```

### Save Output to File

```bash
stella scan replay \
  --artifact sha256:a1b2c3d4e5f6... \
  --manifest sha256:abc123def456... \
  --feeds sha256:feed789feed... \
  --policy sha256:policy321... \
  --output replay-result.json
```

## Input Hash Verification

When `--verify-inputs` is specified, the command validates:

1. **Artifact digest format**: Must start with `sha256:` or `sha512:`
2. **Hash lengths**: SHA-256 = 64 hex characters, SHA-512 = 128 hex characters
3. **Local availability** (in offline mode): Verifies cached inputs exist
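The format rules above amount to a prefix plus length check. A minimal sketch (illustrative only; the CLI's actual validation logic is internal and may differ):

```python
import re

_DIGEST_RE = re.compile(r"^(sha256|sha512):([0-9a-f]+)$")
_HEX_LENGTHS = {"sha256": 64, "sha512": 128}

def is_valid_digest(value: str) -> bool:
    """Check the algorithm prefix and hex length, mirroring rules 1 and 2."""
    m = _DIGEST_RE.match(value)
    if not m:
        return False
    algo, hexpart = m.groups()
    return len(hexpart) == _HEX_LENGTHS[algo]
```

Rejecting malformed digests up front avoids starting a replay that could never match the original inputs.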

## Offline Mode

The `--offline` flag enables air-gapped replay:

- All inputs must be pre-cached locally
- No network calls are made
- Use `stella offline prepare` to pre-fetch required data

## Output Format

```json
{
  "status": "success",
  "artifactDigest": "sha256:a1b2c3d4e5f6...",
  "manifestHash": "sha256:abc123def456...",
  "feedSnapshotHash": "sha256:feed789feed...",
  "policyHash": "sha256:policy321...",
  "knowledgeSnapshotId": "KS-2025-01-15-001",
  "offlineMode": false,
  "startedAt": "2025-01-15T10:30:00Z",
  "completedAt": "2025-01-15T10:30:45Z",
  "verdict": {
    "findingId": "f-abc123",
    "status": "affected",
    "confidence": 0.95
  }
}
```

## Integration with Evidence Bundles

Evidence bundles generated by the `/v1/triage/findings/{id}/evidence/export` endpoint include ready-to-run replay scripts:

- `replay.sh` - Bash script for Linux/macOS
- `replay.ps1` - PowerShell script for Windows
- `replay-command.txt` - Raw command for copy-paste

Example from evidence bundle:

```bash
# From evidence bundle replay.sh
stella scan replay \
  --artifact "sha256:a1b2c3d4e5f6..." \
  --manifest "sha256:abc123def456..." \
  --feeds "sha256:feed789feed..." \
  --policy "sha256:policy321..."
```

## Related Commands

| Command | Description |
|---------|-------------|
| `stella replay --manifest <file>` | Replay using a manifest file |
| `stella replay verify` | Verify determinism by replaying twice |
| `stella replay snapshot` | Replay using knowledge snapshot ID |
| `stella offline prepare` | Pre-fetch data for offline replay |

## See Also

- [Deterministic Replay Specification](../../replay/DETERMINISTIC_REPLAY.md)
- [Offline Kit Documentation](../../24_OFFLINE_KIT.md)
- [Evidence Bundle Format](./evidence-bundle-format.md)

@@ -317,7 +317,7 @@ public interface IFeedConnector {
|
|||||||
| `advisory.observation.updated@1` | `events/advisory.observation.updated@1.json` | Fired on new or superseded observations. Includes `observationId`, source metadata, `linksetSummary` (aliases/purls), supersedes pointer (if any), SHA-256 hash, and `traceId`. |
|
| `advisory.observation.updated@1` | `events/advisory.observation.updated@1.json` | Fired on new or superseded observations. Includes `observationId`, source metadata, `linksetSummary` (aliases/purls), supersedes pointer (if any), SHA-256 hash, and `traceId`. |
|
||||||
| `advisory.linkset.updated@1` | `events/advisory.linkset.updated@1.json` | Fired when correlation changes. Includes `linksetId`, `key{vulnerabilityId, productKey, confidence}`, observation deltas, conflicts, `updatedAt`, and canonical hash. |
|
| `advisory.linkset.updated@1` | `events/advisory.linkset.updated@1.json` | Fired when correlation changes. Includes `linksetId`, `key{vulnerabilityId, productKey, confidence}`, observation deltas, conflicts, `updatedAt`, and canonical hash. |
|
||||||
|
|
||||||
Events are emitted via NATS (primary) and Redis Stream (fallback). Consumers acknowledge idempotently using the hash; duplicates are safe. Offline Kit captures both topics during bundle creation for air-gapped replay.
|
Events are emitted via NATS (primary) and Valkey Stream (fallback). Consumers acknowledge idempotently using the hash; duplicates are safe. Offline Kit captures both topics during bundle creation for air-gapped replay.
|
||||||
|
|
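Idempotent acknowledgement can be sketched as a seen-set keyed on the event hash — a toy model; real consumers would persist the set durably rather than in a temp file:

```bash
# Deduplicate events by hash: process once, ack-and-skip replays.
seen_file=$(mktemp)
handle_event() {
  if grep -qx "$1" "$seen_file"; then
    echo "duplicate $1: ack and skip"
  else
    echo "$1" >> "$seen_file"
    echo "processed $1"
  fi
}
handle_event sha256:aaa
handle_event sha256:bbb
handle_event sha256:aaa   # safe duplicate: acked without reprocessing
```

Because the key is the content hash, redelivery from either transport (NATS or the fallback stream) is harmless.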
||||||
---
|
---
|
||||||
|
|
||||||
|
|||||||
@@ -42,7 +42,7 @@ Purpose: unblock CONCELIER-LNM-21-005 by freezing the platform event shape for l
|
|||||||
- No judgments: only raw facts, delta descriptions, and provenance pointers; any derived severity/merge content is forbidden.
|
- No judgments: only raw facts, delta descriptions, and provenance pointers; any derived severity/merge content is forbidden.
|
||||||
|
|
||||||
### Error contracts for Scheduler
|
### Error contracts for Scheduler
|
||||||
- Retryable NATS/Redis failures use backoff capped at 30s; after 5 attempts, emit `concelier.events.dlq` with the same envelope and `error` field describing transport failure.
|
- Retryable NATS/Valkey failures use backoff capped at 30s; after 5 attempts, emit `concelier.events.dlq` with the same envelope and `error` field describing transport failure.
|
||||||
- Consumers must NACK on schema validation failure; publisher logs `ERR_EVENT_SCHEMA` and quarantines the offending linkset id.
|
- Consumers must NACK on schema validation failure; publisher logs `ERR_EVENT_SCHEMA` and quarantines the offending linkset id.
|
||||||
|
|
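The retry schedule above can be illustrated as exponential backoff capped at 30 s — a hypothetical sketch mirroring the contract (5 attempts, then DLQ), not actual publisher code:

```bash
# Exponential backoff (2^attempt seconds) capped at 30s; DLQ after 5 attempts.
backoff() {
  d=$(( 1 << $1 ))            # 2, 4, 8, 16, 32...
  [ "$d" -gt 30 ] && d=30     # cap at 30 seconds
  echo "$d"
}
for attempt in 1 2 3 4 5; do
  echo "attempt $attempt: sleep $(backoff "$attempt")s"
done
echo "attempt 6+: emit concelier.events.dlq"
```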
||||||
## Sample payload
|
## Sample payload
|
||||||
|
|||||||
@@ -31,7 +31,7 @@ Purpose: unblock CONCELIER-GRAPH-21-002 by freezing the platform event shape for
|
|||||||
- No judgments: only raw facts and hash pointers; any derived severity/merge content is forbidden.
|
- No judgments: only raw facts and hash pointers; any derived severity/merge content is forbidden.
|
||||||
|
|
||||||
### Error contracts for Scheduler
|
### Error contracts for Scheduler
|
||||||
- Retryable NATS/Redis failures use backoff capped at 30s; after 5 attempts, emit `concelier.events.dlq` with the same envelope and `error` field describing transport failure.
|
- Retryable NATS/Valkey failures use backoff capped at 30s; after 5 attempts, emit `concelier.events.dlq` with the same envelope and `error` field describing transport failure.
|
||||||
- Consumers must NACK on schema validation failure; publisher logs `ERR_EVENT_SCHEMA` and quarantines the offending observation id.
|
- Consumers must NACK on schema validation failure; publisher logs `ERR_EVENT_SCHEMA` and quarantines the offending observation id.
|
||||||
|
|
||||||
## Sample payload
|
## Sample payload
|
||||||
|
|||||||
@@ -56,7 +56,7 @@ concelier:
|
|||||||
- `histogram_quantile(0.95, rate(apple_map_affected_count_bucket[1h]))` to watch affected-package fan-out
|
- `histogram_quantile(0.95, rate(apple_map_affected_count_bucket[1h]))` to watch affected-package fan-out
|
||||||
- `increase(apple_parse_failures_total[6h])` to catch parser drift (alerts at `>0`)
|
- `increase(apple_parse_failures_total[6h])` to catch parser drift (alerts at `>0`)
|
||||||
- **Alerts** – Page if `rate(apple_fetch_items_total[2h]) == 0` during business hours while other connectors are active. This often indicates lookup feed failures or misconfigured allow-lists.
|
- **Alerts** – Page if `rate(apple_fetch_items_total[2h]) == 0` during business hours while other connectors are active. This often indicates lookup feed failures or misconfigured allow-lists.
|
||||||
- **Logs** – Surface warnings `Apple document {DocumentId} missing GridFS payload` or `Apple parse failed`—repeated hits imply storage issues or HTML regressions.
|
- **Logs** – Surface warnings `Apple document {DocumentId} missing document payload` or `Apple parse failed`—repeated hits imply storage issues or HTML regressions.
|
||||||
- **Telemetry pipeline** – `StellaOps.Concelier.WebService` now exports `StellaOps.Concelier.Connector.Vndr.Apple` alongside existing Concelier meters; ensure your OTEL collector or Prometheus scraper includes it.
|
- **Telemetry pipeline** – `StellaOps.Concelier.WebService` now exports `StellaOps.Concelier.Connector.Vndr.Apple` alongside existing Concelier meters; ensure your OTEL collector or Prometheus scraper includes it.
|
||||||
|
|
||||||
## 4. Fixture Maintenance
|
## 4. Fixture Maintenance
|
||||||
|
|||||||
@@ -40,7 +40,7 @@ concelier:
|
|||||||
- `CCCS fetch completed feeds=… items=… newDocuments=… pendingDocuments=…`
|
- `CCCS fetch completed feeds=… items=… newDocuments=… pendingDocuments=…`
|
||||||
- `CCCS parse completed parsed=… failures=…`
|
- `CCCS parse completed parsed=… failures=…`
|
||||||
- `CCCS map completed mapped=… failures=…`
|
- `CCCS map completed mapped=… failures=…`
|
||||||
- Warnings fire when GridFS payloads/DTOs go missing or parser sanitisation fails.
|
- Warnings fire when document payloads/DTOs go missing or parser sanitisation fails.
|
||||||
|
|
||||||
Suggested Grafana alerts:
|
Suggested Grafana alerts:
|
||||||
- `increase(cccs.fetch.failures_total[15m]) > 0`
|
- `increase(cccs.fetch.failures_total[15m]) > 0`
|
||||||
|
|||||||
@@ -78,7 +78,7 @@ This runbook describes how Ops provisions, rotates, and distributes Cisco PSIRT
|
|||||||
- `Cisco fetch completed date=… pages=… added=…` (info)
|
- `Cisco fetch completed date=… pages=… added=…` (info)
|
||||||
- `Cisco parse completed parsed=… failures=…` (info)
|
- `Cisco parse completed parsed=… failures=…` (info)
|
||||||
- `Cisco map completed mapped=… failures=…` (info)
|
- `Cisco map completed mapped=… failures=…` (info)
|
||||||
- Warnings surface when DTO serialization fails or GridFS payload is missing.
|
- Warnings surface when DTO serialization fails or document payload is missing.
|
||||||
- Suggested alerts: non-zero `cisco.fetch.failures` in 15m, or `cisco.map.success` flatlines while fetch continues.
|
- Suggested alerts: non-zero `cisco.fetch.failures` in 15m, or `cisco.map.success` flatlines while fetch continues.
|
||||||
|
|
||||||
## 8. Incident response
|
## 8. Incident response
|
||||||
|
|||||||
@@ -37,7 +37,7 @@ concelier:
|
|||||||
- `KISA feed returned {ItemCount}`
|
- `KISA feed returned {ItemCount}`
|
||||||
- `KISA fetched detail for {Idx} … category={Category}`
|
- `KISA fetched detail for {Idx} … category={Category}`
|
||||||
- `KISA mapped advisory {AdvisoryId} (severity={Severity})`
|
- `KISA mapped advisory {AdvisoryId} (severity={Severity})`
|
||||||
- Absence of warnings such as `document missing GridFS payload`.
|
- Absence of warnings such as `document missing payload`.
|
||||||
5. Validate PostgreSQL state (schema `vuln`):
|
5. Validate PostgreSQL state (schema `vuln`):
|
||||||
- `raw_documents` table metadata has `kisa.idx`, `kisa.category`, `kisa.title`.
|
- `raw_documents` table metadata has `kisa.idx`, `kisa.category`, `kisa.title`.
|
||||||
- `dtos` table contains `schemaVersion="kisa.detail.v1"`.
|
- `dtos` table contains `schemaVersion="kisa.detail.v1"`.
|
||||||
|
|||||||
@@ -43,6 +43,6 @@ For large migrations, seed caches with archived zip bundles, then run fetch/pars
|
|||||||
|
|
||||||
- Listing failures mark the source state with exponential backoff while attempting cache replay.
|
- Listing failures mark the source state with exponential backoff while attempting cache replay.
|
||||||
- Bulletin fetches fall back to cached copies before surfacing an error.
|
- Bulletin fetches fall back to cached copies before surfacing an error.
|
||||||
- Mongo integration tests rely on bundled OpenSSL 1.1 libraries (`src/Tools/openssl/linux-x64`) to keep `Mongo2Go` operational on modern distros.
|
- Integration tests use Testcontainers with PostgreSQL for connector verification.
|
||||||
|
|
||||||
Refer to `ru-nkcki` entries in `src/Concelier/StellaOps.Concelier.PluginBinaries/StellaOps.Concelier.Connector.Ru.Nkcki/TASKS.md` for outstanding items.
|
Refer to `ru-nkcki` entries in `src/Concelier/StellaOps.Concelier.PluginBinaries/StellaOps.Concelier.Connector.Ru.Nkcki/TASKS.md` for outstanding items.
|
||||||
|
|||||||
@@ -314,9 +314,9 @@ rustfs://stellaops/
|
|||||||
|
|
||||||
### 7.5 PostgreSQL server baseline
|
### 7.5 PostgreSQL server baseline
|
||||||
|
|
||||||
* **Minimum supported server:** MongoDB **4.2+**. Driver 3.5.0 removes compatibility shims for 4.0; upstream has already announced 4.0 support will be dropped in upcoming C# driver releases.
|
* **Minimum supported server:** PostgreSQL **16+**. Earlier versions lack required features (e.g., enhanced JSON functions, performance improvements).
|
||||||
* **Deploy images:** Compose/Helm defaults stay on `postgres:16`. For air-gapped installs, refresh Offline Kit bundles so the packaged `postgres` matches ≥4.2.
|
* **Deploy images:** Compose/Helm defaults stay on `postgres:16`. For air-gapped installs, refresh Offline Kit bundles so the packaged PostgreSQL image matches ≥16.
|
||||||
* **Upgrade guard:** During rollout, verify replica sets reach FCV `4.2` or above before swapping binaries; automation should hard-stop if FCV is <4.2.
|
* **Upgrade guard:** During rollout, verify PostgreSQL major version ≥16 before applying schema migrations; automation should hard-stop if the version check fails.
|
||||||
|
|
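A hedged sketch of that guard — the threshold comes from the baseline above, while the `psql` invocation and connection-string variable are illustrative:

```bash
# Hard-stop unless server_version_num reports PostgreSQL 16+ (>= 160000).
check_pg_version() {
  # In production: ver=$(psql "$DATABASE_URL" -Atc "SHOW server_version_num")
  ver=$1
  if [ "${ver:-0}" -lt 160000 ]; then
    echo "PostgreSQL >= 16 required (got server_version_num=$ver)" >&2
    return 1
  fi
  echo "version check passed ($ver)"
}
check_pg_version 160004
```

`server_version_num` is preferred over parsing `SHOW server_version` because it is a plain integer that compares cleanly.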
||||||
---
|
---
|
||||||
|
|
||||||
@@ -351,7 +351,7 @@ Prometheus + OTLP; Grafana dashboards ship in the charts.
|
|||||||
* **Vulnerability response**:
|
* **Vulnerability response**:
|
||||||
|
|
||||||
* Concelier red-flag advisories trigger accelerated **stable** patch rollout; UI/CLI “security patch available” notice.
|
* Concelier red-flag advisories trigger accelerated **stable** patch rollout; UI/CLI “security patch available” notice.
|
||||||
* 2025-10: Pinned `MongoDB.Driver` **3.5.0** and `SharpCompress` **0.41.0** across services (DEVOPS-SEC-10-301) to eliminate NU1902/NU1903 warnings surfaced during scanner cache/worker test runs; repacked the local `Mongo2Go` feed so test fixtures inherit the patched dependencies; future bumps follow the same central override pattern.
|
* 2025-10: Pinned `SharpCompress` **0.41.0** across services (DEVOPS-SEC-10-301) to eliminate NU1903 warnings; future bumps follow the central override pattern. MongoDB dependencies were removed in Sprint 4400 (all persistence now uses PostgreSQL).
|
||||||
|
|
||||||
* **Backups/DR**:
|
* **Backups/DR**:
|
||||||
|
|
||||||
|
|||||||
@@ -2,6 +2,8 @@
|
|||||||
|
|
||||||
_Updated: 2025-10-26 (UTC)_
|
_Updated: 2025-10-26 (UTC)_
|
||||||
|
|
||||||
|
> **Note (2025-12):** This document reflects the state at initial launch. Since then, MongoDB has been fully removed (Sprint 4400) and replaced with PostgreSQL. Redis references now use Valkey. See current deployment docs in `deploy/` for up-to-date configuration.
|
||||||
|
|
||||||
This document captures production launch sign-offs, deployment readiness checkpoints, and any open risks that must be tracked before GA cutover.
|
This document captures production launch sign-offs, deployment readiness checkpoints, and any open risks that must be tracked before GA cutover.
|
||||||
|
|
||||||
## 1. Sign-off Summary
|
## 1. Sign-off Summary
|
||||||
|
|||||||
360
docs/modules/evidence/unified-model.md
Normal file
360
docs/modules/evidence/unified-model.md
Normal file
@@ -0,0 +1,360 @@
|
|||||||
|
# Unified Evidence Model
|
||||||
|
|
||||||
|
> **Module:** `StellaOps.Evidence.Core`
|
||||||
|
> **Status:** Production
|
||||||
|
> **Owner:** Platform Guild
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
The Unified Evidence Model provides a standardized interface (`IEvidence`) and implementation (`EvidenceRecord`) for representing evidence across all StellaOps modules. This enables:
|
||||||
|
|
||||||
|
- **Cross-module evidence linking**: Evidence from Scanner, Attestor, Excititor, and Policy modules share a common contract.
|
||||||
|
- **Content-addressed verification**: Evidence records are immutable and verifiable via deterministic hashing.
|
||||||
|
- **Unified storage**: A single `IEvidenceStore` interface abstracts persistence across modules.
|
||||||
|
- **Cryptographic attestation**: Multiple signatures from different signers (internal, vendor, CI, operator) can vouch for evidence.
|
||||||
|
|
||||||
|
## Core Types
|
||||||
|
|
||||||
|
### IEvidence Interface
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
public interface IEvidence
|
||||||
|
{
|
||||||
|
string SubjectNodeId { get; } // Content-addressed subject
|
||||||
|
EvidenceType EvidenceType { get; } // Type discriminator
|
||||||
|
string EvidenceId { get; } // Computed hash identifier
|
||||||
|
ReadOnlyMemory<byte> Payload { get; } // Canonical JSON payload
|
||||||
|
IReadOnlyList<EvidenceSignature> Signatures { get; }
|
||||||
|
EvidenceProvenance Provenance { get; }
|
||||||
|
string? ExternalPayloadCid { get; } // For large payloads
|
||||||
|
string PayloadSchemaVersion { get; }
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### EvidenceType Enum
|
||||||
|
|
||||||
|
The platform supports these evidence types:
|
||||||
|
|
||||||
|
| Type | Value | Description | Example Payload |
|
||||||
|
|------|-------|-------------|-----------------|
|
||||||
|
| `Reachability` | 1 | Call graph analysis | Paths, confidence, graph digest |
|
||||||
|
| `Scan` | 2 | Vulnerability finding | CVE, severity, affected package |
|
||||||
|
| `Policy` | 3 | Policy evaluation | Rule ID, verdict, inputs |
|
||||||
|
| `Artifact` | 4 | SBOM entry metadata | PURL, digest, build info |
|
||||||
|
| `Vex` | 5 | VEX statement | Status, justification, impact |
|
||||||
|
| `Epss` | 6 | EPSS score | Score, percentile, model date |
|
||||||
|
| `Runtime` | 7 | Runtime observation | eBPF/ETW traces, call frames |
|
||||||
|
| `Provenance` | 8 | Build provenance | SLSA attestation, builder info |
|
||||||
|
| `Exception` | 9 | Applied exception | Exception ID, reason, expiry |
|
||||||
|
| `Guard` | 10 | Guard/gate analysis | Gate type, condition, bypass |
|
||||||
|
| `Kev` | 11 | KEV status | In-KEV flag, added date |
|
||||||
|
| `License` | 12 | License analysis | SPDX ID, compliance status |
|
||||||
|
| `Dependency` | 13 | Dependency metadata | Graph edge, version range |
|
||||||
|
| `Custom` | 100 | User-defined | Schema-versioned custom payload |
|
||||||
|
|
||||||
|
### EvidenceRecord
|
||||||
|
|
||||||
|
The concrete implementation with deterministic identity:
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
public sealed record EvidenceRecord : IEvidence
|
||||||
|
{
|
||||||
|
public static EvidenceRecord Create(
|
||||||
|
string subjectNodeId,
|
||||||
|
EvidenceType evidenceType,
|
||||||
|
ReadOnlyMemory<byte> payload,
|
||||||
|
EvidenceProvenance provenance,
|
||||||
|
string payloadSchemaVersion,
|
||||||
|
IReadOnlyList<EvidenceSignature>? signatures = null,
|
||||||
|
string? externalPayloadCid = null);
|
||||||
|
|
||||||
|
public bool VerifyIntegrity();
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**EvidenceId Computation:**
|
||||||
|
|
||||||
|
The `EvidenceId` is a SHA-256 hash of the canonicalized fields using versioned prefixing:
|
||||||
|
|
||||||
|
```
|
||||||
|
EvidenceId = "evidence:" + CanonJson.HashVersionedPrefixed("IEvidence", "v1", {
|
||||||
|
SubjectNodeId,
|
||||||
|
EvidenceType,
|
||||||
|
PayloadHash,
|
||||||
|
Provenance.GeneratorId,
|
||||||
|
Provenance.GeneratorVersion,
|
||||||
|
Provenance.GeneratedAt (ISO 8601)
|
||||||
|
})
|
||||||
|
```
|
||||||
|
|
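A shell approximation of the computation — the exact `CanonJson` canonicalization and prefix separator are assumptions; only the shape (versioned prefix, SHA-256, `evidence:` scheme) comes from the definition above:

```bash
# Versioned-prefixed hash over the canonical identity fields (illustrative
# field values; the real canonical form is produced by CanonJson).
canon='{"EvidenceType":2,"GeneratedAt":"2025-01-15T10:30:00Z","GeneratorId":"scanner-service","GeneratorVersion":"2.1.0","PayloadHash":"sha256:abc","SubjectNodeId":"sha256:abc123"}'
digest=$(printf 'IEvidence/v1:%s' "$canon" | sha256sum | cut -d' ' -f1)
echo "evidence:$digest"
```

Because every input field is deterministic, re-hashing the same record always yields the same `EvidenceId`, which is what `VerifyIntegrity()` checks.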
||||||
|
### EvidenceSignature
|
||||||
|
|
||||||
|
Cryptographic attestation by a signer:
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
public sealed record EvidenceSignature
|
||||||
|
{
|
||||||
|
public required string SignerId { get; init; }
|
||||||
|
public required string Algorithm { get; init; } // ES256, RS256, EdDSA
|
||||||
|
public required string SignatureBase64 { get; init; }
|
||||||
|
public required DateTimeOffset SignedAt { get; init; }
|
||||||
|
public SignerType SignerType { get; init; }
|
||||||
|
public IReadOnlyList<string>? CertificateChain { get; init; }
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**SignerType Values:**
|
||||||
|
- `Internal` (0): StellaOps service
|
||||||
|
- `Vendor` (1): External vendor/supplier
|
||||||
|
- `CI` (2): CI/CD pipeline
|
||||||
|
- `Operator` (3): Human operator
|
||||||
|
- `TransparencyLog` (4): Rekor/transparency log
|
||||||
|
- `Scanner` (5): Security scanner
|
||||||
|
- `PolicyEngine` (6): Policy engine
|
||||||
|
- `Unknown` (255): Unclassified
|
||||||
|
|
||||||
|
### EvidenceProvenance
|
||||||
|
|
||||||
|
Generation context:
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
public sealed record EvidenceProvenance
|
||||||
|
{
|
||||||
|
public required string GeneratorId { get; init; }
|
||||||
|
public required string GeneratorVersion { get; init; }
|
||||||
|
public required DateTimeOffset GeneratedAt { get; init; }
|
||||||
|
public string? CorrelationId { get; init; }
|
||||||
|
public Guid? TenantId { get; init; }
|
||||||
|
// ... additional fields
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Adapters
|
||||||
|
|
||||||
|
Adapters convert module-specific evidence types to the unified `IEvidence` interface:
|
||||||
|
|
||||||
|
### Available Adapters
|
||||||
|
|
||||||
|
| Adapter | Source Module | Source Type | Target Evidence Types |
|
||||||
|
|---------|---------------|-------------|----------------------|
|
||||||
|
| `EvidenceBundleAdapter` | Scanner | `EvidenceBundle` | Reachability, Vex, Provenance, Scan |
|
||||||
|
| `EvidenceStatementAdapter` | Attestor | `EvidenceStatement` (in-toto) | Scan |
|
||||||
|
| `ProofSegmentAdapter` | Scanner | `ProofSegment` | Varies by segment type |
|
||||||
|
| `VexObservationAdapter` | Excititor | `VexObservation` | Vex, Provenance |
|
||||||
|
| `ExceptionApplicationAdapter` | Policy | `ExceptionApplication` | Exception |
|
||||||
|
|
||||||
|
### Adapter Interface
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
public interface IEvidenceAdapter<TSource>
|
||||||
|
{
|
||||||
|
IReadOnlyList<IEvidence> Convert(
|
||||||
|
TSource source,
|
||||||
|
string subjectNodeId,
|
||||||
|
EvidenceProvenance provenance);
|
||||||
|
|
||||||
|
bool CanConvert(TSource source);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Using Adapters
|
||||||
|
|
||||||
|
Adapters use **input DTOs** to avoid circular dependencies:
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
// Using VexObservationAdapter
|
||||||
|
var adapter = new VexObservationAdapter();
|
||||||
|
|
||||||
|
var input = new VexObservationInput
|
||||||
|
{
|
||||||
|
ObservationId = "obs-001",
|
||||||
|
ProviderId = "nvd",
|
||||||
|
StreamId = "cve-feed",
|
||||||
|
// ... other fields from VexObservation
|
||||||
|
};
|
||||||
|
|
||||||
|
var provenance = new EvidenceProvenance
|
||||||
|
{
|
||||||
|
GeneratorId = "excititor-ingestor",
|
||||||
|
GeneratorVersion = "1.0.0",
|
||||||
|
GeneratedAt = DateTimeOffset.UtcNow
|
||||||
|
};
|
||||||
|
|
||||||
|
if (adapter.CanConvert(input))
|
||||||
|
{
|
||||||
|
IReadOnlyList<IEvidence> records = adapter.Convert(
|
||||||
|
input,
|
||||||
|
subjectNodeId: "sha256:abc123",
|
||||||
|
provenance);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Evidence Store
|
||||||
|
|
||||||
|
### IEvidenceStore Interface
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
public interface IEvidenceStore
|
||||||
|
{
|
||||||
|
Task<EvidenceRecord> StoreAsync(
|
||||||
|
EvidenceRecord record,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
|
||||||
|
Task<IReadOnlyList<EvidenceRecord>> StoreBatchAsync(
|
||||||
|
IEnumerable<EvidenceRecord> records,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
|
||||||
|
Task<EvidenceRecord?> GetByIdAsync(
|
||||||
|
string evidenceId,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
|
||||||
|
Task<IReadOnlyList<EvidenceRecord>> GetBySubjectAsync(
|
||||||
|
string subjectNodeId,
|
||||||
|
EvidenceType? evidenceType = null,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
|
||||||
|
Task<IReadOnlyList<EvidenceRecord>> GetByTypeAsync(
|
||||||
|
EvidenceType evidenceType,
|
||||||
|
int limit = 100,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
|
||||||
|
Task<bool> ExistsAsync(
|
||||||
|
string evidenceId,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
|
||||||
|
Task<bool> DeleteAsync(
|
||||||
|
string evidenceId,
|
||||||
|
CancellationToken ct = default);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Implementations
|
||||||
|
|
||||||
|
- **`InMemoryEvidenceStore`**: Thread-safe in-memory store for testing and development.
|
||||||
|
- **`PostgresEvidenceStore`** (planned): Production store with tenant isolation and indexing.
|
||||||
|
|
||||||
|
## Usage Examples
|
||||||
|
|
||||||
|
### Creating Evidence
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
var provenance = new EvidenceProvenance
|
||||||
|
{
|
||||||
|
GeneratorId = "scanner-service",
|
||||||
|
GeneratorVersion = "2.1.0",
|
||||||
|
GeneratedAt = DateTimeOffset.UtcNow,
|
||||||
|
TenantId = tenantId
|
||||||
|
};
|
||||||
|
|
||||||
|
// Serialize payload to canonical JSON
|
||||||
|
var payloadBytes = CanonJson.Canonicalize(new
|
||||||
|
{
|
||||||
|
cveId = "CVE-2024-1234",
|
||||||
|
severity = "HIGH",
|
||||||
|
affectedPackage = "pkg:npm/lodash@4.17.20"
|
||||||
|
});
|
||||||
|
|
||||||
|
var evidence = EvidenceRecord.Create(
|
||||||
|
subjectNodeId: "sha256:abc123def456...",
|
||||||
|
evidenceType: EvidenceType.Scan,
|
||||||
|
payload: payloadBytes,
|
||||||
|
provenance: provenance,
|
||||||
|
payloadSchemaVersion: "scan/v1");
|
||||||
|
```
|
||||||
|
|
||||||
|
### Storing and Retrieving
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
var store = new InMemoryEvidenceStore();
|
||||||
|
|
||||||
|
// Store
|
||||||
|
await store.StoreAsync(evidence);
|
||||||
|
|
||||||
|
// Retrieve by ID
|
||||||
|
var retrieved = await store.GetByIdAsync(evidence.EvidenceId);
|
||||||
|
|
||||||
|
// Retrieve all evidence for a subject
|
||||||
|
var allForSubject = await store.GetBySubjectAsync(
|
||||||
|
"sha256:abc123def456...",
|
||||||
|
evidenceType: EvidenceType.Scan);
|
||||||
|
|
||||||
|
// Verify integrity
|
||||||
|
bool isValid = retrieved!.VerifyIntegrity();
|
||||||
|
```
|
||||||
|
|
||||||
|
### Cross-Module Evidence Linking
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
// Scanner produces evidence bundle
|
||||||
|
var bundle = scanner.ProduceEvidenceBundle(target);
|
||||||
|
|
||||||
|
// Convert to unified evidence
|
||||||
|
var adapter = new EvidenceBundleAdapter();
|
||||||
|
var evidenceRecords = adapter.Convert(bundle, subjectNodeId, provenance);
|
||||||
|
|
||||||
|
// Store all records
|
||||||
|
await store.StoreBatchAsync(evidenceRecords);
|
||||||
|
|
||||||
|
// Later, any module can query by subject
|
||||||
|
var allEvidence = await store.GetBySubjectAsync(subjectNodeId);
|
||||||
|
|
||||||
|
// Filter by type
|
||||||
|
var reachabilityEvidence = allEvidence
|
||||||
|
.Where(e => e.EvidenceType == EvidenceType.Reachability);
|
||||||
|
var vexEvidence = allEvidence
|
||||||
|
.Where(e => e.EvidenceType == EvidenceType.Vex);
|
||||||
|
```
|
||||||
|
|
||||||
|
## Schema Versioning
|
||||||
|
|
||||||
|
Each evidence type payload has a schema version (`PayloadSchemaVersion`) for forward compatibility:
|
||||||
|
|
||||||
|
- `scan/v1`: Initial scan evidence schema
|
||||||
|
- `reachability/v1`: Reachability evidence schema
|
||||||
|
- `vex-statement/v1`: VEX statement evidence schema
|
||||||
|
- `proof-segment/v1`: Proof segment evidence schema
|
||||||
|
- `exception-application/v1`: Exception application schema
|
||||||
|
|
||||||
|
Consumers should check `PayloadSchemaVersion` before deserializing payloads to handle schema evolution.
|
||||||
|
|
||||||
|
## Integration Patterns
|
||||||
|
|
||||||
|
### Module Integration
|
||||||
|
|
||||||
|
Each module that produces evidence should:
|
||||||
|
|
||||||
|
1. Create an adapter if converting from module-specific types
|
||||||
|
2. Use `EvidenceRecord.Create()` for new evidence
|
||||||
|
3. Store evidence via `IEvidenceStore`
|
||||||
|
4. Include provenance with generator identification
|
||||||
|
|
||||||
|
### Verification Flow
|
||||||
|
|
||||||
|
```
|
||||||
|
1. Retrieve evidence by SubjectNodeId
|
||||||
|
2. Call VerifyIntegrity() to check EvidenceId
|
||||||
|
3. Verify signatures against known trust roots
|
||||||
|
4. Deserialize and validate payload against schema
|
||||||
|
```
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
The `StellaOps.Evidence.Core.Tests` project includes:
|
||||||
|
|
||||||
|
- **111 unit tests** covering:
|
||||||
|
- EvidenceRecord creation and hash computation
|
||||||
|
- InMemoryEvidenceStore CRUD operations
|
||||||
|
- All adapter conversions (VexObservation, ExceptionApplication, ProofSegment)
|
||||||
|
- Edge cases and error handling
|
||||||
|
|
||||||
|
Run tests:
|
||||||
|
```bash
|
||||||
|
dotnet test src/__Libraries/StellaOps.Evidence.Core.Tests/
|
||||||
|
```
|
||||||
|
|
||||||
|
## Related Documentation
|
||||||
|
|
||||||
|
- [Proof Chain Architecture](../attestor/proof-chain.md)
|
||||||
|
- [Evidence Bundle Design](../scanner/evidence-bundle.md)
|
||||||
|
- [VEX Observation Model](../excititor/vex-observation.md)
|
||||||
|
- [Policy Exceptions](../policy/exceptions.md)
|
||||||
@@ -39,5 +39,5 @@ Defines the event envelope for evidence timelines emitted by Excititor. All fiel
|
|||||||
- Emit at-most-once per storage write; idempotent consumers rely on `(eventId, tenant)`.
|
- Emit at-most-once per storage write; idempotent consumers rely on `(eventId, tenant)`.
|
||||||
|
|
||||||
## Transport
|
## Transport
|
||||||
- Default topic: `excititor.timeline.v1` (NATS/Redis). Subject includes tenant: `excititor.timeline.v1.<tenant>`.
|
- Default topic: `excititor.timeline.v1` (NATS/Valkey). Subject includes tenant: `excititor.timeline.v1.<tenant>`.
|
||||||
- Payload size should stay <32 KiB; truncate conflict arrays with `truncated=true` flag if needed (keep hash counts deterministic).
|
- Payload size should stay <32 KiB; truncate conflict arrays with `truncated=true` flag if needed (keep hash counts deterministic).
|
||||||
|
|||||||
@@ -1,33 +0,0 @@
|
|||||||
# Excititor · VEX Raw Collection Validator (AOC-19-001/002)
|
|
||||||
|
|
||||||
> **DEPRECATED:** This document describes MongoDB validators that are no longer used. Excititor now uses PostgreSQL for persistence (Sprint 4400). Schema validation is now enforced via PostgreSQL column and CHECK constraints. See `docs/db/SPECIFICATION.md` for the current database schema.
|
|
||||||
|
|
||||||
- **Date:** 2025-11-25
|
|
||||||
- **Scope:** EXCITITOR-STORE-AOC-19-001 / 19-002
|
|
||||||
- **Working directory:** ~~`src/Excititor/__Libraries/StellaOps.Excititor.Storage.Mongo`~~ (deprecated)
|
|
||||||
|
|
||||||
## What shipped (historical)
|
|
||||||
- `$jsonSchema` validator applied to `vex_raw` (migration `20251125-vex-raw-json-schema`) with `validationAction=warn`, `validationLevel=moderate` to surface contract violations without impacting ingestion.
|
|
||||||
- Schema lives at `docs/modules/excititor/schemas/vex_raw.schema.json` (mirrors Mongo validator fields: digest/id, providerId, format, sourceUri, retrievedAt, optional content/GridFS object id, metadata strings).
|
|
||||||
- Migration is auto-registered in DI; hosted migration runner applies it on service start. New collections created with the validator if missing.
|
|
||||||
|
|
||||||
## How to run (online/offline)
|
|
||||||
1) Ensure Excititor WebService/Worker starts with Mongo credentials that allow `collMod`.
|
|
||||||
2) Validator applies automatically via migration runner. To force manually:
|
|
||||||
```bash
|
|
||||||
mongosh "$MONGO_URI" --eval 'db.runCommand({collMod:"vex_raw", validator:'$(cat docs/modules/excititor/schemas/vex_raw.schema.json)', validationAction:"warn", validationLevel:"moderate"})'
|
|
||||||
```
|
|
||||||
3) Offline kit: bundle `docs/modules/excititor/schemas/vex_raw.schema.json` with release artifacts; ops can apply via `mongosh` or `mongo` offline against snapshots.
|
|
||||||
|
|
||||||
## Rollback / relax
|
|
||||||
- To relax validation (e.g., hotfix window): `db.runCommand({collMod:"vex_raw", validator:{}, validationAction:"warn", validationLevel:"off"})`.
|
|
||||||
- Reapplying the migration restores the schema.
|
|
||||||
|
|
||||||
## Compatibility notes
|
|
||||||
- Validator keeps `additionalProperties=true` to avoid blocking future fields; required set is minimal to guarantee provenance + content hash presence.
|
|
||||||
- Action is `warn` to avoid breaking existing feeds; flip to `error` once downstream datasets are clean.
|
|
||||||
|
|
||||||
## Acceptance
|
|
||||||
- Contract + schema captured.
|
|
||||||
- Migration in code and auto-applied.
|
|
||||||
- Rollback path documented.
|
|
||||||
@@ -1,7 +1,13 @@
|
|||||||
|
|
||||||
# Findings Ledger
|
# Findings Ledger
|
||||||
|
|
||||||
Start here for ledger docs.
|
Immutable, append-only event ledger for tracking vulnerability findings, policy decisions, and workflow state changes across the StellaOps platform.
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
- **Audit trail**: Every finding state change (open, triage, suppress, resolve) is recorded with cryptographic hashes and actor metadata.
|
||||||
|
- **Deterministic replay**: Events can be replayed to reconstruct finding states at any point in time.
|
||||||
|
- **Merkle anchoring**: Event chains are Merkle-linked for tamper-evident verification.
|
||||||
|
- **Tenant isolation**: All events are partitioned by tenant with cross-tenant access forbidden.
|
||||||
|
|
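The Merkle-linked chain can be illustrated with a toy hash chain — the event names and separator are invented for the sketch; the real ledger's canonical event encoding differs:

```bash
# Each event hash commits to the previous one, making tampering evident:
# altering any earlier event changes every hash after it.
prev="genesis"
for event in finding.open finding.triage finding.resolve; do
  prev=$(printf '%s|%s' "$prev" "$event" | sha256sum | cut -d' ' -f1)
  echo "$event -> $prev"
done
```

Replaying the same event sequence reproduces the same chain head, which is how deterministic replay and tamper-evidence interact.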
||||||
## Quick links
|
## Quick links
|
||||||
- FL1–FL10 remediation tracker: `gaps-FL1-FL10.md`
|
- FL1–FL10 remediation tracker: `gaps-FL1-FL10.md`
|
||||||

@@ -37,7 +37,7 @@ Status for these items is tracked in `src/Notifier/StellaOps.Notifier/TASKS.md`

## Integrations & dependencies
- **Storage:** PostgreSQL (schema `notify`) for rules, channels, deliveries, digests, and throttles; Valkey for worker coordination.
- **Queues:** Redis Streams or NATS JetStream for ingestion, throttling, and DLQs (`notify.dlq`).
- **Queues:** Valkey Streams or NATS JetStream for ingestion, throttling, and DLQs (`notify.dlq`).
- **Authority:** OpTok-protected APIs, DPoP-backed CLI/UI scopes (`notify.viewer`, `notify.operator`, `notify.admin`), and secret references for channel credentials.
- **Observability:** Prometheus metrics (`notify.sent_total`, `notify.failed_total`, `notify.digest_coalesced_total`, etc.), OTEL traces, and dashboards documented in `docs/notifications/architecture.md#12-observability-prometheus--otel`.

@@ -26,7 +26,7 @@ src/
├─ StellaOps.Notify.Engine/ # rules engine, templates, idempotency, digests, throttles
├─ StellaOps.Notify.Models/ # DTOs (Rule, Channel, Event, Delivery, Template)
├─ StellaOps.Notify.Storage.Postgres/ # canonical persistence (notify schema)
├─ StellaOps.Notify.Queue/ # bus client (Redis Streams/NATS JetStream)
├─ StellaOps.Notify.Queue/ # bus client (Valkey Streams/NATS JetStream)
└─ StellaOps.Notify.Tests.* # unit/integration/e2e
```

@@ -35,7 +35,7 @@ src/

* **Notify.WebService** (stateless API)
* **Notify.Worker** (horizontal scale)

**Dependencies**: Authority (OpToks; DPoP/mTLS), **PostgreSQL** (notify schema), Redis/NATS (bus), HTTP egress to Slack/Teams/Webhooks, SMTP relay for Email.
**Dependencies**: Authority (OpToks; DPoP/mTLS), **PostgreSQL** (notify schema), Valkey/NATS (bus), HTTP egress to Slack/Teams/Webhooks, SMTP relay for Email.

> **Configuration.** Notify.WebService bootstraps from `notify.yaml` (see `etc/notify.yaml.sample`). Use `storage.driver: postgres` and provide `postgres.notify` options (`connectionString`, `schemaName`, pool sizing, timeouts). Authority settings follow the platform defaults—when running locally without Authority, set `authority.enabled: false` and supply `developmentSigningKey` so JWTs can be validated offline.
>
@@ -277,7 +277,7 @@ Canonical JSON Schemas for rules/channels/events live in `docs/modules/notify/re

* `throttles`

```
{ key:"idem:<hash>", ttlAt } // short-lived, also cached in Redis
{ key:"idem:<hash>", ttlAt } // short-lived, also cached in Valkey
```

**Indexes**: rules by `{tenantId, enabled}`, deliveries by `{tenantId, sentAt desc}`, digests by `{tenantId, actionKey}`.
@@ -346,12 +346,12 @@ Authority signs ack tokens using keys configured under `notifications.ackTokens`

* **Ingestor**: N consumers with per‑key ordering (key = tenant|digest|namespace).
* **RuleMatcher**: loads active rules snapshot for tenant into memory; vectorized predicate check.
* **Throttle/Dedupe**: consult Redis + PostgreSQL `throttles`; if hit → record `status=throttled`.
* **Throttle/Dedupe**: consult Valkey + PostgreSQL `throttles`; if hit → record `status=throttled`.
* **DigestCoalescer**: append to open digest window or flush when timer expires.
* **Renderer**: select template (channel+locale), inject variables, enforce length limits, compute `bodyHash`.
* **Connector**: send; handle provider‑specific rate limits and backoffs; `maxAttempts` with exponential jitter; overflow → DLQ (dead‑letter topic) + UI surfacing.

**Idempotency**: per action **idempotency key** stored in Redis (TTL = `throttle window` or `digest window`). Connectors also respect **provider** idempotency where available (e.g., Slack `client_msg_id`).
**Idempotency**: per action **idempotency key** stored in Valkey (TTL = `throttle window` or `digest window`). Connectors also respect **provider** idempotency where available (e.g., Slack `client_msg_id`).

---
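The connector's "`maxAttempts` with exponential jitter" retry schedule can be sketched as full-jitter backoff; the base/cap values below are illustrative defaults, not the service's configuration:

```python
import random

def backoff_delays(max_attempts: int, base: float = 0.5, cap: float = 60.0, rng=random.random):
    """Full-jitter exponential backoff: delay_n is drawn from U(0, min(cap, base * 2**n))."""
    return [rng() * min(cap, base * (2 ** attempt)) for attempt in range(max_attempts)]

delays = backoff_delays(6)
assert len(delays) == 6
assert all(0.0 <= d <= 60.0 for d in delays)
# Upper bounds grow 0.5, 1, 2, 4, 8, 16 seconds until the cap kicks in;
# once max_attempts is exhausted the action overflows to the DLQ.
```

Full jitter (rather than a fixed multiplier) spreads retries from many workers so a provider outage does not produce synchronized thundering-herd retries.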
@@ -359,7 +359,7 @@ Authority signs ack tokens using keys configured under `notifications.ackTokens`

* **Per‑tenant** RPM caps (default 600/min) + **per‑channel** concurrency (Slack 1–4, Teams 1–2, Email 8–32 based on relay).
* **Backoff** map: Slack 429 → respect `Retry‑After`; SMTP 4xx → retry; 5xx → retry with jitter; permanent rejects → drop with status recorded.
* **DLQ**: NATS/Redis stream `notify.dlq` with `{event, rule, action, error}` for operator inspection; UI shows DLQ items.
* **DLQ**: NATS/Valkey stream `notify.dlq` with `{event, rule, action, error}` for operator inspection; UI shows DLQ items.

---

@@ -402,7 +402,7 @@ notify:
    issuer: "https://authority.internal"
    require: "dpop"   # or "mtls"
  bus:
    kind: "redis"     # or "nats"
    kind: "valkey"    # or "nats" (valkey uses redis:// protocol)
    streams:
      - "scanner.events"
      - "scheduler.events"
@@ -455,7 +455,7 @@ notify:

| Invalid channel secret | Mark channel unhealthy; suppress sends; surface in UI |
| Rule explosion (matches everything) | Safety valve: per‑tenant RPM caps; auto‑pause rule after X drops; UI alert |
| Bus outage | Buffer to local queue (bounded); resume consuming when healthy |
| PostgreSQL slowness | Fall back to Redis throttles; batch write deliveries; shed low‑priority notifications |
| PostgreSQL slowness | Fall back to Valkey throttles; batch write deliveries; shed low‑priority notifications |

---

@@ -466,7 +466,7 @@ notify:

* **Integration**: synthetic event storm (10k/min), ensure p95 latency & duplicate rate.
* **Security**: DPoP/mTLS on APIs; secretRef resolution; webhook signing & replay windows.
* **i18n**: localized templates render deterministically.
* **Chaos**: Slack/Teams API flaps; SMTP greylisting; Redis hiccups; ensure graceful degradation.
* **Chaos**: Slack/Teams API flaps; SMTP greylisting; Valkey hiccups; ensure graceful degradation.

---

@@ -514,7 +514,7 @@ sequenceDiagram

## 18) Implementation notes

* **Language**: .NET 10; minimal API; `System.Text.Json` with canonical writer for body hashing; Channels for pipelines.
* **Bus**: Redis Streams (**XGROUP** consumers) or NATS JetStream for at‑least‑once with ack; per‑tenant consumer groups to localize backpressure.
* **Bus**: Valkey Streams (**XGROUP** consumers) or NATS JetStream for at‑least‑once with ack; per‑tenant consumer groups to localize backpressure.
* **Templates**: compile and cache per rule+channel+locale; version with rule `updatedAt` to invalidate.
* **Rules**: store raw YAML + parsed AST; validate with schema + static checks (e.g., nonsensical combos).
* **Secrets**: pluggable secret resolver (Authority Secret proxy, K8s, Vault).
@@ -18,7 +18,7 @@ The Orchestrator schedules, observes, and recovers ingestion and analysis jobs a

## Key components
- Orchestrator WebService (control plane).
- Queue adapters (Redis/NATS) and job ledger.
- Queue adapters (Valkey/NATS) and job ledger.
- Console dashboard module and CLI integration for operators.

## Integrations & dependencies
@@ -26,7 +26,7 @@ src/
├─ StellaOps.Scanner.Worker/ # queue consumer; executes analyzers
├─ StellaOps.Scanner.Models/ # DTOs, evidence, graph nodes, CDX/SPDX adapters
├─ StellaOps.Scanner.Storage/ # PostgreSQL repositories; RustFS object client (default) + S3 fallback; ILM/GC
├─ StellaOps.Scanner.Queue/ # queue abstraction (Redis/NATS/RabbitMQ)
├─ StellaOps.Scanner.Queue/ # queue abstraction (Valkey/NATS/RabbitMQ)
├─ StellaOps.Scanner.Cache/ # layer cache; file CAS; bloom/bitmap indexes
├─ StellaOps.Scanner.EntryTrace/ # ENTRYPOINT/CMD → terminal program resolver (shell AST)
├─ StellaOps.Scanner.Analyzers.OS.[Apk|Dpkg|Rpm]/
@@ -92,20 +92,20 @@ CLI usage: `stella scan --semantic <image>` enables semantic analysis in output.

- **Hybrid attestation**: emit **graph-level DSSE** for every `richgraph-v1` (mandatory) and optional **edge-bundle DSSE** (≤512 edges) for runtime/init-root/contested edges or third-party provenance. Publish graph DSSE digests to Rekor by default; edge-bundle Rekor publish is policy-driven. CAS layout: `cas://reachability/graphs/{blake3}` for graph body, `.../{blake3}.dsse` for envelope, and `cas://reachability/edges/{graph_hash}/{bundle_id}[.dsse]` for bundles. Deterministic ordering before hashing/signing is required.
- **Deterministic call-graph manifest**: capture analyzer versions, feed hashes, toolchain digests, and flags in a manifest stored alongside `richgraph-v1`; replaying with the same manifest MUST yield identical node/edge sets and hashes (see `docs/reachability/lead.md`).

### 1.1 Queue backbone (Redis / NATS)
### 1.1 Queue backbone (Valkey / NATS)

`StellaOps.Scanner.Queue` exposes a transport-agnostic contract (`IScanQueue`/`IScanQueueLease`) used by the WebService producer and Worker consumers. Sprint 9 introduces two first-party transports:

- **Redis Streams** (default). Uses consumer groups, deterministic idempotency keys (`scanner:jobs:idemp:*`), and supports lease claim (`XCLAIM`), renewal, exponential-backoff retries, and a `scanner:jobs:dead` stream for exhausted attempts.
- **Valkey Streams** (default). Uses consumer groups, deterministic idempotency keys (`scanner:jobs:idemp:*`), and supports lease claim (`XCLAIM`), renewal, exponential-backoff retries, and a `scanner:jobs:dead` stream for exhausted attempts.
- **NATS JetStream**. Provisions the `SCANNER_JOBS` work-queue stream + durable consumer `scanner-workers`, publishes with `MsgId` for dedupe, applies backoff via `NAK` delays, and routes dead-lettered jobs to `SCANNER_JOBS_DEAD`.

Metrics are emitted via `Meter` counters (`scanner_queue_enqueued_total`, `scanner_queue_retry_total`, `scanner_queue_deadletter_total`), and `ScannerQueueHealthCheck` pings the active backend (Redis `PING`, NATS `PING`). Configuration is bound from `scanner.queue`:
Metrics are emitted via `Meter` counters (`scanner_queue_enqueued_total`, `scanner_queue_retry_total`, `scanner_queue_deadletter_total`), and `ScannerQueueHealthCheck` pings the active backend (Valkey `PING`, NATS `PING`). Configuration is bound from `scanner.queue`:

```yaml
scanner:
  queue:
    kind: redis    # or nats
    kind: valkey   # or nats (valkey uses redis:// protocol)
    redis:
    valkey:
      connectionString: "redis://queue:6379/0"
      streamName: "scanner:jobs"
    nats:
```
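The deterministic idempotency keys (`scanner:jobs:idemp:*`) mentioned above can be sketched as a pure function of the job's identity. The exact inputs hashed by the real transport are an assumption here; the point is that the same job always maps to the same key, so a duplicate enqueue is dropped by the store's set-if-absent semantics:

```python
import hashlib

def job_idempotency_key(image_digest: str, salt: str = "") -> str:
    # The "scanner:jobs:idemp:" prefix comes from the docs; which fields feed
    # the hash (here: image digest plus an optional salt) is illustrative only.
    digest = hashlib.sha256(f"{image_digest}|{salt}".encode()).hexdigest()
    return f"scanner:jobs:idemp:{digest}"

k1 = job_idempotency_key("sha256:deadbeef")
k2 = job_idempotency_key("sha256:deadbeef")
assert k1 == k2                                  # same image -> same key -> dedupe
assert k1.startswith("scanner:jobs:idemp:")
assert k1 != job_idempotency_key("sha256:cafebabe")
```

The NATS transport achieves the same dedupe by publishing with `MsgId`; both reduce "at-least-once" delivery to effectively-once execution for identical jobs.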
@@ -133,7 +133,7 @@ The DI extension (`AddScannerQueue`) wires the selected transport, so future add

* **OCI registry** with **Referrers API** (discover attached SBOMs/signatures).
* **RustFS** (default, offline-first) for SBOM artifacts; optional S3/MinIO compatibility retained for migration; **Object Lock** semantics emulated via retention headers; **ILM** for TTL.
* **PostgreSQL** for catalog, job state, diffs, ILM rules.
* **Queue** (Redis Streams/NATS/RabbitMQ).
* **Queue** (Valkey Streams/NATS/RabbitMQ).
* **Authority** (on‑prem OIDC) for **OpToks** (DPoP/mTLS).
* **Signer** + **Attestor** (+ **Fulcio/KMS** + **Rekor v2**) for DSSE + transparency.

@@ -390,7 +390,7 @@ Diffs are stored as artifacts and feed **UI** and **CLI**.

```yaml
scanner:
  queue:
    kind: redis
    kind: valkey   # uses redis:// protocol
    url: "redis://queue:6379/0"
  postgres:
    connectionString: "Host=postgres;Port=5432;Database=scanner;Username=stellaops;Password=stellaops"
```
@@ -2,7 +2,7 @@

## Delivery phases
- **Phase 1 – Control plane & job queue**
  Finalise Scanner WebService, queue abstraction (Redis/NATS), job leasing, CAS layer cache, artifact catalog, and API endpoints.
  Finalise Scanner WebService, queue abstraction (Valkey/NATS), job leasing, CAS layer cache, artifact catalog, and API endpoints.
- **Phase 2 – Analyzer parity & SBOM assembly**
  Implement OS/Lang/Native analyzers, inventory/usage SBOM views, entry trace resolution, deterministic component identity.
- **Phase 3 – Diff & attestations**
@@ -15,7 +15,7 @@ Scheduler detects advisory/VEX deltas, computes impact windows, and orchestrates

## Integrations & dependencies
- PostgreSQL (schema `scheduler`) for impact models.
- Redis/NATS for queueing.
- Valkey/NATS for queueing.
- Policy Engine, Scanner, Notify.

## Operational notes
@@ -27,7 +27,7 @@ src/
├─ StellaOps.Scheduler.ImpactIndex/ # purl→images inverted index (roaring bitmaps)
├─ StellaOps.Scheduler.Models/ # DTOs (Schedule, Run, ImpactSet, Deltas)
├─ StellaOps.Scheduler.Storage.Postgres/ # schedules, runs, cursors, locks
├─ StellaOps.Scheduler.Queue/ # Redis Streams / NATS abstraction
├─ StellaOps.Scheduler.Queue/ # Valkey Streams / NATS abstraction
├─ StellaOps.Scheduler.Tests.* # unit/integration/e2e
```

@@ -36,7 +36,7 @@ src/

* **Scheduler.WebService** (stateless)
* **Scheduler.Worker** (scale‑out; planners + executors)

**Dependencies**: Authority (OpTok + DPoP/mTLS), Scanner.WebService, Conselier, Excitor, PostgreSQL, Redis/NATS, (optional) Notify.
**Dependencies**: Authority (OpTok + DPoP/mTLS), Scanner.WebService, Conselier, Excitor, PostgreSQL, Valkey/NATS, (optional) Notify.

---

@@ -111,7 +111,7 @@ Goal: translate **change keys** → **image sets** in **milliseconds**.

* `Contains[purl] → bitmap(imageIds)`
* `UsedBy[purl] → bitmap(imageIds)` (subset of Contains)
* Optionally keep **Owner maps**: `{imageId → {tenantId, namespaces[], repos[]}}` for selection filters.
* Persist in RocksDB/LMDB or Redis‑modules; cache hot shards in memory; snapshot to PostgreSQL for cold start.
* Persist in RocksDB/LMDB or Valkey‑modules; cache hot shards in memory; snapshot to PostgreSQL for cold start.

**Update paths**:
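The `Contains`/`UsedBy` inverted index above can be sketched with plain Python sets standing in for roaring bitmaps (the real index uses compressed bitmaps precisely because these sets reach millions of image IDs). The purls and image IDs are invented for illustration:

```python
from collections import defaultdict

class ImpactIndex:
    """Inverted index purl -> set(imageId); sets stand in for roaring bitmaps."""

    def __init__(self):
        self.contains = defaultdict(set)  # component is present in the image
        self.used_by = defaultdict(set)   # component is actually loaded/executed

    def add(self, image_id: int, purl: str, used: bool = False):
        self.contains[purl].add(image_id)
        if used:
            self.used_by[purl].add(image_id)  # UsedBy is a subset of Contains

    def impacted(self, purls, usage_only=False):
        source = self.used_by if usage_only else self.contains
        result = set()
        for p in purls:            # union of bitmaps across all changed components
            result |= source.get(p, set())
        return result

idx = ImpactIndex()
idx.add(1, "pkg:deb/debian/openssl@3.0.11", used=True)
idx.add(2, "pkg:deb/debian/openssl@3.0.11", used=False)
idx.add(3, "pkg:npm/lodash@4.17.21", used=True)
assert idx.impacted(["pkg:deb/debian/openssl@3.0.11"]) == {1, 2}
assert idx.impacted(["pkg:deb/debian/openssl@3.0.11"], usage_only=True) == {1}
```

Translating a set of change keys into an image set is then a handful of bitmap unions, which is what keeps this lookup in the millisecond range.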
@@ -296,12 +296,12 @@ scheduler:
    issuer: "https://authority.internal"
    require: "dpop"        # or "mtls"
  queue:
    kind: "redis"          # or "nats"
    kind: "valkey"         # or "nats" (valkey uses redis:// protocol)
    url: "redis://redis:6379/4"
    url: "redis://valkey:6379/4"
  postgres:
    connectionString: "Host=postgres;Port=5432;Database=scheduler;Username=stellaops;Password=stellaops"
  impactIndex:
    storage: "rocksdb"     # "rocksdb" | "redis" | "memory"
    storage: "rocksdb"     # "rocksdb" | "valkey" | "memory"
    warmOnStart: true
    usageOnlyDefault: true
  limits:
@@ -41,7 +41,7 @@

* **Fulcio** (Sigstore) *or* **KMS/HSM**: to obtain certs or perform signatures.
* **OCI Registry (Referrers API)**: to verify **scanner** image release signature.
* **Attestor**: downstream service that writes DSSE bundles to **Rekor v2**.
* **Config/state stores**: Redis (caches, rate buckets), PostgreSQL (audit log).
* **Config/state stores**: Valkey (caches, rate buckets), PostgreSQL (audit log).

---

@@ -191,7 +191,7 @@ sequenceDiagram

**DPoP nonce dance (when enabled for high‑value ops):**

* If DPoP proof lacks a valid nonce, Signer replies `401` with `WWW-Authenticate: DPoP error="use_dpop_nonce", dpop_nonce="<nonce>"`.
* Client retries with new proof including the nonce; Signer validates nonce and `jti` uniqueness (Redis TTL cache).
* Client retries with new proof including the nonce; Signer validates nonce and `jti` uniqueness (Valkey TTL cache).

---

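The server side of the nonce dance can be sketched as below. This is a simplified model, not the Signer's implementation: a dict stands in for the Valkey TTL cache, signature/claims validation of the DPoP proof itself is omitted, and the return values are hypothetical labels.

```python
import secrets

class DpopReplayGuard:
    """Nonce + jti replay checks; a dict stands in for the Valkey TTL cache."""

    def __init__(self, jti_ttl: float = 600.0):  # docs cap the replay cache at <= 10 min
        self.jti_ttl = jti_ttl
        self.current_nonce = secrets.token_urlsafe(16)
        self._seen_jti = {}  # jti -> expiry timestamp

    def check(self, proof_nonce, jti: str, now: float) -> str:
        if proof_nonce != self.current_nonce:
            # Server answers 401 with:
            # WWW-Authenticate: DPoP error="use_dpop_nonce", dpop_nonce="<nonce>"
            return "use_dpop_nonce"
        expiry = self._seen_jti.get(jti)
        if expiry is not None and expiry > now:
            return "replayed_jti"        # same proof presented twice inside TTL
        self._seen_jti[jti] = now + self.jti_ttl
        return "ok"

guard = DpopReplayGuard()
assert guard.check(None, "jti-1", now=0.0) == "use_dpop_nonce"     # first try lacks nonce
assert guard.check(guard.current_nonce, "jti-1", now=1.0) == "ok"  # retry with nonce
assert guard.check(guard.current_nonce, "jti-1", now=2.0) == "replayed_jti"
```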
@@ -210,7 +210,7 @@ sequenceDiagram

* **Enforcements**:

  * Reject if **revoked**, **expired**, **plan mismatch** or **release outside window** (`stellaops_version` in predicate exceeds `max_version` or release date beyond `valid_release_year`).
  * Apply plan **throttles** (QPS/concurrency/artifact bytes) via token‑bucket in Redis keyed by `license_id`.
  * Apply plan **throttles** (QPS/concurrency/artifact bytes) via token‑bucket in Valkey keyed by `license_id`.

---

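The per-`license_id` token bucket can be sketched as follows; the rate and capacity numbers are illustrative, and in the service the bucket state lives in the shared cache rather than process memory:

```python
class TokenBucket:
    """Token bucket: refill at `rate` tokens/sec up to `capacity` (the burst size)."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, now: float, cost: float = 1.0) -> bool:
        # Lazy refill: credit tokens for the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

buckets = {}  # license_id -> TokenBucket; production keeps this state in Valkey

def throttle(license_id: str, now: float) -> bool:
    bucket = buckets.setdefault(license_id, TokenBucket(rate=10.0, capacity=20.0))
    return bucket.allow(now)

assert all(throttle("lic-1", now=0.0) for _ in range(20))  # burst up to capacity
assert not throttle("lic-1", now=0.0)                      # 21st request rejected
assert throttle("lic-1", now=1.0)                          # ~10 tokens refilled after 1 s
```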
@@ -277,7 +277,7 @@ Per `license_id` (from PoE):

## 10) Storage & caches

* **Redis**:
* **Valkey**:

  * DPoP nonce & `jti` replay cache (TTL ≤ 10 min).
  * PoE introspection cache (short TTL, e.g., 60–120 s).
@@ -399,7 +399,7 @@ signer:

## 16) Deployment & HA

* Run ≥ 2 replicas; front with L7 LB; **sticky** not required.
* Redis for replay/quota caches (HA).
* Valkey for replay/quota caches (HA).
* Audit sink (PostgreSQL) in primary region; asynchronous write with local fallback buffer.
* Fulcio/KMS clients configured with retries/backoff; circuit breakers.

@@ -12,7 +12,7 @@

|-----------|-------------|-------|
| Runtime | .NET 10.0+ | LTS recommended |
| Database | PostgreSQL 15.0+ | For projections and issuer directory |
| Cache | Redis 7.0+ (optional) | For caching consensus results |
| Cache | Valkey 8.0+ (optional) | For caching consensus results |
| Memory | 512MB minimum | 2GB recommended for production |
| CPU | 2 cores minimum | 4 cores for high throughput |

@@ -19,10 +19,10 @@ Tenant API│ REST + gRPC WIP │ │ rules/channels│
└───────▲──────────┘ │ deliveries │
│ │ digests │
Internal bus │ └───────────────┘
(NATS/Redis/etc) │
(NATS/Valkey/etc)│
│
┌─────────▼─────────┐ ┌───────────────┐
│ Notify.Worker │◀────▶│ Redis / Cache │
│ Notify.Worker │◀────▶│ Valkey / Cache│
│ rule eval + render│ │ throttles/locks│
└─────────▲─────────┘ └───────▲───────┘
│ │
@@ -38,13 +38,13 @@ Tenant API│ REST + gRPC WIP │ │ rules/channels│

- **Worker** subscribes to the platform event bus, evaluates rules per tenant, applies throttles/digests, renders payloads, writes ledger entries, and invokes connectors.
- **Plug-ins** live under `plugins/notify/` and are loaded deterministically at service start (`orderedPlugins` list). Each implements connector contracts and optional health/test-preview providers.

Both services share options via `notify.yaml` (see `etc/notify.yaml.sample`). For dev/test scenarios, an in-memory repository exists but production requires PostgreSQL + Redis/NATS for durability and coordination.
Both services share options via `notify.yaml` (see `etc/notify.yaml.sample`). For dev/test scenarios, an in-memory repository exists but production requires PostgreSQL + Valkey/NATS for durability and coordination.

---

## 2. Event ingestion and rule evaluation

1. **Subscription.** Workers attach to the internal bus (Redis Streams or NATS JetStream). Each partition key is `tenantId|scope.digest|event.kind` to preserve order for a given artefact.
1. **Subscription.** Workers attach to the internal bus (Valkey Streams or NATS JetStream). Each partition key is `tenantId|scope.digest|event.kind` to preserve order for a given artefact.
2. **Normalisation.** Incoming events are hydrated into `NotifyEvent` envelopes. Payload JSON is normalised (sorted object keys) to preserve determinism and enable hashing.
3. **Rule snapshot.** Per-tenant rule sets are cached in memory. PostgreSQL LISTEN/NOTIFY triggers snapshot refreshes without restart.
4. **Match pipeline.**
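The normalisation step (sorted object keys for determinism and hashing) can be sketched with the standard library; the sample payload fields are invented for illustration, and the service's canonical serializer may differ in details such as whitespace and escaping rules:

```python
import hashlib
import json

def normalise(payload: dict) -> str:
    # sort_keys orders object keys at every nesting level; compact separators
    # remove whitespace, so equal payloads always serialise to equal strings.
    return json.dumps(payload, sort_keys=True, separators=(",", ":"), ensure_ascii=False)

def body_hash(payload: dict) -> str:
    return hashlib.sha256(normalise(payload).encode("utf-8")).hexdigest()

a = {"scope": {"digest": "sha256:abc"}, "kind": "scanner.report.ready"}
b = {"kind": "scanner.report.ready", "scope": {"digest": "sha256:abc"}}
assert normalise(a) == normalise(b)   # producer key order no longer matters
assert body_hash(a) == body_hash(b)   # so the hash is stable for dedupe/idempotency
```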
@@ -85,7 +85,7 @@ Failures during evaluation are logged with correlation IDs and surfaced through

| `templates` | Locale-specific render bodies. | `id`, `tenant_id`, `channel_type`, `key`; index on `(tenant_id, channel_type, key)`. |
| `deliveries` | Ledger of rendered notifications. | `id`, `tenant_id`, `sent_at`; compound index on `(tenant_id, sent_at DESC)` for history queries. |
| `digests` | Open digest windows per action. | `id` (`tenant_id:action_key:window`), `status`; index on `(tenant_id, action_key)`. |
| `throttles` | Short-lived throttle tokens (PostgreSQL or Redis). | Key format `idem:<hash>` with TTL aligned to throttle duration. |
| `throttles` | Short-lived throttle tokens (PostgreSQL or Valkey). | Key format `idem:<hash>` with TTL aligned to throttle duration. |

Records are stored using the canonical JSON serializer (`NotifyCanonicalJsonSerializer`) to preserve property ordering and casing. Schema migration helpers upgrade stored records when new versions ship.

@@ -101,7 +101,7 @@ Records are stored using the canonical JSON serializer (`NotifyCanonicalJsonSeri

- `notify_delivery_attempts_total{channelType,status}`
- `notify_digest_open_windows{window}`
- Optional OpenTelemetry traces for rule evaluation and connector round-trips.
- **Scaling levers.** Increase worker replicas to cope with bus throughput; adjust `worker.prefetchCount` for Redis Streams or `ackWait` for NATS JetStream. WebService remains stateless and scales horizontally behind the gateway.
- **Scaling levers.** Increase worker replicas to cope with bus throughput; adjust `worker.prefetchCount` for Valkey Streams or `ackWait` for NATS JetStream. WebService remains stateless and scales horizontally behind the gateway.

---

### 2. Ingestion & Persistence

- Expose a secure Notifications API endpoint (`POST /notifications/pack-approvals`) receiving Task Runner events.
- Validate scope (`packs.approve`, `Notifier.Events:Write`) and tenant match.
- Persist approval state transitions in PostgreSQL (`notify.pack_approvals` table) with indexes on run/approval/tenant.
- Store outbound notification audit records with correlation IDs to support Task Runner resume flow.
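The scope-and-tenant check above reduces to a small guard. A hedged Python sketch under assumed names (`REQUIRED_SCOPES`, `authorize_pack_approval` are illustrative, not the service code):

```python
REQUIRED_SCOPES = {"packs.approve", "Notifier.Events:Write"}


def authorize_pack_approval(token_scopes: set, token_tenant: str, event_tenant: str) -> bool:
    """Reject the event unless the caller holds both required scopes
    and the token's tenant matches the tenant named in the event."""
    if not REQUIRED_SCOPES.issubset(token_scopes):
        return False
    return token_tenant == event_tenant
```

Performing both checks before any persistence keeps unauthorized or cross-tenant events out of `notify.pack_approvals` entirely.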
### 3. Notification Routing
| Task ID | Scope |
| --- | --- |
| **NOTIFY-SVC-37-001** | Author this contract doc, OpenAPI fragment, and schema references. Coordinate with Task Runner/Authority guilds. |
| **NOTIFY-SVC-37-002** | Implement secure ingestion endpoint, PostgreSQL persistence, and audit hooks. Provide integration tests with sample events. |
| **NOTIFY-SVC-37-003** | Build approval/policy notification templates, routing rules, and channel dispatch (email + webhook). |
| **NOTIFY-SVC-37-004** | Ship acknowledgement endpoint + Task Runner callback client, resume token handling, and metrics/dashboards. |
## Open Questions

1. Who owns the approval resume callback (Task Runner Worker vs Orchestrator)? Resolve before NOTIFY-SVC-37-004.
2. Should approvals generate incidents in the existing incident schema or a dedicated table? Decision impacts PostgreSQL schema design.
3. Authority scopes for approval ingestion/ack — reuse `packs.approve` or introduce `packs.approve:notify`? Coordinate with Authority team.
---

*New file: `docs/operations/canon-version-migration.md`*
# Canonicalization Version Migration Guide

**Version**: 1.0
**Status**: Active
**Last Updated**: 2025-12-24

---

## Overview

This guide describes the migration path for content-addressed identifiers from legacy (unversioned) canonicalization to versioned canonicalization (`stella:canon:v1`). Versioned canonicalization embeds algorithm version markers in the canonical JSON before hashing, ensuring forward compatibility and verifier clarity.
## Why Versioning?

### Problem

Legacy content-addressed IDs (EvidenceID, ReasoningID, VEXVerdictID, ProofBundleID) were computed as:

```
hash = SHA256(canonicalize(payload))
```

If the canonicalization algorithm ever changed (bug fix, specification update, optimization), existing hashes would become invalid with no way to detect which algorithm produced them.

### Solution

Versioned content-addressed IDs embed a version marker:

```
hash = SHA256(canonicalize_with_version(payload, "stella:canon:v1"))
```

The canonical JSON now includes `_canonVersion` as the first field:

```json
{
  "_canonVersion": "stella:canon:v1",
  "sbomEntryId": "sha256:91f2ab3c:pkg:npm/lodash@4.17.21",
  "vulnerabilityId": "CVE-2021-23337"
}
```
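A minimal Python sketch of the versioned scheme. The production code is the C# `CanonJson`/`CanonVersion` API; here RFC 8785 canonicalization is approximated with sorted-key compact JSON, which is adequate for flat payloads like the one above:

```python
import hashlib
import json

CANON_V1 = "stella:canon:v1"


def canonicalize_with_version(payload: dict, version: str = CANON_V1) -> bytes:
    # "_canonVersion" sorts ahead of the lowercase camelCase keys used in
    # these payloads, so the canonical bytes always start with {"_canonVersion":
    body = {"_canonVersion": version, **payload}
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()


def versioned_hash(payload: dict) -> str:
    return hashlib.sha256(canonicalize_with_version(payload)).hexdigest()
```

Because the marker is part of the hashed bytes, any future algorithm revision produces distinguishable hashes instead of silently invalid ones.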
## Migration Phases

### Phase 1: Dual-Mode (Current)

**Timeline**: Now

**Behavior**:
- **Generation**: All new content-addressed IDs use versioned canonicalization (v1)
- **Verification**: Accept both legacy and v1 hashes
- **Detection**: `CanonVersion.IsVersioned()` distinguishes format

**Impact**:
- Zero downtime migration
- Existing attestations remain valid
- New attestations get version markers

### Phase 2: Deprecation Warning

**Timeline**: +6 months from Phase 1

**Behavior**:
- Log warnings when verifying legacy hashes
- Emit metrics for legacy hash encounters
- Continue accepting legacy hashes

**Operator Action**:
- Monitor the `canon_legacy_hash_verified_total` metric
- Plan re-attestation of critical assets

### Phase 3: Legacy Rejection

**Timeline**: +12 months from Phase 1

**Behavior**:
- Reject legacy hashes during verification
- Only v1 (or newer) hashes accepted

**Operator Action**:
- Re-attest any remaining legacy attestations before the cutoff
- Use the `stella rehash` CLI command for bulk re-attestation
---

## Detection and Verification

### Detecting Versioned Hashes

Versioned canonical JSON always starts with `{"_canonVersion":` because keys are sorted lexicographically and the underscore (0x5F) sorts before the lowercase letters that begin every other canonical payload key.

```csharp
using StellaOps.Canonical.Json;

// Check if canonical JSON is versioned
byte[] canonicalJson = GetCanonicalPayload();
bool isVersioned = CanonVersion.IsVersioned(canonicalJson);

// Extract version if present
string? version = CanonVersion.ExtractVersion(canonicalJson);
if (version == CanonVersion.V1)
{
    // Use V1 verification algorithm
}
```
### Verifying Both Formats

During Phase 1, verifiers should accept both formats:

```csharp
public bool VerifyContentAddressedId(byte[] payload, string expectedHash)
{
    // Try versioned first (new format)
    if (CanonVersion.IsVersioned(payload))
    {
        var hash = CanonJson.HashVersioned(payload, CanonVersion.Current);
        return hash == expectedHash;
    }

    // Fall back to legacy (unversioned)
    var legacyHash = CanonJson.Hash(payload);
    return legacyHash == expectedHash;
}
```

---
## Re-Attestation Procedure

### When to Re-Attest

Re-attestation is required when:
- Moving from Phase 1 to Phase 3
- Migrating attestations between systems
- Archiving for long-term storage

### CLI Re-Attestation

```bash
# Re-attest a single attestation bundle
stella rehash --input attestation.json --output attestation-v1.json

# Bulk re-attest all attestations in a directory
stella rehash --input-dir /var/stella/attestations \
  --output-dir /var/stella/attestations-v1 \
  --version stella:canon:v1

# Dry-run to preview changes
stella rehash --input attestation.json --dry-run
```

### Database Migration

For PostgreSQL-stored attestations:

```sql
-- Find legacy attestations (those without a version marker)
SELECT id, content_hash, created_at
FROM attestations
WHERE content NOT LIKE '{"_canonVersion":%'
ORDER BY created_at;

-- Export for re-processing
COPY (
  SELECT id, content
  FROM attestations
  WHERE content NOT LIKE '{"_canonVersion":%'
) TO '/tmp/legacy_attestations.csv' WITH CSV HEADER;
```
---

## Troubleshooting

### Hash Mismatch Errors

**Symptom**: Verification fails with a "hash mismatch" error.

**Diagnosis**:
1. Check whether the stored hash was computed with legacy or versioned canonicalization
2. Check the current verification mode (Phase 1/2/3)

**Resolution**:
```bash
# Inspect the attestation format
stella inspect attestation.json --show-version

# Output:
# Canonicalization Version: stella:canon:v1
# Hash Algorithm: SHA-256
# Computed Hash: sha256:7b8c9d0e...
```

### Legacy Hash in Phase 3

**Symptom**: Verification rejected with a "legacy hash not accepted" error.

**Resolution**:
1. Re-attest the content with versioned canonicalization
2. Update any references to the old hash

```bash
stella rehash --input old.json --output new.json
stella verify new.json  # Should pass
```

### Performance Considerations

Versioned canonicalization adds ~34 bytes to each canonical payload (the `"_canonVersion":"stella:canon:v1",` member). This has negligible impact on:
- Hash computation time (<1 μs overhead)
- Storage size (<0.1% increase for typical payloads)
- Network transfer (compression eliminates most of the overhead)
---

## Version History

| Version | Identifier | Status | Notes |
|---------|------------|--------|-------|
| V1 | `stella:canon:v1` | **Current** | RFC 8785 JSON canonicalization |
| Legacy | (none) | Deprecated | Pre-versioning; no version marker |

## Related Documentation

- [Proof Chain Specification](../modules/attestor/proof-chain-specification.md)
- [Content-Addressed Identifier System](../modules/attestor/proof-chain-specification.md#content-addressed-identifier-system)
- [CanonJson API Reference](../api/canon-json.md)
---
## Pre-flight

- Secrets stored in Authority: SMTP creds, Slack/Teams hooks, webhook HMAC keys.
- Outbound allowlist updated for target channels.
- PostgreSQL and Valkey reachable; health checks pass.
- Offline kit loaded: channel manifests, default templates, rule seeds.

## Deploy
- **Rotate secrets**: update the Authority secret, then `POST /api/v1/notify/channels/{id}:refresh-secret`.

## Failure recovery

- Worker crash loop: check Valkey connectivity and template compile errors; run `notify-worker --validate-only` with the current config.
- PostgreSQL outage: worker backs off with exponential retry; after recovery, replay via `:replay` or digests as needed.
- Channel outage (e.g., Slack 5xx): throttles + retry policy handle transient errors; for extended outages, disable the channel or swap to a backup policy.
- [ ] Health endpoints green.
- [ ] Delivery failure rate < 0.5% over last hour.
- [ ] Escalation backlog empty or within SLO.
- [ ] Valkey memory < 75% and PostgreSQL primary healthy.
- [ ] Latest release notes applied and channels validated.
---
Last updated: 2025-11-25

## Pre-flight

- Ensure PostgreSQL and queue backend reachable; health at `/api/v1/orchestrator/admin/health` green.
- Verify tenant allowlist and scopes (`orchestrator:*`) configured in Authority.
- Plugin bundles present and signatures verified.
---

**Removed file** — *Evidence & Suppression Patterns (Gaps Stub)*:

> **Development Placeholder:** This document tracks implementation gaps for sprint planning. For evidence and suppression documentation, see `docs/policy/` and `docs/modules/policy/README.md`.

Use with sprint task 9 (EVIDENCE-PATTERNS-GAPS-300-016) and advisory `30-Nov-2025 - Comparative Evidence Patterns for Stella Ops.md`.

- TODO: Canonical schema for evidence, suppression, export; align across modules.
- TODO: Unified justification/expiry taxonomy and visibility policy.
- TODO: Offline evidence-kit packaging plan with signed manifests.
- TODO: Fixtures and observability metrics to be added; ensure deterministic ordering.
- TODO: Add `evidence-suppression.schema.json` placeholder and link to Policy/UI modules.
---

**Removed file** — *Plugin Architecture Gaps (Stub)*:

> **Development Placeholder:** This document tracks implementation gaps for sprint planning. For plugin architecture documentation, see `docs/modules/*/AGENTS.md` and `docs/dev/30_PLUGIN_DEV_GUIDE.md`.

Use with sprint task 14 (Plugin architecture gaps remediation).

- TODO: Signed schemas/capability catalog for plugins.
- TODO: Sandbox/resource limits and crash kill-switch rules.
- TODO: Provenance: SBOM + DSSE verification for plugins; offline kit packaging + verify script.
- TODO: Compatibility matrix and dependency/secret rules.
- TODO: Signed plugin index with revocation/CVE data (see `tests/plugins/plugin-index.json`).
- TODO: Determinism harness and fixture plan (see `tests/plugins/README.md`).
- TODO: Publish `docs/process/plugin-capability-catalog.json` and sign it.
---
| Resources | 2 vCPU / 2 GiB RAM / 10 GiB SSD | Fits developer laptops |
| TLS trust | Built-in self-signed or your own certs | Replace `/certs` before production |

Keep Valkey and PostgreSQL bundled unless you already operate managed instances.

## 1. Download the signed bundles (1 min)
```
STELLA_OPS_DEFAULT_ADMIN_PASSWORD="change-me!"
POSTGRES_USER=stella_admin
POSTGRES_PASSWORD=$(openssl rand -base64 18)
POSTGRES_HOST=postgres
VALKEY_PASSWORD=$(openssl rand -base64 18)
VALKEY_URL=valkey
```
Use existing Valkey/PostgreSQL endpoints by setting `POSTGRES_HOST` and `VALKEY_URL`. Keep credentials scoped to Stella Ops; Valkey counters enforce the transparent quota (`{{ quota_token }}` scans/day).

## 3. Launch services (1 min)
---
## Storage

- **PostgreSQL tables** (see `docs/db/SPECIFICATION.md` for schema details)
  - `replay.runs`: manifest hash, status, signatures, outputs
  - `replay.bundles`: digest, type, CAS location, size
  - `replay.subjects`: OCI digests + per-layer Merkle roots
- **Indexes** (canonical names): `runs_manifest_hash_unique`, `runs_status_created_at`, `bundles_type`, `bundles_location`, `subjects_layer_digest`
- **File store**
  - Bundles stored as `<sha256>.tar.zst` in CAS (`cas://replay/<shard>/<digest>.tar.zst`); shard = first two hex chars
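The CAS layout above is mechanical enough to sketch. A hypothetical Python helper (the function name is illustrative, not the Replay module's API) that derives the shard and bundle location from a digest:

```python
import hashlib


def cas_bundle_path(digest_hex: str) -> str:
    """Map a sha256 digest to its CAS location.

    The shard is the first two hex characters of the digest, which
    spreads bundles across 256 directories."""
    shard = digest_hex[:2]
    return f"cas://replay/{shard}/{digest_hex}.tar.zst"


digest = hashlib.sha256(b"example-bundle").hexdigest()
path = cas_bundle_path(digest)
```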
---
### Content-Addressed Verdict ID

All policy verdicts use content-addressed identifiers computed as:

```
VerdictId = "verdict:sha256:" + HexLower(SHA256(CanonicalJson(VerdictPayload)))
```
Where `VerdictPayload` is a JSON object with the following structure:

```json
{
  "_canonVersion": "stella:canon:v1",
  "deltaId": "<content-addressed delta ID>",
  "blockingDrivers": [
    {
      "cveId": "CVE-...",
      "description": "...",
      "purl": "pkg:...",
      "severity": "Critical|High|Medium|Low",
      "type": "new-reachable-cve|..."
    }
  ],
  "warningDrivers": [...],
  "appliedExceptions": ["EXCEPTION-001", ...],
  "gateLevel": "G0|G1|G2|G3|G4"
}
```

**Determinism guarantees:**
- `blockingDrivers` and `warningDrivers` are sorted by `type`, then `cveId`, then `purl`, then `severity`
- `appliedExceptions` are sorted lexicographically
- All string comparisons use Ordinal (case-sensitive, lexicographic)
- Canonical JSON follows RFC 8785 (JCS) with keys sorted alphabetically
- The `_canonVersion` field ensures hash stability across algorithm evolution
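The formula can be exercised end to end in a few lines. This Python sketch approximates `CanonicalJson` with sorted-key compact JSON (adequate for this flat payload) and is illustrative, not the `StellaOps.Policy.Deltas` implementation:

```python
import hashlib
import json


def verdict_id(payload: dict) -> str:
    """VerdictId = "verdict:sha256:" + HexLower(SHA256(CanonicalJson(payload)))"""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return "verdict:sha256:" + hashlib.sha256(canonical.encode()).hexdigest()


payload = {
    "_canonVersion": "stella:canon:v1",
    "deltaId": "delta:sha256:abc123",
    "blockingDrivers": [],
    "warningDrivers": [],
    "appliedExceptions": ["EXCEPTION-001"],
    "gateLevel": "G2",
}
# Key order in the source dict does not affect the ID:
shuffled = dict(reversed(list(payload.items())))
assert verdict_id(payload) == verdict_id(shuffled)
```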
### VerdictIdGenerator Implementation

The `VerdictIdGenerator` class in `StellaOps.Policy.Deltas` computes deterministic verdict IDs:

```csharp
// Create a verdict with content-addressed ID
var verdict = new DeltaVerdictBuilder()
    .AddBlockingDriver(new DeltaDriver
    {
        Type = "new-reachable-cve",
        CveId = "CVE-2024-001",
        Severity = DeltaDriverSeverity.Critical,
        Description = "Critical CVE is now reachable"
    })
    .Build("delta:sha256:abc123...");

// VerdictId is deterministic:
// verdict.VerdictId == "verdict:sha256:..."

// Recompute for verification:
var generator = new VerdictIdGenerator();
var recomputed = generator.ComputeVerdictId(verdict);
Debug.Assert(recomputed == verdict.VerdictId);
```
### Input Stamps
---

Last updated: 2025-11-25 (Docs Tasks Md.V · DOCS-NOTIFY-40-001)

- Flooding / notification storms.

## Controls

- **Tenant isolation**: every rule/channel/template includes `tenant`; APIs enforce `X-Stella-Tenant`. PostgreSQL tables are filtered by tenant with indexes on `(tenant_id, id)` and row-level security.
- **Secrets**: channels reference Authority `secretRef`; secrets are never stored in the Notify DB. Rotate via Authority and `:refresh-secret`.
- **Outbound allowlist**: restrict hosts/ports per tenant; defaults block the public internet in air-gapped kits.
- **Signing**: webhook deliveries include an `X-Stella-Signature` HMAC-SHA256 over body+nonce; receivers must reject stale timestamps (>5m) and verify the signature.
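Receiver-side verification of such a signature can be sketched as follows; the exact byte layout of body+nonce and the timestamp transport are assumptions for illustration, since the wire format is defined by the Notify connector, not this document:

```python
import hashlib
import hmac


def verify_webhook(secret: bytes, body: bytes, nonce: bytes,
                   signature_hex: str, sent_at: float, now: float,
                   max_age_seconds: float = 300.0) -> bool:
    """Reject stale deliveries (older than 5 minutes), then compare the
    HMAC-SHA256 over body+nonce in constant time."""
    if now - sent_at > max_age_seconds:
        return False
    expected = hmac.new(secret, body + nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Checking freshness before the HMAC comparison, and using `hmac.compare_digest` rather than `==`, closes the replay and timing-side-channel gaps the controls above call out.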
## Deployment checklist

- [ ] Authority scopes `notify.viewer|operator|admin` configured; service accounts least-privilege.
- [ ] HTTPS everywhere; TLS 1.2+; HSTS on WebService front-door.
- [ ] Valkey protected by auth and network policy; PostgreSQL TLS + auth enabled.
- [ ] Outbound allowlists defined per environment; no wildcard `*`.
- [ ] Webhook receivers validate signatures and enforce host/IP allowlists.