save work

StellaOps Bot
2025-12-19 07:28:23 +02:00
parent 6410a6d082
commit 2eafe98d44
97 changed files with 5040 additions and 1443 deletions


@@ -1,6 +1,65 @@
# Transparency (DOCS-ATTEST-74-002)
- Optional Rekor/witness integration.
- In sealed mode, use bundled checkpoints and disable live witness fetch.
- Verification: compare embedded checkpoint with bundled; log discrepancies.
- Record transparency fields on verification result: `{uuid, logIndex, checkpointHash}`.
Last updated: 2025-12-18
## Purpose
StellaOps uses transparency logs (Sigstore Rekor v2 or equivalent) to provide tamper-evident, timestamped anchoring for DSSE bundles.
This document freezes the **offline verification inputs** used by Attestor in sealed/air-gapped operation and points to the canonical schema for `rekor-receipt.json`.
## Offline Inputs (Air-Gap / Sealed Mode)
Baseline directory layout is defined in `docs/product-advisories/14-Dec-2025 - Offline and Air-Gap Technical Reference.md`:
```
/evidence/
keys/
tlog-root/ # pinned transparency log public key(s)
tlog/
checkpoint.sig # signed tree head / checkpoint (note format)
entries/ # *.jsonl entry pack (leaves + proofs)
```
### Rekor Receipt (`rekor-receipt.json`)
The offline kit (or any offline DSSE evidence pack) may include a Rekor receipt alongside a DSSE statement.
- **Schema:** `docs/schemas/rekor-receipt.schema.json`
- **Source:** `docs/product-advisories/14-Dec-2025 - Rekor Integration Technical Reference.md` (Section 13.1) and `docs/product-advisories/14-Dec-2025 - Offline and Air-Gap Technical Reference.md` (Section 1.4)
Fields:
- `uuid`: Rekor entry UUID.
- `logIndex`: Rekor log index (integer, >= 0).
- `rootHash`: expected Merkle tree root hash (lowercase hex, 32 bytes).
- `hashes`: Merkle inclusion path hashes (lowercase hex, 32 bytes each; ordered as provided by Rekor).
- `checkpoint`: either the signed checkpoint note text (UTF-8) or a relative path (e.g., `checkpoint.sig`, `tlog/checkpoint.sig`) resolved relative to the receipt file.
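The field contract above can be sanity-checked mechanically. A minimal sketch (the receipt values and the `validate_receipt` helper are illustrative, not part of the schema; consult `docs/schemas/rekor-receipt.schema.json` for the authoritative rules):

```python
import re

# Illustrative receipt matching the fields above (values are made up).
receipt = {
    "uuid": "24296fb24b8ad77a71b9c1976e5b4b2f6b6a9d35f9c2f0e6f7a8b9c0d1e2f3a4",
    "logIndex": 123456,
    "rootHash": "a" * 64,                 # 32 bytes => 64 lowercase hex chars
    "hashes": ["b" * 64, "c" * 64],       # inclusion path, ordered as Rekor provides
    "checkpoint": "tlog/checkpoint.sig",  # inline note text or relative path
}

HEX32 = re.compile(r"^[0-9a-f]{64}$")     # lowercase hex, 32 bytes

def validate_receipt(r: dict) -> list[str]:
    """Return a list of shape violations (empty => structurally valid)."""
    errors = []
    if not isinstance(r.get("logIndex"), int) or r["logIndex"] < 0:
        errors.append("logIndex must be an integer >= 0")
    if not HEX32.match(r.get("rootHash", "")):
        errors.append("rootHash must be 32 bytes of lowercase hex")
    for i, h in enumerate(r.get("hashes", [])):
        if not HEX32.match(h):
            errors.append(f"hashes[{i}] must be 32 bytes of lowercase hex")
    if not isinstance(r.get("checkpoint"), str) or not r["checkpoint"]:
        errors.append("checkpoint must be note text or a relative path")
    return errors
```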
### Checkpoint (`checkpoint.sig`)
`/evidence/tlog/checkpoint.sig` is the pinned signed tree head used for offline verification.
Contract:
- Content is **UTF-8 text** using **LF** line endings.
- The checkpoint **MUST** parse to the checkpoint body shape used by `CheckpointSignatureVerifier` (origin, tree size, base64 root hash, optional timestamp).
- In offline verification, the checkpoint from receipts SHOULD match the pinned checkpoint (tree size + root hash).
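The checkpoint body shape named in the contract (origin, tree size, base64 root hash, optional timestamp) can be sketched as follows. This is an assumption-laden illustration, not the `CheckpointSignatureVerifier` implementation; the type and function names here are hypothetical:

```python
import base64
from dataclasses import dataclass
from typing import Optional

@dataclass
class CheckpointBody:
    origin: str                # log identity line
    tree_size: int             # tree size line
    root_hash: bytes           # decoded from the base64 root-hash line
    timestamp: Optional[str]   # optional fourth line

def parse_checkpoint_body(text: str) -> CheckpointBody:
    """Parse a UTF-8, LF-terminated checkpoint body of the shape above."""
    lines = text.rstrip("\n").split("\n")
    if len(lines) < 3:
        raise ValueError("checkpoint body needs origin, tree size, root hash")
    return CheckpointBody(
        origin=lines[0],
        tree_size=int(lines[1]),
        root_hash=base64.b64decode(lines[2], validate=True),
        timestamp=lines[3] if len(lines) > 3 else None,
    )

def matches_pinned(receipt_cp: CheckpointBody, pinned_cp: CheckpointBody) -> bool:
    """Offline rule: a receipt's checkpoint SHOULD match the pinned tree head."""
    return (receipt_cp.tree_size == pinned_cp.tree_size
            and receipt_cp.root_hash == pinned_cp.root_hash)
```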
### Entry Pack (`entries/*.jsonl`)
`/evidence/tlog/entries/*.jsonl` is an optional-but-recommended offline mirror snapshot for bulk audit/replay.
Contract:
- Files are **NDJSON** (one JSON object per line).
- Each line uses the "Rekor Entry Structure" defined in `docs/product-advisories/14-Dec-2025 - Rekor Integration Technical Reference.md` (Section 4).
- **Deterministic ordering**:
- File names sort lexicographically (Ordinal).
- Within each file, lines sort by `rekor.logIndex` ascending.
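The deterministic ordering rule can be expressed as a small sketch (the `order_entry_pack` helper is illustrative; it assumes each NDJSON line carries a `rekor.logIndex` field per the Rekor Entry Structure):

```python
import json

def order_entry_pack(files: dict[str, str]) -> list[dict]:
    """Flatten an entries/*.jsonl pack into deterministic audit order:
    file names lexicographically (ordinal), then rekor.logIndex ascending
    within each file."""
    ordered = []
    for name in sorted(files):  # ordinal file-name sort
        entries = [json.loads(line)
                   for line in files[name].splitlines() if line.strip()]
        entries.sort(key=lambda e: e["rekor"]["logIndex"])  # per-file order
        ordered.extend(entries)
    return ordered
```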
## Offline Verification Rules (High Level)
1. Load the pinned Rekor log public key from `/evidence/keys/tlog-root/` (rotation is handled by shipping a new key file alongside the updated checkpoint snapshot).
2. Verify the checkpoint signature (when configured) and extract tree size + root hash.
3. For each `rekor-receipt.json`, verify:
- inclusion proof path resolves to `rootHash` for the given leaf hash,
- receipt checkpoint root matches the pinned checkpoint root (same tree head).
4. Optionally, validate that each receipt's UUID/digest appears in the entry pack and that the recomputed Merkle root matches the pinned checkpoint.


@@ -319,13 +319,13 @@ For each vulnerability instance:
- [ ] Concelier ingestion job: online download + bundle import
### Phase 2: Integration
- [ ] epss_current + epss_changes projection
- [ ] Scanner.WebService: attach EPSS-at-scan evidence
- [ ] Bulk lookup API
- [x] epss_current + epss_changes projection
- [x] Scanner.WebService: attach EPSS-at-scan evidence
- [x] Bulk lookup API (`/api/v1/epss/*`)
### Phase 3: Enrichment
- [ ] Concelier enrichment job: update triage projections
- [ ] Notify subscription to vuln.priority.changed
- [x] Scanner Worker `EpssEnrichmentJob`: update `vuln_instance_triage` for CVEs with material changes
- [x] Scanner Worker `EpssSignalJob`: generate tenant-scoped EPSS signals (stored in `epss_signal`; published via `IEpssSignalPublisher` when configured)
### Phase 4: UI/UX
- [ ] EPSS fields in vulnerability detail
@@ -342,7 +342,7 @@ For each vulnerability instance:
### 10.1 Configuration
EPSS ingestion is configured via the `Epss:Ingest` section in Scanner Worker configuration:
EPSS jobs are configured via the `Epss:*` sections in Scanner Worker configuration:
```yaml
Epss:
@@ -354,6 +354,22 @@ Epss:
InitialDelay: "00:00:30" # Wait before first run (30s)
RetryDelay: "00:05:00" # Delay between retries (5m)
MaxRetries: 3 # Maximum retry attempts
Enrichment:
Enabled: true # Enable/disable live triage enrichment
PostIngestDelay: "00:01:00" # Wait after ingest before enriching
BatchSize: 1000 # CVEs per batch
HighPercentile: 0.99 # ≥ threshold => HIGH (and CrossedHigh flag)
HighScore: 0.50 # ≥ threshold => HIGH (score-based trigger)
BigJumpDelta: 0.10 # ≥ threshold => BIG_JUMP flag
CriticalPercentile: 0.995 # ≥ threshold => CRITICAL
MediumPercentile: 0.90 # ≥ threshold => MEDIUM
FlagsToProcess: "NewScored,CrossedHigh,BigJumpUp,BigJumpDown" # Empty => process all
Signal:
Enabled: true # Enable/disable tenant-scoped signal generation
PostEnrichmentDelay: "00:00:30" # Wait after enrichment before emitting signals
BatchSize: 500 # Signals per batch
RetentionDays: 90 # Retention for epss_signal layer
SuppressSignalsOnModelChange: true # Suppress per-CVE signals on model version changes
```
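The percentile thresholds in the `Epss:Enrichment` section above evaluate highest-first. A minimal sketch of that banding plus the `BigJumpDelta` flag (the function names and the `LOW` fallback band are assumptions for illustration, not the shipped enrichment code):

```python
def epss_band(percentile: float,
              critical: float = 0.995,   # CriticalPercentile
              high: float = 0.99,        # HighPercentile
              medium: float = 0.90) -> str:  # MediumPercentile
    """Map an EPSS percentile to a triage band, checking highest first."""
    if percentile >= critical:
        return "CRITICAL"
    if percentile >= high:
        return "HIGH"
    if percentile >= medium:
        return "MEDIUM"
    return "LOW"  # assumed fallback band

def is_big_jump(old_score: float, new_score: float,
                delta: float = 0.10) -> bool:  # BigJumpDelta
    """Flag a CVE whose EPSS score moved by at least BigJumpDelta."""
    return abs(new_score - old_score) >= delta
```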
### 10.2 Online Mode (Connected)
@@ -378,12 +394,13 @@ For offline deployments:
### 10.4 Manual Ingestion
Trigger manual ingestion via the Scanner Worker API:
There is currently no HTTP endpoint for one-shot ingestion. To force a run:
```bash
# POST to trigger immediate ingestion for a specific date
curl -X POST "https://scanner-worker/epss/ingest?date=2025-12-18"
```
1. Temporarily set `Epss:Ingest:Schedule` to `0 * * * * *` and `Epss:Ingest:InitialDelay` to `00:00:00`
2. Restart Scanner Worker and wait for one ingest cycle
3. Restore the normal schedule
Note: a successful ingest triggers `EpssEnrichmentJob`, which then triggers `EpssSignalJob`.
### 10.5 Troubleshooting
@@ -392,23 +409,34 @@ curl -X POST "https://scanner-worker/epss/ingest?date=2025-12-18"
| Job not running | `Enabled: false` | Set `Enabled: true` |
| Download fails | Network/firewall | Check HTTPS egress to `epss.empiricalsecurity.com` |
| Parse errors | Corrupted file | Re-download, check SHA256 |
| Slow ingestion | Large dataset | Normal for ~250k rows; expect 60-90s |
| Enrichment/signals not running | Storage disabled or job disabled | Ensure `ScannerStorage:Postgres:ConnectionString` is set and `Epss:Enrichment:Enabled` / `Epss:Signal:Enabled` are `true` |
| Slow ingestion | Large dataset / constrained IO | Expect <120s for ~310k rows; confirm via the perf harness and compare against CI baseline |
| Duplicate runs | Idempotent | Safe - existing data preserved |
### 10.6 Monitoring
Key metrics and traces:
- **Activity**: `StellaOps.Scanner.EpssIngest` with tags:
- `epss.model_date`: Date of EPSS model
- `epss.row_count`: Number of rows ingested
- `epss.cve_count`: Distinct CVEs processed
- `epss.duration_ms`: Total ingestion time
- **Activities**
- `StellaOps.Scanner.EpssIngest` (`epss.ingest`): `epss.model_date`, `epss.row_count`, `epss.cve_count`, `epss.duration_ms`
- `StellaOps.Scanner.EpssEnrichment` (`epss.enrich`): `epss.model_date`, `epss.changed_cve_count`, `epss.updated_count`, `epss.band_change_count`, `epss.duration_ms`
- `StellaOps.Scanner.EpssSignal` (`epss.signal.generate`): `epss.model_date`, `epss.change_count`, `epss.signal_count`, `epss.filtered_count`, `epss.tenant_count`, `epss.duration_ms`
- **Logs**: Structured logs at Info/Warning/Error levels
- `EPSS ingest job started`
- `Starting EPSS ingestion for {ModelDate}`
- `EPSS ingestion completed: modelDate={ModelDate}, rows={RowCount}...`
- **Metrics**
- `epss_enrichment_runs_total{result}` / `epss_enrichment_duration_ms` / `epss_enrichment_updated_total` / `epss_enrichment_band_changes_total`
- `epss_signal_runs_total{result}` / `epss_signal_duration_ms` / `epss_signals_emitted_total{event_type, tenant_id}`
- **Logs** (structured)
- `EPSS ingest/enrichment/signal job started`
- `EPSS ingestion completed: modelDate={ModelDate}, rows={RowCount}, ...`
- `EPSS enrichment completed: updated={Updated}, bandChanges={BandChanges}, ...`
- `EPSS model version changed: {OldVersion} -> {NewVersion}`
- `EPSS signal generation completed: signals={SignalCount}, changes={ChangeCount}, ...`
### 10.7 Performance
- Local harness: `src/Scanner/__Benchmarks/StellaOps.Scanner.Storage.Epss.Perf/README.md`
- CI workflow: `.gitea/workflows/epss-ingest-perf.yml` (nightly + manual, artifacts retained 90 days)
---