Add post-quantum cryptography support with PqSoftCryptoProvider
Some checks failed
AOC Guard CI / aoc-guard (push) Has been cancelled
AOC Guard CI / aoc-verify (push) Has been cancelled
Concelier Attestation Tests / attestation-tests (push) Has been cancelled
Docs CI / lint-and-preview (push) Has been cancelled
Policy Lint & Smoke / policy-lint (push) Has been cancelled
Scanner Analyzers / Discover Analyzers (push) Has been cancelled
Scanner Analyzers / Build Analyzers (push) Has been cancelled
Scanner Analyzers / Test Language Analyzers (push) Has been cancelled
Scanner Analyzers / Validate Test Fixtures (push) Has been cancelled
Scanner Analyzers / Verify Deterministic Output (push) Has been cancelled
wine-csp-build / Build Wine CSP Image (push) Has been cancelled
- Implemented PqSoftCryptoProvider for software-only post-quantum algorithms (Dilithium3, Falcon512) using BouncyCastle.
- Added PqSoftProviderOptions and PqSoftKeyOptions for configuration.
- Created unit tests for Dilithium3 and Falcon512 signing and verification.
- Introduced EcdsaPolicyCryptoProvider for compliance profiles (FIPS/eIDAS) with explicit allow-lists.
- Added KcmvpHashOnlyProvider for KCMVP baseline compliance.
- Updated project files and dependencies for new libraries and testing frameworks.

ops/devops/findings-ledger/offline-kit/README.md (new file, 158 lines)

# Findings Ledger Offline Kit

This directory contains manifests and scripts for deploying Findings Ledger in air-gapped/offline environments.

## Contents

```
offline-kit/
├── README.md                      # This file
├── manifest.yaml                  # Offline bundle manifest
├── images/                        # Container image tarballs (populated at build)
│   └── .gitkeep
├── migrations/                    # Database migration scripts
│   └── .gitkeep
├── dashboards/                    # Grafana dashboard JSON exports
│   └── findings-ledger.json
├── alerts/                        # Prometheus alert rules
│   └── findings-ledger-alerts.yaml
└── scripts/
    ├── import-images.sh           # Load container images
    ├── run-migrations.sh          # Apply database migrations
    └── verify-install.sh          # Post-install verification
```

## Building the Offline Kit

Use the platform offline kit builder:

```bash
# From repository root
python ops/offline-kit/build_offline_kit.py \
  --include ledger \
  --version 2025.11.0 \
  --output dist/offline-kit-ledger-2025.11.0.tar.gz
```
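
The builder produces a single tarball. Before carrying it across the air gap, it is worth recording a checksum on the connected side and re-checking it after transfer; the `.sha256` sidecar file is a convention suggested here, not something the builder is documented to emit:

```bash
# On the connected build host
sha256sum dist/offline-kit-ledger-2025.11.0.tar.gz > dist/offline-kit-ledger-2025.11.0.tar.gz.sha256

# On the air-gapped host, after transfer
sha256sum -c offline-kit-ledger-2025.11.0.tar.gz.sha256
```
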
## Installation Steps

### 1. Transfer and Extract

```bash
# On air-gapped host
tar xzf offline-kit-ledger-*.tar.gz
cd offline-kit-ledger-*
```

### 2. Load Container Images

```bash
./scripts/import-images.sh
# Loads: stellaops/findings-ledger, stellaops/findings-ledger-migrations
```
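
If cluster nodes pull from a local mirror registry rather than the host's image cache, `import-images.sh` accepts an optional registry prefix (see the script further down in this commit). The registry hostname below is a placeholder:

```bash
# Re-tags loaded images as registry.airgap.local:5000/stellaops/*
./scripts/import-images.sh registry.airgap.local:5000/

# Push to the mirror if nodes pull from it rather than the local daemon
docker push registry.airgap.local:5000/stellaops/findings-ledger:2025.11.0
```
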
### 3. Run Database Migrations

```bash
export LEDGER__DB__CONNECTIONSTRING="Host=...;Database=...;..."
./scripts/run-migrations.sh
```

### 4. Deploy Service

Choose a deployment method:

**Docker Compose:**
```bash
cp ../compose/env/ledger.prod.env ./ledger.env
# Edit ledger.env with local values
docker compose -f ../compose/docker-compose.ledger.yaml up -d
```

**Helm:**
```bash
helm upgrade --install findings-ledger ../helm \
  -f values-offline.yaml \
  --set image.pullPolicy=Never
```

### 5. Verify Installation

```bash
./scripts/verify-install.sh
```
## Configuration Notes

### Sealed Mode

In air-gapped environments, configure:

```yaml
# Disable outbound attachment egress
LEDGER__ATTACHMENTS__ALLOWEGRESS: "false"

# Set appropriate staleness thresholds
LEDGER__AIRGAP__ADVISORYSTALETHRESHOLD: "604800"   # 7 days
LEDGER__AIRGAP__VEXSTALETHRESHOLD: "604800"
LEDGER__AIRGAP__POLICYSTALETHRESHOLD: "86400"      # 1 day
```

### Merkle Anchoring

For offline environments without external anchoring:

```yaml
LEDGER__MERKLE__EXTERNALIZE: "false"
```

Keep local Merkle roots and export them periodically for audit.
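
How the roots are exported depends on the ledger schema, which this kit does not ship. As a rough sketch, assuming a `merkle_roots` table (hypothetical name and columns), a periodic export could look like:

```bash
# Hypothetical table and column names; adjust to the actual ledger schema
psql "$LEDGER_DB" -c "\copy (SELECT * FROM merkle_roots ORDER BY anchored_at) TO 'merkle-roots-$(date -u +%Y%m%d).csv' CSV HEADER"
```
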
## Backup & Restore

See `docs/modules/findings-ledger/deployment.md` for full backup/restore procedures.

Quick reference:
```bash
# Backup
pg_dump -Fc --dbname="$LEDGER_DB" --file ledger-$(date -u +%Y%m%d).dump

# Restore
pg_restore -C -d postgres ledger-YYYYMMDD.dump

# Replay projections
dotnet run --project tools/LedgerReplayHarness -- \
  --connection "$LEDGER_DB" --tenant all
```
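
Before relying on a dump (or moving it off the host), it can be sanity-checked without touching a database:

```bash
# List the archive's table of contents; a corrupt or truncated dump fails here
pg_restore --list ledger-YYYYMMDD.dump > /dev/null && echo "dump readable"
```
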
## Observability

Import the provided dashboards into your local Grafana instance:

```bash
# Import via Grafana API or UI
curl -X POST http://grafana:3000/api/dashboards/db \
  -H "Content-Type: application/json" \
  -d @dashboards/findings-ledger.json
```

Apply alert rules to Prometheus:
```bash
cp alerts/findings-ledger-alerts.yaml /etc/prometheus/rules.d/
# Reload Prometheus (see note below)
```
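
The reload mechanism depends on how Prometheus is run; the two standard options are the lifecycle endpoint (only available when Prometheus was started with `--web.enable-lifecycle`) or a SIGHUP. The container name below is an example:

```bash
# Option 1: lifecycle endpoint (requires --web.enable-lifecycle)
curl -X POST http://prometheus:9090/-/reload

# Option 2: send SIGHUP to the Prometheus process, e.g. when it runs in a container
docker kill --signal=HUP prometheus
```
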
## Troubleshooting

| Issue | Resolution |
| --- | --- |
| Migration fails | Check DB connectivity; verify user has CREATE/ALTER privileges |
| Health check fails | Check logs: `docker logs findings-ledger` or `kubectl logs -l app.kubernetes.io/name=findings-ledger` |
| Metrics not visible | Verify OTLP endpoint is reachable or use Prometheus scrape |
| Staleness warnings | Import fresh advisory/VEX bundles via Mirror |

## Support

- Platform docs: `docs/modules/findings-ledger/`
- Offline operation: `docs/24_OFFLINE_KIT.md`
- Air-gap mode: `docs/airgap/`
ops/devops/findings-ledger/offline-kit/alerts/findings-ledger-alerts.yaml (new file, 122 lines)

# Findings Ledger Prometheus Alert Rules
# Apply to Prometheus: cp findings-ledger-alerts.yaml /etc/prometheus/rules.d/

groups:
  - name: findings-ledger
    rules:
      # Service availability
      - alert: FindingsLedgerDown
        expr: up{job="findings-ledger"} == 0
        for: 2m
        labels:
          severity: critical
          service: findings-ledger
        annotations:
          summary: "Findings Ledger service is down"
          description: "Findings Ledger service has been unreachable for more than 2 minutes."

      # Write latency
      - alert: FindingsLedgerHighWriteLatency
        expr: histogram_quantile(0.95, sum(rate(ledger_write_latency_seconds_bucket{job="findings-ledger"}[5m])) by (le)) > 1
        for: 5m
        labels:
          severity: warning
          service: findings-ledger
        annotations:
          summary: "Findings Ledger write latency is high"
          description: "95th percentile write latency exceeds 1 second for 5 minutes. Current: {{ $value | humanizeDuration }}"

      - alert: FindingsLedgerCriticalWriteLatency
        expr: histogram_quantile(0.95, sum(rate(ledger_write_latency_seconds_bucket{job="findings-ledger"}[5m])) by (le)) > 5
        for: 2m
        labels:
          severity: critical
          service: findings-ledger
        annotations:
          summary: "Findings Ledger write latency is critically high"
          description: "95th percentile write latency exceeds 5 seconds. Current: {{ $value | humanizeDuration }}"

      # Projection lag
      - alert: FindingsLedgerProjectionLag
        expr: ledger_projection_lag_seconds{job="findings-ledger"} > 30
        for: 5m
        labels:
          severity: warning
          service: findings-ledger
        annotations:
          summary: "Findings Ledger projection lag is high"
          description: "Projection lag exceeds 30 seconds for 5 minutes. Current: {{ $value | humanizeDuration }}"

      - alert: FindingsLedgerCriticalProjectionLag
        expr: ledger_projection_lag_seconds{job="findings-ledger"} > 300
        for: 2m
        labels:
          severity: critical
          service: findings-ledger
        annotations:
          summary: "Findings Ledger projection lag is critically high"
          description: "Projection lag exceeds 5 minutes. Current: {{ $value | humanizeDuration }}"

      # Merkle anchoring
      - alert: FindingsLedgerMerkleAnchorStale
        expr: time() - ledger_merkle_last_anchor_timestamp_seconds{job="findings-ledger"} > 600
        for: 5m
        labels:
          severity: warning
          service: findings-ledger
        annotations:
          summary: "Findings Ledger Merkle anchor is stale"
          description: "No Merkle anchor created in the last 10 minutes. Last anchor: {{ $value | humanizeTimestamp }}"

      - alert: FindingsLedgerMerkleAnchorFailed
        expr: increase(ledger_merkle_anchor_failures_total{job="findings-ledger"}[15m]) > 0
        for: 0m
        labels:
          severity: warning
          service: findings-ledger
        annotations:
          summary: "Findings Ledger Merkle anchoring failed"
          description: "Merkle anchor operation failed. Check logs for details."

      # Database connectivity
      - alert: FindingsLedgerDatabaseErrors
        expr: increase(ledger_database_errors_total{job="findings-ledger"}[5m]) > 5
        for: 2m
        labels:
          severity: warning
          service: findings-ledger
        annotations:
          summary: "Findings Ledger database errors detected"
          description: "More than 5 database errors in the last 5 minutes."

      # Attachment storage
      - alert: FindingsLedgerAttachmentStorageErrors
        expr: increase(ledger_attachment_storage_errors_total{job="findings-ledger"}[15m]) > 0
        for: 0m
        labels:
          severity: warning
          service: findings-ledger
        annotations:
          summary: "Findings Ledger attachment storage errors"
          description: "Attachment storage operation failed. Check encryption keys and storage connectivity."

      # Air-gap staleness (for offline environments)
      - alert: FindingsLedgerAdvisoryStaleness
        expr: ledger_airgap_advisory_staleness_seconds{job="findings-ledger"} > 604800
        for: 1h
        labels:
          severity: warning
          service: findings-ledger
        annotations:
          summary: "Advisory data is stale in air-gapped environment"
          description: "Advisory data is older than 7 days. Import fresh data from Mirror."

      - alert: FindingsLedgerVexStaleness
        expr: ledger_airgap_vex_staleness_seconds{job="findings-ledger"} > 604800
        for: 1h
        labels:
          severity: warning
          service: findings-ledger
        annotations:
          summary: "VEX data is stale in air-gapped environment"
          description: "VEX data is older than 7 days. Import fresh data from Mirror."
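
The rule file can be validated before it is dropped into `rules.d/`; `promtool` ships with the Prometheus release tarball:

```bash
promtool check rules alerts/findings-ledger-alerts.yaml
```
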
ops/devops/findings-ledger/offline-kit/dashboards/findings-ledger.json (new file, 185 lines)

{
  "__inputs": [
    {
      "name": "DS_PROMETHEUS",
      "label": "Prometheus",
      "description": "",
      "type": "datasource",
      "pluginId": "prometheus",
      "pluginName": "Prometheus"
    }
  ],
  "__requires": [
    {
      "type": "grafana",
      "id": "grafana",
      "name": "Grafana",
      "version": "9.0.0"
    },
    {
      "type": "datasource",
      "id": "prometheus",
      "name": "Prometheus",
      "version": "1.0.0"
    }
  ],
  "annotations": {
    "list": []
  },
  "description": "Findings Ledger service metrics and health",
  "editable": true,
  "fiscalYearStartMonth": 0,
  "graphTooltip": 0,
  "id": null,
  "links": [],
  "liveNow": false,
  "panels": [
    {
      "collapsed": false,
      "gridPos": { "h": 1, "w": 24, "x": 0, "y": 0 },
      "id": 1,
      "panels": [],
      "title": "Health Overview",
      "type": "row"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [
            { "options": { "0": { "color": "red", "index": 1, "text": "DOWN" }, "1": { "color": "green", "index": 0, "text": "UP" } }, "type": "value" }
          ],
          "thresholds": { "mode": "absolute", "steps": [{ "color": "red", "value": null }, { "color": "green", "value": 1 }] }
        },
        "overrides": []
      },
      "gridPos": { "h": 4, "w": 4, "x": 0, "y": 1 },
      "id": 2,
      "options": { "colorMode": "value", "graphMode": "none", "justifyMode": "auto", "orientation": "auto", "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false }, "textMode": "auto" },
      "pluginVersion": "9.0.0",
      "targets": [{ "expr": "up{job=\"findings-ledger\"}", "refId": "A" }],
      "title": "Service Status",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": { "color": { "mode": "palette-classic" }, "unit": "short" },
        "overrides": []
      },
      "gridPos": { "h": 4, "w": 4, "x": 4, "y": 1 },
      "id": 3,
      "options": { "colorMode": "value", "graphMode": "area", "justifyMode": "auto", "orientation": "auto", "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false }, "textMode": "auto" },
      "pluginVersion": "9.0.0",
      "targets": [{ "expr": "ledger_events_total{job=\"findings-ledger\"}", "refId": "A" }],
      "title": "Total Events",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": { "color": { "mode": "thresholds" }, "unit": "s", "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }, { "color": "yellow", "value": 1 }, { "color": "red", "value": 5 }] } },
        "overrides": []
      },
      "gridPos": { "h": 4, "w": 4, "x": 8, "y": 1 },
      "id": 4,
      "options": { "colorMode": "value", "graphMode": "area", "justifyMode": "auto", "orientation": "auto", "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false }, "textMode": "auto" },
      "pluginVersion": "9.0.0",
      "targets": [{ "expr": "ledger_projection_lag_seconds{job=\"findings-ledger\"}", "refId": "A" }],
      "title": "Projection Lag",
      "type": "stat"
    },
    {
      "collapsed": false,
      "gridPos": { "h": 1, "w": 24, "x": 0, "y": 5 },
      "id": 10,
      "panels": [],
      "title": "Write Performance",
      "type": "row"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "unit": "s" },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 6 },
      "id": 11,
      "options": { "legend": { "calcs": ["mean", "max"], "displayMode": "table", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } },
      "pluginVersion": "9.0.0",
      "targets": [
        { "expr": "histogram_quantile(0.50, sum(rate(ledger_write_latency_seconds_bucket{job=\"findings-ledger\"}[5m])) by (le))", "legendFormat": "p50", "refId": "A" },
        { "expr": "histogram_quantile(0.95, sum(rate(ledger_write_latency_seconds_bucket{job=\"findings-ledger\"}[5m])) by (le))", "legendFormat": "p95", "refId": "B" },
        { "expr": "histogram_quantile(0.99, sum(rate(ledger_write_latency_seconds_bucket{job=\"findings-ledger\"}[5m])) by (le))", "legendFormat": "p99", "refId": "C" }
      ],
      "title": "Write Latency",
      "type": "timeseries"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "unit": "ops" },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 6 },
      "id": 12,
      "options": { "legend": { "calcs": ["mean", "max"], "displayMode": "table", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } },
      "pluginVersion": "9.0.0",
      "targets": [{ "expr": "rate(ledger_events_total{job=\"findings-ledger\"}[5m])", "legendFormat": "events/s", "refId": "A" }],
      "title": "Event Write Rate",
      "type": "timeseries"
    },
    {
      "collapsed": false,
      "gridPos": { "h": 1, "w": 24, "x": 0, "y": 14 },
      "id": 20,
      "panels": [],
      "title": "Merkle Anchoring",
      "type": "row"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 10, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "never", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "unit": "s" },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 15 },
      "id": 21,
      "options": { "legend": { "calcs": ["mean", "max"], "displayMode": "table", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "multi", "sort": "none" } },
      "pluginVersion": "9.0.0",
      "targets": [
        { "expr": "histogram_quantile(0.50, sum(rate(ledger_merkle_anchor_duration_seconds_bucket{job=\"findings-ledger\"}[5m])) by (le))", "legendFormat": "p50", "refId": "A" },
        { "expr": "histogram_quantile(0.95, sum(rate(ledger_merkle_anchor_duration_seconds_bucket{job=\"findings-ledger\"}[5m])) by (le))", "legendFormat": "p95", "refId": "B" }
      ],
      "title": "Anchor Duration",
      "type": "timeseries"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${DS_PROMETHEUS}" },
      "fieldConfig": {
        "defaults": { "color": { "mode": "thresholds" }, "unit": "short", "thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] } },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 15 },
      "id": 22,
      "options": { "colorMode": "value", "graphMode": "area", "justifyMode": "auto", "orientation": "auto", "reduceOptions": { "calcs": ["lastNotNull"], "fields": "", "values": false }, "textMode": "auto" },
      "pluginVersion": "9.0.0",
      "targets": [{ "expr": "ledger_merkle_anchors_total{job=\"findings-ledger\"}", "refId": "A" }],
      "title": "Total Anchors",
      "type": "stat"
    }
  ],
  "refresh": "30s",
  "schemaVersion": 37,
  "style": "dark",
  "tags": ["stellaops", "findings-ledger"],
  "templating": { "list": [] },
  "time": { "from": "now-1h", "to": "now" },
  "timepicker": {},
  "timezone": "utc",
  "title": "Findings Ledger",
  "uid": "findings-ledger",
  "version": 1,
  "weekStart": ""
}
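
Note that this export declares an `__inputs` placeholder (`DS_PROMETHEUS`), so posting it verbatim to `/api/dashboards/db` as shown in the README above leaves the datasource reference unresolved; the Grafana UI import dialog prompts for it. For scripted installs, one option is the import API with an `inputs` mapping. A sketch to adapt (confirm against the local Grafana version, and substitute the real datasource name or UID for the `value` field):

```bash
jq -n --slurpfile d dashboards/findings-ledger.json \
  '{dashboard: $d[0], overwrite: true,
    inputs: [{name: "DS_PROMETHEUS", type: "datasource", pluginId: "prometheus", value: "prometheus"}]}' |
  curl -sf -X POST http://grafana:3000/api/dashboards/import \
    -H "Content-Type: application/json" -d @-
```
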
ops/devops/findings-ledger/offline-kit/images/.gitkeep (new file, 1 line)

# Container image tarballs populated at build time by offline-kit builder
ops/devops/findings-ledger/offline-kit/manifest.yaml (new file, 106 lines)

# Findings Ledger Offline Kit Manifest
# Version: 2025.11.0
# Generated: 2025-12-07

apiVersion: stellaops.io/v1
kind: OfflineKitManifest
metadata:
  name: findings-ledger
  version: "2025.11.0"
  description: Findings Ledger service for event-sourced findings storage with Merkle anchoring

spec:
  components:
    - name: findings-ledger
      type: service
      image: stellaops/findings-ledger:2025.11.0
      digest: ""  # Populated at build time

    - name: findings-ledger-migrations
      type: job
      image: stellaops/findings-ledger-migrations:2025.11.0
      digest: ""  # Populated at build time

  dependencies:
    - name: postgresql
      version: ">=14.0"
      type: database
      required: true

    - name: otel-collector
      version: ">=0.80.0"
      type: service
      required: false
      description: Optional for telemetry export

  migrations:
    - version: "001"
      file: migrations/001_initial_schema.sql
      checksum: ""  # Populated at build time
    - version: "002"
      file: migrations/002_merkle_tables.sql
      checksum: ""
    - version: "003"
      file: migrations/003_attachments.sql
      checksum: ""
    - version: "004"
      file: migrations/004_projections.sql
      checksum: ""
    - version: "005"
      file: migrations/005_airgap_imports.sql
      checksum: ""
    - version: "006"
      file: migrations/006_evidence_snapshots.sql
      checksum: ""
    - version: "007"
      file: migrations/007_timeline_events.sql
      checksum: ""
    - version: "008"
      file: migrations/008_attestation_pointers.sql
      checksum: ""

  dashboards:
    - name: findings-ledger
      file: dashboards/findings-ledger.json
      checksum: ""

  alerts:
    - name: findings-ledger-alerts
      file: alerts/findings-ledger-alerts.yaml
      checksum: ""

  configuration:
    required:
      - key: LEDGER__DB__CONNECTIONSTRING
        description: PostgreSQL connection string
        secret: true
      - key: LEDGER__ATTACHMENTS__ENCRYPTIONKEY
        description: AES-256 encryption key for attachments (base64)
        secret: true

    optional:
      - key: LEDGER__MERKLE__SIGNINGKEY
        description: Signing key for Merkle root attestations
        secret: true
      - key: LEDGER__OBSERVABILITY__OTLPENDPOINT
        description: OpenTelemetry collector endpoint
        default: http://otel-collector:4317
      - key: LEDGER__MERKLE__ANCHORINTERVAL
        description: Merkle anchor interval (TimeSpan)
        default: "00:05:00"
      - key: LEDGER__AIRGAP__ADVISORYSTALETHRESHOLD
        description: Advisory staleness threshold in seconds
        default: "604800"

  verification:
    healthEndpoint: /health/ready
    metricsEndpoint: /metrics
    expectedMetrics:
      - ledger_write_latency_seconds
      - ledger_projection_lag_seconds
      - ledger_merkle_anchor_duration_seconds
      - ledger_events_total

checksums:
  algorithm: sha256
  manifest: ""  # Populated at build time
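
The empty `digest:` and `checksum:` fields are populated by the offline-kit builder (`ops/offline-kit/build_offline_kit.py`). A rough, illustrative sketch of what that step computes, not the builder's actual implementation:

```bash
# sha256 checksums for bundled artifacts (illustrative only)
for f in dashboards/findings-ledger.json alerts/findings-ledger-alerts.yaml migrations/*.sql; do
  [ -f "$f" ] && printf '%s  %s\n' "$(sha256sum "$f" | cut -d' ' -f1)" "$f"
done

# Image digests come from the daemon/registry
# (RepoDigests is populated once the image has been pushed or pulled by digest)
docker inspect --format '{{index .RepoDigests 0}}' stellaops/findings-ledger:2025.11.0
```
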
ops/devops/findings-ledger/offline-kit/migrations/.gitkeep (new file, 1 line)

# Database migration SQL scripts copied from StellaOps.FindingsLedger.Migrations
ops/devops/findings-ledger/offline-kit/scripts/import-images.sh (new file, 131 lines)

#!/usr/bin/env bash
# Import Findings Ledger container images into local Docker/containerd
# Usage: ./import-images.sh [registry-prefix]
#
# Example:
#   ./import-images.sh                     # Loads as stellaops/*
#   ./import-images.sh myregistry.local/   # Loads and tags as myregistry.local/stellaops/*

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMAGES_DIR="${SCRIPT_DIR}/../images"
REGISTRY_PREFIX="${1:-}"

# Color output helpers
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log_info() { echo -e "${GREEN}[INFO]${NC} $*"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
log_error() { echo -e "${RED}[ERROR]${NC} $*"; }

# Detect container runtime
detect_runtime() {
  if command -v docker &>/dev/null; then
    echo "docker"
  elif command -v nerdctl &>/dev/null; then
    echo "nerdctl"
  elif command -v podman &>/dev/null; then
    echo "podman"
  else
    log_error "No container runtime found (docker, nerdctl, podman)"
    exit 1
  fi
}

RUNTIME=$(detect_runtime)
log_info "Using container runtime: $RUNTIME"

# Load images from tarballs
load_images() {
  local count=0

  for tarball in "${IMAGES_DIR}"/*.tar; do
    if [[ -f "$tarball" ]]; then
      log_info "Loading image from: $(basename "$tarball")"

      if $RUNTIME load -i "$tarball"; then
        # Plain assignment instead of ((count++)): under `set -e` the zero
        # result of the first post-increment would abort the script.
        count=$((count + 1))
      else
        log_error "Failed to load: $tarball"
        return 1
      fi
    fi
  done

  if [[ $count -eq 0 ]]; then
    log_warn "No image tarballs found in $IMAGES_DIR"
    log_warn "Run the offline kit builder first to populate images"
    return 1
  fi

  log_info "Loaded $count image(s)"
}

# Re-tag images with custom registry prefix
retag_images() {
  if [[ -z "$REGISTRY_PREFIX" ]]; then
    log_info "No registry prefix specified, skipping re-tag"
    return 0
  fi

  local images=(
    "stellaops/findings-ledger"
    "stellaops/findings-ledger-migrations"
  )

  for image in "${images[@]}"; do
    # Get the loaded tag
    local loaded_tag
    loaded_tag=$($RUNTIME images --format '{{.Repository}}:{{.Tag}}' | grep "^${image}:" | head -1)

    if [[ -n "$loaded_tag" ]]; then
      local new_tag="${REGISTRY_PREFIX}${loaded_tag}"
      log_info "Re-tagging: $loaded_tag -> $new_tag"
      $RUNTIME tag "$loaded_tag" "$new_tag"
    fi
  done
}

# Verify loaded images
verify_images() {
  log_info "Verifying loaded images..."

  local images=(
    "stellaops/findings-ledger"
    "stellaops/findings-ledger-migrations"
  )

  local missing=0
  for image in "${images[@]}"; do
    if $RUNTIME images --format '{{.Repository}}' | grep -q "^${REGISTRY_PREFIX}${image}$"; then
      log_info "  ✓ ${REGISTRY_PREFIX}${image}"
    else
      log_error "  ✗ ${REGISTRY_PREFIX}${image} not found"
      missing=$((missing + 1))
    fi
  done

  if [[ $missing -gt 0 ]]; then
    log_error "$missing image(s) missing"
    return 1
  fi

  log_info "All images verified"
}

main() {
  log_info "Findings Ledger - Image Import"
  log_info "=============================="

  load_images
  retag_images
  verify_images

  log_info "Image import complete"
}

main "$@"
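
One caveat when the detected runtime is `nerdctl` on a Kubernetes node: containerd namespaces matter, and images loaded into the default namespace are not visible to the kubelet. A hedged workaround (the tarball name depends on what the builder produced):

```bash
# Load into the containerd namespace Kubernetes uses
sudo nerdctl --namespace k8s.io load -i images/findings-ledger.tar
```
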
ops/devops/findings-ledger/offline-kit/scripts/run-migrations.sh (new file, 125 lines)

#!/usr/bin/env bash
# Run Findings Ledger database migrations
# Usage: ./run-migrations.sh [connection-string]
#
# Environment variables:
#   LEDGER__DB__CONNECTIONSTRING - PostgreSQL connection string (if not provided as arg)

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
MIGRATIONS_DIR="${SCRIPT_DIR}/../migrations"

# Color output helpers
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log_info() { echo -e "${GREEN}[INFO]${NC} $*"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
log_error() { echo -e "${RED}[ERROR]${NC} $*"; }

# Get connection string
CONNECTION_STRING="${1:-${LEDGER__DB__CONNECTIONSTRING:-}}"

if [[ -z "$CONNECTION_STRING" ]]; then
  log_error "Connection string required"
  echo "Usage: $0 <connection-string>"
  echo "   or set LEDGER__DB__CONNECTIONSTRING environment variable"
  exit 1
fi

# Detect container runtime
detect_runtime() {
  if command -v docker &>/dev/null; then
    echo "docker"
  elif command -v nerdctl &>/dev/null; then
    echo "nerdctl"
  elif command -v podman &>/dev/null; then
    echo "podman"
  else
    log_error "No container runtime found"
    exit 1
  fi
}

RUNTIME=$(detect_runtime)

# Run migrations via container
run_migrations_container() {
  log_info "Running migrations via container..."

  $RUNTIME run --rm \
    -e "LEDGER__DB__CONNECTIONSTRING=${CONNECTION_STRING}" \
    --network host \
    stellaops/findings-ledger-migrations:2025.11.0 \
    --connection "$CONNECTION_STRING"
}

# Alternative: Run migrations via psql (if dotnet not available)
run_migrations_psql() {
  log_info "Running migrations via psql..."

  if ! command -v psql &>/dev/null; then
    log_error "psql not found and container runtime unavailable"
    exit 1
  fi

  # Parse connection string for psql
  # Expected format: Host=...;Port=...;Database=...;Username=...;Password=...
  local host port database username password
  host=$(echo "$CONNECTION_STRING" | grep -oP 'Host=\K[^;]+')
  port=$(echo "$CONNECTION_STRING" | grep -oP 'Port=\K[^;]+' || echo "5432")
  database=$(echo "$CONNECTION_STRING" | grep -oP 'Database=\K[^;]+')
  username=$(echo "$CONNECTION_STRING" | grep -oP 'Username=\K[^;]+')
  password=$(echo "$CONNECTION_STRING" | grep -oP 'Password=\K[^;]+')

  export PGPASSWORD="$password"

  for migration in "${MIGRATIONS_DIR}"/*.sql; do
    if [[ -f "$migration" ]]; then
      log_info "Applying: $(basename "$migration")"
      psql -h "$host" -p "$port" -U "$username" -d "$database" -f "$migration"
    fi
  done

  unset PGPASSWORD
}

verify_connection() {
  log_info "Verifying database connection..."

  # Try container-based verification
  if $RUNTIME run --rm \
    --network host \
    postgres:14-alpine \
    pg_isready -h "$(echo "$CONNECTION_STRING" | grep -oP 'Host=\K[^;]+')" \
    -p "$(echo "$CONNECTION_STRING" | grep -oP 'Port=\K[^;]+' || echo 5432)" \
    &>/dev/null; then
    log_info "Database connection verified"
    return 0
  fi

  log_warn "Could not verify database connection (may still work)"
  return 0
}

main() {
  log_info "Findings Ledger - Database Migrations"
  log_info "======================================"

  verify_connection

  # Prefer container-based migrations
  if $RUNTIME image inspect stellaops/findings-ledger-migrations:2025.11.0 &>/dev/null; then
    run_migrations_container
  else
    log_warn "Migration image not found, falling back to psql"
    run_migrations_psql
  fi

  log_info "Migrations complete"
}

main "$@"
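
For reference, the connection string is the semicolon-separated `key=value` form the script parses (Host/Port/Database/Username/Password); the host and database names below are placeholders:

```bash
export LEDGER__DB__CONNECTIONSTRING="Host=10.0.0.12;Port=5432;Database=findings_ledger;Username=ledger;Password=********"
./scripts/run-migrations.sh

# or pass it explicitly
./scripts/run-migrations.sh "Host=10.0.0.12;Port=5432;Database=findings_ledger;Username=ledger;Password=********"
```
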
ops/devops/findings-ledger/offline-kit/scripts/verify-install.sh (new file, 70 lines)

#!/usr/bin/env bash
# Verify Findings Ledger installation
# Usage: ./verify-install.sh [service-url]

set -euo pipefail

SERVICE_URL="${1:-http://localhost:8188}"

# Color output helpers
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log_info() { echo -e "${GREEN}[INFO]${NC} $*"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
log_error() { echo -e "${RED}[ERROR]${NC} $*"; }
log_pass() { echo -e "${GREEN}  ✓${NC} $*"; }
log_fail() { echo -e "${RED}  ✗${NC} $*"; }

CHECKS_PASSED=0
CHECKS_FAILED=0

run_check() {
  local name="$1"
  local cmd="$2"

  if eval "$cmd" &>/dev/null; then
    log_pass "$name"
    # Plain assignment instead of ((CHECKS_PASSED++)): under `set -e` the zero
    # result of the first post-increment would abort the script.
    CHECKS_PASSED=$((CHECKS_PASSED + 1))
  else
    log_fail "$name"
    CHECKS_FAILED=$((CHECKS_FAILED + 1))
  fi
}

main() {
  log_info "Findings Ledger - Installation Verification"
  log_info "==========================================="
  log_info "Service URL: $SERVICE_URL"
  echo ""

  log_info "Health Checks:"
  run_check "Readiness endpoint" "curl -sf ${SERVICE_URL}/health/ready"
  run_check "Liveness endpoint" "curl -sf ${SERVICE_URL}/health/live"

  echo ""
  log_info "Metrics Checks:"
  run_check "Metrics endpoint available" "curl -sf ${SERVICE_URL}/metrics | head -1"
  run_check "ledger_write_latency_seconds present" "curl -sf ${SERVICE_URL}/metrics | grep -q ledger_write_latency_seconds"
  run_check "ledger_projection_lag_seconds present" "curl -sf ${SERVICE_URL}/metrics | grep -q ledger_projection_lag_seconds"
  run_check "ledger_merkle_anchor_duration_seconds present" "curl -sf ${SERVICE_URL}/metrics | grep -q ledger_merkle_anchor_duration_seconds"

  echo ""
  log_info "API Checks:"
  run_check "OpenAPI spec available" "curl -sf ${SERVICE_URL}/swagger/v1/swagger.json | head -1"

  echo ""
  log_info "========================================"
  log_info "Results: ${CHECKS_PASSED} passed, ${CHECKS_FAILED} failed"

  if [[ $CHECKS_FAILED -gt 0 ]]; then
    log_error "Some checks failed. Review service logs for details."
    exit 1
  fi

  log_info "All checks passed. Installation verified."
}

main "$@"
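
When the service runs in Kubernetes rather than on the local host, the simplest way to point the verifier at it from inside the enclave is a port-forward (service name assumed from the Helm release used earlier):

```bash
# In one terminal
kubectl port-forward svc/findings-ledger 8188:8188

# In another terminal
./scripts/verify-install.sh http://localhost:8188
```
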