Commit 9f6e6f7fb3 (parent 6bee1fdcf5) by StellaOps Bot, 2025-11-25 22:09:44 +02:00
116 changed files with 4495 additions and 730 deletions


@@ -0,0 +1,78 @@
{
"schemaVersion": 39,
"title": "Policy Pipeline",
"panels": [
{
"type": "stat",
"title": "Compile p99 (s)",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "s", "decimals": 2}},
"targets": [
{"expr": "histogram_quantile(0.99, sum(rate(policy_compile_duration_seconds_bucket[5m])) by (le))"}
]
},
{
"type": "timeseries",
"title": "Compile Duration (p95/p50)",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "s", "decimals": 2}},
"targets": [
{"expr": "histogram_quantile(0.95, sum(rate(policy_compile_duration_seconds_bucket[5m])) by (le))", "legendFormat": "p95"},
{"expr": "histogram_quantile(0.50, sum(rate(policy_compile_duration_seconds_bucket[5m])) by (le))", "legendFormat": "p50"}
]
},
{
"type": "stat",
"title": "Simulation Queue Depth",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "none"}},
"targets": [{"expr": "sum(policy_simulation_queue_depth)"}]
},
{
"type": "timeseries",
"title": "Queue Depth by Stage",
"datasource": "Prometheus",
"targets": [{"expr": "policy_simulation_queue_depth", "legendFormat": "{{stage}}"}],
"fieldConfig": {"defaults": {"unit": "none"}}
},
{
"type": "stat",
"title": "Approval p95 (s)",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "s", "decimals": 1}},
"targets": [
{"expr": "histogram_quantile(0.95, sum(rate(policy_approval_latency_seconds_bucket[5m])) by (le))"}
]
},
{
"type": "timeseries",
"title": "Approval Latency",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "s", "decimals": 1}},
"targets": [
{"expr": "histogram_quantile(0.90, sum(rate(policy_approval_latency_seconds_bucket[5m])) by (le))", "legendFormat": "p90"},
{"expr": "histogram_quantile(0.50, sum(rate(policy_approval_latency_seconds_bucket[5m])) by (le))", "legendFormat": "p50"}
]
},
{
"type": "gauge",
"title": "Promotion Success Rate (30m)",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "percent", "min": 0, "max": 100}},
"options": {"reduceOptions": {"calcs": ["last"]}, "orientation": "horizontal"},
"targets": [
{"expr": "100 * clamp_min(rate(policy_promotion_outcomes_total{outcome=\"success\"}[30m]),0) / clamp_min(rate(policy_promotion_outcomes_total[30m]),1)"}
]
},
{
"type": "barchart",
"title": "Promotion Outcomes",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "1/s"}},
"options": {"displayMode": "series"},
"targets": [
{"expr": "rate(policy_promotion_outcomes_total[5m])", "legendFormat": "{{outcome}}"}
]
}
]
}


@@ -0,0 +1,74 @@
{
"schemaVersion": 39,
"title": "Signals Pipeline",
"panels": [
{
"type": "stat",
"title": "Scoring p95 (s)",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "s", "decimals": 2}},
"targets": [
{"expr": "histogram_quantile(0.95, sum(rate(signals_reachability_scoring_duration_seconds_bucket[5m])) by (le))"}
]
},
{
"type": "timeseries",
"title": "Scoring Duration p95/p50",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "s", "decimals": 2}},
"targets": [
{"expr": "histogram_quantile(0.95, sum(rate(signals_reachability_scoring_duration_seconds_bucket[5m])) by (le))", "legendFormat": "p95"},
{"expr": "histogram_quantile(0.50, sum(rate(signals_reachability_scoring_duration_seconds_bucket[5m])) by (le))", "legendFormat": "p50"}
]
},
{
"type": "gauge",
"title": "Cache Hit Ratio (5m)",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "percent", "min": 0, "max": 100}},
"options": {"reduceOptions": {"calcs": ["last"]}, "orientation": "horizontal"},
"targets": [
{"expr": "100 * clamp_min(rate(signals_cache_hits_total[5m]),0) / clamp_min(rate(signals_cache_hits_total[5m]) + rate(signals_cache_misses_total[5m]), 1)"}
]
},
{
"type": "timeseries",
"title": "Cache Hits/Misses",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "1/s"}},
"targets": [
{"expr": "rate(signals_cache_hits_total[5m])", "legendFormat": "hits"},
{"expr": "rate(signals_cache_misses_total[5m])", "legendFormat": "misses"}
]
},
{
"type": "stat",
"title": "Sensors Reporting",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "none"}},
"targets": [
{"expr": "count(max_over_time(signals_sensor_last_seen_timestamp_seconds[15m]))"}
]
},
{
"type": "timeseries",
"title": "Sensor Staleness",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "s"}},
"targets": [
{"expr": "time() - max(signals_sensor_last_seen_timestamp_seconds) by (sensor)", "legendFormat": "{{sensor}}"}
]
},
{
"type": "barchart",
"title": "Ingestion Outcomes",
"datasource": "Prometheus",
"fieldConfig": {"defaults": {"unit": "1/s"}},
"options": {"displayMode": "series"},
"targets": [
{"expr": "rate(signals_ingestion_total[5m])", "legendFormat": "total"},
{"expr": "rate(signals_ingestion_failures_total[5m])", "legendFormat": "failures"}
]
}
]
}


@@ -0,0 +1,52 @@
groups:
- name: policy-pipeline
rules:
- alert: PolicyCompileLatencyP99High
expr: histogram_quantile(0.99, sum(rate(policy_compile_duration_seconds_bucket[5m])) by (le)) > 5
for: 10m
labels:
severity: warning
service: policy
annotations:
summary: "Policy compile latency elevated (p99)"
description: "p99 compile duration has been >5s for 10m"
- alert: PolicySimulationQueueBacklog
expr: sum(policy_simulation_queue_depth) > 100
for: 10m
labels:
severity: warning
service: policy
annotations:
summary: "Policy simulation backlog"
description: "Simulation queue depth above 100 for 10m"
- alert: PolicyApprovalLatencyHigh
expr: histogram_quantile(0.95, sum(rate(policy_approval_latency_seconds_bucket[5m])) by (le)) > 30
for: 15m
labels:
severity: critical
service: policy
annotations:
summary: "Policy approval latency high"
description: "p95 approval latency above 30s for 15m"
- alert: PolicyPromotionFailureRate
expr: clamp_min(rate(policy_promotion_outcomes_total{outcome="failure"}[15m]), 0) / clamp_min(rate(policy_promotion_outcomes_total[15m]), 1) > 0.2
for: 10m
labels:
severity: critical
service: policy
annotations:
summary: "Policy promotion failure rate elevated"
description: "Failures exceed 20% of promotions over 15m"
- alert: PolicyPromotionStall
expr: rate(policy_promotion_outcomes_total{outcome="success"}[10m]) == 0 and sum(policy_simulation_queue_depth) > 0
for: 10m
labels:
severity: warning
service: policy
annotations:
summary: "Policy promotion stalled"
description: "No successful promotions while work is queued"


@@ -0,0 +1,39 @@
# Policy Pipeline Playbook
Scope: policy compile → simulation → approval → promotion path.
## Dashboards
- Grafana: import `ops/devops/observability/grafana/policy-pipeline.json` (datasource `Prometheus`).
- Key tiles: Compile p99, Simulation Queue Depth, Approval p95, Promotion Success Rate, Promotion Outcomes.
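To make the import reproducible, the dashboard JSON can be file-provisioned instead of imported by hand. A minimal sketch, assuming the JSON files from `ops/devops/observability/grafana/` are mounted at `/var/lib/grafana/dashboards/stellaops` (the mount path, provider name, and folder are assumptions, not part of the shipped config):
```yaml
# grafana/provisioning/dashboards/stellaops.yaml (hypothetical provisioning file;
# adjust the path to wherever the dashboard JSON is mounted in your deployment).
apiVersion: 1
providers:
  - name: stellaops-pipelines      # assumed provider name
    folder: StellaOps              # assumed Grafana folder
    type: file
    disableDeletion: false
    updateIntervalSeconds: 60
    options:
      # Directory containing policy-pipeline.json and signals-pipeline.json
      # copied from ops/devops/observability/grafana/.
      path: /var/lib/grafana/dashboards/stellaops
```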
## Alerts (Prometheus)
- Rules: `ops/devops/observability/policy-alerts.yaml`
- `PolicyCompileLatencyP99High` (p99 > 5s for 10m)
- `PolicySimulationQueueBacklog` (queue depth > 100 for 10m)
- `PolicyApprovalLatencyHigh` (p95 > 30s for 15m)
- `PolicyPromotionFailureRate` (failures >20% over 15m)
- `PolicyPromotionStall` (no successes while queue non-empty for 10m)
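To wire the rules into Prometheus, reference the file from `prometheus.yml`; a minimal sketch, assuming the file is copied to `/etc/prometheus/rules/` on the server (that path is an assumption):
```yaml
# prometheus.yml fragment (rule file path is an assumption). Validate first with:
#   promtool check rules ops/devops/observability/policy-alerts.yaml
rule_files:
  - /etc/prometheus/rules/policy-alerts.yaml
```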
## Runbook
1. **Compile latency alert**
- Check build nodes for CPU caps/throttling; verify cache hit rates for the policy engine.
- Rolling-restart a single runner; if the problem persists, scale policy compile workers (+1) or purge the stale cache.
2. **Simulation backlog**
- Inspect the queue per stage (panel "Queue Depth by Stage"); the ad-hoc queries after this list surface the same data.
- If the backlog is confined to a single stage, increase concurrency for that stage or drain stuck items; otherwise, add workers.
3. **Approval latency high**
- Look for blocked approvals (UI/API outages). Re-run the approval service health check; fail over to the standby if it does not recover.
4. **Promotion failure rate/stall**
- Pull recent logs for the promotion job; classify failure reasons (policy validation vs. target registry).
- For registry errors, pause promotions and file an incident with the registry owner; for policy validation failures, revert the latest policy change or apply an override to unblock critical tenants.
5. **Verification**
- After mitigation, ensure the promotion success rate gauge recovers to >95% and queues drain back to baseline (<10); see the queries below.
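For ad-hoc triage outside Grafana, the panels referenced in steps 2, 3, and 5 reduce to a few PromQL queries (metric names as listed under Notes):
```promql
# Step 2: backlog per stage, mirrors the "Queue Depth by Stage" panel.
sum by (stage) (policy_simulation_queue_depth)

# Step 3: approval latency p95 over the last 5m.
histogram_quantile(0.95, sum(rate(policy_approval_latency_seconds_bucket[5m])) by (le))

# Step 5: promotion success ratio over the last 30m, should recover above 0.95.
sum(rate(policy_promotion_outcomes_total{outcome="success"}[30m]))
  / clamp_min(sum(rate(policy_promotion_outcomes_total[30m])), 1)
```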
## Escalation
- Primary: Policy On-Call (week N roster).
- Secondary: DevOps Guild (release).
- Page if two critical alerts fire concurrently or any critical alert lasts >30m.
## Notes
- Metrics assumed available: `policy_compile_duration_seconds_bucket`, `policy_simulation_queue_depth`, `policy_approval_latency_seconds_bucket`, `policy_promotion_outcomes_total{outcome=*}`.
- Keep alert thresholds stable unless load profile changes; adjust in Git with approval from Policy + DevOps leads.


@@ -0,0 +1,54 @@
groups:
- name: signals-pipeline
rules:
- alert: SignalsScoringLatencyP95High
expr: histogram_quantile(0.95, sum(rate(signals_reachability_scoring_duration_seconds_bucket[5m])) by (le)) > 2
for: 10m
labels:
severity: warning
service: signals
annotations:
summary: "Signals scoring latency high (p95)"
description: "Reachability scoring p95 exceeds 2s for 10m"
- alert: SignalsCacheMissRateHigh
expr: |
clamp_min(rate(signals_cache_misses_total[5m]), 0)
/ clamp_min(rate(signals_cache_hits_total[5m]) + rate(signals_cache_misses_total[5m]), 1) > 0.3
for: 10m
labels:
severity: warning
service: signals
annotations:
summary: "Signals cache miss rate high"
description: "Cache miss ratio >30% over 10m; investigate Redis or key churn."
- alert: SignalsCacheDown
expr: signals_cache_available == 0
for: 2m
labels:
severity: critical
service: signals
annotations:
summary: "Signals cache unavailable"
description: "Redis cache reported unavailable for >2m"
- alert: SignalsSensorStaleness
expr: time() - max(signals_sensor_last_seen_timestamp_seconds) by (sensor) > 900
for: 5m
labels:
severity: warning
service: signals
annotations:
summary: "Signals sensor stale"
description: "No updates from sensor for >15 minutes"
- alert: SignalsIngestionErrorRate
expr: clamp_min(rate(signals_ingestion_failures_total[5m]), 0) / clamp_min(rate(signals_ingestion_total[5m]), 1) > 0.05
for: 5m
labels:
severity: critical
service: signals
annotations:
summary: "Signals ingestion failures elevated"
description: "Ingestion failure ratio above 5% over 5m"


@@ -0,0 +1,40 @@
# Signals Pipeline Playbook
Scope: Signals ingestion, cache, scoring, and sensor freshness.
## Dashboards
- Grafana: import `ops/devops/observability/grafana/signals-pipeline.json` (datasource `Prometheus`).
- Key tiles: Scoring p95, Cache hit ratio, Sensor staleness, Ingestion outcomes.
## Alerts
- Rules: `ops/devops/observability/signals-alerts.yaml`
- `SignalsScoringLatencyP95High` (p95 > 2s for 10m)
- `SignalsCacheMissRateHigh` (miss ratio >30% for 10m)
- `SignalsCacheDown`
- `SignalsSensorStaleness` (no update >15m)
- `SignalsIngestionErrorRate` (failures >5%)
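The rules can be unit-tested with `promtool test rules`; a minimal sketch for `SignalsCacheDown`, assuming a test file (name is hypothetical) sits next to `signals-alerts.yaml`:
```yaml
# signals-alerts.test.yaml (hypothetical name); run with:
#   promtool test rules signals-alerts.test.yaml
rule_files:
  - signals-alerts.yaml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      # Cache healthy for two minutes, then down for the rest of the window.
      - series: 'signals_cache_available'
        values: '1 1 0 0 0 0'
    alert_rule_test:
      - eval_time: 5m
        alertname: SignalsCacheDown
        exp_alerts:
          - exp_labels:
              severity: critical
              service: signals
            exp_annotations:
              summary: "Signals cache unavailable"
              description: "Redis cache reported unavailable for >2m"
```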
## Runbook
1. **Scoring latency high**
- Check Mongo/Redis health; inspect CPU on workers.
- Scale Signals API pods or increase cache TTL to reduce load.
2. **Cache miss rate / cache down**
- Validate Redis connectivity/ACLs; a cache flush is not recommended unless there is runaway key growth.
- Increase the cache TTL; ensure the connection string matches the deployment.
3. **Sensor staleness**
- Identify stale sensors from the alert's `sensor` label; verify the upstream pipeline/log shipping.
- If a sensor has been retired, update the allowlist to silence the expected gap.
4. **Ingestion errors**
- Tail ingestion logs; classify errors (schema vs. storage).
- If artifacts are rejected, check the storage path and disk usage; add capacity or rotate.
5. **Verification**
- After mitigation, ensure the cache hit ratio is >90%, scoring p95 is <2s, and the staleness panel is near baseline (<5m); the queries after this list can confirm each.
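The verification targets in step 5 map onto these PromQL queries (metric names as listed under Notes):
```promql
# Cache hit ratio over the last 5m, target > 0.90.
sum(rate(signals_cache_hits_total[5m]))
  / clamp_min(sum(rate(signals_cache_hits_total[5m])) + sum(rate(signals_cache_misses_total[5m])), 1)

# Scoring latency p95 over the last 5m, target < 2s.
histogram_quantile(0.95, sum(rate(signals_reachability_scoring_duration_seconds_bucket[5m])) by (le))

# Staleness per sensor, baseline should stay under 300s.
time() - max by (sensor) (signals_sensor_last_seen_timestamp_seconds)
```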
## Escalation
- Primary: Signals on-call.
- Secondary: DevOps Guild (observability).
- Page when a critical alert persists >20m or when cache-down and scoring-latency alerts co-occur.
## Notes
- Metrics expected: `signals_reachability_scoring_duration_seconds_bucket`, `signals_cache_hits_total`, `signals_cache_misses_total`, `signals_cache_available`, `signals_sensor_last_seen_timestamp_seconds`, `signals_ingestion_total`, `signals_ingestion_failures_total`.
- Keep thresholds version-controlled; align with Policy Engine consumers if scoring SLAs change.