Policy Metrics/Logging Prep — PREP-POLICY-ENGINE-29-004
Status: Draft (2025-11-21)
Owners: Policy Guild · Observability Guild
Scope: Define metrics/logging outputs for path/scope-aware evaluation (POLICY-ENGINE-29-004) so downstream overlays/simulations can consume stable counters and traces.
Needs / open points
- Ingest output shape from POLICY-ENGINE-29-003 (path/scope evaluator) to enumerate metric dimensions and labels.
- Confirm sampling, cardinality guardrails, and redaction rules for evidence payloads.
- Decide OpenTelemetry/ADI schema version and offline export format (NDJSON) for air-gapped runs.
- Align log event IDs with change-event pipeline (30-003) to ensure replay determinism.
Draft contract (initial)
- Metrics namespace: stellaops.policy.eval.*
- Required counters: total_evaluations, policy_matches, policy_denies, evaluation_failures, overlay_projections_emitted (emission sketch after this list).
- Dimensions (tentative): tenant_id, policy_pack_id, overlay_id, scope_path, scheduler_job_id, evaluator_version, schema_version, environment (online/offline).
- Logs: structured JSON; fields include evaluation_id (ULID), scope_path, matched_policies[], deny_reasons[], duration_ms, trace_id (optional), schema_version.
- Export: NDJSON batch with deterministic ordering by evaluation_id; batching bounded by 1 MiB or 1k records; integrity hash (SHA-256) over the batch (batching sketch after this list).
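For illustration only, a minimal emission sketch for the draft counters, assuming the OpenTelemetry Python API. The attribute names mirror the tentative dimensions above and the evaluator result shape is hypothetical; nothing here is locked until POLICY-ENGINE-29-003 confirms its output.

```python
# Sketch: emitting the draft counters via the OpenTelemetry Python API.
# Dimension names follow the tentative list above and are NOT yet locked
# by POLICY-ENGINE-29-003; the `result` dict shape is an assumption.
from opentelemetry import metrics

meter = metrics.get_meter("stellaops.policy.eval")

total_evaluations = meter.create_counter(
    "stellaops.policy.eval.total_evaluations",
    description="Count of path/scope-aware policy evaluations",
)
policy_denies = meter.create_counter(
    "stellaops.policy.eval.policy_denies",
    description="Count of evaluations that produced at least one deny",
)

def record_evaluation(result: dict) -> None:
    """Record one evaluation; `result` is a hypothetical evaluator output."""
    labels = {
        "tenant_id": result["tenant_id"],
        "policy_pack_id": result["policy_pack_id"],
        "scope_path": result["scope_path"],          # cardinality guardrails TBD
        "evaluator_version": result["evaluator_version"],
        "schema_version": result["schema_version"],
        "environment": result["environment"],        # "online" | "offline"
    }
    total_evaluations.add(1, attributes=labels)
    if result.get("deny_reasons"):
        policy_denies.add(1, attributes=labels)
```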
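Likewise, a stdlib-only sketch of the log envelope and the offline NDJSON batching. The envelope fields and the 1 MiB / 1k-record bounds come from the contract bullets above; key ordering, hash placement, and the sample values are assumptions pending the OpenTelemetry/export decisions.

```python
# Sketch: deterministic NDJSON batching of log envelopes with a SHA-256
# integrity hash per batch. Envelope fields follow the draft contract;
# hash placement and JSON key ordering are assumptions.
import hashlib
import json

MAX_BATCH_BYTES = 1 * 1024 * 1024   # 1 MiB bound from the draft contract
MAX_BATCH_RECORDS = 1_000           # 1k-record bound from the draft contract

def encode_envelope(envelope: dict) -> bytes:
    # sort_keys gives byte-stable output for identical envelopes
    return json.dumps(envelope, sort_keys=True, separators=(",", ":")).encode() + b"\n"

def build_batches(envelopes: list[dict]) -> list[tuple[bytes, str]]:
    """Return (ndjson_bytes, sha256_hex) pairs, ordered by evaluation_id."""
    batches, current, size = [], [], 0
    for env in sorted(envelopes, key=lambda e: e["evaluation_id"]):  # ULIDs sort lexically
        line = encode_envelope(env)
        if current and (size + len(line) > MAX_BATCH_BYTES or len(current) >= MAX_BATCH_RECORDS):
            body = b"".join(current)
            batches.append((body, hashlib.sha256(body).hexdigest()))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        body = b"".join(current)
        batches.append((body, hashlib.sha256(body).hexdigest()))
    return batches

# Example envelope shaped per the draft log contract (values illustrative):
sample = {
    "evaluation_id": "01JD2Z8Q3WXYZ0123456789ABC",  # ULID
    "scope_path": "tenants/acme/projects/web",
    "matched_policies": ["pp-001"],
    "deny_reasons": [],
    "duration_ms": 4,
    "trace_id": None,
    "schema_version": "draft-1",
}
```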
Next actions
- Await path/scope payloads from POLICY-ENGINE-29-003 to lock dimensions and sample payloads.
- Publish sample metric set and log envelope once upstream confirms.
- Mirror into sprint execution log once finalized.