
rb-score

Deterministic scorer for the reachability benchmark.

What it does

  • Validates submissions against schemas/submission.schema.json and truth against schemas/truth.schema.json.
  • Computes precision/recall/F1 (micro, sink-level).
  • Computes an explainability score per prediction (0-3) and averages it across predictions.
  • Checks duplicate predictions for determinism (inconsistent duplicates lower the determinism rate).
  • Surfaces runtime metadata from the submission (run block).
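
The sink-level micro scoring described above can be sketched as follows. The truth/prediction shapes (dict keys, `label` string values) are assumptions for illustration, not the scorer's actual schema:

```python
def micro_score(truth, predictions):
    """Micro precision/recall/F1 over sinks.

    truth: {sink_id: "reachable" | "unreachable" | "unknown"}  (assumed shape)
    predictions: {sink_id: bool}, True = predicted reachable   (assumed shape)
    """
    tp = fp = fn = 0
    for sink, predicted_reachable in predictions.items():
        label = truth.get(sink)
        if label == "unknown":
            continue  # unknown truth sinks are ignored for FP/FN counting
        if predicted_reachable:
            if label == "reachable":
                tp += 1
            else:
                fp += 1  # includes sinks absent from truth (strict posture)
    for sink, label in truth.items():
        if label == "reachable" and not predictions.get(sink, False):
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```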

Install (offline-friendly)

python -m pip install -r requirements.txt

Usage

./rb_score.py --truth ../../benchmark/truth/public.json --submission ../../benchmark/submissions/sample.json --format json

Compare / leaderboard

Use rb-compare to aggregate multiple submissions into a deterministic leaderboard:

./rb_compare.py --truth ../../benchmark/truth/public.json --submissions sub1.json sub2.json --output ../../benchmark/leaderboard.json --text
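
A deterministic leaderboard order can be produced with a stable composite sort key, so repeated runs over the same submissions always yield the same ranking. A minimal sketch; the `name`/`f1` field names are illustrative, not rb-compare's actual output schema:

```python
def rank(entries):
    """Sort submissions by F1 descending, breaking ties by submission
    name ascending, so the ordering is deterministic across runs.
    `entries` is assumed to be a list of dicts with "name" and "f1" keys.
    """
    return sorted(entries, key=lambda e: (-e["f1"], e["name"]))
```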

Output

  • text (default): short human-readable summary.
  • json: deterministic JSON with top-level metrics and per-case breakdown.
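
Deterministic JSON typically means fixing key order and whitespace at serialization time, so identical inputs produce byte-identical output. A sketch of that idea, not the tool's actual code:

```python
import json

def dump_deterministic(metrics):
    """Serialize results with sorted keys and fixed separators so the
    same metrics dict always produces the same bytes (a sketch)."""
    return json.dumps(metrics, sort_keys=True, indent=2, separators=(",", ": "))
```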

Tests

python -m unittest tests/test_scoring.py

Explainability tiers (task 513-009) are covered by test_explainability_tiers in tests/test_scoring.py.

Notes

  • Predictions for sinks not present in truth count as false positives (strict posture).
  • Truth sinks with label unknown are ignored for FN/FP counting.
  • Explainability tiering: 0=no context; 1=path>=2 nodes; 2=entry + path>=3; 3=guards present.
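
The tiering rule in the last bullet can be sketched as a small function. The prediction field names (`path`, `entry`, `guards`) are assumptions about the submission shape, not the schema's actual keys:

```python
def explainability_tier(prediction):
    """Map a prediction's evidence to a 0-3 explainability tier per the
    rule above. Field names are assumed for illustration."""
    path = prediction.get("path") or []
    if prediction.get("guards"):
        return 3  # guard conditions present
    if prediction.get("entry") and len(path) >= 3:
        return 2  # entry point plus a path of >= 3 nodes
    if len(path) >= 2:
        return 1  # a path of >= 2 nodes
    return 0      # no supporting context
```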