# rb-score
Deterministic scorer for the reachability benchmark.
## What it does
- Validates submissions against `schemas/submission.schema.json` and truth against `schemas/truth.schema.json`.
- Computes precision/recall/F1 (micro, sink-level); see the sketch after this list.
- Computes explainability score per prediction (0–3) and averages it.
- Checks duplicate predictions for determinism (inconsistent duplicates lower the determinism rate).
- Surfaces runtime metadata from the submission (the `run` block).
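
A minimal sketch of the micro, sink-level metric described above. The sink-id/label representation and the label values (`"reachable"`, `"unknown"`) are assumptions for illustration, not the actual schema:

```python
from typing import Dict, Set

def micro_prf1(truth: Dict[str, str], predicted: Set[str]) -> Dict[str, float]:
    """Micro, sink-level precision/recall/F1.

    `truth` maps sink id -> label; `predicted` is the set of sink ids a
    submission claims are reachable. Labels here are illustrative only.
    """
    reachable = {s for s, label in truth.items() if label == "reachable"}
    unknown = {s for s, label in truth.items() if label == "unknown"}

    tp = len(predicted & reachable)
    # Predicted sinks absent from truth still count as FP (strict posture);
    # sinks labelled "unknown" are ignored, per the Notes section.
    fp = len(predicted - reachable - unknown)
    fn = len(reachable - predicted)

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```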
## Install (offline-friendly)
```bash
python -m pip install -r requirements.txt
```
## Usage
```bash
./rb_score.py --truth ../../benchmark/truth/public.json --submission ../../benchmark/submissions/sample.json --format json
```
## Compare / leaderboard
Use rb-compare to aggregate multiple submissions into a deterministic leaderboard:
```bash
./rb_compare.py --truth ../../benchmark/truth/public.json --submissions sub1.json sub2.json --output ../../benchmark/leaderboard.json --text
```
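
A minimal sketch of the kind of deterministic ordering this aims for: sort on a primary metric and break ties on a stable key, so the same inputs always produce the same leaderboard regardless of argument order. The metric names used here are assumptions:

```python
import json
from typing import Dict, List

def build_leaderboard(results: Dict[str, Dict[str, float]]) -> List[dict]:
    """results maps submission name -> metrics, e.g. {"f1": 0.82}.

    Sorting by (-f1, name) makes the ordering independent of input order,
    so repeated runs over the same submissions are reproducible.
    """
    ordered = sorted(results.items(), key=lambda kv: (-kv[1].get("f1", 0.0), kv[0]))
    return [{"rank": i + 1, "submission": name, **metrics}
            for i, (name, metrics) in enumerate(ordered)]

if __name__ == "__main__":
    board = build_leaderboard({
        "sub2.json": {"f1": 0.75},
        "sub1.json": {"f1": 0.82},
    })
    print(json.dumps(board, indent=2, sort_keys=True))
```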
## Output
- `text` (default): short, human-readable summary.
- `json`: deterministic JSON with top-level metrics and a per-case breakdown (see the note below).
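
For context, one common way to keep JSON output byte-for-byte deterministic is to fix key order and layout at serialization time; whether rb_score.py does exactly this is an assumption:

```python
import json

def dump_report(report: dict) -> str:
    # Sorted keys and a fixed layout mean identical inputs serialize to identical bytes.
    return json.dumps(report, sort_keys=True, indent=2)
```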
## Tests
```bash
python -m unittest tests/test_scoring.py
```
Explainability tiers (task 513-009) are covered by `test_explainability_tiers` in `tests/test_scoring.py`.
## Notes
- Predictions for sinks not present in truth count as false positives (strict posture).
- Truth sinks with label `unknown` are ignored for FN/FP counting.
- Explainability tiering: 0 = no context; 1 = path of >= 2 nodes; 2 = entry plus path of >= 3 nodes; 3 = guards present (sketched below).
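
A minimal sketch of that tiering, reading the rules above literally. The field names (`path`, `entry`, `guards`) and the question of whether higher tiers also require the lower-tier conditions are assumptions, not the scorer's actual logic:

```python
def explainability_tier(prediction: dict) -> int:
    """Assign the 0-3 tier described in the notes (field names illustrative)."""
    path = prediction.get("path") or []
    if prediction.get("guards"):
        # Read literally, guards alone place a prediction in the top tier;
        # the real scorer may additionally require the tier-2 conditions.
        return 3
    if prediction.get("entry") and len(path) >= 3:
        return 2
    if len(path) >= 2:
        return 1
    return 0

# Example: a bare path of two nodes earns tier 1.
assert explainability_tier({"path": ["source", "sink"]}) == 1
```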