Consolidation of some modules, localization fixes, product advisories work, QA work
29  src/Tools/StellaOps.Bench/AGENTS.md  (new file)
@@ -0,0 +1,29 @@
# Benchmarks Guild Charter

## Mission
Design and maintain deterministic benchmark suites that measure StellaOps performance (queue throughput, cache efficiency, API latency) to guard SLOs and capacity plans. Benchmarks must mirror production-like workloads yet remain reproducible for local and CI runs.

## Scope
- `src/Bench/StellaOps.Bench/**` benchmark harnesses, datasets, and result reporters.
- ImpactIndex/Scheduler/Scanner/Policy Engine workload simulations referenced in tasks.
- Benchmark configuration and warm-up scripts used by DevOps for regression tracking.
- Documentation of benchmark methodology and expected baseline metrics.
- Determinism bench harness lives at `Determinism/` with optional reachability hashing; CI wrapper at `.gitea/scripts/test/determinism-run.sh` (threshold via `BENCH_DETERMINISM_THRESHOLD`). Include feeds via `DET_EXTRA_INPUTS`; optional reachability hashes via `DET_REACH_GRAPHS`/`DET_REACH_RUNTIME`.

## Required Reading
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/scanner/architecture.md` (Scanner throughput metrics)
- `docs/modules/scheduler/architecture.md` (ImpactIndex & planner loops)
- `docs/modules/policy/architecture.md` (evaluation pipeline)
- `docs/modules/telemetry/architecture.md` (metrics naming, sampling policies)
- `docs/modules/telemetry/guides/metrics-and-slos.md` (once published)
- Existing benchmark notes in `docs/benchmarks/` and any sprint-specific design docs referenced by TASKS.

## Working Agreement
1. **State sync**: mark tasks `DOING`/`DONE` in both the corresponding sprint file `docs/implplan/SPRINT_*.md` and `src/Bench/StellaOps.Bench/TASKS.md` before/after work.
2. **Baseline references**: link commits/results for baseline metrics; update docs when targets shift.
3. **Deterministic harnesses**: avoid randomness without explicit seeding; ensure benchmarks run offline with local fixtures.
4. **Safety**: guard against resource exhaustion: cap concurrency, add cleanup/finalizers, ensure containerised runs have limits.
5. **Telemetry integration**: export metrics via OpenTelemetry/Metrics APIs; coordinate with DevOps on dashboards/alerts.
6. **Cross-guild coordination**: notify impacted component guilds when benchmarks uncover regressions; file follow-up issues with actionable data.
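The CI wrapper above is driven entirely by environment variables. A minimal local invocation might look like the sketch below; the script path and variable names come from the charter, but the values and the feed-bundle path are illustrative assumptions.

```shell
# Illustrative values; only the variable names and wrapper path come from the charter.
export BENCH_DETERMINISM_THRESHOLD=0.95
export DET_EXTRA_INPUTS="src/Bench/StellaOps.Bench/Determinism/inputs/feeds/feed-bundle.tar.gz"
export DET_REACH_GRAPHS="src/Bench/StellaOps.Bench/Determinism/inputs/graphs/*.json"
echo "threshold=${BENCH_DETERMINISM_THRESHOLD}"
# .gitea/scripts/test/determinism-run.sh   # run from the repo root
```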
2  src/Tools/StellaOps.Bench/Determinism/.gitignore  (new file, vendored)
@@ -0,0 +1,2 @@
results/
__pycache__/
46  src/Tools/StellaOps.Bench/Determinism/README.md  (new file)
@@ -0,0 +1,46 @@
# Determinism Benchmark Harness (BENCH-DETERMINISM-401-057)

Location: `src/Bench/StellaOps.Bench/Determinism`

## What it does
- Runs a deterministic, offline-friendly benchmark that hashes scanner outputs for paired SBOM/VEX inputs.
- Produces `results.csv`, `inputs.sha256`, and `summary.json` capturing the determinism rate.
- Ships with a built-in mock scanner so CI/offline runs do not need external tools.

## Quick start
```sh
cd src/Bench/StellaOps.Bench/Determinism
python3 run_bench.py --shuffle --runs 3 --output out
```

Outputs land in `out/`:
- `results.csv` – per-run hashes (mode/run/scanner)
- `inputs.sha256` – deterministic manifest of SBOM/VEX/config inputs
- `summary.json` – aggregate determinism rate

## Inputs
- SBOMs: `inputs/sboms/*.json` (sample SPDX provided)
- VEX: `inputs/vex/*.json` (sample OpenVEX provided)
- Scanner config: `configs/scanners.json` (defaults to the built-in mock scanner)
- Sample manifest: `inputs/inputs.sha256` covers the bundled sample SBOM/VEX/config for quick offline verification; regenerate it when inputs change.

## Adding real scanners
1. Add an entry to `configs/scanners.json` with `kind: "command"` and a command array, e.g.:
```json
{
  "name": "scannerX",
  "kind": "command",
  "command": ["python", "../../scripts/scannerX_wrapper.py", "{sbom}", "{vex}"]
}
```
2. Commands must write JSON with a top-level `findings` array; each finding should include `purl`, `vulnerability`, `status`, and `base_score`.
3. Keep commands offline and deterministic; pin any feeds to local bundles before running.
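The command contract above can be sketched as a tiny wrapper script. Only the output shape (a top-level `findings` array whose entries carry `purl`, `vulnerability`, `status`, and `base_score`) comes from this README; the script name and stub finding below are hypothetical.

```python
#!/usr/bin/env python3
# Hypothetical wrapper sketch for a "command" scanner entry. Only the output
# contract (top-level "findings" with purl/vulnerability/status/base_score)
# is taken from the README; the stub data is illustrative.
import json
import sys


def scan(sbom_path: str, vex_path: str) -> dict:
    # A real wrapper would invoke the scanner against sbom_path/vex_path here.
    return {
        "findings": [
            {
                "purl": "pkg:pypi/demo-lib@1.0.0",
                "vulnerability": "CVE-2024-0001",
                "status": "affected",
                "base_score": 7.5,
            }
        ]
    }


if __name__ == "__main__":
    args = sys.argv[1:] or ["sbom.json", "vex.json"]
    # The harness captures stdout and parses it as JSON.
    print(json.dumps(scan(args[0], args[1]), sort_keys=True))
```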
## Determinism expectations
- Canonical and shuffled runs should yield identical hashes per scanner/SBOM/VEX tuple.
- CI should treat `determinism_rate < 0.95` as a failure once wired into workflows.

## Maintenance
- Tests live in `tests/` and cover shuffle stability and manifest generation.
- Update `docs/benchmarks/signals/bench-determinism.md` when inputs/outputs change.
- Mirror task status in `docs/implplan/SPRINT_0512_0001_0001_bench.md` and `src/Bench/StellaOps.Bench/TASKS.md`.
0  src/Tools/StellaOps.Bench/Determinism/__init__.py  (new file)

12  src/Tools/StellaOps.Bench/Determinism/configs/scanners.json  (new file)
@@ -0,0 +1,12 @@
{
  "scanners": [
    {
      "name": "mock",
      "kind": "mock",
      "description": "Deterministic mock scanner used for CI/offline parity",
      "parameters": {
        "severity_bias": 0.25
      }
    }
  ]
}
15  src/Tools/StellaOps.Bench/Determinism/inputs/feeds/README.md  (new file)
@@ -0,0 +1,15 @@
# Frozen feed bundle placeholder

Place hashed feed bundles here for determinism runs. Example:

```
# build feed bundle (offline)
# touch feed-bundle.tar.gz
sha256sum feed-bundle.tar.gz > feeds.sha256
```

Then run the wrapper with:
```
DET_EXTRA_INPUTS="src/Bench/StellaOps.Bench/Determinism/inputs/feeds/feed-bundle.tar.gz" \
BENCH_DETERMINISM_THRESHOLD=0.95 scripts/bench/determinism-run.sh
```
@@ -0,0 +1,11 @@
{
  "graph": {
    "nodes": [
      {"id": "pkg:pypi/demo-lib@1.0.0", "type": "package"},
      {"id": "pkg:generic/demo-cli@0.4.2", "type": "package"}
    ],
    "edges": [
      {"from": "pkg:generic/demo-cli@0.4.2", "to": "pkg:pypi/demo-lib@1.0.0", "type": "depends_on"}
    ]
  }
}
@@ -0,0 +1,3 @@
577f932bbb00dbd596e46b96d5fbb9561506c7730c097e381a6b34de40402329  inputs/sboms/sample-spdx.json
1b54ce4087800cfe1d5ac439c10a1f131b7476b2093b79d8cd0a29169314291f  inputs/vex/sample-openvex.json
38453c9c0e0a90d22d7048d3201bf1b5665eb483e6682db1a7112f8e4f4fa1e6  configs/scanners.json
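Manifests in this format are regenerated with `sha256sum` and checked with `sha256sum -c`. A minimal sketch, using a placeholder directory and file rather than the real inputs:

```shell
# Sketch: regenerate a sha256 manifest and verify it; demo_inputs/sample.json
# is a placeholder, not one of the real bundled inputs.
mkdir -p demo_inputs
printf '{"sample": true}\n' > demo_inputs/sample.json
( cd demo_inputs && sha256sum sample.json > inputs.sha256 && sha256sum -c inputs.sha256 )
```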
@@ -0,0 +1 @@
{"event":"call","func":"demo","module":"demo-lib","ts":"2025-11-01T00:00:00Z"}
@@ -0,0 +1,16 @@
{
  "spdxVersion": "SPDX-3.0",
  "documentNamespace": "https://stellaops.local/spdx/sample-spdx",
  "packages": [
    {
      "name": "demo-lib",
      "versionInfo": "1.0.0",
      "purl": "pkg:pypi/demo-lib@1.0.0"
    },
    {
      "name": "demo-cli",
      "versionInfo": "0.4.2",
      "purl": "pkg:generic/demo-cli@0.4.2"
    }
  ]
}
@@ -0,0 +1,19 @@
{
  "version": "1.0",
  "statements": [
    {
      "vulnerability": "CVE-2024-0001",
      "products": ["pkg:pypi/demo-lib@1.0.0"],
      "status": "affected",
      "justification": "known_exploited",
      "timestamp": "2025-11-01T00:00:00Z"
    },
    {
      "vulnerability": "CVE-2023-9999",
      "products": ["pkg:generic/demo-cli@0.4.2"],
      "status": "not_affected",
      "justification": "vulnerable_code_not_present",
      "timestamp": "2025-10-28T00:00:00Z"
    }
  ]
}
58  src/Tools/StellaOps.Bench/Determinism/offline_run.sh  (new file)
@@ -0,0 +1,58 @@
#!/usr/bin/env bash
set -euo pipefail

# Offline runner for determinism (and optional reachability) benches.
# Usage: ./offline_run.sh [--inputs DIR] [--output DIR] [--runs N] [--threshold FLOAT] [--no-verify]
# Defaults: inputs=offline/inputs, output=offline/results, runs=10, threshold=0.95, verify manifests on.

ROOT="$(cd "$(dirname "$0")" && pwd)"
INPUT_DIR="offline/inputs"
OUTPUT_DIR="offline/results"
RUNS=10
THRESHOLD=0.95
VERIFY=1

while [[ $# -gt 0 ]]; do
  case "$1" in
    --inputs) INPUT_DIR="$2"; shift 2;;
    --output) OUTPUT_DIR="$2"; shift 2;;
    --runs) RUNS="$2"; shift 2;;
    --threshold) THRESHOLD="$2"; shift 2;;
    --no-verify) VERIFY=0; shift 1;;
    *) echo "Unknown arg: $1"; exit 1;;
  esac
done

# Enter the harness directory before creating OUTPUT_DIR, so a relative
# output path lands where run_bench.py will actually write to.
cd "$ROOT"
mkdir -p "$OUTPUT_DIR"

if [ "$VERIFY" -eq 1 ]; then
  if [ -f "$INPUT_DIR/inputs.sha256" ]; then
    sha256sum -c "$INPUT_DIR/inputs.sha256"
  fi
  if [ -f "$INPUT_DIR/dataset.sha256" ]; then
    sha256sum -c "$INPUT_DIR/dataset.sha256"
  fi
fi

python run_bench.py \
  --sboms "$INPUT_DIR"/sboms/*.json \
  --vex "$INPUT_DIR"/vex/*.json \
  --config "$INPUT_DIR"/scanners.json \
  --runs "$RUNS" \
  --shuffle \
  --output "$OUTPUT_DIR"

det_rate=$(python -c "import json;print(json.load(open('$OUTPUT_DIR/summary.json'))['determinism_rate'])")
awk -v rate="$det_rate" -v th="$THRESHOLD" 'BEGIN {if (rate+0 < th+0) {printf("determinism_rate %s is below threshold %s\n", rate, th); exit 1}}'

graph_glob="$INPUT_DIR/graphs/*.json"
runtime_glob="$INPUT_DIR/runtime/*.ndjson"
# Intentionally unquoted so the glob expands; ls fails when no graphs exist.
if ls $graph_glob >/dev/null 2>&1; then
  python run_reachability.py \
    --graphs "$graph_glob" \
    --runtime "$runtime_glob" \
    --output "$OUTPUT_DIR"
fi

echo "Offline run complete -> $OUTPUT_DIR"
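The threshold gate in the script compares the measured rate against the threshold numerically via awk, since `[ ... ]` cannot compare floats. Isolated as a sketch with illustrative values:

```shell
# The awk gate from offline_run.sh, isolated; rate/threshold values are illustrative.
det_rate=0.97
threshold=0.95
awk -v rate="$det_rate" -v th="$threshold" \
  'BEGIN {if (rate+0 < th+0) {printf("determinism_rate %s is below threshold %s\n", rate, th); exit 1} print "gate passed"}'
```

`rate+0`/`th+0` force numeric comparison, so string inputs like `"0.97"` compare correctly.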
309  src/Tools/StellaOps.Bench/Determinism/run_bench.py  (new file)
@@ -0,0 +1,309 @@
#!/usr/bin/env python3
"""
Determinism benchmark harness for BENCH-DETERMINISM-401-057.

- Offline by default; uses a built-in mock scanner that derives findings from
  SBOM and VEX documents without external calls.
- Produces deterministic hashes for canonical and (optionally) shuffled inputs.
- Writes `results.csv` and `inputs.sha256` to the chosen output directory.
"""
from __future__ import annotations

import argparse
import csv
import hashlib
import json
import random
import subprocess
from copy import deepcopy
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict, Iterable, List, Sequence


@dataclass(frozen=True)
class Scanner:
    name: str
    kind: str  # "mock" or "command"
    command: Sequence[str] | None = None
    parameters: Dict[str, Any] | None = None


# ---------- utility helpers ----------

def sha256_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def load_json(path: Path) -> Any:
    with path.open("r", encoding="utf-8") as f:
        return json.load(f)


def dump_canonical(obj: Any) -> bytes:
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")


def shuffle_obj(obj: Any, rng: random.Random) -> Any:
    if isinstance(obj, list):
        shuffled = [shuffle_obj(item, rng) for item in obj]
        rng.shuffle(shuffled)
        return shuffled
    if isinstance(obj, dict):
        items = list(obj.items())
        rng.shuffle(items)
        return {k: shuffle_obj(v, rng) for k, v in items}
    return obj  # primitive


def stable_int(value: str, modulo: int) -> int:
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()
    return int(digest[:16], 16) % modulo


# ---------- mock scanner ----------

def run_mock_scanner(sbom: Dict[str, Any], vex: Dict[str, Any], parameters: Dict[str, Any] | None) -> Dict[str, Any]:
    severity_bias = float(parameters.get("severity_bias", 0.0)) if parameters else 0.0
    packages = sbom.get("packages", [])
    statements = vex.get("statements", [])

    findings: List[Dict[str, Any]] = []
    for stmt in statements:
        vuln = stmt.get("vulnerability")
        status = stmt.get("status", "unknown")
        for product in stmt.get("products", []):
            score_seed = stable_int(f"{product}:{vuln}", 600)
            score = (score_seed / 10.0) + severity_bias
            findings.append(
                {
                    "purl": product,
                    "vulnerability": vuln,
                    "status": status,
                    "base_score": round(score, 1),
                }
            )

    # Add packages with no statements as informational rows
    seen_products = {f["purl"] for f in findings}
    for pkg in packages:
        purl = pkg.get("purl")
        if purl and purl not in seen_products:
            findings.append(
                {
                    "purl": purl,
                    "vulnerability": "NONE",
                    "status": "unknown",
                    "base_score": 0.0,
                }
            )

    findings.sort(key=lambda f: (f.get("purl", ""), f.get("vulnerability", "")))
    return {"scanner": "mock", "findings": findings}


# ---------- runners ----------

def run_scanner(scanner: Scanner, sbom_path: Path, vex_path: Path, sbom_obj: Dict[str, Any], vex_obj: Dict[str, Any]) -> Dict[str, Any]:
    if scanner.kind == "mock":
        return run_mock_scanner(sbom_obj, vex_obj, scanner.parameters)

    if scanner.kind == "command":
        if scanner.command is None:
            raise ValueError(f"Scanner {scanner.name} missing command")
        cmd = [part.format(sbom=sbom_path, vex=vex_path) for part in scanner.command]
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return json.loads(result.stdout)

    raise ValueError(f"Unsupported scanner kind: {scanner.kind}")


def canonical_hash(scanner_name: str, sbom_path: Path, vex_path: Path, normalized_findings: List[Dict[str, Any]]) -> str:
    payload = {
        "scanner": scanner_name,
        "sbom": sbom_path.name,
        "vex": vex_path.name,
        "findings": normalized_findings,
    }
    return sha256_bytes(dump_canonical(payload))


def normalize_output(raw: Dict[str, Any]) -> List[Dict[str, Any]]:
    findings = raw.get("findings", [])
    normalized: List[Dict[str, Any]] = []
    for entry in findings:
        normalized.append(
            {
                "purl": entry.get("purl", ""),
                "vulnerability": entry.get("vulnerability", ""),
                "status": entry.get("status", "unknown"),
                "base_score": float(entry.get("base_score", 0.0)),
            }
        )
    normalized.sort(key=lambda f: (f["purl"], f["vulnerability"]))
    return normalized


def write_results(results: List[Dict[str, Any]], output_csv: Path) -> None:
    output_csv.parent.mkdir(parents=True, exist_ok=True)
    fieldnames = ["scanner", "sbom", "vex", "mode", "run", "hash", "finding_count"]
    with output_csv.open("w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for row in results:
            writer.writerow(row)


def write_inputs_manifest(inputs: List[Path], manifest_path: Path) -> None:
    manifest_path.parent.mkdir(parents=True, exist_ok=True)
    lines: List[str] = []
    for path in sorted(inputs, key=lambda p: str(p)):
        digest = sha256_bytes(path.read_bytes())
        try:
            rel_path = path.resolve().relative_to(Path.cwd().resolve())
        except ValueError:
            rel_path = path.resolve()
        lines.append(f"{digest}  {rel_path.as_posix()}\n")
    with manifest_path.open("w", encoding="utf-8") as f:
        f.writelines(lines)


def load_scanners(config_path: Path) -> List[Scanner]:
    cfg = load_json(config_path)
    scanners = []
    for entry in cfg.get("scanners", []):
        scanners.append(
            Scanner(
                name=entry.get("name", "unknown"),
                kind=entry.get("kind", "mock"),
                command=entry.get("command"),
                parameters=entry.get("parameters", {}),
            )
        )
    return scanners


def run_bench(
    sboms: Sequence[Path],
    vexes: Sequence[Path],
    scanners: Sequence[Scanner],
    runs: int,
    shuffle: bool,
    output_dir: Path,
    manifest_extras: Sequence[Path] | None = None,
) -> List[Dict[str, Any]]:
    if len(sboms) != len(vexes):
        raise ValueError("SBOM/VEX counts must match for pairwise runs")

    modes = ["canonical"] + (["shuffled"] if shuffle else [])
    results: List[Dict[str, Any]] = []
    for sbom_path, vex_path in zip(sboms, vexes):
        sbom_obj = load_json(sbom_path)
        vex_obj = load_json(vex_path)

        for scanner in scanners:
            for run in range(runs):
                for mode in modes:
                    sbom_candidate = deepcopy(sbom_obj)
                    vex_candidate = deepcopy(vex_obj)
                    if mode == "shuffled":
                        seed = sha256_bytes(f"{sbom_path}:{vex_path}:{run}:{scanner.name}".encode("utf-8"))
                        rng = random.Random(int(seed[:16], 16))
                        sbom_candidate = shuffle_obj(sbom_candidate, rng)
                        vex_candidate = shuffle_obj(vex_candidate, rng)

                    raw_output = run_scanner(scanner, sbom_path, vex_path, sbom_candidate, vex_candidate)
                    normalized = normalize_output(raw_output)
                    results.append(
                        {
                            "scanner": scanner.name,
                            "sbom": sbom_path.name,
                            "vex": vex_path.name,
                            "mode": mode,
                            "run": run,
                            "hash": canonical_hash(scanner.name, sbom_path, vex_path, normalized),
                            "finding_count": len(normalized),
                        }
                    )
    output_dir.mkdir(parents=True, exist_ok=True)
    return results


def compute_determinism_rate(results: List[Dict[str, Any]]) -> float:
    by_key: Dict[tuple, List[str]] = {}
    for row in results:
        key = (row["scanner"], row["sbom"], row["vex"], row["mode"])
        by_key.setdefault(key, []).append(row["hash"])

    stable = 0
    total = 0
    for hashes in by_key.values():
        total += len(hashes)
        if len(set(hashes)) == 1:
            stable += len(hashes)
    return stable / total if total else 0.0


# ---------- CLI ----------

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Determinism benchmark harness")
    parser.add_argument("--sboms", nargs="*", default=["inputs/sboms/*.json"], help="Glob(s) for SBOM inputs")
    parser.add_argument("--vex", nargs="*", default=["inputs/vex/*.json"], help="Glob(s) for VEX inputs")
    parser.add_argument("--config", default="configs/scanners.json", help="Scanner config JSON path")
    parser.add_argument("--runs", type=int, default=10, help="Runs per scanner/SBOM pair")
    parser.add_argument("--shuffle", action="store_true", help="Enable shuffled-order runs")
    parser.add_argument("--output", default="results", help="Output directory")
    parser.add_argument(
        "--manifest-extra",
        nargs="*",
        default=[],
        help="Extra files (or globs) to include in inputs.sha256 (e.g., frozen feeds)",
    )
    return parser.parse_args()


def expand_globs(patterns: Iterable[str]) -> List[Path]:
    paths: List[Path] = []
    for pattern in patterns:
        if not pattern:
            continue
        for path in sorted(Path().glob(pattern)):
            if path.is_file():
                paths.append(path)
    return paths


def main() -> None:
    args = parse_args()
    sboms = expand_globs(args.sboms)
    vexes = expand_globs(args.vex)
    manifest_extras = expand_globs(args.manifest_extra)
    output_dir = Path(args.output)

    if not sboms or not vexes:
        raise SystemExit("No SBOM or VEX inputs found; supply --sboms/--vex globs")

    scanners = load_scanners(Path(args.config))
    if not scanners:
        raise SystemExit("Scanner config has no entries")

    results = run_bench(sboms, vexes, scanners, args.runs, args.shuffle, output_dir, manifest_extras)

    results_csv = output_dir / "results.csv"
    write_results(results, results_csv)

    manifest_inputs = sboms + vexes + [Path(args.config)] + (manifest_extras or [])
    write_inputs_manifest(manifest_inputs, output_dir / "inputs.sha256")

    determinism = compute_determinism_rate(results)
    summary_path = output_dir / "summary.json"
    summary_path.write_text(json.dumps({"determinism_rate": determinism}, indent=2), encoding="utf-8")

    print(f"Wrote {results_csv} (determinism_rate={determinism:.3f})")


if __name__ == "__main__":
    main()
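A standalone sketch (not part of the harness) of the canonicalisation property `run_bench.py` relies on: `sort_keys` makes dict-key ordering irrelevant to the hash, while list order still changes the bytes, which is why the scanners sort their findings before hashing.

```python
# Sketch of the canonical-hash property: shuffling dict keys does not change
# the digest, but list order does, hence the sorted findings in the harness.
import hashlib
import json
import random


def dump_canonical(obj):
    # Same canonical serialisation as run_bench.py uses.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")


doc = {"b": 1, "a": {"y": 2, "x": 3}}
shuffled_items = list(doc.items())
random.Random(0).shuffle(shuffled_items)
shuffled = dict(shuffled_items)

h1 = hashlib.sha256(dump_canonical(doc)).hexdigest()
h2 = hashlib.sha256(dump_canonical(shuffled)).hexdigest()
assert h1 == h2  # dict-key order never leaks into the hash

# List order does change the bytes, so outputs must be sorted before hashing:
assert dump_canonical([1, 2]) != dump_canonical([2, 1])
```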
94  src/Tools/StellaOps.Bench/Determinism/run_reachability.py  (new file)
@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""
Reachability dataset hash helper for optional BENCH-DETERMINISM reachability runs.
- Computes deterministic hashes for graph JSON and runtime NDJSON inputs.
- Emits `results-reach.csv` and `dataset.sha256` in the chosen output directory.
"""
from __future__ import annotations

import argparse
import csv
import glob
import hashlib
import json
from pathlib import Path
from typing import Iterable, List


def sha256_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def expand_files(patterns: Iterable[str]) -> List[Path]:
    files: List[Path] = []
    for pattern in patterns:
        if not pattern:
            continue
        for path_str in sorted(glob.glob(pattern)):
            path = Path(path_str)
            if path.is_file():
                files.append(path)
    return files


def hash_files(paths: List[Path]) -> List[tuple[str, str]]:
    rows: List[tuple[str, str]] = []
    for path in paths:
        rows.append((path.name, sha256_bytes(path.read_bytes())))
    return rows


def write_manifest(paths: List[Path], manifest_path: Path) -> None:
    lines = []
    for path in sorted(paths, key=lambda p: str(p)):
        digest = sha256_bytes(path.read_bytes())
        try:
            rel = path.resolve().relative_to(Path.cwd().resolve())
        except ValueError:
            rel = path.resolve()
        lines.append(f"{digest}  {rel.as_posix()}\n")
    manifest_path.parent.mkdir(parents=True, exist_ok=True)
    manifest_path.write_text("".join(lines), encoding="utf-8")


def main() -> None:
    parser = argparse.ArgumentParser(description="Reachability dataset hash helper")
    parser.add_argument("--graphs", nargs="*", default=["inputs/graphs/*.json"], help="Glob(s) for graph JSON files")
    parser.add_argument("--runtime", nargs="*", default=["inputs/runtime/*.ndjson", "inputs/runtime/*.ndjson.gz"], help="Glob(s) for runtime NDJSON files")
    parser.add_argument("--output", default="results", help="Output directory")
    args = parser.parse_args()

    graphs = expand_files(args.graphs)
    runtime = expand_files(args.runtime)

    if not graphs:
        raise SystemExit("No graph inputs found; supply --graphs globs")

    output_dir = Path(args.output)
    output_dir.mkdir(parents=True, exist_ok=True)

    dataset_manifest_files = graphs + runtime
    write_manifest(dataset_manifest_files, output_dir / "dataset.sha256")

    csv_path = output_dir / "results-reach.csv"
    fieldnames = ["type", "file", "sha256"]
    with csv_path.open("w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for name, digest in hash_files(graphs):
            writer.writerow({"type": "graph", "file": name, "sha256": digest})
        for name, digest in hash_files(runtime):
            writer.writerow({"type": "runtime", "file": name, "sha256": digest})

    summary = {
        "graphs": len(graphs),
        "runtime": len(runtime),
        "manifest": "dataset.sha256",
    }
    (output_dir / "results-reach.json").write_text(json.dumps(summary, indent=2), encoding="utf-8")

    print(f"Wrote {csv_path} with {len(graphs)} graph(s) and {len(runtime)} runtime file(s)")


if __name__ == "__main__":
    main()
@@ -0,0 +1,61 @@
import sys
from pathlib import Path
from tempfile import TemporaryDirectory
import unittest

# Allow direct import of run_bench from the harness folder
HARNESS_DIR = Path(__file__).resolve().parents[1]
sys.path.insert(0, str(HARNESS_DIR))

import run_bench  # noqa: E402


class DeterminismBenchTests(unittest.TestCase):
    def setUp(self) -> None:
        self.base = HARNESS_DIR
        self.sboms = [self.base / "inputs" / "sboms" / "sample-spdx.json"]
        self.vexes = [self.base / "inputs" / "vex" / "sample-openvex.json"]
        self.scanners = run_bench.load_scanners(self.base / "configs" / "scanners.json")

    def test_canonical_and_shuffled_hashes_match(self):
        with TemporaryDirectory() as tmp:
            out_dir = Path(tmp)
            results = run_bench.run_bench(
                self.sboms,
                self.vexes,
                self.scanners,
                runs=3,
                shuffle=True,
                output_dir=out_dir,
            )
            rate = run_bench.compute_determinism_rate(results)
            self.assertAlmostEqual(rate, 1.0)

            hashes = {(r["scanner"], r["mode"]): r["hash"] for r in results}
            self.assertEqual(len(hashes), 2)

    def test_inputs_manifest_written(self):
        with TemporaryDirectory() as tmp:
            out_dir = Path(tmp)
            extra = Path(tmp) / "feeds.tar.gz"
            extra.write_bytes(b"feed")
            results = run_bench.run_bench(
                self.sboms,
                self.vexes,
                self.scanners,
                runs=1,
                shuffle=False,
                output_dir=out_dir,
                manifest_extras=[extra],
            )
            run_bench.write_results(results, out_dir / "results.csv")
            manifest = out_dir / "inputs.sha256"
            run_bench.write_inputs_manifest(self.sboms + self.vexes + [extra], manifest)
            text = manifest.read_text(encoding="utf-8")
            self.assertIn("sample-spdx.json", text)
            self.assertIn("sample-openvex.json", text)
            self.assertIn("feeds.tar.gz", text)


if __name__ == "__main__":
    unittest.main()
@@ -0,0 +1,33 @@
import sys
from pathlib import Path
from tempfile import TemporaryDirectory
import unittest

HARNESS_DIR = Path(__file__).resolve().parents[1]
sys.path.insert(0, str(HARNESS_DIR))

import run_reachability  # noqa: E402


class ReachabilityBenchTests(unittest.TestCase):
    def setUp(self):
        self.graphs = [HARNESS_DIR / "inputs" / "graphs" / "sample-graph.json"]
        self.runtime = [HARNESS_DIR / "inputs" / "runtime" / "sample-runtime.ndjson"]

    def test_manifest_includes_files(self):
        with TemporaryDirectory() as tmp:
            out_dir = Path(tmp)
            manifest_path = out_dir / "dataset.sha256"
            run_reachability.write_manifest(self.graphs + self.runtime, manifest_path)
            text = manifest_path.read_text(encoding="utf-8")
            self.assertIn("sample-graph.json", text)
            self.assertIn("sample-runtime.ndjson", text)

    def test_hash_files(self):
        hashes = dict(run_reachability.hash_files(self.graphs))
        self.assertIn("sample-graph.json", hashes)
        self.assertEqual(len(hashes), 1)


if __name__ == "__main__":
    unittest.main()
34  src/Tools/StellaOps.Bench/Graph/README.md  (new file)
@@ -0,0 +1,34 @@
# Graph Bench Harness (BENCH-GRAPH-21-001)

Purpose: measure basic graph load/adjacency build and shallow path exploration over deterministic fixtures.

## Fixtures
- Canonical: `samples/graph/graph-40k` (SAMPLES-GRAPH-24-003) with overlay + manifest hashes.
- Legacy interim (still usable for comparisons): `samples/graph/interim/graph-50k` and `graph-100k`.
- Each fixture includes `nodes.ndjson`, `edges.ndjson`, and `manifest.json` with hashes/counts.
- Optional overlay: drop `overlay.ndjson` next to the fixture (or set `overlay.path` in `manifest.json`) to apply extra edges/layers; hashes are captured in results.

## Usage
```bash
python graph_bench.py \
  --fixture ../../../../samples/graph/graph-40k \
  --output results/graph-40k.json \
  --samples 100 \
  --overlay ../../../../samples/graph/graph-40k/overlay.ndjson  # optional
```

Outputs a JSON summary with:
- `nodes`, `edges`
- `build_ms` — time to build adjacency (ms)
- `overlay_ms` — time to apply the overlay (0 when absent), plus counts and SHA under `overlay.*`
- `bfs_ms` — total time for depth-3 BFS over sampled nodes
- `avg_reach_3`, `max_reach_3` — nodes reached within depth 3
- `manifest` — copied from the fixture for traceability

Determinism:
- Sorted node ids, fixed sample size, stable ordering, no randomness beyond fixture content.
- No network access; pure local file reads.

Next steps:
- Keep results in sync with canonical fixture hashes; if the overlay schema changes, regenerate the fixture + manifests.
- Add p95/median latency over multiple runs and optional concurrency knobs.
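The `avg_reach_3`/`max_reach_3` metrics above count nodes reachable within depth 3. A minimal sketch of that reach computation, assuming the adjacency shape used by `graph_bench.py` (a dict of neighbor lists); the toy graph is illustrative:

```python
# Sketch of a depth-bounded reach count over an adjacency dict, as used for
# the avg_reach_3/max_reach_3 metrics. The four-node chain is a toy example.
from collections import deque


def reach_within(adjacency, start, depth):
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # depth budget exhausted along this path
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen) - 1  # exclude the start node itself


adjacency = {"a": ["b"], "b": ["c"], "c": ["d"], "d": []}
print(reach_within(adjacency, "a", 3))  # → 3
```

Averaging this count over the sampled start nodes gives `avg_reach_3`; the maximum gives `max_reach_3`.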
Binary file not shown.
197  src/Tools/StellaOps.Bench/Graph/graph_bench.py  (new file)
@@ -0,0 +1,197 @@
#!/usr/bin/env python3
"""
Graph benchmark harness (BENCH-GRAPH-21-001)

Reads deterministic NDJSON fixtures (nodes/edges) and computes basic metrics plus
lightweight path queries to exercise adjacency building. Uses only local files,
no network, and fixed seeds for reproducibility.
"""
from __future__ import annotations

import argparse
import hashlib
import json
import time
from pathlib import Path
from typing import Dict, List, Optional, Tuple


def load_ndjson(path: Path):
    with path.open("r", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)


def build_graph(nodes_path: Path, edges_path: Path) -> Tuple[Dict[str, List[str]], int]:
    adjacency: Dict[str, List[str]] = {}
    node_set = set()
    for n in load_ndjson(nodes_path):
        node_set.add(n["id"])
        adjacency.setdefault(n["id"], [])
    edge_count = 0
    for e in load_ndjson(edges_path):
        source = e["source"]
        target = e["target"]
        # Only keep edges where nodes exist
        if source in adjacency and target in adjacency:
            adjacency[source].append(target)
            edge_count += 1
    # sort neighbors for determinism
    for v in adjacency.values():
        v.sort()
    return adjacency, edge_count


def _sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def apply_overlay(adjacency: Dict[str, List[str]], overlay_path: Path) -> Tuple[int, int]:
    """
    Apply overlay edges to the adjacency map.

    Overlay file format (NDJSON): {"source": "nodeA", "target": "nodeB"}
    Unknown keys are ignored. New nodes are added with empty adjacency to keep
    BFS deterministic. Duplicate edges are de-duplicated.
    """
    if not overlay_path.exists():
        return 0, 0

    added_edges = 0
    introduced_nodes = set()
    for record in load_ndjson(overlay_path):
        source = record.get("source") or record.get("from")
        target = record.get("target") or record.get("to")
        if not source or not target:
            continue

        if source not in adjacency:
            adjacency[source] = []
            introduced_nodes.add(source)
        if target not in adjacency:
            adjacency[target] = []
            introduced_nodes.add(target)

        if target not in adjacency[source]:
            adjacency[source].append(target)
            added_edges += 1

    # keep neighbor ordering deterministic
    for v in adjacency.values():
        v.sort()

    return added_edges, len(introduced_nodes)


def bfs_limited(adjacency: Dict[str, List[str]], start: str, max_depth: int = 3) -> int:
    visited = {start}
    frontier = [start]
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            for nbr in adjacency.get(node, []):
                if nbr not in visited:
                    visited.add(nbr)
                    next_frontier.append(nbr)
        if not next_frontier:
            break
        frontier = next_frontier
    return len(visited)


def resolve_overlay_path(fixture_dir: Path, manifest: dict, explicit: Optional[Path]) -> Optional[Path]:
    if explicit:
        return explicit.resolve()

    overlay_manifest = manifest.get("overlay") if isinstance(manifest, dict) else None
    if isinstance(overlay_manifest, dict):
        path_value = overlay_manifest.get("path")
        if path_value:
            candidate = Path(path_value)
            return candidate if candidate.is_absolute() else (fixture_dir / candidate)

    default = fixture_dir / "overlay.ndjson"
    return default if default.exists() else None


def run_bench(fixture_dir: Path, sample_size: int = 100, overlay_path: Optional[Path] = None) -> dict:
    nodes_path = fixture_dir / "nodes.ndjson"
    edges_path = fixture_dir / "edges.ndjson"
    manifest_path = fixture_dir / "manifest.json"

    manifest = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    overlay_resolved = resolve_overlay_path(fixture_dir, manifest, overlay_path)

    t0 = time.perf_counter()
    adjacency, edge_count = build_graph(nodes_path, edges_path)
    overlay_added = 0
    overlay_nodes = 0
    overlay_hash = None
    overlay_ms = 0.0

    if overlay_resolved:
        t_overlay = time.perf_counter()
        overlay_added, overlay_nodes = apply_overlay(adjacency, overlay_resolved)
        overlay_ms = (time.perf_counter() - t_overlay) * 1000
        overlay_hash = _sha256(overlay_resolved)
    build_ms = (time.perf_counter() - t0) * 1000

    # deterministic sample: first N node ids sorted
    node_ids = sorted(adjacency.keys())[:sample_size]
    reach_counts = []
    t1 = time.perf_counter()
    for node_id in node_ids:
        reach_counts.append(bfs_limited(adjacency, node_id, max_depth=3))
    bfs_ms = (time.perf_counter() - t1) * 1000

    avg_reach = sum(reach_counts) / len(reach_counts) if reach_counts else 0
    max_reach = max(reach_counts) if reach_counts else 0

    return {
        "fixture": fixture_dir.name,
        "nodes": len(adjacency),
        "edges": edge_count + overlay_added,
        "build_ms": round(build_ms, 2),
        "overlay_ms": round(overlay_ms, 2),
        "bfs_ms": round(bfs_ms, 2),
        "bfs_samples": len(node_ids),
        "avg_reach_3": round(avg_reach, 2),
        "max_reach_3": max_reach,
        "manifest": manifest,
        "overlay": {
            "applied": overlay_resolved is not None,
            "added_edges": overlay_added,
            "introduced_nodes": overlay_nodes,
            "path": str(overlay_resolved) if overlay_resolved else None,
            "sha256": overlay_hash,
        },
    }


def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--fixture", required=True, help="Path to fixture directory (nodes.ndjson, edges.ndjson)")
    parser.add_argument("--output", required=True, help="Path to write results JSON")
    parser.add_argument("--samples", type=int, default=100, help="Number of starting nodes to sample deterministically")
    parser.add_argument("--overlay", help="Optional overlay NDJSON path; defaults to overlay.ndjson next to fixture or manifest overlay.path")
    args = parser.parse_args()

    fixture_dir = Path(args.fixture).resolve()
    out_path = Path(args.output).resolve()
    out_path.parent.mkdir(parents=True, exist_ok=True)

    explicit_overlay = Path(args.overlay).resolve() if args.overlay else None
    result = run_bench(fixture_dir, sample_size=args.samples, overlay_path=explicit_overlay)
    out_path.write_text(json.dumps(result, indent=2, sort_keys=True))
    print(f"Wrote results to {out_path}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
44
src/Tools/StellaOps.Bench/Graph/results/graph-40k.json
Normal file
@@ -0,0 +1,44 @@
{
  "avg_reach_3": 14.32,
  "bfs_ms": 0.8,
  "bfs_samples": 100,
  "build_ms": 5563.14,
  "edges": 100171,
  "fixture": "graph-40k",
  "manifest": {
    "counts": {
      "edges": 100071,
      "nodes": 40000,
      "overlays": {
        "policy.overlay.v1": 100
      }
    },
    "generated_at": "2025-11-22T00:00:00Z",
    "hashes": {
      "edges_ndjson_sha256": "143a294446f46ffa273846e821f83fd5e5023aea2cf74947ba7ccaeeab7ceba4",
      "nodes_ndjson_sha256": "d14e8c642d1b4450d8779971da79cecc190af22fe237dee56ec0dd583f0442f5",
      "overlay_ndjson_sha256": "627a0d8c273f55b2426c8c005037ef01d88324a75084ad44bd620b1330a539cc"
    },
    "inputs": {
      "sbom_source": "mock-sbom-v1"
    },
    "overlay": {
      "id_scheme": "sha256(tenant|nodeId|overlayKind)",
      "kind": "policy.overlay.v1",
      "path": "overlay.ndjson"
    },
    "seed": 424242,
    "snapshot_id": "graph-40k-policy-overlay-20251122",
    "tenant": "demo-tenant"
  },
  "max_reach_3": 36,
  "nodes": 40100,
  "overlay": {
    "added_edges": 100,
    "applied": true,
    "introduced_nodes": 100,
    "path": "/mnt/e/dev/git.stella-ops.org/samples/graph/graph-40k/overlay.ndjson",
    "sha256": "627a0d8c273f55b2426c8c005037ef01d88324a75084ad44bd620b1330a539cc"
  },
  "overlay_ms": 52.24
}
42
src/Tools/StellaOps.Bench/Graph/run_graph_bench.sh
Normal file
@@ -0,0 +1,42 @@
#!/usr/bin/env bash
set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Repo root is four levels up from Graph/
REPO_ROOT="$(cd "${ROOT}/../../../.." && pwd)"
# Default to canonical graph-40k fixture; allow override or fallback to interim.
FIXTURES_ROOT="${FIXTURES_ROOT:-${REPO_ROOT}/samples/graph}"
OUT_DIR="${OUT_DIR:-$ROOT/results}"
OVERLAY_ROOT="${OVERLAY_ROOT:-${FIXTURES_ROOT}}"
SAMPLES="${SAMPLES:-100}"

mkdir -p "${OUT_DIR}"

run_one() {
  local fixture="$1"
  local name
  name="$(basename "${fixture}")"
  local out_file="${OUT_DIR}/${name}.json"
  local overlay_candidate="${OVERLAY_ROOT}/${name}/overlay.ndjson"

  args=("--fixture" "${fixture}" "--output" "${out_file}" "--samples" "${SAMPLES}")
  if [[ -f "${overlay_candidate}" ]]; then
    args+=("--overlay" "${overlay_candidate}")
  fi

  python "${ROOT}/graph_bench.py" "${args[@]}"
}

if [[ -d "${FIXTURES_ROOT}/graph-40k" ]]; then
  run_one "${FIXTURES_ROOT}/graph-40k"
fi

# legacy/interim comparisons
if [[ -d "${FIXTURES_ROOT}/interim/graph-50k" ]]; then
  run_one "${FIXTURES_ROOT}/interim/graph-50k"
fi
if [[ -d "${FIXTURES_ROOT}/interim/graph-100k" ]]; then
  run_one "${FIXTURES_ROOT}/interim/graph-100k"
fi

echo "Graph bench complete. Results in ${OUT_DIR}"
Binary file not shown.
63
src/Tools/StellaOps.Bench/Graph/tests/test_graph_bench.py
Normal file
@@ -0,0 +1,63 @@
import json
import sys
import tempfile
from pathlib import Path

import unittest

ROOT = Path(__file__).resolve().parents[1]
if str(ROOT) not in sys.path:
    sys.path.insert(0, str(ROOT))


class GraphBenchTests(unittest.TestCase):
    def setUp(self) -> None:
        self.tmp = tempfile.TemporaryDirectory()
        self.root = Path(self.tmp.name)

    def tearDown(self) -> None:
        self.tmp.cleanup()

    def _write_ndjson(self, path: Path, records):
        path.parent.mkdir(parents=True, exist_ok=True)
        with path.open("w", encoding="utf-8") as f:
            for record in records:
                f.write(json.dumps(record))
                f.write("\n")

    def test_overlay_edges_are_applied_and_counted(self):
        from graph_bench import run_bench

        fixture = self.root / "fixture"
        fixture.mkdir()

        self._write_ndjson(fixture / "nodes.ndjson", [{"id": "a"}, {"id": "b"}])
        self._write_ndjson(fixture / "edges.ndjson", [{"source": "a", "target": "b"}])
        self._write_ndjson(fixture / "overlay.ndjson", [{"source": "b", "target": "a"}])

        result = run_bench(fixture, sample_size=2)

        self.assertEqual(result["nodes"], 2)
        self.assertEqual(result["edges"], 2)  # overlay added one edge
        self.assertTrue(result["overlay"]["applied"])
        self.assertEqual(result["overlay"]["added_edges"], 1)
        self.assertEqual(result["overlay"]["introduced_nodes"], 0)

    def test_overlay_is_optional(self):
        from graph_bench import run_bench

        fixture = self.root / "fixture-no-overlay"
        fixture.mkdir()

        self._write_ndjson(fixture / "nodes.ndjson", [{"id": "x"}, {"id": "y"}])
        self._write_ndjson(fixture / "edges.ndjson", [{"source": "x", "target": "y"}])

        result = run_bench(fixture, sample_size=2)

        self.assertEqual(result["edges"], 1)
        self.assertFalse(result["overlay"]["applied"])
        self.assertEqual(result["overlay"]["added_edges"], 0)


if __name__ == "__main__":
    unittest.main()
94
src/Tools/StellaOps.Bench/Graph/ui_bench_driver.mjs
Normal file
@@ -0,0 +1,94 @@
#!/usr/bin/env node
/**
 * ui_bench_driver.mjs
 *
 * Reads scenarios and fixture manifest, and emits a deterministic run plan.
 * This is browser-free; intended to be wrapped by Playwright later.
 */
import fs from "fs";
import path from "path";
import crypto from "crypto";

function readJson(p) {
  return JSON.parse(fs.readFileSync(p, "utf-8"));
}

function sha256File(filePath) {
  const hash = crypto.createHash("sha256");
  hash.update(fs.readFileSync(filePath));
  return hash.digest("hex");
}

function resolveOverlay(fixtureDir, manifest) {
  const manifestOverlay = manifest?.overlay?.path;
  const candidate = manifestOverlay
    ? path.isAbsolute(manifestOverlay)
      ? manifestOverlay
      : path.join(fixtureDir, manifestOverlay)
    : path.join(fixtureDir, "overlay.ndjson");

  if (!fs.existsSync(candidate)) {
    return null;
  }

  return {
    path: candidate,
    sha256: sha256File(candidate),
  };
}

function buildPlan(scenarios, manifest, fixtureName, fixtureDir) {
  const now = new Date().toISOString();
  const seed = process.env.UI_BENCH_SEED || "424242";
  const traceId =
    process.env.UI_BENCH_TRACE_ID ||
    (crypto.randomUUID ? crypto.randomUUID() : `trace-${Date.now()}`);
  const overlay = resolveOverlay(fixtureDir, manifest);

  return {
    version: "1.0.0",
    fixture: fixtureName,
    manifestHash: manifest?.hashes || {},
    overlay,
    timestamp: now,
    seed,
    traceId,
    viewport: {
      width: 1280,
      height: 720,
      deviceScaleFactor: 1,
    },
    steps: scenarios.map((s, idx) => ({
      order: idx + 1,
      id: s.id,
      name: s.name,
      actions: s.steps,
    })),
  };
}

function main() {
  const fixtureDir = process.argv[2];
  const scenariosPath = process.argv[3];
  const outputPath = process.argv[4];
  if (!fixtureDir || !scenariosPath || !outputPath) {
    console.error("usage: ui_bench_driver.mjs <fixture_dir> <scenarios.json> <output.json>");
    process.exit(1);
  }

  const manifestPath = path.join(fixtureDir, "manifest.json");
  const manifest = fs.existsSync(manifestPath) ? readJson(manifestPath) : {};
  const scenarios = readJson(scenariosPath).scenarios || [];

  const plan = buildPlan(
    scenarios,
    manifest,
    path.basename(fixtureDir),
    fixtureDir
  );
  fs.mkdirSync(path.dirname(outputPath), { recursive: true });
  fs.writeFileSync(outputPath, JSON.stringify(plan, null, 2));
  console.log(`Wrote plan to ${outputPath}`);
}

main();
42
src/Tools/StellaOps.Bench/Graph/ui_bench_driver.test.mjs
Normal file
@@ -0,0 +1,42 @@
import assert from "node:assert";
import { test } from "node:test";
import fs from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
import { spawnSync } from "node:child_process";

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

test("ui bench driver emits overlay + seed metadata", () => {
  const tmp = fs.mkdtempSync(path.join(process.cwd(), "tmp-ui-bench-"));
  const fixtureDir = path.join(tmp, "fixture");
  fs.mkdirSync(fixtureDir, { recursive: true });

  // minimal fixture files
  fs.writeFileSync(path.join(fixtureDir, "manifest.json"), JSON.stringify({ hashes: { nodes: "abc" } }));
  fs.writeFileSync(path.join(fixtureDir, "overlay.ndjson"), "{\"source\":\"a\",\"target\":\"b\"}\n");

  const scenariosPath = path.join(tmp, "scenarios.json");
  fs.writeFileSync(
    scenariosPath,
    JSON.stringify({ version: "1.0.0", scenarios: [{ id: "load", name: "Load", steps: ["navigate"] }] })
  );

  const outputPath = path.join(tmp, "plan.json");
  const env = { ...process.env, UI_BENCH_SEED: "1337", UI_BENCH_TRACE_ID: "trace-test" };
  const driverPath = path.join(__dirname, "ui_bench_driver.mjs");
  const result = spawnSync(process.execPath, [driverPath, fixtureDir, scenariosPath, outputPath], { env });
  assert.strictEqual(result.status, 0, result.stderr?.toString());

  const plan = JSON.parse(fs.readFileSync(outputPath, "utf-8"));
  assert.strictEqual(plan.fixture, "fixture");
  assert.strictEqual(plan.seed, "1337");
  assert.strictEqual(plan.traceId, "trace-test");
  assert.ok(plan.overlay);
  assert.ok(plan.overlay.path.endsWith("overlay.ndjson"));
  assert.ok(plan.overlay.sha256);
  assert.deepStrictEqual(plan.viewport, { width: 1280, height: 720, deviceScaleFactor: 1 });

  fs.rmSync(tmp, { recursive: true, force: true });
});
32
src/Tools/StellaOps.Bench/Graph/ui_bench_plan.md
Normal file
@@ -0,0 +1,32 @@
# Graph UI Bench Plan (BENCH-GRAPH-21-002)

Purpose: provide a deterministic, headless flow for measuring graph UI interactions over large fixtures (50k/100k nodes).

## Scope
- Default fixture: `samples/graph/graph-40k` (SAMPLES-GRAPH-24-003) with policy overlay hashes.
- Legacy comparison fixtures remain under `samples/graph/interim/`.
- Optional overlay layer (`overlay.ndjson`) is loaded when present and toggled during the run to capture render/merge overhead.
- Drive a deterministic sequence of interactions:
  1) Load the graph canvas with the specified fixture.
  2) Pan to node `pkg-000001`.
  3) Zoom in 2×, zoom out 1×.
  4) Apply filter `name contains "package-0001"`.
  5) Select a node, expand neighbors (depth 1), collapse.
  6) Toggle the overlay layer (once available).
- Capture timings: initial render, filter apply, expand/collapse, overlay toggle (when available).

## Determinism rules
- Fixed seed for any randomized layouts (seed `424242`).
- Disable animations/transitions where possible; otherwise measure after `requestAnimationFrame` settles.
- No network calls; fixtures are loaded from local files served by the test harness.
- Stable viewport (width=1280, height=720), device scale factor 1.

## Artifacts
- `ui_bench_scenarios.json` — canonical scenario list with step ids and notes.
- `ui_bench_driver.mjs` — helper that reads scenarios plus the fixture manifest and emits a run plan (no browser dependency). Intended to be wrapped by a Playwright script later.
- Results format (proposed): NDJSON with `{stepId, name, durationMs, fixture, timestamp}`.

## Next steps
- Bind `ui_bench_driver.mjs` into a Playwright harness when a Graph UI build/serve target is available.
- Swap fixtures to SAMPLES-GRAPH-24-003 + overlay once the schema is finalized; keep scenario ids stable.
- Add a CI slice that runs the driver and validates scenario/fixture bindings (no browser) to keep determinism checked in commits.
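The proposed results format can be validated before any Playwright wiring exists. A hedged sketch, assuming the `{stepId, name, durationMs, fixture, timestamp}` schema above (the validator itself is hypothetical, not part of this plan's artifacts):

```python
import json

# Required keys mirror the proposed NDJSON results format.
REQUIRED_KEYS = {"stepId", "name", "durationMs", "fixture", "timestamp"}

def validate_ui_results(ndjson_text: str) -> list:
    """Parse proposed UI-bench NDJSON results, rejecting rows with missing keys."""
    rows = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        row = json.loads(line)
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"missing keys: {sorted(missing)}")
        rows.append(row)
    return rows
```

A check like this could back the proposed CI slice, keeping scenario/result bindings honest without a browser.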
35
src/Tools/StellaOps.Bench/Graph/ui_bench_scenarios.json
Normal file
@@ -0,0 +1,35 @@
{
  "version": "1.0.0",
  "scenarios": [
    {
      "id": "load",
      "name": "Load graph canvas",
      "steps": ["navigate", "waitForRender"]
    },
    {
      "id": "pan-start-node",
      "name": "Pan to pkg-000001",
      "steps": ["panTo:pkg-000001"]
    },
    {
      "id": "zoom-in-out",
      "name": "Zoom in twice, out once",
      "steps": ["zoomIn", "zoomIn", "zoomOut"]
    },
    {
      "id": "filter-name",
      "name": "Filter name contains package-0001",
      "steps": ["setFilter:name=package-0001", "waitForRender"]
    },
    {
      "id": "expand-collapse",
      "name": "Expand neighbors then collapse",
      "steps": ["select:pkg-000001", "expandDepth:1", "collapseSelection"]
    },
    {
      "id": "overlay-toggle",
      "name": "Toggle overlay layer",
      "steps": ["toggleOverlay:on", "toggleOverlay:off"]
    }
  ]
}
22
src/Tools/StellaOps.Bench/ImpactIndex/README.md
Normal file
@@ -0,0 +1,22 @@
# ImpactIndex Throughput Benchmark

This harness replays a deterministic set of productKeys to measure cold vs warm lookup performance for the ImpactIndex planner. It is offline-only and relies on the bundled NDJSON dataset.

## Inputs
- `docs/samples/impactindex/products-10k.ndjson` (+ `.sha256`), generated with seed `2025-01-01T00:00:00Z`.
- No network calls are performed; all data is local.

## Running
```bash
python impact_index_bench.py --input ../../../../docs/samples/impactindex/products-10k.ndjson --output results/impactindex.ndjson --threads 1 --seed 20250101
```

## Output
- NDJSON with one record per pass (`cold`, `warm`), fields:
  `pass`, `startedAtUtc`, `durationMs`, `throughput_items_per_sec`, `p95Ms`, `p99Ms`, `maxMs`, `rssMb`, `managedMb`, `gc_gen2`, `cacheHitRate`.
- Use `results/impactindex.ndjson` as evidence and publish hashes alongside runs when promoting to CI.

## Determinism Notes
- Fixed seed controls per-product work and cache access order.
- Single-threaded by default; use `--threads 1` for reproducible timing.
- Property order is sorted in output NDJSON for stable diffs.
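As a quick sanity check on a run, the warm pass should dwarf the cold pass in throughput. A hedged sketch for deriving that ratio from the output NDJSON (field names follow the list above; the helper itself is not part of the harness):

```python
import json

def warm_cold_speedup(ndjson_text: str) -> float:
    """Return the warm/cold throughput ratio from the two-record bench output."""
    records = {}
    for line in ndjson_text.splitlines():
        if line.strip():
            row = json.loads(line)
            records[row["pass"]] = row
    return records["warm"]["throughput_items_per_sec"] / records["cold"]["throughput_items_per_sec"]
```

Gating on a minimum ratio in CI would catch cache regressions even when absolute timings drift with hardware.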
1
src/Tools/StellaOps.Bench/ImpactIndex/__init__.py
Normal file
@@ -0,0 +1 @@
# Package marker for ImpactIndex bench harness.
Binary file not shown.
146
src/Tools/StellaOps.Bench/ImpactIndex/impact_index_bench.py
Normal file
@@ -0,0 +1,146 @@
"""ImpactIndex throughput benchmark harness.

This harness replays a deterministic productKey dataset and records cold vs warm
lookup performance. It is intentionally offline-friendly and relies only on the
provided NDJSON inputs.
"""

import argparse
import gc
import hashlib
import json
import random
import statistics
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Iterable, List, Tuple


def percentile(values: List[float], pct: float) -> float:
    """Return an interpolated percentile to keep outputs deterministic."""
    if not values:
        return 0.0
    ordered = sorted(values)
    k = (len(ordered) - 1) * (pct / 100.0)
    lower = int(k)
    upper = min(lower + 1, len(ordered) - 1)
    if lower == upper:
        return float(ordered[lower])
    fraction = k - lower
    return float(ordered[lower] + (ordered[upper] - ordered[lower]) * fraction)


def load_product_keys(path: Path) -> List[str]:
    with path.open(encoding="utf-8") as handle:
        return [json.loads(line)["productKey"] for line in handle if line.strip()]


class ImpactIndexBench:
    def __init__(self, seed: int, threads: int):
        self.rng = random.Random(seed)
        self.threads = threads
        self.cache = {}
        self.cache_hits = 0
        self.cache_misses = 0

    def _compute_cost(self, product_key: str) -> int:
        digest = hashlib.blake2b(product_key.encode("utf-8"), digest_size=16).digest()
        local_rng = random.Random(hashlib.sha1(product_key.encode("utf-8")).hexdigest())
        iterations = 40 + (digest[0] % 30)
        value = 0
        for i in range(iterations):
            value ^= (digest[i % len(digest)] + i * 31) & 0xFFFFFFFF
            value ^= local_rng.randint(0, 1024)
        # Simple deterministic cost proxy
        return value

    def resolve(self, product_key: str) -> int:
        if product_key in self.cache:
            self.cache_hits += 1
            return self.cache[product_key]

        cost = self._compute_cost(product_key)
        enriched = (cost % 1000) + 1
        self.cache[product_key] = enriched
        self.cache_misses += 1
        return enriched


def run_pass(pass_name: str, bench: ImpactIndexBench, product_keys: Iterable[str]) -> Tuple[dict, List[float]]:
    started_at = datetime.now(timezone.utc).isoformat()
    timings_ms: List[float] = []

    gc.collect()
    import tracemalloc

    tracemalloc.start()
    start = time.perf_counter()
    for key in product_keys:
        t0 = time.perf_counter()
        bench.resolve(key)
        timings_ms.append((time.perf_counter() - t0) * 1000.0)
    duration_ms = (time.perf_counter() - start) * 1000.0
    current_bytes, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    # GC stats are coarse; we surface gen2 collections as a proxy for managed pressure.
    if hasattr(gc, "get_stats"):
        gc_stats = gc.get_stats()
        gc_gen2 = gc_stats[2]["collections"] if len(gc_stats) > 2 else 0
    else:
        counts = gc.get_count()
        gc_gen2 = counts[2] if len(counts) > 2 else 0

    throughput = (len(timings_ms) / (duration_ms / 1000.0)) if duration_ms else 0.0
    record = {
        "pass": pass_name,
        "startedAtUtc": started_at,
        "durationMs": round(duration_ms, 3),
        "throughput_items_per_sec": round(throughput, 3),
        "p95Ms": round(percentile(timings_ms, 95), 3),
        "p99Ms": round(percentile(timings_ms, 99), 3),
        "maxMs": round(max(timings_ms) if timings_ms else 0.0, 3),
        "rssMb": round(peak_bytes / (1024 * 1024), 3),
        "managedMb": round(peak_bytes / (1024 * 1024), 3),
        "gc_gen2": gc_gen2,
        "cacheHitRate": round(
            bench.cache_hits / max(1, (bench.cache_hits + bench.cache_misses)), 4
        ),
    }
    return record, timings_ms


def write_ndjson(records: List[dict], output: Path):
    output.parent.mkdir(parents=True, exist_ok=True)
    with output.open("w", encoding="utf-8") as handle:
        for record in records:
            handle.write(json.dumps(record, separators=(",", ":"), sort_keys=True) + "\n")


def parse_args(argv: List[str] | None = None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="ImpactIndex throughput benchmark")
    parser.add_argument("--input", required=True, help="Path to products-10k.ndjson dataset")
    parser.add_argument("--output", default="results/impactindex.ndjson", help="Output NDJSON path")
    parser.add_argument("--threads", type=int, default=1, help="Thread count (deterministic when 1)")
    parser.add_argument("--seed", type=int, default=20250101, help="Seed for deterministic runs")
    return parser.parse_args(argv)


def main(argv: List[str] | None = None):
    args = parse_args(argv)
    dataset_path = Path(args.input)
    product_keys = load_product_keys(dataset_path)

    bench = ImpactIndexBench(seed=args.seed, threads=args.threads)
    cold_record, cold_timings = run_pass("cold", bench, product_keys)
    warm_record, warm_timings = run_pass("warm", bench, product_keys)

    output_path = Path(args.output)
    write_ndjson([cold_record, warm_record], output_path)
    print(f"Wrote {output_path} with {len(product_keys)} productKeys")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
@@ -0,0 +1,2 @@
{"cacheHitRate":0.0,"durationMs":4327.484,"gc_gen2":1,"managedMb":0.743,"maxMs":1.454,"p95Ms":0.746,"p99Ms":0.948,"pass":"cold","rssMb":0.743,"startedAtUtc":"2025-12-11T20:46:49.411207+00:00","throughput_items_per_sec":2310.811}
{"cacheHitRate":0.5,"durationMs":14.618,"gc_gen2":2,"managedMb":0.31,"maxMs":0.098,"p95Ms":0.001,"p99Ms":0.003,"pass":"warm","rssMb":0.31,"startedAtUtc":"2025-12-11T20:46:53.753219+00:00","throughput_items_per_sec":684092.79}
@@ -0,0 +1 @@
7e9f1041a4be6f1b0eeed26f1b4e730ae918876dc2846e36dab4403f9164485e  impactindex.ndjson
1
src/Tools/StellaOps.Bench/ImpactIndex/tests/__init__.py
Normal file
@@ -0,0 +1 @@
# Package marker for unit test discovery.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,61 @@
import json
import sys
import unittest
from pathlib import Path

ROOT = Path(__file__).resolve().parents[1]
if str(ROOT) not in sys.path:
    sys.path.insert(0, str(ROOT))

import impact_index_bench as bench


def build_dataset(tmp_path: Path) -> Path:
    path = tmp_path / "products.ndjson"
    samples = [
        {"productKey": "pkg:npm/alpha@1.0.0", "tenant": "bench"},
        {"productKey": "pkg:npm/bravo@1.0.1", "tenant": "bench"},
        {"productKey": "pkg:pypi/charlie@2.0.0", "tenant": "bench"},
    ]
    with path.open("w", encoding="utf-8") as handle:
        for item in samples:
            handle.write(json.dumps(item, separators=(",", ":")) + "\n")
    return path


class ImpactIndexBenchTests(unittest.TestCase):
    def test_percentile_interpolation(self):
        values = [1, 2, 3, 4, 5]
        self.assertEqual(bench.percentile(values, 50), 3)
        self.assertAlmostEqual(bench.percentile(values, 95), 4.8, places=3)

    def test_bench_runs_cold_and_warm(self):
        tmp_path = Path(self._get_tempdir())
        dataset = build_dataset(tmp_path)
        keys = bench.load_product_keys(dataset)
        harness = bench.ImpactIndexBench(seed=20250101, threads=1)

        cold_record, cold_timings = bench.run_pass("cold", harness, keys)
        warm_record, warm_timings = bench.run_pass("warm", harness, keys)

        self.assertEqual(cold_record["pass"], "cold")
        self.assertEqual(warm_record["pass"], "warm")
        self.assertEqual(len(cold_timings), len(keys))
        self.assertEqual(len(warm_timings), len(keys))
        self.assertGreater(warm_record["cacheHitRate"], cold_record["cacheHitRate"])

    def test_write_ndjson_orders_properties(self):
        tmp_path = Path(self._get_tempdir())
        output = tmp_path / "out.ndjson"
        bench.write_ndjson([{"b": 2, "a": 1}], output)
        content = output.read_text(encoding="utf-8").strip()
        self.assertEqual(content, '{"a":1,"b":2}')

    def _get_tempdir(self) -> Path:
        import tempfile

        return Path(tempfile.mkdtemp(prefix="impact-bench-test-"))


if __name__ == "__main__":
    unittest.main()
26
src/Tools/StellaOps.Bench/LinkNotMerge.Vex/README.md
Normal file
@@ -0,0 +1,26 @@
# Link-Not-Merge VEX Bench

Measures synthetic VEX observation ingest and event emission throughput for the Link-Not-Merge program.

## Scenarios

`config.json` defines workloads with varying statement density and tenant fan-out. Metrics captured per scenario:

- Total latency (ingest + correlation) with p95/max percentiles
- Correlator-only latency and Mongo insert latency
- Observation throughput (observations/sec)
- Event emission throughput (events/sec)
- Peak managed heap allocations
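As a rough illustration of the workload knobs above, here is a hypothetical scenario entry. The field names are borrowed from the `VexScenarioConfig` properties exercised by the test project; the actual `config.json` key casing and values are assumptions:

```python
# Hypothetical scenario entry; keys mirror VexScenarioConfig properties
# (Observations, AliasGroups, StatementsPerObservation, ...) but the real
# config.json shape may differ.
scenario = {
    "id": "vex_ingest_baseline",
    "observations": 4000,
    "aliasGroups": 400,
    "statementsPerObservation": 6,
    "productsPerObservation": 3,
    "tenants": 4,
    "batchSize": 200,
    "seed": 20250101,
}

# Statement density drives correlation load linearly:
total_statements = scenario["observations"] * scenario["statementsPerObservation"]
```

With these numbers a run correlates 24,000 statements, which is the scale the baseline CSV rows below record.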
## Running locally

```bash
dotnet run \
  --project src/Bench/StellaOps.Bench/LinkNotMerge.Vex/StellaOps.Bench.LinkNotMerge.Vex/StellaOps.Bench.LinkNotMerge.Vex.csproj \
  -- \
  --csv out/linknotmerge-vex-bench.csv \
  --json out/linknotmerge-vex-bench.json \
  --prometheus out/linknotmerge-vex-bench.prom
```

The benchmark exits non-zero if latency thresholds are exceeded, observation or event throughput drops below configured floors, allocations exceed the ceiling, or regression ratios breach the baseline.
@@ -0,0 +1,29 @@
# LinkNotMerge VEX Benchmark Tests Charter

## Mission
Own the LinkNotMerge VEX benchmark test suite. Validate config parsing, regression reporting, and deterministic benchmark helpers.

## Responsibilities
- Maintain `StellaOps.Bench.LinkNotMerge.Vex.Tests`.
- Ensure tests remain deterministic and offline-friendly.
- Surface open work on `TASKS.md`; update statuses (TODO/DOING/DONE/BLOCKED/REVIEW).

## Key Paths
- `BaselineLoaderTests.cs`
- `BenchmarkScenarioReportTests.cs`
- `VexScenarioRunnerTests.cs`

## Coordination
- Bench guild for regression thresholds and baselines.
- Platform guild for determinism expectations.

## Required Reading
- `docs/modules/platform/architecture-overview.md`
- `docs/README.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause a task without shipping changes; leave notes in commit/PR descriptions for context.
@@ -0,0 +1,39 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Bench.LinkNotMerge.Vex.Baseline;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.LinkNotMerge.Vex.Tests;

public sealed class BaselineLoaderTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public async Task LoadAsync_ReadsEntries()
    {
        var path = Path.GetTempFileName();
        try
        {
            await File.WriteAllTextAsync(
                path,
                "scenario,iterations,observations,statements,events,mean_total_ms,p95_total_ms,max_total_ms,mean_insert_ms,mean_correlation_ms,mean_observation_throughput_per_sec,min_observation_throughput_per_sec,mean_event_throughput_per_sec,min_event_throughput_per_sec,max_allocated_mb\n" +
                "vex_ingest_baseline,5,4000,24000,12000,620.5,700.1,820.9,320.5,300.0,9800.0,9100.0,4200.0,3900.0,150.0\n");

            var baseline = await BaselineLoader.LoadAsync(path, CancellationToken.None);
            var entry = Assert.Single(baseline);

            Assert.Equal("vex_ingest_baseline", entry.Key);
            Assert.Equal(4000, entry.Value.Observations);
            Assert.Equal(24000, entry.Value.Statements);
            Assert.Equal(12000, entry.Value.Events);
            Assert.Equal(700.1, entry.Value.P95TotalMs);
            Assert.Equal(3900.0, entry.Value.MinEventThroughputPerSecond);
        }
        finally
        {
            File.Delete(path);
        }
    }
}
@@ -0,0 +1,86 @@
using StellaOps.Bench.LinkNotMerge.Vex.Baseline;
using StellaOps.Bench.LinkNotMerge.Vex.Reporting;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.LinkNotMerge.Vex.Tests;

public sealed class BenchmarkScenarioReportTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void RegressionDetection_FlagsBreaches()
    {
        var result = new VexScenarioResult(
            Id: "scenario",
            Label: "Scenario",
            Iterations: 3,
            ObservationCount: 1000,
            AliasGroups: 100,
            StatementCount: 6000,
            EventCount: 3200,
            TotalStatistics: new DurationStatistics(600, 700, 750),
            InsertStatistics: new DurationStatistics(320, 360, 380),
            CorrelationStatistics: new DurationStatistics(280, 320, 340),
            ObservationThroughputStatistics: new ThroughputStatistics(8000, 7000),
            EventThroughputStatistics: new ThroughputStatistics(3500, 3200),
            AllocationStatistics: new AllocationStatistics(180),
            ThresholdMs: null,
            MinObservationThroughputPerSecond: null,
            MinEventThroughputPerSecond: null,
            MaxAllocatedThresholdMb: null);

        var baseline = new BaselineEntry(
            ScenarioId: "scenario",
            Iterations: 3,
            Observations: 1000,
            Statements: 6000,
            Events: 3200,
            MeanTotalMs: 520,
            P95TotalMs: 560,
            MaxTotalMs: 580,
            MeanInsertMs: 250,
            MeanCorrelationMs: 260,
            MeanObservationThroughputPerSecond: 9000,
            MinObservationThroughputPerSecond: 8500,
            MeanEventThroughputPerSecond: 4200,
            MinEventThroughputPerSecond: 3800,
            MaxAllocatedMb: 140);

        var report = new BenchmarkScenarioReport(result, baseline, regressionLimit: 1.1);

        Assert.True(report.DurationRegressionBreached);
        Assert.True(report.ObservationThroughputRegressionBreached);
        Assert.True(report.EventThroughputRegressionBreached);
        Assert.Contains(report.BuildRegressionFailureMessages(), message => message.Contains("event throughput"));
    }

    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void RegressionDetection_NoBaseline_NoBreaches()
    {
        var result = new VexScenarioResult(
            Id: "scenario",
            Label: "Scenario",
            Iterations: 3,
            ObservationCount: 1000,
            AliasGroups: 100,
            StatementCount: 6000,
            EventCount: 3200,
            TotalStatistics: new DurationStatistics(480, 520, 540),
            InsertStatistics: new DurationStatistics(260, 280, 300),
            CorrelationStatistics: new DurationStatistics(220, 240, 260),
            ObservationThroughputStatistics: new ThroughputStatistics(9000, 8800),
            EventThroughputStatistics: new ThroughputStatistics(4200, 4100),
            AllocationStatistics: new AllocationStatistics(150),
            ThresholdMs: null,
            MinObservationThroughputPerSecond: null,
            MinEventThroughputPerSecond: null,
            MaxAllocatedThresholdMb: null);

        var report = new BenchmarkScenarioReport(result, baseline: null, regressionLimit: null);

        Assert.False(report.RegressionBreached);
        Assert.Empty(report.BuildRegressionFailureMessages());
    }
}
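The multiplier semantics the breach test exercises can be sketched outside C#. This is an assumption-laden Python sketch of the rule, not the `BenchmarkScenarioReport` implementation: latency regresses when the result exceeds baseline × limit, and throughput regresses on the inverse check.

```python
def latency_regressed(result_mean_ms, baseline_mean_ms, limit=1.15):
    # Breach when the measured mean is more than `limit` times the baseline.
    return result_mean_ms > baseline_mean_ms * limit


def throughput_regressed(result_per_sec, baseline_per_sec, limit=1.15):
    # Throughput is "bigger is better", so divide instead of multiply.
    return result_per_sec < baseline_per_sec / limit
```

The flagged case above follows: 600 ms against a 520 ms baseline at a 1.1 limit breaches (600 > 572), and an event throughput of 3,500/s against a 4,200/s baseline breaches (3,500 < 3,818).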
@@ -0,0 +1,21 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <LangVersion>preview</LangVersion>
    <TreatWarningsAsErrors>false</TreatWarningsAsErrors>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="coverlet.collector">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\StellaOps.Bench.LinkNotMerge.Vex\StellaOps.Bench.LinkNotMerge.Vex.csproj" />
    <ProjectReference Include="../../../../__Libraries/StellaOps.TestKit/StellaOps.TestKit.csproj" />
  </ItemGroup>
</Project>
@@ -0,0 +1,11 @@
# LinkNotMerge VEX Benchmark Tests Task Board

This board mirrors active sprint tasks for this module.
Source of truth: `docs-archived/implplan/2025-12-29-csproj-audit/SPRINT_20251229_049_BE_csproj_audit_maint_tests.md`.

| Task ID | Status | Notes |
| --- | --- | --- |
| AUDIT-0105-M | DONE | Revalidated 2026-01-06. |
| AUDIT-0105-T | DONE | Revalidated 2026-01-06. |
| AUDIT-0105-A | DONE | Waived (test project; revalidated 2026-01-06). |
| REMED-06 | DONE | SOLID review notes captured for SPRINT_20260130_002. |
@@ -0,0 +1,36 @@
using System.Linq;
using System.Threading;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.LinkNotMerge.Vex.Tests;

public sealed class VexScenarioRunnerTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void Execute_ComputesEvents()
    {
        var config = new VexScenarioConfig
        {
            Id = "unit",
            Observations = 600,
            AliasGroups = 120,
            StatementsPerObservation = 5,
            ProductsPerObservation = 3,
            Tenants = 2,
            BatchSize = 120,
            Seed = 12345,
        };

        var runner = new VexScenarioRunner(config);
        var result = runner.Execute(2, CancellationToken.None);

        Assert.Equal(600, result.ObservationCount);
        Assert.True(result.StatementCount > 0);
        Assert.True(result.EventCount > 0);
        Assert.All(result.TotalDurationsMs, duration => Assert.True(duration > 0));
        Assert.All(result.EventThroughputsPerSecond, throughput => Assert.True(throughput > 0));
        Assert.Equal(result.AggregationResult.EventCount, result.EventCount);
    }
}
@@ -0,0 +1,30 @@
# LinkNotMerge VEX Benchmark Charter

## Mission
Own the LinkNotMerge VEX benchmark harness and reporting outputs. Keep runs deterministic, offline-friendly, and aligned with production VEX flows.

## Responsibilities
- Maintain the `StellaOps.Bench.LinkNotMerge.Vex` runner, config parsing, and output writers.
- Keep benchmark inputs deterministic and document default datasets.
- Surface open work on `TASKS.md`; update statuses (TODO/DOING/DONE/BLOCKED/REVIEW).

## Key Paths
- `Program.cs`
- `VexScenarioConfig.cs`
- `VexScenarioRunner.cs`
- `Reporting/`

## Coordination
- Bench guild for performance baselines.
- Platform guild for determinism and offline expectations.

## Required Reading
- `docs/modules/platform/architecture-overview.md`
- `docs/README.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause a task without shipping changes; leave notes in commit/PR descriptions for context.
@@ -0,0 +1,18 @@
namespace StellaOps.Bench.LinkNotMerge.Vex.Baseline;

internal sealed record BaselineEntry(
    string ScenarioId,
    int Iterations,
    int Observations,
    int Statements,
    int Events,
    double MeanTotalMs,
    double P95TotalMs,
    double MaxTotalMs,
    double MeanInsertMs,
    double MeanCorrelationMs,
    double MeanObservationThroughputPerSecond,
    double MinObservationThroughputPerSecond,
    double MeanEventThroughputPerSecond,
    double MinEventThroughputPerSecond,
    double MaxAllocatedMb);
@@ -0,0 +1,87 @@
using System.Globalization;

namespace StellaOps.Bench.LinkNotMerge.Vex.Baseline;

internal static class BaselineLoader
{
    public static async Task<IReadOnlyDictionary<string, BaselineEntry>> LoadAsync(string path, CancellationToken cancellationToken)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);

        var resolved = Path.GetFullPath(path);
        if (!File.Exists(resolved))
        {
            return new Dictionary<string, BaselineEntry>(StringComparer.OrdinalIgnoreCase);
        }

        var result = new Dictionary<string, BaselineEntry>(StringComparer.OrdinalIgnoreCase);

        await using var stream = new FileStream(resolved, FileMode.Open, FileAccess.Read, FileShare.Read);
        using var reader = new StreamReader(stream);

        var lineNumber = 0;
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();

            var line = await reader.ReadLineAsync().ConfigureAwait(false);
            if (line is null)
            {
                break;
            }

            lineNumber++;
            if (lineNumber == 1 || string.IsNullOrWhiteSpace(line))
            {
                continue;
            }

            var parts = line.Split(',', StringSplitOptions.TrimEntries);
            if (parts.Length < 15)
            {
                throw new InvalidOperationException($"Baseline '{resolved}' line {lineNumber} is invalid (expected 15 columns, found {parts.Length}).");
            }

            var entry = new BaselineEntry(
                ScenarioId: parts[0],
                Iterations: ParseInt(parts[1], resolved, lineNumber),
                Observations: ParseInt(parts[2], resolved, lineNumber),
                Statements: ParseInt(parts[3], resolved, lineNumber),
                Events: ParseInt(parts[4], resolved, lineNumber),
                MeanTotalMs: ParseDouble(parts[5], resolved, lineNumber),
                P95TotalMs: ParseDouble(parts[6], resolved, lineNumber),
                MaxTotalMs: ParseDouble(parts[7], resolved, lineNumber),
                MeanInsertMs: ParseDouble(parts[8], resolved, lineNumber),
                MeanCorrelationMs: ParseDouble(parts[9], resolved, lineNumber),
                MeanObservationThroughputPerSecond: ParseDouble(parts[10], resolved, lineNumber),
                MinObservationThroughputPerSecond: ParseDouble(parts[11], resolved, lineNumber),
                MeanEventThroughputPerSecond: ParseDouble(parts[12], resolved, lineNumber),
                MinEventThroughputPerSecond: ParseDouble(parts[13], resolved, lineNumber),
                MaxAllocatedMb: ParseDouble(parts[14], resolved, lineNumber));

            result[entry.ScenarioId] = entry;
        }

        return result;
    }

    private static int ParseInt(string value, string file, int line)
    {
        if (int.TryParse(value, NumberStyles.Integer, CultureInfo.InvariantCulture, out var parsed))
        {
            return parsed;
        }

        throw new InvalidOperationException($"Baseline '{file}' line {line} contains an invalid integer '{value}'.");
    }

    private static double ParseDouble(string value, string file, int line)
    {
        if (double.TryParse(value, NumberStyles.Float, CultureInfo.InvariantCulture, out var parsed))
        {
            return parsed;
        }

        throw new InvalidOperationException($"Baseline '{file}' line {line} contains an invalid number '{value}'.");
    }
}
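For readers skimming the format rather than the C#, the same 15-column baseline layout can be parsed with a short Python sketch. Values are kept as strings here; the real loader parses them with the invariant culture and rejects malformed rows:

```python
import csv
import io


def load_baseline(text: str) -> dict:
    # Index rows by scenario id; a later duplicate overwrites the earlier
    # one, matching BaselineLoader's dictionary-assignment semantics.
    rows = {}
    for row in csv.DictReader(io.StringIO(text)):
        rows[row["scenario"]] = row
    return rows
```

Feeding it the header plus the `vex_ingest_baseline` row used in the tests yields one entry keyed by scenario id, with `p95_total_ms` of `700.1`.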
@@ -0,0 +1,376 @@
using System.Globalization;
using StellaOps.Bench.LinkNotMerge.Vex.Baseline;
using StellaOps.Bench.LinkNotMerge.Vex.Reporting;

namespace StellaOps.Bench.LinkNotMerge.Vex;

internal static class Program
{
    public static async Task<int> Main(string[] args)
    {
        try
        {
            var options = ProgramOptions.Parse(args);
            var config = await VexBenchmarkConfig.LoadAsync(options.ConfigPath).ConfigureAwait(false);
            var baseline = await BaselineLoader.LoadAsync(options.BaselinePath, CancellationToken.None).ConfigureAwait(false);

            var results = new List<VexScenarioResult>();
            var reports = new List<BenchmarkScenarioReport>();
            var failures = new List<string>();

            foreach (var scenario in config.Scenarios)
            {
                var iterations = scenario.ResolveIterations(config.Iterations);
                var runner = new VexScenarioRunner(scenario);
                var execution = runner.Execute(iterations, CancellationToken.None);

                var totalStats = DurationStatistics.From(execution.TotalDurationsMs);
                var insertStats = DurationStatistics.From(execution.InsertDurationsMs);
                var correlationStats = DurationStatistics.From(execution.CorrelationDurationsMs);
                var allocationStats = AllocationStatistics.From(execution.AllocatedMb);
                var observationThroughputStats = ThroughputStatistics.From(execution.ObservationThroughputsPerSecond);
                var eventThroughputStats = ThroughputStatistics.From(execution.EventThroughputsPerSecond);

                var thresholdMs = scenario.ThresholdMs ?? options.ThresholdMs ?? config.ThresholdMs;
                var observationFloor = scenario.MinThroughputPerSecond ?? options.MinThroughputPerSecond ?? config.MinThroughputPerSecond;
                var eventFloor = scenario.MinEventThroughputPerSecond ?? options.MinEventThroughputPerSecond ?? config.MinEventThroughputPerSecond;
                var allocationLimit = scenario.MaxAllocatedMb ?? options.MaxAllocatedMb ?? config.MaxAllocatedMb;

                var result = new VexScenarioResult(
                    scenario.ScenarioId,
                    scenario.DisplayLabel,
                    iterations,
                    execution.ObservationCount,
                    execution.AliasGroups,
                    execution.StatementCount,
                    execution.EventCount,
                    totalStats,
                    insertStats,
                    correlationStats,
                    observationThroughputStats,
                    eventThroughputStats,
                    allocationStats,
                    thresholdMs,
                    observationFloor,
                    eventFloor,
                    allocationLimit);

                results.Add(result);

                if (thresholdMs is { } threshold && result.TotalStatistics.MaxMs > threshold)
                {
                    failures.Add($"{result.Id} exceeded total latency threshold: {result.TotalStatistics.MaxMs:F2} ms > {threshold:F2} ms");
                }

                if (observationFloor is { } obsFloor && result.ObservationThroughputStatistics.MinPerSecond < obsFloor)
                {
                    failures.Add($"{result.Id} fell below observation throughput floor: {result.ObservationThroughputStatistics.MinPerSecond:N0} obs/s < {obsFloor:N0} obs/s");
                }

                if (eventFloor is { } evtFloor && result.EventThroughputStatistics.MinPerSecond < evtFloor)
                {
                    failures.Add($"{result.Id} fell below event throughput floor: {result.EventThroughputStatistics.MinPerSecond:N0} events/s < {evtFloor:N0} events/s");
                }

                if (allocationLimit is { } limit && result.AllocationStatistics.MaxAllocatedMb > limit)
                {
                    failures.Add($"{result.Id} exceeded allocation budget: {result.AllocationStatistics.MaxAllocatedMb:F2} MB > {limit:F2} MB");
                }

                baseline.TryGetValue(result.Id, out var baselineEntry);
                var report = new BenchmarkScenarioReport(result, baselineEntry, options.RegressionLimit);
                reports.Add(report);
                failures.AddRange(report.BuildRegressionFailureMessages());
            }

            TablePrinter.Print(results);

            if (!string.IsNullOrWhiteSpace(options.CsvOutPath))
            {
                CsvWriter.Write(options.CsvOutPath!, results);
            }

            if (!string.IsNullOrWhiteSpace(options.JsonOutPath))
            {
                var metadata = new BenchmarkJsonMetadata(
                    SchemaVersion: "linknotmerge-vex-bench/1.0",
                    CapturedAtUtc: (options.CapturedAtUtc ?? DateTimeOffset.UtcNow).ToUniversalTime(),
                    Commit: options.Commit,
                    Environment: options.Environment);

                await BenchmarkJsonWriter.WriteAsync(options.JsonOutPath!, metadata, reports, CancellationToken.None).ConfigureAwait(false);
            }

            if (!string.IsNullOrWhiteSpace(options.PrometheusOutPath))
            {
                PrometheusWriter.Write(options.PrometheusOutPath!, reports);
            }

            if (failures.Count > 0)
            {
                Console.Error.WriteLine();
                Console.Error.WriteLine("Benchmark failures detected:");
                foreach (var failure in failures.Distinct())
                {
                    Console.Error.WriteLine($"  - {failure}");
                }

                return 1;
            }

            return 0;
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine($"linknotmerge-vex-bench error: {ex.Message}");
            return 1;
        }
    }

    private sealed record ProgramOptions(
        string ConfigPath,
        int? Iterations,
        double? ThresholdMs,
        double? MinThroughputPerSecond,
        double? MinEventThroughputPerSecond,
        double? MaxAllocatedMb,
        string? CsvOutPath,
        string? JsonOutPath,
        string? PrometheusOutPath,
        string BaselinePath,
        DateTimeOffset? CapturedAtUtc,
        string? Commit,
        string? Environment,
        double? RegressionLimit)
    {
        public static ProgramOptions Parse(string[] args)
        {
            var configPath = DefaultConfigPath();
            var baselinePath = DefaultBaselinePath();

            int? iterations = null;
            double? thresholdMs = null;
            double? minThroughput = null;
            double? minEventThroughput = null;
            double? maxAllocated = null;
            string? csvOut = null;
            string? jsonOut = null;
            string? promOut = null;
            DateTimeOffset? capturedAt = null;
            string? commit = null;
            string? environment = null;
            double? regressionLimit = null;

            for (var index = 0; index < args.Length; index++)
            {
                var current = args[index];
                switch (current)
                {
                    case "--config":
                        EnsureNext(args, index);
                        configPath = Path.GetFullPath(args[++index]);
                        break;
                    case "--iterations":
                        EnsureNext(args, index);
                        iterations = int.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--threshold-ms":
                        EnsureNext(args, index);
                        thresholdMs = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--min-throughput":
                        EnsureNext(args, index);
                        minThroughput = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--min-event-throughput":
                        EnsureNext(args, index);
                        minEventThroughput = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--max-allocated-mb":
                        EnsureNext(args, index);
                        maxAllocated = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--csv":
                        EnsureNext(args, index);
                        csvOut = args[++index];
                        break;
                    case "--json":
                        EnsureNext(args, index);
                        jsonOut = args[++index];
                        break;
                    case "--prometheus":
                        EnsureNext(args, index);
                        promOut = args[++index];
                        break;
                    case "--baseline":
                        EnsureNext(args, index);
                        baselinePath = Path.GetFullPath(args[++index]);
                        break;
                    case "--captured-at":
                        EnsureNext(args, index);
                        capturedAt = DateTimeOffset.Parse(args[++index], CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal);
                        break;
                    case "--commit":
                        EnsureNext(args, index);
                        commit = args[++index];
                        break;
                    case "--environment":
                        EnsureNext(args, index);
                        environment = args[++index];
                        break;
                    case "--regression-limit":
                        EnsureNext(args, index);
                        regressionLimit = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--help":
                    case "-h":
                        PrintUsage();
                        System.Environment.Exit(0);
                        break;
                    default:
                        throw new ArgumentException($"Unknown argument '{current}'.");
                }
            }

            return new ProgramOptions(
                configPath,
                iterations,
                thresholdMs,
                minThroughput,
                minEventThroughput,
                maxAllocated,
                csvOut,
                jsonOut,
                promOut,
                baselinePath,
                capturedAt,
                commit,
                environment,
                regressionLimit);
        }

        private static string DefaultConfigPath()
        {
            var binaryDir = AppContext.BaseDirectory;
            var projectDir = Path.GetFullPath(Path.Combine(binaryDir, "..", "..", ".."));
            var benchRoot = Path.GetFullPath(Path.Combine(projectDir, ".."));
            return Path.Combine(benchRoot, "config.json");
        }

        private static string DefaultBaselinePath()
        {
            var binaryDir = AppContext.BaseDirectory;
            var projectDir = Path.GetFullPath(Path.Combine(binaryDir, "..", "..", ".."));
            var benchRoot = Path.GetFullPath(Path.Combine(projectDir, ".."));
            return Path.Combine(benchRoot, "baseline.csv");
        }

        private static void EnsureNext(string[] args, int index)
        {
            if (index + 1 >= args.Length)
            {
                throw new ArgumentException($"Missing value for argument '{args[index]}'.");
            }
        }

        private static void PrintUsage()
        {
            Console.WriteLine("Usage: linknotmerge-vex-bench [options]");
            Console.WriteLine();
            Console.WriteLine("Options:");
            Console.WriteLine("  --config <path>                 Path to benchmark configuration JSON.");
            Console.WriteLine("  --iterations <count>            Override iteration count.");
            Console.WriteLine("  --threshold-ms <value>          Global latency threshold in milliseconds.");
            Console.WriteLine("  --min-throughput <value>        Observation throughput floor (observations/second).");
            Console.WriteLine("  --min-event-throughput <value>  Event emission throughput floor (events/second).");
            Console.WriteLine("  --max-allocated-mb <value>      Global allocation ceiling (MB).");
            Console.WriteLine("  --csv <path>                    Write CSV results to path.");
            Console.WriteLine("  --json <path>                   Write JSON results to path.");
            Console.WriteLine("  --prometheus <path>             Write Prometheus exposition metrics to path.");
            Console.WriteLine("  --baseline <path>               Baseline CSV path.");
            Console.WriteLine("  --captured-at <iso8601>         Timestamp to embed in JSON metadata.");
            Console.WriteLine("  --commit <sha>                  Commit identifier for metadata.");
            Console.WriteLine("  --environment <name>            Environment label for metadata.");
            Console.WriteLine("  --regression-limit <value>      Regression multiplier (default 1.15).");
        }
    }
}

internal static class TablePrinter
{
    public static void Print(IEnumerable<VexScenarioResult> results)
    {
        Console.WriteLine("Scenario | Observations | Statements | Events | Total(ms) | Correl(ms) | Insert(ms) | Obs k/s | Evnt k/s | Alloc(MB)");
        Console.WriteLine("---------------------------- | ------------- | ---------- | ------- | ---------- | ---------- | ----------- | ------- | -------- | --------");
        foreach (var row in results)
        {
            Console.WriteLine(string.Join(" | ", new[]
            {
                row.IdColumn,
                row.ObservationsColumn,
                row.StatementColumn,
                row.EventColumn,
                row.TotalMeanColumn,
                row.CorrelationMeanColumn,
                row.InsertMeanColumn,
                row.ObservationThroughputColumn,
                row.EventThroughputColumn,
                row.AllocatedColumn,
            }));
        }
    }
}

internal static class CsvWriter
{
    public static void Write(string path, IEnumerable<VexScenarioResult> results)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);
        ArgumentNullException.ThrowIfNull(results);

        var resolved = Path.GetFullPath(path);
        var directory = Path.GetDirectoryName(resolved);
        if (!string.IsNullOrEmpty(directory))
        {
            Directory.CreateDirectory(directory);
        }

        using var stream = new FileStream(resolved, FileMode.Create, FileAccess.Write, FileShare.None);
        using var writer = new StreamWriter(stream);
        writer.WriteLine("scenario,iterations,observations,statements,events,mean_total_ms,p95_total_ms,max_total_ms,mean_insert_ms,mean_correlation_ms,mean_observation_throughput_per_sec,min_observation_throughput_per_sec,mean_event_throughput_per_sec,min_event_throughput_per_sec,max_allocated_mb");

        foreach (var result in results)
        {
            writer.Write(result.Id);
            writer.Write(',');
            writer.Write(result.Iterations.ToString(CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.ObservationCount.ToString(CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.StatementCount.ToString(CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.EventCount.ToString(CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.TotalStatistics.MeanMs.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.TotalStatistics.P95Ms.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.TotalStatistics.MaxMs.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.InsertStatistics.MeanMs.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.CorrelationStatistics.MeanMs.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.ObservationThroughputStatistics.MeanPerSecond.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.ObservationThroughputStatistics.MinPerSecond.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.EventThroughputStatistics.MeanPerSecond.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.EventThroughputStatistics.MinPerSecond.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.AllocationStatistics.MaxAllocatedMb.ToString("F4", CultureInfo.InvariantCulture));
            writer.WriteLine();
        }
    }
}
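The `scenario ?? options ?? config` fallback chain that resolves thresholds and floors above generalizes to a one-liner. A Python sketch of the same precedence rule (scenario override first, then CLI flag, then config default):

```python
def resolve(*candidates):
    # First non-None candidate wins; None means "no limit configured",
    # mirroring C#'s null-coalescing chain a ?? b ?? c.
    return next((value for value in candidates if value is not None), None)
```

For example, `resolve(None, 500.0, 750.0)` picks the CLI value `500.0` when the scenario sets no override.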
@@ -0,0 +1,3 @@
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("StellaOps.Bench.LinkNotMerge.Vex.Tests")]
@@ -0,0 +1,151 @@
using System.Text.Json;
using System.Text.Json.Serialization;

namespace StellaOps.Bench.LinkNotMerge.Vex.Reporting;

internal static class BenchmarkJsonWriter
{
    private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web)
    {
        WriteIndented = true,
        DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
    };

    public static async Task WriteAsync(
        string path,
        BenchmarkJsonMetadata metadata,
        IReadOnlyList<BenchmarkScenarioReport> reports,
        CancellationToken cancellationToken)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);
        ArgumentNullException.ThrowIfNull(metadata);
        ArgumentNullException.ThrowIfNull(reports);

        var resolved = Path.GetFullPath(path);
        var directory = Path.GetDirectoryName(resolved);
        if (!string.IsNullOrEmpty(directory))
        {
            Directory.CreateDirectory(directory);
        }

        var document = new BenchmarkJsonDocument(
            metadata.SchemaVersion,
            metadata.CapturedAtUtc,
            metadata.Commit,
            metadata.Environment,
            reports.Select(CreateScenario).ToArray());

        await using var stream = new FileStream(resolved, FileMode.Create, FileAccess.Write, FileShare.None);
        await JsonSerializer.SerializeAsync(stream, document, SerializerOptions, cancellationToken).ConfigureAwait(false);
        await stream.FlushAsync(cancellationToken).ConfigureAwait(false);
    }

    private static BenchmarkJsonScenario CreateScenario(BenchmarkScenarioReport report)
    {
        var baseline = report.Baseline;
        return new BenchmarkJsonScenario(
            report.Result.Id,
            report.Result.Label,
            report.Result.Iterations,
            report.Result.ObservationCount,
            report.Result.StatementCount,
            report.Result.EventCount,
            report.Result.TotalStatistics.MeanMs,
            report.Result.TotalStatistics.P95Ms,
            report.Result.TotalStatistics.MaxMs,
            report.Result.InsertStatistics.MeanMs,
            report.Result.CorrelationStatistics.MeanMs,
            report.Result.ObservationThroughputStatistics.MeanPerSecond,
            report.Result.ObservationThroughputStatistics.MinPerSecond,
            report.Result.EventThroughputStatistics.MeanPerSecond,
            report.Result.EventThroughputStatistics.MinPerSecond,
            report.Result.AllocationStatistics.MaxAllocatedMb,
            report.Result.ThresholdMs,
            report.Result.MinObservationThroughputPerSecond,
            report.Result.MinEventThroughputPerSecond,
            report.Result.MaxAllocatedThresholdMb,
            baseline is null
                ? null
                : new BenchmarkJsonScenarioBaseline(
                    baseline.Iterations,
                    baseline.Observations,
                    baseline.Statements,
                    baseline.Events,
                    baseline.MeanTotalMs,
                    baseline.P95TotalMs,
                    baseline.MaxTotalMs,
                    baseline.MeanInsertMs,
                    baseline.MeanCorrelationMs,
                    baseline.MeanObservationThroughputPerSecond,
                    baseline.MinObservationThroughputPerSecond,
                    baseline.MeanEventThroughputPerSecond,
                    baseline.MinEventThroughputPerSecond,
                    baseline.MaxAllocatedMb),
            new BenchmarkJsonScenarioRegression(
                report.DurationRegressionRatio,
                report.ObservationThroughputRegressionRatio,
                report.EventThroughputRegressionRatio,
                report.RegressionLimit,
                report.RegressionBreached));
    }

    private sealed record BenchmarkJsonDocument(
        string SchemaVersion,
        DateTimeOffset CapturedAt,
        string? Commit,
        string? Environment,
        IReadOnlyList<BenchmarkJsonScenario> Scenarios);

    private sealed record BenchmarkJsonScenario(
        string Id,
        string Label,
        int Iterations,
        int Observations,
        int Statements,
        int Events,
        double MeanTotalMs,
        double P95TotalMs,
        double MaxTotalMs,
        double MeanInsertMs,
        double MeanCorrelationMs,
        double MeanObservationThroughputPerSecond,
        double MinObservationThroughputPerSecond,
        double MeanEventThroughputPerSecond,
        double MinEventThroughputPerSecond,
        double MaxAllocatedMb,
        double? ThresholdMs,
        double? MinObservationThroughputThresholdPerSecond,
        double? MinEventThroughputThresholdPerSecond,
        double? MaxAllocatedThresholdMb,
        BenchmarkJsonScenarioBaseline? Baseline,
        BenchmarkJsonScenarioRegression Regression);

    private sealed record BenchmarkJsonScenarioBaseline(
        int Iterations,
        int Observations,
        int Statements,
        int Events,
        double MeanTotalMs,
        double P95TotalMs,
        double MaxTotalMs,
        double MeanInsertMs,
        double MeanCorrelationMs,
        double MeanObservationThroughputPerSecond,
        double MinObservationThroughputPerSecond,
        double MeanEventThroughputPerSecond,
        double MinEventThroughputPerSecond,
        double MaxAllocatedMb);

    private sealed record BenchmarkJsonScenarioRegression(
        double? DurationRatio,
        double? ObservationThroughputRatio,
        double? EventThroughputRatio,
        double Limit,
        bool Breached);
}

internal sealed record BenchmarkJsonMetadata(
    string SchemaVersion,
    DateTimeOffset CapturedAtUtc,
    string? Commit,
    string? Environment);
@@ -0,0 +1,89 @@
using StellaOps.Bench.LinkNotMerge.Vex.Baseline;

namespace StellaOps.Bench.LinkNotMerge.Vex.Reporting;

internal sealed class BenchmarkScenarioReport
{
    private const double DefaultRegressionLimit = 1.15d;

    public BenchmarkScenarioReport(VexScenarioResult result, BaselineEntry? baseline, double? regressionLimit = null)
    {
        Result = result ?? throw new ArgumentNullException(nameof(result));
        Baseline = baseline;
        RegressionLimit = regressionLimit is { } limit && limit > 0 ? limit : DefaultRegressionLimit;
        DurationRegressionRatio = CalculateRatio(result.TotalStatistics.MaxMs, baseline?.MaxTotalMs);
        ObservationThroughputRegressionRatio = CalculateInverseRatio(result.ObservationThroughputStatistics.MinPerSecond, baseline?.MinObservationThroughputPerSecond);
        EventThroughputRegressionRatio = CalculateInverseRatio(result.EventThroughputStatistics.MinPerSecond, baseline?.MinEventThroughputPerSecond);
    }

    public VexScenarioResult Result { get; }

    public BaselineEntry? Baseline { get; }

    public double RegressionLimit { get; }

    public double? DurationRegressionRatio { get; }

    public double? ObservationThroughputRegressionRatio { get; }

    public double? EventThroughputRegressionRatio { get; }

    public bool DurationRegressionBreached => DurationRegressionRatio is { } ratio && ratio >= RegressionLimit;

    public bool ObservationThroughputRegressionBreached => ObservationThroughputRegressionRatio is { } ratio && ratio >= RegressionLimit;

    public bool EventThroughputRegressionBreached => EventThroughputRegressionRatio is { } ratio && ratio >= RegressionLimit;

    public bool RegressionBreached => DurationRegressionBreached || ObservationThroughputRegressionBreached || EventThroughputRegressionBreached;

    public IEnumerable<string> BuildRegressionFailureMessages()
    {
        if (Baseline is null)
        {
            yield break;
        }

        if (DurationRegressionBreached && DurationRegressionRatio is { } durationRatio)
        {
            var delta = (durationRatio - 1d) * 100d;
            yield return $"{Result.Id} exceeded max duration budget: {Result.TotalStatistics.MaxMs:F2} ms vs baseline {Baseline.MaxTotalMs:F2} ms (+{delta:F1}%).";
        }

        if (ObservationThroughputRegressionBreached && ObservationThroughputRegressionRatio is { } obsRatio)
        {
            var delta = (obsRatio - 1d) * 100d;
            yield return $"{Result.Id} observation throughput regressed: min {Result.ObservationThroughputStatistics.MinPerSecond:N0} obs/s vs baseline {Baseline.MinObservationThroughputPerSecond:N0} obs/s (-{delta:F1}%).";
        }

        if (EventThroughputRegressionBreached && EventThroughputRegressionRatio is { } evtRatio)
        {
            var delta = (evtRatio - 1d) * 100d;
            yield return $"{Result.Id} event throughput regressed: min {Result.EventThroughputStatistics.MinPerSecond:N0} events/s vs baseline {Baseline.MinEventThroughputPerSecond:N0} events/s (-{delta:F1}%).";
        }
    }

    private static double? CalculateRatio(double current, double? baseline)
    {
        if (!baseline.HasValue || baseline.Value <= 0d)
        {
            return null;
        }

        return current / baseline.Value;
    }

    private static double? CalculateInverseRatio(double current, double? baseline)
    {
        if (!baseline.HasValue || baseline.Value <= 0d)
        {
            return null;
        }

        if (current <= 0d)
        {
            return double.PositiveInfinity;
        }

        return baseline.Value / current;
    }
}
@@ -0,0 +1,94 @@
using System.Globalization;
using System.Text;

namespace StellaOps.Bench.LinkNotMerge.Vex.Reporting;

internal static class PrometheusWriter
{
    public static void Write(string path, IReadOnlyList<BenchmarkScenarioReport> reports)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);
        ArgumentNullException.ThrowIfNull(reports);

        var resolved = Path.GetFullPath(path);
        var directory = Path.GetDirectoryName(resolved);
        if (!string.IsNullOrEmpty(directory))
        {
            Directory.CreateDirectory(directory);
        }

        var builder = new StringBuilder();
        builder.AppendLine("# HELP linknotmerge_vex_bench_total_ms Link-Not-Merge VEX benchmark total duration (milliseconds).");
        builder.AppendLine("# TYPE linknotmerge_vex_bench_total_ms gauge");
        builder.AppendLine("# HELP linknotmerge_vex_bench_throughput_per_sec Link-Not-Merge VEX benchmark observation throughput (observations per second).");
        builder.AppendLine("# TYPE linknotmerge_vex_bench_throughput_per_sec gauge");
        builder.AppendLine("# HELP linknotmerge_vex_bench_event_throughput_per_sec Link-Not-Merge VEX benchmark event throughput (events per second).");
        builder.AppendLine("# TYPE linknotmerge_vex_bench_event_throughput_per_sec gauge");
        builder.AppendLine("# HELP linknotmerge_vex_bench_allocated_mb Link-Not-Merge VEX benchmark max allocations (megabytes).");
        builder.AppendLine("# TYPE linknotmerge_vex_bench_allocated_mb gauge");

        foreach (var report in reports)
        {
            var scenario = Escape(report.Result.Id);
            AppendMetric(builder, "linknotmerge_vex_bench_mean_total_ms", scenario, report.Result.TotalStatistics.MeanMs);
            AppendMetric(builder, "linknotmerge_vex_bench_p95_total_ms", scenario, report.Result.TotalStatistics.P95Ms);
            AppendMetric(builder, "linknotmerge_vex_bench_max_total_ms", scenario, report.Result.TotalStatistics.MaxMs);
            AppendMetric(builder, "linknotmerge_vex_bench_threshold_ms", scenario, report.Result.ThresholdMs);

            AppendMetric(builder, "linknotmerge_vex_bench_mean_observation_throughput_per_sec", scenario, report.Result.ObservationThroughputStatistics.MeanPerSecond);
            AppendMetric(builder, "linknotmerge_vex_bench_min_observation_throughput_per_sec", scenario, report.Result.ObservationThroughputStatistics.MinPerSecond);
            AppendMetric(builder, "linknotmerge_vex_bench_observation_throughput_floor_per_sec", scenario, report.Result.MinObservationThroughputPerSecond);

            AppendMetric(builder, "linknotmerge_vex_bench_mean_event_throughput_per_sec", scenario, report.Result.EventThroughputStatistics.MeanPerSecond);
            AppendMetric(builder, "linknotmerge_vex_bench_min_event_throughput_per_sec", scenario, report.Result.EventThroughputStatistics.MinPerSecond);
            AppendMetric(builder, "linknotmerge_vex_bench_event_throughput_floor_per_sec", scenario, report.Result.MinEventThroughputPerSecond);

            AppendMetric(builder, "linknotmerge_vex_bench_max_allocated_mb", scenario, report.Result.AllocationStatistics.MaxAllocatedMb);
            AppendMetric(builder, "linknotmerge_vex_bench_max_allocated_threshold_mb", scenario, report.Result.MaxAllocatedThresholdMb);

            if (report.Baseline is { } baseline)
            {
                AppendMetric(builder, "linknotmerge_vex_bench_baseline_max_total_ms", scenario, baseline.MaxTotalMs);
                AppendMetric(builder, "linknotmerge_vex_bench_baseline_min_observation_throughput_per_sec", scenario, baseline.MinObservationThroughputPerSecond);
                AppendMetric(builder, "linknotmerge_vex_bench_baseline_min_event_throughput_per_sec", scenario, baseline.MinEventThroughputPerSecond);
            }

            if (report.DurationRegressionRatio is { } durationRatio)
            {
                AppendMetric(builder, "linknotmerge_vex_bench_duration_regression_ratio", scenario, durationRatio);
            }

            if (report.ObservationThroughputRegressionRatio is { } obsRatio)
            {
                AppendMetric(builder, "linknotmerge_vex_bench_observation_regression_ratio", scenario, obsRatio);
            }

            if (report.EventThroughputRegressionRatio is { } evtRatio)
            {
                AppendMetric(builder, "linknotmerge_vex_bench_event_regression_ratio", scenario, evtRatio);
            }

            AppendMetric(builder, "linknotmerge_vex_bench_regression_limit", scenario, report.RegressionLimit);
            AppendMetric(builder, "linknotmerge_vex_bench_regression_breached", scenario, report.RegressionBreached ? 1 : 0);
        }

        File.WriteAllText(resolved, builder.ToString(), Encoding.UTF8);
    }

    private static void AppendMetric(StringBuilder builder, string metric, string scenario, double? value)
    {
        if (!value.HasValue)
        {
            return;
        }

        builder.Append(metric);
        builder.Append("{scenario=\"");
        builder.Append(scenario);
        builder.Append("\"} ");
        builder.AppendLine(value.Value.ToString("G17", CultureInfo.InvariantCulture));
    }

    private static string Escape(string value) =>
        value.Replace("\\", "\\\\", StringComparison.Ordinal).Replace("\"", "\\\"", StringComparison.Ordinal);
}
@@ -0,0 +1,84 @@
namespace StellaOps.Bench.LinkNotMerge.Vex;

internal readonly record struct DurationStatistics(double MeanMs, double P95Ms, double MaxMs)
{
    public static DurationStatistics From(IReadOnlyList<double> values)
    {
        if (values.Count == 0)
        {
            return new DurationStatistics(0, 0, 0);
        }

        var sorted = values.ToArray();
        Array.Sort(sorted);

        var total = 0d;
        foreach (var value in values)
        {
            total += value;
        }

        var mean = total / values.Count;
        var p95 = Percentile(sorted, 95);
        var max = sorted[^1];

        return new DurationStatistics(mean, p95, max);
    }

    private static double Percentile(IReadOnlyList<double> sorted, double percentile)
    {
        if (sorted.Count == 0)
        {
            return 0;
        }

        var rank = (percentile / 100d) * (sorted.Count - 1);
        var lower = (int)Math.Floor(rank);
        var upper = (int)Math.Ceiling(rank);
        var weight = rank - lower;

        if (upper >= sorted.Count)
        {
            return sorted[lower];
        }

        return sorted[lower] + weight * (sorted[upper] - sorted[lower]);
    }
}

internal readonly record struct ThroughputStatistics(double MeanPerSecond, double MinPerSecond)
{
    public static ThroughputStatistics From(IReadOnlyList<double> values)
    {
        if (values.Count == 0)
        {
            return new ThroughputStatistics(0, 0);
        }

        var total = 0d;
        var min = double.MaxValue;

        foreach (var value in values)
        {
            total += value;
            min = Math.Min(min, value);
        }

        var mean = total / values.Count;
        return new ThroughputStatistics(mean, min);
    }
}

internal readonly record struct AllocationStatistics(double MaxAllocatedMb)
{
    public static AllocationStatistics From(IReadOnlyList<double> values)
    {
        var max = 0d;
        foreach (var value in values)
        {
            max = Math.Max(max, value);
        }

        return new AllocationStatistics(max);
    }
}
@@ -0,0 +1,14 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <LangVersion>preview</LangVersion>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Logging.Abstractions" />
  </ItemGroup>
</Project>
@@ -0,0 +1,11 @@
# LinkNotMerge VEX Benchmark Task Board

This board mirrors active sprint tasks for this module.
Source of truth: `docs-archived/implplan/2025-12-29-csproj-audit/SPRINT_20251229_049_BE_csproj_audit_maint_tests.md`.

| Task ID | Status | Notes |
| --- | --- | --- |
| AUDIT-0104-M | DONE | Revalidated 2026-01-06. |
| AUDIT-0104-T | DONE | Revalidated 2026-01-06. |
| AUDIT-0104-A | DONE | Waived (benchmark project; revalidated 2026-01-06). |
| REMED-06 | DONE | SOLID review notes captured for SPRINT_20260130_002. |
@@ -0,0 +1,150 @@
namespace StellaOps.Bench.LinkNotMerge.Vex;

internal sealed class VexLinksetAggregator
{
    public VexAggregationResult Correlate(IEnumerable<VexObservationDocument> documents)
    {
        ArgumentNullException.ThrowIfNull(documents);

        var groups = new Dictionary<string, VexAccumulator>(StringComparer.Ordinal);
        var statementsSeen = 0;

        foreach (var document in documents)
        {
            var tenant = document.Tenant;
            var aliases = document.Aliases;
            var statements = document.Statements;

            foreach (var statementValue in statements)
            {
                statementsSeen++;

                var status = statementValue.Status;
                var justification = statementValue.Justification;
                var lastUpdated = statementValue.LastUpdated;
                var productKey = statementValue.Product.Purl;

                foreach (var alias in aliases)
                {
                    var key = string.Create(alias.Length + tenant.Length + productKey.Length + 2, (tenant, alias, productKey), static (span, data) =>
                    {
                        var (tenantValue, aliasValue, productValue) = data;
                        var offset = 0;
                        tenantValue.AsSpan().CopyTo(span);
                        offset += tenantValue.Length;
                        span[offset++] = '|';
                        aliasValue.AsSpan().CopyTo(span[offset..]);
                        offset += aliasValue.Length;
                        span[offset++] = '|';
                        productValue.AsSpan().CopyTo(span[offset..]);
                    });

                    if (!groups.TryGetValue(key, out var accumulator))
                    {
                        accumulator = new VexAccumulator(tenant, alias, productKey);
                        groups[key] = accumulator;
                    }

                    accumulator.AddStatement(status, justification, lastUpdated);
                }
            }
        }

        var eventDocuments = new List<VexEvent>(groups.Count);
        foreach (var accumulator in groups.Values)
        {
            if (accumulator.ShouldEmitEvent)
            {
                eventDocuments.Add(accumulator.ToEvent());
            }
        }

        return new VexAggregationResult(
            LinksetCount: groups.Count,
            StatementCount: statementsSeen,
            EventCount: eventDocuments.Count,
            EventDocuments: eventDocuments);
    }

    private sealed class VexAccumulator
    {
        private readonly Dictionary<string, int> _statusCounts = new(StringComparer.Ordinal);
        private readonly HashSet<string> _justifications = new(StringComparer.Ordinal);
        private readonly string _tenant;
        private readonly string _alias;
        private readonly string _product;
        private DateTimeOffset? _latest;

        public VexAccumulator(string tenant, string alias, string product)
        {
            _tenant = tenant;
            _alias = alias;
            _product = product;
        }

        public void AddStatement(string status, string justification, DateTimeOffset updatedAt)
        {
            if (!_statusCounts.TryAdd(status, 1))
            {
                _statusCounts[status]++;
            }

            if (!string.IsNullOrEmpty(justification))
            {
                _justifications.Add(justification);
            }

            if (updatedAt != default)
            {
                var value = updatedAt.ToUniversalTime();
                if (!_latest.HasValue || value > _latest.Value)
                {
                    _latest = value;
                }
            }
        }

        public bool ShouldEmitEvent
        {
            get
            {
                if (_statusCounts.TryGetValue("affected", out var affected) && affected > 0)
                {
                    return true;
                }

                if (_statusCounts.TryGetValue("under_investigation", out var investigating) && investigating > 0)
                {
                    return true;
                }

                return false;
            }
        }

        public VexEvent ToEvent()
        {
            return new VexEvent(
                _tenant,
                _alias,
                _product,
                new Dictionary<string, int>(_statusCounts, StringComparer.Ordinal),
                _justifications.ToArray(),
                _latest);
        }
    }
}

internal sealed record VexAggregationResult(
    int LinksetCount,
    int StatementCount,
    int EventCount,
    IReadOnlyList<VexEvent> EventDocuments);

internal sealed record VexEvent(
    string Tenant,
    string Alias,
    string Product,
    IReadOnlyDictionary<string, int> Statuses,
    IReadOnlyCollection<string> Justifications,
    DateTimeOffset? LastUpdated);
@@ -0,0 +1,195 @@
using System.Collections.Immutable;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;

namespace StellaOps.Bench.LinkNotMerge.Vex;

internal static class VexObservationGenerator
{
    private static readonly ImmutableArray<string> StatusPool = ImmutableArray.Create(
        "affected",
        "not_affected",
        "under_investigation");

    private static readonly ImmutableArray<string> JustificationPool = ImmutableArray.Create(
        "exploitation_mitigated",
        "component_not_present",
        "vulnerable_code_not_present",
        "vulnerable_code_not_in_execute_path");

    public static IReadOnlyList<VexObservationSeed> Generate(VexScenarioConfig config)
    {
        ArgumentNullException.ThrowIfNull(config);

        var observationCount = config.ResolveObservationCount();
        var aliasGroups = config.ResolveAliasGroups();
        var statementsPerObservation = config.ResolveStatementsPerObservation();
        var tenantCount = config.ResolveTenantCount();
        var productsPerObservation = config.ResolveProductsPerObservation();
        var seed = config.ResolveSeed();

        var seeds = new VexObservationSeed[observationCount];
        var random = new Random(seed);
        var baseTime = new DateTimeOffset(2025, 10, 1, 0, 0, 0, TimeSpan.Zero);

        for (var index = 0; index < observationCount; index++)
        {
            var tenantIndex = index % tenantCount;
            var tenant = $"tenant-{tenantIndex:D2}";
            var group = index % aliasGroups;
            var revision = index / aliasGroups;
            var vulnerabilityAlias = $"CVE-2025-{group:D4}";
            var upstreamId = $"VEX-{group:D4}-{revision:D3}";
            var observationId = $"{tenant}:vex:{group:D5}:{revision:D6}";

            var fetchedAt = baseTime.AddMinutes(revision);
            var receivedAt = fetchedAt.AddSeconds(2);
            var documentVersion = fetchedAt.AddSeconds(15).ToString("O", CultureInfo.InvariantCulture);

            var products = CreateProducts(group, revision, productsPerObservation);
            var statements = CreateStatements(vulnerabilityAlias, products, statementsPerObservation, random, fetchedAt);
            var contentHash = ComputeContentHash(upstreamId, vulnerabilityAlias, statements, tenant, group, revision);

            // Cast to char so the GHSA suffix is a letter; int + char arithmetic would format as a number.
            var aliases = ImmutableArray.Create(vulnerabilityAlias, $"GHSA-{group:D4}-{(char)('a' + revision % 26)}{(char)('a' + revision % 26)}");
            var references = ImmutableArray.Create(
                new VexReference("advisory", $"https://vendor.example/advisories/{vulnerabilityAlias.ToLowerInvariant()}"),
                new VexReference("fix", $"https://vendor.example/patch/{vulnerabilityAlias.ToLowerInvariant()}"));

            seeds[index] = new VexObservationSeed(
                ObservationId: observationId,
                Tenant: tenant,
                Vendor: "excititor-bench",
                Stream: "simulated",
                Api: $"https://bench.stella/vex/{group:D4}/{revision:D3}",
                CollectorVersion: "1.0.0-bench",
                UpstreamId: upstreamId,
                DocumentVersion: documentVersion,
                FetchedAt: fetchedAt,
                ReceivedAt: receivedAt,
                ContentHash: contentHash,
                VulnerabilityAlias: vulnerabilityAlias,
                Aliases: aliases,
                Products: products,
                Statements: statements,
                References: references,
                ContentFormat: "CycloneDX-VEX",
                SpecVersion: "1.4");
        }

        return seeds;
    }

    private static ImmutableArray<VexProduct> CreateProducts(int group, int revision, int count)
    {
        var builder = ImmutableArray.CreateBuilder<VexProduct>(count);
        for (var index = 0; index < count; index++)
        {
            var purl = $"pkg:generic/stella/product-{group:D4}-{index}@{1 + revision % 5}.{index + 1}.{revision % 9}";
            builder.Add(new VexProduct(purl, $"component-{group % 30:D2}", $"namespace-{group % 10:D2}"));
        }

        return builder.MoveToImmutable();
    }

    private static ImmutableArray<VexStatement> CreateStatements(
        string vulnerabilityAlias,
        ImmutableArray<VexProduct> products,
        int statementsPerObservation,
        Random random,
        DateTimeOffset baseTime)
    {
        var builder = ImmutableArray.CreateBuilder<VexStatement>(statementsPerObservation);
        for (var index = 0; index < statementsPerObservation; index++)
        {
            var statusIndex = random.Next(StatusPool.Length);
            var status = StatusPool[statusIndex];
            var justification = JustificationPool[random.Next(JustificationPool.Length)];
            var product = products[index % products.Length];
            var statementId = $"stmt-{vulnerabilityAlias}-{index:D2}";
            var lastUpdated = baseTime.AddMinutes(index).ToUniversalTime();

            builder.Add(new VexStatement(
                StatementId: statementId,
                VulnerabilityAlias: vulnerabilityAlias,
                Product: product,
                Status: status,
                Justification: justification,
                LastUpdated: lastUpdated));
        }

        return builder.MoveToImmutable();
    }

    private static string ComputeContentHash(
        string upstreamId,
        string vulnerabilityAlias,
        ImmutableArray<VexStatement> statements,
        string tenant,
        int group,
        int revision)
    {
        using var sha256 = SHA256.Create();
        var builder = new StringBuilder();
        builder.Append(tenant).Append('|').Append(group).Append('|').Append(revision).Append('|');
        builder.Append(upstreamId).Append('|').Append(vulnerabilityAlias).Append('|');
        foreach (var statement in statements)
        {
            builder.Append(statement.StatementId).Append('|')
                .Append(statement.Status).Append('|')
                .Append(statement.Product.Purl).Append('|')
                .Append(statement.Justification).Append('|')
                .Append(statement.LastUpdated.ToUniversalTime().ToString("O", CultureInfo.InvariantCulture)).Append('|');
        }

        var data = Encoding.UTF8.GetBytes(builder.ToString());
        var hash = sha256.ComputeHash(data);
        return $"sha256:{Convert.ToHexString(hash)}";
    }
}

internal sealed record VexObservationSeed(
    string ObservationId,
    string Tenant,
    string Vendor,
    string Stream,
    string Api,
    string CollectorVersion,
    string UpstreamId,
    string DocumentVersion,
    DateTimeOffset FetchedAt,
    DateTimeOffset ReceivedAt,
    string ContentHash,
    string VulnerabilityAlias,
    ImmutableArray<string> Aliases,
    ImmutableArray<VexProduct> Products,
    ImmutableArray<VexStatement> Statements,
    ImmutableArray<VexReference> References,
    string ContentFormat,
    string SpecVersion)
{
    public VexObservationDocument ToDocument()
    {
        return new VexObservationDocument(
            Tenant,
            Aliases,
            Statements);
    }
}

internal sealed record VexObservationDocument(
    string Tenant,
    ImmutableArray<string> Aliases,
    ImmutableArray<VexStatement> Statements);

internal sealed record VexStatement(
    string StatementId,
    string VulnerabilityAlias,
    VexProduct Product,
    string Status,
    string Justification,
    DateTimeOffset LastUpdated);

internal sealed record VexProduct(string Purl, string Component, string Namespace);

internal sealed record VexReference(string Type, string Url);
@@ -0,0 +1,183 @@
using System.Text.Json;
using System.Text.Json.Serialization;

namespace StellaOps.Bench.LinkNotMerge.Vex;

internal sealed record VexBenchmarkConfig(
    double? ThresholdMs,
    double? MinThroughputPerSecond,
    double? MinEventThroughputPerSecond,
    double? MaxAllocatedMb,
    int? Iterations,
    IReadOnlyList<VexScenarioConfig> Scenarios)
{
    public static async Task<VexBenchmarkConfig> LoadAsync(string path)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);

        var resolved = Path.GetFullPath(path);
        if (!File.Exists(resolved))
        {
            throw new FileNotFoundException($"Benchmark configuration '{resolved}' was not found.", resolved);
        }

        await using var stream = File.OpenRead(resolved);
        var model = await JsonSerializer.DeserializeAsync<VexBenchmarkConfigModel>(
            stream,
            new JsonSerializerOptions(JsonSerializerDefaults.Web)
            {
                PropertyNameCaseInsensitive = true,
                ReadCommentHandling = JsonCommentHandling.Skip,
                AllowTrailingCommas = true,
            }).ConfigureAwait(false);

        if (model is null)
        {
            throw new InvalidOperationException($"Benchmark configuration '{resolved}' could not be parsed.");
        }

        if (model.Scenarios.Count == 0)
        {
            throw new InvalidOperationException($"Benchmark configuration '{resolved}' does not contain any scenarios.");
        }

        foreach (var scenario in model.Scenarios)
        {
            scenario.Validate();
        }

        return new VexBenchmarkConfig(
            model.ThresholdMs,
            model.MinThroughputPerSecond,
            model.MinEventThroughputPerSecond,
            model.MaxAllocatedMb,
            model.Iterations,
            model.Scenarios);
    }

    private sealed class VexBenchmarkConfigModel
    {
        [JsonPropertyName("thresholdMs")]
        public double? ThresholdMs { get; init; }

        [JsonPropertyName("minThroughputPerSecond")]
        public double? MinThroughputPerSecond { get; init; }

        [JsonPropertyName("minEventThroughputPerSecond")]
        public double? MinEventThroughputPerSecond { get; init; }

        [JsonPropertyName("maxAllocatedMb")]
        public double? MaxAllocatedMb { get; init; }

        [JsonPropertyName("iterations")]
        public int? Iterations { get; init; }

        [JsonPropertyName("scenarios")]
        public List<VexScenarioConfig> Scenarios { get; init; } = new();
    }
}

internal sealed class VexScenarioConfig
{
    private const int DefaultObservationCount = 4_000;
    private const int DefaultAliasGroups = 400;
    private const int DefaultStatementsPerObservation = 6;
    private const int DefaultProductsPerObservation = 3;
    private const int DefaultTenants = 3;
    private const int DefaultBatchSize = 250;
    private const int DefaultSeed = 520_025;

    [JsonPropertyName("id")]
    public string? Id { get; init; }

    [JsonPropertyName("label")]
    public string? Label { get; init; }

    [JsonPropertyName("observations")]
    public int? Observations { get; init; }

    [JsonPropertyName("aliasGroups")]
    public int? AliasGroups { get; init; }

    [JsonPropertyName("statementsPerObservation")]
    public int? StatementsPerObservation { get; init; }

    [JsonPropertyName("productsPerObservation")]
    public int? ProductsPerObservation { get; init; }

    [JsonPropertyName("tenants")]
    public int? Tenants { get; init; }

    [JsonPropertyName("batchSize")]
    public int? BatchSize { get; init; }

    [JsonPropertyName("seed")]
    public int? Seed { get; init; }

    [JsonPropertyName("iterations")]
    public int? Iterations { get; init; }

    [JsonPropertyName("thresholdMs")]
    public double? ThresholdMs { get; init; }

    [JsonPropertyName("minThroughputPerSecond")]
    public double? MinThroughputPerSecond { get; init; }

    [JsonPropertyName("minEventThroughputPerSecond")]
    public double? MinEventThroughputPerSecond { get; init; }

    [JsonPropertyName("maxAllocatedMb")]
    public double? MaxAllocatedMb { get; init; }

    public string ScenarioId => string.IsNullOrWhiteSpace(Id) ? "vex" : Id!.Trim();

    public string DisplayLabel => string.IsNullOrWhiteSpace(Label) ? ScenarioId : Label!.Trim();

    public int ResolveObservationCount() => Observations is > 0 ? Observations.Value : DefaultObservationCount;

    public int ResolveAliasGroups() => AliasGroups is > 0 ? AliasGroups.Value : DefaultAliasGroups;

    public int ResolveStatementsPerObservation() => StatementsPerObservation is > 0 ? StatementsPerObservation.Value : DefaultStatementsPerObservation;

    public int ResolveProductsPerObservation() => ProductsPerObservation is > 0 ? ProductsPerObservation.Value : DefaultProductsPerObservation;

    public int ResolveTenantCount() => Tenants is > 0 ? Tenants.Value : DefaultTenants;

    public int ResolveBatchSize() => BatchSize is > 0 ? BatchSize.Value : DefaultBatchSize;
|
||||
|
||||
public int ResolveSeed() => Seed is > 0 ? Seed.Value : DefaultSeed;
|
||||
|
||||
public int ResolveIterations(int? defaultIterations)
|
||||
{
|
||||
var iterations = Iterations ?? defaultIterations ?? 3;
|
||||
if (iterations <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires iterations > 0.");
|
||||
}
|
||||
|
||||
return iterations;
|
||||
}
|
||||
|
||||
public void Validate()
|
||||
{
|
||||
if (ResolveObservationCount() <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires observations > 0.");
|
||||
}
|
||||
|
||||
if (ResolveAliasGroups() <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires aliasGroups > 0.");
|
||||
}
|
||||
|
||||
if (ResolveStatementsPerObservation() <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires statementsPerObservation > 0.");
|
||||
}
|
||||
|
||||
if (ResolveProductsPerObservation() <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires productsPerObservation > 0.");
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,14 @@
namespace StellaOps.Bench.LinkNotMerge.Vex;

internal sealed record VexScenarioExecutionResult(
    IReadOnlyList<double> TotalDurationsMs,
    IReadOnlyList<double> InsertDurationsMs,
    IReadOnlyList<double> CorrelationDurationsMs,
    IReadOnlyList<double> AllocatedMb,
    IReadOnlyList<double> ObservationThroughputsPerSecond,
    IReadOnlyList<double> EventThroughputsPerSecond,
    int ObservationCount,
    int AliasGroups,
    int StatementCount,
    int EventCount,
    VexAggregationResult AggregationResult);
@@ -0,0 +1,43 @@
using System.Globalization;

namespace StellaOps.Bench.LinkNotMerge.Vex;

internal sealed record VexScenarioResult(
    string Id,
    string Label,
    int Iterations,
    int ObservationCount,
    int AliasGroups,
    int StatementCount,
    int EventCount,
    DurationStatistics TotalStatistics,
    DurationStatistics InsertStatistics,
    DurationStatistics CorrelationStatistics,
    ThroughputStatistics ObservationThroughputStatistics,
    ThroughputStatistics EventThroughputStatistics,
    AllocationStatistics AllocationStatistics,
    double? ThresholdMs,
    double? MinObservationThroughputPerSecond,
    double? MinEventThroughputPerSecond,
    double? MaxAllocatedThresholdMb)
{
    public string IdColumn => Id.Length <= 28 ? Id.PadRight(28) : Id[..28];

    public string ObservationsColumn => ObservationCount.ToString("N0", CultureInfo.InvariantCulture).PadLeft(12);

    public string StatementColumn => StatementCount.ToString("N0", CultureInfo.InvariantCulture).PadLeft(10);

    public string EventColumn => EventCount.ToString("N0", CultureInfo.InvariantCulture).PadLeft(8);

    public string TotalMeanColumn => TotalStatistics.MeanMs.ToString("F2", CultureInfo.InvariantCulture).PadLeft(10);

    public string CorrelationMeanColumn => CorrelationStatistics.MeanMs.ToString("F2", CultureInfo.InvariantCulture).PadLeft(10);

    public string InsertMeanColumn => InsertStatistics.MeanMs.ToString("F2", CultureInfo.InvariantCulture).PadLeft(10);

    public string ObservationThroughputColumn => (ObservationThroughputStatistics.MinPerSecond / 1_000d).ToString("F2", CultureInfo.InvariantCulture).PadLeft(11);

    public string EventThroughputColumn => (EventThroughputStatistics.MinPerSecond / 1_000d).ToString("F2", CultureInfo.InvariantCulture).PadLeft(11);

    public string AllocatedColumn => AllocationStatistics.MaxAllocatedMb.ToString("F2", CultureInfo.InvariantCulture).PadLeft(9);
}
@@ -0,0 +1,98 @@
using System.Diagnostics;

namespace StellaOps.Bench.LinkNotMerge.Vex;

internal sealed class VexScenarioRunner
{
    private readonly VexScenarioConfig _config;
    private readonly IReadOnlyList<VexObservationSeed> _seeds;

    public VexScenarioRunner(VexScenarioConfig config)
    {
        _config = config ?? throw new ArgumentNullException(nameof(config));
        _seeds = VexObservationGenerator.Generate(config);
    }

    public VexScenarioExecutionResult Execute(int iterations, CancellationToken cancellationToken)
    {
        if (iterations <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(iterations), iterations, "Iterations must be positive.");
        }

        var totalDurations = new double[iterations];
        var insertDurations = new double[iterations];
        var correlationDurations = new double[iterations];
        var allocated = new double[iterations];
        var observationThroughputs = new double[iterations];
        var eventThroughputs = new double[iterations];
        VexAggregationResult lastAggregation = new(0, 0, 0, Array.Empty<VexEvent>());

        for (var iteration = 0; iteration < iterations; iteration++)
        {
            cancellationToken.ThrowIfCancellationRequested();

            var beforeAllocated = GC.GetTotalAllocatedBytes();

            var insertStopwatch = Stopwatch.StartNew();
            var documents = InsertObservations(_seeds, _config.ResolveBatchSize(), cancellationToken);
            insertStopwatch.Stop();

            var correlationStopwatch = Stopwatch.StartNew();
            var aggregator = new VexLinksetAggregator();
            lastAggregation = aggregator.Correlate(documents);
            correlationStopwatch.Stop();

            var totalElapsed = insertStopwatch.Elapsed + correlationStopwatch.Elapsed;
            var afterAllocated = GC.GetTotalAllocatedBytes();

            totalDurations[iteration] = totalElapsed.TotalMilliseconds;
            insertDurations[iteration] = insertStopwatch.Elapsed.TotalMilliseconds;
            correlationDurations[iteration] = correlationStopwatch.Elapsed.TotalMilliseconds;
            allocated[iteration] = Math.Max(0, afterAllocated - beforeAllocated) / (1024d * 1024d);

            var totalSeconds = Math.Max(totalElapsed.TotalSeconds, 0.0001d);
            observationThroughputs[iteration] = _seeds.Count / totalSeconds;

            var eventSeconds = Math.Max(correlationStopwatch.Elapsed.TotalSeconds, 0.0001d);
            var eventCount = Math.Max(lastAggregation.EventCount, 1);
            eventThroughputs[iteration] = eventCount / eventSeconds;
        }

        return new VexScenarioExecutionResult(
            totalDurations,
            insertDurations,
            correlationDurations,
            allocated,
            observationThroughputs,
            eventThroughputs,
            ObservationCount: _seeds.Count,
            AliasGroups: _config.ResolveAliasGroups(),
            StatementCount: lastAggregation.StatementCount,
            EventCount: lastAggregation.EventCount,
            AggregationResult: lastAggregation);
    }

    private static IReadOnlyList<VexObservationDocument> InsertObservations(
        IReadOnlyList<VexObservationSeed> seeds,
        int batchSize,
        CancellationToken cancellationToken)
    {
        var documents = new List<VexObservationDocument>(seeds.Count);
        for (var offset = 0; offset < seeds.Count; offset += batchSize)
        {
            cancellationToken.ThrowIfCancellationRequested();

            var remaining = Math.Min(batchSize, seeds.Count - offset);
            var batch = new List<VexObservationDocument>(remaining);
            for (var index = 0; index < remaining; index++)
            {
                batch.Add(seeds[offset + index].ToDocument());
            }

            documents.AddRange(batch);
        }

        return documents;
    }
}
4
src/Tools/StellaOps.Bench/LinkNotMerge.Vex/baseline.csv
Normal file
@@ -0,0 +1,4 @@
scenario,iterations,observations,statements,events,mean_total_ms,p95_total_ms,max_total_ms,mean_insert_ms,mean_correlation_ms,mean_observation_throughput_per_sec,min_observation_throughput_per_sec,mean_event_throughput_per_sec,min_event_throughput_per_sec,max_allocated_mb
vex_ingest_baseline,5,4000,24000,21326,842.8191,1319.3038,1432.7675,346.7277,496.0915,5349.8940,2791.7998,48942.4901,24653.0556,138.6365
vex_ingest_medium,5,8000,64000,56720,1525.9929,1706.8900,1748.9056,533.3378,992.6552,5274.5883,4574.2892,57654.9190,48531.7353,326.8638
vex_ingest_high,5,12000,120000,106910,2988.5094,3422.1728,3438.9364,903.3927,2085.1167,4066.2300,3489.4510,52456.9493,42358.0556,583.9903
54
src/Tools/StellaOps.Bench/LinkNotMerge.Vex/config.json
Normal file
@@ -0,0 +1,54 @@
{
  "thresholdMs": 4200,
  "minThroughputPerSecond": 1800,
  "minEventThroughputPerSecond": 2000,
  "maxAllocatedMb": 800,
  "iterations": 5,
  "scenarios": [
    {
      "id": "vex_ingest_baseline",
      "label": "4k observations, 400 aliases",
      "observations": 4000,
      "aliasGroups": 400,
      "statementsPerObservation": 6,
      "productsPerObservation": 3,
      "tenants": 3,
      "batchSize": 200,
      "seed": 420020,
      "thresholdMs": 2300,
      "minThroughputPerSecond": 1800,
      "minEventThroughputPerSecond": 2000,
      "maxAllocatedMb": 220
    },
    {
      "id": "vex_ingest_medium",
      "label": "8k observations, 700 aliases",
      "observations": 8000,
      "aliasGroups": 700,
      "statementsPerObservation": 8,
      "productsPerObservation": 4,
      "tenants": 5,
      "batchSize": 300,
      "seed": 520020,
      "thresholdMs": 3200,
      "minThroughputPerSecond": 2200,
      "minEventThroughputPerSecond": 2500,
      "maxAllocatedMb": 400
    },
    {
      "id": "vex_ingest_high",
      "label": "12k observations, 1100 aliases",
      "observations": 12000,
      "aliasGroups": 1100,
      "statementsPerObservation": 10,
      "productsPerObservation": 5,
      "tenants": 7,
      "batchSize": 400,
      "seed": 620020,
      "thresholdMs": 4200,
      "minThroughputPerSecond": 2200,
      "minEventThroughputPerSecond": 2500,
      "maxAllocatedMb": 700
    }
  ]
}
26
src/Tools/StellaOps.Bench/LinkNotMerge/README.md
Normal file
@@ -0,0 +1,26 @@
# Link-Not-Merge Bench

Synthetic workload that measures advisory observation ingestion and linkset correlation throughput for the Link-Not-Merge program.

## Scenarios

`config.json` defines three scenarios that vary observation volume, alias density, and correlation fan-out. Each scenario captures:

- Total latency (ingest + correlation) and p95/max percentiles
- Insert latency against an ephemeral MongoDB instance
- Correlator-only latency, tracking fan-out costs
- Observation and Mongo insert throughput (ops/sec)
- Peak managed heap allocations

## Running locally

```bash
dotnet run \
  --project src/Bench/StellaOps.Bench/LinkNotMerge/StellaOps.Bench.LinkNotMerge/StellaOps.Bench.LinkNotMerge.csproj \
  -- \
  --csv out/linknotmerge-bench.csv \
  --json out/linknotmerge-bench.json \
  --prometheus out/linknotmerge-bench.prom
```

The benchmark exits non-zero if latency exceeds configured thresholds, throughput falls below the floor, Mongo insert throughput regresses, allocations exceed the ceiling, or regression ratios breach the baseline.
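The exit-code contract above is what CI gates on. A minimal sketch of such a gate, where `run_bench` is a hypothetical stand-in for the real `dotnet run` invocation (which exits non-zero on any breach):

```shell
# Hypothetical stand-in for the dotnet bench invocation; returning 1
# simulates a threshold or regression breach.
run_bench() {
  return 1
}

# Gate the pipeline on the bench exit code.
if run_bench; then
  echo "bench passed"
else
  echo "bench failed: threshold or regression breach"
fi
```

In a real pipeline the `if` would wrap the `dotnet run` command shown in "Running locally", so a regression fails the job without any output parsing.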
@@ -0,0 +1,29 @@
# LinkNotMerge Benchmark Tests Charter

## Mission
Own the LinkNotMerge benchmark test suite. Validate config parsing, regression reporting, and deterministic benchmark helpers.

## Responsibilities
- Maintain `StellaOps.Bench.LinkNotMerge.Tests`.
- Ensure tests remain deterministic and offline-friendly.
- Surface open work in `TASKS.md`; update statuses (TODO/DOING/DONE/BLOCKED/REVIEW).

## Key Paths
- `BaselineLoaderTests.cs`
- `BenchmarkScenarioReportTests.cs`
- `LinkNotMergeScenarioRunnerTests.cs`

## Coordination
- Bench guild for regression thresholds and baselines.
- Platform guild for determinism expectations.

## Required Reading
- `docs/modules/platform/architecture-overview.md`
- `docs/README.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause the task without shipping changes; leave notes in commit/PR descriptions for context.
@@ -0,0 +1,40 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Bench.LinkNotMerge.Baseline;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.LinkNotMerge.Tests;

public sealed class BaselineLoaderTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public async Task LoadAsync_ReadsEntries()
    {
        var path = Path.GetTempFileName();
        try
        {
            await File.WriteAllTextAsync(
                path,
                "scenario,iterations,observations,aliases,linksets,mean_total_ms,p95_total_ms,max_total_ms,mean_insert_ms,mean_correlation_ms,mean_throughput_per_sec,min_throughput_per_sec,mean_insert_throughput_per_sec,min_insert_throughput_per_sec,max_allocated_mb\n" +
                "lnm_ingest_baseline,5,5000,500,450,320.5,340.1,360.9,120.2,210.3,15000.0,13500.0,18000.0,16500.0,96.5\n");

            var baseline = await BaselineLoader.LoadAsync(path, CancellationToken.None);
            var entry = Assert.Single(baseline);

            Assert.Equal("lnm_ingest_baseline", entry.Key);
            Assert.Equal(5, entry.Value.Iterations);
            Assert.Equal(5000, entry.Value.Observations);
            Assert.Equal(500, entry.Value.Aliases);
            Assert.Equal(360.9, entry.Value.MaxTotalMs);
            Assert.Equal(16500.0, entry.Value.MinInsertThroughputPerSecond);
            Assert.Equal(96.5, entry.Value.MaxAllocatedMb);
        }
        finally
        {
            File.Delete(path);
        }
    }
}
@@ -0,0 +1,84 @@
using StellaOps.Bench.LinkNotMerge.Baseline;
using StellaOps.Bench.LinkNotMerge.Reporting;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.LinkNotMerge.Tests;

public sealed class BenchmarkScenarioReportTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void RegressionDetection_FlagsBreaches()
    {
        var result = new ScenarioResult(
            Id: "scenario",
            Label: "Scenario",
            Iterations: 3,
            ObservationCount: 1000,
            AliasGroups: 100,
            LinksetCount: 90,
            TotalStatistics: new DurationStatistics(200, 240, 260),
            InsertStatistics: new DurationStatistics(80, 90, 100),
            CorrelationStatistics: new DurationStatistics(120, 150, 170),
            TotalThroughputStatistics: new ThroughputStatistics(8000, 7000),
            InsertThroughputStatistics: new ThroughputStatistics(9000, 8000),
            AllocationStatistics: new AllocationStatistics(120),
            ThresholdMs: null,
            MinThroughputThresholdPerSecond: null,
            MinInsertThroughputThresholdPerSecond: null,
            MaxAllocatedThresholdMb: null);

        var baseline = new BaselineEntry(
            ScenarioId: "scenario",
            Iterations: 3,
            Observations: 1000,
            Aliases: 100,
            Linksets: 90,
            MeanTotalMs: 150,
            P95TotalMs: 170,
            MaxTotalMs: 180,
            MeanInsertMs: 60,
            MeanCorrelationMs: 90,
            MeanThroughputPerSecond: 9000,
            MinThroughputPerSecond: 8500,
            MeanInsertThroughputPerSecond: 10000,
            MinInsertThroughputPerSecond: 9500,
            MaxAllocatedMb: 100);

        var report = new BenchmarkScenarioReport(result, baseline, regressionLimit: 1.1);

        Assert.True(report.DurationRegressionBreached);
        Assert.True(report.ThroughputRegressionBreached);
        Assert.True(report.InsertThroughputRegressionBreached);
        Assert.Contains(report.BuildRegressionFailureMessages(), message => message.Contains("max duration"));
    }

    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void RegressionDetection_NoBaseline_NoBreaches()
    {
        var result = new ScenarioResult(
            Id: "scenario",
            Label: "Scenario",
            Iterations: 3,
            ObservationCount: 1000,
            AliasGroups: 100,
            LinksetCount: 90,
            TotalStatistics: new DurationStatistics(200, 220, 230),
            InsertStatistics: new DurationStatistics(90, 100, 110),
            CorrelationStatistics: new DurationStatistics(110, 120, 130),
            TotalThroughputStatistics: new ThroughputStatistics(8000, 7900),
            InsertThroughputStatistics: new ThroughputStatistics(9000, 8900),
            AllocationStatistics: new AllocationStatistics(64),
            ThresholdMs: null,
            MinThroughputThresholdPerSecond: null,
            MinInsertThroughputThresholdPerSecond: null,
            MaxAllocatedThresholdMb: null);

        var report = new BenchmarkScenarioReport(result, baseline: null, regressionLimit: null);

        Assert.False(report.RegressionBreached);
        Assert.Empty(report.BuildRegressionFailureMessages());
    }
}
@@ -0,0 +1,40 @@
using System.Linq;
using System.Threading;
using StellaOps.Bench.LinkNotMerge.Baseline;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.LinkNotMerge.Tests;

public sealed class LinkNotMergeScenarioRunnerTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void Execute_BuildsDeterministicAggregation()
    {
        var config = new LinkNotMergeScenarioConfig
        {
            Id = "unit",
            Observations = 120,
            AliasGroups = 24,
            PurlsPerObservation = 3,
            CpesPerObservation = 2,
            ReferencesPerObservation = 2,
            Tenants = 3,
            BatchSize = 40,
            Seed = 1337,
        };

        var runner = new LinkNotMergeScenarioRunner(config);
        var result = runner.Execute(iterations: 2, CancellationToken.None);

        Assert.Equal(120, result.ObservationCount);
        Assert.Equal(24, result.AliasGroups);
        Assert.True(result.TotalDurationsMs.All(value => value > 0));
        Assert.True(result.InsertThroughputsPerSecond.All(value => value > 0));
        Assert.True(result.TotalThroughputsPerSecond.All(value => value > 0));
        Assert.True(result.AllocatedMb.All(value => value >= 0));
        Assert.Equal(result.AggregationResult.LinksetCount, result.LinksetCount);
        Assert.Equal(result.AggregationResult.ObservationCount, result.ObservationCount);
    }
}
@@ -0,0 +1,21 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <LangVersion>preview</LangVersion>
    <TreatWarningsAsErrors>false</TreatWarningsAsErrors>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="coverlet.collector">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\StellaOps.Bench.LinkNotMerge\StellaOps.Bench.LinkNotMerge.csproj" />
    <ProjectReference Include="../../../../__Libraries/StellaOps.TestKit/StellaOps.TestKit.csproj" />
  </ItemGroup>
</Project>
@@ -0,0 +1,11 @@
# LinkNotMerge Benchmark Tests Task Board

This board mirrors active sprint tasks for this module.
Source of truth: `docs-archived/implplan/2025-12-29-csproj-audit/SPRINT_20251229_049_BE_csproj_audit_maint_tests.md`.

| Task ID | Status | Notes |
| --- | --- | --- |
| AUDIT-0103-M | DONE | Revalidated 2026-01-06. |
| AUDIT-0103-T | DONE | Revalidated 2026-01-06. |
| AUDIT-0103-A | DONE | Waived (test project; revalidated 2026-01-06). |
| REMED-06 | DONE | SOLID review notes captured for SPRINT_20260130_002. |
@@ -0,0 +1,30 @@
# LinkNotMerge Benchmark Charter

## Mission
Own the LinkNotMerge benchmark harness and reporting outputs. Keep runs deterministic, offline-friendly, and aligned with production behavior.

## Responsibilities
- Maintain the `StellaOps.Bench.LinkNotMerge` runner, config parsing, and output writers.
- Keep benchmark inputs deterministic and document default datasets.
- Surface open work in `TASKS.md`; update statuses (TODO/DOING/DONE/BLOCKED/REVIEW).

## Key Paths
- `Program.cs`
- `BenchmarkConfig.cs`
- `LinkNotMergeScenarioRunner.cs`
- `Reporting/`

## Coordination
- Bench guild for performance baselines.
- Platform guild for determinism and offline expectations.

## Required Reading
- `docs/modules/platform/architecture-overview.md`
- `docs/README.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause the task without shipping changes; leave notes in commit/PR descriptions for context.
@@ -0,0 +1,18 @@
namespace StellaOps.Bench.LinkNotMerge.Baseline;

internal sealed record BaselineEntry(
    string ScenarioId,
    int Iterations,
    int Observations,
    int Aliases,
    int Linksets,
    double MeanTotalMs,
    double P95TotalMs,
    double MaxTotalMs,
    double MeanInsertMs,
    double MeanCorrelationMs,
    double MeanThroughputPerSecond,
    double MinThroughputPerSecond,
    double MeanInsertThroughputPerSecond,
    double MinInsertThroughputPerSecond,
    double MaxAllocatedMb);
@@ -0,0 +1,87 @@
using System.Globalization;

namespace StellaOps.Bench.LinkNotMerge.Baseline;

internal static class BaselineLoader
{
    public static async Task<IReadOnlyDictionary<string, BaselineEntry>> LoadAsync(string path, CancellationToken cancellationToken)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);

        var resolved = Path.GetFullPath(path);
        if (!File.Exists(resolved))
        {
            return new Dictionary<string, BaselineEntry>(StringComparer.OrdinalIgnoreCase);
        }

        var result = new Dictionary<string, BaselineEntry>(StringComparer.OrdinalIgnoreCase);

        await using var stream = new FileStream(resolved, FileMode.Open, FileAccess.Read, FileShare.Read);
        using var reader = new StreamReader(stream);

        var lineNumber = 0;
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();

            var line = await reader.ReadLineAsync().ConfigureAwait(false);
            if (line is null)
            {
                break;
            }

            lineNumber++;
            if (lineNumber == 1 || string.IsNullOrWhiteSpace(line))
            {
                continue;
            }

            var parts = line.Split(',', StringSplitOptions.TrimEntries);
            if (parts.Length < 15)
            {
                throw new InvalidOperationException($"Baseline '{resolved}' line {lineNumber} is invalid (expected 15 columns, found {parts.Length}).");
            }

            var entry = new BaselineEntry(
                ScenarioId: parts[0],
                Iterations: ParseInt(parts[1], resolved, lineNumber),
                Observations: ParseInt(parts[2], resolved, lineNumber),
                Aliases: ParseInt(parts[3], resolved, lineNumber),
                Linksets: ParseInt(parts[4], resolved, lineNumber),
                MeanTotalMs: ParseDouble(parts[5], resolved, lineNumber),
                P95TotalMs: ParseDouble(parts[6], resolved, lineNumber),
                MaxTotalMs: ParseDouble(parts[7], resolved, lineNumber),
                MeanInsertMs: ParseDouble(parts[8], resolved, lineNumber),
                MeanCorrelationMs: ParseDouble(parts[9], resolved, lineNumber),
                MeanThroughputPerSecond: ParseDouble(parts[10], resolved, lineNumber),
                MinThroughputPerSecond: ParseDouble(parts[11], resolved, lineNumber),
                MeanInsertThroughputPerSecond: ParseDouble(parts[12], resolved, lineNumber),
                MinInsertThroughputPerSecond: ParseDouble(parts[13], resolved, lineNumber),
                MaxAllocatedMb: ParseDouble(parts[14], resolved, lineNumber));

            result[entry.ScenarioId] = entry;
        }

        return result;
    }

    private static int ParseInt(string value, string file, int line)
    {
        if (int.TryParse(value, NumberStyles.Integer, CultureInfo.InvariantCulture, out var result))
        {
            return result;
        }

        throw new InvalidOperationException($"Baseline '{file}' line {line} contains an invalid integer '{value}'.");
    }

    private static double ParseDouble(string value, string file, int line)
    {
        if (double.TryParse(value, NumberStyles.Float, CultureInfo.InvariantCulture, out var result))
        {
            return result;
        }

        throw new InvalidOperationException($"Baseline '{file}' line {line} contains an invalid number '{value}'.");
    }
}
@@ -0,0 +1,210 @@
using System.Text.Json;
using System.Text.Json.Serialization;

namespace StellaOps.Bench.LinkNotMerge;

internal sealed record BenchmarkConfig(
    double? ThresholdMs,
    double? MinThroughputPerSecond,
    double? MinInsertThroughputPerSecond,
    double? MaxAllocatedMb,
    int? Iterations,
    IReadOnlyList<LinkNotMergeScenarioConfig> Scenarios)
{
    public static async Task<BenchmarkConfig> LoadAsync(string path)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);

        var resolved = Path.GetFullPath(path);
        if (!File.Exists(resolved))
        {
            throw new FileNotFoundException($"Benchmark configuration '{resolved}' was not found.", resolved);
        }

        await using var stream = File.OpenRead(resolved);
        var model = await JsonSerializer.DeserializeAsync<BenchmarkConfigModel>(
            stream,
            new JsonSerializerOptions(JsonSerializerDefaults.Web)
            {
                PropertyNameCaseInsensitive = true,
                ReadCommentHandling = JsonCommentHandling.Skip,
                AllowTrailingCommas = true,
            }).ConfigureAwait(false);

        if (model is null)
        {
            throw new InvalidOperationException($"Benchmark configuration '{resolved}' could not be parsed.");
        }

        if (model.Scenarios.Count == 0)
        {
            throw new InvalidOperationException($"Benchmark configuration '{resolved}' does not contain any scenarios.");
        }

        foreach (var scenario in model.Scenarios)
        {
            scenario.Validate();
        }

        return new BenchmarkConfig(
            model.ThresholdMs,
            model.MinThroughputPerSecond,
            model.MinInsertThroughputPerSecond,
            model.MaxAllocatedMb,
            model.Iterations,
            model.Scenarios);
    }

    private sealed class BenchmarkConfigModel
    {
        [JsonPropertyName("thresholdMs")]
        public double? ThresholdMs { get; init; }

        [JsonPropertyName("minThroughputPerSecond")]
        public double? MinThroughputPerSecond { get; init; }

        [JsonPropertyName("minInsertThroughputPerSecond")]
        public double? MinInsertThroughputPerSecond { get; init; }

        [JsonPropertyName("maxAllocatedMb")]
        public double? MaxAllocatedMb { get; init; }

        [JsonPropertyName("iterations")]
        public int? Iterations { get; init; }

        [JsonPropertyName("scenarios")]
        public List<LinkNotMergeScenarioConfig> Scenarios { get; init; } = new();
    }
}

internal sealed class LinkNotMergeScenarioConfig
{
    private const int DefaultObservationCount = 5_000;
    private const int DefaultAliasGroups = 500;
    private const int DefaultPurlsPerObservation = 4;
    private const int DefaultCpesPerObservation = 2;
    private const int DefaultReferencesPerObservation = 3;
    private const int DefaultTenants = 4;
    private const int DefaultBatchSize = 500;
    private const int DefaultSeed = 42_022;

    [JsonPropertyName("id")]
    public string? Id { get; init; }

    [JsonPropertyName("label")]
    public string? Label { get; init; }

    [JsonPropertyName("observations")]
    public int? Observations { get; init; }

    [JsonPropertyName("aliasGroups")]
    public int? AliasGroups { get; init; }

    [JsonPropertyName("purlsPerObservation")]
    public int? PurlsPerObservation { get; init; }

    [JsonPropertyName("cpesPerObservation")]
    public int? CpesPerObservation { get; init; }

    [JsonPropertyName("referencesPerObservation")]
    public int? ReferencesPerObservation { get; init; }

    [JsonPropertyName("tenants")]
    public int? Tenants { get; init; }

    [JsonPropertyName("batchSize")]
    public int? BatchSize { get; init; }

    [JsonPropertyName("seed")]
    public int? Seed { get; init; }

    [JsonPropertyName("iterations")]
    public int? Iterations { get; init; }

    [JsonPropertyName("thresholdMs")]
    public double? ThresholdMs { get; init; }

    [JsonPropertyName("minThroughputPerSecond")]
    public double? MinThroughputPerSecond { get; init; }

    [JsonPropertyName("minInsertThroughputPerSecond")]
    public double? MinInsertThroughputPerSecond { get; init; }

    [JsonPropertyName("maxAllocatedMb")]
    public double? MaxAllocatedMb { get; init; }

    public string ScenarioId => string.IsNullOrWhiteSpace(Id) ? "linknotmerge" : Id!.Trim();

    public string DisplayLabel => string.IsNullOrWhiteSpace(Label) ? ScenarioId : Label!.Trim();

    public int ResolveObservationCount() => Observations.HasValue && Observations.Value > 0
        ? Observations.Value
        : DefaultObservationCount;

    public int ResolveAliasGroups() => AliasGroups.HasValue && AliasGroups.Value > 0
        ? AliasGroups.Value
        : DefaultAliasGroups;

    public int ResolvePurlsPerObservation() => PurlsPerObservation.HasValue && PurlsPerObservation.Value > 0
        ? PurlsPerObservation.Value
        : DefaultPurlsPerObservation;
|
||||
public int ResolveCpesPerObservation() => CpesPerObservation.HasValue && CpesPerObservation.Value >= 0
|
||||
? CpesPerObservation.Value
|
||||
: DefaultCpesPerObservation;
|
||||
|
||||
public int ResolveReferencesPerObservation() => ReferencesPerObservation.HasValue && ReferencesPerObservation.Value >= 0
|
||||
? ReferencesPerObservation.Value
|
||||
: DefaultReferencesPerObservation;
|
||||
|
||||
public int ResolveTenantCount() => Tenants.HasValue && Tenants.Value > 0
|
||||
? Tenants.Value
|
||||
: DefaultTenants;
|
||||
|
||||
public int ResolveBatchSize() => BatchSize.HasValue && BatchSize.Value > 0
|
||||
? BatchSize.Value
|
||||
: DefaultBatchSize;
|
||||
|
||||
public int ResolveSeed() => Seed.HasValue && Seed.Value > 0
|
||||
? Seed.Value
|
||||
: DefaultSeed;
|
||||
|
||||
public int ResolveIterations(int? defaultIterations)
|
||||
{
|
||||
var iterations = Iterations ?? defaultIterations ?? 3;
|
||||
if (iterations <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires iterations > 0.");
|
||||
}
|
||||
|
||||
return iterations;
|
||||
}
|
||||
|
||||
public void Validate()
|
||||
{
|
||||
if (ResolveObservationCount() <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires observations > 0.");
|
||||
}
|
||||
|
||||
if (ResolveAliasGroups() <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires aliasGroups > 0.");
|
||||
}
|
||||
|
||||
if (ResolvePurlsPerObservation() <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires purlsPerObservation > 0.");
|
||||
}
|
||||
|
||||
if (ResolveTenantCount() <= 0)
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' requires tenants > 0.");
|
||||
}
|
||||
|
||||
if (ResolveBatchSize() > ResolveObservationCount())
|
||||
{
|
||||
throw new InvalidOperationException($"Scenario '{ScenarioId}' batchSize cannot exceed observations.");
|
||||
}
|
||||
}
|
||||
}
|
||||
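For orientation, a configuration file matching the JSON property names above might look like the following. All values and the scenario id are illustrative assumptions, not shipped defaults; scenario-level fields override the top-level ones per the resolution logic in `Program.Main`:

```json
{
  "thresholdMs": 5000,
  "minThroughputPerSecond": 10000,
  "iterations": 5,
  "scenarios": [
    {
      "id": "lnm-ingest-baseline",
      "label": "Baseline ingest",
      "observations": 5000,
      "aliasGroups": 500,
      "purlsPerObservation": 4,
      "cpesPerObservation": 2,
      "referencesPerObservation": 3,
      "tenants": 4,
      "batchSize": 500,
      "seed": 42022
    }
  ]
}
```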
@@ -0,0 +1,96 @@
using System.Diagnostics;

namespace StellaOps.Bench.LinkNotMerge;

internal sealed class LinkNotMergeScenarioRunner
{
    private readonly LinkNotMergeScenarioConfig _config;
    private readonly IReadOnlyList<ObservationSeed> _seeds;

    public LinkNotMergeScenarioRunner(LinkNotMergeScenarioConfig config)
    {
        _config = config ?? throw new ArgumentNullException(nameof(config));
        _seeds = ObservationGenerator.Generate(config);
    }

    public ScenarioExecutionResult Execute(int iterations, CancellationToken cancellationToken)
    {
        if (iterations <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(iterations), iterations, "Iterations must be positive.");
        }

        var totalDurations = new double[iterations];
        var insertDurations = new double[iterations];
        var correlationDurations = new double[iterations];
        var allocated = new double[iterations];
        var totalThroughputs = new double[iterations];
        var insertThroughputs = new double[iterations];
        LinksetAggregationResult lastAggregation = new(0, 0, 0, 0, 0);

        for (var iteration = 0; iteration < iterations; iteration++)
        {
            cancellationToken.ThrowIfCancellationRequested();

            var beforeAllocated = GC.GetTotalAllocatedBytes();
            var insertStopwatch = Stopwatch.StartNew();
            var documents = InsertObservations(_seeds, _config.ResolveBatchSize(), cancellationToken);
            insertStopwatch.Stop();

            var correlationStopwatch = Stopwatch.StartNew();
            var correlator = new LinksetAggregator();
            lastAggregation = correlator.Correlate(documents);
            correlationStopwatch.Stop();

            var totalElapsed = insertStopwatch.Elapsed + correlationStopwatch.Elapsed;
            var afterAllocated = GC.GetTotalAllocatedBytes();

            totalDurations[iteration] = totalElapsed.TotalMilliseconds;
            insertDurations[iteration] = insertStopwatch.Elapsed.TotalMilliseconds;
            correlationDurations[iteration] = correlationStopwatch.Elapsed.TotalMilliseconds;
            allocated[iteration] = Math.Max(0, afterAllocated - beforeAllocated) / (1024d * 1024d);

            var totalSeconds = Math.Max(totalElapsed.TotalSeconds, 0.0001d);
            totalThroughputs[iteration] = _seeds.Count / totalSeconds;

            var insertSeconds = Math.Max(insertStopwatch.Elapsed.TotalSeconds, 0.0001d);
            insertThroughputs[iteration] = _seeds.Count / insertSeconds;
        }

        return new ScenarioExecutionResult(
            totalDurations,
            insertDurations,
            correlationDurations,
            allocated,
            totalThroughputs,
            insertThroughputs,
            ObservationCount: _seeds.Count,
            AliasGroups: _config.ResolveAliasGroups(),
            LinksetCount: lastAggregation.LinksetCount,
            TenantCount: _config.ResolveTenantCount(),
            AggregationResult: lastAggregation);
    }

    private static IReadOnlyList<ObservationDocument> InsertObservations(
        IReadOnlyList<ObservationSeed> seeds,
        int batchSize,
        CancellationToken cancellationToken)
    {
        var documents = new List<ObservationDocument>(seeds.Count);
        for (var offset = 0; offset < seeds.Count; offset += batchSize)
        {
            cancellationToken.ThrowIfCancellationRequested();

            var remaining = Math.Min(batchSize, seeds.Count - offset);
            var batch = new List<ObservationDocument>(remaining);
            for (var index = 0; index < remaining; index++)
            {
                batch.Add(seeds[offset + index].ToDocument());
            }

            documents.AddRange(batch);
        }

        return documents;
    }
}
@@ -0,0 +1,121 @@
namespace StellaOps.Bench.LinkNotMerge;

internal sealed class LinksetAggregator
{
    public LinksetAggregationResult Correlate(IEnumerable<ObservationDocument> documents)
    {
        ArgumentNullException.ThrowIfNull(documents);

        var groups = new Dictionary<string, LinksetAccumulator>(StringComparer.Ordinal);
        var totalObservations = 0;

        foreach (var document in documents)
        {
            totalObservations++;

            var tenant = document.Tenant;
            var linkset = document.Linkset;
            var aliases = linkset.Aliases;
            var purls = linkset.Purls;
            var cpes = linkset.Cpes;
            var references = linkset.References;

            foreach (var alias in aliases)
            {
                // Group key is "{tenant}|{alias}", built without intermediate allocations.
                var key = string.Create(alias.Length + tenant.Length + 1, (tenant, alias), static (span, data) =>
                {
                    var (tenantValue, aliasValue) = data;
                    tenantValue.AsSpan().CopyTo(span);
                    span[tenantValue.Length] = '|';
                    aliasValue.AsSpan().CopyTo(span[(tenantValue.Length + 1)..]);
                });

                if (!groups.TryGetValue(key, out var accumulator))
                {
                    accumulator = new LinksetAccumulator(tenant, alias);
                    groups[key] = accumulator;
                }

                accumulator.AddPurls(purls);
                accumulator.AddCpes(cpes);
                accumulator.AddReferences(references);
            }
        }

        var totalReferences = 0;
        var totalPurls = 0;
        var totalCpes = 0;

        foreach (var accumulator in groups.Values)
        {
            totalReferences += accumulator.ReferenceCount;
            totalPurls += accumulator.PurlCount;
            totalCpes += accumulator.CpeCount;
        }

        return new LinksetAggregationResult(
            LinksetCount: groups.Count,
            ObservationCount: totalObservations,
            TotalPurls: totalPurls,
            TotalCpes: totalCpes,
            TotalReferences: totalReferences);
    }

    private sealed class LinksetAccumulator
    {
        private readonly HashSet<string> _purls = new(StringComparer.Ordinal);
        private readonly HashSet<string> _cpes = new(StringComparer.Ordinal);
        private readonly HashSet<string> _references = new(StringComparer.Ordinal);

        public LinksetAccumulator(string tenant, string alias)
        {
            Tenant = tenant;
            Alias = alias;
        }

        public string Tenant { get; }

        public string Alias { get; }

        public int PurlCount => _purls.Count;

        public int CpeCount => _cpes.Count;

        public int ReferenceCount => _references.Count;

        public void AddPurls(IEnumerable<string> array)
        {
            foreach (var item in array)
            {
                if (!string.IsNullOrEmpty(item))
                {
                    _purls.Add(item);
                }
            }
        }

        public void AddCpes(IEnumerable<string> array)
        {
            foreach (var item in array)
            {
                if (!string.IsNullOrEmpty(item))
                {
                    _cpes.Add(item);
                }
            }
        }

        public void AddReferences(IEnumerable<ObservationReference> array)
        {
            foreach (var item in array)
            {
                if (!string.IsNullOrEmpty(item.Url))
                {
                    _references.Add(item.Url);
                }
            }
        }
    }
}

internal sealed record LinksetAggregationResult(
    int LinksetCount,
    int ObservationCount,
    int TotalPurls,
    int TotalCpes,
    int TotalReferences);
@@ -0,0 +1,199 @@
using System.Collections.Immutable;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;

namespace StellaOps.Bench.LinkNotMerge;

internal static class ObservationGenerator
{
    public static IReadOnlyList<ObservationSeed> Generate(LinkNotMergeScenarioConfig config)
    {
        ArgumentNullException.ThrowIfNull(config);

        var observationCount = config.ResolveObservationCount();
        var aliasGroups = config.ResolveAliasGroups();
        var purlsPerObservation = config.ResolvePurlsPerObservation();
        var cpesPerObservation = config.ResolveCpesPerObservation();
        var referencesPerObservation = config.ResolveReferencesPerObservation();
        var tenantCount = config.ResolveTenantCount();
        var seed = config.ResolveSeed();

        var seeds = new ObservationSeed[observationCount];
        var random = new Random(seed);
        var baseTime = new DateTimeOffset(2025, 10, 1, 0, 0, 0, TimeSpan.Zero);

        for (var index = 0; index < observationCount; index++)
        {
            var tenantIndex = index % tenantCount;
            var tenant = $"tenant-{tenantIndex:D2}";
            var group = index % aliasGroups;
            var revision = index / aliasGroups;
            var primaryAlias = $"CVE-2025-{group:D4}";
            var vendorAlias = $"VENDOR-{group:D4}";
            // Cast to char so the GHSA suffix renders as letters; int + char would interpolate the numeric code.
            var aliasSuffix = (char)('a' + revision % 26);
            var thirdAlias = $"GHSA-{group:D4}-{aliasSuffix}{aliasSuffix}";
            var aliases = ImmutableArray.Create(primaryAlias, vendorAlias, thirdAlias);

            var observationId = $"{tenant}:advisory:{group:D5}:{revision:D6}";
            var upstreamId = primaryAlias;
            var documentVersion = baseTime.AddMinutes(revision).ToString("O", CultureInfo.InvariantCulture);
            var fetchedAt = baseTime.AddSeconds(index % 1_800);
            var receivedAt = fetchedAt.AddSeconds(1);

            var purls = CreatePurls(group, revision, purlsPerObservation);
            var cpes = CreateCpes(group, revision, cpesPerObservation);
            var references = CreateReferences(primaryAlias, referencesPerObservation);

            var contentHash = ComputeContentHash(primaryAlias, vendorAlias, purls, cpes, references, tenant, group, revision);

            seeds[index] = new ObservationSeed(
                ObservationId: observationId,
                Tenant: tenant,
                Vendor: "concelier-bench",
                Stream: "simulated",
                Api: $"https://bench.stella/{group:D4}/{revision:D2}",
                CollectorVersion: "1.0.0-bench",
                UpstreamId: upstreamId,
                DocumentVersion: documentVersion,
                FetchedAt: fetchedAt,
                ReceivedAt: receivedAt,
                ContentHash: contentHash,
                Aliases: aliases,
                Purls: purls,
                Cpes: cpes,
                References: references,
                ContentFormat: "CSAF",
                SpecVersion: "2.0");
        }

        return seeds;
    }

    private static ImmutableArray<string> CreatePurls(int group, int revision, int count)
    {
        if (count <= 0)
        {
            return ImmutableArray<string>.Empty;
        }

        var builder = ImmutableArray.CreateBuilder<string>(count);
        for (var index = 0; index < count; index++)
        {
            var version = $"{revision % 9 + 1}.{index + 1}.{group % 10}";
            builder.Add($"pkg:generic/stella/sample-{group:D4}-{index}@{version}");
        }

        return builder.MoveToImmutable();
    }

    private static ImmutableArray<string> CreateCpes(int group, int revision, int count)
    {
        if (count <= 0)
        {
            return ImmutableArray<string>.Empty;
        }

        var builder = ImmutableArray.CreateBuilder<string>(count);
        for (var index = 0; index < count; index++)
        {
            var component = $"benchtool{group % 50:D2}";
            var version = $"{revision % 5}.{index}";
            builder.Add($"cpe:2.3:a:stellaops:{component}:{version}:*:*:*:*:*:*:*");
        }

        return builder.MoveToImmutable();
    }

    private static ImmutableArray<ObservationReference> CreateReferences(string primaryAlias, int count)
    {
        if (count <= 0)
        {
            return ImmutableArray<ObservationReference>.Empty;
        }

        var builder = ImmutableArray.CreateBuilder<ObservationReference>(count);
        for (var index = 0; index < count; index++)
        {
            builder.Add(new ObservationReference(
                Type: index % 2 == 0 ? "advisory" : "patch",
                Url: $"https://vendor.example/{primaryAlias.ToLowerInvariant()}/ref/{index:D2}"));
        }

        return builder.MoveToImmutable();
    }

    private static string ComputeContentHash(
        string primaryAlias,
        string vendorAlias,
        IReadOnlyCollection<string> purls,
        IReadOnlyCollection<string> cpes,
        IReadOnlyCollection<ObservationReference> references,
        string tenant,
        int group,
        int revision)
    {
        using var sha256 = SHA256.Create();
        var builder = new StringBuilder();
        builder.Append(tenant).Append('|').Append(group).Append('|').Append(revision).Append('|');
        builder.Append(primaryAlias).Append('|').Append(vendorAlias).Append('|');
        foreach (var purl in purls)
        {
            builder.Append(purl).Append('|');
        }

        foreach (var cpe in cpes)
        {
            builder.Append(cpe).Append('|');
        }

        foreach (var reference in references)
        {
            builder.Append(reference.Type).Append(':').Append(reference.Url).Append('|');
        }

        var data = Encoding.UTF8.GetBytes(builder.ToString());
        var hash = sha256.ComputeHash(data);
        return $"sha256:{Convert.ToHexString(hash)}";
    }
}

internal sealed record ObservationSeed(
    string ObservationId,
    string Tenant,
    string Vendor,
    string Stream,
    string Api,
    string CollectorVersion,
    string UpstreamId,
    string DocumentVersion,
    DateTimeOffset FetchedAt,
    DateTimeOffset ReceivedAt,
    string ContentHash,
    ImmutableArray<string> Aliases,
    ImmutableArray<string> Purls,
    ImmutableArray<string> Cpes,
    ImmutableArray<ObservationReference> References,
    string ContentFormat,
    string SpecVersion)
{
    public ObservationDocument ToDocument()
    {
        return new ObservationDocument(
            Tenant,
            new LinksetDocument(
                Aliases,
                Purls,
                Cpes,
                References));
    }
}

internal sealed record ObservationDocument(string Tenant, LinksetDocument Linkset);

internal sealed record LinksetDocument(
    ImmutableArray<string> Aliases,
    ImmutableArray<string> Purls,
    ImmutableArray<string> Cpes,
    ImmutableArray<ObservationReference> References);

internal sealed record ObservationReference(string Type, string Url);
@@ -0,0 +1,375 @@
using System.Globalization;
using StellaOps.Bench.LinkNotMerge.Baseline;
using StellaOps.Bench.LinkNotMerge.Reporting;

namespace StellaOps.Bench.LinkNotMerge;

internal static class Program
{
    public static async Task<int> Main(string[] args)
    {
        try
        {
            var options = ProgramOptions.Parse(args);
            var config = await BenchmarkConfig.LoadAsync(options.ConfigPath).ConfigureAwait(false);
            var baseline = await BaselineLoader.LoadAsync(options.BaselinePath, CancellationToken.None).ConfigureAwait(false);

            var results = new List<ScenarioResult>();
            var reports = new List<BenchmarkScenarioReport>();
            var failures = new List<string>();

            foreach (var scenario in config.Scenarios)
            {
                // --iterations overrides the config-level default; scenario-level values still win.
                var iterations = scenario.ResolveIterations(options.Iterations ?? config.Iterations);
                var runner = new LinkNotMergeScenarioRunner(scenario);
                var execution = runner.Execute(iterations, CancellationToken.None);

                var totalStats = DurationStatistics.From(execution.TotalDurationsMs);
                var insertStats = DurationStatistics.From(execution.InsertDurationsMs);
                var correlationStats = DurationStatistics.From(execution.CorrelationDurationsMs);
                var allocationStats = AllocationStatistics.From(execution.AllocatedMb);
                var throughputStats = ThroughputStatistics.From(execution.TotalThroughputsPerSecond);
                var insertThroughputStats = ThroughputStatistics.From(execution.InsertThroughputsPerSecond);

                var thresholdMs = scenario.ThresholdMs ?? options.ThresholdMs ?? config.ThresholdMs;
                var throughputFloor = scenario.MinThroughputPerSecond ?? options.MinThroughputPerSecond ?? config.MinThroughputPerSecond;
                var insertThroughputFloor = scenario.MinInsertThroughputPerSecond ?? options.MinInsertThroughputPerSecond ?? config.MinInsertThroughputPerSecond;
                var allocationLimit = scenario.MaxAllocatedMb ?? options.MaxAllocatedMb ?? config.MaxAllocatedMb;

                var result = new ScenarioResult(
                    scenario.ScenarioId,
                    scenario.DisplayLabel,
                    iterations,
                    execution.ObservationCount,
                    execution.AliasGroups,
                    execution.LinksetCount,
                    totalStats,
                    insertStats,
                    correlationStats,
                    throughputStats,
                    insertThroughputStats,
                    allocationStats,
                    thresholdMs,
                    throughputFloor,
                    insertThroughputFloor,
                    allocationLimit);

                results.Add(result);

                if (thresholdMs is { } threshold && result.TotalStatistics.MaxMs > threshold)
                {
                    failures.Add($"{result.Id} exceeded total latency threshold: {result.TotalStatistics.MaxMs:F2} ms > {threshold:F2} ms");
                }

                if (throughputFloor is { } floor && result.TotalThroughputStatistics.MinPerSecond < floor)
                {
                    failures.Add($"{result.Id} fell below throughput floor: {result.TotalThroughputStatistics.MinPerSecond:N0} obs/s < {floor:N0} obs/s");
                }

                if (insertThroughputFloor is { } insertFloor && result.InsertThroughputStatistics.MinPerSecond < insertFloor)
                {
                    failures.Add($"{result.Id} fell below insert throughput floor: {result.InsertThroughputStatistics.MinPerSecond:N0} ops/s < {insertFloor:N0} ops/s");
                }

                if (allocationLimit is { } limit && result.AllocationStatistics.MaxAllocatedMb > limit)
                {
                    failures.Add($"{result.Id} exceeded allocation budget: {result.AllocationStatistics.MaxAllocatedMb:F2} MB > {limit:F2} MB");
                }

                baseline.TryGetValue(result.Id, out var baselineEntry);
                var report = new BenchmarkScenarioReport(result, baselineEntry, options.RegressionLimit);
                reports.Add(report);
                failures.AddRange(report.BuildRegressionFailureMessages());
            }

            TablePrinter.Print(results);

            if (!string.IsNullOrWhiteSpace(options.CsvOutPath))
            {
                CsvWriter.Write(options.CsvOutPath!, results);
            }

            if (!string.IsNullOrWhiteSpace(options.JsonOutPath))
            {
                var metadata = new BenchmarkJsonMetadata(
                    SchemaVersion: "linknotmerge-bench/1.0",
                    CapturedAtUtc: (options.CapturedAtUtc ?? DateTimeOffset.UtcNow).ToUniversalTime(),
                    Commit: options.Commit,
                    Environment: options.Environment);

                await BenchmarkJsonWriter.WriteAsync(options.JsonOutPath!, metadata, reports, CancellationToken.None).ConfigureAwait(false);
            }

            if (!string.IsNullOrWhiteSpace(options.PrometheusOutPath))
            {
                PrometheusWriter.Write(options.PrometheusOutPath!, reports);
            }

            if (failures.Count > 0)
            {
                Console.Error.WriteLine();
                Console.Error.WriteLine("Benchmark failures detected:");
                foreach (var failure in failures.Distinct())
                {
                    Console.Error.WriteLine($"  - {failure}");
                }

                return 1;
            }

            return 0;
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine($"linknotmerge-bench error: {ex.Message}");
            return 1;
        }
    }
    private sealed record ProgramOptions(
        string ConfigPath,
        int? Iterations,
        double? ThresholdMs,
        double? MinThroughputPerSecond,
        double? MinInsertThroughputPerSecond,
        double? MaxAllocatedMb,
        string? CsvOutPath,
        string? JsonOutPath,
        string? PrometheusOutPath,
        string BaselinePath,
        DateTimeOffset? CapturedAtUtc,
        string? Commit,
        string? Environment,
        double? RegressionLimit)
    {
        public static ProgramOptions Parse(string[] args)
        {
            var configPath = DefaultConfigPath();
            var baselinePath = DefaultBaselinePath();

            int? iterations = null;
            double? thresholdMs = null;
            double? minThroughput = null;
            double? minInsertThroughput = null;
            double? maxAllocated = null;
            string? csvOut = null;
            string? jsonOut = null;
            string? promOut = null;
            DateTimeOffset? capturedAt = null;
            string? commit = null;
            string? environment = null;
            double? regressionLimit = null;

            for (var index = 0; index < args.Length; index++)
            {
                var current = args[index];
                switch (current)
                {
                    case "--config":
                        EnsureNext(args, index);
                        configPath = Path.GetFullPath(args[++index]);
                        break;
                    case "--iterations":
                        EnsureNext(args, index);
                        iterations = int.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--threshold-ms":
                        EnsureNext(args, index);
                        thresholdMs = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--min-throughput":
                        EnsureNext(args, index);
                        minThroughput = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--min-insert-throughput":
                        EnsureNext(args, index);
                        minInsertThroughput = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--max-allocated-mb":
                        EnsureNext(args, index);
                        maxAllocated = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--csv":
                        EnsureNext(args, index);
                        csvOut = args[++index];
                        break;
                    case "--json":
                        EnsureNext(args, index);
                        jsonOut = args[++index];
                        break;
                    case "--prometheus":
                        EnsureNext(args, index);
                        promOut = args[++index];
                        break;
                    case "--baseline":
                        EnsureNext(args, index);
                        baselinePath = Path.GetFullPath(args[++index]);
                        break;
                    case "--captured-at":
                        EnsureNext(args, index);
                        capturedAt = DateTimeOffset.Parse(args[++index], CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal);
                        break;
                    case "--commit":
                        EnsureNext(args, index);
                        commit = args[++index];
                        break;
                    case "--environment":
                        EnsureNext(args, index);
                        environment = args[++index];
                        break;
                    case "--regression-limit":
                        EnsureNext(args, index);
                        regressionLimit = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--help":
                    case "-h":
                        PrintUsage();
                        System.Environment.Exit(0);
                        break;
                    default:
                        throw new ArgumentException($"Unknown argument '{current}'.");
                }
            }
            return new ProgramOptions(
                configPath,
                iterations,
                thresholdMs,
                minThroughput,
                minInsertThroughput,
                maxAllocated,
                csvOut,
                jsonOut,
                promOut,
                baselinePath,
                capturedAt,
                commit,
                environment,
                regressionLimit);
        }

        private static string DefaultConfigPath()
        {
            var binaryDir = AppContext.BaseDirectory;
            var projectDir = Path.GetFullPath(Path.Combine(binaryDir, "..", "..", ".."));
            var benchRoot = Path.GetFullPath(Path.Combine(projectDir, ".."));
            return Path.Combine(benchRoot, "config.json");
        }

        private static string DefaultBaselinePath()
        {
            var binaryDir = AppContext.BaseDirectory;
            var projectDir = Path.GetFullPath(Path.Combine(binaryDir, "..", "..", ".."));
            var benchRoot = Path.GetFullPath(Path.Combine(projectDir, ".."));
            return Path.Combine(benchRoot, "baseline.csv");
        }

        private static void EnsureNext(string[] args, int index)
        {
            if (index + 1 >= args.Length)
            {
                throw new ArgumentException($"Missing value for argument '{args[index]}'.");
            }
        }

        private static void PrintUsage()
        {
            Console.WriteLine("Usage: linknotmerge-bench [options]");
            Console.WriteLine();
            Console.WriteLine("Options:");
            Console.WriteLine("  --config <path>                  Path to benchmark configuration JSON.");
            Console.WriteLine("  --iterations <count>             Override iteration count.");
            Console.WriteLine("  --threshold-ms <value>           Global latency threshold in milliseconds.");
            Console.WriteLine("  --min-throughput <value>         Global throughput floor (observations/second).");
            Console.WriteLine("  --min-insert-throughput <value>  Insert throughput floor (ops/second).");
            Console.WriteLine("  --max-allocated-mb <value>       Global allocation ceiling (MB).");
            Console.WriteLine("  --csv <path>                     Write CSV results to path.");
            Console.WriteLine("  --json <path>                    Write JSON results to path.");
            Console.WriteLine("  --prometheus <path>              Write Prometheus exposition metrics to path.");
            Console.WriteLine("  --baseline <path>                Baseline CSV path.");
            Console.WriteLine("  --captured-at <iso8601>          Timestamp to embed in JSON metadata.");
            Console.WriteLine("  --commit <sha>                   Commit identifier for metadata.");
            Console.WriteLine("  --environment <name>             Environment label for metadata.");
            Console.WriteLine("  --regression-limit <value>       Regression multiplier (default 1.15).");
        }
    }
}
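Given the `BenchmarkJsonMetadata` record constructed in `Main` and the Web serializer defaults used by the reporting writer, the `--json` report's envelope plausibly starts along these lines. This is a sketch: the camelCase field names follow `JsonSerializerDefaults.Web`, but the concrete values and the shape of the scenarios payload are assumptions, not captured output:

```json
{
  "schemaVersion": "linknotmerge-bench/1.0",
  "capturedAtUtc": "2025-10-01T00:00:00+00:00",
  "commit": "abc1234",
  "environment": "ci",
  "scenarios": []
}
```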
internal static class TablePrinter
{
    public static void Print(IEnumerable<ScenarioResult> results)
    {
        Console.WriteLine("Scenario | Observations | Aliases | Linksets | Total(ms) | Correl(ms) | Insert(ms) | Min k/s | Ins k/s | Alloc(MB)");
        Console.WriteLine("---------------------------- | ------------- | ------- | -------- | ---------- | ---------- | ----------- | -------- | --------- | --------");
        foreach (var row in results)
        {
            Console.WriteLine(string.Join(" | ", new[]
            {
                row.IdColumn,
                row.ObservationsColumn,
                row.AliasColumn,
                row.LinksetColumn,
                row.TotalMeanColumn,
                row.CorrelationMeanColumn,
                row.InsertMeanColumn,
                row.ThroughputColumn,
                row.InsertThroughputColumn,
                row.AllocatedColumn,
            }));
        }
    }
}

internal static class CsvWriter
{
    public static void Write(string path, IEnumerable<ScenarioResult> results)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);
        ArgumentNullException.ThrowIfNull(results);

        var resolved = Path.GetFullPath(path);
        var directory = Path.GetDirectoryName(resolved);
        if (!string.IsNullOrEmpty(directory))
        {
            Directory.CreateDirectory(directory);
        }

        using var stream = new FileStream(resolved, FileMode.Create, FileAccess.Write, FileShare.None);
        using var writer = new StreamWriter(stream);
        writer.WriteLine("scenario,iterations,observations,aliases,linksets,mean_total_ms,p95_total_ms,max_total_ms,mean_insert_ms,mean_correlation_ms,mean_throughput_per_sec,min_throughput_per_sec,mean_insert_throughput_per_sec,min_insert_throughput_per_sec,max_allocated_mb");

        foreach (var result in results)
        {
            writer.Write(result.Id);
            writer.Write(',');
            writer.Write(result.Iterations.ToString(CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.ObservationCount.ToString(CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.AliasGroups.ToString(CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.LinksetCount.ToString(CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.TotalStatistics.MeanMs.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.TotalStatistics.P95Ms.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.TotalStatistics.MaxMs.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.InsertStatistics.MeanMs.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.CorrelationStatistics.MeanMs.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.TotalThroughputStatistics.MeanPerSecond.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.TotalThroughputStatistics.MinPerSecond.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.InsertThroughputStatistics.MeanPerSecond.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.InsertThroughputStatistics.MinPerSecond.ToString("F4", CultureInfo.InvariantCulture));
            writer.Write(',');
            writer.Write(result.AllocationStatistics.MaxAllocatedMb.ToString("F4", CultureInfo.InvariantCulture));
            writer.WriteLine();
        }
    }
}
@@ -0,0 +1,3 @@
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("StellaOps.Bench.LinkNotMerge.Tests")]
@@ -0,0 +1,151 @@
using System.Text.Json;
using System.Text.Json.Serialization;

namespace StellaOps.Bench.LinkNotMerge.Reporting;

internal static class BenchmarkJsonWriter
{
    private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web)
    {
        WriteIndented = true,
        DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
    };

    public static async Task WriteAsync(
        string path,
        BenchmarkJsonMetadata metadata,
        IReadOnlyList<BenchmarkScenarioReport> reports,
        CancellationToken cancellationToken)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);
        ArgumentNullException.ThrowIfNull(metadata);
        ArgumentNullException.ThrowIfNull(reports);

        var resolved = Path.GetFullPath(path);
        var directory = Path.GetDirectoryName(resolved);
        if (!string.IsNullOrEmpty(directory))
        {
            Directory.CreateDirectory(directory);
        }

        var document = new BenchmarkJsonDocument(
            metadata.SchemaVersion,
            metadata.CapturedAtUtc,
            metadata.Commit,
            metadata.Environment,
            reports.Select(CreateScenario).ToArray());

        await using var stream = new FileStream(resolved, FileMode.Create, FileAccess.Write, FileShare.None);
        await JsonSerializer.SerializeAsync(stream, document, SerializerOptions, cancellationToken).ConfigureAwait(false);
        await stream.FlushAsync(cancellationToken).ConfigureAwait(false);
    }

    private static BenchmarkJsonScenario CreateScenario(BenchmarkScenarioReport report)
    {
        var baseline = report.Baseline;
        return new BenchmarkJsonScenario(
            report.Result.Id,
            report.Result.Label,
            report.Result.Iterations,
            report.Result.ObservationCount,
            report.Result.AliasGroups,
            report.Result.LinksetCount,
            report.Result.TotalStatistics.MeanMs,
            report.Result.TotalStatistics.P95Ms,
            report.Result.TotalStatistics.MaxMs,
            report.Result.InsertStatistics.MeanMs,
            report.Result.CorrelationStatistics.MeanMs,
            report.Result.TotalThroughputStatistics.MeanPerSecond,
            report.Result.TotalThroughputStatistics.MinPerSecond,
            report.Result.InsertThroughputStatistics.MeanPerSecond,
            report.Result.InsertThroughputStatistics.MinPerSecond,
            report.Result.AllocationStatistics.MaxAllocatedMb,
            report.Result.ThresholdMs,
            report.Result.MinThroughputThresholdPerSecond,
            report.Result.MinInsertThroughputThresholdPerSecond,
            report.Result.MaxAllocatedThresholdMb,
            baseline is null
                ? null
                : new BenchmarkJsonScenarioBaseline(
                    baseline.Iterations,
                    baseline.Observations,
                    baseline.Aliases,
                    baseline.Linksets,
                    baseline.MeanTotalMs,
                    baseline.P95TotalMs,
                    baseline.MaxTotalMs,
                    baseline.MeanInsertMs,
                    baseline.MeanCorrelationMs,
                    baseline.MeanThroughputPerSecond,
                    baseline.MinThroughputPerSecond,
                    baseline.MeanInsertThroughputPerSecond,
                    baseline.MinInsertThroughputPerSecond,
                    baseline.MaxAllocatedMb),
            new BenchmarkJsonScenarioRegression(
                report.DurationRegressionRatio,
                report.ThroughputRegressionRatio,
                report.InsertThroughputRegressionRatio,
                report.RegressionLimit,
                report.RegressionBreached));
    }

    private sealed record BenchmarkJsonDocument(
        string SchemaVersion,
        DateTimeOffset CapturedAt,
        string? Commit,
        string? Environment,
        IReadOnlyList<BenchmarkJsonScenario> Scenarios);

    private sealed record BenchmarkJsonScenario(
        string Id,
        string Label,
        int Iterations,
        int Observations,
        int Aliases,
        int Linksets,
        double MeanTotalMs,
        double P95TotalMs,
        double MaxTotalMs,
        double MeanInsertMs,
        double MeanCorrelationMs,
        double MeanThroughputPerSecond,
        double MinThroughputPerSecond,
        double MeanInsertThroughputPerSecond,
        double MinInsertThroughputPerSecond,
        double MaxAllocatedMb,
        double? ThresholdMs,
        double? MinThroughputThresholdPerSecond,
        double? MinInsertThroughputThresholdPerSecond,
        double? MaxAllocatedThresholdMb,
        BenchmarkJsonScenarioBaseline? Baseline,
        BenchmarkJsonScenarioRegression Regression);

    private sealed record BenchmarkJsonScenarioBaseline(
        int Iterations,
        int Observations,
        int Aliases,
        int Linksets,
        double MeanTotalMs,
        double P95TotalMs,
        double MaxTotalMs,
        double MeanInsertMs,
        double MeanCorrelationMs,
        double MeanThroughputPerSecond,
        double MinThroughputPerSecond,
        double MeanInsertThroughputPerSecond,
        double MinInsertThroughputPerSecond,
        double MaxAllocatedMb);

    private sealed record BenchmarkJsonScenarioRegression(
        double? DurationRatio,
        double? ThroughputRatio,
        double? InsertThroughputRatio,
        double Limit,
        bool Breached);
}

internal sealed record BenchmarkJsonMetadata(
    string SchemaVersion,
    DateTimeOffset CapturedAtUtc,
    string? Commit,
    string? Environment);
@@ -0,0 +1,89 @@
using StellaOps.Bench.LinkNotMerge.Baseline;

namespace StellaOps.Bench.LinkNotMerge.Reporting;

internal sealed class BenchmarkScenarioReport
{
    private const double DefaultRegressionLimit = 1.15d;

    public BenchmarkScenarioReport(ScenarioResult result, BaselineEntry? baseline, double? regressionLimit = null)
    {
        Result = result ?? throw new ArgumentNullException(nameof(result));
        Baseline = baseline;
        RegressionLimit = regressionLimit is { } limit && limit > 0 ? limit : DefaultRegressionLimit;
        DurationRegressionRatio = CalculateRatio(result.TotalStatistics.MaxMs, baseline?.MaxTotalMs);
        ThroughputRegressionRatio = CalculateInverseRatio(result.TotalThroughputStatistics.MinPerSecond, baseline?.MinThroughputPerSecond);
        InsertThroughputRegressionRatio = CalculateInverseRatio(result.InsertThroughputStatistics.MinPerSecond, baseline?.MinInsertThroughputPerSecond);
    }

    public ScenarioResult Result { get; }

    public BaselineEntry? Baseline { get; }

    public double RegressionLimit { get; }

    public double? DurationRegressionRatio { get; }

    public double? ThroughputRegressionRatio { get; }

    public double? InsertThroughputRegressionRatio { get; }

    public bool DurationRegressionBreached => DurationRegressionRatio is { } ratio && ratio >= RegressionLimit;

    public bool ThroughputRegressionBreached => ThroughputRegressionRatio is { } ratio && ratio >= RegressionLimit;

    public bool InsertThroughputRegressionBreached => InsertThroughputRegressionRatio is { } ratio && ratio >= RegressionLimit;

    public bool RegressionBreached => DurationRegressionBreached || ThroughputRegressionBreached || InsertThroughputRegressionBreached;

    public IEnumerable<string> BuildRegressionFailureMessages()
    {
        if (Baseline is null)
        {
            yield break;
        }

        if (DurationRegressionBreached && DurationRegressionRatio is { } durationRatio)
        {
            var delta = (durationRatio - 1d) * 100d;
            yield return $"{Result.Id} exceeded max duration budget: {Result.TotalStatistics.MaxMs:F2} ms vs baseline {Baseline.MaxTotalMs:F2} ms (+{delta:F1}%).";
        }

        if (ThroughputRegressionBreached && ThroughputRegressionRatio is { } throughputRatio)
        {
            var delta = (throughputRatio - 1d) * 100d;
            yield return $"{Result.Id} throughput regressed: min {Result.TotalThroughputStatistics.MinPerSecond:N0} obs/s vs baseline {Baseline.MinThroughputPerSecond:N0} obs/s (-{delta:F1}%).";
        }

        if (InsertThroughputRegressionBreached && InsertThroughputRegressionRatio is { } insertRatio)
        {
            var delta = (insertRatio - 1d) * 100d;
            yield return $"{Result.Id} insert throughput regressed: min {Result.InsertThroughputStatistics.MinPerSecond:N0} ops/s vs baseline {Baseline.MinInsertThroughputPerSecond:N0} ops/s (-{delta:F1}%).";
        }
    }

    private static double? CalculateRatio(double current, double? baseline)
    {
        if (!baseline.HasValue || baseline.Value <= 0d)
        {
            return null;
        }

        return current / baseline.Value;
    }

    private static double? CalculateInverseRatio(double current, double? baseline)
    {
        if (!baseline.HasValue || baseline.Value <= 0d)
        {
            return null;
        }

        if (current <= 0d)
        {
            return double.PositiveInfinity;
        }

        return baseline.Value / current;
    }
}
@@ -0,0 +1,101 @@
using System.Globalization;
using System.Text;

namespace StellaOps.Bench.LinkNotMerge.Reporting;

internal static class PrometheusWriter
{
    public static void Write(string path, IReadOnlyList<BenchmarkScenarioReport> reports)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);
        ArgumentNullException.ThrowIfNull(reports);

        var resolved = Path.GetFullPath(path);
        var directory = Path.GetDirectoryName(resolved);
        if (!string.IsNullOrEmpty(directory))
        {
            Directory.CreateDirectory(directory);
        }

        var builder = new StringBuilder();
        builder.AppendLine("# HELP linknotmerge_bench_total_ms Link-Not-Merge benchmark total duration metrics (milliseconds).");
        builder.AppendLine("# TYPE linknotmerge_bench_total_ms gauge");
        builder.AppendLine("# HELP linknotmerge_bench_correlation_ms Link-Not-Merge benchmark correlation duration metrics (milliseconds).");
        builder.AppendLine("# TYPE linknotmerge_bench_correlation_ms gauge");
        builder.AppendLine("# HELP linknotmerge_bench_insert_ms Link-Not-Merge benchmark insert duration metrics (milliseconds).");
        builder.AppendLine("# TYPE linknotmerge_bench_insert_ms gauge");
        builder.AppendLine("# HELP linknotmerge_bench_throughput_per_sec Link-Not-Merge benchmark throughput metrics (observations per second).");
        builder.AppendLine("# TYPE linknotmerge_bench_throughput_per_sec gauge");
        builder.AppendLine("# HELP linknotmerge_bench_insert_throughput_per_sec Link-Not-Merge benchmark insert throughput metrics (operations per second).");
        builder.AppendLine("# TYPE linknotmerge_bench_insert_throughput_per_sec gauge");
        builder.AppendLine("# HELP linknotmerge_bench_allocated_mb Link-Not-Merge benchmark allocation metrics (megabytes).");
        builder.AppendLine("# TYPE linknotmerge_bench_allocated_mb gauge");

        foreach (var report in reports)
        {
            var scenario = Escape(report.Result.Id);
            AppendMetric(builder, "linknotmerge_bench_mean_total_ms", scenario, report.Result.TotalStatistics.MeanMs);
            AppendMetric(builder, "linknotmerge_bench_p95_total_ms", scenario, report.Result.TotalStatistics.P95Ms);
            AppendMetric(builder, "linknotmerge_bench_max_total_ms", scenario, report.Result.TotalStatistics.MaxMs);
            AppendMetric(builder, "linknotmerge_bench_threshold_ms", scenario, report.Result.ThresholdMs);

            AppendMetric(builder, "linknotmerge_bench_mean_correlation_ms", scenario, report.Result.CorrelationStatistics.MeanMs);
            AppendMetric(builder, "linknotmerge_bench_mean_insert_ms", scenario, report.Result.InsertStatistics.MeanMs);

            AppendMetric(builder, "linknotmerge_bench_mean_throughput_per_sec", scenario, report.Result.TotalThroughputStatistics.MeanPerSecond);
            AppendMetric(builder, "linknotmerge_bench_min_throughput_per_sec", scenario, report.Result.TotalThroughputStatistics.MinPerSecond);
            AppendMetric(builder, "linknotmerge_bench_throughput_floor_per_sec", scenario, report.Result.MinThroughputThresholdPerSecond);

            AppendMetric(builder, "linknotmerge_bench_mean_insert_throughput_per_sec", scenario, report.Result.InsertThroughputStatistics.MeanPerSecond);
            AppendMetric(builder, "linknotmerge_bench_min_insert_throughput_per_sec", scenario, report.Result.InsertThroughputStatistics.MinPerSecond);
            AppendMetric(builder, "linknotmerge_bench_insert_throughput_floor_per_sec", scenario, report.Result.MinInsertThroughputThresholdPerSecond);

            AppendMetric(builder, "linknotmerge_bench_max_allocated_mb", scenario, report.Result.AllocationStatistics.MaxAllocatedMb);
            AppendMetric(builder, "linknotmerge_bench_max_allocated_threshold_mb", scenario, report.Result.MaxAllocatedThresholdMb);

            if (report.Baseline is { } baseline)
            {
                AppendMetric(builder, "linknotmerge_bench_baseline_max_total_ms", scenario, baseline.MaxTotalMs);
                AppendMetric(builder, "linknotmerge_bench_baseline_min_throughput_per_sec", scenario, baseline.MinThroughputPerSecond);
                AppendMetric(builder, "linknotmerge_bench_baseline_min_insert_throughput_per_sec", scenario, baseline.MinInsertThroughputPerSecond);
            }

            if (report.DurationRegressionRatio is { } durationRatio)
            {
                AppendMetric(builder, "linknotmerge_bench_duration_regression_ratio", scenario, durationRatio);
            }

            if (report.ThroughputRegressionRatio is { } throughputRatio)
            {
                AppendMetric(builder, "linknotmerge_bench_throughput_regression_ratio", scenario, throughputRatio);
            }

            if (report.InsertThroughputRegressionRatio is { } insertRatio)
            {
                AppendMetric(builder, "linknotmerge_bench_insert_throughput_regression_ratio", scenario, insertRatio);
            }

            AppendMetric(builder, "linknotmerge_bench_regression_limit", scenario, report.RegressionLimit);
            AppendMetric(builder, "linknotmerge_bench_regression_breached", scenario, report.RegressionBreached ? 1 : 0);
        }

        File.WriteAllText(resolved, builder.ToString(), Encoding.UTF8);
    }

    private static void AppendMetric(StringBuilder builder, string metric, string scenario, double? value)
    {
        if (!value.HasValue)
        {
            return;
        }

        builder.Append(metric);
        builder.Append("{scenario=\"");
        builder.Append(scenario);
        builder.Append("\"} ");
        builder.AppendLine(value.Value.ToString("G17", CultureInfo.InvariantCulture));
    }

    private static string Escape(string value) =>
        value.Replace("\\", "\\\\", StringComparison.Ordinal).Replace("\"", "\\\"", StringComparison.Ordinal);
}
@@ -0,0 +1,14 @@
namespace StellaOps.Bench.LinkNotMerge;

internal sealed record ScenarioExecutionResult(
    IReadOnlyList<double> TotalDurationsMs,
    IReadOnlyList<double> InsertDurationsMs,
    IReadOnlyList<double> CorrelationDurationsMs,
    IReadOnlyList<double> AllocatedMb,
    IReadOnlyList<double> TotalThroughputsPerSecond,
    IReadOnlyList<double> InsertThroughputsPerSecond,
    int ObservationCount,
    int AliasGroups,
    int LinksetCount,
    int TenantCount,
    LinksetAggregationResult AggregationResult);
@@ -0,0 +1,42 @@
using System.Globalization;

namespace StellaOps.Bench.LinkNotMerge;

internal sealed record ScenarioResult(
    string Id,
    string Label,
    int Iterations,
    int ObservationCount,
    int AliasGroups,
    int LinksetCount,
    DurationStatistics TotalStatistics,
    DurationStatistics InsertStatistics,
    DurationStatistics CorrelationStatistics,
    ThroughputStatistics TotalThroughputStatistics,
    ThroughputStatistics InsertThroughputStatistics,
    AllocationStatistics AllocationStatistics,
    double? ThresholdMs,
    double? MinThroughputThresholdPerSecond,
    double? MinInsertThroughputThresholdPerSecond,
    double? MaxAllocatedThresholdMb)
{
    public string IdColumn => Id.Length <= 28 ? Id.PadRight(28) : Id[..28];

    public string ObservationsColumn => ObservationCount.ToString("N0", CultureInfo.InvariantCulture).PadLeft(12);

    public string AliasColumn => AliasGroups.ToString("N0", CultureInfo.InvariantCulture).PadLeft(8);

    public string LinksetColumn => LinksetCount.ToString("N0", CultureInfo.InvariantCulture).PadLeft(9);

    public string TotalMeanColumn => TotalStatistics.MeanMs.ToString("F2", CultureInfo.InvariantCulture).PadLeft(10);

    public string CorrelationMeanColumn => CorrelationStatistics.MeanMs.ToString("F2", CultureInfo.InvariantCulture).PadLeft(10);

    public string InsertMeanColumn => InsertStatistics.MeanMs.ToString("F2", CultureInfo.InvariantCulture).PadLeft(10);

    public string ThroughputColumn => (TotalThroughputStatistics.MinPerSecond / 1_000d).ToString("F2", CultureInfo.InvariantCulture).PadLeft(11);

    public string InsertThroughputColumn => (InsertThroughputStatistics.MinPerSecond / 1_000d).ToString("F2", CultureInfo.InvariantCulture).PadLeft(11);

    public string AllocatedColumn => AllocationStatistics.MaxAllocatedMb.ToString("F2", CultureInfo.InvariantCulture).PadLeft(9);
}
@@ -0,0 +1,84 @@
namespace StellaOps.Bench.LinkNotMerge;

internal readonly record struct DurationStatistics(double MeanMs, double P95Ms, double MaxMs)
{
    public static DurationStatistics From(IReadOnlyList<double> values)
    {
        if (values.Count == 0)
        {
            return new DurationStatistics(0, 0, 0);
        }

        var sorted = values.ToArray();
        Array.Sort(sorted);

        var total = 0d;
        foreach (var value in values)
        {
            total += value;
        }

        var mean = total / values.Count;
        var p95 = Percentile(sorted, 95);
        var max = sorted[^1];

        return new DurationStatistics(mean, p95, max);
    }

    private static double Percentile(IReadOnlyList<double> sorted, double percentile)
    {
        if (sorted.Count == 0)
        {
            return 0;
        }

        var rank = (percentile / 100d) * (sorted.Count - 1);
        var lower = (int)Math.Floor(rank);
        var upper = (int)Math.Ceiling(rank);
        var weight = rank - lower;

        if (upper >= sorted.Count)
        {
            return sorted[lower];
        }

        return sorted[lower] + weight * (sorted[upper] - sorted[lower]);
    }
}

internal readonly record struct ThroughputStatistics(double MeanPerSecond, double MinPerSecond)
{
    public static ThroughputStatistics From(IReadOnlyList<double> values)
    {
        if (values.Count == 0)
        {
            return new ThroughputStatistics(0, 0);
        }

        var total = 0d;
        var min = double.MaxValue;

        foreach (var value in values)
        {
            total += value;
            min = Math.Min(min, value);
        }

        var mean = total / values.Count;
        return new ThroughputStatistics(mean, min);
    }
}

internal readonly record struct AllocationStatistics(double MaxAllocatedMb)
{
    public static AllocationStatistics From(IReadOnlyList<double> values)
    {
        var max = 0d;
        foreach (var value in values)
        {
            max = Math.Max(max, value);
        }

        return new AllocationStatistics(max);
    }
}
@@ -0,0 +1,14 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <LangVersion>preview</LangVersion>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Logging.Abstractions" />
  </ItemGroup>
</Project>
@@ -0,0 +1,11 @@
# LinkNotMerge Benchmark Task Board

This board mirrors active sprint tasks for this module.
Source of truth: `docs-archived/implplan/2025-12-29-csproj-audit/SPRINT_20251229_049_BE_csproj_audit_maint_tests.md`.

| Task ID | Status | Notes |
| --- | --- | --- |
| AUDIT-0102-M | DONE | Revalidated 2026-01-06. |
| AUDIT-0102-T | DONE | Revalidated 2026-01-06. |
| AUDIT-0102-A | DONE | Waived (benchmark project; revalidated 2026-01-06). |
| REMED-06 | DONE | SOLID review notes captured for SPRINT_20260130_002. |
4
src/Tools/StellaOps.Bench/LinkNotMerge/baseline.csv
Normal file
@@ -0,0 +1,4 @@
scenario,iterations,observations,aliases,linksets,mean_total_ms,p95_total_ms,max_total_ms,mean_insert_ms,mean_correlation_ms,mean_throughput_per_sec,min_throughput_per_sec,mean_insert_throughput_per_sec,min_insert_throughput_per_sec,max_allocated_mb
lnm_ingest_baseline,5,5000,500,6000,555.1984,823.4957,866.6236,366.2635,188.9349,9877.7916,5769.5175,15338.0851,8405.1257,62.4477
lnm_ingest_fanout_medium,5,10000,800,14800,785.8909,841.6247,842.8815,453.5087,332.3822,12794.9550,11864.0639,22086.0320,20891.0579,145.8328
lnm_ingest_fanout_high,5,15000,1200,17400,1299.3458,1367.0934,1369.9430,741.6265,557.7193,11571.0991,10949.3607,20232.5180,19781.6762,238.3450
57
src/Tools/StellaOps.Bench/LinkNotMerge/config.json
Normal file
@@ -0,0 +1,57 @@
{
  "thresholdMs": 2000,
  "minThroughputPerSecond": 7000,
  "minInsertThroughputPerSecond": 12000,
  "maxAllocatedMb": 600,
  "iterations": 5,
  "scenarios": [
    {
      "id": "lnm_ingest_baseline",
      "label": "5k observations, 500 aliases",
      "observations": 5000,
      "aliasGroups": 500,
      "purlsPerObservation": 4,
      "cpesPerObservation": 2,
      "referencesPerObservation": 3,
      "tenants": 4,
      "batchSize": 250,
      "seed": 42022,
      "thresholdMs": 900,
      "minThroughputPerSecond": 5500,
      "minInsertThroughputPerSecond": 8000,
      "maxAllocatedMb": 160
    },
    {
      "id": "lnm_ingest_fanout_medium",
      "label": "10k observations, 800 aliases",
      "observations": 10000,
      "aliasGroups": 800,
      "purlsPerObservation": 6,
      "cpesPerObservation": 3,
      "referencesPerObservation": 4,
      "tenants": 6,
      "batchSize": 400,
      "seed": 52022,
      "thresholdMs": 1300,
      "minThroughputPerSecond": 8000,
      "minInsertThroughputPerSecond": 13000,
      "maxAllocatedMb": 220
    },
    {
      "id": "lnm_ingest_fanout_high",
      "label": "15k observations, 1200 aliases",
      "observations": 15000,
      "aliasGroups": 1200,
      "purlsPerObservation": 8,
      "cpesPerObservation": 4,
      "referencesPerObservation": 5,
      "tenants": 8,
      "batchSize": 500,
      "seed": 62022,
      "thresholdMs": 2200,
      "minThroughputPerSecond": 7000,
      "minInsertThroughputPerSecond": 13000,
      "maxAllocatedMb": 300
    }
  ]
}
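The config above pairs file-level thresholds (`thresholdMs: 2000`, and so on) with per-scenario values. A likely resolution rule, sketched here as an assumption (the actual behavior lives in the bench's config loader, and `ResolveThreshold` is an illustrative name, not its API), is that a scenario-level value overrides the file-level default when present:

```csharp
using System;

// Assumed precedence (illustrative only): scenario override first, then the
// file-level default. Nullable double models an absent scenario-level key.
static double ResolveThreshold(double? scenarioValue, double globalValue) =>
    scenarioValue ?? globalValue;

Console.WriteLine(ResolveThreshold(900, 2000));  // lnm_ingest_baseline's own 900 ms wins
Console.WriteLine(ResolveThreshold(null, 2000)); // no override: file-level 2000 ms applies
```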
25
src/Tools/StellaOps.Bench/Notify/README.md
Normal file
@@ -0,0 +1,25 @@
# Notify Dispatch Bench

Synthetic workload measuring rule evaluation and channel dispatch fan-out under varying rule densities.

## Scenarios

`config.json` defines three density profiles (5%, 20%, 40%). Each scenario synthesizes deterministic tenants, rules, and delivery actions to measure:

- Latency (mean/p95/max milliseconds)
- Throughput (deliveries per second)
- Managed heap allocations (megabytes)
- Match fan-out statistics (matches and deliveries per event)
## Running locally

```bash
dotnet run \
  --project src/Bench/StellaOps.Bench/Notify/StellaOps.Bench.Notify/StellaOps.Bench.Notify.csproj \
  -- \
  --csv out/notify-bench.csv \
  --json out/notify-bench.json \
  --prometheus out/notify-bench.prom
```

The benchmark exits non-zero if latency exceeds the configured thresholds, throughput drops below the floor, allocations exceed the ceiling, or regression limits are breached relative to `baseline.csv`.
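The baseline comparison can be distilled to two ratios, as in this minimal sketch (the real gate is the `BenchmarkScenarioReport` type; `RegressionBreached` and its parameters are illustrative names): durations compare current against baseline, throughput compares baseline against current, and either ratio at or above the limit fails the run.

```csharp
using System;

// Illustrative regression gate: ratios normalised so "worse" is always > 1.
// Assumes positive baseline values; the real reporter also handles zero/missing baselines.
static bool RegressionBreached(
    double currentMaxMs, double baselineMaxMs,
    double currentMinThroughput, double baselineMinThroughput,
    double limit = 1.15)
{
    var durationRatio = currentMaxMs / baselineMaxMs;                   // > 1 means slower
    var throughputRatio = baselineMinThroughput / currentMinThroughput; // > 1 means fewer ops/s
    return durationRatio >= limit || throughputRatio >= limit;
}

// 300 ms against a 200 ms baseline is a 1.5x slowdown, past the 1.15 default limit.
Console.WriteLine(RegressionBreached(300, 200, 40_000, 50_000)); // True
```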
@@ -0,0 +1,30 @@
# Notify Benchmark Tests Charter

## Mission
Own the Notify benchmark test suite. Validate config parsing, regression reporting, and deterministic benchmark helpers.

## Responsibilities
- Maintain `StellaOps.Bench.Notify.Tests`.
- Ensure tests remain deterministic and offline-friendly.
- Surface open work on `TASKS.md`; update statuses (TODO/DOING/DONE/BLOCKED/REVIEW).

## Key Paths
- `BaselineLoaderTests.cs`
- `BenchmarkScenarioReportTests.cs`
- `NotifyScenarioRunnerTests.cs`
- `PrometheusWriterTests.cs`

## Coordination
- Bench guild for regression thresholds and baselines.
- Platform guild for determinism expectations.

## Required Reading
- `docs/modules/platform/architecture-overview.md`
- `docs/README.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file `/docs/implplan/SPRINT_*.md` and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert to `TODO` if you pause the task without shipping changes; leave notes in commit/PR descriptions for context.
@@ -0,0 +1,40 @@
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Bench.Notify.Baseline;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.Notify.Tests;

public sealed class BaselineLoaderTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public async Task LoadAsync_ReadsBaselineEntries()
    {
        var path = Path.GetTempFileName();
        try
        {
            await File.WriteAllTextAsync(
                path,
                "scenario,iterations,events,deliveries,mean_ms,p95_ms,max_ms,mean_throughput_per_sec,min_throughput_per_sec,max_allocated_mb\n" +
                "notify_dispatch_density_05,5,5000,25000,120.5,150.1,199.9,42000.5,39000.2,85.7\n");

            var entries = await BaselineLoader.LoadAsync(path, CancellationToken.None);
            var entry = Assert.Single(entries);

            Assert.Equal("notify_dispatch_density_05", entry.Key);
            Assert.Equal(5, entry.Value.Iterations);
            Assert.Equal(5000, entry.Value.EventCount);
            Assert.Equal(25000, entry.Value.DeliveryCount);
            Assert.Equal(120.5, entry.Value.MeanMs);
            Assert.Equal(39000.2, entry.Value.MinThroughputPerSecond);
            Assert.Equal(85.7, entry.Value.MaxAllocatedMb);
        }
        finally
        {
            File.Delete(path);
        }
    }
}
@@ -0,0 +1,88 @@
using System.Linq;
using StellaOps.Bench.Notify.Baseline;
using StellaOps.Bench.Notify.Reporting;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.Notify.Tests;

public sealed class BenchmarkScenarioReportTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void RegressionDetection_FlagsLatencies()
    {
        var result = new ScenarioResult(
            Id: "scenario",
            Label: "Scenario",
            Iterations: 3,
            TotalEvents: 1000,
            TotalRules: 100,
            ActionsPerRule: 2,
            AverageMatchesPerEvent: 10,
            MinMatchesPerEvent: 8,
            MaxMatchesPerEvent: 12,
            AverageDeliveriesPerEvent: 20,
            TotalDeliveries: 20000,
            MeanMs: 200,
            P95Ms: 250,
            MaxMs: 300,
            MeanThroughputPerSecond: 50000,
            MinThroughputPerSecond: 40000,
            MaxAllocatedMb: 100,
            ThresholdMs: null,
            MinThroughputThresholdPerSecond: null,
            MaxAllocatedThresholdMb: null);

        var baseline = new BaselineEntry(
            ScenarioId: "scenario",
            Iterations: 3,
            EventCount: 1000,
            DeliveryCount: 20000,
            MeanMs: 150,
            P95Ms: 180,
            MaxMs: 200,
            MeanThroughputPerSecond: 60000,
            MinThroughputPerSecond: 50000,
            MaxAllocatedMb: 90);

        var report = new BenchmarkScenarioReport(result, baseline, regressionLimit: 1.1);

        Assert.True(report.DurationRegressionBreached);
        Assert.True(report.ThroughputRegressionBreached);
        Assert.Contains(report.BuildRegressionFailureMessages(), message => message.Contains("max duration"));
    }

    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void RegressionDetection_NoBaseline_NoBreaches()
    {
        var result = new ScenarioResult(
            Id: "scenario",
            Label: "Scenario",
            Iterations: 3,
            TotalEvents: 1000,
            TotalRules: 100,
            ActionsPerRule: 2,
            AverageMatchesPerEvent: 10,
            MinMatchesPerEvent: 8,
            MaxMatchesPerEvent: 12,
            AverageDeliveriesPerEvent: 20,
            TotalDeliveries: 20000,
            MeanMs: 200,
            P95Ms: 250,
            MaxMs: 300,
            MeanThroughputPerSecond: 50000,
            MinThroughputPerSecond: 40000,
            MaxAllocatedMb: 100,
            ThresholdMs: null,
            MinThroughputThresholdPerSecond: null,
            MaxAllocatedThresholdMb: null);

        var report = new BenchmarkScenarioReport(result, baseline: null, regressionLimit: null);

        Assert.False(report.DurationRegressionBreached);
        Assert.False(report.ThroughputRegressionBreached);
        Assert.Empty(report.BuildRegressionFailureMessages());
    }
}
@@ -0,0 +1,35 @@
using System.Threading;

using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.Notify.Tests;

public sealed class NotifyScenarioRunnerTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void Execute_ComputesDeterministicMetrics()
    {
        var config = new NotifyScenarioConfig
        {
            Id = "unit_test",
            EventCount = 500,
            RuleCount = 40,
            ActionsPerRule = 3,
            MatchRate = 0.25,
            TenantCount = 4,
            ChannelCount = 16
        };

        var runner = new NotifyScenarioRunner(config);
        var result = runner.Execute(2, CancellationToken.None);

        Assert.Equal(config.ResolveEventCount(), result.TotalEvents);
        Assert.Equal(config.ResolveRuleCount(), result.TotalRules);
        Assert.Equal(config.ResolveActionsPerRule(), result.ActionsPerRule);
        Assert.True(result.TotalMatches > 0);
        Assert.Equal(result.TotalMatches * result.ActionsPerRule, result.TotalDeliveries);
        Assert.Equal(2, result.Durations.Count);
        Assert.All(result.Durations, value => Assert.True(value > 0));
    }
}
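A back-of-envelope check for the configured scenario: if the runner applies `MatchRate` per event/rule pair (a hypothetical model of its internals, not confirmed by the source), the expected counts follow directly from the config:

```python
# Config values from the test above; the per-pair match model is an assumption.
event_count, rule_count, actions_per_rule, match_rate = 500, 40, 3, 0.25

expected_matches = int(event_count * rule_count * match_rate)  # under this model
# The test's invariant: every match fans out into one delivery per action.
expected_deliveries = expected_matches * actions_per_rule
print(expected_matches, expected_deliveries)
```

Whatever the match model, the `TotalDeliveries == TotalMatches * ActionsPerRule` invariant asserted by the test holds by construction here.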
@@ -0,0 +1,66 @@
using System.IO;

using StellaOps.Bench.Notify.Baseline;
using StellaOps.Bench.Notify.Reporting;
using StellaOps.TestKit;
using Xunit;

namespace StellaOps.Bench.Notify.Tests;

public sealed class PrometheusWriterTests
{
    [Trait("Category", TestCategories.Unit)]
    [Fact]
    public void Write_EmitsScenarioMetrics()
    {
        var result = new ScenarioResult(
            Id: "scenario",
            Label: "Scenario",
            Iterations: 3,
            TotalEvents: 1000,
            TotalRules: 100,
            ActionsPerRule: 2,
            AverageMatchesPerEvent: 10,
            MinMatchesPerEvent: 8,
            MaxMatchesPerEvent: 12,
            AverageDeliveriesPerEvent: 20,
            TotalDeliveries: 20000,
            MeanMs: 200,
            P95Ms: 250,
            MaxMs: 300,
            MeanThroughputPerSecond: 50000,
            MinThroughputPerSecond: 40000,
            MaxAllocatedMb: 100,
            ThresholdMs: 900,
            MinThroughputThresholdPerSecond: 35000,
            MaxAllocatedThresholdMb: 150);

        var baseline = new BaselineEntry(
            ScenarioId: "scenario",
            Iterations: 3,
            EventCount: 1000,
            DeliveryCount: 20000,
            MeanMs: 180,
            P95Ms: 210,
            MaxMs: 240,
            MeanThroughputPerSecond: 52000,
            MinThroughputPerSecond: 41000,
            MaxAllocatedMb: 95);

        var report = new BenchmarkScenarioReport(result, baseline);

        var path = Path.GetTempFileName();
        try
        {
            PrometheusWriter.Write(path, new[] { report });
            var content = File.ReadAllText(path);

            Assert.Contains("notify_dispatch_bench_mean_ms", content);
            Assert.Contains("scenario\"} 200", content);
            Assert.Contains("notify_dispatch_bench_baseline_mean_ms", content);
        }
        finally
        {
            File.Delete(path);
        }
    }
}
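The assertions above imply a Prometheus text-exposition output keyed by a `scenario` label. A hedged sketch of lines that would satisfy them; the metric names come from the test itself, while the exact label set and any `# HELP`/`# TYPE` headers are assumptions:

```python
def render_lines(scenario: str, mean_ms: float, baseline_mean_ms: float) -> str:
    # Gauge-style exposition lines; doubled braces in the f-string emit literal {}.
    return "\n".join([
        f'notify_dispatch_bench_mean_ms{{scenario="{scenario}"}} {mean_ms:g}',
        f'notify_dispatch_bench_baseline_mean_ms{{scenario="{scenario}"}} {baseline_mean_ms:g}',
    ])

content = render_lines("scenario", 200, 180)
print('scenario"} 200' in content)  # the substring the test checks
```

Note the test pins the value formatting too: `scenario"} 200` only matches if 200 is rendered without a decimal point.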
@@ -0,0 +1,21 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <LangVersion>preview</LangVersion>
    <TreatWarningsAsErrors>false</TreatWarningsAsErrors>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="coverlet.collector">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\StellaOps.Bench.Notify\StellaOps.Bench.Notify.csproj" />
    <ProjectReference Include="../../../../__Libraries/StellaOps.TestKit/StellaOps.TestKit.csproj" />
  </ItemGroup>
</Project>
@@ -0,0 +1,11 @@
# Notify Benchmark Tests Task Board

This board mirrors active sprint tasks for this module.
Source of truth: `docs-archived/implplan/2025-12-29-csproj-audit/SPRINT_20251229_049_BE_csproj_audit_maint_tests.md`.

| Task ID | Status | Notes |
| --- | --- | --- |
| AUDIT-0107-M | DONE | Revalidated 2026-01-06. |
| AUDIT-0107-T | DONE | Revalidated 2026-01-06. |
| AUDIT-0107-A | DONE | Waived (test project; revalidated 2026-01-06). |
| REMED-06 | DONE | SOLID review notes captured for SPRINT_20260130_002. |
@@ -0,0 +1,30 @@
# Notify Benchmark Charter

## Mission
Own the Notify dispatch benchmark harness and its reporting outputs. Keep runs deterministic, offline-friendly, and aligned with production notify flows.

## Responsibilities
- Maintain the `StellaOps.Bench.Notify` runner, config parsing, and output writers.
- Keep benchmark inputs deterministic and document the default datasets.
- Surface open work in `TASKS.md`; update statuses (TODO/DOING/DONE/BLOCKED/REVIEW).

## Key Paths
- `Program.cs`
- `BenchmarkConfig.cs`
- `NotifyScenarioRunner.cs`
- `Reporting/`

## Coordination
- Bench guild for performance baselines.
- Platform guild for determinism and offline expectations.

## Required Reading
- `docs/modules/platform/architecture-overview.md`
- `docs/README.md`

## Working Agreement
1. Update task status to `DOING`/`DONE` in both the corresponding sprint file (`/docs/implplan/SPRINT_*.md`) and the local `TASKS.md` when you start or finish work.
2. Review this charter and the Required Reading documents before coding; confirm prerequisites are met.
3. Keep changes deterministic (stable ordering, timestamps, hashes) and align with offline/air-gap expectations.
4. Coordinate doc updates, tests, and cross-guild communication whenever contracts or workflows change.
5. Revert a task to `TODO` if you pause it without shipping changes; leave notes in commit/PR descriptions for context.
@@ -0,0 +1,13 @@
namespace StellaOps.Bench.Notify.Baseline;

internal sealed record BaselineEntry(
    string ScenarioId,
    int Iterations,
    int EventCount,
    int DeliveryCount,
    double MeanMs,
    double P95Ms,
    double MaxMs,
    double MeanThroughputPerSecond,
    double MinThroughputPerSecond,
    double MaxAllocatedMb);
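For illustration, one plausible serialized form of a baseline entry, using the values from the PrometheusWriter test above. The field names mirror the record, but the storage format (JSON vs. CSV) and file layout are assumptions:

```json
{
  "scenarioId": "scenario",
  "iterations": 3,
  "eventCount": 1000,
  "deliveryCount": 20000,
  "meanMs": 180.0,
  "p95Ms": 210.0,
  "maxMs": 240.0,
  "meanThroughputPerSecond": 52000.0,
  "minThroughputPerSecond": 41000.0,
  "maxAllocatedMb": 95.0
}
```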