# Scanner Analyzer Microbench Harness

The bench harness exercises the language analyzers against representative filesystem layouts so that regressions are caught before they ship.

## Layout

- `StellaOps.Bench.ScannerAnalyzers/` – .NET 10 console harness that executes the real language analyzers (and fallback metadata walks for ecosystems that are still underway).
- `config.json` – Declarative list of scenarios the harness executes. Each scenario points at a directory in `samples/`.
- `baseline.csv` – Reference numbers captured on the 4 vCPU warm rig described in `docs/12_PERFORMANCE_WORKBOOK.md`. CI publishes fresh CSVs so perf trends stay visible.

## Current scenarios

- `node_monorepo_walk` → runs the Node analyzer across `samples/runtime/npm-monorepo`.
- `java_demo_archive` → runs the Java analyzer against `samples/runtime/java-demo/libs/demo.jar`.
- `python_site_packages_walk` → temporary metadata walk over `samples/runtime/python-venv` until the Python analyzer lands.

## Running locally

```bash
dotnet run \
  --project bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj \
  -- \
  --repo-root . \
  --out bench/Scanner.Analyzers/baseline.csv
```

The harness prints a table to stdout and, if `--out` is specified, writes a CSV with the following headers:

```
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
```

Use `--iterations` to override the default (5 passes per scenario) and `--threshold-ms` to customize the failure budget. Budgets default to 5 000 ms (or per-scenario overrides in `config.json`), aligned with the SBOM compose objective.

## Adding scenarios

1. Drop the fixture tree under `samples//...`.
2. Append a new scenario entry to `config.json` describing:
   - `id` – snake_case scenario name (also used in CSV).
   - `label` – human-friendly description shown in logs.
   - `root` – path to the directory that will be scanned.
   - For analyzer-backed scenarios, set `analyzers` to the list of language analyzer ids (for example, `["node"]`).
   - For temporary metadata walks (used until the analyzer ships), provide `parser` (`node` or `python`) and the `matcher` glob describing files to parse.
3. Re-run the harness (`dotnet run … --out baseline.csv`).
4. Commit both the fixture and updated baseline.
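
Putting steps 1–2 together, an analyzer-backed entry in `config.json` might look like the sketch below. The field names `id`, `label`, `root`, and `analyzers` come from the list above; the scenario name, paths, and the per-scenario threshold key (`thresholdMs`) are illustrative assumptions, not copied from the real file:

```json
{
  "id": "node_fixture_walk",
  "label": "Node analyzer over a hypothetical fixture tree",
  "root": "samples/runtime/node-fixture",
  "analyzers": ["node"],
  "thresholdMs": 5000
}
```

A metadata-walk scenario would instead replace `analyzers` with the `parser` and `matcher` keys described above.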