# Scanner Analyzer Microbench Harness
The bench harness exercises the language analyzers against representative filesystem layouts so that regressions are caught before they ship.
## Layout
- `StellaOps.Bench.ScannerAnalyzers/` – .NET 10 console harness that executes the real language analyzers (and fallback metadata walks for ecosystems that are still underway).
- `config.json` – Declarative list of scenarios the harness executes. Each scenario points at a directory in `samples/`.
- `baseline.csv` – Reference numbers captured on the 4 vCPU warm rig described in `docs/12_PERFORMANCE_WORKBOOK.md`. CI publishes fresh CSVs so perf trends stay visible.
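
For orientation, a rough sketch of how these pieces sit on disk. The `bench/Scanner.Analyzers/` root is inferred from the paths in the run command below, and the exact location of the `samples/` fixtures should be checked against the scenario roots in `config.json`.

```text
bench/Scanner.Analyzers/
├── StellaOps.Bench.ScannerAnalyzers/   # .NET 10 console harness
├── config.json                         # scenario definitions
├── baseline.csv                        # reference numbers from the 4 vCPU warm rig
└── samples/                            # fixture trees referenced by scenario roots (assumed location)
```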
## Current scenarios
- `node_monorepo_walk` → runs the Node analyzer across `samples/runtime/npm-monorepo`.
- `java_demo_archive` → runs the Java analyzer against `samples/runtime/java-demo/libs/demo.jar`.
- `python_site_packages_walk` → temporary metadata walk over `samples/runtime/python-venv` until the Python analyzer lands.
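
To make the shape concrete, an analyzer-backed entry in `config.json` might look roughly like the sketch below. The `id` and `root` values come from the scenario list above; the `label` text and the surrounding file structure (array vs. object wrapper) are assumptions to verify against the checked-in `config.json`.

```json
{
  "id": "node_monorepo_walk",
  "label": "Node analyzer across the npm monorepo fixture",
  "root": "samples/runtime/npm-monorepo",
  "analyzers": ["node"]
}
```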
## Running locally
```bash
dotnet run \
  --project bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj \
  -- \
  --repo-root . \
  --out bench/Scanner.Analyzers/baseline.csv \
  --json out/bench/scanner-analyzers/latest.json \
  --prom out/bench/scanner-analyzers/latest.prom \
  --commit "$(git rev-parse HEAD)"
```
The harness prints a table to stdout and, when `--out` is specified, writes a CSV with the following headers:

```
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
```
Additional outputs:
- `--json` emits a deterministic report consumable by Grafana/automation (schema `1.0`, see `docs/12_PERFORMANCE_WORKBOOK.md`).
- `--prom` exports Prometheus-compatible gauges (`scanner_analyzer_bench_*`), which CI uploads for dashboards and alerts.
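
As a quick sanity check after a run, the Prometheus export can be inspected directly. The path matches the invocation above; the only assumption is that the series carry the `scanner_analyzer_bench_` prefix noted here.

```bash
# List the exported bench gauges from the most recent run.
grep 'scanner_analyzer_bench_' out/bench/scanner-analyzers/latest.prom
```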
Use `--iterations` to override the default (5 passes per scenario) and `--threshold-ms` to customize the failure budget. Budgets default to 5 000 ms (or per-scenario overrides in `config.json`), aligned with the SBOM compose objective. Provide `--baseline path/to/baseline.csv` (defaults to the repo baseline) to compare against historical numbers; regressions ≥ 20 % on the `max_ms` metric, or breaches of the configured threshold, fail the run. A combined example follows the metadata options below.
Metadata options:
- `--captured-at 2025-10-23T12:00:00Z` injects a deterministic timestamp (otherwise `UtcNow` is used).
- `--commit` and `--environment` annotate the JSON report for dashboards.
- `--regression-limit 1.15` adjusts the ratio guard (default 1.20 ⇒ +20 %).
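
Putting these together, a CI-style invocation might look like the sketch below. Every flag appears in the sections above; the iteration count, budget, environment name, and output paths are illustrative values, not requirements.

```bash
dotnet run \
  --project bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj \
  -- \
  --repo-root . \
  --iterations 10 \
  --threshold-ms 4000 \
  --baseline bench/Scanner.Analyzers/baseline.csv \
  --json out/bench/scanner-analyzers/latest.json \
  --prom out/bench/scanner-analyzers/latest.prom \
  --captured-at 2025-10-23T12:00:00Z \
  --commit "$(git rev-parse HEAD)" \
  --environment ci-warm-4vcpu \
  --regression-limit 1.15
```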
## Adding scenarios
- Drop the fixture tree under `samples/<area>/...`.
- Append a new scenario entry to `config.json` describing:
  - `id` – snake_case scenario name (also used in CSV).
  - `label` – human-friendly description shown in logs.
  - `root` – path to the directory that will be scanned.
  - For analyzer-backed scenarios, set `analyzers` to the list of language analyzer ids (for example, `["node"]`).
  - For temporary metadata walks (used until the analyzer ships), provide `parser` (`node` or `python`) and the `matcher` glob describing files to parse. A sketch of such an entry follows this list.
- Re-run the harness (`dotnet run … --out baseline.csv --json out/.../new.json --prom out/.../new.prom`).
- Commit both the fixture and the updated baseline.
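
For the temporary metadata-walk form, the same fields apply except that `parser` and `matcher` replace `analyzers`. The sketch below is illustrative only: `id`, `root`, and the `parser` value come from the sections above, while the `label` text and the `matcher` glob are invented placeholders to be replaced with real values.

```json
{
  "id": "python_site_packages_walk",
  "label": "Metadata walk over the Python venv fixture",
  "root": "samples/runtime/python-venv",
  "parser": "python",
  "matcher": "**/*.dist-info/METADATA"
}
```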