# Scanner Analyzer Microbench Harness
The bench harness exercises the language analyzers against representative filesystem layouts so that regressions are caught before they ship.
## Layout
- `StellaOps.Bench.ScannerAnalyzers/` – .NET 10 console harness that executes the real language analyzers (plus fallback metadata walks for ecosystems whose analyzers are still in progress).
- `config.json` – declarative list of scenarios the harness executes; each scenario points at a directory under `samples/`.
- `baseline.csv` – reference numbers captured on the 4 vCPU warm rig described in `docs/12_PERFORMANCE_WORKBOOK.md`. CI publishes fresh CSVs so performance trends stay visible.
## Current scenarios
- `node_monorepo_walk` → runs the Node analyzer across `samples/runtime/npm-monorepo`.
- `java_demo_archive` → runs the Java analyzer against `samples/runtime/java-demo/libs/demo.jar`.
- `python_site_packages_walk` → temporary metadata walk over `samples/runtime/python-venv` until the Python analyzer lands.
## Running locally
```bash
dotnet run \
  --project bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj \
  -- \
  --repo-root . \
  --out bench/Scanner.Analyzers/baseline.csv
```
The harness prints a results table to stdout and, when `--out` is specified, writes a CSV with the following header row:
```
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
```
Use `--iterations` to override the default of 5 passes per scenario and `--threshold-ms` to adjust the failure budget. Budgets default to 5000 ms unless a scenario defines its own override in `config.json`, in line with the SBOM compose objective.
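For example, a local run that adds extra passes and tightens the budget might look like the sketch below; `--repo-root` and `--out` are taken from the earlier example, while the iteration count, threshold, and output path are illustrative values, assuming the flags can be combined in a single invocation.
```bash
# Illustrative invocation: 10 passes per scenario, 2.5 s failure budget.
# The output path is arbitrary; point it wherever you want the CSV written.
dotnet run \
  --project bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj \
  -- \
  --repo-root . \
  --iterations 10 \
  --threshold-ms 2500 \
  --out /tmp/scanner-analyzers-bench.csv
```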
## Adding scenarios
1. Drop the fixture tree under `samples/<area>/...`.
2. Append a new scenario entry to `config.json` (see the sketch after these steps) describing:
   - `id` – snake_case scenario name (also used in the CSV output).
   - `label` – human-friendly description shown in logs.
   - `root` – path to the directory that will be scanned.
   - For analyzer-backed scenarios, set `analyzers` to the list of language analyzer ids (for example, `["node"]`).
   - For temporary metadata walks (used until the corresponding analyzer ships), provide `parser` (`node` or `python`) and a `matcher` glob describing which files to parse.
3. Re-run the harness (`dotnet run … --out baseline.csv`).
4. Commit both the fixture and updated baseline.
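For orientation, scenario entries might look roughly like the sketch below. The field names match the list in step 2 and the ids and roots mirror the current scenarios, but the overall file shape (a bare top-level array), the `label` strings, and the `matcher` glob are illustrative assumptions; treat the existing `config.json` as the authoritative schema.
```json
[
  {
    "id": "node_monorepo_walk",
    "label": "Node analyzer across the npm monorepo fixture",
    "root": "samples/runtime/npm-monorepo",
    "analyzers": ["node"]
  },
  {
    "id": "python_site_packages_walk",
    "label": "Temporary metadata walk over the Python venv fixture",
    "root": "samples/runtime/python-venv",
    "parser": "python",
    "matcher": "**/*.dist-info/METADATA"
  }
]
```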