Add tests and implement StubBearer authentication for Signer endpoints
- Created SignerEndpointsTests to validate the SignDsse and VerifyReferrers endpoints.
- Implemented StubBearerAuthenticationDefaults and StubBearerAuthenticationHandler for token-based authentication.
- Developed ConcelierExporterClient for managing Trivy DB settings and export operations.
- Added TrivyDbSettingsPageComponent for UI interactions with Trivy DB settings, including form handling and export triggering.
- Implemented styles and HTML structure for the Trivy DB settings page.
- Created NotifySmokeCheck tool for validating Redis event streams and Notify deliveries.

The bench harness exercises the language analyzers against representative filesystem layouts so that regressions are caught before they ship.

## Layout
- `StellaOps.Bench.ScannerAnalyzers/` – .NET 10 console harness that executes the real language analyzers (and fallback metadata walks for ecosystems that are still underway).
- `config.json` – Declarative list of scenarios the harness executes. Each scenario points at a directory in `samples/`.
- `baseline.csv` – Reference numbers captured on the 4 vCPU warm rig described in `docs/12_PERFORMANCE_WORKBOOK.md`. CI publishes fresh CSVs so perf trends stay visible.
## Current scenarios
- `node_monorepo_walk` → runs the Node analyzer across `samples/runtime/npm-monorepo`.
- `java_demo_archive` → runs the Java analyzer against `samples/runtime/java-demo/libs/demo.jar`.
- `python_site_packages_walk` → temporary metadata walk over `samples/runtime/python-venv` until the Python analyzer lands.
## Running locally
```bash
dotnet run \
  --project bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj \
  -- \
  --repo-root . \
  --out bench/Scanner.Analyzers/baseline.csv
```
The harness prints a table to stdout and, when `--out` is specified, writes a CSV with the following header row:
```
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
```
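
One data row per scenario follows that header. The row below is purely illustrative (placeholder numbers, not captured measurements):

```
node_monorepo_walk,5,1,312.4,356.9,401.2
```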
Use `--iterations` to override the default (5 passes per scenario) and `--threshold-ms` to customize the failure budget. The budget defaults to 5,000 ms, with per-scenario overrides available in `config.json`, aligned with the SBOM compose objective.
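
For example, a stricter local run combining the documented flags might look like this (the iteration count and budget shown are illustrative values, not recommendations):

```bash
dotnet run \
  --project bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj \
  -- \
  --repo-root . \
  --iterations 10 \
  --threshold-ms 2000
```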
## Adding scenarios
1. Drop the fixture tree under `samples/<area>/...`.
2. Append a new scenario entry to `config.json` (see the sketch after this list) describing:
   - `id` – snake_case scenario name (also used in CSV).
   - `label` – human-friendly description shown in logs.
   - `root` – path to the directory that will be scanned.
   - For analyzer-backed scenarios, set `analyzers` to the list of language analyzer ids (for example, `["node"]`).
   - For temporary metadata walks (used until the analyzer ships), provide `parser` (`node` or `python`) and the `matcher` glob describing files to parse.
3. Re-run the harness (`dotnet run … --out baseline.csv`).
4. Commit both the fixture and updated baseline.
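
As a reference point for step 2, here is a minimal sketch of what an analyzer-backed entry and a temporary metadata-walk entry could look like. The `scenarios` wrapper key and the `matcher` glob below are assumptions for illustration; treat the actual `config.json` in this directory as authoritative:

```json
{
  "scenarios": [
    {
      "id": "node_monorepo_walk",
      "label": "Node analyzer across the npm monorepo fixture",
      "root": "samples/runtime/npm-monorepo",
      "analyzers": ["node"]
    },
    {
      "id": "python_site_packages_walk",
      "label": "Temporary metadata walk over the Python venv fixture",
      "root": "samples/runtime/python-venv",
      "parser": "python",
      "matcher": "**/*.dist-info/METADATA"
    }
  ]
}
```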