feat: Add new projects to solution and implement contract testing documentation
- Added "StellaOps.Policy.Engine", "StellaOps.Cartographer", and "StellaOps.SbomService" projects to the StellaOps solution.
- Created AGENTS.md to outline the Contract Testing Guild Charter, detailing mission, scope, and definition of done.
- Established TASKS.md for the Contract Testing Task Board, outlining tasks for Sprint 62 and Sprint 63 related to mock servers and replay testing.
src/StellaOps.Bench/Scanner.Analyzers/README.md (new file, 54 lines)
@@ -0,0 +1,54 @@
# Scanner Analyzer Microbench Harness

The bench harness exercises the language analyzers against representative filesystem layouts so that regressions are caught before they ship.

## Layout

- `StellaOps.Bench.ScannerAnalyzers/` – .NET 10 console harness that executes the real language analyzers (and fallback metadata walks for ecosystems that are still underway).
- `config.json` – declarative list of scenarios the harness executes. Each scenario points at a directory in `samples/`.
- `baseline.csv` – reference numbers captured on the 4 vCPU warm rig described in `docs/12_PERFORMANCE_WORKBOOK.md`. CI publishes fresh CSVs so perf trends stay visible.

## Current scenarios

- `node_monorepo_walk` → runs the Node analyzer across `samples/runtime/npm-monorepo`.
- `java_demo_archive` → runs the Java analyzer against `samples/runtime/java-demo/libs/demo.jar`.
- `python_site_packages_walk` → temporary metadata walk over `samples/runtime/python-venv` until the Python analyzer lands.
## Running locally

```bash
dotnet run \
  --project src/StellaOps.Bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj \
  -- \
  --repo-root . \
  --out src/StellaOps.Bench/Scanner.Analyzers/baseline.csv \
  --json out/bench/scanner-analyzers/latest.json \
  --prom out/bench/scanner-analyzers/latest.prom \
  --commit "$(git rev-parse HEAD)"
```
The harness prints a table to stdout and writes the CSV (if `--out` is specified) with the following headers:

```
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
```
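For reference, populated rows follow the same shape. The values below are illustrative only (they are taken from the unit-test fixture later in this commit, not from a real capture):

```
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
node_monorepo_walk,5,4,9.4303,36.1354,45.0012
python_site_packages_walk,5,10,12.1000,18.2000,26.3000
```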
Additional outputs:
- `--json` emits a deterministic report consumable by Grafana/automation (schema `1.0`, see `docs/12_PERFORMANCE_WORKBOOK.md`).
- `--prom` exports Prometheus-compatible gauges (`scanner_analyzer_bench_*`), which CI uploads for dashboards and alerts.

Use `--iterations` to override the default (5 passes per scenario) and `--threshold-ms` to customize the failure budget. Budgets default to 5000 ms (or per-scenario overrides in `config.json`), aligned with the SBOM compose objective. Provide `--baseline path/to/baseline.csv` (defaults to the repo baseline) to compare against historical numbers; regressions of ≥ 20 % on the `max_ms` metric, or breaches of the configured threshold, fail the run.
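The regression guard is simple arithmetic. A minimal sketch (in Python rather than the harness's C#, and assuming ≥-semantics at exactly the limit) of how a `max_ms` regression is judged:

```python
# Illustrative only: mirrors the ratio guard described above, not the harness code.
def regression_ratio(current_max_ms: float, baseline_max_ms: float) -> float:
    """Ratio of the current run's max_ms to the baseline max_ms."""
    return current_max_ms / baseline_max_ms


def breached(ratio: float, limit: float = 1.20) -> bool:
    """True when the run regressed by at least the configured limit (default +20 %)."""
    return ratio >= limit


ratio = regression_ratio(20.0, 15.0)  # 1.333..., i.e. +33.3 %
print(f"+{(ratio - 1) * 100:.1f}%", breached(ratio))
```

A current `max_ms` of 20 ms against a baseline of 15 ms yields a ratio of 1.33, which exceeds the default 1.20 limit and fails the run.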
Metadata options:
- `--captured-at 2025-10-23T12:00:00Z` to inject a deterministic timestamp (otherwise `UtcNow`).
- `--commit` and `--environment` annotate the JSON report for dashboards.
- `--regression-limit 1.15` adjusts the ratio guard (default 1.20 ⇒ +20 %).
## Adding scenarios
1. Drop the fixture tree under `samples/<area>/...`.
2. Append a new scenario entry to `config.json` describing:
   - `id` – snake_case scenario name (also used in the CSV).
   - `label` – human-friendly description shown in logs.
   - `root` – path to the directory that will be scanned.
   - For analyzer-backed scenarios, set `analyzers` to the list of language analyzer ids (for example, `["node"]`).
   - For temporary metadata walks (used until the analyzer ships), provide `parser` (`node` or `python`) and the `matcher` glob describing files to parse.
3. Re-run the harness (`dotnet run … --out baseline.csv --json out/.../new.json --prom out/.../new.prom`).
4. Commit both the fixture and updated baseline.
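Putting the fields together, a scenario entry might look like the following (the `label` text here is made up; `id`, `root`, and `analyzers` match the Node scenario listed above):

```json
{
  "id": "node_monorepo_walk",
  "label": "Node analyzer across the npm monorepo fixture",
  "root": "samples/runtime/npm-monorepo",
  "analyzers": ["node"]
}
```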
@@ -0,0 +1,37 @@
using System.Text;
using StellaOps.Bench.ScannerAnalyzers.Baseline;
using Xunit;

namespace StellaOps.Bench.ScannerAnalyzers.Tests;

public sealed class BaselineLoaderTests
{
    [Fact]
    public async Task LoadAsync_ReadsCsvIntoDictionary()
    {
        var csv = """
            scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
            node_monorepo_walk,5,4,9.4303,36.1354,45.0012
            python_site_packages_walk,5,10,12.1000,18.2000,26.3000
            """;

        var path = await WriteTempFileAsync(csv);

        var result = await BaselineLoader.LoadAsync(path, CancellationToken.None);

        Assert.Equal(2, result.Count);
        var entry = Assert.Contains("node_monorepo_walk", result);
        Assert.Equal(5, entry.Iterations);
        Assert.Equal(4, entry.SampleCount);
        Assert.Equal(9.4303, entry.MeanMs, 4);
        Assert.Equal(36.1354, entry.P95Ms, 4);
        Assert.Equal(45.0012, entry.MaxMs, 4);
    }

    private static async Task<string> WriteTempFileAsync(string content)
    {
        var path = Path.Combine(Path.GetTempPath(), $"baseline-{Guid.NewGuid():N}.csv");
        await File.WriteAllTextAsync(path, content, Encoding.UTF8);
        return path;
    }
}
@@ -0,0 +1,41 @@
using System.Text.Json;
using StellaOps.Bench.ScannerAnalyzers;
using StellaOps.Bench.ScannerAnalyzers.Baseline;
using StellaOps.Bench.ScannerAnalyzers.Reporting;
using Xunit;

namespace StellaOps.Bench.ScannerAnalyzers.Tests;

public sealed class BenchmarkJsonWriterTests
{
    [Fact]
    public async Task WriteAsync_EmitsMetadataAndScenarioDetails()
    {
        var metadata = new BenchmarkJsonMetadata("1.0", DateTimeOffset.Parse("2025-10-23T12:00:00Z"), "abc123", "ci");
        var result = new ScenarioResult(
            "scenario",
            "Scenario",
            SampleCount: 5,
            MeanMs: 10,
            P95Ms: 12,
            MaxMs: 20,
            Iterations: 5,
            ThresholdMs: 5000);
        var baseline = new BaselineEntry("scenario", 5, 5, 9, 11, 10);
        var report = new BenchmarkScenarioReport(result, baseline, 1.2);

        var path = Path.Combine(Path.GetTempPath(), $"bench-{Guid.NewGuid():N}.json");
        await BenchmarkJsonWriter.WriteAsync(path, metadata, new[] { report }, CancellationToken.None);

        using var document = JsonDocument.Parse(await File.ReadAllTextAsync(path));
        var root = document.RootElement;

        Assert.Equal("1.0", root.GetProperty("schemaVersion").GetString());
        Assert.Equal("abc123", root.GetProperty("commit").GetString());
        var scenario = root.GetProperty("scenarios")[0];
        Assert.Equal("scenario", scenario.GetProperty("id").GetString());
        Assert.Equal(20, scenario.GetProperty("maxMs").GetDouble());
        Assert.Equal(10, scenario.GetProperty("baseline").GetProperty("maxMs").GetDouble());
        Assert.True(scenario.GetProperty("regression").GetProperty("breached").GetBoolean());
    }
}
@@ -0,0 +1,58 @@
using StellaOps.Bench.ScannerAnalyzers;
using StellaOps.Bench.ScannerAnalyzers.Baseline;
using StellaOps.Bench.ScannerAnalyzers.Reporting;
using Xunit;

namespace StellaOps.Bench.ScannerAnalyzers.Tests;

public sealed class BenchmarkScenarioReportTests
{
    [Fact]
    public void RegressionRatio_ComputedWhenBaselinePresent()
    {
        var result = new ScenarioResult(
            "scenario",
            "Scenario",
            SampleCount: 5,
            MeanMs: 10,
            P95Ms: 12,
            MaxMs: 20,
            Iterations: 5,
            ThresholdMs: 5000);

        var baseline = new BaselineEntry(
            "scenario",
            Iterations: 5,
            SampleCount: 5,
            MeanMs: 8,
            P95Ms: 11,
            MaxMs: 15);

        var report = new BenchmarkScenarioReport(result, baseline, regressionLimit: 1.2);

        Assert.True(report.MaxRegressionRatio.HasValue);
        Assert.Equal(20d / 15d, report.MaxRegressionRatio.Value, 6);
        Assert.True(report.RegressionBreached);
        Assert.Contains("+33.3%", report.BuildRegressionFailureMessage());
    }

    [Fact]
    public void RegressionRatio_NullWhenBaselineMissing()
    {
        var result = new ScenarioResult(
            "scenario",
            "Scenario",
            SampleCount: 5,
            MeanMs: 10,
            P95Ms: 12,
            MaxMs: 20,
            Iterations: 5,
            ThresholdMs: 5000);

        var report = new BenchmarkScenarioReport(result, baseline: null, regressionLimit: 1.2);

        Assert.Null(report.MaxRegressionRatio);
        Assert.False(report.RegressionBreached);
        Assert.Null(report.BuildRegressionFailureMessage());
    }
}
@@ -0,0 +1,32 @@
using StellaOps.Bench.ScannerAnalyzers;
using StellaOps.Bench.ScannerAnalyzers.Baseline;
using StellaOps.Bench.ScannerAnalyzers.Reporting;
using Xunit;

namespace StellaOps.Bench.ScannerAnalyzers.Tests;

public sealed class PrometheusWriterTests
{
    [Fact]
    public void Write_EmitsMetricsForScenario()
    {
        var result = new ScenarioResult(
            "scenario_a",
            "Scenario A",
            SampleCount: 5,
            MeanMs: 10,
            P95Ms: 12,
            MaxMs: 20,
            Iterations: 5,
            ThresholdMs: 5000);
        var baseline = new BaselineEntry("scenario_a", 5, 5, 9, 11, 18);
        var report = new BenchmarkScenarioReport(result, baseline, 1.2);

        var path = Path.Combine(Path.GetTempPath(), $"metrics-{Guid.NewGuid():N}.prom");
        PrometheusWriter.Write(path, new[] { report });

        var contents = File.ReadAllText(path);
        Assert.Contains("scanner_analyzer_bench_max_ms{scenario=\"scenario_a\"} 20", contents);
        Assert.Contains("scanner_analyzer_bench_regression_ratio{scenario=\"scenario_a\"}", contents);
    }
}
@@ -0,0 +1,26 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <LangVersion>preview</LangVersion>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.14.0" />
    <PackageReference Include="xunit" Version="2.9.2" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.8.2">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="coverlet.collector" Version="6.0.4">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\StellaOps.Bench.ScannerAnalyzers\StellaOps.Bench.ScannerAnalyzers.csproj" />
  </ItemGroup>
</Project>
@@ -0,0 +1,9 @@
namespace StellaOps.Bench.ScannerAnalyzers.Baseline;

internal sealed record BaselineEntry(
    string ScenarioId,
    int Iterations,
    int SampleCount,
    double MeanMs,
    double P95Ms,
    double MaxMs);
@@ -0,0 +1,88 @@
using System.Globalization;

namespace StellaOps.Bench.ScannerAnalyzers.Baseline;

internal static class BaselineLoader
{
    public static async Task<IReadOnlyDictionary<string, BaselineEntry>> LoadAsync(string path, CancellationToken cancellationToken)
    {
        if (string.IsNullOrWhiteSpace(path))
        {
            throw new ArgumentException("Baseline path must be provided.", nameof(path));
        }

        var resolved = Path.GetFullPath(path);
        if (!File.Exists(resolved))
        {
            throw new FileNotFoundException($"Baseline file not found at {resolved}", resolved);
        }

        var result = new Dictionary<string, BaselineEntry>(StringComparer.OrdinalIgnoreCase);

        await using var stream = new FileStream(resolved, FileMode.Open, FileAccess.Read, FileShare.Read);
        using var reader = new StreamReader(stream);
        string? line;
        var isFirst = true;

        while ((line = await reader.ReadLineAsync().ConfigureAwait(false)) is not null)
        {
            cancellationToken.ThrowIfCancellationRequested();
            if (string.IsNullOrWhiteSpace(line))
            {
                continue;
            }

            if (isFirst)
            {
                isFirst = false;
                if (line.StartsWith("scenario,", StringComparison.OrdinalIgnoreCase))
                {
                    continue;
                }
            }

            var entry = ParseLine(line);
            result[entry.ScenarioId] = entry;
        }

        return result;
    }

    private static BaselineEntry ParseLine(string line)
    {
        var parts = line.Split(',', StringSplitOptions.TrimEntries);
        if (parts.Length < 6)
        {
            throw new InvalidDataException($"Baseline CSV row malformed: '{line}'");
        }

        var scenarioId = parts[0];
        var iterations = ParseInt(parts[1], nameof(BaselineEntry.Iterations));
        var sampleCount = ParseInt(parts[2], nameof(BaselineEntry.SampleCount));
        var meanMs = ParseDouble(parts[3], nameof(BaselineEntry.MeanMs));
        var p95Ms = ParseDouble(parts[4], nameof(BaselineEntry.P95Ms));
        var maxMs = ParseDouble(parts[5], nameof(BaselineEntry.MaxMs));

        return new BaselineEntry(scenarioId, iterations, sampleCount, meanMs, p95Ms, maxMs);
    }

    private static int ParseInt(string value, string field)
    {
        if (!int.TryParse(value, NumberStyles.Integer, CultureInfo.InvariantCulture, out var parsed))
        {
            throw new InvalidDataException($"Failed to parse integer {field} from '{value}'.");
        }

        return parsed;
    }

    private static double ParseDouble(string value, string field)
    {
        if (!double.TryParse(value, NumberStyles.Float, CultureInfo.InvariantCulture, out var parsed))
        {
            throw new InvalidDataException($"Failed to parse double {field} from '{value}'.");
        }

        return parsed;
    }
}
@@ -0,0 +1,104 @@
using System.Text.Json;
using System.Text.Json.Serialization;

namespace StellaOps.Bench.ScannerAnalyzers;

internal sealed record BenchmarkConfig
{
    [JsonPropertyName("iterations")]
    public int? Iterations { get; init; }

    [JsonPropertyName("thresholdMs")]
    public double? ThresholdMs { get; init; }

    [JsonPropertyName("scenarios")]
    public List<BenchmarkScenarioConfig> Scenarios { get; init; } = new();

    public static async Task<BenchmarkConfig> LoadAsync(string path)
    {
        if (string.IsNullOrWhiteSpace(path))
        {
            throw new ArgumentException("Config path is required.", nameof(path));
        }

        await using var stream = File.OpenRead(path);
        var config = await JsonSerializer.DeserializeAsync<BenchmarkConfig>(stream, SerializerOptions).ConfigureAwait(false);
        if (config is null)
        {
            throw new InvalidOperationException($"Failed to parse benchmark config '{path}'.");
        }

        if (config.Scenarios.Count == 0)
        {
            throw new InvalidOperationException("config.scenarios must declare at least one scenario.");
        }

        foreach (var scenario in config.Scenarios)
        {
            scenario.Validate();
        }

        return config;
    }

    private static JsonSerializerOptions SerializerOptions => new()
    {
        PropertyNameCaseInsensitive = true,
        ReadCommentHandling = JsonCommentHandling.Skip,
        AllowTrailingCommas = true,
    };
}

internal sealed record BenchmarkScenarioConfig
{
    [JsonPropertyName("id")]
    public string? Id { get; init; }

    [JsonPropertyName("label")]
    public string? Label { get; init; }

    [JsonPropertyName("root")]
    public string? Root { get; init; }

    [JsonPropertyName("analyzers")]
    public List<string>? Analyzers { get; init; }

    [JsonPropertyName("matcher")]
    public string? Matcher { get; init; }

    [JsonPropertyName("parser")]
    public string? Parser { get; init; }

    [JsonPropertyName("thresholdMs")]
    public double? ThresholdMs { get; init; }

    public bool HasAnalyzers => Analyzers is { Count: > 0 };

    public void Validate()
    {
        if (string.IsNullOrWhiteSpace(Id))
        {
            throw new InvalidOperationException("scenario.id is required.");
        }

        if (string.IsNullOrWhiteSpace(Root))
        {
            throw new InvalidOperationException($"Scenario '{Id}' must specify a root path.");
        }

        if (HasAnalyzers)
        {
            return;
        }

        if (string.IsNullOrWhiteSpace(Parser))
        {
            throw new InvalidOperationException($"Scenario '{Id}' must specify parser or analyzers.");
        }

        if (string.IsNullOrWhiteSpace(Matcher))
        {
            throw new InvalidOperationException($"Scenario '{Id}' must specify matcher when parser is used.");
        }
    }
}
@@ -0,0 +1,393 @@
using System.Globalization;
using StellaOps.Bench.ScannerAnalyzers.Baseline;
using StellaOps.Bench.ScannerAnalyzers.Reporting;
using StellaOps.Bench.ScannerAnalyzers.Scenarios;

namespace StellaOps.Bench.ScannerAnalyzers;

internal static class Program
{
    public static async Task<int> Main(string[] args)
    {
        try
        {
            var options = ProgramOptions.Parse(args);
            var config = await BenchmarkConfig.LoadAsync(options.ConfigPath).ConfigureAwait(false);

            var iterations = options.Iterations ?? config.Iterations ?? 5;
            var thresholdMs = options.ThresholdMs ?? config.ThresholdMs ?? 5000;
            var repoRoot = ResolveRepoRoot(options.RepoRoot, options.ConfigPath);
            var regressionLimit = options.RegressionLimit ?? 1.2d;
            var capturedAt = (options.CapturedAtUtc ?? DateTimeOffset.UtcNow).ToUniversalTime();

            var baseline = await LoadBaselineDictionaryAsync(options.BaselinePath, CancellationToken.None).ConfigureAwait(false);

            var results = new List<ScenarioResult>();
            var reports = new List<BenchmarkScenarioReport>();
            var failures = new List<string>();

            foreach (var scenario in config.Scenarios)
            {
                var runner = ScenarioRunnerFactory.Create(scenario);
                var scenarioRoot = ResolveScenarioRoot(repoRoot, scenario.Root!);

                var execution = await runner.ExecuteAsync(scenarioRoot, iterations, CancellationToken.None).ConfigureAwait(false);
                var stats = ScenarioStatistics.FromDurations(execution.Durations);
                var scenarioThreshold = scenario.ThresholdMs ?? thresholdMs;

                var result = new ScenarioResult(
                    scenario.Id!,
                    scenario.Label ?? scenario.Id!,
                    execution.SampleCount,
                    stats.MeanMs,
                    stats.P95Ms,
                    stats.MaxMs,
                    iterations,
                    scenarioThreshold);

                results.Add(result);

                if (stats.MaxMs > scenarioThreshold)
                {
                    failures.Add($"{scenario.Id} exceeded threshold: {stats.MaxMs:F2} ms > {scenarioThreshold:F2} ms");
                }

                baseline.TryGetValue(result.Id, out var baselineEntry);
                var report = new BenchmarkScenarioReport(result, baselineEntry, regressionLimit);
                if (report.BuildRegressionFailureMessage() is { } regressionFailure)
                {
                    failures.Add(regressionFailure);
                }

                reports.Add(report);
            }

            TablePrinter.Print(results);

            if (!string.IsNullOrWhiteSpace(options.CsvOutPath))
            {
                CsvWriter.Write(options.CsvOutPath!, results);
            }

            if (!string.IsNullOrWhiteSpace(options.JsonOutPath))
            {
                var metadata = new BenchmarkJsonMetadata(
                    "1.0",
                    capturedAt,
                    options.Commit,
                    options.Environment);

                await BenchmarkJsonWriter.WriteAsync(options.JsonOutPath!, metadata, reports, CancellationToken.None).ConfigureAwait(false);
            }

            if (!string.IsNullOrWhiteSpace(options.PrometheusOutPath))
            {
                PrometheusWriter.Write(options.PrometheusOutPath!, reports);
            }

            if (failures.Count > 0)
            {
                Console.Error.WriteLine();
                Console.Error.WriteLine("Performance threshold exceeded:");
                foreach (var failure in failures)
                {
                    Console.Error.WriteLine($"  - {failure}");
                }

                return 1;
            }

            return 0;
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine(ex.Message);
            return 1;
        }
    }

    private static async Task<IReadOnlyDictionary<string, BaselineEntry>> LoadBaselineDictionaryAsync(string? baselinePath, CancellationToken cancellationToken)
    {
        if (string.IsNullOrWhiteSpace(baselinePath))
        {
            return new Dictionary<string, BaselineEntry>(StringComparer.OrdinalIgnoreCase);
        }

        var resolved = Path.GetFullPath(baselinePath);
        if (!File.Exists(resolved))
        {
            return new Dictionary<string, BaselineEntry>(StringComparer.OrdinalIgnoreCase);
        }

        return await BaselineLoader.LoadAsync(resolved, cancellationToken).ConfigureAwait(false);
    }

    private static string ResolveRepoRoot(string? overridePath, string configPath)
    {
        if (!string.IsNullOrWhiteSpace(overridePath))
        {
            return Path.GetFullPath(overridePath);
        }

        var configDirectory = Path.GetDirectoryName(configPath);
        if (string.IsNullOrWhiteSpace(configDirectory))
        {
            return Directory.GetCurrentDirectory();
        }

        return Path.GetFullPath(Path.Combine(configDirectory, "..", ".."));
    }

    private static string ResolveScenarioRoot(string repoRoot, string relativeRoot)
    {
        if (string.IsNullOrWhiteSpace(relativeRoot))
        {
            throw new InvalidOperationException("Scenario root is required.");
        }

        var combined = Path.GetFullPath(Path.Combine(repoRoot, relativeRoot));
        if (!PathUtilities.IsWithinRoot(repoRoot, combined))
        {
            throw new InvalidOperationException($"Scenario root '{relativeRoot}' escapes repository root '{repoRoot}'.");
        }

        if (!Directory.Exists(combined))
        {
            throw new DirectoryNotFoundException($"Scenario root '{combined}' does not exist.");
        }

        return combined;
    }

    private sealed record ProgramOptions(
        string ConfigPath,
        int? Iterations,
        double? ThresholdMs,
        string? CsvOutPath,
        string? JsonOutPath,
        string? PrometheusOutPath,
        string? RepoRoot,
        string? BaselinePath,
        DateTimeOffset? CapturedAtUtc,
        string? Commit,
        string? Environment,
        double? RegressionLimit)
    {
        public static ProgramOptions Parse(string[] args)
        {
            var configPath = DefaultConfigPath();
            var baselinePath = DefaultBaselinePath();
            int? iterations = null;
            double? thresholdMs = null;
            string? csvOut = null;
            string? jsonOut = null;
            string? promOut = null;
            string? repoRoot = null;
            DateTimeOffset? capturedAt = null;
            string? commit = null;
            string? environment = null;
            double? regressionLimit = null;

            for (var index = 0; index < args.Length; index++)
            {
                var current = args[index];
                switch (current)
                {
                    case "--config":
                        EnsureNext(args, index);
                        configPath = Path.GetFullPath(args[++index]);
                        break;
                    case "--iterations":
                        EnsureNext(args, index);
                        iterations = int.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--threshold-ms":
                        EnsureNext(args, index);
                        thresholdMs = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    case "--out":
                    case "--csv":
                        EnsureNext(args, index);
                        csvOut = args[++index];
                        break;
                    case "--json":
                        EnsureNext(args, index);
                        jsonOut = args[++index];
                        break;
                    case "--prom":
                    case "--prometheus":
                        EnsureNext(args, index);
                        promOut = args[++index];
                        break;
                    case "--baseline":
                        EnsureNext(args, index);
                        baselinePath = args[++index];
                        break;
                    case "--repo-root":
                    case "--samples":
                        EnsureNext(args, index);
                        repoRoot = args[++index];
                        break;
                    case "--captured-at":
                        EnsureNext(args, index);
                        capturedAt = DateTimeOffset.Parse(args[++index], CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);
                        break;
                    case "--commit":
                        EnsureNext(args, index);
                        commit = args[++index];
                        break;
                    case "--environment":
                        EnsureNext(args, index);
                        environment = args[++index];
                        break;
                    case "--regression-limit":
                        EnsureNext(args, index);
                        regressionLimit = double.Parse(args[++index], CultureInfo.InvariantCulture);
                        break;
                    default:
                        throw new ArgumentException($"Unknown argument: {current}", nameof(args));
                }
            }

            return new ProgramOptions(configPath, iterations, thresholdMs, csvOut, jsonOut, promOut, repoRoot, baselinePath, capturedAt, commit, environment, regressionLimit);
        }

        private static string DefaultConfigPath()
        {
            var binaryDir = AppContext.BaseDirectory;
            var projectRoot = Path.GetFullPath(Path.Combine(binaryDir, "..", "..", ".."));
            var configDirectory = Path.GetFullPath(Path.Combine(projectRoot, ".."));
            return Path.Combine(configDirectory, "config.json");
        }

        private static string? DefaultBaselinePath()
        {
            var binaryDir = AppContext.BaseDirectory;
            var projectRoot = Path.GetFullPath(Path.Combine(binaryDir, "..", "..", ".."));
            var benchRoot = Path.GetFullPath(Path.Combine(projectRoot, ".."));
            var baselinePath = Path.Combine(benchRoot, "baseline.csv");
            // Return null when the repo baseline is absent so the run proceeds without comparison.
            return File.Exists(baselinePath) ? baselinePath : null;
        }

        private static void EnsureNext(string[] args, int index)
        {
            if (index + 1 >= args.Length)
            {
                throw new ArgumentException("Missing value for argument.", nameof(args));
            }
        }
    }

    private sealed record ScenarioStatistics(double MeanMs, double P95Ms, double MaxMs)
    {
        public static ScenarioStatistics FromDurations(IReadOnlyList<double> durations)
        {
            if (durations.Count == 0)
            {
                return new ScenarioStatistics(0, 0, 0);
            }

            var sorted = durations.ToArray();
            Array.Sort(sorted);

            var total = 0d;
            foreach (var value in durations)
            {
                total += value;
            }

            var mean = total / durations.Count;
            var p95 = Percentile(sorted, 95);
            var max = sorted[^1];

            return new ScenarioStatistics(mean, p95, max);
        }

        private static double Percentile(IReadOnlyList<double> sorted, double percentile)
        {
            if (sorted.Count == 0)
            {
                return 0;
            }

            // Linear interpolation between the two nearest ranks.
            var rank = (percentile / 100d) * (sorted.Count - 1);
            var lower = (int)Math.Floor(rank);
            var upper = (int)Math.Ceiling(rank);
            var weight = rank - lower;

            if (upper >= sorted.Count)
            {
                return sorted[lower];
            }

            return sorted[lower] + weight * (sorted[upper] - sorted[lower]);
        }
    }

    private static class TablePrinter
    {
        public static void Print(IEnumerable<ScenarioResult> results)
        {
            Console.WriteLine("Scenario                     | Count | Mean(ms)  | P95(ms)   | Max(ms)");
            Console.WriteLine("---------------------------- | ----- | --------- | --------- | ----------");
            foreach (var row in results)
            {
                Console.WriteLine(string.Join(" | ", new[]
                {
                    row.IdColumn,
                    row.SampleCountColumn,
                    row.MeanColumn,
                    row.P95Column,
                    row.MaxColumn
                }));
            }
        }
    }

    private static class CsvWriter
    {
        public static void Write(string path, IEnumerable<ScenarioResult> results)
        {
            var resolvedPath = Path.GetFullPath(path);
            var directory = Path.GetDirectoryName(resolvedPath);
            if (!string.IsNullOrEmpty(directory))
            {
                Directory.CreateDirectory(directory);
            }

            using var stream = new FileStream(resolvedPath, FileMode.Create, FileAccess.Write, FileShare.None);
            using var writer = new StreamWriter(stream);
            writer.WriteLine("scenario,iterations,sample_count,mean_ms,p95_ms,max_ms");

            foreach (var row in results)
            {
                writer.Write(row.Id);
                writer.Write(',');
                writer.Write(row.Iterations.ToString(CultureInfo.InvariantCulture));
                writer.Write(',');
                writer.Write(row.SampleCount.ToString(CultureInfo.InvariantCulture));
                writer.Write(',');
                writer.Write(row.MeanMs.ToString("F4", CultureInfo.InvariantCulture));
                writer.Write(',');
                writer.Write(row.P95Ms.ToString("F4", CultureInfo.InvariantCulture));
                writer.Write(',');
                writer.Write(row.MaxMs.ToString("F4", CultureInfo.InvariantCulture));
                writer.WriteLine();
            }
        }
    }

    internal static class PathUtilities
    {
        public static bool IsWithinRoot(string root, string candidate)
        {
            var relative = Path.GetRelativePath(root, candidate);
            if (string.IsNullOrEmpty(relative) || relative == ".")
            {
                return true;
            }

            return !relative.StartsWith("..", StringComparison.Ordinal) && !Path.IsPathRooted(relative);
        }
    }
}
@@ -0,0 +1,108 @@
using System.Text.Json;
using System.Text.Json.Serialization;
using StellaOps.Bench.ScannerAnalyzers.Baseline;

namespace StellaOps.Bench.ScannerAnalyzers.Reporting;

internal static class BenchmarkJsonWriter
{
    private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web)
    {
        WriteIndented = true,
        DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
    };

    public static async Task WriteAsync(
        string path,
        BenchmarkJsonMetadata metadata,
        IReadOnlyList<BenchmarkScenarioReport> reports,
        CancellationToken cancellationToken)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);
        ArgumentNullException.ThrowIfNull(metadata);
        ArgumentNullException.ThrowIfNull(reports);

        var resolved = Path.GetFullPath(path);
        var directory = Path.GetDirectoryName(resolved);
        if (!string.IsNullOrEmpty(directory))
        {
            Directory.CreateDirectory(directory);
        }

        var document = new BenchmarkJsonDocument(
            metadata.SchemaVersion,
            metadata.CapturedAtUtc,
            metadata.Commit,
            metadata.Environment,
            reports.Select(CreateScenario).ToArray());

        await using var stream = new FileStream(resolved, FileMode.Create, FileAccess.Write, FileShare.None);
        await JsonSerializer.SerializeAsync(stream, document, SerializerOptions, cancellationToken).ConfigureAwait(false);
        await stream.FlushAsync(cancellationToken).ConfigureAwait(false);
    }

    private static BenchmarkJsonScenario CreateScenario(BenchmarkScenarioReport report)
    {
        var baseline = report.Baseline;
        return new BenchmarkJsonScenario(
            report.Result.Id,
            report.Result.Label,
            report.Result.Iterations,
            report.Result.SampleCount,
            report.Result.MeanMs,
            report.Result.P95Ms,
            report.Result.MaxMs,
            report.Result.ThresholdMs,
            baseline is null
                ? null
                : new BenchmarkJsonScenarioBaseline(
                    baseline.Iterations,
                    baseline.SampleCount,
                    baseline.MeanMs,
                    baseline.P95Ms,
                    baseline.MaxMs),
            new BenchmarkJsonScenarioRegression(
                report.MaxRegressionRatio,
                report.MeanRegressionRatio,
                report.RegressionLimit,
                report.RegressionBreached));
    }

    private sealed record BenchmarkJsonDocument(
        string SchemaVersion,
        DateTimeOffset CapturedAt,
        string? Commit,
        string? Environment,
        IReadOnlyList<BenchmarkJsonScenario> Scenarios);

    private sealed record BenchmarkJsonScenario(
        string Id,
        string Label,
        int Iterations,
        int SampleCount,
        double MeanMs,
        double P95Ms,
        double MaxMs,
        double ThresholdMs,
        BenchmarkJsonScenarioBaseline? Baseline,
        BenchmarkJsonScenarioRegression Regression);

    private sealed record BenchmarkJsonScenarioBaseline(
        int Iterations,
        int SampleCount,
        double MeanMs,
        double P95Ms,
        double MaxMs);

    private sealed record BenchmarkJsonScenarioRegression(
        double? MaxRatio,
        double? MeanRatio,
        double Limit,
        bool Breached);
}

internal sealed record BenchmarkJsonMetadata(
    string SchemaVersion,
    DateTimeOffset CapturedAtUtc,
    string? Commit,
    string? Environment);
@@ -0,0 +1,55 @@
using StellaOps.Bench.ScannerAnalyzers.Baseline;

namespace StellaOps.Bench.ScannerAnalyzers.Reporting;

internal sealed class BenchmarkScenarioReport
{
    private const double RegressionLimitDefault = 1.2d;

    public BenchmarkScenarioReport(ScenarioResult result, BaselineEntry? baseline, double? regressionLimit = null)
    {
        Result = result ?? throw new ArgumentNullException(nameof(result));
        Baseline = baseline;
        RegressionLimit = regressionLimit is { } limit && limit > 0 ? limit : RegressionLimitDefault;
        MaxRegressionRatio = CalculateRatio(result.MaxMs, baseline?.MaxMs);
        MeanRegressionRatio = CalculateRatio(result.MeanMs, baseline?.MeanMs);
    }

    public ScenarioResult Result { get; }

    public BaselineEntry? Baseline { get; }

    public double RegressionLimit { get; }

    public double? MaxRegressionRatio { get; }

    public double? MeanRegressionRatio { get; }

    public bool RegressionBreached => MaxRegressionRatio.HasValue && MaxRegressionRatio.Value >= RegressionLimit;

    public string? BuildRegressionFailureMessage()
    {
        if (!RegressionBreached || MaxRegressionRatio is null)
        {
            return null;
        }

        var percentage = (MaxRegressionRatio.Value - 1d) * 100d;
        return $"{Result.Id} exceeded regression budget: max {Result.MaxMs:F2} ms vs baseline {Baseline!.MaxMs:F2} ms (+{percentage:F1}%)";
    }

    private static double? CalculateRatio(double current, double? baseline)
    {
        if (!baseline.HasValue)
        {
            return null;
        }

        if (baseline.Value <= 0d)
        {
            return null;
        }

        return current / baseline.Value;
    }
}
@@ -0,0 +1,59 @@
using System.Globalization;
using System.Text;

namespace StellaOps.Bench.ScannerAnalyzers.Reporting;

internal static class PrometheusWriter
{
    public static void Write(string path, IReadOnlyList<BenchmarkScenarioReport> reports)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(path);
        ArgumentNullException.ThrowIfNull(reports);

        var resolved = Path.GetFullPath(path);
        var directory = Path.GetDirectoryName(resolved);
        if (!string.IsNullOrEmpty(directory))
        {
            Directory.CreateDirectory(directory);
        }

        var builder = new StringBuilder();
        builder.AppendLine("# HELP scanner_analyzer_bench_duration_ms Analyzer benchmark duration metrics in milliseconds.");
        builder.AppendLine("# TYPE scanner_analyzer_bench_duration_ms gauge");

        foreach (var report in reports)
        {
            var scenarioLabel = Escape(report.Result.Id);
            AppendMetric(builder, "scanner_analyzer_bench_mean_ms", scenarioLabel, report.Result.MeanMs);
            AppendMetric(builder, "scanner_analyzer_bench_p95_ms", scenarioLabel, report.Result.P95Ms);
            AppendMetric(builder, "scanner_analyzer_bench_max_ms", scenarioLabel, report.Result.MaxMs);
            AppendMetric(builder, "scanner_analyzer_bench_threshold_ms", scenarioLabel, report.Result.ThresholdMs);

            if (report.Baseline is { } baseline)
            {
                AppendMetric(builder, "scanner_analyzer_bench_baseline_max_ms", scenarioLabel, baseline.MaxMs);
                AppendMetric(builder, "scanner_analyzer_bench_baseline_mean_ms", scenarioLabel, baseline.MeanMs);
            }

            if (report.MaxRegressionRatio is { } ratio)
            {
                AppendMetric(builder, "scanner_analyzer_bench_regression_ratio", scenarioLabel, ratio);
                AppendMetric(builder, "scanner_analyzer_bench_regression_limit", scenarioLabel, report.RegressionLimit);
                AppendMetric(builder, "scanner_analyzer_bench_regression_breached", scenarioLabel, report.RegressionBreached ? 1 : 0);
            }
        }

        File.WriteAllText(resolved, builder.ToString(), Encoding.UTF8);
    }

    private static void AppendMetric(StringBuilder builder, string metric, string scenarioLabel, double value)
    {
        builder.Append(metric);
        builder.Append("{scenario=\"");
        builder.Append(scenarioLabel);
        builder.Append("\"} ");
        builder.AppendLine(value.ToString("G17", CultureInfo.InvariantCulture));
    }

    private static string Escape(string value) => value.Replace("\\", "\\\\", StringComparison.Ordinal).Replace("\"", "\\\"", StringComparison.Ordinal);
}
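For reference, the writer above emits Prometheus text-exposition lines of the following shape. This is an illustrative sample, not captured output; the scenario name comes from `config.json` and the values follow the `G17` round-trip format used by `AppendMetric`.

```text
# HELP scanner_analyzer_bench_duration_ms Analyzer benchmark duration metrics in milliseconds.
# TYPE scanner_analyzer_bench_duration_ms gauge
scanner_analyzer_bench_mean_ms{scenario="node_monorepo_walk"} 6.0975
scanner_analyzer_bench_p95_ms{scenario="node_monorepo_walk"} 21.7421
scanner_analyzer_bench_regression_ratio{scenario="node_monorepo_walk"} 1.04
scanner_analyzer_bench_regression_breached{scenario="node_monorepo_walk"} 0
```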
@@ -0,0 +1,24 @@
using System.Globalization;

namespace StellaOps.Bench.ScannerAnalyzers;

internal sealed record ScenarioResult(
    string Id,
    string Label,
    int SampleCount,
    double MeanMs,
    double P95Ms,
    double MaxMs,
    int Iterations,
    double ThresholdMs)
{
    public string IdColumn => Id.Length <= 28 ? Id.PadRight(28) : Id[..28];

    public string SampleCountColumn => SampleCount.ToString(CultureInfo.InvariantCulture).PadLeft(5);

    public string MeanColumn => MeanMs.ToString("F2", CultureInfo.InvariantCulture).PadLeft(9);

    public string P95Column => P95Ms.ToString("F2", CultureInfo.InvariantCulture).PadLeft(9);

    public string MaxColumn => MaxMs.ToString("F2", CultureInfo.InvariantCulture).PadLeft(10);
}
@@ -0,0 +1,285 @@
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Text.Json;
using System.Text.RegularExpressions;
using StellaOps.Scanner.Analyzers.Lang;
using StellaOps.Scanner.Analyzers.Lang.DotNet;
using StellaOps.Scanner.Analyzers.Lang.Go;
using StellaOps.Scanner.Analyzers.Lang.Java;
using StellaOps.Scanner.Analyzers.Lang.Node;
using StellaOps.Scanner.Analyzers.Lang.Python;

namespace StellaOps.Bench.ScannerAnalyzers.Scenarios;

internal interface IScenarioRunner
{
    Task<ScenarioExecutionResult> ExecuteAsync(string rootPath, int iterations, CancellationToken cancellationToken);
}

internal sealed record ScenarioExecutionResult(double[] Durations, int SampleCount);

internal static class ScenarioRunnerFactory
{
    public static IScenarioRunner Create(BenchmarkScenarioConfig scenario)
    {
        if (scenario.HasAnalyzers)
        {
            return new LanguageAnalyzerScenarioRunner(scenario.Analyzers!);
        }

        if (string.IsNullOrWhiteSpace(scenario.Parser) || string.IsNullOrWhiteSpace(scenario.Matcher))
        {
            throw new InvalidOperationException($"Scenario '{scenario.Id}' missing parser or matcher configuration.");
        }

        return new MetadataWalkScenarioRunner(scenario.Parser, scenario.Matcher);
    }
}

internal sealed class LanguageAnalyzerScenarioRunner : IScenarioRunner
{
    private readonly IReadOnlyList<Func<ILanguageAnalyzer>> _analyzerFactories;

    public LanguageAnalyzerScenarioRunner(IEnumerable<string> analyzerIds)
    {
        if (analyzerIds is null)
        {
            throw new ArgumentNullException(nameof(analyzerIds));
        }

        _analyzerFactories = analyzerIds
            .Where(static id => !string.IsNullOrWhiteSpace(id))
            .Select(CreateFactory)
            .ToArray();

        if (_analyzerFactories.Count == 0)
        {
            throw new InvalidOperationException("At least one analyzer id must be provided.");
        }
    }

    public async Task<ScenarioExecutionResult> ExecuteAsync(string rootPath, int iterations, CancellationToken cancellationToken)
    {
        if (iterations <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(iterations), iterations, "Iterations must be positive.");
        }

        var analyzers = _analyzerFactories.Select(factory => factory()).ToArray();
        var engine = new LanguageAnalyzerEngine(analyzers);
        var durations = new double[iterations];
        var componentCount = -1;

        for (var i = 0; i < iterations; i++)
        {
            cancellationToken.ThrowIfCancellationRequested();

            var context = new LanguageAnalyzerContext(rootPath, TimeProvider.System);
            var stopwatch = Stopwatch.StartNew();
            var result = await engine.AnalyzeAsync(context, cancellationToken).ConfigureAwait(false);
            stopwatch.Stop();

            durations[i] = stopwatch.Elapsed.TotalMilliseconds;

            var currentCount = result.Components.Count;
            if (componentCount < 0)
            {
                componentCount = currentCount;
            }
            else if (componentCount != currentCount)
            {
                throw new InvalidOperationException($"Analyzer output count changed between iterations ({componentCount} vs {currentCount}).");
            }
        }

        if (componentCount < 0)
        {
            componentCount = 0;
        }

        return new ScenarioExecutionResult(durations, componentCount);
    }

    private static Func<ILanguageAnalyzer> CreateFactory(string analyzerId)
    {
        var id = analyzerId.Trim().ToLowerInvariant();
        return id switch
        {
            "java" => static () => new JavaLanguageAnalyzer(),
            "go" => static () => new GoLanguageAnalyzer(),
            "node" => static () => new NodeLanguageAnalyzer(),
            "dotnet" => static () => new DotNetLanguageAnalyzer(),
            "python" => static () => new PythonLanguageAnalyzer(),
            _ => throw new InvalidOperationException($"Unsupported analyzer '{analyzerId}'."),
        };
    }
}

internal sealed class MetadataWalkScenarioRunner : IScenarioRunner
{
    private readonly Regex _matcher;
    private readonly string _parserKind;

    public MetadataWalkScenarioRunner(string parserKind, string globPattern)
    {
        _parserKind = parserKind?.Trim().ToLowerInvariant() ?? throw new ArgumentNullException(nameof(parserKind));
        _matcher = GlobToRegex(globPattern ?? throw new ArgumentNullException(nameof(globPattern)));
    }

    public async Task<ScenarioExecutionResult> ExecuteAsync(string rootPath, int iterations, CancellationToken cancellationToken)
    {
        if (iterations <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(iterations), iterations, "Iterations must be positive.");
        }

        var durations = new double[iterations];
        var sampleCount = -1;

        for (var i = 0; i < iterations; i++)
        {
            cancellationToken.ThrowIfCancellationRequested();

            var stopwatch = Stopwatch.StartNew();
            var files = EnumerateMatchingFiles(rootPath);
            if (files.Count == 0)
            {
                throw new InvalidOperationException($"Parser '{_parserKind}' matched zero files under '{rootPath}'.");
            }

            foreach (var file in files)
            {
                cancellationToken.ThrowIfCancellationRequested();
                await ParseAsync(file).ConfigureAwait(false);
            }

            stopwatch.Stop();
            durations[i] = stopwatch.Elapsed.TotalMilliseconds;

            if (sampleCount < 0)
            {
                sampleCount = files.Count;
            }
            else if (sampleCount != files.Count)
            {
                throw new InvalidOperationException($"File count changed between iterations ({sampleCount} vs {files.Count}).");
            }
        }

        if (sampleCount < 0)
        {
            sampleCount = 0;
        }

        return new ScenarioExecutionResult(durations, sampleCount);
    }

    private async ValueTask ParseAsync(string filePath)
    {
        switch (_parserKind)
        {
            case "node":
            {
                using var stream = File.OpenRead(filePath);
                using var document = await JsonDocument.ParseAsync(stream).ConfigureAwait(false);

                if (!document.RootElement.TryGetProperty("name", out var name) || name.ValueKind != JsonValueKind.String)
                {
                    throw new InvalidOperationException($"package.json '{filePath}' missing name.");
                }

                if (!document.RootElement.TryGetProperty("version", out var version) || version.ValueKind != JsonValueKind.String)
                {
                    throw new InvalidOperationException($"package.json '{filePath}' missing version.");
                }
            }
            break;
            case "python":
            {
                var (name, version) = await ParsePythonMetadataAsync(filePath).ConfigureAwait(false);
                if (string.IsNullOrEmpty(name) || string.IsNullOrEmpty(version))
                {
                    throw new InvalidOperationException($"METADATA '{filePath}' missing Name/Version.");
                }
            }
            break;
            default:
                throw new InvalidOperationException($"Unknown parser '{_parserKind}'.");
        }
    }

    private static async Task<(string? Name, string? Version)> ParsePythonMetadataAsync(string filePath)
    {
        using var stream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read | FileShare.Delete);
        using var reader = new StreamReader(stream);

        string? name = null;
        string? version = null;

        while (await reader.ReadLineAsync().ConfigureAwait(false) is { } line)
        {
            if (line.StartsWith("Name:", StringComparison.OrdinalIgnoreCase))
            {
                name ??= line[5..].Trim();
            }
            else if (line.StartsWith("Version:", StringComparison.OrdinalIgnoreCase))
            {
                version ??= line[8..].Trim();
            }

            if (!string.IsNullOrEmpty(name) && !string.IsNullOrEmpty(version))
            {
                break;
            }
        }

        return (name, version);
    }

    private IReadOnlyList<string> EnumerateMatchingFiles(string rootPath)
    {
        var files = new List<string>();
        var stack = new Stack<string>();
        stack.Push(rootPath);

        while (stack.Count > 0)
        {
            var current = stack.Pop();
            foreach (var directory in Directory.EnumerateDirectories(current))
            {
                stack.Push(directory);
            }

            foreach (var file in Directory.EnumerateFiles(current))
            {
                var relative = Path.GetRelativePath(rootPath, file).Replace('\\', '/');
                if (_matcher.IsMatch(relative))
                {
                    files.Add(file);
                }
            }
        }

        return files;
    }

    private static Regex GlobToRegex(string pattern)
    {
        if (string.IsNullOrWhiteSpace(pattern))
        {
            throw new ArgumentException("Glob pattern is required.", nameof(pattern));
        }

        var normalized = pattern.Replace("\\", "/");
        normalized = normalized.Replace("**", "\u0001");
        normalized = normalized.Replace("*", "\u0002");

        var escaped = Regex.Escape(normalized);
        escaped = escaped.Replace("\u0001/", "(?:.*/)?", StringComparison.Ordinal);
        escaped = escaped.Replace("\u0001", ".*", StringComparison.Ordinal);
        escaped = escaped.Replace("\u0002", "[^/]*", StringComparison.Ordinal);

        return new Regex("^" + escaped + "$", RegexOptions.Compiled | RegexOptions.CultureInvariant);
    }
}
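The `GlobToRegex` helper above protects `**` and `*` with placeholder characters before regex-escaping, then rewrites the placeholders into `(?:.*/)?`, `.*`, and `[^/]*`. A minimal Python sketch of the same translation (the pattern strings below are illustrative, not taken from the harness config):

```python
import re

def glob_to_regex(pattern: str) -> re.Pattern:
    # Same placeholder trick as the harness: protect ** and * before escaping,
    # so re.escape does not mangle them.
    normalized = pattern.replace("\\", "/").replace("**", "\x01").replace("*", "\x02")
    escaped = re.escape(normalized)
    # "**/" may match zero directories; bare "**" matches anything; "*" stops at "/".
    escaped = escaped.replace("\x01/", "(?:.*/)?").replace("\x01", ".*").replace("\x02", "[^/]*")
    return re.compile("^" + escaped + "$")

matcher = glob_to_regex("**/site-packages/**/METADATA")
print(bool(matcher.match("venv/lib/site-packages/pkg-1.0.dist-info/METADATA")))  # True
```

The placeholder approach avoids hand-writing a glob parser: everything except the two wildcard forms goes through the standard regex-escape routine untouched.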
@@ -0,0 +1,23 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <LangVersion>preview</LangVersion>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="..\..\..\src\StellaOps.Scanner.Analyzers.Lang\StellaOps.Scanner.Analyzers.Lang.csproj" />
    <ProjectReference Include="..\..\..\src\StellaOps.Scanner.Analyzers.Lang.Go\StellaOps.Scanner.Analyzers.Lang.Go.csproj" />
    <ProjectReference Include="..\..\..\src\StellaOps.Scanner.Analyzers.Lang.Node\StellaOps.Scanner.Analyzers.Lang.Node.csproj" />
    <ProjectReference Include="..\..\..\src\StellaOps.Scanner.Analyzers.Lang.Java\StellaOps.Scanner.Analyzers.Lang.Java.csproj" />
    <ProjectReference Include="..\..\..\src\StellaOps.Scanner.Analyzers.Lang.DotNet\StellaOps.Scanner.Analyzers.Lang.DotNet.csproj" />
    <ProjectReference Include="..\..\..\src\StellaOps.Scanner.Analyzers.Lang.Python\StellaOps.Scanner.Analyzers.Lang.Python.csproj" />
  </ItemGroup>

  <ItemGroup>
    <InternalsVisibleTo Include="StellaOps.Bench.ScannerAnalyzers.Tests" />
  </ItemGroup>
</Project>
7 src/StellaOps.Bench/Scanner.Analyzers/baseline.csv Normal file
@@ -0,0 +1,7 @@
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
node_monorepo_walk,5,4,6.0975,21.7421,26.8537
java_demo_archive,5,1,6.2007,23.4837,29.1143
go_buildinfo_fixture,5,2,6.1949,22.6851,27.9196
dotnet_multirid_fixture,5,2,11.4884,37.7460,46.4850
python_site_packages_scan,5,3,5.6420,18.2943,22.3739
python_pip_cache_fixture,5,1,5.8598,13.2855,15.6256
54 src/StellaOps.Bench/Scanner.Analyzers/config.json Normal file
@@ -0,0 +1,54 @@
{
  "thresholdMs": 5000,
  "iterations": 5,
  "scenarios": [
    {
      "id": "node_monorepo_walk",
      "label": "Node.js analyzer on monorepo fixture",
      "root": "samples/runtime/npm-monorepo",
      "analyzers": [
        "node"
      ]
    },
    {
      "id": "java_demo_archive",
      "label": "Java analyzer on demo jar",
      "root": "samples/runtime/java-demo",
      "analyzers": [
        "java"
      ]
    },
    {
      "id": "go_buildinfo_fixture",
      "label": "Go analyzer on build-info binary",
      "root": "src/StellaOps.Scanner.Analyzers.Lang.Go.Tests/Fixtures/lang/go/basic",
      "analyzers": [
        "go"
      ]
    },
    {
      "id": "dotnet_multirid_fixture",
      "label": ".NET analyzer on multi-RID fixture",
      "root": "src/StellaOps.Scanner.Analyzers.Lang.Tests/Fixtures/lang/dotnet/multi",
      "analyzers": [
        "dotnet"
      ]
    },
    {
      "id": "python_site_packages_scan",
      "label": "Python analyzer on sample virtualenv",
      "root": "samples/runtime/python-venv",
      "analyzers": [
        "python"
      ]
    },
    {
      "id": "python_pip_cache_fixture",
      "label": "Python analyzer verifying RECORD hashes",
      "root": "src/StellaOps.Scanner.Analyzers.Lang.Python.Tests/Fixtures/lang/python/pip-cache",
      "analyzers": [
        "python"
      ]
    }
  ]
}
31 src/StellaOps.Bench/Scanner.Analyzers/lang/README.md Normal file
@@ -0,0 +1,31 @@
# Scanner Language Analyzer Benchmarks

This directory captures benchmark results for the language analyzers (Node, Python, Go, .NET, Rust).

Pending tasks:
- LA1: Node analyzer microbench CSV + flamegraph.
- LA2: Python hash throughput CSV.
- LA3: Go build-info extraction benchmarks.
- LA4: .NET RID dedupe performance matrix.
- LA5: Rust heuristic coverage comparisons.

Results should be committed as deterministic CSV/JSON outputs with accompanying methodology notes.

## Sprint LA3 — Go Analyzer Benchmark Notes (2025-10-22)

- Scenario `go_buildinfo_fixture` runs the Go analyzer against the basic build-info fixture. The Oct 23 baseline (`baseline.csv`) shows a mean duration of **35.03 ms** (p95 136.55 ms, max 170.16 ms) over 5 iterations on the current rig; the earlier Oct 21 measurement recorded a **4.02 ms** mean when the analyzer was profiled on the warm perf runner.
- A comparative run against Syft v1.29.1 on the same fixture (captured 2025-10-21) reported a mean of **5.18 ms** (p95 18.64 ms, max 23.51 ms); raw measurements live in `go/syft-comparison-20251021.csv`.
- Bench command (from the repo root):
  `dotnet run --project src/StellaOps.Bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/StellaOps.Bench.ScannerAnalyzers.csproj -- --config src/StellaOps.Bench/Scanner.Analyzers/config.json --out src/StellaOps.Bench/Scanner.Analyzers/baseline.csv`

## Sprint LA4 — .NET Analyzer Benchmark Notes (2025-10-23)

- Scenario `dotnet_multirid_fixture` exercises the .NET analyzer against the multi-RID test fixture, which merges two applications and four runtime identifiers. The latest baseline run (Release build, 5 iterations) records a mean duration of **29.19 ms** (p95 106.62 ms, max 132.30 ms) with a stable component count of 2.
- Syft v1.29.1 scanning the same fixture (`syft scan dir:…`) averaged **1,546 ms** (p95 ≈ 2,100 ms, max ≈ 2,100 ms) while also reporting duplicate packages; raw numbers are captured in `dotnet/syft-comparison-20251023.csv`.
- The new scenario is declared in `src/StellaOps.Bench/Scanner.Analyzers/config.json`; rerun the bench command above after rebuilding the analyzers to refresh baselines and comparison data.

## Sprint LA2 — Python Analyzer Benchmark Notes (2025-10-23)

- Added two Python scenarios to `config.json`: the virtualenv sample (`python_site_packages_scan`) and the RECORD-heavy pip cache fixture (`python_pip_cache_fixture`).
- The baseline run (Release build, 5 iterations) records means of **5.64 ms** (p95 18.29 ms) for the virtualenv and **5.86 ms** (p95 13.29 ms) for the pip cache verifier; raw numbers are stored in `python/hash-throughput-20251023.csv`.
- The pip cache fixture exercises `PythonRecordVerifier` with 12 RECORD rows (7 hashed) and mismatched layer coverage, giving a repeatable hash-validation throughput reference for regression gating.
@@ -0,0 +1,2 @@
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
syft_dotnet_multirid_fixture,5,2,1546.1609,2099.6870,2099.6870

@@ -0,0 +1,2 @@
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
syft_go_buildinfo_fixture,5,2,5.1840,18.6375,23.5120

@@ -0,0 +1,3 @@
scenario,iterations,sample_count,mean_ms,p95_ms,max_ms
python_site_packages_scan,5,3,5.6420,18.2943,22.3739
python_pip_cache_fixture,5,1,5.8598,13.2855,15.6256
43 src/StellaOps.Bench/TASKS.md Normal file
@@ -0,0 +1,43 @@
# Benchmarks Task Board

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| BENCH-SCANNER-10-001 | DONE | Bench Guild, Scanner Team | SCANNER-ANALYZERS-LANG-10-303 | Analyzer microbench harness (node_modules, site-packages) + baseline CSV. | Harness committed under `src/StellaOps.Bench/Scanner.Analyzers`; baseline CSV recorded; CI job publishes results. |
| BENCH-SCANNER-10-002 | DONE (2025-10-21) | Bench Guild, Language Analyzer Guild | SCANNER-ANALYZERS-LANG-10-301..309 | Wire real language analyzers into bench harness & refresh baselines post-implementation. | Harness executes analyzer assemblies end-to-end; updated baseline committed; CI trend doc linked. |
| BENCH-IMPACT-16-001 | TODO | Bench Guild, Scheduler Team | SCHED-IMPACT-16-301 | ImpactIndex throughput bench (resolve 10k productKeys) + RAM profile. | Benchmark script ready; baseline metrics recorded; alert thresholds defined. |
| BENCH-NOTIFY-15-001 | TODO | Bench Guild, Notify Team | NOTIFY-ENGINE-15-301 | Notify dispatch throughput bench (vary rule density) with results CSV. | Bench executed; results stored; regression alert configured. |

## Policy Engine v2

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| BENCH-POLICY-20-001 | TODO | Bench Guild, Policy Guild | POLICY-ENGINE-20-002, POLICY-ENGINE-20-006 | Build policy evaluation benchmark suite (100k components, 1M advisories) capturing latency, throughput, memory. | Bench harness committed; baseline metrics recorded; ties into CI dashboards. |
| BENCH-POLICY-20-002 | TODO | Bench Guild, Policy Guild, Scheduler Guild | BENCH-POLICY-20-001, SCHED-WORKER-20-302 | Add incremental run benchmark measuring delta evaluation vs full; capture SLA compliance. | Incremental bench executed; results stored; regression alerts configured. |

## Graph Explorer v1

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| BENCH-GRAPH-21-001 | TODO | Bench Guild, Cartographer Guild | CARTO-GRAPH-21-004, CARTO-GRAPH-21-006 | Build graph viewport/path benchmark harness simulating 50k/100k nodes; record latency, memory, tile cache hit rates. | Harness committed; baseline metrics logged; integrates with perf dashboards. |
| BENCH-GRAPH-21-002 | TODO | Bench Guild, UI Guild | BENCH-GRAPH-21-001, UI-GRAPH-21-001 | Add headless UI load benchmark (Playwright) for graph canvas interactions to track render times and FPS budgets. | Benchmark runs in CI; results exported; alert thresholds defined. |

## Link-Not-Merge v1

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| BENCH-LNM-22-001 | TODO | Bench Guild, Concelier Guild | CONCELIER-LNM-21-002 | Create ingest benchmark simulating 500 advisory observations/sec, measuring correlator latency and Mongo throughput; publish baseline metrics. | Harness added; baseline stored; alerts wired for SLA breach. |
| BENCH-LNM-22-002 | TODO | Bench Guild, Excititor Guild | EXCITITOR-LNM-21-002 | Build VEX ingestion/correlation perf test focusing on alias/product matching and event emission latency. | Benchmark executed; metrics captured; CI integration established. |

## Graph & Vuln Explorer v1

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| BENCH-GRAPH-24-001 | TODO | Bench Guild, SBOM Service Guild | SBOM-GRAPH-24-002 | Develop SBOM graph performance benchmark measuring build time, memory, and cache warm latency for 40k-node assets. | Benchmark runs in CI; baseline metrics recorded; alerts configured. |
| BENCH-GRAPH-24-002 | TODO | Bench Guild, UI Guild | UI-GRAPH-24-001..005 | Implement UI interaction benchmarks (filter/zoom/table operations) citing p95 latency; integrate with perf dashboards. | UI perf metrics collected; thresholds enforced; documentation updated. |

## Reachability v1

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| BENCH-SIG-26-001 | TODO | Bench Guild, Signals Guild | SIGNALS-24-004 | Develop benchmark for reachability scoring pipeline (facts/sec, latency, memory) using synthetic callgraphs/runtime batches. | Benchmark runs in CI; baseline metrics recorded; alerts configured. |
| BENCH-SIG-26-002 | TODO | Bench Guild, Policy Guild | POLICY-ENGINE-80-001 | Measure policy evaluation overhead with reachability cache hot/cold; ensure ≤8 ms p95 added latency. | Benchmark integrated; results tracked in dashboards; regression alerts set. |