# Competitor Parity Testing Guide

This document describes StellaOps' competitor parity testing methodology, which ensures we maintain feature parity and performance competitiveness with industry-standard scanners.

## Overview

Parity testing compares StellaOps against three primary competitors:

| Tool | Type | Primary Function |
|------|------|------------------|
| **Syft** (Anchore) | SBOM Generator | Package extraction and SBOM creation |
| **Grype** (Anchore) | Vulnerability Scanner | CVE detection using Syft SBOMs |
| **Trivy** (Aqua) | Multi-scanner | SBOM + vulnerability + secrets + misconfig |

## Goals

1. **Prevent regression**: Detect when StellaOps falls behind competitors on key metrics.
2. **Track trends**: Monitor parity metrics over time to identify drift patterns.
3. **Guide development**: Use competitor gaps to prioritize feature work.
4. **Validate claims**: Ensure marketing claims are backed by measurable data.

## Fixture Set

Tests run against a standardized set of container images that represent diverse workloads:

### Quick Set (Nightly)

- Alpine 3.19
- Debian 12
- Ubuntu 24.04
- Python 3.12-slim
- Node.js 22-alpine
- nginx:1.25 (known vulnerabilities)

### Full Set (Weekly)

All quick set images plus:

- RHEL 9 UBI
- Go 1.22-alpine
- Rust 1.79-slim
- OpenJDK 21-slim
- .NET 8.0-alpine
- postgres:16 (known vulnerabilities)
- wordpress:6.5 (complex application)
- redis:7-alpine
- Multi-layer image (tests scan depth)

## Metrics Collected

### SBOM Metrics

| Metric | Description | Target |
|--------|-------------|--------|
| Package Count | Total packages detected | ≥95% of Syft |
| PURL Completeness | Packages with valid PURLs | ≥98% |
| License Detection | Packages with license info | ≥90% |
| CPE Mapping | Packages with CPE identifiers | ≥85% |
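
The SBOM targets above reduce to simple set arithmetic over package identifiers. A minimal Python sketch (illustrative only — the real comparison lives in `SbomComparisonLogic.cs`, and the function and field names here are assumptions):

```python
def sbom_metrics(our_purls, syft_purls):
    """Compare our package list against Syft's, using PURLs as identity."""
    ours, theirs = set(our_purls), set(syft_purls)
    return {
        # Package Count target: >= 95% of Syft's total
        "package_count_ratio": len(ours) / len(theirs) if theirs else 1.0,
        # PURL Completeness target: >= 98% of our entries are valid PURLs
        "purl_completeness": (
            sum(1 for p in ours if p.startswith("pkg:")) / len(ours) if ours else 1.0
        ),
    }

metrics = sbom_metrics(
    our_purls=["pkg:apk/alpine/musl@1.2.4", "pkg:apk/alpine/zlib@1.3", "busybox"],
    syft_purls=["pkg:apk/alpine/musl@1.2.4", "pkg:apk/alpine/zlib@1.3",
                "pkg:apk/alpine/busybox@1.36"],
)
print(metrics)  # package_count_ratio: 1.0, purl_completeness: ~0.67
```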

### Vulnerability Metrics

| Metric | Description | Target |
|--------|-------------|--------|
| Recall | CVEs found vs. union of all scanners | ≥95% |
| Precision | True positives vs. total findings | ≥90% |
| F1 Score | Harmonic mean of precision/recall | ≥92% |
| Severity Distribution | Breakdown by Critical/High/Medium/Low | Match competitors ±10% |
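
Note that recall is measured against the union of CVEs reported by all scanners, which serves as a proxy ground truth rather than a verified one. A hedged sketch of the scoring (names are illustrative, not the actual `VulnerabilityComparisonLogic.cs` API):

```python
def vuln_metrics(our_findings, all_scanner_findings):
    """Score our CVE findings against the union of every scanner's results."""
    ours = set(our_findings)
    truth = set().union(*all_scanner_findings)  # proxy ground truth
    tp = len(ours & truth)
    recall = tp / len(truth) if truth else 1.0
    precision = tp / len(ours) if ours else 1.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"recall": recall, "precision": precision, "f1": f1}

scores = vuln_metrics(
    our_findings={"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003"},
    all_scanner_findings=[
        {"CVE-2024-0001", "CVE-2024-0002"},                    # e.g. grype
        {"CVE-2024-0002", "CVE-2024-0003", "CVE-2024-0004"},   # e.g. trivy
    ],
)
print(scores)  # recall=0.75, precision=1.0, f1≈0.857
```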

### Latency Metrics

| Metric | Description | Target |
|--------|-------------|--------|
| P50 | Median scan time | ≤1.5x Grype |
| P95 | 95th percentile scan time | ≤2x Grype |
| P99 | 99th percentile scan time | ≤3x Grype |
| Time-to-First-Signal | Time to first vulnerability found | ≤Grype |
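
The percentile targets compare like-for-like statistics over per-scan wall times. A minimal nearest-rank sketch (the sample data is hypothetical, not taken from `LatencyComparisonLogic.cs`):

```python
import math

def percentile(samples_ms, p):
    """Nearest-rank percentile: the smallest sample with at least p% of samples <= it."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

stellaops = [900, 1100, 1250, 980, 4200]   # hypothetical scan times (ms)
grype = [800, 950, 1000, 870, 2100]

# Target: our P95 must be <= 2x Grype's P95
ratio = percentile(stellaops, 95) / percentile(grype, 95)
print(ratio)  # 4200 / 2100 = 2.0
```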

### Error Handling Metrics

| Scenario | Expected Behavior |
|----------|-------------------|
| Malformed manifest | Graceful degradation with partial results |
| Network timeout | Clear error message; cached feed fallback |
| Large image (>5GB) | Streaming extraction; no OOM |
| Corrupt layer | Skip layer; report warning |
| Missing base layer | Report incomplete scan |
| Registry auth failure | Clear auth error; suggest remediation |
| Rate limiting | Backoff + retry; clear message |
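
The rate-limiting row, for example, implies a retry loop with exponential backoff. A sketch of that behavior (the exception type and parameters are hypothetical, not StellaOps' actual error model):

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for a registry 429 response."""

def with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry `operation` on rate limiting, doubling the delay with jitter each time."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface a clear error to the caller
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            time.sleep(delay)
```

Jitter spreads retries out so that many scanners hitting the same registry do not retry in lockstep.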

## Running Parity Tests

### Locally

```bash
# Build the parity test project
dotnet build tests/parity/StellaOps.Parity.Tests

# Run the quick fixture set
dotnet test tests/parity/StellaOps.Parity.Tests \
  -e PARITY_FIXTURE_SET=quick \
  -e PARITY_OUTPUT_PATH=./parity-results

# Run the full fixture set
dotnet test tests/parity/StellaOps.Parity.Tests \
  -e PARITY_FIXTURE_SET=full \
  -e PARITY_OUTPUT_PATH=./parity-results
```

### Prerequisites

Ensure the competitor tools are installed and on `PATH`:

```bash
# Install Syft
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin v1.9.0

# Install Grype
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v0.79.3

# Install Trivy
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v0.54.1
```

### CI/CD

Parity tests run automatically:

- **Nightly (02:00 UTC)**: Quick fixture set
- **Weekly (Sunday 00:00 UTC)**: Full fixture set

Results are stored as workflow artifacts and optionally pushed to Prometheus.

## Drift Detection

The parity system includes automated drift detection that alerts when StellaOps falls behind competitors.

### Thresholds

| Metric | Threshold | Trend Period |
|--------|-----------|--------------|
| SBOM Completeness | >5% decline | 3 days |
| Vulnerability Recall | >5% decline | 3 days |
| Latency vs Grype | >10% increase | 3 days |
| PURL Completeness | >5% decline | 3 days |
| F1 Score | >5% decline | 3 days |
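
Decline over the trend period is measured relative to the start of the window. A minimal sketch of the check (the real logic lives in `ParityDriftDetector.cs`; the function here is illustrative):

```python
def relative_decline(window):
    """Fractional drop from the first to the last observation in a trend window."""
    baseline, latest = window[0], window[-1]
    return (baseline - latest) / baseline

# SBOM completeness over a 3-day window
decline = relative_decline([0.97, 0.95, 0.91])
print(decline > 0.05)  # True: a ~6.2% decline breaches the 5% threshold
```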

### Alert Severity

| Severity | Condition | Action |
|----------|-----------|--------|
| Low | 1-1.5x threshold | Monitor |
| Medium | 1.5-2x threshold | Investigate within sprint |
| High | 2-3x threshold | Prioritize fix |
| Critical | >3x threshold | Immediate action required |
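
Severity follows from how far the observed drift exceeds its threshold. A sketch of the mapping, assumed to mirror the bands in the table above:

```python
def alert_severity(observed, threshold):
    """Map the observed/threshold ratio to the severity bands above."""
    ratio = observed / threshold
    if ratio < 1.0:
        return None        # within threshold: no alert
    if ratio <= 1.5:
        return "low"
    if ratio <= 2.0:
        return "medium"
    if ratio <= 3.0:
        return "high"
    return "critical"

print(alert_severity(0.12, 0.05))  # 2.4x the 5% threshold -> "high"
```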

### Analyzing Drift

```bash
# Run drift analysis on stored results
dotnet run --project tests/parity/StellaOps.Parity.Tests \
  -- analyze-drift \
  --results-path ./parity-results \
  --threshold 0.05 \
  --trend-days 3
```

## Result Storage

### JSON Format

Results are stored as timestamped JSON files:

```
parity-results/
├── parity-20250115T020000Z-abc123.json
├── parity-20250116T020000Z-def456.json
└── parity-20250122T000000Z-ghi789.json   # Weekly
```

### Retention Policy

- **Last 90 days**: Full detail retained
- **Older than 90 days**: Aggregated into weekly summaries
- **Artifacts**: Workflow artifacts retained for 90 days

### Prometheus Export

Results can be exported to Prometheus for dashboarding:

```
stellaops_parity_sbom_completeness_ratio{run_id="..."} 0.97
stellaops_parity_vuln_recall{run_id="..."} 0.95
stellaops_parity_latency_p95_ms{scanner="stellaops",run_id="..."} 1250
```

### InfluxDB Export

For InfluxDB time-series storage, results are written in line protocol:

```
parity_sbom,run_id=abc123 completeness_ratio=0.97,matched_count=142i 1705280400000000000
parity_vuln,run_id=abc123 recall=0.95,precision=0.92,f1=0.935 1705280400000000000
```
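
Records like these can be produced with simple string formatting; a hedged sketch (per InfluxDB's line protocol, integer fields take the `i` suffix, and this helper omits the escaping a production exporter would need):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one InfluxDB line-protocol record; integer fields get the 'i' suffix."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in fields.items()
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

print(to_line_protocol(
    "parity_sbom", {"run_id": "abc123"},
    {"completeness_ratio": 0.97, "matched_count": 142},
    1705280400000000000,
))
# parity_sbom,run_id=abc123 completeness_ratio=0.97,matched_count=142i 1705280400000000000
```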

## Competitor Version Tracking

Competitor tools are pinned to specific versions to ensure reproducibility:

| Tool | Current Version | Last Updated |
|------|-----------------|--------------|
| Syft | 1.9.0 | 2025-01-15 |
| Grype | 0.79.3 | 2025-01-15 |
| Trivy | 0.54.1 | 2025-01-15 |

### Updating Versions

1. Update the version in `.gitea/workflows/parity-tests.yml`.
2. Update the version in `tests/parity/StellaOps.Parity.Tests/ParityHarness.cs`.
3. Run the full parity test set to establish a new baseline.
4. Document the version change in the sprint execution log.

## Troubleshooting

### Common Issues

**Syft not found**

```
Error: Syft executable not found in PATH
```

Solution: Install Syft or set the `SYFT_PATH` environment variable.

**Grype DB outdated**

```
Warning: Grype vulnerability database is 7+ days old
```

Solution: Run `grype db update` to refresh the database.

**Image pull rate limit**

```
Error: docker pull rate limit exceeded
```

Solution: Use authenticated Docker Hub credentials or a local registry mirror.

**Test timeout**

```
Error: Test exceeded 30 minute timeout
```

Solution: Increase the timeout or use the quick fixture set.

### Debug Mode

Enable verbose logging:

```bash
dotnet test tests/parity/StellaOps.Parity.Tests \
  -e PARITY_DEBUG=true \
  -e PARITY_KEEP_OUTPUTS=true
```

## Architecture

```
tests/parity/StellaOps.Parity.Tests/
├── StellaOps.Parity.Tests.csproj       # Project file
├── ParityTestFixtureSet.cs             # Container image fixtures
├── ParityHarness.cs                    # Scanner execution harness
├── SbomComparisonLogic.cs              # SBOM comparison
├── VulnerabilityComparisonLogic.cs     # Vulnerability comparison
├── LatencyComparisonLogic.cs           # Latency comparison
├── ErrorModeComparisonLogic.cs         # Error handling comparison
└── Storage/
    ├── ParityResultStore.cs            # Time-series storage
    └── ParityDriftDetector.cs          # Drift detection
```

## See Also

- [Scanner Architecture](../modules/scanner/architecture.md)
- [Test Suite Overview](../19_TEST_SUITE_OVERVIEW.md)
- [CI/CD Workflows](.gitea/workflows/parity-tests.yml)
- [Competitive Benchmark](.gitea/workflows/benchmark-vs-competitors.yml)