Save checkpoint: semi-implemented and implemented features

This commit is contained in:
master
2026-02-08 18:00:49 +02:00
parent 04360dff63
commit 1bf6bbf395
20895 changed files with 716795 additions and 64 deletions


@@ -0,0 +1,22 @@
# CI Lint Hook for Implementor Guidelines
## Status
NOT_FOUND
## Description
The advisory called for a CI lint hook stub to enforce guidelines (e.g., docs-touched tagging, schema/versioning control). No automated enforcement tooling was found.
## Why Not Implemented
- No CI lint hooks, pre-commit hooks, or automated enforcement tooling found under `.gitea/`, `devops/`, or `src/Tools/`
- CI workflows exist in `.gitea/workflows/` but focus on build/test/deploy, not implementor guideline enforcement
- No docs-touched tagging or schema versioning control automation found
- The repo uses `AGENTS.md` files for contributor guidance but enforcement is manual
- Likely deferred; CI lint hooks are typically a low-priority quality-of-life improvement
## Source
- Feature matrix scan
## Notes
- Module: Uncategorized
- Modules referenced: N/A
- Could live under `.gitea/hooks/` or `devops/scripts/` when implemented
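If the hook is eventually implemented, the docs-touched rule could be enforced with a small check script run from CI or a pre-commit hook. The Python sketch below is hypothetical: the `src/`/`docs/` path prefixes and the `[skip-docs]` waiver marker are illustrative assumptions, not existing repo conventions.

```python
import sys

# Hypothetical docs-touched check: if files under src/ changed, require at
# least one change under docs/ or an explicit waiver marker in the commit
# message. Prefixes and marker are illustrative assumptions.
DOCS_WAIVER = "[skip-docs]"

def docs_touched_ok(changed_files, commit_message):
    """Return True when the change set satisfies the docs-touched rule."""
    touches_source = any(p.startswith("src/") for p in changed_files)
    touches_docs = any(p.startswith("docs/") for p in changed_files)
    return (not touches_source) or touches_docs or (DOCS_WAIVER in commit_message)

if __name__ == "__main__":
    # CI would pass the changed-file list, e.g. from `git diff --name-only`.
    if not docs_touched_ok(sys.argv[1:], ""):
        print("error: source changed without a docs/ update", file=sys.stderr)
        sys.exit(1)
```

The same shape extends to the schema/versioning rule: a second predicate over the changed-file list, wired into the same hook entry point.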


@@ -0,0 +1,33 @@
# Comparative Evidence/Suppression Pattern Analysis
## Module
Attestor
## Status
PARTIALLY_IMPLEMENTED
## Description
Evidence and suppression patterns are implemented in the scanning and VEX override subsystems. The advisory was primarily a research/comparison document; its findings appear to have influenced the VEX override and evidence panel designs rather than producing a standalone feature.
## What's Implemented
- **VEX Override System**: `src/Attestor/__Libraries/StellaOps.Attestor.StandardPredicates/VexOverride/` -- VexOverridePredicateBuilder, VexOverridePredicateParser, VexOverrideDecision, EvidenceReference -- provides structured suppression with evidence.
- **Audit Hash Logger**: `__Libraries/StellaOps.Attestor.ProofChain/Audit/AuditHashLogger.cs` (with `.Validation`) -- audit logging for evidence and suppression actions.
- **Change Trace Attestation Service**: `ProofChain/ChangeTrace/ChangeTraceAttestationService.cs` -- tracks changes including suppressions.
- **VEX Delta Tracking**: `Predicates/VexDeltaPredicate.cs`, `VexDeltaChange.cs`, `VexDeltaSummary.cs` -- tracks VEX status transitions.
## What's Missing
- **Cross-organization pattern analysis**: No service that compares suppression patterns across tenants or organizations to detect anomalous suppression rates.
- **Suppression pattern dashboard**: No UX component showing suppression trends, outliers, or comparative analysis against baselines.
- **Suppression quality scoring**: No scoring model that evaluates the quality/legitimacy of suppressions based on evidence strength.
- **Anomaly detection**: No automated detection of suspicious suppression patterns (e.g., bulk suppressions without evidence, suppressions of critical CVEs).
## Implementation Plan
- Design a suppression analytics service that aggregates suppression patterns
- Implement cross-tenant comparison with configurable baselines
- Add suppression quality scoring based on evidence reference count and type
- Build anomaly detection rules for suspicious suppression patterns
- Add dashboard UX components for suppression trend visualization
- Add tests for pattern analysis, scoring, and anomaly detection
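As a rough illustration of the planned quality scoring and anomaly rules, the sketch below weights evidence references by type and flags critical-CVE suppressions with weak evidence. It is Python for brevity (the production services would be C# in the Attestor module), and the evidence type names, weights, and threshold are assumptions, not the shipped model.

```python
from dataclasses import dataclass, field

# Illustrative evidence-type weights; only the idea (stronger evidence
# types score higher) is from the plan, the values are assumptions.
EVIDENCE_WEIGHTS = {"attestation": 1.0, "runtime-trace": 0.8, "analyst-note": 0.3}

@dataclass
class Suppression:
    cve_id: str
    severity: str
    evidence_types: list = field(default_factory=list)

def quality_score(s: Suppression) -> float:
    """Score in [0, 1]: saturating sum of evidence-reference weights."""
    return min(1.0, sum(EVIDENCE_WEIGHTS.get(t, 0.1) for t in s.evidence_types))

def is_suspicious(s: Suppression, threshold: float = 0.5) -> bool:
    """Anomaly rule from the missing-features list: a critical CVE
    suppressed with weak (or no) evidence."""
    return s.severity == "critical" and quality_score(s) < threshold
```

Cross-tenant comparison would then reduce to aggregating `quality_score` distributions per tenant and comparing them against a configurable baseline.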
## Related Documentation
- Source: See feature catalog


@@ -0,0 +1,45 @@
# DSSE Gateway Traversal (mTLS + provenance headers)
## Module
Scanner
## Status
PARTIALLY_IMPLEMENTED
## Description
HMAC-based DSSE envelope signing exists in the scanner worker for authenticating scan artifacts through gateway proxies. The feature envisions full mTLS gateway traversal with provenance headers injected by middleware, allowing scanner-to-registry and scanner-to-evidence-locker communication through reverse proxies (NGINX, Envoy, WAF) while maintaining attestation chain integrity.
## What's Implemented
- **HMAC DSSE Envelope Signing**:
- `src/Scanner/StellaOps.Scanner.Worker/Processing/Surface/HmacDsseEnvelopeSigner.cs` - `HmacDsseEnvelopeSigner` producing HMAC-signed DSSE envelopes for scan artifacts, providing integrity verification during transit through intermediary proxies
- **DSSE Signing Infrastructure**:
- `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Signing/ProofChainSigner.Verification.cs` - DSSE signature verification pipeline used downstream for validating signed envelopes
- `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Signing/DsseEnvelope.cs` - `DsseEnvelope` model representing Dead Simple Signing Envelope structures
- `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Signing/DsseSignature.cs` - `DsseSignature` model for individual signatures within envelopes
## What's Missing
- **Gateway Configuration**: No NGINX/Envoy/WAF gateway configuration templates or middleware for injecting provenance headers (X-Stella-Provenance, X-Stella-Scan-Id) into proxied requests
- **mTLS Certificate Management**: No scanner-specific mTLS certificate provisioning, rotation, or trust store configuration for gateway traversal
- **Provenance Header Middleware**: No ASP.NET Core middleware for reading/validating provenance headers on the receiving side (WebService, EvidenceLocker endpoints)
- **Gateway Health Probes**: No health check endpoints specifically designed for gateway liveness/readiness through proxy chains
- **Configuration Schema**: No structured configuration for declaring gateway topology (proxy chain depth, intermediate certificate authorities, header propagation rules)
## Implementation Plan
1. Create `GatewayProvenanceMiddleware` in `StellaOps.Scanner.WebService` that reads and validates X-Stella-Provenance headers from proxied requests
2. Create `MtlsCertificateProvider` in `StellaOps.Scanner.Worker` for provisioning and rotating scanner client certificates
3. Add gateway configuration templates (NGINX, Envoy) under `devops/` with provenance header injection rules
4. Extend `HmacDsseEnvelopeSigner` to embed gateway hop metadata in DSSE envelope payloads
5. Add integration tests verifying envelope integrity through simulated proxy chains
6. Add configuration schema for gateway topology in `StellaOps.Scanner.Core`
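Step 1's header validation could look roughly like the following, sketched in Python for brevity (the real middleware would be ASP.NET Core, with key material from the trust store). Treating `X-Stella-Provenance` as an HMAC over the `X-Stella-Scan-Id` value is an assumption about the eventual header scheme, not a documented format.

```python
import hashlib
import hmac

def validate_provenance(headers: dict, shared_key: bytes) -> bool:
    """Validate a proxied request's provenance headers.

    Assumed scheme: X-Stella-Provenance carries an HMAC-SHA256 over the
    X-Stella-Scan-Id value, keyed with a secret shared between the
    scanner worker and the receiving service."""
    scan_id = headers.get("X-Stella-Scan-Id")
    provenance = headers.get("X-Stella-Provenance")
    if not scan_id or not provenance:
        return False  # reject requests with missing headers outright
    expected = hmac.new(shared_key, scan_id.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking the expected digest.
    return hmac.compare_digest(expected, provenance)
```

A tampered or absent header fails validation, which matches the last E2E item below: the system must reject such requests rather than pass them through.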
## E2E Test Plan
- [ ] Configure a scanner worker behind an NGINX reverse proxy and verify HMAC-signed DSSE envelopes are produced and transit successfully to the evidence locker
- [ ] Verify provenance headers (X-Stella-Provenance, X-Stella-Scan-Id) are injected by the gateway and validated by the receiving service
- [ ] Verify mTLS client certificate authentication between scanner worker and gateway proxy
- [ ] Verify DSSE envelope signature remains valid after traversing a multi-hop proxy chain (scanner -> proxy -> WAF -> service)
- [ ] Verify gateway health probes report correct status through the proxy chain
- [ ] Verify the system rejects requests with missing or tampered provenance headers
## Related Documentation
- Source: See feature catalog
- Architecture: `docs/modules/scanner/architecture.md`


@@ -0,0 +1,25 @@
# DSSE+Rekor Batch Size Benchmarking Tool (stella-attest-bench)
## Status
NOT_FOUND
## Description
The advisory proposed a dedicated CLI benchmarking tool (stella-attest-bench) to sweep DSSE envelope batch sizes against Rekor and determine optimal defaults. While the underlying DSSE and Rekor infrastructure exists, no dedicated benchmarking/experiment tool was implemented.
## Why Not Implemented
- No dedicated `stella-attest-bench` CLI tool found
- The underlying DSSE and Rekor infrastructure is fully implemented in `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/`:
- `Signing/DsseEnvelope.cs`, `DsseSignature.cs` -- DSSE envelope support
- `Rekor/EnhancedRekorProofBuilder.Build.cs`, `EnhancedRekorProofBuilder.Validate.cs` -- Rekor integration
- `Rekor/RekorInclusionProof.cs` -- Rekor inclusion proofs
- The Bench module (`src/Bench/StellaOps.Bench/`) has benchmarking infrastructure (LinkNotMerge scenario runner, JSON/Prometheus reporting) but no DSSE batch size sweeping tool
- The Bench infrastructure (BenchmarkConfig, BenchmarkJsonWriter, PrometheusWriter) could serve as a foundation for a DSSE batch benchmark
- This is a low-priority optimization tool; the DSSE+Rekor pipeline works but batch size tuning requires a dedicated experiment harness
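The core of such a harness is small: sweep candidate batch sizes over a fixed envelope count and report per-envelope latency. The Python sketch below simulates the submit step with an assumed per-request overhead; a real `stella-attest-bench` would call the actual DSSE signing and Rekor upload path and report through the existing Bench writers.

```python
import time

def sweep(submit_batch, total_envelopes=256, batch_sizes=(1, 8, 32, 128)):
    """Return {batch_size: seconds_per_envelope} for each candidate size."""
    results = {}
    for size in batch_sizes:
        start = time.perf_counter()
        for _ in range(0, total_envelopes, size):
            submit_batch(size)  # stand-in for sign-and-upload of one batch
        elapsed = time.perf_counter() - start
        results[size] = elapsed / total_envelopes
    return results

def simulated_submit(size, per_envelope=0.0001, per_request=0.002):
    # Assumed cost model: fixed per-request overhead plus linear
    # per-envelope cost. Purely illustrative numbers.
    time.sleep(per_request + size * per_envelope)
```

Larger batches amortize the per-request overhead, which is exactly the trade-off the tool would quantify against a live Rekor instance before picking a default.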
## Source
- Feature matrix scan
## Notes
- Module: Attestor
- Modules referenced: N/A
- Related: `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Rekor/` (Rekor integration), `src/Bench/` (benchmarking infrastructure)


@@ -0,0 +1,102 @@
# eBPF Runtime Signal Integration (Probe Management, Type Granularity, and Tier 5 Evidence)
## Module
Signals (with cross-module touchpoints in Scanner and Zastava)
## Status
PARTIALLY_IMPLEMENTED
## Description
An eBPF signals library project exists with probe, parser, and enrichment infrastructure, and runtime signal ingestion is connected to the Unknowns module. The project structure suggests work in progress rather than production readiness. This is the "Tier 5" runtime evidence layer complementing the existing Tiers 1-4 (static analysis, binary fingerprinting, SBOM-based evidence). The scope also covers probe lifecycle management in Zastava and probe-type-aware confidence scoring in Scanner.
## What's Implemented
- **RuntimeSignalCollector**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Services/RuntimeSignalCollector.cs` -- collects runtime signals from eBPF probes
- **RuntimeEvidenceCollector**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Services/RuntimeEvidenceCollector.cs` -- collects runtime evidence from eBPF events
- **CoreProbeLoader**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Probes/CoreProbeLoader.cs` -- loads core eBPF probes
- **AirGapProbeLoader**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Probes/AirGapProbeLoader.cs` -- offline/air-gap compatible probe loading
- **EventParser**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Parsers/EventParser.cs` -- parses raw eBPF events into structured models
- **RuntimeEventEnricher**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Enrichment/RuntimeEventEnricher.cs` -- enriches runtime events with container/SBOM context
- **CgroupContainerResolver**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Cgroup/CgroupContainerResolver.cs` -- resolves cgroup paths to container identities
- **RuntimeEvidenceNdjsonWriter**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Output/RuntimeEvidenceNdjsonWriter.cs` -- writes evidence in NDJSON format
- **AttestorEvidenceChunkSigner**: `src/Signals/__Libraries/StellaOps.Signals.Ebpf/Signing/AttestorEvidenceChunkSigner.cs` -- signs evidence chunks for attestation
- **DotNetEventPipeAgent**: `src/Signals/StellaOps.Signals.RuntimeAgent/DotNetEventPipeAgent.cs` -- .NET EventPipe agent (production-ready for .NET)
- **Interfaces**: `IRuntimeSignalCollector`, `IEbpfProbeLoader`, `IContainerIdentityResolver`, `IContainerStateProvider`, `IImageDigestResolver`, `ISbomComponentProvider`
- **Scanner Runtime Trace Ingestion**: `src/Scanner/__Libraries/StellaOps.Scanner.Runtime/Ingestion/TraceIngestionService.cs` -- ingests runtime traces
- **Scanner Witness Infrastructure**:
- `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Witnesses/RuntimeObservation.cs` -- runtime-observed function invocations (timestamp, function signature, process context), but currently without a ProbeType discriminator
- `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Witnesses/PathWitness.cs` -- combines static call-graph paths with runtime observations
- `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Witnesses/WitnessDsseSigner.cs` -- signs runtime witness predicates for attestation
- `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Witnesses/WitnessPredicateBuilder.cs` -- builds DSSE-signable witness predicates from runtime observations
- **Zastava Probe Manager**: `src/Zastava/StellaOps.Zastava.Observer/Probes/EbpfProbeManager.cs` -- implements `IProbeManager` and `IAsyncDisposable`; manages eBPF probe lifecycle with `OnContainerStartAsync`/stop hooks; uses `IRuntimeSignalCollector` and `ISignalPublisher`; tracks active probe handles via `ConcurrentDictionary<string, SignalCollectionHandle>`; configurable via `EbpfProbeManagerOptions`
## What's Missing
### Signals (core infrastructure)
- **Production-grade kernel probe deployment**: No production deployment automation (probe installation, lifecycle management, Helm charts, systemd units)
- **Kernel-level function entry/exit tracing**: No BTF-backed function entry/exit tracing with accurate call stacks at scale
- **Performance SLA compliance**: No benchmarking proving low overhead under production workload
- **Kernel version compatibility matrix**: No detection and fallback strategies for different kernel versions
- **Cross-platform runtime agents**: Agents beyond .NET (Java JVMTI, Go Delve, Python `sys.settrace`, Node.js native) are not yet built
- **Runtime backport detection**: No logic comparing runtime traces against known-patched function signatures
- **Integration testing**: No integration tests with multiple container runtimes (containerd, CRI-O, Podman)
- **Production monitoring**: No dashboards and alerting for probe health
### Scanner (probe type granularity)
- **ProbeType Enum**: No `ProbeType` enum (Kprobe, Uprobe, Tracepoint, Usdt, Fentry, RawTracepoint) defined on or associated with `RuntimeObservation`
- **Probe-Aware Confidence Scoring**: Reachability confidence scoring does not differentiate based on probe attachment type (e.g., uprobe on a specific function is higher fidelity than a kprobe on a syscall)
- **ProbeType Propagation**: The Signals.Ebpf pipeline does not tag observations with their originating probe type before forwarding to the scanner
- **Predicate Schema Update**: Witness DSSE predicates do not include probeType in their signed payload schema
### Zastava (probe lifecycle management)
- No tests for EbpfProbeManager
- No integration with the Observer's `ContainerLifecycleHostedService` to automatically attach/detach probes
- No eBPF probe configuration UI or CLI
- Limited probe types (needs expansion for different kernel hook points)
- No probe health monitoring or failure recovery
## Implementation Plan
### Phase 1: Core production readiness (Signals)
- Benchmark eBPF probe overhead in production-like environments with performance SLAs
- Implement kernel version detection and compatibility matrix with fallback strategies
- Add integration tests for containerd, CRI-O, and Podman container runtimes
- Implement probe lifecycle management (hot-reload, graceful degradation)
- Production deployment automation with Helm charts and systemd units
### Phase 2: Probe type granularity (Scanner)
1. Define `ProbeType` enum in `StellaOps.Scanner.Reachability/Witnesses/` with values: Kprobe, Uprobe, Tracepoint, Usdt, Fentry, RawTracepoint, Unknown
2. Add optional `ProbeType` property to `RuntimeObservation`
3. Update `Signals.Ebpf` pipeline to tag observations with their originating probe type
4. Update `WitnessPredicateBuilder` to include probeType in signed predicates
5. Update reachability confidence scoring to apply probe-type-aware weights (uprobe > tracepoint > kprobe)
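Steps 1 and 5 above can be sketched as follows, in Python for brevity (the real enum would be C# under `StellaOps.Scanner.Reachability`). The weight values are illustrative assumptions; only the ordering uprobe > tracepoint > kprobe and the `Unknown` backward-compatible default come from the plan.

```python
from enum import Enum

class ProbeType(Enum):
    KPROBE = "kprobe"
    UPROBE = "uprobe"
    TRACEPOINT = "tracepoint"
    USDT = "usdt"
    FENTRY = "fentry"
    RAW_TRACEPOINT = "raw_tracepoint"
    UNKNOWN = "unknown"

# Illustrative fidelity weights; values are assumptions, ordering is not.
PROBE_WEIGHTS = {
    ProbeType.UPROBE: 1.0,          # attached to the exact user-space function
    ProbeType.FENTRY: 0.9,
    ProbeType.USDT: 0.85,
    ProbeType.TRACEPOINT: 0.7,
    ProbeType.RAW_TRACEPOINT: 0.65,
    ProbeType.KPROBE: 0.5,          # syscall-level, coarser attribution
    ProbeType.UNKNOWN: 0.4,         # backward-compatible default
}

def observation_confidence(base: float, probe: ProbeType = ProbeType.UNKNOWN) -> float:
    """Scale a base reachability confidence by the probe's fidelity weight."""
    return base * PROBE_WEIGHTS[probe]
```

Observations without a tagged probe type fall back to `Unknown`, preserving the backward-compatibility requirement in the E2E plan below.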
### Phase 3: Probe management (Zastava)
- Add unit tests for EbpfProbeManager lifecycle (attach/detach/dispose)
- Integrate with ContainerLifecycleHostedService for automatic probe management
- Expand probe types for syscall, network, and filesystem observation
- Add probe health monitoring with automatic reattachment on failure
- Add CLI/API for probe configuration management
### Phase 4: Extended runtime agents
- Add runtime backport detection comparing traces against patched function signatures
- Implement cross-platform runtime agents for Java, Go, Python
- Add production monitoring dashboards and alerting
## E2E Test Plan
- [ ] Collect runtime observations from a uprobe-attached function and verify the ProbeType field is set to `Uprobe`
- [ ] Collect runtime observations from a kprobe-attached syscall and verify the ProbeType field is set to `Kprobe`
- [ ] Verify reachability confidence scoring assigns higher weight to uprobe observations than kprobe observations
- [ ] Verify the witness DSSE predicate payload includes the probeType field and the signature covers it
- [ ] Verify backward compatibility: observations without ProbeType default to `Unknown`
- [ ] Verify ProbeType is preserved through the full pipeline: eBPF collection -> signal forwarding -> scanner ingestion -> witness predicate -> reachability score
- [ ] Verify EbpfProbeManager attaches probes on container start and detaches on container stop
- [ ] Verify probe health monitoring detects failed probes and triggers reattachment
## Related Documentation
- Source: See feature catalog
- Architecture: `docs/modules/scanner/architecture.md`
## Merged From
- `signals/tier-5-runtime-trace-evidence.md` (previously merged)
- `scanner/ebpf-probe-type-granularity.md` (merged -- probe type granularity for scanner witness infrastructure)
- `zastava/ebpf-probe-manager.md` (merged -- eBPF probe lifecycle management in Zastava observer)


@@ -0,0 +1,52 @@
# Ecosystem Reality Acceptance Test Fixtures
## Module
Scanner
## Status
PARTIALLY_IMPLEMENTED
## Description
The advisory maps five specific real-world production incidents (credential leak, offline DB schema mismatch, SBOM parity drift, scanner instability, ecosystem-specific SCA failure) into deterministic acceptance test fixtures. Each fixture reproduces the incident scenario end-to-end with frozen inputs and expected outputs, ensuring the scanner does not regress on previously observed production failures.
## What's Implemented
- **SCA Failure Catalogue**:
- `src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Node.Tests/` - Node.js SCA test fixtures covering package resolution edge cases
- `src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Ruby.Tests/` - Ruby SCA test fixtures covering Gemfile.lock parsing
- `src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Java.Tests/` - Java SCA test fixtures covering Maven/Gradle dependency resolution
- **SmartDiff Golden Fixtures**:
- `src/Scanner/__Tests/StellaOps.Scanner.SmartDiff.Tests/` - Deterministic golden fixture tests for diff-aware rescanning, covering expected SBOM delta outputs
- **Determinism Infrastructure**:
- Existing test projects use frozen fixture files (JSON SBOMs, lock files, manifest files) as inputs with expected output assertions
## What's Missing
- **Incident-to-Test Mapping**: No structured mapping from five specific real-world incidents to dedicated test fixtures:
1. **Credential Leak Incident**: No fixture reproducing a secret detection false-negative that led to a credential leak in production
2. **Offline DB Schema Mismatch**: No fixture reproducing scanner behavior when the offline vulnerability database schema version mismatches the scanner version
3. **SBOM Parity Drift**: No fixture reproducing divergence between container-scanned SBOM and source-scanned SBOM for the same artifact
4. **Scanner Instability**: No fixture reproducing non-deterministic scanner output across repeated scans of the same image layer
5. **Ecosystem SCA Failure**: No fixture reproducing ecosystem-specific SCA resolution failures (e.g., npm optional dependency with missing platform binary)
- **Incident Metadata**: No `incident.metadata.json` files linking each fixture to its originating production incident (date, severity, root cause, fix)
- **Acceptance Test Runner**: No dedicated CI job or test category for running ecosystem reality acceptance tests separately from unit tests
## Implementation Plan
1. Create `src/Scanner/__Tests/StellaOps.Scanner.EcosystemReality.Tests/` project with five incident fixture directories
2. For each incident, create: `incident.metadata.json` (date, severity, root cause), frozen input fixtures, expected output assertions
3. Implement credential leak fixture using a container layer with an embedded secret that was previously missed
4. Implement offline DB schema mismatch fixture with mismatched vuln-db schema version headers
5. Implement SBOM parity drift fixture with container vs. source scan inputs producing divergent SBOMs
6. Implement scanner instability fixture verifying byte-identical output across 10 repeated scans
7. Implement ecosystem SCA failure fixture with npm optional dependency edge case
8. Add CI job category `ecosystem-reality` for running these acceptance tests
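The determinism check at the heart of step 6 is straightforward: hash the scanner output across repeated runs over the same frozen input and require a single distinct digest. A Python sketch, with `run_scan` as a stand-in for invoking the scanner on the frozen image layer:

```python
import hashlib

def is_deterministic(run_scan, runs: int = 10) -> bool:
    """True when every run yields byte-identical output.

    run_scan is a callable returning the raw SBOM bytes for one scan of
    the frozen fixture input (a stand-in for the real scanner here)."""
    digests = {hashlib.sha256(run_scan()).hexdigest() for _ in range(runs)}
    return len(digests) == 1
```

Hashing rather than storing full outputs keeps the fixture assertion cheap even for large SBOMs; on failure, the differing runs can be re-captured for diffing.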
## E2E Test Plan
- [ ] Run the credential leak incident fixture and verify the scanner now detects the previously-missed embedded secret in the container layer
- [ ] Run the offline DB schema mismatch fixture and verify the scanner produces a clear error or graceful degradation when vuln-db schema version does not match
- [ ] Run the SBOM parity drift fixture and verify the scanner flags divergence between container-scanned and source-scanned SBOMs for the same artifact
- [ ] Run the scanner instability fixture and verify byte-identical SBOM output across 10 repeated scans of the same frozen image layer
- [ ] Run the ecosystem SCA failure fixture and verify correct handling of npm optional dependencies with missing platform binaries
- [ ] Verify each fixture includes incident.metadata.json with date, severity, root cause, and link to the originating production incident
## Related Documentation
- Source: See feature catalog
- Architecture: `docs/modules/scanner/architecture.md`


@@ -0,0 +1,35 @@
# Evidence TTL and staleness policy
## Module
Signals
## Status
PARTIALLY_IMPLEMENTED
## Description
Retention options and lifecycle services exist for evidence expiry, but the advisory noted TTL strategy at 50% coverage.
## What's Implemented
- **Modules**: `src/Signals/StellaOps.Signals/Services/`, `src/Signals/StellaOps.Signals/Options/`
- **Key Classes**:
- `UnknownsDecayService` (`src/Signals/StellaOps.Signals/Services/UnknownsDecayService.cs`) - applies decay to stale unknown findings (related TTL behavior)
- `UnknownsDecayOptions` (`src/Signals/StellaOps.Signals/Options/UnknownsDecayOptions.cs`) - configurable decay/TTL thresholds
- `NightlyDecayWorker` (`src/Signals/StellaOps.Signals/Services/NightlyDecayWorker.cs`) - scheduled worker for TTL processing
- **Source**: Feature matrix scan
## What's Missing
- Comprehensive evidence TTL policy engine covering all evidence types (callgraph, runtime, SBOM correlation, attestation)
- Per-evidence-type configurable TTL with different retention periods
- Staleness detection that marks evidence as stale before hard expiry
- Automated evidence archival workflow (move to cold storage before deletion)
- TTL policy dashboard showing evidence age distribution and upcoming expirations
## Implementation Plan
- Implement `EvidenceTtlPolicyEngine` with per-type configurable retention periods
- Add staleness detection service that marks evidence nearing TTL as stale
- Implement evidence archival pipeline for cold storage migration
- Add TTL policy configuration UI and monitoring dashboard
- Extend `NightlyDecayWorker` to handle evidence expiry across all evidence types
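The per-type TTL and staleness logic could be as simple as the following sketch (Python for illustration; the real engine would sit alongside the Signals services). The evidence type names, retention periods, and the 80% staleness threshold are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

# Assumed per-evidence-type retention periods (illustrative values).
TTL_POLICY = {
    "callgraph": timedelta(days=30),
    "runtime": timedelta(days=7),
    "sbom-correlation": timedelta(days=90),
    "attestation": timedelta(days=365),
}
STALE_FRACTION = 0.8  # mark evidence stale once 80% of its TTL has elapsed

def evidence_state(evidence_type: str, created_at: datetime, now: datetime) -> str:
    """Classify evidence as fresh, stale (nearing expiry), or expired."""
    ttl = TTL_POLICY[evidence_type]
    age = now - created_at
    if age >= ttl:
        return "expired"  # eligible for archival / deletion
    if age >= ttl * STALE_FRACTION:
        return "stale"    # flagged ahead of hard expiry
    return "fresh"
```

The staleness tier is what enables the archival workflow above: "stale" evidence can be moved to cold storage before the hard-expiry sweep deletes it.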
## Related Documentation
- Source: See feature catalog


@@ -0,0 +1,34 @@
# Golden Benchmark Fixtures (Core-10)
## Module
Bench
## Status
PARTIALLY_IMPLEMENTED
## Description
The advisory describes 10 golden reachability benchmark fixtures (C, Java, .NET, Python, container), but no pre-built fixture datasets were found in the source tree. The ReachGraph service infrastructure exists but the specific Core-10 fixture data files are not present.
## What's Implemented
- **Multi-runtime corpus**: `src/__Tests/reachability/corpus/` (5 runtimes: dotnet, go, java, python, rust)
- **Additional VEX corpus**: `src/tests/reachability/corpus/` (4 runtimes with OpenVEX files)
- **Expanded benchmarks**: `src/__Tests/reachability/fixtures/reachbench-2025-expanded/`
- **Patch oracles**: `src/__Tests/reachability/fixtures/patch-oracles/`
- **PoE fixtures**: `src/__Tests/reachability/PoE/Fixtures/`
- **Scoring golden corpus**: `src/__Tests/__Benchmarks/golden-corpus/` (VEX scenarios and severity levels)
- **Fixture harvester tool**: `src/__Tests/Tools/FixtureHarvester/SbomGoldenCommand.cs`
- **Corpus management scripts**: `src/__Tests/reachability/scripts/update_corpus_manifest.py`
- **Fixture tests**: `src/__Tests/reachability/StellaOps.Reachability.FixtureTests/`
## What's Missing
- The exact advisory-specified "Core-10" named fixture set (10 specific golden reachability benchmark fixtures covering C, Java, .NET, Python, container)
- Formal Core-10 naming convention and documentation
- C runtime reachability corpus (only dotnet, go, java, python, rust present)
## Implementation Plan
- Audit existing fixtures against the Core-10 specification from the advisory
- Add C runtime corpus if required
- Formalize Core-10 naming and documentation
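The audit step reduces to a set difference between the runtimes the advisory requires and those present in the corpus, e.g.:

```python
# Required runtimes per the advisory's Core-10 description above; the
# exact Core-10 fixture list is not in the tree, so this is a sketch.
REQUIRED = {"c", "java", "dotnet", "python", "container"}

def audit_corpus(present: set) -> set:
    """Return the runtimes the Core-10 set still needs."""
    return REQUIRED - present
```

Run against the current corpus (dotnet, go, java, python, rust), this would report `c` and `container` as the gaps.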
## Source
- Feature matrix scan


@@ -0,0 +1,39 @@
# Metrics for attestation coverage and time-to-evidence
## Module
Unknowns
## Status
PARTIALLY_IMPLEMENTED
## Description
Some metrics services exist but the advisory noted metrics coverage at only 30%.
## What's Implemented
- **Unknowns Metrics Service**: `src/Unknowns/StellaOps.Unknowns.Services/UnknownsMetricsService.cs` -- exposes basic Prometheus/OpenTelemetry metrics for the unknowns queue including queue depth, resolution counts, and SLA breach counts.
- **Unknowns SLA Monitor Service**: `src/Unknowns/StellaOps.Unknowns.Services/UnknownsSlaMonitorService.cs` -- monitors SLA compliance for unknown resolution timelines, providing partial time-to-evidence tracking for the unknowns domain.
- **Unknowns SLA Health Check**: `src/Unknowns/StellaOps.Unknowns.Services/UnknownsSlaHealthCheck.cs` -- health check endpoint reporting unknowns SLA status, contributing to observability but not comprehensive attestation coverage metrics.
- **Unknown Ranking Model**: `src/Unknowns/__Libraries/StellaOps.Unknowns.Core/Models/UnknownRanking.cs` -- ranking model that includes priority scoring which could inform metrics prioritization.
## What's Missing
- **Attestation Coverage Metrics**: No metrics track what percentage of release artifacts have complete attestation chains. The unknowns metrics cover queue health but not attestation completeness across the entire pipeline (e.g., "X% of images have SBOM + VEX + provenance attestations").
- **Time-to-Evidence Metrics**: No end-to-end time-to-evidence metric exists tracking the duration from vulnerability discovery to complete evidence availability (scanner result -> reachability analysis -> VEX decision -> attestation). The SLA monitor tracks unknowns resolution time but not the broader evidence pipeline.
- **Per-Provider Coverage Breakdown**: No metrics break down attestation coverage by provider/scanner (e.g., "Trivy provides SBOM for 95% of images, reachability analysis covers 60%").
- **Evidence Freshness Metrics**: No metrics track evidence staleness (e.g., "SBOM is 30 days old, VEX decision is 7 days old") across the artifact estate.
- **Dashboard Integration**: No pre-built Grafana dashboards or Web UI panels exist for visualizing attestation coverage and time-to-evidence trends.
- **Cross-Module Metrics Aggregation**: Metrics are siloed per module (Unknowns, Attestor, EvidenceLocker); no aggregation layer combines them into a unified attestation coverage view.
## Implementation Plan
- Define attestation coverage metrics: per-artifact attestation completeness (SBOM present, VEX present, provenance present, reachability analysis present)
- Implement a metrics aggregation service that queries Attestor, EvidenceLocker, and Unknowns to compute estate-wide attestation coverage percentages
- Add time-to-evidence histogram metrics tracking the duration from vulnerability publication to complete evidence chain availability
- Add per-provider coverage breakdown metrics
- Add evidence freshness metrics (age of latest SBOM, VEX, provenance per artifact)
- Build Grafana dashboard templates for attestation coverage and time-to-evidence visualization
- Target: increase metrics coverage from 30% to 90%+ of the advisory specification
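The estate-wide coverage number itself is a simple ratio once the aggregation service can enumerate evidence per artifact. A Python sketch, with illustrative evidence-kind names (the real aggregation would query Attestor, EvidenceLocker, and Unknowns):

```python
# Assumed evidence kinds constituting a "complete" attestation chain.
REQUIRED_EVIDENCE = {"sbom", "vex", "provenance", "reachability"}

def coverage(artifacts: dict) -> float:
    """artifacts maps artifact id -> set of evidence kinds present.

    Returns the fraction of artifacts with a complete evidence chain."""
    if not artifacts:
        return 0.0
    complete = sum(1 for kinds in artifacts.values() if REQUIRED_EVIDENCE <= kinds)
    return complete / len(artifacts)
```

Per-provider breakdowns are the same computation restricted to one evidence kind at a time, which is why the aggregation layer, not the metric math, is the real work here.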
## Related Documentation
- Unknowns metrics: `src/Unknowns/StellaOps.Unknowns.Services/UnknownsMetricsService.cs`
- Attestor proof chain: `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/`
- Evidence locker: `src/EvidenceLocker/`
- VexLens (VEX processing): `src/VexLens/`


@@ -0,0 +1,41 @@
# MI10 - Theme/Contrast Guidance (Light/Dark/HC Tokens)
## Module
Web
## Status
PARTIALLY_IMPLEMENTED
## Description
Color tokens and focus ring styles exist. Theme transition utilities are implemented. However, the specific theming doc `docs/modules/ui/micro-theme.md` and explicit HC (high-contrast) mode tokens with 4.5:1/3:1 contrast validation were not found as standalone artifacts.
## What's Implemented
- Color tokens and focus ring styles exist in the Angular codebase
- Theme transition utilities are implemented
- Dark mode support exists
## What's Missing
- **High-contrast (HC) mode tokens**: No dedicated HC theme with WCAG 4.5:1 (normal text) and 3:1 (large text/icons) validated contrast ratios
- **Theme specification document**: No `docs/modules/ui/micro-theme.md` formalizing the light/dark/HC token sets
- **Contrast validation tooling**: No automated CI check validating contrast ratios across all color tokens
- **HC mode toggle**: No user-facing toggle for high-contrast mode in settings
## Implementation Plan
- Create HC theme token set with WCAG-validated contrast ratios
- Add contrast ratio validation CI check using color-contrast tooling
- Add HC mode toggle to user settings
- Document theme tokens in `docs/modules/ui/micro-theme.md`
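The contrast check itself is mechanical and follows the WCAG 2.x definitions of relative luminance and contrast ratio; the CI validation step could use something like this Python sketch over every token pair:

```python
def _linear(channel: int) -> float:
    # sRGB channel (0-255) to linear-light value per WCAG 2.x.
    c = channel / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg: str, bg: str) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes(fg: str, bg: str, large_text: bool = False) -> bool:
    """WCAG AA: 4.5:1 for normal text, 3:1 for large text/icons."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

Wired into CI, the check would iterate every (text token, surface token) pair in each of the light, dark, and HC themes and fail the build on any pair below threshold.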
## E2E Test Plan
- **Setup**:
- [ ] Log in with a user that has appropriate permissions
- [ ] Navigate to the relevant page/section where this feature appears
- [ ] Ensure test data exists (scanned artifacts, SBOM data, or seed data as needed)
- **Core verification**:
- [ ] Verify the component renders correctly with sample data
- [ ] Verify interactive elements respond to user input
- [ ] Verify data is fetched and displayed from the correct API endpoints
- **Edge cases**:
- [ ] Verify graceful handling when backend API is unavailable (error state)
- [ ] Verify responsive layout at different viewport sizes
- [ ] Verify accessibility (keyboard navigation, screen reader labels, ARIA attributes)


@@ -0,0 +1,53 @@
# MI4 - Error/Cancel/Undo Patterns (Snackbar/Toast with Undo)
## Module
Web
## Status
PARTIALLY_IMPLEMENTED
## Description
i18n keys for toast/undo/undoCountdown patterns exist and snackbar usage is present across components. However, a dedicated centralized snackbar/toast service with the specific 8s undo window and aria-live=polite pattern was not found as a standalone component.
## What's Implemented
- **Existing components**:
- `accordion` (`src/Web/StellaOps.Web/src/app/shared/components/accordion/accordion.component.ts`)
- `ai-assist-panel` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-assist-panel.component.ts`)
- `ai-authority-badge` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-authority-badge.component.ts`)
- `ai-chip` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-chip.component.ts`)
- `ai-explain-chip` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-explain-chip.component.ts`)
- `ai-exploitability-chip` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-exploitability-chip.component.ts`)
- `ai-fix-chip` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-fix-chip.component.ts`)
- `ai-needs-evidence-chip` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-needs-evidence-chip.component.ts`)
- `ai-summary` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-summary.component.ts`)
- `ai-vex-draft-chip` (`src/Web/StellaOps.Web/src/app/shared/components/ai/ai-vex-draft-chip.component.ts`)
- **Existing services**:
- `replay` (`src/Web/StellaOps.Web/src/app/shared/components/reproduce/replay.service.ts`)
- `graph-export` (`src/Web/StellaOps.Web/src/app/shared/services/graph-export.service.ts`)
- `plain-language` (`src/Web/StellaOps.Web/src/app/shared/services/plain-language.service.ts`)
## What's Missing
- **Centralized snackbar/toast service**: No centralized `ToastService` with the specific 8-second undo window, countdown timer, and `aria-live=polite` pattern
- **Undo action infrastructure**: i18n keys for `toast.undo` and `undoCountdown` exist but no centralized undo action queue that buffers destructive operations for the undo window
- **Cancel pattern standardization**: No consistent cancel pattern across all modal/drawer interactions (some modals lack cancel confirmation for dirty forms)
- **Error boundary component**: No centralized error boundary component that catches and displays user-friendly errors with retry actions
## Implementation Plan
- Create centralized `ToastService` with undo support, 8s countdown, and `aria-live=polite`
- Implement undo action queue for buffering destructive operations
- Standardize cancel patterns across modals and drawers
- Add error boundary component with retry actions
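The centralized toast service described above could be sketched as follows. The names (`ToastService`, `Scheduler`, `ToastHandle`) and the injectable-scheduler design are illustrative assumptions, not existing codebase APIs; the real Angular implementation would render the message in an `aria-live="polite"` region with a visible countdown.

```typescript
// Sketch of a centralized undoable-toast service; names and the injectable
// scheduler are assumptions, not existing StellaOps APIs.
type Scheduler = (fn: () => void, ms: number) => () => void;

// Default scheduler backed by setTimeout; returns a cancel function.
const defaultScheduler: Scheduler = (fn, ms) => {
  const id = setTimeout(fn, ms);
  return () => clearTimeout(id);
};

interface ToastHandle {
  undo(): void;
}

class ToastService {
  constructor(
    private schedule: Scheduler = defaultScheduler,
    private undoWindowMs = 8000, // the 8s undo window from the spec
  ) {}

  // Show a toast for a destructive action; `commit` runs after the undo
  // window unless undo() is called first.
  showUndoable(message: string, commit: () => void): ToastHandle {
    void message; // rendering omitted in this sketch
    let settled = false;
    const cancel = this.schedule(() => {
      if (!settled) {
        settled = true;
        commit();
      }
    }, this.undoWindowMs);
    return {
      undo() {
        if (!settled) {
          settled = true;
          cancel();
        }
      },
    };
  }
}
```

Buffering the destructive operation behind `commit` (rather than executing it immediately and compensating on undo) keeps undo trivially safe and gives the undo action queue a natural home.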
## E2E Test Plan
- **Setup**:
- [ ] Log in with a user that has appropriate permissions
- [ ] Navigate to the relevant page/section where this feature appears
- [ ] Ensure test data exists (scanned artifacts, SBOM data, or seed data as needed)
- **Core verification**:
- [ ] Verify the component renders correctly with sample data
- [ ] Verify interactive elements respond to user input
- [ ] Verify data is fetched and displayed from the correct API endpoints
- **Edge cases**:
- [ ] Verify graceful handling when backend API is unavailable (error state)
- [ ] Verify responsive layout at different viewport sizes
- [ ] Verify accessibility (keyboard navigation, screen reader labels, ARIA attributes)


@@ -0,0 +1,47 @@
# MI5 - Performance Budgets (Interaction Response, Animation Frame, LCP)
## Module
Web
## Status
PARTIALLY_IMPLEMENTED
## Description
Lighthouse CI config exists for performance monitoring. However, the specific thresholds (interaction response <=100ms, 16ms animation frame budget, layout shift <0.05) were not found as explicitly configured budgets in test fixtures or CI checks.
## What's Implemented
- **Existing components**:
- `extension-slot` (`src/Web/StellaOps.Web/src/app/core/plugins/extension-slots/extension-slot.component.ts`)
- **Existing services**:
- `evidence-panel-metrics` (`src/Web/StellaOps.Web/src/app/core/analytics/evidence-panel-metrics.service.ts`)
- `gateway-metrics` (`src/Web/StellaOps.Web/src/app/core/api/gateway-metrics.service.ts`)
- `policy-interop` (`src/Web/StellaOps.Web/src/app/core/api/policy-interop.service.ts`)
- `reachability-integration` (`src/Web/StellaOps.Web/src/app/core/api/reachability-integration.service.ts`)
- `vuln-export-orchestrator` (`src/Web/StellaOps.Web/src/app/core/api/vuln-export-orchestrator.service.ts`)
## What's Missing
- **Interaction response budget**: No explicitly configured <=100ms interaction response threshold in test fixtures or CI checks
- **Animation frame budget**: No configured 16ms frame budget validation for animations
- **Layout shift budget**: No configured <0.05 CLS (Cumulative Layout Shift) threshold enforcement
- **LCP budget**: No Largest Contentful Paint budget configured in Lighthouse CI or performance tests
- **Performance regression CI gate**: Lighthouse CI config exists but no CI gate failing builds on budget violations
## Implementation Plan
- Configure Lighthouse CI budgets: interaction <=100ms, frame 16ms, CLS <0.05, LCP <2.5s
- Add performance regression CI gate failing on budget violations
- Add `evidence-panel-metrics.service.ts` integration with performance budgets for key user flows
- Document performance budgets in frontend architecture docs
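As a sketch, the budgets above could be encoded as Lighthouse CI assertions (the audit IDs are standard Lighthouse audits; the file name, severity levels, and the use of `max-potential-fid` as a proxy for the <=100ms interaction-response budget are assumptions). The 16ms animation frame budget has no corresponding Lighthouse audit and would need separate runtime instrumentation.

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.05 }],
        "max-potential-fid": ["error", { "maxNumericValue": 100 }],
        "total-blocking-time": ["warn", { "maxNumericValue": 200 }]
      }
    }
  }
}
```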
## E2E Test Plan
- **Setup**:
- [ ] Log in with a user that has appropriate permissions
- [ ] Navigate to the relevant page/section where this feature appears
- [ ] Ensure test data exists (scanned artifacts, SBOM data, or seed data as needed)
- **Core verification**:
- [ ] Verify the component renders correctly with sample data
- [ ] Verify interactive elements respond to user input
- [ ] Verify data is fetched and displayed from the correct API endpoints
- **Edge cases**:
- [ ] Verify graceful handling when backend API is unavailable (error state)
- [ ] Verify responsive layout at different viewport sizes
- [ ] Verify accessibility (keyboard navigation, screen reader labels, ARIA attributes)


@@ -0,0 +1,22 @@
# MI7 - Telemetry Schema for ui.micro.* Events
## Status
NOT_FOUND
## Description
The `ui.micro.*` telemetry JSON schema and its associated unit-test validator were not found. Triage-specific telemetry exists, but the generic micro-interaction telemetry schema is missing.
## Why Not Implemented
- No `ui.micro.*` telemetry JSON schema found in `src/Web/` or `docs/`
- No dedicated micro-interaction telemetry event system found in the Web UI source
- The Telemetry module (`src/Telemetry/StellaOps.Telemetry.Core/`) tracks backend metrics (Time-to-Evidence, attestation metrics, scan completion) but not frontend UI micro-events
- The Web UI does not appear to have an instrumented event bus for tracking fine-grained user interactions
- This was a documentation + schema deliverable; the generic telemetry infrastructure exists but the UI-specific micro-event schema was never defined
## Source
- Feature matrix scan
## Notes
- Module: Web
- Modules referenced: `src/Web`
- Related: `src/Telemetry/` (backend telemetry infrastructure)
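For illustration only, a minimal draft of what such a `ui.micro.*` event schema could look like (all field names are hypothetical, since the schema was never defined):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "ui.micro event (hypothetical sketch)",
  "type": "object",
  "required": ["event", "ts", "component"],
  "properties": {
    "event": { "type": "string", "pattern": "^ui\\.micro\\.[a-z0-9_.]+$" },
    "ts": { "type": "string", "format": "date-time" },
    "component": { "type": "string" },
    "durationMs": { "type": "number", "minimum": 0 },
    "meta": { "type": "object" }
  }
}
```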


@@ -0,0 +1,40 @@
# MI8 - Deterministic Seeds/Snapshots (Fixed RNG, Frozen Timestamps)
## Module
Web
## Status
PARTIALLY_IMPLEMENTED
## Description
Deterministic fixture files exist for testing and the Storybook preview is configured. However, a `chromatic.disableAnimation` parameter and fixed-seed exports from a `micro-fixtures.ts` file were not verified.
## What's Implemented
- **Existing services**:
- `determinization` (`src/Web/StellaOps.Web/src/app/core/services/determinization/determinization.service.ts`)
## What's Missing
- **Fixed RNG seed exports**: No `micro-fixtures.ts` file exporting deterministic seed values for Storybook stories and tests
- **Frozen timestamps**: No globally-configurable frozen timestamp provider for deterministic date rendering in snapshots
- **chromatic.disableAnimation**: Storybook preview may not have `chromatic.disableAnimation` configured for consistent visual regression snapshots
- **Deterministic service integration**: `determinization.service.ts` exists but its usage across all Storybook stories for reproducible snapshots is not confirmed
## Implementation Plan
- Create `micro-fixtures.ts` with exported seed values and frozen timestamp provider
- Configure `chromatic.disableAnimation` in Storybook preview config
- Wire `determinization.service.ts` into all Storybook stories for reproducible rendering
- Add documentation for deterministic snapshot patterns
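A minimal sketch of what `micro-fixtures.ts` could export: a seeded PRNG (mulberry32) and a frozen clock, so stories and snapshot tests render identically on every run. The names, seed value, and frozen date are illustrative assumptions; in the real file these would be `export`ed.

```typescript
// Hypothetical micro-fixtures.ts contents (names and values illustrative).
const FIXED_SEED = 0x5eed;
const FROZEN_NOW = new Date("2025-01-01T00:00:00Z");

// Deterministic PRNG (mulberry32): the same seed always yields the same
// sequence of floats in [0, 1).
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Frozen timestamp provider for components that render dates; stories inject
// this instead of calling Date.now() directly.
const frozenClock = { now: (): Date => new Date(FROZEN_NOW.getTime()) };
```

Routing all randomness and time through these two seams is what makes byte-identical visual regression snapshots possible; `FIXED_SEED` exists so tests can document which sequence they depend on.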
## E2E Test Plan
- **Setup**:
- [ ] Log in with a user that has appropriate permissions
- [ ] Navigate to the relevant page/section where this feature appears
- [ ] Ensure test data exists (scanned artifacts, SBOM data, or seed data as needed)
- **Core verification**:
- [ ] Verify the component renders correctly with sample data
- [ ] Verify interactive elements respond to user input
- [ ] Verify data is fetched and displayed from the correct API endpoints
- **Edge cases**:
- [ ] Verify graceful handling when backend API is unavailable (error state)
- [ ] Verify responsive layout at different viewport sizes
- [ ] Verify accessibility (keyboard navigation, screen reader labels, ARIA attributes)


@@ -0,0 +1,30 @@
# Mirror DSSE Revision Contract
## Module
AirGap
## Status
PARTIALLY_IMPLEMENTED
## Description
Defines the DSSE signing contract revision for mirror bundles, specifying envelope format, digest algorithm choices, and manifest inclusion rules for air-gapped import verification. Implementation is coordination-level (docs + scripts).
## What's Implemented
- DSSE envelope signing/verification infrastructure: `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Signing/DsseEnvelope.cs`, `DsseSignature.cs`
- DSSE verification step: `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Verification/DsseSignatureVerificationStep.cs`
- Importer DSSE parsing: `src/AirGap/StellaOps.AirGap.Importer/Reconciliation/Parsers/DsseAttestationParser.cs`
- Bundle library with manifest support: `src/AirGap/__Libraries/StellaOps.AirGap.Bundle/`
- SPDX3 DSSE signing: `src/Attestor/__Libraries/StellaOps.Attestor.Spdx3/DsseSpdx3Signer.*.cs`
- Source: SPRINT_0150_0001_0001_mirror_dsse.md
## What's Missing
- The mirror-specific DSSE revision contract (specifying envelope format, digest algorithm choices, manifest inclusion rules for mirror bundles specifically) may need formalization as a versioned contract document
- Mirror-specific DSSE tests are referenced in TASKS.md files but may not be complete
## Implementation Plan
- Formalize mirror DSSE contract as versioned specification
- Add mirror-specific DSSE validation tests
- Verify digest algorithm choices are consistent across mirror pipeline
## Related Documentation
- Source: SPRINT_0150_0001_0001_mirror_dsse.md


@@ -0,0 +1,31 @@
# Mirror Orchestrator Hook Event (mirror.ready)
## Module
AirGap
## Status
PARTIALLY_IMPLEMENTED
## Description
Defines the `mirror.ready` event payload `{bundleId, generation, generatedAt, dsseDigest, manifestDigest, location}`, with an optional `rekorUUID`, enabling CLI and export automation to consume mirror-bundle readiness notifications.
## What's Implemented
- AirGap controller with event hooks: `src/AirGap/StellaOps.AirGap.Controller/` -- state management and endpoints
- Time hooks: `src/AirGap/StellaOps.AirGap.Time/Hooks/` -- event hooks for time-related operations
- Bundle catalog model: `src/AirGap/StellaOps.AirGap.Importer/Models/BundleCatalogEntry.cs`, `BundleItem.cs` -- bundle metadata with ID, generation, timestamps
- Sync library: `src/AirGap/__Libraries/StellaOps.AirGap.Sync/` -- bundle synchronization infrastructure
- Source: SPRINT_0150_0001_0003_mirror_orch.md
## What's Missing
- The specific `mirror.ready` event with payload `{bundleId, generation, generatedAt, dsseDigest, manifestDigest, location}` may not be formalized as a named event
- CLI/export automation consumption of mirror readiness notifications needs verification
- Optional `rekorUUID` field in event payload needs confirmation
## Implementation Plan
- Define `mirror.ready` event type in eventing system
- Implement event publication when mirror bundle is ready
- Add CLI hook for consuming mirror.ready events
- Add tests for event payload validation
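A sketch of the payload shape and a validator, using the field names from the description above (`rekorUUID` optional). This is a proposal matching this document, not an existing contract; the digest format and validation rules are assumptions.

```typescript
// Proposed mirror.ready payload; field names follow this document, the
// digest/timestamp validation rules are illustrative assumptions.
interface MirrorReadyEvent {
  bundleId: string;
  generation: number;
  generatedAt: string;   // ISO-8601 UTC timestamp
  dsseDigest: string;    // e.g. "sha256:<hex>"
  manifestDigest: string;
  location: string;      // bundle URI or filesystem path
  rekorUUID?: string;    // present only when the bundle was logged to Rekor
}

// Returns a list of validation errors; empty means the payload is well-formed.
function validateMirrorReady(e: MirrorReadyEvent): string[] {
  const errors: string[] = [];
  if (!e.bundleId) errors.push("bundleId required");
  if (!Number.isInteger(e.generation) || e.generation < 0) {
    errors.push("generation must be a non-negative integer");
  }
  if (Number.isNaN(Date.parse(e.generatedAt))) {
    errors.push("generatedAt must be ISO-8601");
  }
  for (const d of [e.dsseDigest, e.manifestDigest]) {
    if (!/^sha(256|512):[0-9a-f]+$/.test(d)) errors.push(`bad digest: ${d}`);
  }
  if (!e.location) errors.push("location required");
  return errors;
}
```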
## Related Documentation
- Source: SPRINT_0150_0001_0003_mirror_orch.md


@@ -0,0 +1,23 @@
# Multi-scanner Comparative Benchmarking
## Status
NOT_FOUND
## Description
Advisory describes a benchmarking protocol comparing StellaOps scan results against Trivy/Grype/Snyk with precision/recall metrics. No CLI comparison tool or benchmark harness found.
## Why Not Implemented
- No dedicated CLI comparison tool or multi-scanner benchmark harness found
- A `compare.py` script exists at `src/__Tests/__Benchmarks/tools/compare.py` but it appears to be a general comparison utility, not a full multi-scanner benchmarking protocol
- The Bench module (`src/Bench/`) has benchmarking infrastructure (LinkNotMerge scenarios, Prometheus reporting) but not scanner comparison harnesses
- Golden corpus exists at `src/__Tests/__Benchmarks/golden-corpus/` with VEX scenarios and severity levels, which could serve as ground truth for scanner comparison
- The Scanner module has its own benchmark and test infrastructure but does not compare against external scanners (Trivy/Grype/Snyk)
- This would require external scanner integration, which conflicts with the platform's offline-first posture
## Source
- Feature matrix scan
## Notes
- Module: Bench
- Modules referenced: N/A
- Related: `src/__Tests/__Benchmarks/tools/compare.py` (comparison utility), `src/__Tests/__Benchmarks/golden-corpus/` (ground truth data)


@@ -0,0 +1,37 @@
# Playbook Learning (Run-to-Patch Pipeline)
## Module
AdvisoryAI
## Status
PARTIALLY_IMPLEMENTED
## Description
Run artifacts and evidence bundles support playbook-related data, but dedicated playbook learning, patch proposal generation, and versioned playbook management are not fully distinct modules yet.
## What's Implemented
- **Run tracking infrastructure**: `RunService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Runs/RunService.cs`) tracks runs with artifacts and events
- **Run models**: `Run`, `RunArtifact`, `RunEvent` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Runs/Models/`) capture run outcomes
- **Run storage**: `InMemoryRunStore` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Runs/InMemoryRunStore.cs`) persists run data
- **Evidence bundle assembly**: `EvidenceBundleAssembler` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/EvidenceBundleAssembler.cs`) assembles evidence packs from data providers
- **Remediation planning**: `AiRemediationPlanner` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/AiRemediationPlanner.cs`) generates fix plans
- **PR generation**: `GitHubPullRequestGenerator`, `GitLabMergeRequestGenerator`, `AzureDevOpsPullRequestGenerator` create PRs from remediation plans
- **Run API endpoints**: `RunEndpoints` (`src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/Endpoints/RunEndpoints.cs`) exposes run data
- **Advisory output persistence**: `AdvisoryOutputStore` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Outputs/AdvisoryOutputStore.cs`), `FileSystemAdvisoryOutputStore` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Hosting/FileSystemAdvisoryOutputStore.cs`)
## What's Missing
- **Playbook learning engine**: No dedicated module that analyzes past run outcomes to learn optimal remediation patterns and build reusable playbooks
- **Versioned playbook management**: No playbook versioning, publishing, or catalog system for sharing learned remediation workflows
- **Patch proposal generation**: No automated system that generates patch proposals by combining learned patterns from successful past remediations
- **Feedback loop learning**: No mechanism to feed PR merge/reject outcomes back into the learning engine to improve future recommendations
- **Playbook template library**: No library of reusable playbook templates (e.g., "upgrade-npm-dependency", "patch-container-base-image") with parameterization
## Implementation Plan
- Build a playbook learning engine that analyzes successful `Run` outcomes from `RunService`/`InMemoryRunStore`
- Add versioned playbook model with CRUD operations and a catalog API
- Implement patch proposal generation by matching current vulnerabilities against learned playbook patterns
- Add feedback loop from SCM connectors (PR merge/reject events) back to the learning engine
- Create a playbook template library with parameterized remediation workflows
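As a sketch of the missing feedback loop, merge/reject outcomes from the SCM connectors could be aggregated into per-pattern success rates that future patch proposals rank by. Types and names here are hypothetical, not existing `Run`/`RunService` shapes.

```typescript
// Hypothetical feedback-loop aggregation: count PR merge/reject outcomes per
// playbook pattern so proposals can be ranked by historical success rate.
interface RunOutcome {
  pattern: string;  // e.g. "upgrade-npm-dependency" (illustrative)
  merged: boolean;  // true if the generated PR was merged
}

function successRates(outcomes: RunOutcome[]): Map<string, number> {
  const tally = new Map<string, { ok: number; total: number }>();
  for (const o of outcomes) {
    const t = tally.get(o.pattern) ?? { ok: 0, total: 0 };
    t.total += 1;
    if (o.merged) t.ok += 1;
    tally.set(o.pattern, t);
  }
  const rates = new Map<string, number>();
  for (const [pattern, t] of tally) rates.set(pattern, t.ok / t.total);
  return rates;
}
```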
## Related Documentation
- Source: Feature matrix scan


@@ -0,0 +1,23 @@
# Proof-Market Ledger and Adaptive Trust Economics
## Status
NOT_FOUND
## Description
No implementation of a proof marketplace or adaptive trust economics model was found in the source code.
## Why Not Implemented
- No proof marketplace, trust economics model, or adaptive trust ledger found anywhere in `src/`
- No `ProofMarket`, `TrustEconomics`, or `TrustLedger` modules, namespaces, or classes exist
- The proof chain system (`src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/`) provides cryptographic proof generation and verification but not a marketplace or economic model
- The trust verdict system tracks trust levels but not economics
- This appears to be a research/vision concept with no implementation started
- Likely deferred indefinitely as it requires novel research into trust economics
## Source
- Feature matrix scan
## Notes
- Module: Uncategorized
- Modules referenced: N/A
- This is a forward-looking research concept, not a near-term engineering deliverable


@@ -0,0 +1,37 @@
# Runtime trace merge (eBPF/ETW observed edges)
## Module
Signals
## Status
PARTIALLY_IMPLEMENTED
## Description
Runtime facts ingestion and provenance normalization exist, but full eBPF/ETW trace integration appears to be at the synthetic probe level rather than production-grade runtime tracing.
## What's Implemented
- **Modules**: `src/Signals/StellaOps.Signals/Services/`, `src/Signals/StellaOps.Signals.RuntimeAgent/`, `src/Signals/__Libraries/StellaOps.Signals.Ebpf/`
- **Key Classes**:
- `RuntimeFactsIngestService` (`src/Signals/StellaOps.Signals.RuntimeAgent/RuntimeFactsIngestService.cs`) - ingests runtime facts from agents
- `SyntheticRuntimeProbeBuilder` (`src/Signals/StellaOps.Signals/Services/SyntheticRuntimeProbeBuilder.cs`) - builds synthetic runtime probes for testing reachability
- `ProcSnapshotDocument` (`src/Signals/StellaOps.Signals/Models/ProcSnapshotDocument.cs`) - process snapshot model for runtime state capture
- `ReachabilityLattice` (`src/Signals/StellaOps.Signals/Lattice/ReachabilityLattice.cs`) - merge logic for combining static and runtime evidence
- `RuntimeSignalCollector` (`src/Signals/__Libraries/StellaOps.Signals.Ebpf/Services/RuntimeSignalCollector.cs`) - eBPF-based runtime signal collection (experimental)
- **Source**: Feature matrix scan
## What's Missing
- Production-grade eBPF trace merging with static callgraph edges
- ETW (Event Tracing for Windows) trace collection and merge
- Conflict resolution when runtime traces contradict static analysis
- Runtime trace deduplication across multiple collection windows
- Performance profiling of trace merge under high-volume runtime data
## Implementation Plan
- Implement runtime-to-static edge merge algorithm with conflict resolution strategies
- Add ETW trace collection agent for Windows container environments
- Implement trace deduplication with temporal windowing
- Benchmark merge performance and optimize for high-throughput runtime streams
- Add integration tests for eBPF and ETW trace merge scenarios
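The runtime-to-static edge merge at the heart of the plan above could look like the following sketch. The types are illustrative and this is not the `ReachabilityLattice` implementation; the key idea is that a runtime observation of a statically-known edge upgrades its provenance, while a runtime-only edge is added as evidence the static analysis missed.

```typescript
// Illustrative merge of statically-derived and runtime-observed call edges.
type Origin = "static" | "runtime" | "both";

interface Edge {
  from: string;   // caller symbol
  to: string;     // callee symbol
  origin: Origin;
}

function mergeEdges(staticEdges: Edge[], runtimeEdges: Edge[]): Edge[] {
  const key = (e: Edge) => `${e.from}->${e.to}`;
  const merged = new Map<string, Edge>();
  for (const e of staticEdges) merged.set(key(e), { ...e, origin: "static" });
  for (const e of runtimeEdges) {
    const k = key(e);
    const existing = merged.get(k);
    // Confirmed by both analyses -> "both"; seen only at runtime -> new edge.
    merged.set(k, existing ? { ...existing, origin: "both" } : { ...e, origin: "runtime" });
  }
  return [...merged.values()];
}
```

Keying on the (caller, callee) pair also gives deduplication across collection windows for free; conflict resolution (e.g. a runtime edge that static analysis claims is impossible) would need an additional policy layer on top of this.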
## Related Documentation
- Source: See feature catalog


@@ -0,0 +1,54 @@
# Scanner Deterministic Regression Test Framework
## Module
Scanner
## Status
PARTIALLY_IMPLEMENTED
## Description
A structured regression test framework with standardized case layout, golden fixture comparison, and dedicated CI job. Each regression case is identified by `SCN-XXXX-slug`, contains frozen inputs and expected outputs, and uses byte-level comparison to detect scanner output drift.
## What's Implemented
- **Existing Determinism Tests**:
- `src/Scanner/__Tests/StellaOps.Scanner.SmartDiff.Tests/` - Golden fixture tests for SmartDiff comparing actual vs. expected SBOM deltas with frozen inputs
- `src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Node.Tests/` - Deterministic language analyzer tests with frozen package.json/lock files
- `src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Ruby.Tests/` - Deterministic Ruby analyzer tests with frozen Gemfile.lock fixtures
- `src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Java.Tests/` - Deterministic Java analyzer tests with frozen pom.xml/build.gradle fixtures
- **Reachability Tests**:
- `src/Scanner/__Tests/StellaOps.Scanner.Reachability.Tests/` - Reachability analysis tests with frozen call-graph fixtures and expected classification outputs
- **Test Infrastructure**:
- Existing test projects demonstrate the golden fixture pattern (frozen input -> run analyzer -> compare against expected output) but each project uses its own ad-hoc fixture layout
## What's Missing
- **Standardized Case Layout**: No `Regression/` directory with `SCN-XXXX-slug/` subdirectories containing:
- `case.metadata.json` (case ID, description, scanner version that introduced the regression, severity)
- `case.md` (human-readable regression description with root cause analysis)
- `input/` (frozen input fixtures: container layers, SBOMs, lock files)
- `expected/` (expected output fixtures: SBOMs, reachability results, verdict payloads)
- **Regression Test Runner**: No unified test runner that discovers all `SCN-XXXX-slug/` cases, runs each through the scanner pipeline, and performs byte-level output comparison
- **Dedicated CI Job**: No `scanner-regression` CI job that runs regression tests separately from unit tests with clear pass/fail reporting per case
- **Regression Case Generator**: No tooling to capture a failing scanner scenario and automatically generate a new `SCN-XXXX-slug/` case from it
- **Drift Detection**: No tooling to detect when scanner output changes (intentionally or unintentionally) and prompt for expected-output updates with review
## Implementation Plan
1. Create `src/Scanner/__Tests/StellaOps.Scanner.Regression.Tests/` project with case discovery infrastructure
2. Define `case.metadata.json` schema with fields: caseId, slug, description, introducedInVersion, severity, tags
3. Create initial regression cases from existing golden fixture tests (migrate 5-10 representative cases)
4. Implement `RegressionTestRunner` that discovers cases, runs scanner pipeline on inputs, compares outputs byte-by-byte
5. Add `case-capture` CLI tool that takes a scanner invocation and generates a new case directory with frozen inputs and current outputs
6. Add `scanner-regression` CI job in `.gitea/workflows/` that runs regression tests and reports per-case pass/fail
7. Add drift detection that generates a diff report when expected output changes
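A hypothetical `case.metadata.json` instance following the fields proposed in step 2 (all values are illustrative, not a real regression case):

```json
{
  "caseId": "SCN-0001",
  "slug": "alpine-apk-version-drift",
  "description": "APK package versions drifted between scanner releases for pinned Alpine layers",
  "introducedInVersion": "0.0.0-example",
  "severity": "high",
  "tags": ["apk", "sbom", "determinism"]
}
```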
## E2E Test Plan
- [ ] Run the regression test runner and verify all `SCN-XXXX-slug/` cases produce output that byte-matches their `expected/` fixtures
- [ ] Add a new regression case using the case-capture tool and verify it is automatically discovered by the test runner on the next run
- [ ] Introduce an intentional scanner change that modifies output for one case and verify the regression test runner detects the drift and fails the case
- [ ] Update the expected output for the changed case and verify the test runner passes again
- [ ] Verify `case.metadata.json` is validated on test startup (missing required fields cause a clear error)
- [ ] Verify the CI job produces a per-case pass/fail report with case ID, slug, and failure diff for any failing cases
- [ ] Verify regression tests run in under 5 minutes for the initial 10-case corpus
## Related Documentation
- Source: See feature catalog
- Architecture: `docs/modules/scanner/architecture.md`


@@ -0,0 +1,31 @@
# Time-to-Evidence (TTE) Metric
## Module
Telemetry
## Status
PARTIALLY_IMPLEMENTED
## Description
Backend collection and percentile export of the TTE metric (time from a finding being opened to the first proof rendered) are implemented; frontend visualization is not.
## What's Implemented
- **TTE metrics collection**: `src/Telemetry/StellaOps.Telemetry.Core/StellaOps.Telemetry.Core/TimeToEvidenceMetrics.cs`
- **Percentile exporter**: `src/Telemetry/StellaOps.Telemetry.Core/StellaOps.Telemetry.Core/TtePercentileExporter.cs` (P50/P90/P99)
- **Scan completion integration**: `src/Telemetry/StellaOps.Telemetry.Core/StellaOps.Telemetry.Core/ScanCompletionMetricsIntegration.cs`
- **Attestation metrics**: `src/Telemetry/StellaOps.Telemetry.Core/StellaOps.Telemetry.Core/Metrics/AttestationMetrics.cs`
- **DI registration**: `src/Telemetry/StellaOps.Telemetry.Core/StellaOps.Telemetry.Core/TelemetryServiceCollectionExtensions.cs`
- **Baseline tracking**: `src/__Tests/__Benchmarks/baselines/ttfs-baseline.json`
## What's Missing
- Frontend visualization of TTE metrics (dashboard/chart in Web UI)
- TTE trend visualization over time
- Per-finding TTE breakdown in the UI
## Implementation Plan
- Add TTE dashboard widget to Web UI
- Show TTE percentile trends over time
- Include TTE metric in finding detail view
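The P50/P90/P99 series that `TtePercentileExporter` emits can be illustrated with a nearest-rank percentile computation. This is a TypeScript sketch for the proposed dashboard widget; the actual exporter is C# and its exact algorithm is not confirmed here.

```typescript
// Nearest-rank percentile over a sorted sample of TTE durations (seconds).
// Sketch only; the production TtePercentileExporter may use interpolation.
function percentile(sortedAsc: number[], p: number): number {
  if (sortedAsc.length === 0) throw new Error("empty sample");
  // Nearest-rank: the smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sortedAsc.length);
  return sortedAsc[Math.max(0, rank - 1)];
}
```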
## Source
- Feature matrix scan