Commit 1bf6bbf395 (parent 04360dff63), branch master, 2026-02-08 18:00:49 +02:00
Message: semi implemented and features implemented save checkpoint
20895 changed files with 716795 additions and 64 deletions

# Acceptance Test Packs with Guardrails
## Module
__Tests
## Status
IMPLEMENTED
## Description
Acceptance test packs with guardrail definitions exist under the test fixtures, each with expected-output validation.
## Implementation Details
- **Acceptance Test Directory**: `src/__Tests/acceptance/` -- acceptance test suite containing end-to-end scenarios with expected output validation and guardrail definitions.
- **Test Evidence Service**: `src/__Tests/__Libraries/StellaOps.Testing.Evidence/TestEvidenceService.cs` (implements `ITestEvidenceService`) -- captures test execution evidence (inputs, outputs, assertions) for acceptance validation.
- **Explainability Assertions**: `src/__Tests/__Libraries/StellaOps.Testing.Explainability/ExplainabilityAssertions.cs` -- assertion library for verifying decision explainability in acceptance tests, ensuring verdicts include human-readable rationale.
- **Explainable Decision Interface**: `src/__Tests/__Libraries/StellaOps.Testing.Explainability/IExplainableDecision.cs` -- contract for decisions that must provide explanations as part of guardrail verification.
- **Policy Regression Test Base**: `src/__Tests/__Libraries/StellaOps.Testing.Policy/PolicyRegressionTestBase.cs` -- base class for policy acceptance tests with guardrail enforcement.
- **Policy Diff Engine**: `src/__Tests/__Libraries/StellaOps.Testing.Policy/PolicyDiffEngine.cs` -- diffs policy evaluation outcomes between test runs to detect regressions.
- **Tests**: `src/__Tests/__Libraries/StellaOps.Testing.Evidence.Tests/TestEvidenceServiceTests.cs`
## E2E Test Plan
- [ ] Run the acceptance test suite and verify all test packs pass with expected outputs matching guardrail definitions
- [ ] Verify explainability guardrail: run an acceptance test that produces a verdict and confirm the decision includes a human-readable explanation via `ExplainabilityAssertions`
- [ ] Verify regression detection: modify a policy rule, re-run acceptance tests, and confirm `PolicyDiffEngine` detects the outcome change
- [ ] Verify evidence capture: run an acceptance test and confirm `TestEvidenceService` captures the full input/output evidence for audit review
- [ ] Verify guardrail enforcement: introduce a test that violates a guardrail (e.g., missing explanation) and confirm the test fails with a descriptive error
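The regression-detection step above can be sketched language-agnostically. The actual engine is `PolicyDiffEngine.cs`; this Python sketch only illustrates the idea of diffing per-finding verdicts between two runs, and the CVE identifiers and verdict values are hypothetical:

```python
def diff_policy_outcomes(baseline: dict, current: dict) -> dict:
    """Compare per-finding verdicts between two policy evaluation runs."""
    changed = {
        finding: (baseline[finding], current[finding])
        for finding in baseline.keys() & current.keys()
        if baseline[finding] != current[finding]
    }
    return {
        "changed": changed,                                # same finding, different verdict
        "added": sorted(current.keys() - baseline.keys()),  # findings new in this run
        "removed": sorted(baseline.keys() - current.keys()),
    }

baseline = {"CVE-2024-0001": "block", "CVE-2024-0002": "warn"}
current = {"CVE-2024-0001": "allow", "CVE-2024-0003": "warn"}
diff = diff_policy_outcomes(baseline, current)
```

A non-empty `changed` set after a policy rule edit is exactly the signal the E2E plan expects the diff engine to raise.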

# Air-Gap (No-Egress) Test Enforcement
## Module
__Tests
## Status
IMPLEMENTED
## Description
Network-isolated test base classes and Docker container builders that enforce no-egress in CI, with dedicated offline E2E tests.
## Implementation Details
- **Network Isolated Test Base**: `src/__Tests/__Libraries/StellaOps.Testing.AirGap/NetworkIsolatedTestBase.cs` -- xUnit test base class that runs tests in a network-isolated environment, verifying no outbound network calls are made during test execution.
- **Isolated Container Builder**: `src/__Tests/__Libraries/StellaOps.Testing.AirGap/Docker/IsolatedContainerBuilder.cs` -- builds Docker containers with network isolation (no-egress) for air-gap integration tests using Testcontainers.
- **Offline E2E Tests**: `src/__Tests/offline/` -- dedicated offline end-to-end test suite that runs the full platform stack without network access.
## E2E Test Plan
- [ ] Run a test inheriting from `NetworkIsolatedTestBase` and verify it completes without making any outbound network requests
- [ ] Build an isolated container via `IsolatedContainerBuilder` and verify it has no network connectivity (e.g., DNS resolution fails, HTTP requests time out)
- [ ] Run the offline E2E test suite and verify all tests pass without network access
- [ ] Verify detection: add a test that makes an outbound HTTP call while using `NetworkIsolatedTestBase` and confirm the test fails with a network isolation violation
- [ ] Verify the isolated container runs the full platform stack (web service, database) in air-gap mode
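The detection step can be illustrated with a minimal in-process guard. The real enforcement in `NetworkIsolatedTestBase` operates at the container-network level via Testcontainers; this Python sketch (names `NoEgressGuard`/`EgressViolation` are invented for illustration) shows the same fail-on-outbound-call contract by intercepting socket connects:

```python
import socket

class EgressViolation(RuntimeError):
    """Raised when code under test attempts an outbound connection."""

class NoEgressGuard:
    """Context manager that fails the test on any socket connect attempt."""
    def __enter__(self):
        self._original_connect = socket.socket.connect
        def blocked_connect(sock, address):
            raise EgressViolation(f"outbound connection attempted to {address}")
        socket.socket.connect = blocked_connect
        return self
    def __exit__(self, *exc):
        socket.socket.connect = self._original_connect
        return False

# A test body that tries to reach the network fails deterministically
# (203.0.113.10 is a TEST-NET address and is never actually contacted):
with NoEgressGuard():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(("203.0.113.10", 443))
        leaked = True
    except EgressViolation:
        leaked = False
    finally:
        s.close()
```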

# Chaos/Failure Testing Infrastructure
## Module
__Tests
## Status
IMPLEMENTED
## Description
A chaos testing library exists for failure choreography and integration testing scenarios.
## Implementation Details
- **Failure Choreographer**: `src/__Tests/__Libraries/StellaOps.Testing.Chaos/FailureChoreographer.cs` -- orchestrates failure injection sequences across distributed services, coordinating timed failures for integration tests.
- **Failure Injector Interface**: `src/__Tests/__Libraries/StellaOps.Testing.Chaos/IFailureInjector.cs` -- contract for failure injection strategies (network partition, service crash, resource exhaustion).
- **Convergence Tracker Interface**: `src/__Tests/__Libraries/StellaOps.Testing.Chaos/IConvergenceTracker.cs` -- tracks system convergence after failure injection, verifying the system recovers to a consistent state within a timeout.
- **Chaos Models**: `src/__Tests/__Libraries/StellaOps.Testing.Chaos/Models.cs` -- data models for failure scenarios, injection points, and convergence results.
- **Tests**: `src/__Tests/__Libraries/StellaOps.Testing.Chaos.Tests/FailureChoreographerTests.cs`, `FailureInjectorTests.cs`, `ConvergenceTrackerTests.cs`
## E2E Test Plan
- [ ] Configure a `FailureChoreographer` with a network partition scenario and verify it injects the failure at the specified time and restores connectivity after the duration
- [ ] Inject a service crash via `IFailureInjector` and verify `IConvergenceTracker` detects the system has not converged within the expected timeout
- [ ] Inject a failure and verify the system eventually converges to a consistent state after the failure is removed
- [ ] Run a choreographed sequence of 3 failures (network delay, service restart, resource exhaustion) and verify each failure is applied in order with correct timing
- [ ] Verify the chaos tests are isolated and do not affect other test suites running in parallel
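The ordered, timed injection sequence that `FailureChoreographer` coordinates can be sketched as follows. This is a simplified single-process illustration, not the distributed implementation; the step names mirror the three-failure scenario in the plan above, and the `inject`/`restore` callables are stand-ins:

```python
import time
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class FailureStep:
    name: str
    inject: Callable[[], None]    # e.g. partition the network
    restore: Callable[[], None]   # e.g. heal the partition
    duration_s: float

@dataclass
class FailureChoreographer:
    steps: List[FailureStep] = field(default_factory=list)
    log: List[Tuple[str, str]] = field(default_factory=list)

    def add(self, step: FailureStep) -> None:
        self.steps.append(step)

    def run(self) -> None:
        """Apply each failure in order, hold it for its duration, then restore."""
        for step in self.steps:
            step.inject()
            self.log.append(("inject", step.name))
            time.sleep(step.duration_s)
            step.restore()
            self.log.append(("restore", step.name))

choreo = FailureChoreographer()
for name in ("network-delay", "service-restart", "resource-exhaustion"):
    choreo.add(FailureStep(name, inject=lambda: None, restore=lambda: None,
                           duration_s=0.01))
choreo.run()
```

The log gives the convergence tracker a ground truth for when each failure was active.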

# Determinism Property-Based Testing
## Module
__Tests
## Status
IMPLEMENTED
## Description
Comprehensive determinism property-based tests covering unicode normalization, SBOM/VEX ordering, floating-point stability, digest computation, and canonical JSON to ensure reproducible verdicts.
## Implementation Details
- **Unicode Normalization Properties**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism.Properties/UnicodeNormalizationDeterminismProperties.cs` -- FsCheck property-based tests verifying Unicode normalization produces identical output for equivalent Unicode representations.
- **SBOM/VEX Ordering Properties**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism.Properties/SbomVexOrderingDeterminismProperties.cs` -- verifies SBOM and VEX document processing produces identical results regardless of element ordering.
- **Floating-Point Stability Properties**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism.Properties/FloatingPointStabilityProperties.cs` -- verifies floating-point computations (scores, percentages) produce identical results across platforms and evaluation orders.
- **Digest Computation Properties**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism.Properties/DigestComputationDeterminismProperties.cs` -- verifies SHA-256 digest computations are deterministic for identical inputs.
- **Canonical JSON Properties**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism.Properties/CanonicalJsonDeterminismProperties.cs` -- verifies RFC 8785 canonical JSON serialization produces identical byte output for semantically equivalent JSON documents.
- **JSON Object Arbitraries**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism.Properties/JsonObjectArbitraries.cs` -- FsCheck arbitrary generators for producing random JSON structures for property-based testing.
- **Determinism Gate**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism/Determinism/DeterminismGate.cs` -- CI gate that fails the build if determinism properties are violated.
- **Determinism Manifest**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism/Determinism/DeterminismManifest.cs` -- captures determinism verification results as a test artifact.
- **Determinism Baseline Store**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism/Determinism/DeterminismBaselineStore.cs` -- stores determinism baselines for comparison across test runs.
## E2E Test Plan
- [ ] Run the Unicode normalization property tests with 1000 randomly generated Unicode strings and verify all produce identical normalized output
- [ ] Run the SBOM ordering property tests with randomly shuffled SBOM components and verify the digest is identical regardless of input order
- [ ] Run the floating-point stability properties and verify score computations produce identical results when operands are reordered
- [ ] Run the canonical JSON properties with randomly generated JSON objects and verify RFC 8785 canonicalization produces identical output for equivalent inputs
- [ ] Verify the determinism gate: introduce a non-deterministic computation and confirm the gate blocks the build
- [ ] Verify determinism manifest: run the full property suite and confirm the manifest captures all property results with pass/fail status
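The core property the canonical-JSON and digest tests assert can be shown in a few lines. Note this Python sketch uses `sort_keys` as an approximation of RFC 8785 key ordering (full JCS also normalizes number and string encodings, which the C# FsCheck properties cover); the sample document fields are illustrative:

```python
import hashlib
import json
import random

def canonical_digest(obj) -> str:
    """SHA-256 over a canonically serialized JSON document."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"),
                           ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def shuffled_copy(obj: dict) -> dict:
    """Same content, randomized key insertion order."""
    keys = list(obj)
    random.shuffle(keys)
    return {k: obj[k] for k in keys}

doc = {
    "components": ["pkg:npm/a@1.0.0", "pkg:npm/b@2.0.0"],
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
}
# Property: 100 randomly ordered representations collapse to one digest.
digests = {canonical_digest(shuffled_copy(doc)) for _ in range(100)}
```

A determinism gate fails the build as soon as this set contains more than one digest.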

# Deterministic Run Manifest (Replay Key)
## Module
__Tests
## Status
IMPLEMENTED
## Description
Run manifest as a first-class test artifact capturing all inputs (artifact digests, feed snapshots, policy versions, tool versions) needed for byte-identical verdict replay.
## Implementation Details
- **Run Manifest Model**: `src/__Tests/__Libraries/StellaOps.Testing.Manifests/Models/RunManifest.cs` -- data model capturing all inputs needed for replay: artifact digests, feed snapshot versions, policy rule versions, tool versions, and environment metadata.
- **Manifest Capture Service**: `src/__Tests/__Libraries/StellaOps.Testing.Manifests/Services/ManifestCaptureService.cs` -- captures runtime state during test execution and serializes it into a `RunManifest` for replay.
- **Run Manifest Serializer**: `src/__Tests/__Libraries/StellaOps.Testing.Manifests/Serialization/RunManifestSerializer.cs` -- canonical serializer for run manifests ensuring deterministic byte output.
- **Run Manifest Validator**: `src/__Tests/__Libraries/StellaOps.Testing.Manifests/Validation/RunManifestValidator.cs` -- validates manifest completeness and integrity before replay.
- **Schema Loader**: `src/__Tests/__Libraries/StellaOps.Testing.Manifests/Validation/SchemaLoader.cs` -- loads JSON schema for manifest validation.
- **Test Run Attestation Generator**: `src/__Tests/__Libraries/StellaOps.Testing.Manifests/Attestation/TestRunAttestationGenerator.cs` -- generates DSSE attestations for test runs, binding the run manifest to a cryptographic signature.
- **Test Run Evidence**: `src/__Tests/__Libraries/StellaOps.Testing.Manifests/Attestation/TestRunEvidence.cs` -- evidence model for attested test runs.
- **Test Run Attestation Models**: `src/__Tests/__Libraries/StellaOps.Testing.Manifests/Attestation/TestRunAttestationModels.cs` -- DTOs for test run attestation metadata.
## E2E Test Plan
- [ ] Capture a run manifest during a verdict execution via `ManifestCaptureService` and verify it includes all required fields (artifact digests, feed versions, policy versions, tool versions)
- [ ] Serialize the manifest via `RunManifestSerializer` and verify the output is deterministic (same manifest produces identical bytes on re-serialization)
- [ ] Validate the manifest via `RunManifestValidator` and verify it passes schema validation
- [ ] Use the captured manifest to replay the verdict and verify the replayed output is byte-identical to the original
- [ ] Generate a test run attestation via `TestRunAttestationGenerator` and verify the DSSE envelope contains the manifest digest and a valid signature
- [ ] Verify incomplete manifest detection: remove a required field from the manifest and confirm `RunManifestValidator` rejects it with a descriptive error

# Expanded Reachability Benchmark Fixtures
## Module
__Tests
## Status
IMPLEMENTED
## Description
Expanded benchmark corpus with real CVE cases (WordPress, Rust/Axum, runc, Redis) and cross-platform test runners.
## Implementation Details
- **Reachability Test Corpus**: `src/__Tests/reachability/` -- multi-language reachability test corpus with labeled samples for PHP (WordPress), Rust (Axum), Go (runc), C (Redis), and other ecosystems.
- **Benchmark Datasets**: `src/__Tests/__Datasets/` -- ground-truth datasets for reachability benchmarks with labeled reachable/unreachable code paths.
- **Scanner Analyzers Benchmark**: `src/Bench/StellaOps.Bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/` -- benchmark runner that executes scanner analyzers against the reachability corpus and measures precision/recall.
- **Baseline Loader**: `src/Bench/StellaOps.Bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/Baseline/BaselineLoader.cs` -- loads ground-truth baseline data for benchmark comparison.
## E2E Test Plan
- [ ] Run the reachability benchmark against the WordPress (PHP) corpus and verify precision and recall metrics are computed against the ground truth labels
- [ ] Run the benchmark against the Rust/Axum corpus and verify cross-language reachability analysis produces correct results
- [ ] Run the benchmark against the runc (Go) corpus and verify native code reachability paths are correctly identified
- [ ] Run the benchmark against the Redis (C) corpus and verify native memory access patterns are correctly analyzed
- [ ] Verify cross-platform compatibility: run the benchmark on both Linux and Windows and confirm results are identical
- [ ] Verify new fixture addition: add a new labeled sample to the corpus and confirm the benchmark runner includes it in the next evaluation
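The precision/recall scoring the benchmark runner performs reduces to counting agreement with the ground-truth labels. A minimal sketch (the real harness is the C# `StellaOps.Bench.ScannerAnalyzers`; the CVE keys here are placeholders):

```python
def score_reachability(ground_truth: dict, predictions: dict) -> dict:
    """Both dicts map a finding id to True (reachable) / False (unreachable)."""
    tp = sum(1 for k, v in predictions.items() if v and ground_truth.get(k))
    fp = sum(1 for k, v in predictions.items() if v and not ground_truth.get(k))
    fn = sum(1 for k, v in ground_truth.items() if v and not predictions.get(k))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

truth = {"CVE-A": True, "CVE-B": False, "CVE-C": True, "CVE-D": True}
preds = {"CVE-A": True, "CVE-B": True, "CVE-C": True, "CVE-D": False}
metrics = score_reachability(truth, preds)
```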

# Golden Corpus (Pinned Test Fixtures)
## Module
__Tests
## Status
IMPLEMENTED
## Description
Versioned golden corpus with curated artifacts including container images, SBOMs, VEX examples, vulnerability feed snapshots, expected verdicts, and golden backport fixtures.
## Implementation Details
- **Test Fixtures**: `src/__Tests/fixtures/` -- pinned test fixture directory containing curated SBOMs (CycloneDX, SPDX), VEX documents, vulnerability feed snapshots, and expected verdict baselines.
- **Golden Datasets**: `src/__Tests/__Datasets/` -- ground-truth datasets with labeled vulnerability data for deterministic testing.
- **Determinism Baseline Store**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism/Determinism/DeterminismBaselineStore.cs` -- stores and retrieves golden baseline hashes for verdict determinism comparison.
- **Determinism Manifest Writer**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism/Determinism/DeterminismManifestWriter.cs` -- writes golden manifest snapshots capturing expected outputs.
- **Determinism Manifest Reader**: `src/__Tests/__Libraries/StellaOps.Testing.Determinism/Determinism/DeterminismManifestReader.cs` -- reads golden manifest snapshots for comparison during replay.
- **Golden Pairs Tools**: `src/Tools/GoldenPairs/` -- tooling for managing golden pair fixtures (see golden-pairs-mirror-and-diff-pipeline feature).
## E2E Test Plan
- [ ] Load a golden SBOM fixture and run the analysis pipeline; verify the verdict matches the expected output stored in the golden corpus
- [ ] Load a golden VEX fixture and verify the VEX processing produces the expected status for each vulnerability
- [ ] Verify determinism baseline: compute the verdict for a golden fixture, compare against the stored baseline hash, and confirm they match
- [ ] Add a new golden fixture with an expected verdict, run the test suite, and confirm the new fixture is included in the test pass
- [ ] Modify a golden fixture's expected verdict and verify the test suite detects the mismatch and reports which fixture failed
- [ ] Verify golden manifest round-trip: write a manifest via `DeterminismManifestWriter`, read it back via `DeterminismManifestReader`, and confirm identical content
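The baseline-comparison step can be sketched as hashing the computed verdict and checking it against the stored golden hash, reporting which fixture drifted. This mirrors what `DeterminismBaselineStore` is described as doing; the fixture name and verdict fields are invented for illustration:

```python
import hashlib
import json

def verdict_hash(verdict: dict) -> str:
    """Digest of the canonically serialized verdict."""
    canonical = json.dumps(verdict, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def check_against_baseline(fixture: str, verdict: dict, baselines: dict):
    """Return None on match, or a failure message naming the fixture."""
    expected, actual = baselines[fixture], verdict_hash(verdict)
    if actual != expected:
        return (f"{fixture}: baseline mismatch "
                f"(expected {expected[:12]}, got {actual[:12]})")
    return None

verdict = {"cve": "CVE-2024-1234", "status": "affected"}
baselines = {"sbom-cyclonedx-minimal": verdict_hash(verdict)}

ok = check_against_baseline("sbom-cyclonedx-minimal", verdict, baselines)
drifted = check_against_baseline(
    "sbom-cyclonedx-minimal", {**verdict, "status": "not_affected"}, baselines)
```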

# Ground-Truth Reachability Test Corpus
## Module
__Tests
## Status
IMPLEMENTED
## Description
Multi-language ground-truth corpus exists with schema, manifest, labeled samples (PHP, JS, C#), and reproduction scripts for benchmarking scanner accuracy.
## Implementation Details
- **Reachability Corpus**: `src/__Tests/reachability/` -- directory containing multi-language ground-truth samples with labeled reachable and unreachable code paths for PHP, JavaScript, C#, and other languages.
- **Benchmark Datasets**: `src/__Tests/__Datasets/` -- structured datasets with schema definitions for ground-truth data including entry points, call chains, and reachability labels.
- **Benchmark Infrastructure**: `src/__Tests/__Benchmarks/reachability-benchmark/` -- benchmark suite infrastructure with submission format, evaluation scripts, and schema definitions.
- **Scanner Analyzers Benchmark**: `src/Bench/StellaOps.Bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/` -- benchmark harness that evaluates scanner analysis against ground-truth labels and computes precision/recall/F1 metrics.
## E2E Test Plan
- [ ] Load the PHP ground-truth corpus and run the scanner analyzer; verify the precision and recall scores are computed correctly against the labels
- [ ] Load the JavaScript corpus and verify cross-language reachability analysis correctly identifies reachable and dead code paths
- [ ] Load the C# corpus and verify .NET-specific call chain analysis correctly handles virtual dispatch and interface implementations
- [ ] Verify corpus schema: validate all ground-truth files against the schema definition and confirm they are well-formed
- [ ] Verify reproduction: run the reproduction scripts for a specific labeled sample and confirm the scanner produces the expected reachability result
- [ ] Add a new labeled sample to the corpus and verify the benchmark harness includes it in the next evaluation run
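The corpus-schema check in the plan amounts to validating each labeled sample against required fields and an allowed label vocabulary. A hedged sketch — the field names below are assumptions, not the actual schema shipped under `src/__Tests/__Datasets/`:

```python
# Hypothetical ground-truth sample schema for illustration only.
REQUIRED_KEYS = {"id", "language", "entry_point", "call_chain", "label"}
VALID_LABELS = {"reachable", "unreachable"}

def validate_sample(sample: dict) -> list:
    """Return a list of validation errors; empty means well-formed."""
    errors = [f"missing field: {k}" for k in sorted(REQUIRED_KEYS - sample.keys())]
    label = sample.get("label")
    if label is not None and label not in VALID_LABELS:
        errors.append(f"invalid label: {label!r}")
    return errors

good = {"id": "php-001", "language": "php", "entry_point": "index.php:main",
        "call_chain": ["main", "render"], "label": "reachable"}
bad = {"id": "php-002", "language": "php", "label": "maybe"}

good_errors = validate_sample(good)
bad_errors = validate_sample(bad)
```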

# Multi-Runtime Reachability Corpus (Go, .NET, Python, Rust)
## Status
IMPLEMENTED
## Description
Multi-runtime reachability validation corpus with minimal apps per runtime, EXPECT.yaml ground truth, and runtime trace capture scripts.
## Correction (Previously Marked NOT_FOUND)
**FINDING: The multi-runtime reachability corpus IS implemented.** Two corpus locations exist:
1. `src/__Tests/reachability/corpus/` -- primary corpus with `dotnet/`, `go/`, `java/`, `python/`, `rust/` directories
2. `src/tests/reachability/corpus/` -- secondary corpus with `dotnet/`, `go/`, `python/`, `rust/` (includes OpenVEX scenario files per runtime)
Supporting infrastructure:
- `src/__Tests/reachability/scripts/update_corpus_manifest.py` -- corpus management
- `src/__Tests/reachability/scripts/README.md` -- documentation
- `src/__Tests/reachability/runners/` -- test runners
- `src/__Tests/reachability/samples-public/` -- public samples with runners, schema, and scripts
- `src/__Tests/reachability/StellaOps.Reachability.FixtureTests/` -- fixture-driven tests
- `src/__Tests/reachability/StellaOps.Signals.Reachability.Tests/` -- signals reachability tests
- Integration tests: `src/__Tests/Integration/StellaOps.Integration.Reachability/`
Specific CVE test cases exist per runtime (e.g., `dotnet-kestrel-CVE-2023-44487-http2-rapid-reset`, `go-ssh-CVE-2020-9283-keyexchange`, `python-django-CVE-2019-19844-sqli-like`).
## Implementation Details
- Primary corpus: `src/__Tests/reachability/corpus/` (5 runtimes)
- VEX corpus: `src/tests/reachability/corpus/` (4 runtimes with OpenVEX files)
- Management scripts: `src/__Tests/reachability/scripts/`
- Fixture tests: `src/__Tests/reachability/StellaOps.Reachability.FixtureTests/`
- Integration tests: `src/__Tests/Integration/StellaOps.Integration.Reachability/`
## E2E Test Plan
- [ ] Run fixture tests across all runtimes
- [ ] Verify corpus manifest is up to date
- [ ] Validate OpenVEX scenario files produce correct verdicts
## Source
- Feature matrix scan
## Notes
- Module: __Tests
- Modules referenced: `src/__Tests/reachability/`, `src/tests/reachability/`
- **Status should be reclassified from NOT_FOUND to IMPLEMENTED**

# Public Reachability Benchmark Dataset
## Module
__Tests
## Status
IMPLEMENTED
## Description
Complete reachability benchmark dataset with JSON/YAML schemas for ground truth, traces, submissions, cases, coverage, and entrypoints. Includes website, submission guide, and legal notices (LICENSE/NOTICE).
## Implementation Details
- **Benchmark Dataset**: `src/__Tests/__Benchmarks/reachability-benchmark/` -- complete public benchmark dataset including JSON/YAML schemas for ground truth, trace data, submission formats, test cases, coverage metrics, and entry point definitions.
- **Benchmark Harness**: `src/Bench/StellaOps.Bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/` -- evaluation harness that scores submissions against the ground truth.
- **Baseline Infrastructure**: `src/Bench/StellaOps.Bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/Baseline/BaselineEntry.cs`, `BaselineLoader.cs` -- loads ground-truth baselines for benchmark evaluation.
- **Reporting**: `src/Bench/StellaOps.Bench/Scanner.Analyzers/StellaOps.Bench.ScannerAnalyzers/Reporting/BenchmarkScenarioReport.cs` -- produces detailed benchmark reports with precision, recall, and F1 scores per category.
## E2E Test Plan
- [ ] Validate all JSON schemas in the benchmark dataset and verify they are well-formed and internally consistent
- [ ] Submit a scanner's reachability results in the submission format and verify the evaluation harness produces a valid score report
- [ ] Verify the ground-truth data covers all declared entry points and traces
- [ ] Verify coverage metrics: submit a complete analysis and confirm the coverage report shows 100% of test cases evaluated
- [ ] Verify the dataset includes required legal notices (LICENSE, NOTICE) and the submission guide is accessible
- [ ] Load the baseline and compare a new submission against it; verify the harness correctly identifies improvements and regressions
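Comparing a new submission against the baseline, as the last step describes, reduces to a per-case correctness diff. A sketch under the assumption that scoring against ground truth has already been folded into a per-case correct/incorrect flag (case ids are placeholders; the real reporting lives in `BenchmarkScenarioReport`):

```python
def compare_to_baseline(baseline: dict, submission: dict) -> dict:
    """Each dict maps a case id to True (correct) / False (incorrect)."""
    improvements = sorted(k for k in submission
                          if submission[k] and not baseline.get(k, False))
    regressions = sorted(k for k in baseline
                         if baseline[k] and not submission.get(k, False))
    return {"improvements": improvements, "regressions": regressions}

baseline = {"case-1": True, "case-2": False, "case-3": True}
submission = {"case-1": True, "case-2": True, "case-3": False}
report = compare_to_baseline(baseline, submission)
```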

# Schema Evolution Testing
## Module
__Tests
## Status
IMPLEMENTED
## Description
Schema evolution test base for verifying database migration forward/backward compatibility in CI.
## Implementation Details
- **Schema Evolution Test Base**: `src/__Tests/__Libraries/StellaOps.Testing.SchemaEvolution/SchemaEvolutionTestBase.cs` -- abstract xUnit test base class for schema evolution tests; provides infrastructure to apply migrations forward and backward, verifying data integrity at each step.
- **Postgres Schema Evolution Test Base**: `src/__Tests/__Libraries/StellaOps.Testing.SchemaEvolution/PostgresSchemaEvolutionTestBase.cs` -- PostgreSQL-specific schema evolution test base using Testcontainers to spin up ephemeral databases for migration testing.
- **Schema Evolution Models**: `src/__Tests/__Libraries/StellaOps.Testing.SchemaEvolution/Models.cs` -- data models for schema evolution test state (migration version, schema snapshot, data integrity checks).
- **Postgres Integration Fixture**: `src/__Tests/__Libraries/StellaOps.Infrastructure.Postgres.Testing/PostgresIntegrationFixture.cs` -- Testcontainers-based PostgreSQL fixture for integration tests.
- **Migration Test Attribute**: `src/__Tests/__Libraries/StellaOps.Infrastructure.Postgres.Testing/MigrationTestAttribute.cs` -- xUnit attribute marking tests as migration tests requiring database setup.
## E2E Test Plan
- [ ] Create a `PostgresSchemaEvolutionTestBase` subclass, apply all migrations forward on an empty database, and verify the final schema matches the expected structure
- [ ] Apply a migration forward, insert test data, apply the next migration, and verify the data is preserved (forward compatibility)
- [ ] Apply all migrations forward, then roll back the last migration, and verify the data remains intact (backward compatibility)
- [ ] Verify the `MigrationTestAttribute` correctly identifies and runs migration-specific tests in the CI pipeline
- [ ] Run schema evolution tests for two different modules (e.g., Authority and Findings) in parallel on separate Testcontainers instances and verify no cross-contamination
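The forward/backward walk with a data-integrity check at each step can be sketched against an in-memory SQLite database. The real test base uses PostgreSQL via Testcontainers; the migration DDL and table names here are hypothetical:

```python
import sqlite3

# Hypothetical (forward DDL, backward DDL) pairs, oldest first.
MIGRATIONS = [
    ("CREATE TABLE findings (id TEXT PRIMARY KEY, severity TEXT)",
     "DROP TABLE findings"),
    ("CREATE TABLE verdicts (finding_id TEXT, status TEXT)",
     "DROP TABLE verdicts"),
]

def migrate(conn, current: int, target: int) -> int:
    """Move the schema between versions, applying forward or backward DDL."""
    while current < target:
        conn.execute(MIGRATIONS[current][0])
        current += 1
    while current > target:
        current -= 1
        conn.execute(MIGRATIONS[current][1])
    return current

def table_names(conn):
    rows = conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
    return sorted(r[0] for r in rows)

conn = sqlite3.connect(":memory:")
version = migrate(conn, 0, 2)          # apply all migrations forward
conn.execute("INSERT INTO findings VALUES ('CVE-1', 'high')")
version = migrate(conn, version, 1)    # roll back the last migration
# Data inserted before the rollback must survive (backward compatibility):
preserved = conn.execute(
    "SELECT severity FROM findings WHERE id = 'CVE-1'").fetchone()
```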

# Testcontainers Integration (.NET xUnit)
## Module
__Tests
## Status
IMPLEMENTED
## Description
Testcontainers used for Postgres integration fixtures, router chaos testing, and OCI registry testing with multiple container types.
## Implementation Details
- **PostgreSQL Integration Fixture**: `src/__Tests/__Libraries/StellaOps.Infrastructure.Postgres.Testing/PostgresIntegrationFixture.cs` -- Testcontainers-based PostgreSQL fixture that spins up an ephemeral Postgres container for integration tests; manages connection strings, schema migrations, and container lifecycle.
- **Migration Test Attribute**: `src/__Tests/__Libraries/StellaOps.Infrastructure.Postgres.Testing/MigrationTestAttribute.cs` -- xUnit attribute that marks tests requiring a database container, ensuring proper fixture setup and teardown.
- **OCI Distribution Registry Container**: `src/__Tests/__Libraries/StellaOps.Infrastructure.Registry.Testing/DistributionRegistryContainer.cs` -- Testcontainers wrapper for the Docker Distribution (registry:2) container used in OCI push/pull integration tests.
- **Zot Registry Container**: `src/__Tests/__Libraries/StellaOps.Infrastructure.Registry.Testing/ZotRegistryContainer.cs` -- Testcontainers wrapper for the Zot OCI registry, testing compatibility with alternative registry implementations.
- **Harbor Registry Container**: `src/__Tests/__Libraries/StellaOps.Infrastructure.Registry.Testing/HarborRegistryContainer.cs` -- Testcontainers wrapper for Harbor registry, testing enterprise registry features (replication, scanning, RBAC).
- **Schema Evolution Test Base**: `src/__Tests/__Libraries/StellaOps.Testing.SchemaEvolution/PostgresSchemaEvolutionTestBase.cs` -- abstract test base that uses Testcontainers PostgreSQL for running schema migration forward/backward compatibility tests.
- **Network Isolated Test Base**: `src/__Tests/__Libraries/StellaOps.Testing.AirGap/NetworkIsolatedTestBase.cs` -- Testcontainers-based test base that creates network-isolated containers for air-gap scenario testing.
## E2E Test Plan
- [ ] Start a `PostgresIntegrationFixture`, run a migration, insert test data, query it back, and verify the container is properly cleaned up after the test completes
- [ ] Start a `DistributionRegistryContainer`, push an OCI image, pull it back, and verify the image digest matches
- [ ] Start a `ZotRegistryContainer`, push an OCI artifact, and verify Zot-specific API compatibility (catalog, tags list)
- [ ] Start a `HarborRegistryContainer`, push an image, and verify Harbor-specific endpoints (projects, repositories) are accessible
- [ ] Run a `PostgresSchemaEvolutionTestBase` subclass, apply migrations forward and backward, and verify the Testcontainers Postgres instance is properly provisioned and torn down
- [ ] Run two Testcontainers-based tests in parallel (e.g., Postgres + OCI registry) and verify no port conflicts or container name collisions occur
- [ ] Verify `NetworkIsolatedTestBase` creates a container with no external network access by attempting an outbound HTTP request and confirming it fails
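The lifecycle contract shared by all of these fixtures — unique naming to avoid parallel collisions, and guaranteed teardown even when the test fails — can be sketched generically. This is not the Testcontainers API; `ephemeral_container` and the `STARTED`/`STOPPED` ledgers are invented stand-ins for container start and removal:

```python
import uuid
from contextlib import contextmanager

STARTED, STOPPED = [], []   # stand-ins for `docker run` / `docker rm -f`

@contextmanager
def ephemeral_container(image: str):
    """Start a uniquely named container and guarantee teardown."""
    name = f"test-{image.replace(':', '-')}-{uuid.uuid4().hex[:8]}"
    STARTED.append(name)
    try:
        yield name
    finally:
        STOPPED.append(name)    # runs even if the test body raises

# Two fixtures in one test (Postgres + OCI registry) get distinct names,
# so parallel runs cannot collide:
with ephemeral_container("postgres:16") as pg, \
     ephemeral_container("registry:2") as reg:
    distinct = pg != reg
```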