# Automated Test-Suite Overview
This document enumerates every automated check executed by the Stella Ops CI pipeline, from unit level to chaos experiments. It is intended for contributors who need to extend coverage or diagnose failures.
Build parameters – values such as `{{ dotnet }}` (runtime) and `{{ angular }}` (UI framework) are injected at build time.
## Test Philosophy

### Core Principles
- **Determinism as Contract:** Scan verdicts must be reproducible. Same inputs → byte-identical outputs (see the sketch after this list).
- **Offline by Default:** Every test (except those explicitly tagged "online") runs without network access.
- **Evidence-First Validation:** Assertions verify the complete evidence chain, not just pass/fail.
- **Interop Is Required:** Compatibility with ecosystem tools (Syft, Grype, Trivy, cosign) blocks releases.
- **Coverage by Risk:** Prioritize testing high-risk paths over line-coverage metrics.
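
A minimal sketch of how the determinism contract can be asserted in xUnit follows. The fixture path and the `RunScanAsync` helper are hypothetical stand-ins, not the real scanner API; the stub hashes its input only so the example is self-contained:

```csharp
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using Xunit;

public class DeterminismTests
{
    [Fact]
    [Trait("Category", "Unit")]
    public async Task SameInputs_ProduceByteIdenticalVerdicts()
    {
        // Two scans of the same pinned input must agree byte-for-byte,
        // not merely semantically.
        byte[] first = await RunScanAsync("fixtures/alpine-3.19.sbom.json");
        byte[] second = await RunScanAsync("fixtures/alpine-3.19.sbom.json");

        Assert.Equal(first, second);
    }

    // Hypothetical stand-in for the real scanner entry point; here it just
    // hashes the fixture path so the sketch compiles and runs on its own.
    private static Task<byte[]> RunScanAsync(string fixturePath) =>
        Task.FromResult(SHA256.HashData(Encoding.UTF8.GetBytes(fixturePath)));
}
```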
### Test Boundaries
- Lattice/policy merge algorithms run in `scanner.webservice`
- Concelier/Excititor preserve per-source data (no conflict resolution)
- Tests enforce these boundaries explicitly
## Layer Map
| Layer | Tooling | Entry-point | Frequency |
|---|---|---|---|
| 1. Unit | xUnit (`dotnet test`) | `*.Tests.csproj` | per PR / push |
| 2. Property-based | FsCheck | `SbomPropertyTests`, `Canonicalization` | per PR |
| 3. Integration (API) | Testcontainers suite | `test/Api.Integration` | per PR + nightly |
| 4. Integration (DB-merge) | Testcontainers PostgreSQL + Valkey | `Concelier.Integration` | per PR |
| 5. Contract (OpenAPI) | Schema validation | `docs/api/*.yaml` | per PR |
| 6. Front-end unit | Jest | `ui/src/**/*.spec.ts` | per PR |
| 7. Front-end E2E | Playwright | `ui/e2e/**` | nightly |
| 8. Lighthouse perf / a11y | lighthouse-ci (Chrome headless) | `ui/dist/index.html` | nightly |
| 9. Load | k6 scripted scenarios | `tests/load/*.js` | nightly |
| 10. Chaos | pumba, custom harness | `tests/chaos/` | weekly |
| 11. Interop | Syft/Grype/cosign | `tests/interop/` | nightly |
| 12. Offline E2E | Network-isolated containers | `tests/offline/` | nightly |
| 13. Replay Verification | Golden corpus replay | `bench/golden-corpus/` | per PR |
| 14. Dependency scanning | Trivy fs + `dotnet list package --vulnerable` | root | per PR |
| 15. License compliance | LicenseFinder | root | per PR |
| 16. SBOM reproducibility | in-toto attestation diff | GitLab job | release tags |
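
To illustrate layer 2, a property-based check in the spirit of the `Canonicalization` suite might assert that canonicalisation is idempotent. The `Canonicalize` function below is a hypothetical stand-in, not the project's actual canonicaliser:

```csharp
using FsCheck.Xunit;

public class CanonicalizationProperties
{
    // Hypothetical canonicaliser: trims and lower-cases, for illustration only.
    private static string Canonicalize(string input) =>
        input.Trim().ToLowerInvariant();

    // Idempotence: applying the canonicaliser twice must yield the same
    // result as applying it once, for any generated input.
    [Property]
    public bool Canonicalize_IsIdempotent(string input)
    {
        var once = Canonicalize(input ?? string.Empty);
        return Canonicalize(once) == once;
    }
}
```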
## Test Categories (xUnit Traits)
[Trait("Category", "Unit")] // Fast, isolated unit tests
[Trait("Category", "Integration")] // Tests requiring infrastructure
[Trait("Category", "E2E")] // Full end-to-end workflows
[Trait("Category", "AirGap")] // Must work without network
[Trait("Category", "Interop")] // Third-party tool compatibility
[Trait("Category", "Performance")] // Performance benchmarks
[Trait("Category", "Chaos")] // Failure injection tests
[Trait("Category", "Security")] // Security-focused tests
## Quality Gates
| Metric | Budget | Gate |
|---|---|---|
| API unit coverage | ≥ 85% lines | PR merge |
| API response P95 | ≤ 120 ms | nightly alert |
| Δ-SBOM warm scan P95 (4 vCPU) | ≤ 5 s | nightly alert |
| Lighthouse performance score | ≥ 90 | nightly alert |
| Lighthouse accessibility score | ≥ 95 | nightly alert |
| k6 sustained RPS drop | < 5% vs baseline | nightly alert |
| Replay determinism | 0 byte diff | Release |
| Interop findings parity | ≥ 95% | Release |
| Offline E2E | All pass with no network | Release |
| Unknowns budget (prod) | ≤ configured limit | Release |
| Router Retry-After compliance | 100% | Nightly |
## Local Runner
```bash
# minimal run: unit + property + frontend tests
./scripts/dev-test.sh

# full stack incl. Playwright and Lighthouse
./scripts/dev-test.sh --full

# category-specific
dotnet test --filter "Category=Unit"
dotnet test --filter "Category=AirGap"
dotnet test --filter "Category=Interop"
```
The script spins up PostgreSQL/Valkey via Testcontainers and requires:
- Docker ≥ 25
- Node 20 (for Jest/Playwright)
### PostgreSQL Testcontainers
Multiple suites (Concelier connectors, Excititor worker/WebService, Scheduler)
use Testcontainers with PostgreSQL for integration tests. If you don't have
Docker available, tests can also run against a local PostgreSQL instance
listening on 127.0.0.1:5432.
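
For suites that provision their own container, a minimal Testcontainers-for-.NET fixture could look like the following (assuming the `Testcontainers.PostgreSql` package; the fixture class itself is illustrative, not taken from the repo):

```csharp
using System.Threading.Tasks;
using Testcontainers.PostgreSql;
using Xunit;

public sealed class PostgresFixture : IAsyncLifetime
{
    // Spins up a disposable PostgreSQL 16 container for the test run.
    private readonly PostgreSqlContainer _container = new PostgreSqlBuilder()
        .WithImage("postgres:16")
        .Build();

    public string ConnectionString => _container.GetConnectionString();

    public Task InitializeAsync() => _container.StartAsync();

    public Task DisposeAsync() => _container.DisposeAsync().AsTask();
}
```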
### Local PostgreSQL Helper
Some suites (Concelier WebService/Core, Exporter JSON) need a full PostgreSQL instance when you want to debug or inspect data with `psql`. A helper script is available at `tools/postgres/local-postgres.sh`:
```bash
# start a local PostgreSQL instance
tools/postgres/local-postgres.sh start

# stop / clean
tools/postgres/local-postgres.sh stop
tools/postgres/local-postgres.sh clean
```
By default the script uses Docker to run PostgreSQL 16, binds to `127.0.0.1:5432`, and creates a database called `stellaops`. The connection string is printed on start; you can export it before running `dotnet test` if a suite supports overriding its connection string (see the sketch below).
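
One pattern for such an override is to prefer an exported environment variable and fall back to the Testcontainers-provided value. The variable name `STELLAOPS_TEST_POSTGRES` is an assumption for illustration, not a documented project convention:

```csharp
using System;

static class TestDatabase
{
    // An exported override (e.g. the value printed by
    // tools/postgres/local-postgres.sh) wins over the Testcontainers default.
    // The environment variable name is hypothetical.
    public static string Resolve(string testcontainersDefault) =>
        Environment.GetEnvironmentVariable("STELLAOPS_TEST_POSTGRES")
            ?? testcontainersDefault;
}
```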
## New Test Infrastructure (Epic 5100)

### Run Manifest & Replay
Every scan captures a Run Manifest containing all inputs (artifact digests, feed versions, policy versions, PRNG seed). This enables deterministic replay:
```bash
# Replay a scan from manifest
stella replay --manifest run-manifest.json --output verdict.json

# Verify determinism
stella replay verify --manifest run-manifest.json
```
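
As a rough illustration of the inputs listed above, a manifest could be modelled as a record like this; the field names are inferred for the example and are not the actual manifest schema:

```csharp
// Illustrative shape only; the real run-manifest schema may differ.
public sealed record RunManifest(
    string ArtifactDigest,  // digest of the scanned artifact
    string FeedVersion,     // advisory feed snapshot used
    string PolicyVersion,   // policy bundle in effect
    long PrngSeed);         // seed so any randomized step replays identically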
### Evidence Index
The Evidence Index links verdicts to their supporting evidence chain:
- Verdict → SBOM digests → Attestation IDs → Tool versions
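
A sketch of that chain as a data structure, with names invented for illustration rather than taken from the real index schema:

```csharp
using System.Collections.Generic;

// Illustrative chain: each link lets a verdict be traced back to the
// evidence that produced it. Names are assumptions, not the real schema.
public sealed record EvidenceIndexEntry(
    string VerdictId,
    IReadOnlyList<string> SbomDigests,
    IReadOnlyList<string> AttestationIds,
    IReadOnlyDictionary<string, string> ToolVersions); // tool name -> version
```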
### Golden Corpus
Located at `bench/golden-corpus/`, the corpus contains 50+ test cases (see the replay sketch after this list):
- Severity levels (Critical, High, Medium, Low)
- VEX scenarios (Not Affected, Affected, Conflicting)
- Reachability cases (Reachable, Not Reachable, Inconclusive)
- Unknowns scenarios
- Scale tests (200 to 50k+ packages)
- Multi-distro (Alpine, Debian, RHEL, SUSE, Ubuntu)
- Interop fixtures (Syft-generated, Trivy-generated)
- Negative cases (malformed inputs)
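
A corpus-driven replay test could enumerate the case directories and enforce the "0 byte diff" gate from the Quality Gates table. The expected-verdict filename and the `ReplayCase` helper are hypothetical:

```csharp
using System.Collections.Generic;
using System.IO;
using Xunit;

public class GoldenCorpusReplayTests
{
    // One test case per corpus directory.
    public static IEnumerable<object[]> Cases()
    {
        foreach (var dir in Directory.EnumerateDirectories("bench/golden-corpus"))
            yield return new object[] { dir };
    }

    [Theory]
    [MemberData(nameof(Cases))]
    public void Replay_ProducesExpectedVerdict(string caseDir)
    {
        byte[] expected = File.ReadAllBytes(Path.Combine(caseDir, "expected-verdict.json"));
        byte[] actual = ReplayCase(caseDir);

        // Byte-for-byte equality: the replay determinism gate.
        Assert.Equal(expected, actual);
    }

    // Placeholder so the sketch compiles; the real harness would re-run the
    // scan from the case's run manifest (e.g. via `stella replay`).
    private static byte[] ReplayCase(string caseDir) =>
        File.ReadAllBytes(Path.Combine(caseDir, "expected-verdict.json"));
}
```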
### Offline Testing

Inherit from `NetworkIsolatedTestBase` for air-gap compliance:
[Trait("Category", "AirGap")]
public class OfflineTests : NetworkIsolatedTestBase
{
[Fact]
public async Task Test_WorksOffline()
{
// Test implementation
AssertNoNetworkCalls(); // Fails if network accessed
}
}
## Concelier OSV↔GHSA Parity Fixtures
The Concelier connector suite includes a regression test (`OsvGhsaParityRegressionTests`) that checks a curated set of GHSA identifiers against OSV responses. The fixture snapshots live in `src/Concelier/StellaOps.Concelier.PluginBinaries/StellaOps.Concelier.Connector.Osv.Tests/Fixtures/` and are kept deterministic so the parity report remains reproducible.
To refresh the fixtures when GHSA/OSV payloads change:
- Ensure outbound HTTPS access to `https://api.osv.dev` and `https://api.github.com`.
- Run `UPDATE_PARITY_FIXTURES=1 dotnet test src/Concelier/StellaOps.Concelier.PluginBinaries/StellaOps.Concelier.Connector.Osv.Tests/StellaOps.Concelier.Connector.Osv.Tests.csproj`.
- Commit the regenerated `osv-ghsa.*.json` files that the test emits (raw snapshots and canonical advisories).
The regen flow logs `[Parity]` messages and normalises `recordedAt` timestamps so the fixtures stay stable across machines (see the sketch below).
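
Normalisation of this kind typically rewrites volatile fields to a fixed sentinel before snapshots are written. A sketch using System.Text.Json — the sentinel value and method are assumptions, not the actual regen code:

```csharp
using System.Linq;
using System.Text.Json.Nodes;

static class FixtureNormalizer
{
    // Replace every "recordedAt" value with a fixed sentinel so fixtures
    // do not churn between regeneration runs on different machines.
    public static void NormalizeRecordedAt(JsonNode? node)
    {
        switch (node)
        {
            case JsonObject obj:
                foreach (var property in obj.ToList())
                {
                    if (property.Key == "recordedAt")
                        obj[property.Key] = "2000-01-01T00:00:00Z"; // hypothetical sentinel
                    else
                        NormalizeRecordedAt(property.Value);
                }
                break;
            case JsonArray array:
                foreach (var item in array)
                    NormalizeRecordedAt(item);
                break;
        }
    }
}
```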
## CI Job Layout
```mermaid
flowchart LR
    subgraph fast-path
        U[xUnit] --> P[FsCheck] --> I1[Testcontainer API]
    end
    I1 --> FE[Jest]
    FE --> E2E[Playwright]
    E2E --> Lighthouse
    subgraph release-gates
        REPLAY[Replay Verify]
        INTEROP[Interop E2E]
        OFFLINE[Offline E2E]
        BUDGET[Unknowns Gate]
    end
    Lighthouse --> INTEG2[Concelier]
    INTEG2 --> LOAD[k6]
    LOAD --> CHAOS[Chaos Suite]
    CHAOS --> RELEASE[Attestation diff]
    RELEASE --> release-gates
```
## Adding a New Test Layer
- Extend `scripts/dev-test.sh` so local contributors get the layer by default.
- Add a dedicated workflow in `.gitea/workflows/` (or a GitLab job in `.gitlab-ci.yml`).
- Register the job in `docs/19_TEST_SUITE_OVERVIEW.md` and list its metric in `docs/metrics/README.md`.
- If the test requires network isolation, inherit from `NetworkIsolatedTestBase`.
- If the test uses the golden corpus, add cases to `bench/golden-corpus/`.
## Related Documentation
- Sprint Epic 5100 - Testing Strategy
- `tests/AGENTS.md`
- Offline Operation Guide
- Module Architecture Dossiers
Last updated 2025-12-21