docs consolidation

This commit is contained in:
master
2026-01-07 10:23:21 +02:00
parent 4789027317
commit 044cf0923c
515 changed files with 5460 additions and 5292 deletions


@@ -366,7 +366,7 @@ histogram_quantile(0.95,
- **CI/CD Workflow**: `.gitea/workflows/cross-platform-determinism.yml`
- **Test README**: `src/__Tests/Determinism/README.md`
-- **Developer Guide**: `docs/testing/DETERMINISM_DEVELOPER_GUIDE.md`
+- **Developer Guide**: `docs/technical/testing/DETERMINISM_DEVELOPER_GUIDE.md`
- **Batch Summary**: `docs/implplan/archived/2025-12-29-completed-sprints/BATCH_20251229_BE_COMPLETION_SUMMARY.md`
## Changelog


@@ -180,7 +180,7 @@ TODO → DOING → BLOCKED/IN_REVIEW → DONE
- [ ] Pilot adoption in 2+ modules with S1 model (e.g., Scanner, Policy)
**Epic D (Connectors):**
-- [ ] Connector fixture discipline documented in `docs/testing/connector-fixture-discipline.md`
+- [ ] Connector fixture discipline documented in `docs/technical/testing/connector-fixture-discipline.md`
- [ ] FixtureUpdater tool operational (with `UPDATE_CONNECTOR_FIXTURES=1` env var guard)
- [ ] Pilot adoption in Concelier.Connector.NVD


@@ -390,11 +390,11 @@ ONGOING: QUALITY GATES (Weeks 3-14+)
### Appendix B: Reference Documents
1. **Advisory:** `docs/product-advisories/22-Dec-2026 - Better testing strategy.md`
-2. **Test Catalog:** `docs/testing/TEST_CATALOG.yml`
-3. **Test Models:** `docs/testing/testing-strategy-models.md`
-4. **Dependency Graph:** `docs/testing/SPRINT_DEPENDENCY_GRAPH.md`
-5. **Coverage Matrix:** `docs/testing/TEST_COVERAGE_MATRIX.md`
-6. **Execution Playbook:** `docs/testing/SPRINT_EXECUTION_PLAYBOOK.md`
+2. **Test Catalog:** `docs/technical/testing/TEST_CATALOG.yml`
+3. **Test Models:** `docs/technical/testing/testing-strategy-models.md`
+4. **Dependency Graph:** `docs/technical/testing/SPRINT_DEPENDENCY_GRAPH.md`
+5. **Coverage Matrix:** `docs/technical/testing/TEST_COVERAGE_MATRIX.md`
+6. **Execution Playbook:** `docs/technical/testing/SPRINT_EXECUTION_PLAYBOOK.md`
### Appendix C: Budget Estimate (Preliminary)


@@ -259,4 +259,4 @@ Weekly (Optional):
**Prepared by:** Project Management
**Date:** 2025-12-23
**Next Review:** 2026-01-06 (Week 1 kickoff)
-**Source:** `docs/testing/TEST_CATALOG.yml`, Sprint files 5100.0009.* and 5100.0010.*
+**Source:** `docs/technical/testing/TEST_CATALOG.yml`, Sprint files 5100.0009.* and 5100.0010.*


@@ -0,0 +1,265 @@
# Automated Test-Suite Overview
This document enumerates **every automated check** executed by the StellaOps
CI pipeline, from unit level to chaos experiments. It is intended for
contributors who need to extend coverage or diagnose failures.
> **Build parameters** – values such as `{{ dotnet }}` (runtime) and
> `{{ angular }}` (UI framework) are injected at build time.
---
## Test Philosophy
### Core Principles
1. **Determinism as Contract**: Scan verdicts must be reproducible. Same inputs → byte-identical outputs.
2. **Offline by Default**: Every test (except explicitly tagged "online") runs without network access.
3. **Evidence-First Validation**: Assertions verify the complete evidence chain, not just pass/fail.
4. **Interop is Required**: Compatibility with ecosystem tools (Syft, Grype, Trivy, cosign) blocks releases.
5. **Coverage by Risk**: Prioritize testing high-risk paths over line coverage metrics.
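Principle 1 ("same inputs → byte-identical outputs") can be pictured as canonical serialisation plus a byte-level comparison. The snippet below is an assumption-laden illustration, not the scanner's real serialiser:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Minimal sketch of the determinism contract. Sorting keys removes one common
// source of non-determinism; the real pipeline also pins tool versions, feed
// snapshots, and PRNG seeds (see Run Manifest below).
static byte[] CanonicalBytes(IDictionary<string, string> verdict) =>
    JsonSerializer.SerializeToUtf8Bytes(new SortedDictionary<string, string>(verdict));

static bool SameVerdict(IDictionary<string, string> a, IDictionary<string, string> b) =>
    CanonicalBytes(a).AsSpan().SequenceEqual(CanonicalBytes(b));
```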
### Test Boundaries
- **Lattice/policy merge** algorithms run in `scanner.webservice`
- **Concelier/Excititor** preserve raw per-source data (no conflict resolution)
- Tests enforce these boundaries explicitly
### Model taxonomy
See `docs/technical/testing/testing-strategy-models.md` and `docs/technical/testing/TEST_CATALOG.yml` for
the required test types per project model and the module-to-model mapping.
---
## Layer Map
| Layer | Tooling | Entry-point | Frequency |
|-------|---------|-------------|-----------|
| **1. Unit** | `xUnit` (`dotnet test`) | `*.Tests.csproj` | per PR / push |
| **2. Property-based** | `FsCheck` | `SbomPropertyTests`, `Canonicalization` | per PR |
| **3. Integration (API)** | `Testcontainers` suite | `test/Api.Integration` | per PR + nightly |
| **4. Integration (DB-merge)** | Testcontainers PostgreSQL + Valkey | `Concelier.Integration` | per PR |
| **5. Contract (OpenAPI)** | Schema validation | `docs/api/*.yaml` | per PR |
| **6. Front-end unit** | `Jest` | `ui/src/**/*.spec.ts` | per PR |
| **7. Front-end E2E** | `Playwright` | `ui/e2e/**` | nightly |
| **8. Lighthouse perf / a11y** | `lighthouse-ci` (Chrome headless) | `ui/dist/index.html` | nightly |
| **9. Load** | `k6` scripted scenarios | `tests/load/*.js` | nightly |
| **10. Chaos** | `pumba`, custom harness | `tests/chaos/` | weekly |
| **11. Interop** | Syft/Grype/cosign | `tests/interop/` | nightly |
| **12. Offline E2E** | Network-isolated containers | `tests/offline/` | nightly |
| **13. Replay Verification** | Golden corpus replay | `bench/golden-corpus/` | per PR |
| **14. Dependency scanning** | `Trivy fs` + `dotnet list package --vuln` | root | per PR |
| **15. License compliance** | `LicenseFinder` | root | per PR |
| **16. SBOM reproducibility** | `in-toto attestation` diff | GitLab job | release tags |
---
## Test Categories (xUnit Traits)
```csharp
[Trait("Category", "Unit")] // Fast, isolated unit tests
[Trait("Category", "Property")] // Property-based checks (sub-trait)
[Trait("Category", "Snapshot")] // Golden/snapshot assertions (sub-trait)
[Trait("Category", "Integration")] // Tests requiring infrastructure
[Trait("Category", "Contract")] // Schema and API contract checks
[Trait("Category", "E2E")] // Full end-to-end workflows
[Trait("Category", "AirGap")] // Must work without network
[Trait("Category", "Interop")] // Third-party tool compatibility
[Trait("Category", "Performance")] // Performance benchmarks
[Trait("Category", "Chaos")] // Failure injection tests
[Trait("Category", "Security")] // Security-focused tests
[Trait("Category", "Live")] // Opt-in upstream connector tests
```
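Traits compose, so a test class can carry a primary category plus a sub-trait. A purely illustrative example (class and method names are hypothetical):

```csharp
using Xunit;

// Hypothetical example — the class, method, and asserted artifact are illustrative only.
[Trait("Category", "Unit")]
[Trait("Category", "Snapshot")]
public class VerdictSerializerSnapshotTests
{
    [Fact]
    public void Serialize_MatchesGoldenFile()
    {
        // compare serialiser output against the committed golden snapshot here
    }
}
```

Filters then compose over the traits: `dotnet test --filter "Category=Unit&Category!=Snapshot"` runs plain unit tests while excluding anything also tagged as a snapshot check.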
---
## Quality Gates
| Metric | Budget | Gate |
|--------|--------|------|
| API unit coverage | ≥ 85% lines | PR merge |
| API response P95 | ≤ 120 ms | nightly alert |
| Δ-SBOM warm scan P95 (4 vCPU) | ≤ 5 s | nightly alert |
| Lighthouse performance score | ≥ 90 | nightly alert |
| Lighthouse accessibility score | ≥ 95 | nightly alert |
| k6 sustained RPS drop | < 5% vs baseline | nightly alert |
| **Replay determinism** | 0 byte diff | **Release** |
| **Interop findings parity** | ≥ 95% | **Release** |
| **Offline E2E** | All pass with no network | **Release** |
| **Unknowns budget (prod)** | ≤ configured limit | **Release** |
| **Router Retry-After compliance** | 100% | Nightly |
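The release-blocking replay gate ("0 byte diff") reduces to comparing the bytes of two replays of the same manifest; a minimal sketch using content digests, not the actual gate implementation:

```csharp
using System;
using System.Security.Cryptography;

// Sketch of the "0 byte diff" gate: two replays of the same Run Manifest must
// produce identical artifacts. Digest equality is equivalent to a byte diff here.
static bool ReplayIsDeterministic(byte[] firstRun, byte[] secondRun) =>
    Convert.ToHexString(SHA256.HashData(firstRun))
        == Convert.ToHexString(SHA256.HashData(secondRun));
```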
---
## Local Runner
```bash
# minimal run: unit + property + frontend tests
./scripts/dev-test.sh
# full stack incl. Playwright and lighthouse
./scripts/dev-test.sh --full
# category-specific
dotnet test --filter "Category=Unit"
dotnet test --filter "Category=AirGap"
dotnet test --filter "Category=Interop"
```
The script spins up PostgreSQL/Valkey via Testcontainers and requires:
* Docker ≥ 25
* Node 20 (for Jest/Playwright)
### PostgreSQL Testcontainers
Multiple suites (Concelier connectors, Excititor worker/WebService, Scheduler)
use Testcontainers with PostgreSQL for integration tests. If you don't have
Docker available, tests can also run against a local PostgreSQL instance
listening on `127.0.0.1:5432`.
### Local PostgreSQL Helper
Some suites (Concelier WebService/Core, Exporter JSON) need a full
PostgreSQL instance when you want to debug or inspect data with `psql`.
A helper script is available under `tools/postgres/local-postgres.sh`:
```bash
# start a local PostgreSQL instance
tools/postgres/local-postgres.sh start
# stop / clean
tools/postgres/local-postgres.sh stop
tools/postgres/local-postgres.sh clean
```
By default the script uses Docker to run PostgreSQL 16, binds to
`127.0.0.1:5432`, and creates a database called `stellaops`. The
connection string is printed on start; export it before running `dotnet test`
when a suite supports overriding its connection string.
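A suite that supports overriding its connection string typically reads it from the environment. A sketch — the variable name below is an assumption, so check the individual suite's configuration for the key it actually reads:

```csharp
using System;

// Hypothetical variable name; suites document their own configuration keys.
var connectionString =
    Environment.GetEnvironmentVariable("STELLAOPS_TEST_POSTGRES_CONNECTION")
    ?? "Host=127.0.0.1;Port=5432;Database=stellaops;Username=postgres;Password=postgres";
```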
---
## New Test Infrastructure (Epic 5100)
### Run Manifest & Replay
Every scan captures a **Run Manifest** containing all inputs (artifact digests, feed versions, policy versions, PRNG seed). This enables deterministic replay:
```bash
# Replay a scan from manifest
stella replay --manifest run-manifest.json --output verdict.json
# Verify determinism
stella replay verify --manifest run-manifest.json
```
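The manifest's shape can be pictured as a record like the following; the field names are illustrative assumptions, not the published schema:

```csharp
using System.Collections.Generic;

// Illustrative only — the real Run Manifest schema is defined by the replay tooling.
public sealed record RunManifest(
    string ArtifactDigest,                            // digest of the scanned artifact
    IReadOnlyDictionary<string, string> FeedVersions, // advisory feed snapshot versions
    string PolicyVersion,                             // policy bundle in effect
    int PrngSeed);                                    // pinned seed for reproducibility
```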
### Evidence Index
The **Evidence Index** links verdicts to their supporting evidence chain:
- Verdict → SBOM digests → Attestation IDs → Tool versions
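One way to picture an index entry mirroring that chain (a hypothetical shape, not the real schema):

```csharp
using System.Collections.Generic;

// Hypothetical shape mirroring the Verdict → SBOM → Attestation → Tool chain.
public sealed record EvidenceIndexEntry(
    string VerdictId,
    IReadOnlyList<string> SbomDigests,
    IReadOnlyList<string> AttestationIds,
    IReadOnlyDictionary<string, string> ToolVersions);
```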
### Golden Corpus
Located at `bench/golden-corpus/`, the corpus contains 50+ test cases covering:
- Severity levels (Critical, High, Medium, Low)
- VEX scenarios (Not Affected, Affected, Conflicting)
- Reachability cases (Reachable, Not Reachable, Inconclusive)
- Unknowns scenarios
- Scale tests (200 to 50k+ packages)
- Multi-distro (Alpine, Debian, RHEL, SUSE, Ubuntu)
- Interop fixtures (Syft-generated, Trivy-generated)
- Negative cases (malformed inputs)
### Offline Testing
Inherit from `NetworkIsolatedTestBase` for air-gap compliance:
```csharp
[Trait("Category", "AirGap")]
public class OfflineTests : NetworkIsolatedTestBase
{
[Fact]
public async Task Test_WorksOffline()
{
// Test implementation
AssertNoNetworkCalls(); // Fails if network accessed
}
}
```
---
## Concelier OSV↔GHSA Parity Fixtures
The Concelier connector suite includes a regression test (`OsvGhsaParityRegressionTests`)
that checks a curated set of GHSA identifiers against OSV responses. The fixture
snapshots live in `src/Concelier/StellaOps.Concelier.PluginBinaries/StellaOps.Concelier.Connector.Osv.Tests/Fixtures/` and are kept
deterministic so the parity report remains reproducible.
To refresh the fixtures when GHSA/OSV payloads change:
1. Ensure outbound HTTPS access to `https://api.osv.dev` and `https://api.github.com`.
2. Run `UPDATE_PARITY_FIXTURES=1 dotnet test src/Concelier/StellaOps.Concelier.PluginBinaries/StellaOps.Concelier.Connector.Osv.Tests/StellaOps.Concelier.Connector.Osv.Tests.csproj`.
3. Commit the regenerated `osv-ghsa.*.json` files that the test emits (raw snapshots and canonical advisories).
The regen flow logs `[Parity]` messages and normalises `recordedAt` timestamps so the
fixtures stay stable across machines.
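The `recordedAt` normalisation can be sketched as a regex rewrite to a fixed sentinel; the actual regen flow may pin a different value:

```csharp
using System.Text.RegularExpressions;

// Hedged sketch: pin every recordedAt field to one sentinel so fixtures are
// byte-stable across machines. The real flow may use a different timestamp.
static string NormaliseRecordedAt(string fixtureJson) =>
    Regex.Replace(
        fixtureJson,
        "\"recordedAt\"\\s*:\\s*\"[^\"]*\"",
        "\"recordedAt\": \"2025-01-01T00:00:00Z\"");
```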
---
## CI Job Layout
```mermaid
flowchart LR
subgraph fast-path
U[xUnit] --> P[FsCheck] --> I1[Testcontainer API]
end
I1 --> FE[Jest]
FE --> E2E[Playwright]
E2E --> Lighthouse
subgraph release-gates
REPLAY[Replay Verify]
INTEROP[Interop E2E]
OFFLINE[Offline E2E]
BUDGET[Unknowns Gate]
end
Lighthouse --> INTEG2[Concelier]
INTEG2 --> LOAD[k6]
LOAD --> CHAOS[Chaos Suite]
CHAOS --> RELEASE[Attestation diff]
RELEASE --> release-gates
```
---
## Adding a New Test Layer
1. Extend `scripts/dev-test.sh` so local contributors get the layer by default.
2. Add a dedicated workflow in `.gitea/workflows/` (or GitLab job in `.gitlab-ci.yml`).
3. Register the job in `docs/technical/testing/TEST_SUITE_OVERVIEW.md` *and* list its metric
in `docs/modules/telemetry/guides/README.md`.
4. If the test requires network isolation, inherit from `NetworkIsolatedTestBase`.
5. If the test uses golden corpus, add cases to `bench/golden-corpus/`.
---
## Related Documentation
- [Testing Strategy Models](testing/testing-strategy-models.md)
- [Test Catalog](testing/TEST_CATALOG.yml)
- [Testing README](testing/README.md) - Complete testing documentation index
- [CI/CD Test Strategy](cicd/test-strategy.md) - CI/CD integration details
- [tests/AGENTS.md](../tests/AGENTS.md)
- [Offline Operation Guide](OFFLINE_KIT.md)
- [Module Architecture Dossiers](modules/)
---
*Last updated 2025-12-23*


@@ -4,7 +4,7 @@ This document describes how to categorize tests by lane and test type for CI fil
## Test Lanes
-StellaOps uses standardized test lanes based on `docs/testing/TEST_CATALOG.yml`:
+StellaOps uses standardized test lanes based on `docs/technical/testing/TEST_CATALOG.yml`:
| Lane | Purpose | Characteristics | PR Gating |
|------|---------|-----------------|-----------|
@@ -240,6 +240,6 @@ If you have existing tests without lane attributes:
## Related Documentation
-- Test Catalog: `docs/testing/TEST_CATALOG.yml`
-- Testing Strategy: `docs/testing/testing-strategy-models.md`
+- Test Catalog: `docs/technical/testing/TEST_CATALOG.yml`
+- Testing Strategy: `docs/technical/testing/testing-strategy-models.md`
- TestKit README: `src/__Libraries/StellaOps.TestKit/README.md`


@@ -303,8 +303,8 @@ Replace per-module test execution with lane-based execution:
## Related Documentation
-- Test Lane Filters: `docs/testing/ci-lane-filters.md`
-- Testing Strategy: `docs/testing/testing-strategy-models.md`
-- Test Catalog: `docs/testing/TEST_CATALOG.yml`
+- Test Lane Filters: `docs/technical/testing/ci-lane-filters.md`
+- Testing Strategy: `docs/technical/testing/testing-strategy-models.md`
+- Test Catalog: `docs/technical/testing/TEST_CATALOG.yml`
- TestKit README: `src/__Libraries/StellaOps.TestKit/README.md`
- Example Workflow: `.gitea/workflows/test-lanes.yml`


@@ -287,5 +287,5 @@ When writing determinism tests, verify:
## Related Documentation
- TestKit README: `src/__Libraries/StellaOps.TestKit/README.md`
-- Testing Strategy: `docs/testing/testing-strategy-models.md`
-- Test Catalog: `docs/testing/TEST_CATALOG.yml`
+- Testing Strategy: `docs/technical/testing/testing-strategy-models.md`
+- Test Catalog: `docs/technical/testing/TEST_CATALOG.yml`


@@ -15,9 +15,9 @@ StellaOps validates all SBOM fixtures against official JSON schemas to detect sc
| Format | Version | Schema Location | Validator |
|--------|---------|-----------------|-----------|
-| CycloneDX | 1.6 | `docs/schemas/cyclonedx-bom-1.6.schema.json` | sbom-utility |
-| SPDX | 3.0.1 | `docs/schemas/spdx-jsonld-3.0.1.schema.json` | pyspdxtools / check-jsonschema |
-| OpenVEX | 0.2.0 | `docs/schemas/openvex-0.2.0.schema.json` | ajv-cli |
+| CycloneDX | 1.6 | `docs/modules/sbom-service/schemas/cyclonedx-bom-1.6.schema.json` | sbom-utility |
+| SPDX | 3.0.1 | `docs/modules/sbom-service/schemas/spdx-jsonld-3.0.1.schema.json` | pyspdxtools / check-jsonschema |
+| OpenVEX | 0.2.0 | `docs/modules/excititor/schemas/openvex-0.2.0.schema.json` | ajv-cli |
## CI Workflows
@@ -26,7 +26,7 @@ StellaOps validates all SBOM fixtures against official JSON schemas to detect sc
**File:** `.gitea/workflows/schema-validation.yml`
Runs on:
-- Pull requests touching `bench/golden-corpus/**`, `src/Scanner/**`, `docs/schemas/**`, or `scripts/validate-*.sh`
+- Pull requests touching `bench/golden-corpus/**`, `src/Scanner/**`, `docs/modules/**/schemas/**`, or `scripts/validate-*.sh`
- Push to `main` branch
Jobs:
@@ -85,7 +85,7 @@ curl -sSfL "https://github.com/CycloneDX/sbom-utility/releases/download/v0.16.0/
sudo mv sbom-utility /usr/local/bin/
# Validate
-sbom-utility validate --input-file sbom.json --schema docs/schemas/cyclonedx-bom-1.6.schema.json
+sbom-utility validate --input-file sbom.json --schema docs/modules/sbom-service/schemas/cyclonedx-bom-1.6.schema.json
```
## Troubleshooting
@@ -187,7 +187,7 @@ If negative tests fail with "UNEXPECTED PASS":
When updating schema versions:
-1. Download new schema to `docs/schemas/`
+1. Download new schema to the appropriate module `schemas/` directory (e.g., `docs/modules/sbom-service/schemas/`)
2. Update `SBOM_UTILITY_VERSION` in workflows if needed
3. Run full validation to check for new violations
4. Update documentation with new version


@@ -160,7 +160,7 @@ thresholds:
- `MaliciousPayloads.cs` - Common attack patterns
- `SecurityTestBase.cs` - Test infrastructure
- `.gitea/workflows/security-tests.yml` - Dedicated CI workflow
-- `docs/testing/security-testing-guide.md` - Documentation
+- `docs/technical/testing/security-testing-guide.md` - Documentation
---
@@ -182,7 +182,7 @@ thresholds:
- `scripts/ci/mutation-thresholds.yaml` - Threshold configuration
- `.gitea/workflows/mutation-testing.yml` - Weekly mutation runs
- `bench/baselines/mutation-baselines.json` - Baseline scores
-- `docs/testing/mutation-testing-guide.md` - Developer guide
+- `docs/technical/testing/mutation-testing-guide.md` - Developer guide
---
@@ -273,7 +273,7 @@ src/Scanner/__Libraries/StellaOps.Scanner.Core/stryker-config.json
src/Policy/StellaOps.Policy.Engine/stryker-config.json
src/Authority/StellaOps.Authority.Core/stryker-config.json
-docs/testing/
+docs/technical/testing/
├── ci-quality-gates.md
├── security-testing-guide.md
└── mutation-testing-guide.md


@@ -10,7 +10,7 @@ Supersedes/extends: `docs/product-advisories/archived/2025-12-21-testing-strateg
## Strategy in brief
- Use test models (L0, S1, C1, W1, WK1, T1, AN1, CLI1, PERF) to encode required test types.
-- Map every module to one or more models in `docs/testing/TEST_CATALOG.yml`.
+- Map every module to one or more models in `docs/technical/testing/TEST_CATALOG.yml`.
- Run tests through standardized CI lanes (Unit, Contract, Integration, Security, Performance, Live).
## Test models (requirements)
@@ -40,13 +40,13 @@ Supersedes/extends: `docs/product-advisories/archived/2025-12-21-testing-strateg
- Live: opt-in upstream connector checks (never PR gating by default).
## Documentation moments (when to update)
-- New model or required test type: update `docs/testing/TEST_CATALOG.yml`.
-- New lane or gate: update `docs/TEST_SUITE_OVERVIEW.md` and `docs/testing/ci-quality-gates.md`.
+- New model or required test type: update `docs/technical/testing/TEST_CATALOG.yml`.
+- New lane or gate: update `docs/technical/testing/TEST_SUITE_OVERVIEW.md` and `docs/technical/testing/ci-quality-gates.md`.
- Module-specific test policy change: update the module dossier under `docs/modules/<module>/`.
- New fixtures or runnable harnesses: place under `docs/benchmarks/**` or `tests/**` and link here.
## Related artifacts
-- Test catalog (source of truth): `docs/testing/TEST_CATALOG.yml`
-- Test suite overview: `docs/TEST_SUITE_OVERVIEW.md`
-- Quality guardrails: `docs/testing/testing-quality-guardrails-implementation.md`
+- Test catalog (source of truth): `docs/technical/testing/TEST_CATALOG.yml`
+- Test suite overview: `docs/technical/testing/TEST_SUITE_OVERVIEW.md`
+- Quality guardrails: `docs/technical/testing/testing-quality-guardrails-implementation.md`
- Code samples from the advisory: `docs/benchmarks/testing/better-testing-strategy-samples.md`