Add signal contracts for reachability, exploitability, trust, and unknown symbols

- Introduced `ReachabilityState`, `RuntimeHit`, `ExploitabilitySignal`, `ReachabilitySignal`, `SignalEnvelope`, `SignalType`, `TrustSignal`, and `UnknownSymbolSignal` records to define various signal types and their properties.
- Implemented JSON serialization attributes for proper data interchange.
- Created project files for the new signal contracts library and corresponding test projects.
- Added deterministic test fixtures for micro-interaction testing.
- Included cryptographic keys for secure operations with cosign.
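A rough C# sketch of what these record contracts could look like; every property name beyond the record names listed above is an assumption, not the shipped shape:

```csharp
// Hypothetical sketch only; the real contracts live in the new signal contracts library.
using System;
using System.Text.Json.Serialization;

[JsonConverter(typeof(JsonStringEnumConverter))]
public enum SignalType { Reachability, Exploitability, Trust, UnknownSymbol }

public sealed record ReachabilitySignal(
    [property: JsonPropertyName("subject")] string Subject,       // purl or symbol id (assumed)
    [property: JsonPropertyName("state")] string State,           // a ReachabilityState value (assumed)
    [property: JsonPropertyName("observedAt")] DateTimeOffset ObservedAt);

public sealed record SignalEnvelope(
    [property: JsonPropertyName("type")] SignalType Type,
    [property: JsonPropertyName("payload")] object Payload,
    [property: JsonPropertyName("signature")] string? Signature); // DSSE signature slot (assumed)
```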
Author: StellaOps Bot
Date: 2025-12-05 00:27:00 +02:00
Parent: b018949a8d
Commit: 8768c27f30
192 changed files with 27569 additions and 2552 deletions


@@ -0,0 +1,16 @@
# StellaOps Evidence Locker Changelog
Semantic Versioning policy: MAJOR for breaking API/format changes; MINOR for new capabilities or schema additions; PATCH for fixes that do not change contracts. Dates are UTC.
## 1.1.0 (2025-12-04)
- Closed EB1–EB10 gaps from the 28-Nov-2025 advisory:
- Published canonical schemas `schemas/bundle.manifest.schema.json` and `schemas/checksums.schema.json`.
- DSSE subject now bound to the Merkle root (sha256 of `checksums.txt`); log policy captured for offline/online cases.
- Replay provenance block defined and embedded in manifest/attestation contracts.
- Incident-mode toggles recorded and signed; portable/redaction guidance formalized.
- Merkle/CAS recipe documented with deterministic gzip/tar invariants.
- Offline verifier guide + script published; golden sealed/portable bundles and replay NDJSON fixtures added under `tests/EvidenceLocker/Bundles/Golden/`.
- Status: **Released** for documentation/fixtures; wire into code/tests before packaging a new binary drop.
## 1.0.0 (2025-11-19)
- Initial Evidence Bundle v1 contract and sample layout published.


@@ -16,7 +16,7 @@ Working directory: `docs/implplan` (sprint coordination) with artefacts in `docs
| EB7 | Incident-mode signed activation/exit | `docs/modules/evidence-locker/incident-mode.md` | Evidence Locker Guild · Security Guild | Manifest/DSSE captures activation + deactivation events with signer identity; API/CLI steps documented. | DONE (2025-12-04) |
| EB8 | Tenant isolation + redaction manifest | `bundle-packaging.md` + portable bundle guidance | Evidence Locker Guild · Privacy Guild | Portable bundles omit tenant identifiers; redaction map recorded; verifier asserts redacted fields absent. | DONE (2025-12-04) |
| EB9 | Offline verifier script | `docs/modules/evidence-locker/verify-offline.md` | Evidence Locker Guild | POSIX script included; no network dependencies; emits Merkle root used by DSSE subject. | DONE (2025-12-04) |
| EB10 | Golden bundles/replay fixtures + SemVer/changelog | `tests/EvidenceLocker/Bundles/Golden/` + release notes (TBD) | Evidence Locker Guild · CLI Guild | Golden sealed + portable bundles and replay NDJSON with expected roots; changelog bump covering EB1–EB9. | Fixtures READY (2025-12-04); SemVer/changelog PENDING |
| EB10 | Golden bundles/replay fixtures + SemVer/changelog | `tests/EvidenceLocker/Bundles/Golden/` + `docs/modules/evidence-locker/CHANGELOG.md` | Evidence Locker Guild · CLI Guild | Golden sealed + portable bundles and replay NDJSON with expected roots; changelog bump covering EB1–EB9. | DONE (2025-12-04) |
## Near-Term Actions (to move EB1–EB10 to DONE)
- Wire schemas into EvidenceLocker CI (manifest + checksums validation) and surface in API/CLI OpenAPI/Help.
@@ -24,7 +24,7 @@ Working directory: `docs/implplan` (sprint coordination) with artefacts in `docs
- Extend replay contract with provenance block and ordering example, and mirror in manifest schema (EB5).
- Add normative Merkle/CAS section to `bundle-packaging.md`, ensuring DSSE subject references the root hash (EB3, EB6).
- Create golden fixtures under `tests/EvidenceLocker/Bundles/Golden/` with recorded expected hashes and replay traces; hook into xUnit tests (EB10).
- Bump Evidence Locker and CLI SemVer and changelog once above artefacts are wired (EB10).
- Bump Evidence Locker and CLI SemVer and changelog once above artefacts are wired (EB10): **completed** with changelog v1.1.0 and fixture drop; wire binaries/CLI version in the next release cut.
## Dependencies and Links
- Advisory: `docs/product-advisories/archived/27-Nov-2025-superseded/28-Nov-2025 - Evidence Bundle and Replay Contracts.md`


@@ -12,7 +12,8 @@ Scope: Define validation steps for replay bundles once schemas freeze.
- Determinism: run `stella replay` twice on the same bundle and assert identical outputs (hash comparison).
## Fixtures/tests
- Place golden bundles under `tests/EvidenceLocker/Fixtures/replay/` with expected hashes and DSSE signatures.
- Golden bundles live under `tests/EvidenceLocker/Bundles/Golden/` (sealed, portable, replay) with `expected.json` and DSSE envelopes.
- `StellaOps.EvidenceLocker.Tests` includes fixture tests that validate Merkle subject, redaction, and replay digest; keep them green when regenerating bundles.
- CLI validation test: `stella verify --bundle <fixture>` returns exit code 0 and prints `verified: true`.
## Open dependencies


@@ -77,6 +77,20 @@
"linksetDigest": "abcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcd",
"evidenceHash": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcd"
}
},
{
"purl": "pkg:npm/lodash@4.17.21",
"scopes": [],
"relationships": [],
"advisories": [],
"vexStatements": [],
"provenance": {
"source": "concelier.linkset.v1",
"collectedAt": "2025-12-04T15:29:00Z",
"eventOffset": 6000,
"linksetDigest": "abcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcd",
"evidenceHash": "89abcdef0123456789abcdef0123456789abcdef0123456789abcdef012345"
}
}
],
"links": {


@@ -0,0 +1,317 @@
# Feed Snapshot Freeze/Staleness Thresholds (SP6)
Status: Draft · Date: 2025-12-04
Scope: Codify feed snapshot governance including freshness budgets, staleness thresholds, freeze procedures, and validation checks.
## Objectives
- Define maximum age for vulnerability feed snapshots.
- Establish freeze/thaw procedures for reproducible scans.
- Set staleness detection and alerting thresholds.
- Enable deterministic feed state for replay scenarios.
## Freshness Budgets
### Feed Types
| Feed Type | Source | Max Age | Staleness Threshold | Critical Threshold |
|-----------|--------|---------|---------------------|-------------------|
| NVD | NIST NVD | 24 hours | 48 hours | 7 days |
| OSV | OSV.dev | 12 hours | 24 hours | 3 days |
| GHSA | GitHub | 12 hours | 24 hours | 3 days |
| Alpine | Alpine SecDB | 24 hours | 48 hours | 7 days |
| Debian | Debian Security Tracker | 24 hours | 48 hours | 7 days |
| RHEL | Red Hat OVAL | 24 hours | 72 hours | 7 days |
| Ubuntu | Ubuntu CVE Tracker | 24 hours | 48 hours | 7 days |
### Definitions
- **Max Age**: Maximum time since last successful sync before warnings
- **Staleness Threshold**: Time after which scans emit staleness warnings
- **Critical Threshold**: Time after which scans are blocked without override
## Snapshot States
```
┌─────────────┐
│   SYNCING   │ ─────────────────────────────────┐
└─────────────┘                                  │
       │                                         │
       ▼                                         ▼
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│    FRESH    │ ──►  │    STALE    │ ──►  │  CRITICAL   │
└─────────────┘      └─────────────┘      └─────────────┘
       │                    │                    │
       ▼                    ▼                    ▼
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   FROZEN    │      │   FROZEN    │      │   FROZEN    │
│   (fresh)   │      │   (stale)   │      │ (critical)  │
└─────────────┘      └─────────────┘      └─────────────┘
```
### State Transitions
| Current State | Trigger | New State | Action |
|---------------|---------|-----------|--------|
| SYNCING | Sync complete | FRESH | Record snapshot hash |
| FRESH | Max age exceeded | STALE | Emit warning |
| STALE | Staleness threshold exceeded | CRITICAL | Block scans |
| CRITICAL | Manual override | CRITICAL (override) | Log override |
| Any | Freeze command | FROZEN | Lock snapshot |
| FROZEN | Thaw command | Previous state | Unlock snapshot |
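The age-driven transitions in this table could be encoded as in the following sketch; the enum and type names are illustrative, not from the codebase:

```csharp
using System;

public enum SnapshotState { Syncing, Fresh, Stale, Critical, Frozen }

public sealed record Thresholds(TimeSpan MaxAge, TimeSpan Staleness, TimeSpan Critical);

public static class SnapshotStateMachine
{
    // Age-driven transitions only; freeze/thaw and manual override are
    // explicit commands, not age-driven, so they are omitted here.
    public static SnapshotState Advance(SnapshotState current, TimeSpan age, Thresholds t) =>
        current switch
        {
            SnapshotState.Fresh when age > t.MaxAge    => SnapshotState.Stale,
            SnapshotState.Stale when age > t.Staleness => SnapshotState.Critical,
            _ => current
        };
}
```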
## Snapshot Schema
```json
{
"id": "nvd-2025-12-04T00-00-00Z",
"feed": "nvd",
"source": "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-modified.json.gz",
"syncedAt": "2025-12-04T00:00:00Z",
"hash": {
"blake3": "...",
"sha256": "..."
},
"state": "fresh",
"advisoryCount": 250000,
"lastModified": "2025-12-03T23:45:00Z",
"frozen": false,
"frozenAt": null,
"frozenBy": null,
"thresholds": {
"maxAge": "PT24H",
"staleness": "PT48H",
"critical": "P7D"
}
}
```
## Freeze Procedures
### Manual Freeze
```bash
# Freeze current snapshot for reproducible scans
stellaops feed freeze --feed nvd --reason "Audit baseline 2025-Q4"
# Freeze with specific snapshot
stellaops feed freeze --feed nvd --snapshot-id nvd-2025-12-04T00-00-00Z
# List frozen snapshots
stellaops feed list --frozen
```
### Automatic Freeze (Replay Mode)
When recording a scan for replay:
1. Scanner captures current snapshot IDs for all feeds
2. Snapshot hashes recorded in replay manifest
3. Replay execution uses frozen snapshots from manifest
```json
{
"replay": {
"feeds": {
"nvd": {
"snapshotId": "nvd-2025-12-04T00-00-00Z",
"hash": "b3:...",
"state": "frozen"
},
"osv": {
"snapshotId": "osv-2025-12-04T00-00-00Z",
"hash": "b3:...",
"state": "frozen"
}
}
}
}
```
### Thaw Procedures
```bash
# Thaw a frozen snapshot (resume normal sync)
stellaops feed thaw --feed nvd --snapshot-id nvd-2025-12-04T00-00-00Z
# Force thaw all frozen snapshots
stellaops feed thaw --all --force
```
## Validation Checks
### Pre-Scan Validation
```python
# scripts/scanner/validate_feeds.py
from dataclasses import dataclass, field
from datetime import datetime, timezone

# parse_duration: ISO-8601 duration parser ("PT24H" -> timedelta); assumed to be
# provided alongside this script.

@dataclass
class ValidationResult:
    valid: bool
    errors: list[str] = field(default_factory=list)
    warnings: list[str] = field(default_factory=list)

def validate_feed_freshness(feed_states: dict) -> ValidationResult:
    errors: list[str] = []
    warnings: list[str] = []
    for feed, state in feed_states.items():
        # state['syncedAt'] is expected to be a timezone-aware datetime.
        age = datetime.now(timezone.utc) - state['syncedAt']
        thresholds = state['thresholds']
        if age > parse_duration(thresholds['critical']):
            if not state.get('override'):
                errors.append(f"{feed}: Critical staleness ({age})")
        elif age > parse_duration(thresholds['staleness']):
            warnings.append(f"{feed}: Stale ({age})")
        elif age > parse_duration(thresholds['maxAge']):
            warnings.append(f"{feed}: Approaching staleness ({age})")
    return ValidationResult(valid=not errors, errors=errors, warnings=warnings)
```
### Scan Output Metadata
```json
{
"scan": {
"id": "scan-12345",
"feedState": {
"nvd": {
"snapshotId": "nvd-2025-12-04T00-00-00Z",
"state": "fresh",
"age": "PT12H",
"hash": "b3:..."
},
"osv": {
"snapshotId": "osv-2025-12-04T00-00-00Z",
"state": "stale",
"age": "PT36H",
"hash": "b3:...",
"warning": "Feed approaching staleness threshold"
}
}
}
}
```
## API Endpoints
### Feed Status
```http
GET /api/v1/feeds/status
```
Response:
```json
{
"feeds": {
"nvd": {
"state": "fresh",
"snapshotId": "nvd-2025-12-04T00-00-00Z",
"syncedAt": "2025-12-04T00:00:00Z",
"age": "PT12H",
"frozen": false,
"thresholds": {
"maxAge": "PT24H",
"staleness": "PT48H",
"critical": "P7D"
}
}
}
}
```
### Freeze Feed
```http
POST /api/v1/feeds/{feed}/freeze
Content-Type: application/json
{
"snapshotId": "nvd-2025-12-04T00-00-00Z",
"reason": "Audit baseline"
}
```
### Override Staleness
```http
POST /api/v1/feeds/{feed}/override
Content-Type: application/json
{
"snapshotId": "nvd-2025-12-04T00-00-00Z",
"reason": "Emergency scan required",
"approvedBy": "security-admin@example.com",
"expiresAt": "2025-12-05T00:00:00Z"
}
```
## Alerting
### Alert Conditions
| Condition | Severity | Action |
|-----------|----------|--------|
| Feed sync failed | Warning | Retry with backoff |
| Feed approaching staleness | Info | Log, notify ops |
| Feed stale | Warning | Notify ops, scan warnings |
| Feed critical | Critical | Block scans, page on-call |
| Feed frozen > 30 days | Warning | Review freeze necessity |
### Alert Payload
```json
{
"alert": {
"type": "feed_staleness",
"feed": "nvd",
"severity": "warning",
"snapshotId": "nvd-2025-12-04T00-00-00Z",
"age": "PT50H",
"threshold": "PT48H",
"message": "NVD feed exceeds staleness threshold (50h > 48h)",
"timestamp": "2025-12-06T02:00:00Z"
}
}
```
## Determinism Integration
### Replay Requirements
For deterministic replay:
1. All feeds MUST be frozen at recorded state
2. Snapshot hashes MUST match manifest
3. No network fetch during replay
4. Staleness validation skipped (uses recorded state)
### Validation Script
```bash
#!/bin/bash
# scripts/scanner/verify_feed_hashes.sh
set -euo pipefail

MANIFEST="replay-manifest.json"

for feed in $(jq -r '.replay.feeds | keys[]' "${MANIFEST}"); do
  expected_hash=$(jq -r ".replay.feeds.${feed}.hash" "${MANIFEST}")
  snapshot_id=$(jq -r ".replay.feeds.${feed}.snapshotId" "${MANIFEST}")
  actual_hash=$(stellaops feed hash --snapshot-id "${snapshot_id}")
  if [[ "${actual_hash}" != "${expected_hash}" ]]; then
    echo "FAIL: ${feed} snapshot hash mismatch"
    echo "  expected: ${expected_hash}"
    echo "  actual:   ${actual_hash}"
    exit 1
  fi
  echo "PASS: ${feed} snapshot ${snapshot_id}"
done
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (SP6)
- Spine Versioning: `docs/modules/policy/contracts/spine-versioning-plan.md` (SP1)
- Replay: `docs/replay/DETERMINISTIC_REPLAY.md`


@@ -0,0 +1,331 @@
# SBOM/VEX Deterministic Diff Rules (SP5)
Status: Draft · Date: 2025-12-04
Scope: Define deterministic diff rules and fixtures for SBOM/VEX deltas, ensuring reproducible comparison results and stable hash expectations.
## Objectives
- Enable deterministic diffing between SBOM/VEX versions.
- Define canonical ordering for diff output.
- Provide fixtures for validating diff implementations.
- Ensure diff results are hash-stable.
## Diff Operations
### Supported Operations
| Operation | Description | Output Format |
|-----------|-------------|---------------|
| `component-diff` | Compare component lists between SBOMs | JSON Patch |
| `vulnerability-diff` | Compare vulnerability lists | JSON Patch |
| `vex-diff` | Compare VEX statements | JSON Patch |
| `full-diff` | Complete SBOM/VEX comparison | Combined JSON Patch |
### JSON Patch Format
Diff output uses RFC 6902 JSON Patch format:
```json
{
"patch": [
{
"op": "add",
"path": "/components/2",
"value": {
"type": "library",
"name": "new-lib",
"version": "1.0.0",
"purl": "pkg:npm/new-lib@1.0.0"
}
},
{
"op": "remove",
"path": "/components/0"
},
{
"op": "replace",
"path": "/components/1/version",
"value": "2.0.0"
}
],
"meta": {
"source": "sbom-v1.json",
"target": "sbom-v2.json",
"sourceHash": "b3:...",
"targetHash": "b3:...",
"patchHash": "b3:...",
"timestamp": "2025-12-04T00:00:00Z"
}
}
```
## Determinism Rules
### Ordering
1. **Operations**: `remove` first (descending path order), then `replace`, then `add`
2. **Paths**: Lexicographic sort within operation type
3. **Array indices**: Stable indices based on sort keys (purl for components, id for vulns)
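A minimal comparer implementing these ordering rules, as a sketch (the `PatchOp` shape is assumed):

```csharp
using System;
using System.Collections.Generic;

public sealed record PatchOp(string Op, string Path);

public sealed class PatchOpComparer : IComparer<PatchOp>
{
    // Rule 1: remove < replace < add.
    private static int Rank(string op) => op switch
    {
        "remove" => 0,
        "replace" => 1,
        "add" => 2,
        _ => 3
    };

    public int Compare(PatchOp? x, PatchOp? y)
    {
        ArgumentNullException.ThrowIfNull(x);
        ArgumentNullException.ThrowIfNull(y);
        var byRank = Rank(x.Op).CompareTo(Rank(y.Op));
        if (byRank != 0) return byRank;
        // Rules 1-2: removes in descending path order (higher indices first),
        // everything else in ascending lexicographic path order.
        var byPath = string.CompareOrdinal(x.Path, y.Path);
        return x.Op == "remove" ? -byPath : byPath;
    }
}
```

Sorting a patch with `ops.Sort(new PatchOpComparer())` yields the operation order shown in the component-diff example below.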
### Canonical Comparison
When comparing elements for diff:
| Element Type | Sort Keys | Tie Breakers |
|--------------|-----------|--------------|
| Component | `purl` | `name`, `version` |
| Vulnerability | `id` | `source.name`, `ratings[0].score` |
| VEX Statement | `vulnerability` | `products[0].purl`, `timestamp` |
| Service | `name` | `version` |
| Property | `name` | - |
### Hash Computation
Diff output hash computed as:
1. Serialize patch array to canonical JSON (sorted keys, no whitespace)
2. Compute BLAKE3-256 over UTF-8 bytes
3. Record in `meta.patchHash`
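A sketch of these three steps; SHA-256 from the BCL stands in here because BLAKE3-256 would require a third-party package:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;
using System.Text.Json.Nodes;

public static class PatchHasher
{
    // Step 1: canonical JSON -- object keys sorted ordinally, no insignificant whitespace.
    public static string Canonicalize(JsonNode? node) => node switch
    {
        null => "null",
        JsonObject obj => "{" + string.Join(",",
            obj.OrderBy(p => p.Key, StringComparer.Ordinal)
               .Select(p => JsonSerializer.Serialize(p.Key) + ":" + Canonicalize(p.Value))) + "}",
        JsonArray arr => "[" + string.Join(",", arr.Select(Canonicalize)) + "]",
        JsonValue value => value.ToJsonString(),   // scalars: numbers, strings, booleans
        _ => "null"
    };

    // Steps 2-3: hash the UTF-8 bytes and record the result as meta.patchHash.
    public static string PatchHash(JsonNode patchArray)
    {
        var bytes = Encoding.UTF8.GetBytes(Canonicalize(patchArray));
        var digest = SHA256.HashData(bytes);       // swap in BLAKE3-256 to match the spec
        return "b3:" + Convert.ToHexString(digest).ToLowerInvariant();
    }
}
```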
## Component Diff
### Input
```json
// sbom-v1.json
{
"components": [
{"name": "lib-a", "version": "1.0.0", "purl": "pkg:npm/lib-a@1.0.0"},
{"name": "lib-b", "version": "1.0.0", "purl": "pkg:npm/lib-b@1.0.0"}
]
}
// sbom-v2.json
{
"components": [
{"name": "lib-a", "version": "1.0.1", "purl": "pkg:npm/lib-a@1.0.1"},
{"name": "lib-c", "version": "1.0.0", "purl": "pkg:npm/lib-c@1.0.0"}
]
}
```
### Output
```json
{
"patch": [
{
"op": "remove",
"path": "/components/1",
"comment": "removed pkg:npm/lib-b@1.0.0"
},
{
"op": "replace",
"path": "/components/0/version",
"value": "1.0.1",
"comment": "upgraded pkg:npm/lib-a@1.0.0 -> 1.0.1"
},
{
"op": "replace",
"path": "/components/0/purl",
"value": "pkg:npm/lib-a@1.0.1"
},
{
"op": "add",
"path": "/components/1",
"value": {"name": "lib-c", "version": "1.0.0", "purl": "pkg:npm/lib-c@1.0.0"},
"comment": "added pkg:npm/lib-c@1.0.0"
}
]
}
```
## Vulnerability Diff
### Added/Removed Vulnerabilities
```json
{
"patch": [
{
"op": "add",
"path": "/vulnerabilities/-",
"value": {
"id": "CVE-2025-0002",
"ratings": [{"method": "CVSSv4", "score": 5.3}]
},
"comment": "new vulnerability CVE-2025-0002"
},
{
"op": "remove",
"path": "/vulnerabilities/0",
"comment": "resolved CVE-2025-0001"
}
]
}
```
### Rating Changes
```json
{
"patch": [
{
"op": "replace",
"path": "/vulnerabilities/0/ratings/0/score",
"value": 9.0,
"comment": "CVE-2025-0001 score updated 8.5 -> 9.0"
}
]
}
```
## VEX Diff
### Statement Status Changes
```json
{
"patch": [
{
"op": "replace",
"path": "/statements/0/status",
"value": "not_affected",
"comment": "CVE-2025-0001 status changed affected -> not_affected"
},
{
"op": "add",
"path": "/statements/0/justification",
"value": {
"category": "vulnerable_code_not_present",
"details": "Function patched in v2.0.1"
}
}
]
}
```
## Fixtures
### Directory Structure
```
docs/modules/policy/fixtures/diff-rules/
├── component-diff/
│ ├── input-v1.json
│ ├── input-v2.json
│ ├── expected-diff.json
│ └── hashes.txt
├── vulnerability-diff/
│ ├── input-v1.json
│ ├── input-v2.json
│ ├── expected-diff.json
│ └── hashes.txt
├── vex-diff/
│ ├── input-v1.json
│ ├── input-v2.json
│ ├── expected-diff.json
│ └── hashes.txt
└── full-diff/
├── sbom-v1.json
├── sbom-v2.json
├── expected-diff.json
└── hashes.txt
```
### Sample Fixture (Component Diff)
```json
// docs/modules/policy/fixtures/diff-rules/component-diff/expected-diff.json
{
"patch": [
{"op": "remove", "path": "/components/1"},
{"op": "replace", "path": "/components/0/version", "value": "1.0.1"},
{"op": "replace", "path": "/components/0/purl", "value": "pkg:npm/lib-a@1.0.1"},
{"op": "add", "path": "/components/1", "value": {"name": "lib-c", "version": "1.0.0", "purl": "pkg:npm/lib-c@1.0.0"}}
],
"meta": {
"sourceHash": "b3:...",
"targetHash": "b3:...",
"patchHash": "b3:..."
}
}
```
## CI Validation
```bash
#!/bin/bash
# scripts/policy/validate-diff-fixtures.sh
set -euo pipefail

FIXTURE_DIR="docs/modules/policy/fixtures/diff-rules"

for category in component-diff vulnerability-diff vex-diff full-diff; do
  echo "Validating ${category}..."
  src="${FIXTURE_DIR}/${category}/input-v1.json"
  dst="${FIXTURE_DIR}/${category}/input-v2.json"
  if [[ "${category}" == "full-diff" ]]; then
    # full-diff fixtures use sbom-* names (see directory structure above)
    src="${FIXTURE_DIR}/${category}/sbom-v1.json"
    dst="${FIXTURE_DIR}/${category}/sbom-v2.json"
  fi
  # Run diff
  stellaops-diff \
    --source "${src}" \
    --target "${dst}" \
    --output /tmp/actual-diff.json
  # Compare with expected
  expected_hash=$(grep "expected-diff.json" "${FIXTURE_DIR}/${category}/hashes.txt" | awk '{print $2}')
  actual_hash=$(b3sum /tmp/actual-diff.json | cut -d' ' -f1)
  if [[ "${actual_hash}" != "${expected_hash}" ]]; then
    echo "FAIL: ${category} diff hash mismatch"
    diff <(jq -S . "${FIXTURE_DIR}/${category}/expected-diff.json") <(jq -S . /tmp/actual-diff.json)
    exit 1
  fi
  echo "PASS: ${category}"
done
```
## API Integration
### Diff Endpoint
```http
POST /api/v1/sbom/diff
Content-Type: application/json
{
"source": "<base64-encoded-sbom-v1>",
"target": "<base64-encoded-sbom-v2>",
"options": {
"includeComments": true,
"format": "json-patch"
}
}
```
### Response
```json
{
"diff": {
"patch": [...],
"meta": {
"sourceHash": "b3:...",
"targetHash": "b3:...",
"patchHash": "b3:...",
"componentChanges": {
"added": 1,
"removed": 1,
"modified": 1
},
"vulnerabilityChanges": {
"added": 0,
"removed": 1,
"modified": 0
}
}
}
}
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (SP5)
- Spine Versioning: `docs/modules/policy/contracts/spine-versioning-plan.md` (SP1)


@@ -0,0 +1,297 @@
# API/UI Surfacing for New Metadata (SC7)
Status: Draft · Date: 2025-12-04
Scope: Define API endpoints and UI components for surfacing CVSS v4, CycloneDX 1.7/CBOM, SLSA 1.2, and evidence metadata with deterministic pagination and sorting.
## Objectives
- Expose new metadata fields via REST API endpoints.
- Define UI components for filters, columns, and downloads.
- Ensure deterministic pagination and sorting across all endpoints.
- Support both online and offline UI rendering.
## API Endpoints
### Vulnerability Ratings (CVSS v4 + v3.1)
```http
GET /api/v1/scans/{scanId}/vulnerabilities
```
Response includes dual CVSS ratings:
```json
{
"vulnerabilities": [
{
"id": "CVE-2025-0001",
"ratings": [
{
"method": "CVSSv4",
"score": 8.5,
"severity": "high",
"vector": "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:L/VA:N/SC:N/SI:N/SA:N"
},
{
"method": "CVSSv31",
"score": 7.5,
"severity": "high",
"vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N"
}
],
"evidence": {
"source": "concelier:nvd:2025-12-03",
"hash": "b3:eeee1111...",
"proofId": "proof-12345"
}
}
],
"meta": {
"page": {
"cursor": "eyJpZCI6IkNWRS0yMDI1LTAwMDEiLCJzY29yZSI6OC41fQ==",
"size": 100,
"hasMore": true
},
"sort": {
"field": "ratings.CVSSv4.score",
"direction": "desc"
}
}
}
```
### CBOM Services
```http
GET /api/v1/scans/{scanId}/services
```
Response includes CBOM properties:
```json
{
"services": [
{
"name": "api-gateway",
"version": "1.0.0",
"cbom": {
"ingress": "0.0.0.0:8080",
"egress": "https://external-api.example.invalid:443",
"dataClassification": "pii",
"provider": null,
"region": null
}
}
],
"meta": {
"page": {
"cursor": "eyJuYW1lIjoiYXBpLWdhdGV3YXkifQ==",
"size": 100
}
}
}
```
### Source Provenance (SLSA)
```http
GET /api/v1/scans/{scanId}/provenance
```
Response includes SLSA Source Track:
```json
{
"provenance": {
"source": {
"repo": "https://example.invalid/demo",
"ref": "refs/tags/v1.0.0",
"commit": "aaaa...",
"treeHash": "b3:1111..."
},
"build": {
"id": "build-12345",
"invocationHash": "b3:2222...",
"builderId": "https://builder.stellaops.local/scanner"
},
"dsse": {
"hash": "sha256:4444...",
"cas": "cas://provenance/demo/v1.0.0.dsse"
}
}
}
```
### Evidence Lookup
```http
GET /api/v1/evidence/{evidenceHash}
```
Returns evidence details:
```json
{
"evidence": {
"hash": "b3:eeee1111...",
"source": "scanner:binary-analyzer:v1.0.0",
"type": "binary",
"metadata": {
"buildId": "abc123...",
"symbolsHash": "b3:...",
"confidence": 0.95
},
"cas": "cas://evidence/openssl/3.0.0/binary-analysis.json"
}
}
```
## Pagination & Sorting
### Deterministic Cursors
Cursors are base64-encoded tuples of sort keys:
```json
{
"cursor": "base64({\"id\":\"CVE-2025-0001\",\"score\":8.5})",
"decode": {
"primaryKey": "id",
"secondaryKey": "score",
"lastValue": {"id": "CVE-2025-0001", "score": 8.5}
}
}
```
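A minimal encode/decode sketch; the record shape and camelCase naming are inferred from the example above:

```csharp
using System;
using System.Text;
using System.Text.Json;

public sealed record VulnCursor(string Id, double Score);

public static class Cursor
{
    // Property order is fixed by the record declaration, so serialization is stable.
    private static readonly JsonSerializerOptions Opts = new()
    {
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase
    };

    public static string Encode(VulnCursor c) =>
        Convert.ToBase64String(Encoding.UTF8.GetBytes(JsonSerializer.Serialize(c, Opts)));

    public static VulnCursor? Decode(string cursor) =>
        JsonSerializer.Deserialize<VulnCursor>(
            Encoding.UTF8.GetString(Convert.FromBase64String(cursor)), Opts);
}
```

`Encode(new VulnCursor("CVE-2025-0001", 8.5))` reproduces the cursor shown in the vulnerability response above.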
### Sort Fields
| Endpoint | Default Sort | Allowed Fields |
|----------|--------------|----------------|
| `/vulnerabilities` | `ratings.CVSSv4.score desc, id asc` | `id`, `ratings.CVSSv4.score`, `ratings.CVSSv31.score`, `severity` |
| `/components` | `purl asc, name asc` | `purl`, `name`, `version`, `type` |
| `/services` | `name asc` | `name`, `version`, `cbom.dataClassification` |
### Page Size
| Parameter | Default | Min | Max |
|-----------|---------|-----|-----|
| `pageSize` | 100 | 1 | 500 |
## UI Components
### Vulnerability Table Columns
| Column | Source | Sortable | Filterable |
|--------|--------|----------|------------|
| CVE ID | `id` | Yes | Yes (search) |
| CVSS v4 Score | `ratings[method=CVSSv4].score` | Yes | Yes (range) |
| CVSS v4 Severity | `ratings[method=CVSSv4].severity` | Yes | Yes (multi-select) |
| CVSS v3.1 Score | `ratings[method=CVSSv31].score` | Yes | Yes (range) |
| Affected Component | `affects[].ref` | No | Yes (search) |
| Evidence Source | `evidence.source` | No | Yes (multi-select) |
### CBOM Service View
| Column | Source | Sortable | Filterable |
|--------|--------|----------|------------|
| Service Name | `name` | Yes | Yes |
| Ingress | `cbom.ingress` | No | Yes |
| Egress | `cbom.egress` | No | Yes |
| Data Classification | `cbom.dataClassification` | Yes | Yes (multi-select) |
| Provider | `cbom.provider` | Yes | Yes |
| Region | `cbom.region` | Yes | Yes |
### Filters
```typescript
interface VulnerabilityFilters {
  severity?: ('critical' | 'high' | 'medium' | 'low')[];
  cvssV4ScoreMin?: number;
  cvssV4ScoreMax?: number;
  cvssV31ScoreMin?: number;
  cvssV31ScoreMax?: number;
  hasEvidence?: boolean;
  evidenceSource?: string[];
  affectedComponent?: string;
}

interface ServiceFilters {
  dataClassification?: ('pii' | 'internal' | 'public' | 'confidential')[];
  hasEgress?: boolean;
  provider?: string[];
  region?: string[];
}
```
## Download Formats
### Export Endpoints
```http
GET /api/v1/scans/{scanId}/export?format={format}
```
| Format | Content-Type | Deterministic |
|--------|--------------|---------------|
| `cdx-1.7` | `application/vnd.cyclonedx+json` | Yes |
| `cdx-1.6` | `application/vnd.cyclonedx+json` | Yes |
| `spdx-3.0` | `application/spdx+json` | Yes |
| `csv` | `text/csv` | Yes |
| `pdf` | `application/pdf` | Partial* |
*PDF includes timestamp in footer
### CSV Export Columns
```csv
cve_id,cvss_v4_score,cvss_v4_vector,cvss_v31_score,cvss_v31_vector,severity,affected_purl,evidence_hash,evidence_source
CVE-2025-0001,8.5,"CVSS:4.0/...",7.5,"CVSS:3.1/...",high,pkg:npm/example-lib@2.0.0,b3:eeee...,concelier:nvd:2025-12-03
```
## Determinism Requirements
1. **Sort stability**: All sorts use secondary key (usually `id`) for tie-breaking
2. **Cursor encoding**: Deterministic JSON serialization before base64
3. **Timestamps**: UTC ISO-8601, no sub-millisecond precision unless non-zero
4. **Export ordering**: Same ordering rules as API responses
5. **Filter normalization**: Sort filter arrays before query execution
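Rule 5 as a sketch; the helper name is illustrative:

```csharp
using System;
using System.Linq;

public static class FilterNormalizer
{
    // Sort + dedupe filter arrays with an ordinal comparer so the same filter set
    // always produces the same query (and cache key), regardless of input order.
    public static string[] Normalize(string[]? values) =>
        values is null
            ? Array.Empty<string>()
            : values.Distinct(StringComparer.Ordinal)
                    .OrderBy(v => v, StringComparer.Ordinal)
                    .ToArray();
}
```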
## Offline Support
### Prefetch Manifest
```json
{
"prefetch": {
"vulnerabilities": "/api/v1/scans/{scanId}/vulnerabilities?pageSize=500",
"components": "/api/v1/scans/{scanId}/components?pageSize=500",
"services": "/api/v1/scans/{scanId}/services",
"provenance": "/api/v1/scans/{scanId}/provenance"
},
"cache": {
"ttl": 86400,
"storage": "indexeddb"
}
}
```
### Static Export
For air-gapped environments, export complete dataset:
```http
GET /api/v1/scans/{scanId}/offline-bundle
```
Returns zip containing:
- `vulnerabilities.json` (all pages concatenated)
- `components.json`
- `services.json`
- `provenance.json`
- `manifest.json` (with hashes)
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (SC7)
- Roadmap: `docs/modules/scanner/design/standards-convergence-roadmap.md` (SC1)
- Contract: `docs/modules/scanner/design/cdx17-cbom-contract.md` (SC2)


@@ -0,0 +1,207 @@
# Binary Evidence Alignment with SBOM/VEX Outputs (SC6)
Status: Draft · Date: 2025-12-04
Scope: Define how binary-level evidence (build-id, symbols, patch oracle) aligns with SBOM/VEX outputs to feed policy engines and VEX decisioning.
## Objectives
- Link binary-level evidence to SBOM component identities.
- Ensure evidence fields are available for policy/VEX correlation.
- Define required joins between binary analysis and vulnerability data.
- Enable deterministic evidence chain from binary → SBOM → VEX → policy.
## Evidence Types
### Build Identity Evidence
| Evidence Type | Source | SBOM Field | VEX Field | Policy Input |
|---------------|--------|------------|-----------|--------------|
| Build ID | ELF `.note.gnu.build-id` | `component.properties[evidence:build-id]` | `statement.products[].identifiers.buildId` | `binary.buildId` |
| Go Build Info | `runtime.BuildInfo` | `component.properties[evidence:go-build]` | n/a | `binary.goBuildInfo` |
| PE Version | PE resource section | `component.properties[evidence:pe-version]` | n/a | `binary.peVersion` |
| Mach-O UUID | LC_UUID command | `component.properties[evidence:macho-uuid]` | n/a | `binary.machoUuid` |
### Symbol Evidence
| Evidence Type | Source | SBOM Field | VEX Field | Policy Input |
|---------------|--------|------------|-----------|--------------|
| Exported Symbols | ELF `.dynsym` / PE exports | `component.properties[evidence:symbols-hash]` | n/a | `binary.symbolsHash` |
| Debug Symbols | DWARF / PDB | `component.properties[evidence:debug-hash]` | n/a | `binary.debugHash` |
| Function Names | Symbol table | `component.properties[evidence:functions]` | `statement.justification.functions` | `binary.functions[]` |
### Patch Oracle Evidence
| Evidence Type | Source | SBOM Field | VEX Field | Policy Input |
|---------------|--------|------------|-----------|--------------|
| Patch Signature | Function hash diff | `component.properties[evidence:patch-sig]` | `statement.justification.patchSignature` | `patch.signature` |
| CVE Fix Commit | Commit mapping | `component.properties[evidence:fix-commit]` | `statement.justification.fixCommit` | `patch.fixCommit` |
| Binary Diff | objdiff hash | `component.properties[evidence:binary-diff]` | n/a | `patch.binaryDiffHash` |
## Evidence Chain
```
Binary → SBOM Component → VEX Statement → Policy Evaluation
  │              │                  │                   │
  │              │                  │                   └── policy://input/component/{purl}
  │              │                  └── vex://statement/{cve}/{product}
  │              └── sbom://component/{purl}
  └── binary://evidence/{hash}
```
### Join Keys
| Source | Target | Join Field | Required |
|--------|--------|------------|----------|
| Binary | SBOM Component | `evidence:hash` → `component.hashes[]` | Yes |
| SBOM Component | VEX Product | `component.purl` → `statement.products[].purl` | Yes |
| VEX Statement | Policy Input | `statement.cve` → `policy.advisoryId` | Yes |
| Binary Evidence | VEX Justification | `evidence:patch-sig` → `justification.patchSignature` | No |
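A sketch of the two required joins, with assumed record shapes standing in for the real contracts:

```csharp
using System.Collections.Generic;
using System.Linq;

public sealed record Component(string Purl, IReadOnlyList<string> Hashes);
public sealed record VexStatement(string Cve, string ProductPurl, string Status);

public static class EvidenceJoins
{
    // Binary evidence -> SBOM component via the evidence hash (required join).
    public static Component? FindComponent(string evidenceHash, IEnumerable<Component> components) =>
        components.FirstOrDefault(c => c.Hashes.Contains(evidenceHash));

    // SBOM component -> VEX statements via the purl (required join).
    public static IEnumerable<VexStatement> FindStatements(Component component, IEnumerable<VexStatement> vex) =>
        vex.Where(s => s.ProductPurl == component.Purl);
}
```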
## Required SBOM Evidence Properties
For binary evidence to flow through the pipeline, components MUST include:
```json
{
"type": "library",
"name": "openssl",
"version": "3.0.0",
"purl": "pkg:generic/openssl@3.0.0",
"hashes": [
{"alg": "SHA-256", "content": "..."}
],
"properties": [
{"name": "evidence:hash", "value": "b3:..."},
{"name": "evidence:source", "value": "scanner:binary-analyzer:v1.0.0"},
{"name": "evidence:build-id", "value": "abc123..."},
{"name": "evidence:symbols-hash", "value": "b3:..."},
{"name": "evidence:patch-sig", "value": "b3:..."},
{"name": "evidence:confidence", "value": "0.95"}
]
}
```
## VEX Integration
VEX statements can reference binary evidence for justification:
```json
{
"vulnerability": "CVE-2025-0001",
"products": [
{
"purl": "pkg:generic/openssl@3.0.0",
"identifiers": {
"buildId": "abc123...",
"evidenceHash": "b3:..."
}
}
],
"status": "not_affected",
"justification": {
"category": "vulnerable_code_not_present",
"patchSignature": "b3:...",
"fixCommit": "deadbeef...",
"functions": ["EVP_EncryptUpdate", "EVP_DecryptUpdate"],
"evidenceRef": "cas://evidence/openssl/3.0.0/binary-analysis.json"
}
}
```
## Policy Engine Integration
Policy rules can reference binary evidence fields:
```rego
# policy/scanner/binary-evidence.rego
package scanner.binary

import rego.v1

# Require build-id for critical vulns
deny contains msg if {
    input.vulnerability.severity == "critical"
    not input.component.properties["evidence:build-id"]
    msg := sprintf("Critical vuln %s requires build-id evidence", [input.vulnerability.id])
}

# Accept patch oracle evidence as mitigation
allow contains decision if {
    input.component.properties["evidence:patch-sig"]
    input.vex.status == "not_affected"
    input.vex.justification.patchSignature == input.component.properties["evidence:patch-sig"]
    decision := {
        "action": "accept",
        "reason": "Patch signature verified",
        "confidence": input.component.properties["evidence:confidence"],
    }
}

# Confidence threshold for binary analysis
warn contains msg if {
    conf := to_number(input.component.properties["evidence:confidence"])
    conf < 0.8
    msg := sprintf("Low confidence (%v) binary evidence for %s", [conf, input.component.purl])
}
```
## Evidence Fields by Binary Format
### ELF (Linux)
| Section | Evidence Extracted | Deterministic |
|---------|-------------------|---------------|
| `.note.gnu.build-id` | Build ID (SHA1/UUID) | Yes |
| `.gnu.hash` | Symbol hash table | Yes |
| `.dynsym` | Dynamic symbols | Yes |
| `.debug_info` | DWARF debug symbols | Yes |
| `.rodata` | String literals | Yes |
### PE (Windows)
| Section | Evidence Extracted | Deterministic |
|---------|-------------------|---------------|
| PE Header | Timestamp, Machine type | Partial* |
| Resource | Version info, Product name | Yes |
| Export Table | Exported functions | Yes |
| Import Table | Dependencies | Yes |
| Debug Directory | PDB path, GUID | Yes |
*PE timestamp may be zeroed for reproducible builds
### Mach-O (macOS)
| Command | Evidence Extracted | Deterministic |
|---------|-------------------|---------------|
| LC_UUID | Binary UUID | Yes |
| LC_VERSION_MIN | Min OS version | Yes |
| LC_BUILD_VERSION | Build version | Yes |
| LC_CODE_SIGNATURE | Code signature | Yes |
| SYMTAB | Symbol table | Yes |
## Determinism Requirements
1. **Stable ordering**: Evidence properties sorted by name
2. **Hash computation**: BLAKE3-256 over canonical JSON
3. **Confidence scores**: 4 decimal places, `MidpointRounding.ToZero`
4. **Function lists**: Sorted lexicographically, deduplicated
5. **Symbol hashes**: Computed over sorted symbol names
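Rules 3 and 4 as a sketch (helper names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

public static class EvidenceNormalizer
{
    // Rule 3: round toward zero at 4 decimal places, format with the invariant culture.
    public static string FormatConfidence(double confidence) =>
        Math.Round(confidence, 4, MidpointRounding.ToZero)
            .ToString("0.####", CultureInfo.InvariantCulture);

    // Rule 4: lexicographic (ordinal) sort, deduplicated.
    public static IReadOnlyList<string> NormalizeFunctions(IEnumerable<string> functions) =>
        functions.Distinct(StringComparer.Ordinal)
                 .OrderBy(f => f, StringComparer.Ordinal)
                 .ToList();
}
```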
## CAS Storage
Binary evidence artifacts stored in CAS:
```
cas://evidence/{component}/{version}/
├── binary-analysis.json # Full analysis result
├── symbols.txt # Extracted symbols (sorted)
├── functions.txt # Extracted functions (sorted)
└── patch-signatures.json # Patch oracle signatures
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (SC6)
- Roadmap: `docs/modules/scanner/design/standards-convergence-roadmap.md` (SC1)
- Contract: `docs/modules/scanner/design/cdx17-cbom-contract.md` (SC2)
- Entropy: `docs/modules/scanner/entropy.md`


@@ -0,0 +1,362 @@
# Competitor Ingest Anomaly Regression Tests (CM4)
Status: Draft · Date: 2025-12-04
Scope: Define anomaly regression test suite for ingest pipeline covering schema drift, nullables, encoding, and ordering anomalies.
## Objectives
- Detect schema drift in upstream tool outputs.
- Validate handling of nullable/missing fields.
- Ensure proper encoding handling (UTF-8, escaping).
- Verify deterministic ordering is maintained.
- Provide golden fixtures with expected hashes.
## Test Categories
### 1. Schema Drift Tests
Detect when upstream tools change their output schema.
```
tests/anomaly/schema-drift/
├── syft/
│ ├── v1.0.0-baseline.json # Known good output
│ ├── v1.5.0-new-fields.json # Added fields
│ ├── v1.5.0-removed-fields.json # Removed fields
│ ├── v1.5.0-type-change.json # Field type changed
│ └── expected-results.json
├── trivy/
│ └── ... (same structure)
└── clair/
└── ... (same structure)
```
#### Test Cases
| Test | Input | Expected Behavior |
|------|-------|-------------------|
| `new_optional_field` | Output with new field | Accept, ignore new field |
| `new_required_field` | Output with new required field | Warn, map if possible |
| `removed_optional_field` | Output missing optional field | Accept |
| `removed_required_field` | Output missing required field | Reject |
| `field_type_change` | Field type differs from schema | Reject or coerce |
| `field_rename` | Field renamed without mapping | Warn, check mapping |
#### Schema Drift Fixture
```json
{
"test": "new_optional_field",
"tool": "syft",
"inputVersion": "1.5.0",
"baselineVersion": "1.0.0",
"input": {
"artifacts": [
{
"name": "lib-a",
"version": "1.0.0",
"purl": "pkg:npm/lib-a@1.0.0",
"newField": "unexpected value"
}
]
},
"expected": {
"status": "accepted",
"warnings": ["unknown_field:newField"],
"normalizedHash": "b3:..."
}
}
```
### 2. Nullable/Missing Field Tests
Validate handling of null, empty, and missing values.
```
tests/anomaly/nullables/
├── null-values.json
├── empty-strings.json
├── empty-arrays.json
├── missing-optional.json
├── missing-required.json
└── expected-results.json
```
#### Test Cases
| Test | Input | Expected Behavior |
|------|-------|-------------------|
| `null_optional` | Optional field is null | Accept, omit from output |
| `null_required` | Required field is null | Reject |
| `empty_string` | String field is "" | Accept, preserve or omit |
| `empty_array` | Array field is [] | Accept, preserve |
| `missing_optional` | Optional field absent | Accept |
| `missing_required` | Required field absent | Reject |
#### Nullable Fixture
```json
{
"test": "null_optional",
"tool": "syft",
"input": {
"artifacts": [
{
"name": "lib-a",
"version": "1.0.0",
"purl": "pkg:npm/lib-a@1.0.0",
"licenses": null
}
]
},
"expected": {
"status": "accepted",
"output": {
"components": [
{
"name": "lib-a",
"version": "1.0.0",
"purl": "pkg:npm/lib-a@1.0.0"
}
]
},
"normalizedHash": "b3:..."
}
}
```
### 3. Encoding Tests
Validate proper handling of character encoding and escaping.
```
tests/anomaly/encoding/
├── utf8-valid.json
├── utf8-bom.json
├── latin1-fallback.json
├── unicode-escapes.json
├── special-chars.json
├── json-escaping.json
└── expected-results.json
```
#### Test Cases
| Test | Input | Expected Behavior |
|------|-------|-------------------|
| `utf8_valid` | Standard UTF-8 | Accept |
| `utf8_bom` | UTF-8 with BOM | Accept, strip BOM |
| `unicode_escapes` | `\u0041` style escapes | Accept, decode |
| `special_chars` | Tabs, newlines in strings | Accept, preserve or escape |
| `control_chars` | Control characters (0x00-0x1F) | Reject or sanitize |
| `surrogate_pairs` | Emoji and supplementary chars | Accept |
#### Encoding Fixture
```json
{
"test": "special_chars",
"tool": "syft",
"input": {
"artifacts": [
{
"name": "lib-with-tab\ttab",
"version": "1.0.0",
"description": "Line1\nLine2"
}
]
},
"expected": {
"status": "accepted",
"output": {
"components": [
{
"name": "lib-with-tab\ttab",
"version": "1.0.0"
}
]
},
"normalizedHash": "b3:..."
}
}
```
### 4. Ordering Tests
Verify deterministic ordering is maintained across inputs.
```
tests/anomaly/ordering/
├── unsorted-components.json
├── reversed-components.json
├── random-order.json
├── unicode-sort.json
├── case-sensitivity.json
└── expected-results.json
```
#### Test Cases
| Test | Input | Expected Behavior |
|------|-------|-------------------|
| `unsorted_input` | Components in random order | Sort deterministically |
| `reversed_input` | Components in reverse order | Sort deterministically |
| `same_after_sort` | Pre-sorted input | Same output as unsorted |
| `unicode_sort` | Unicode component names | Locale-invariant sort |
| `case_sensitivity` | Mixed case names | Case-insensitive sort |
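A comparer satisfying the `unicode_sort` and `case_sensitivity` cases might look like this sketch (types assumed):

```csharp
using System;
using System.Collections.Generic;

public sealed record ComponentRef(string Name, string Purl);

public sealed class ComponentNameComparer : IComparer<ComponentRef>
{
    public int Compare(ComponentRef? x, ComponentRef? y)
    {
        ArgumentNullException.ThrowIfNull(x);
        ArgumentNullException.ThrowIfNull(y);
        // OrdinalIgnoreCase is culture-independent, so results never vary by machine locale.
        var byName = string.Compare(x.Name, y.Name, StringComparison.OrdinalIgnoreCase);
        // Tie-break on purl with a case-sensitive ordinal compare for full determinism.
        return byName != 0 ? byName : string.CompareOrdinal(x.Purl, y.Purl);
    }
}
```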
#### Ordering Fixture
```json
{
"test": "unsorted_input",
"tool": "syft",
"input": {
"artifacts": [
{"name": "zebra", "version": "1.0.0", "purl": "pkg:npm/zebra@1.0.0"},
{"name": "apple", "version": "1.0.0", "purl": "pkg:npm/apple@1.0.0"},
{"name": "mango", "version": "1.0.0", "purl": "pkg:npm/mango@1.0.0"}
]
},
"expected": {
"status": "accepted",
"output": {
"components": [
{"name": "apple", "version": "1.0.0", "purl": "pkg:npm/apple@1.0.0"},
{"name": "mango", "version": "1.0.0", "purl": "pkg:npm/mango@1.0.0"},
{"name": "zebra", "version": "1.0.0", "purl": "pkg:npm/zebra@1.0.0"}
]
},
"normalizedHash": "b3:..."
}
}
```
## Golden Fixtures
### Hash File Format
```
# tests/anomaly/hashes.txt
schema-drift/syft/v1.0.0-baseline.json: BLAKE3=... SHA256=...
schema-drift/syft/expected-results.json: BLAKE3=... SHA256=...
nullables/null-values.json: BLAKE3=... SHA256=...
nullables/expected-results.json: BLAKE3=... SHA256=...
encoding/utf8-valid.json: BLAKE3=... SHA256=...
encoding/expected-results.json: BLAKE3=... SHA256=...
ordering/unsorted-components.json: BLAKE3=... SHA256=...
ordering/expected-results.json: BLAKE3=... SHA256=...
```
## CI Integration
### Test Workflow
```yaml
# .gitea/workflows/anomaly-tests.yml
name: Anomaly Regression Tests

on:
  push:
    paths:
      - 'src/Scanner/Adapters/**'
      - 'tests/anomaly/**'
  pull_request:

jobs:
  anomaly-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '10.0.x'
      - name: Verify fixture hashes
        run: scripts/scanner/verify-anomaly-fixtures.sh
      - name: Run schema drift tests
        run: |
          dotnet test src/Scanner/__Tests/StellaOps.Scanner.Anomaly.Tests \
            --filter "Category=SchemaDrift"
      - name: Run nullable tests
        run: |
          dotnet test src/Scanner/__Tests/StellaOps.Scanner.Anomaly.Tests \
            --filter "Category=Nullable"
      - name: Run encoding tests
        run: |
          dotnet test src/Scanner/__Tests/StellaOps.Scanner.Anomaly.Tests \
            --filter "Category=Encoding"
      - name: Run ordering tests
        run: |
          dotnet test src/Scanner/__Tests/StellaOps.Scanner.Anomaly.Tests \
            --filter "Category=Ordering"
```
### Test Runner
```csharp
// src/Scanner/__Tests/StellaOps.Scanner.Anomaly.Tests/AnomalyTestRunner.cs
[Trait("Category", "SchemaDrift")] // xUnit trait, so `--filter "Category=SchemaDrift"` matches
[Theory]
[MemberData(nameof(GetSchemaDriftTestCases))]
public async Task SchemaDrift_HandledCorrectly(AnomalyTestCase testCase)
{
    // Arrange
    var adapter = _adapterFactory.Create(testCase.Tool);

    // Act
    var result = await adapter.NormalizeAsync(testCase.Input);

    // Assert
    Assert.Equal(testCase.Expected.Status, result.Status);
    Assert.Equal(testCase.Expected.Warnings, result.Warnings);
    if (testCase.Expected.NormalizedHash != null)
    {
        // Blake3.HashData is assumed to come from the project's BLAKE3 helper/package.
        var hash = Blake3.HashData(Encoding.UTF8.GetBytes(
            JsonSerializer.Serialize(result.Output)));
        Assert.Equal(testCase.Expected.NormalizedHash,
            $"b3:{Convert.ToHexString(hash).ToLowerInvariant()}");
    }
}
```
## Failure Handling
### On Test Failure
1. **Schema Drift**: Create issue, update adapter mapping
2. **Nullable Handling**: Fix normalization logic
3. **Encoding Error**: Fix encoding detection/conversion
4. **Ordering Violation**: Fix sort comparator
### Failure Report
```json
{
"failure": {
"category": "schema_drift",
"test": "new_required_field",
"tool": "syft",
"input": {...},
"expected": {...},
"actual": {...},
"diff": [
{"path": "/status", "expected": "accepted", "actual": "rejected"}
],
"timestamp": "2025-12-04T12:00:00Z"
}
}
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (CM4)
- Normalization: `docs/modules/scanner/design/competitor-ingest-normalization.md` (CM1)
- Fixtures: `docs/modules/scanner/fixtures/competitor-adapters/`

View File

@@ -0,0 +1,324 @@
# Competitor Benchmark Parity Plan (CM7, CM8, CM9)
Status: Draft · Date: 2025-12-04
Scope: Define source transparency fields (CM7), benchmark parity requirements (CM8), and ecosystem coverage tracking (CM9).
## CM7: Source Transparency Fields
### Required Metadata Fields
| Field | Source | Storage Location | API Exposure |
|-------|--------|------------------|--------------|
| `source.tool` | Ingest input | `ingest_metadata.tool` | Yes |
| `source.version` | Ingest input | `ingest_metadata.tool_version` | Yes |
| `source.hash` | Computed | `ingest_metadata.tool_hash` | Yes |
| `adapter.version` | Adapter manifest | `ingest_metadata.adapter_version` | Yes |
| `normalized_hash` | Computed | `ingest_metadata.normalized_hash` | Yes |
| `import_timestamp` | System | `ingest_metadata.imported_at` | Yes |
### Metadata Schema
```json
{
"ingest_metadata": {
"source": {
"tool": "syft",
"version": "1.0.0",
"hash": "sha256:...",
"uri": "https://github.com/anchore/syft/releases/v1.0.0"
},
"adapter": {
"version": "1.0.0",
"mappingHash": "b3:..."
},
"normalized": {
"hash": "b3:aa42c167...",
"recordCount": 42,
"format": "stellaops-v1"
},
"import": {
"timestamp": "2025-12-04T12:00:00Z",
"user": "system",
"snapshotId": "syft-20251204T120000Z-001"
}
}
}
```
### API Exposure
```http
GET /api/v1/ingest/metadata/{snapshotId}
```
Response:
```json
{
"metadata": {
"snapshotId": "syft-20251204T120000Z-001",
"source": {
"tool": "syft",
"version": "1.0.0",
"hash": "sha256:..."
},
"adapter": {
"version": "1.0.0"
},
"normalized": {
"hash": "b3:...",
"recordCount": 42
}
}
}
```
## CM8: Benchmark Parity
### Pinned Tool Versions
| Tool | Pinned Version | Test Frequency | Baseline Date |
|------|----------------|----------------|---------------|
| Syft | 1.0.0 | Weekly | 2025-12-01 |
| Trivy | 0.50.0 | Weekly | 2025-12-01 |
| Clair | 6.0.0 | Weekly | 2025-12-01 |
### Benchmark Test Suite
```
tests/benchmark/
├── syft/
│ ├── inputs/
│ │ ├── alpine-3.19.json # Container image
│ │ ├── node-app.json # Node.js project
│ │ └── java-app.json # Java project
│ ├── expected/
│ │ ├── alpine-3.19-expected.json
│ │ ├── node-app-expected.json
│ │ └── java-app-expected.json
│ └── hashes.txt
├── trivy/
│ └── ... (same structure)
└── clair/
└── ... (same structure)
```
### Benchmark Workflow
```yaml
# .gitea/workflows/benchmark-parity.yml
name: Benchmark Parity Check

on:
  schedule:
    - cron: '0 0 * * 0'  # Weekly
  workflow_dispatch:

jobs:
  benchmark:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        tool: [syft, trivy, clair]
    steps:
      - uses: actions/checkout@v4
      - name: Run ${{ matrix.tool }} benchmark
        run: scripts/benchmark/run-benchmark.sh ${{ matrix.tool }}
      - name: Compare results
        run: scripts/benchmark/compare-results.sh ${{ matrix.tool }}
      - name: Upload logs
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-${{ matrix.tool }}
          path: benchmark-results/
```
### Benchmark Comparison
```bash
#!/bin/bash
# scripts/benchmark/compare-results.sh
set -euo pipefail

TOOL="$1"
BENCHMARK_DIR="tests/benchmark/${TOOL}"

for input in "${BENCHMARK_DIR}/inputs/"*.json; do
  name=$(basename "${input}" .json)
  expected="${BENCHMARK_DIR}/expected/${name}-expected.json"
  actual="benchmark-results/${name}-actual.json"
  # Run tool
  stellaops ingest normalize \
    --tool "${TOOL}" \
    --input "${input}" \
    --output "${actual}"
  # Compare (key-sorted so ordering noise never masks real drift)
  if ! diff <(jq -S . "${expected}") <(jq -S . "${actual}"); then
    echo "DRIFT: ${name}"
    exit 1
  fi
  echo "PASS: ${name}"
done
```
### Drift Detection
When benchmark drift is detected:
1. Log drift details with hash comparison
2. Create issue in tracking system
3. Notify Scanner Guild
4. Block release if critical drift
```json
{
"drift": {
"tool": "syft",
"version": "1.0.0",
"testCase": "alpine-3.19",
"detected": "2025-12-04T00:00:00Z",
"details": {
"expectedHash": "b3:expected...",
"actualHash": "b3:actual...",
"diffCount": 3,
"fields": [
"/components/5/version",
"/components/12/licenses",
"/vulnerabilities/2/ratings/0/score"
]
}
}
}
```
## CM9: Coverage Tracking
### Coverage Matrix
Location: `docs/modules/scanner/fixtures/competitor-adapters/coverage.csv`
```csv
ecosystem,syft,trivy,clair,notes
container,yes,yes,yes,All tools support OCI images
java,yes,yes,no,Clair Java support pending
python,yes,yes,no,Trivy has best pip/poetry coverage
dotnet,no,yes,no,Trivy only; Syft support pending
go,yes,yes,no,Both tools have good go.mod support
rust,yes,yes,no,Cargo.lock parsing
ruby,yes,yes,no,Gemfile.lock parsing
php,yes,yes,no,composer.lock parsing
os-pkgs,yes,yes,yes,APK/DEB/RPM supported
node,yes,yes,no,package-lock.json/yarn.lock
```
### Coverage API
```http
GET /api/v1/ingest/coverage
```
Response:
```json
{
"coverage": {
"lastUpdated": "2025-12-04T00:00:00Z",
"ecosystems": {
"container": {
"syft": {"supported": true, "tested": true},
"trivy": {"supported": true, "tested": true},
"clair": {"supported": true, "tested": true}
},
"java": {
"syft": {"supported": true, "tested": true},
"trivy": {"supported": true, "tested": true},
"clair": {"supported": false, "tested": false, "planned": "2026-Q1"}
}
},
"gaps": [
{"ecosystem": "dotnet", "tool": "syft", "priority": "high"},
{"ecosystem": "java", "tool": "clair", "priority": "medium"}
]
}
}
```
### Gap Tracking
```json
{
"gaps": [
{
"id": "gap-001",
"ecosystem": "dotnet",
"tool": "syft",
"priority": "high",
"reason": "Customer demand for .NET scanning",
"status": "planned",
"targetDate": "2025-Q2",
"blockers": ["Upstream syft issue #1234"]
}
]
}
```
### Coverage CI Check
```yaml
# Check coverage doesn't regress
- name: Verify coverage matrix
  run: |
    # Fail only when a "yes" flipped to "no" without documentation
    if git diff HEAD~1 -- docs/modules/scanner/fixtures/competitor-adapters/coverage.csv \
        | grep -E '^-.*yes'; then
      echo "Coverage regression detected"
      exit 1
    fi
```
## Reporting
### Weekly Coverage Report
```json
{
"report": {
"period": "2025-W49",
"coverage": {
"total_ecosystems": 10,
"full_coverage": 3,
"partial_coverage": 5,
"no_coverage": 2
},
"benchmark": {
"tests_run": 45,
"tests_passed": 44,
"tests_failed": 1,
"drift_detected": ["trivy/alpine-3.19"]
},
"metadata": {
"snapshots_imported": 156,
"tools_seen": ["syft", "trivy", "clair"],
"versions_seen": {
"syft": ["1.0.0", "1.0.1"],
"trivy": ["0.50.0"],
"clair": ["6.0.0"]
}
}
}
}
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (CM7, CM8, CM9)
- Normalization: `docs/modules/scanner/design/competitor-ingest-normalization.md` (CM1)
- Coverage CSV: `docs/modules/scanner/fixtures/competitor-adapters/coverage.csv`


@@ -0,0 +1,296 @@
# Competitor Ingest DB Snapshot Governance (CM3)
Status: Draft · Date: 2025-12-04
Scope: Enforce database snapshot governance including versioning, freshness SLA, and rollback procedures for imported external feeds.
## Objectives
- Define versioning scheme for imported snapshots.
- Establish freshness SLA for external data.
- Enable deterministic rollback to previous snapshots.
- Support audit trail for all snapshot operations.
## Snapshot Versioning
### Version Scheme
```
{tool}-{timestamp}-{sequence}
Examples:
- syft-20251204T000000Z-001
- trivy-20251204T120000Z-001
- clair-20251204T060000Z-002
```
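A sketch of a generator for this scheme (method name assumed):

```csharp
using System;
using System.Globalization;

public static class SnapshotIds
{
    // e.g. Create("syft", importedAt, 1) => "syft-20251204T000000Z-001"
    public static string Create(string tool, DateTimeOffset importedAt, int sequence) =>
        string.Create(CultureInfo.InvariantCulture,
            $"{tool}-{importedAt.UtcDateTime:yyyyMMdd'T'HHmmss'Z'}-{sequence:000}");
}
```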
### Snapshot Record
```json
{
"id": "syft-20251204T000000Z-001",
"tool": "syft",
"toolVersion": "1.0.0",
"importedAt": "2025-12-04T00:00:00Z",
"sourceHash": "b3:...",
"normalizedHash": "b3:...",
"recordCount": 1234,
"state": "active",
"previousSnapshot": "syft-20251203T000000Z-001",
"metadata": {
"sourceUri": "https://example.com/sbom.json",
"importUser": "system",
"importReason": "scheduled_sync"
}
}
```
## Freshness SLA
### Thresholds by Tool
| Tool | Max Age | Stale Threshold | Critical Threshold |
|------|---------|-----------------|-------------------|
| Syft | 7 days | 14 days | 30 days |
| Trivy | 7 days | 14 days | 30 days |
| Clair | 7 days | 14 days | 30 days |
| Custom | Configurable | Configurable | Configurable |
### Freshness States
| State | Condition | Action |
|-------|-----------|--------|
| `fresh` | age < max_age | Normal operation |
| `stale` | max_age <= age < critical | Emit warning |
| `critical` | age >= critical | Block queries without override |
| `expired` | Manual expiry | Data unavailable |
### SLA Monitoring
```json
{
"sla": {
"tool": "syft",
"snapshotId": "syft-20251204T000000Z-001",
"importedAt": "2025-12-04T00:00:00Z",
"age": "P2D",
"state": "fresh",
"nextCheck": "2025-12-05T00:00:00Z",
"thresholds": {
"maxAge": "P7D",
"stale": "P14D",
"critical": "P30D"
}
}
}
```
## Rollback Procedures
### Rollback Triggers
| Trigger | Auto/Manual | Action |
|---------|-------------|--------|
| Import failure | Auto | Rollback to previous |
| Validation failure | Auto | Rollback to previous |
| Data corruption | Manual | Rollback to specified |
| Compliance requirement | Manual | Rollback to specified |
| User request | Manual | Rollback to specified |
### Rollback Workflow
```
┌─────────────┐
│  Initiate   │
│  Rollback   │
└─────────────┘
       │
       ▼
┌─────────────┐
│   Verify    │──Fail──► Abort
│   Target    │
└─────────────┘
       │ Pass
       ▼
┌─────────────┐
│   Create    │
│  Savepoint  │
└─────────────┘
       │
       ▼
┌─────────────┐
│   Restore   │──Fail──► Restore Savepoint
│  Snapshot   │
└─────────────┘
       │ Pass
       ▼
┌─────────────┐
│   Verify    │──Fail──► Restore Savepoint
│   Restore   │
└─────────────┘
       │ Pass
       ▼
┌─────────────┐
│   Commit    │
│   Change    │
└─────────────┘
       │
       ▼
┌─────────────┐
│   Update    │
│   Active    │
└─────────────┘
```
### Rollback Command
```bash
# Rollback to previous snapshot
stellaops ingest rollback --tool syft
# Rollback to specific snapshot
stellaops ingest rollback --tool syft --snapshot-id syft-20251201T000000Z-001
# Dry run
stellaops ingest rollback --tool syft --dry-run
# Force rollback (skip confirmations)
stellaops ingest rollback --tool syft --force
```
### Rollback Response
```json
{
"rollback": {
"status": "completed",
"tool": "syft",
"from": {
"snapshotId": "syft-20251204T000000Z-001",
"recordCount": 1234
},
"to": {
"snapshotId": "syft-20251203T000000Z-001",
"recordCount": 1200
},
"executedAt": "2025-12-04T12:00:00Z",
"executedBy": "admin@example.com",
"reason": "Data corruption detected"
}
}
```
## Retention Policy
### Snapshot Retention
| Category | Retention | Cleanup |
|----------|-----------|---------|
| Active | Indefinite | Never |
| Previous (N-1) | 30 days | Auto |
| Archived | 90 days | Auto |
| Audit | 1 year | Manual |
### Cleanup Schedule
```json
{
"retention": {
"schedule": "0 0 * * *",
"rules": [
{
"category": "previous",
"maxAge": "P30D",
"action": "archive"
},
{
"category": "archived",
"maxAge": "P90D",
"action": "delete"
}
],
"exceptions": [
{
"snapshotId": "syft-20251101T000000Z-001",
"reason": "Audit hold",
"expiresAt": "2026-12-01T00:00:00Z"
}
]
}
}
```
## Audit Trail
### Audit Events
| Event | Fields | Retention |
|-------|--------|-----------|
| `snapshot_imported` | id, tool, hash, user, timestamp | 1 year |
| `snapshot_activated` | id, previous_id, user, timestamp | 1 year |
| `snapshot_rolled_back` | from_id, to_id, reason, user | 1 year |
| `snapshot_expired` | id, reason, user, timestamp | 1 year |
| `snapshot_deleted` | id, reason, user, timestamp | 1 year |
### Audit Record Format
```json
{
"audit": {
"id": "audit-12345",
"event": "snapshot_rolled_back",
"timestamp": "2025-12-04T12:00:00Z",
"user": "admin@example.com",
"details": {
"fromSnapshot": "syft-20251204T000000Z-001",
"toSnapshot": "syft-20251203T000000Z-001",
"reason": "Data corruption detected",
"recordsAffected": 34
},
"hash": "b3:..."
}
}
```
## API Endpoints
### List Snapshots
```http
GET /api/v1/ingest/snapshots?tool=syft&state=active
```
### Get Snapshot Details
```http
GET /api/v1/ingest/snapshots/{snapshotId}
```
### Initiate Rollback
```http
POST /api/v1/ingest/snapshots/{snapshotId}/rollback
Content-Type: application/json
{
"reason": "Data corruption detected",
"dryRun": false
}
```
### Check SLA Status
```http
GET /api/v1/ingest/sla?tool=syft
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (CM3)
- Normalization: `docs/modules/scanner/design/competitor-ingest-normalization.md` (CM1)
- Feed Thresholds: `docs/modules/policy/contracts/feed-snapshot-thresholds.md` (SP6)


@@ -0,0 +1,322 @@
# Competitor Ingest Error Taxonomy (CM10)
Status: Draft · Date: 2025-12-04
Scope: Standardize retry/backoff/error taxonomy for the competitor ingest pipeline with deterministic diagnostics.
## Objectives
- Define comprehensive error taxonomy for ingest failures.
- Specify retry behavior for transient errors.
- Enable deterministic error diagnostics.
- Support offline error handling.
## Error Categories
### Retryable Errors
| Code | Category | Description | Max Retries | Backoff |
|------|----------|-------------|-------------|---------|
| `E1001` | `network_error` | Network connectivity failure | 3 | Exponential |
| `E1002` | `network_timeout` | Connection/read timeout | 3 | Exponential |
| `E1003` | `rate_limit` | Rate limit exceeded (429) | 5 | Linear |
| `E1004` | `service_unavailable` | Service temporarily unavailable (503) | 3 | Exponential |
| `E1005` | `transient_io` | Temporary I/O error | 2 | Fixed |
| `E1006` | `lock_contention` | Resource lock conflict | 3 | Exponential |
### Non-Retryable Errors
| Code | Category | Description | Action |
|------|----------|-------------|--------|
| `E2001` | `signature_invalid` | Signature verification failed | Reject |
| `E2002` | `signature_expired` | Signature validity exceeded | Reject |
| `E2003` | `key_unknown` | Signing key not found | Reject |
| `E2004` | `key_expired` | Signing key validity exceeded | Reject |
| `E2005` | `key_revoked` | Signing key revoked | Reject |
| `E2006` | `alg_unsupported` | Unsupported algorithm | Reject |
| `E2007` | `hash_mismatch` | Content hash mismatch | Reject |
| `E2008` | `schema_invalid` | Input doesn't match schema | Reject |
| `E2009` | `version_unsupported` | Tool version not supported | Reject |
| `E2010` | `no_evidence` | No acceptable evidence found | Reject |
### Warning Conditions
| Code | Category | Description | Action |
|------|----------|-------------|--------|
| `W3001` | `provenance_unknown` | Provenance not verifiable | Accept with warning |
| `W3002` | `degraded_confidence` | Low confidence result | Accept with warning |
| `W3003` | `stale_data` | Data exceeds freshness threshold | Accept with warning |
| `W3004` | `partial_mapping` | Some fields couldn't be mapped | Accept with warning |
| `W3005` | `deprecated_format` | Using deprecated format | Accept with warning |
## Error Format
### Standard Error Response
```json
{
"error": {
"code": "E2001",
"category": "signature_invalid",
"message": "DSSE signature verification failed",
"details": {
"reason": "Key ID not found in trusted keyring",
"keyId": "unknown-key-12345",
"algorithm": "ecdsa-p256"
},
"retryable": false,
"diagnostics": {
"timestamp": "2025-12-04T12:00:00Z",
"traceId": "abc123...",
"inputHash": "b3:...",
"stage": "signature_verification"
}
}
}
```
### Warning Response
```json
{
"result": {
"status": "accepted_with_warnings",
"warnings": [
{
"code": "W3001",
"category": "provenance_unknown",
"message": "SBOM accepted without verified provenance",
"details": {
"reason": "No signature present",
"fallbackLevel": 2
}
}
],
"output": {...}
}
}
```
## Retry Configuration
### Backoff Strategies
```json
{
"backoff": {
"exponential": {
"description": "2^attempt * base, capped at max",
"config": {
"base": 1000,
"factor": 2,
"max": 60000,
"jitter": 0.1
},
"example": [1000, 2000, 4000, 8000, 16000, 32000, 60000]
},
"linear": {
"description": "initial + (attempt * increment), capped at max",
"config": {
"initial": 1000,
"increment": 1000,
"max": 30000
},
"example": [1000, 2000, 3000, 4000, 5000]
},
"fixed": {
"description": "Constant delay between retries",
"config": {
"delay": 5000
},
"example": [5000, 5000, 5000]
}
}
}
```
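A minimal sketch of the backoff math above, assuming `RETRY_CONFIG` entries are objects whose `backoff` attribute names the curve (matching the decision sketch that follows). Jitter is omitted so the schedule stays deterministic; if added, it should be seeded from the input hash, not the wall clock:
```python
BACKOFF_PARAMS = {
    "exponential": {"base": 1000, "factor": 2, "max": 60000},
    "linear": {"initial": 1000, "increment": 1000, "max": 30000},
    "fixed": {"delay": 5000},
}

def calculate_backoff(config, attempt: int) -> int:
    """Delay in ms before retry number `attempt` (0-based)."""
    p = BACKOFF_PARAMS[config.backoff]
    if config.backoff == "exponential":
        return min(p["base"] * p["factor"] ** attempt, p["max"])
    if config.backoff == "linear":
        return min(p["initial"] + attempt * p["increment"], p["max"])
    return p["delay"]  # fixed

# Reproduces the example sequences above, e.g. exponential:
# [1000, 2000, 4000, 8000, 16000, 32000, 60000] for attempts 0..6.
```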
### Retry Decision Logic
```python
from dataclasses import dataclass

@dataclass
class RetryDecision:
    retry: bool
    delay: int = 0
    reason: str = ""

def should_retry(error, attempt: int) -> RetryDecision:
    # All E2xxx codes are non-retryable by definition (see the tables above).
    if error.code.startswith('E2'):
        return RetryDecision(retry=False, reason="Non-retryable error")
    # RETRY_CONFIG maps error category -> per-category limits from the retry policy.
    config = RETRY_CONFIG.get(error.category)
    if not config:
        return RetryDecision(retry=False, reason="Unknown error category")
    if attempt >= config.max_retries:
        return RetryDecision(retry=False, reason="Max retries exceeded")
    delay = calculate_backoff(config, attempt)
    return RetryDecision(retry=True, delay=delay)
```
## Diagnostic Output
### Deterministic Diagnostics
All error diagnostics must be deterministic and reproducible:
```json
{
"diagnostics": {
"timestamp": "2025-12-04T12:00:00Z",
"traceId": "deterministic-trace-id",
"inputHash": "b3:...",
"stage": "normalization",
"context": {
"tool": "syft",
"toolVersion": "1.0.0",
"adapterVersion": "1.0.0",
"inputSize": 12345,
"componentCount": 42
},
"errorChain": [
{
"stage": "schema_validation",
"error": "Missing required field: artifacts[0].purl",
"path": "/artifacts/0"
}
]
}
}
```
### Offline Diagnostics
For offline mode, diagnostics include:
- No timestamps that depend on wall clock
- Deterministic trace IDs (based on input hash)
- All context from bundled metadata
```json
{
"diagnostics": {
"mode": "offline",
"kitVersion": "1.0.0",
"traceId": "b3:input-hash-derived-trace-id",
"kitHash": "b3:...",
"trustRootHash": "b3:..."
}
}
```
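A sketch of the hash-derived trace ID. The `b3:` prefix convention comes from the examples above; the exact derivation is an assumption, and the third-party `blake3` package stands in for the real hasher:
```python
from blake3 import blake3  # third-party `blake3` package (assumption)

def offline_trace_id(input_bytes: bytes, stage: str) -> str:
    # Same input + stage always yields the same ID: no wall clock, no RNG.
    digest = blake3(input_bytes + b"\x00" + stage.encode("utf-8")).hexdigest()
    return f"b3:{digest}"
```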
## Error Handling Workflow
```
┌─────────────┐
│   Ingest    │
│   Request   │
└──────┬──────┘
       ▼
┌─────────────┐
│  Validate   │──Error──► E2008 schema_invalid
└──────┬──────┘
       ▼
┌─────────────┐
│   Verify    │──Error──► E2001-E2007 signature errors
│  Signature  │
└──────┬──────┘
       ▼
┌─────────────┐
│  Normalize  │──Error──► E2008-E2009 format errors
└──────┬──────┘
       ▼
┌─────────────┐
│    Store    │──Error──► E1001-E1006 retryable errors
└──────┬──────┘
       ▼
┌─────────────┐
│   Success   │
│   or Warn   │
└─────────────┘
```
## API Error Responses
### HTTP Status Mapping
| Error Category | HTTP Status | Response Body |
|----------------|-------------|---------------|
| Retryable (E1xxx) | 503 | Error with Retry-After header |
| Rate limit (E1003) | 429 | Error with Retry-After header |
| Signature (E2001-E2007) | 400 | Error with details |
| Schema (E2008) | 400 | Error with validation details |
| Version (E2009) | 400 | Error with supported versions |
| No evidence (E2010) | 400 | Error with fallback options |
### Example Error Response
```http
HTTP/1.1 400 Bad Request
Content-Type: application/json
X-Stellaops-Error-Code: E2001
X-Stellaops-Trace-Id: abc123...
```
### Retry Response
```http
HTTP/1.1 503 Service Unavailable
Content-Type: application/json
Retry-After: 5
X-Stellaops-Error-Code: E1004
X-Stellaops-Retry-Attempt: 1
```
## Logging
### Error Log Format
```json
{
"level": "error",
"timestamp": "2025-12-04T12:00:00Z",
"logger": "stellaops.scanner.ingest",
"message": "Ingest failed: signature_invalid",
"error": {
"code": "E2001",
"category": "signature_invalid"
},
"context": {
"traceId": "abc123...",
"tool": "syft",
"inputHash": "b3:..."
}
}
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (CM10)
- Normalization: `docs/modules/scanner/design/competitor-ingest-normalization.md` (CM1)
- Verification: `docs/modules/scanner/design/competitor-signature-verification.md` (CM2)
- Offline Kit: `docs/modules/scanner/design/competitor-offline-ingest-kit.md` (CM5)
"timestamp": "2025-12-04T12:00:00Z",
"logger": "stellaops.scanner.ingest",
"message": "Ingest failed: signature_invalid",
"error": {
"code": "E2001",
"category": "signature_invalid"
},
"context": {
"traceId": "abc123...",
"tool": "syft",
"inputHash": "b3:..."
}
}
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (CM10)
- Normalization: `docs/modules/scanner/design/competitor-ingest-normalization.md` (CM1)
- Verification: `docs/modules/scanner/design/competitor-signature-verification.md` (CM2)
- Offline Kit: `docs/modules/scanner/design/competitor-offline-ingest-kit.md` (CM5)

View File

@@ -0,0 +1,339 @@
# Competitor Ingest Fallback Hierarchy (CM6)
Status: Draft · Date: 2025-12-04
Scope: Establish fallback hierarchy when external SBOM/scan data is incomplete, with explicit decision traces.
## Objectives
- Define clear fallback levels for incomplete data.
- Ensure transparent decision tracking.
- Enable policy-based confidence scoring.
- Support offline fallback evaluation.
## Fallback Levels
### Hierarchy Definition
```
Level 1: Signed SBOM with valid provenance
└──► Level 2: Unsigned SBOM with tool metadata
└──► Level 3: Scan-only results
└──► Level 4: Reject (no evidence)
```
### Level Details
| Level | Source | Confidence | Requirements | Warnings |
|-------|--------|------------|--------------|----------|
| 1 | Signed SBOM | 1.0 | Valid signature, valid provenance | None |
| 2 | Unsigned SBOM | 0.7 | Tool metadata, component purl, scan timestamp | `provenance_unknown` |
| 3 | Scan-only | 0.5 | Scan timestamp | `degraded_confidence`, `no_sbom` |
| 4 | Reject | 0.0 | None met | - |
### Level 1: Signed SBOM
Requirements:
- DSSE/COSE/JWS signature present
- Signature verification passes
- Signer key in trusted keyring
- Provenance metadata valid
```json
{
"fallback": {
"level": 1,
"source": "signed_sbom",
"confidence": 1.0,
"decision": {
"reason": "Valid signature and provenance",
"checks": {
"signaturePresent": true,
"signatureValid": true,
"keyTrusted": true,
"provenanceValid": true
}
}
}
}
```
### Level 2: Unsigned SBOM
Requirements (all must be present):
- Tool name and version
- Component list with PURLs
- At least one SHA-256 hash per component
- Scan timestamp
```json
{
"fallback": {
"level": 2,
"source": "unsigned_sbom",
"confidence": 0.7,
"decision": {
"reason": "Valid SBOM without signature",
"checks": {
"signaturePresent": false,
"toolMetadata": true,
"componentPurls": true,
"componentHashes": true,
"scanTimestamp": true
},
"warnings": ["provenance_unknown"]
}
}
}
```
### Level 3: Scan-only
Requirements:
- Scan timestamp present
- At least one finding or component
```json
{
"fallback": {
"level": 3,
"source": "scan_only",
"confidence": 0.5,
"decision": {
"reason": "Scan results without SBOM",
"checks": {
"signaturePresent": false,
"toolMetadata": false,
"scanTimestamp": true,
"hasFindings": true
},
"warnings": ["degraded_confidence", "no_sbom"]
}
}
}
```
### Level 4: Reject
When no requirements met:
```json
{
"fallback": {
"level": 4,
"source": "reject",
"confidence": 0.0,
"decision": {
"reason": "No acceptable evidence found",
"checks": {
"signaturePresent": false,
"toolMetadata": false,
"scanTimestamp": false,
"hasFindings": false
},
"action": "reject",
"errorCode": "E2010"
}
}
}
```
## Decision Evaluation
### Evaluation Algorithm
```python
def evaluate_fallback(input_data: dict) -> FallbackDecision:
checks = {
"signaturePresent": has_signature(input_data),
"signatureValid": False,
"keyTrusted": False,
"provenanceValid": False,
"toolMetadata": has_tool_metadata(input_data),
"componentPurls": has_component_purls(input_data),
"componentHashes": has_component_hashes(input_data),
"scanTimestamp": has_scan_timestamp(input_data),
"hasFindings": has_findings(input_data)
}
# Level 1 check
if checks["signaturePresent"]:
sig_result = verify_signature(input_data)
checks["signatureValid"] = sig_result.valid
checks["keyTrusted"] = sig_result.key_trusted
checks["provenanceValid"] = verify_provenance(input_data)
if all([checks["signatureValid"], checks["keyTrusted"], checks["provenanceValid"]]):
return FallbackDecision(level=1, confidence=1.0, checks=checks)
# Level 2 check
if all([checks["toolMetadata"], checks["componentPurls"],
checks["componentHashes"], checks["scanTimestamp"]]):
return FallbackDecision(
level=2, confidence=0.7, checks=checks,
warnings=["provenance_unknown"]
)
# Level 3 check
if checks["scanTimestamp"] and checks["hasFindings"]:
return FallbackDecision(
level=3, confidence=0.5, checks=checks,
warnings=["degraded_confidence", "no_sbom"]
)
# Level 4: Reject
return FallbackDecision(
level=4, confidence=0.0, checks=checks,
action="reject", error_code="E2010"
)
```
## Decision Trace
### Trace Format
```json
{
"trace": {
"id": "trace-12345",
"timestamp": "2025-12-04T12:00:00Z",
"input": {
"hash": "b3:...",
"size": 12345,
"format": "cyclonedx-1.6"
},
"evaluation": {
"steps": [
{
"check": "signaturePresent",
"result": false,
"details": "No DSSE/COSE/JWS envelope found"
},
{
"check": "toolMetadata",
"result": true,
"details": "Found tool: syft v1.0.0"
},
{
"check": "componentPurls",
"result": true,
"details": "42 components with valid PURLs"
},
{
"check": "componentHashes",
"result": true,
"details": "42 components with SHA-256 hashes"
},
{
"check": "scanTimestamp",
"result": true,
"details": "Timestamp: 2025-12-04T00:00:00Z"
}
],
"decision": {
"level": 2,
"confidence": 0.7,
"warnings": ["provenance_unknown"]
}
}
}
}
```
### Trace Persistence
Decision traces are:
- Stored with normalized output
- Included in API responses
- Available for audit queries
- Deterministic (same input = same trace)
## Policy Integration
### Confidence Thresholds
```json
{
"policy": {
"minConfidence": {
"production": 0.8,
"staging": 0.5,
"development": 0.0
},
"allowedLevels": {
"production": [1],
"staging": [1, 2],
"development": [1, 2, 3]
}
}
}
```
### Policy Evaluation
```rego
# policy/ingest/fallback.rego
package ingest.fallback

import rego.v1

default allow := false

allow if {
  input.fallback.level in allowed_levels
  input.fallback.confidence >= min_confidence
}

allowed_levels := data.policy.allowedLevels[input.environment]

min_confidence := data.policy.minConfidence[input.environment]

deny contains msg if {
  not input.fallback.level in allowed_levels
  msg := sprintf("Fallback level %d not allowed in %s", [input.fallback.level, input.environment])
}

warn contains msg if {
  some warning in input.fallback.decision.warnings
  msg := sprintf("Fallback warning: %s", [warning])
}
```
## Override Mechanism
### Manual Override
```bash
# Accept unsigned SBOM in production (requires approval)
stellaops ingest import \
--input external-sbom.json \
--allow-unsigned \
--override-reason "Emergency import per ticket INC-12345" \
--override-approver security-admin@example.com
```
### Override Record
```json
{
"override": {
"enabled": true,
"level": 2,
"originalDecision": {
"level": 4,
"reason": "Would normally reject"
},
"overrideReason": "Emergency import per ticket INC-12345",
"approver": "security-admin@example.com",
"approvedAt": "2025-12-04T12:00:00Z",
"expiresAt": "2025-12-05T12:00:00Z"
}
}
```
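A sketch of how a consumer might gate on an override record before honouring it (field names from the example above; the clock is passed in as a parameter so offline replay stays deterministic):
```python
from datetime import datetime, timezone

def override_is_valid(record: dict, now: datetime) -> bool:
    """Honour an override only if it is enabled, approved, and not expired."""
    ov = record.get("override", {})
    if not (ov.get("enabled") and ov.get("approver") and ov.get("overrideReason")):
        return False
    expires = datetime.fromisoformat(ov["expiresAt"].replace("Z", "+00:00"))
    return now <= expires

now = datetime(2025, 12, 4, 13, 0, tzinfo=timezone.utc)
record = {"override": {"enabled": True, "approver": "security-admin@example.com",
                       "overrideReason": "Emergency import per ticket INC-12345",
                       "expiresAt": "2025-12-05T12:00:00Z"}}
print(override_is_valid(record, now))  # True
```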
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (CM6)
- Verification: `docs/modules/scanner/design/competitor-signature-verification.md` (CM2)
- Normalization: `docs/modules/scanner/design/competitor-ingest-normalization.md` (CM1)

View File

@@ -0,0 +1,353 @@
# Competitor Offline Ingest Kit (CM5)
Status: Draft · Date: 2025-12-04
Scope: Define offline ingest kit contents including DSSE-signed adapters, mappings, fixtures, and trust roots for air-gapped environments.
## Objectives
- Bundle all competitor ingest artifacts for offline use.
- Sign kit with DSSE for integrity verification.
- Include trust roots for signature verification.
- Enable complete ingest workflow without network access.
## Bundle Structure
```
out/offline/competitor-ingest-kit-v1/
├── manifest.json # Bundle manifest with all hashes
├── manifest.dsse # DSSE signature over manifest
├── adapters/
│ ├── syft/
│ │ ├── mapping.csv # Field mappings
│ │ ├── version-map.json # Version compatibility
│ │ └── schema.json # Expected input schema
│ ├── trivy/
│ │ ├── mapping.csv
│ │ ├── version-map.json
│ │ └── schema.json
│ └── clair/
│ ├── mapping.csv
│ ├── version-map.json
│ └── schema.json
├── fixtures/
│ ├── normalized-syft.json
│ ├── normalized-trivy.json
│ ├── normalized-clair.json
│ └── hashes.txt
├── trust/
│ ├── root-ca.pem
│ ├── keyring.json
│ ├── revocation.json
│ └── cosign-keys/
│ ├── syft-release.pub
│ ├── trivy-release.pub
│ └── clair-release.pub
├── coverage/
│ └── coverage.csv
├── policies/
│ ├── signature-policy.json
│ ├── fallback-policy.json
│ └── retry-policy.json
└── tools/
├── versions.json
└── checksums.txt
```
## Manifest Format
```json
{
"version": "1.0.0",
"created": "2025-12-04T00:00:00Z",
"creator": "stellaops-scanner",
"type": "competitor-ingest-kit",
"artifacts": [
{
"path": "adapters/syft/mapping.csv",
"type": "adapter",
"tool": "syft",
"blake3": "...",
"sha256": "..."
},
{
"path": "fixtures/normalized-syft.json",
"type": "fixture",
"tool": "syft",
"blake3": "aa42c167d19535709a10df73dc39e6a50b8efbbb0ae596d17183ce62676fa85a",
"sha256": "3f8684ff341808dcb92e97dd2c10acca727baaff05182e81a4364bb3dad0eaa7"
},
{
"path": "trust/keyring.json",
"type": "trust",
"blake3": "...",
"sha256": "..."
}
],
"supportedTools": {
"syft": {
"minVersion": "1.0.0",
"maxVersion": "1.99.99",
"tested": ["1.0.0", "1.5.0", "1.10.0"]
},
"trivy": {
"minVersion": "0.50.0",
"maxVersion": "0.59.99",
"tested": ["0.50.0", "0.55.0"]
},
"clair": {
"minVersion": "6.0.0",
"maxVersion": "6.99.99",
"tested": ["6.0.0", "6.1.0"]
}
},
"manifestHash": {
"blake3": "...",
"sha256": "..."
}
}
```
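For illustration, a sketch of how the manifest generation step could be implemented (sha256 from the standard library; BLAKE3 via the third-party `blake3` package; the per-artifact `type`/`tool` classification is omitted for brevity):
```python
import hashlib
import json
from pathlib import Path

from blake3 import blake3  # third-party package (assumption)

def build_manifest(kit_dir: Path) -> dict:
    artifacts = []
    for path in sorted(kit_dir.rglob("*")):  # sorted walk => deterministic output
        if not path.is_file() or path.name.startswith("manifest."):
            continue  # the manifest and its DSSE envelope are not self-referenced
        data = path.read_bytes()
        artifacts.append({
            "path": path.relative_to(kit_dir).as_posix(),
            "blake3": blake3(data).hexdigest(),
            "sha256": hashlib.sha256(data).hexdigest(),
        })
    return {
        "version": "1.0.0",
        "type": "competitor-ingest-kit",
        "artifacts": artifacts,
    }

manifest = build_manifest(Path("out/offline/competitor-ingest-kit-v1"))
print(json.dumps(manifest, indent=2, sort_keys=True))
```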
## Adapter Contents
### Mapping CSV Format
```csv
source_field,target_field,rule,required,notes
artifacts[].name,components[].name,copy,yes,Component name
artifacts[].version,components[].version,copy,yes,Component version
artifacts[].purl,components[].purl,copy,yes,Package URL
artifacts[].type,components[].type,map:package->library,yes,Type mapping
artifacts[].licenses[],components[].licenses[],flatten,no,License list
artifacts[].metadata.digest,components[].hashes[],transform:sha256,no,Hash extraction
```
### Version Map Format
```json
{
"tool": "syft",
"versionRanges": [
{
"range": ">=1.0.0 <1.5.0",
"schemaVersion": "v1",
"mappingFile": "mapping-v1.csv"
},
{
"range": ">=1.5.0 <2.0.0",
"schemaVersion": "v2",
"mappingFile": "mapping-v2.csv",
"breaking": ["artifacts.metadata renamed to artifacts.meta"]
}
]
}
```
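A sketch of resolving the mapping file from a tool version using the ranges above (naive dotted-number comparison; only the `>=X <Y` range form shown in the example is handled, and an unmatched version maps to `E2009 version_unsupported`):
```python
def _ver(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))

def _in_range(version: str, range_expr: str) -> bool:
    """Handle only the '>=X <Y' clauses used in version-map.json."""
    v = _ver(version)
    for clause in range_expr.split():
        if clause.startswith(">="):
            if v < _ver(clause[2:]):
                return False
        elif clause.startswith("<"):
            if v >= _ver(clause[1:]):
                return False
    return True

def mapping_for(version_map: dict, tool_version: str) -> str:
    for entry in version_map["versionRanges"]:
        if _in_range(tool_version, entry["range"]):
            return entry["mappingFile"]
    raise ValueError(f"E2009: unsupported {version_map['tool']} version {tool_version}")
```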
## Policy Files
### Signature Policy (CM2)
```json
{
"policy": {
"requireSignature": true,
"allowUnsigned": false,
"acceptedFormats": ["dsse", "cose", "jws"],
"acceptedAlgorithms": ["ed25519", "ecdsa-p256", "rsa-2048"],
"trustedIssuers": [
"https://github.com/anchore/syft",
"https://github.com/aquasecurity/trivy",
"https://github.com/quay/clair"
],
"rejectReasons": [
"sig_invalid",
"sig_expired",
"key_unknown",
"key_expired",
"key_revoked",
"alg_unsupported"
]
}
}
```
### Fallback Policy (CM6)
```json
{
"fallback": {
"hierarchy": [
{
"level": 1,
"source": "signed_sbom",
"confidence": 1.0,
"requirements": ["valid_signature", "valid_provenance"]
},
{
"level": 2,
"source": "unsigned_sbom",
"confidence": 0.7,
"requirements": ["tool_metadata", "component_purl", "scan_timestamp"],
"warnings": ["provenance_unknown"]
},
{
"level": 3,
"source": "scan_only",
"confidence": 0.5,
"requirements": ["scan_timestamp"],
"warnings": ["degraded_confidence", "no_sbom"]
},
{
"level": 4,
"source": "reject",
"confidence": 0.0,
"requirements": [],
"action": "reject",
"reason": "no_evidence"
}
]
}
}
```
### Retry Policy (CM10)
```json
{
"retry": {
"retryable": [
{"code": "network_error", "maxRetries": 3, "backoff": "exponential"},
{"code": "rate_limit", "maxRetries": 5, "backoff": "linear"},
{"code": "transient_io", "maxRetries": 2, "backoff": "fixed"}
],
"nonRetryable": [
"signature_invalid",
"schema_invalid",
"unsupported_version",
"no_evidence"
],
"backoffConfig": {
"exponential": {"base": 1000, "factor": 2, "max": 60000},
"linear": {"initial": 1000, "increment": 1000, "max": 30000},
"fixed": {"delay": 5000}
}
}
}
```
## Kit Generation
### Build Script
```bash
#!/bin/bash
# scripts/scanner/build-competitor-ingest-kit.sh
set -euo pipefail
KIT_DIR="out/offline/competitor-ingest-kit-v1"
rm -rf "${KIT_DIR}"
mkdir -p "${KIT_DIR}"
# Copy adapters
for tool in syft trivy clair; do
mkdir -p "${KIT_DIR}/adapters/${tool}"
cp "src/Scanner/Adapters/${tool}/"*.csv "${KIT_DIR}/adapters/${tool}/"
cp "src/Scanner/Adapters/${tool}/"*.json "${KIT_DIR}/adapters/${tool}/"
done
# Copy fixtures
mkdir -p "${KIT_DIR}/fixtures"
cp docs/modules/scanner/fixtures/competitor-adapters/fixtures/*.json "${KIT_DIR}/fixtures/"
cp docs/modules/scanner/fixtures/competitor-adapters/fixtures/hashes.txt "${KIT_DIR}/fixtures/"
# Copy trust roots
mkdir -p "${KIT_DIR}/trust/cosign-keys"
cp trust/root-ca.pem "${KIT_DIR}/trust/"
cp trust/keyring.json "${KIT_DIR}/trust/"
cp trust/revocation.json "${KIT_DIR}/trust/"
cp trust/cosign-keys/*.pub "${KIT_DIR}/trust/cosign-keys/"
# Copy coverage
mkdir -p "${KIT_DIR}/coverage"
cp docs/modules/scanner/fixtures/competitor-adapters/coverage.csv "${KIT_DIR}/coverage/"
# Copy policies
mkdir -p "${KIT_DIR}/policies"
cp policies/competitor/*.json "${KIT_DIR}/policies/"
# Copy tool versions
mkdir -p "${KIT_DIR}/tools"
cp tools/versions.json "${KIT_DIR}/tools/"
cp tools/checksums.txt "${KIT_DIR}/tools/"
# Generate manifest
scripts/scanner/generate-competitor-manifest.sh "${KIT_DIR}"
# Sign manifest
scripts/scanner/sign-manifest.sh "${KIT_DIR}/manifest.json" "${KIT_DIR}/manifest.dsse"
echo "Kit built: ${KIT_DIR}"
```
## Verification
### Kit Verification Script
```bash
#!/bin/bash
# scripts/scanner/verify-competitor-ingest-kit.sh
set -euo pipefail
KIT_DIR="${1:-out/offline/competitor-ingest-kit-v1}"
# Verify DSSE signature
stellaops-verify dsse \
--envelope "${KIT_DIR}/manifest.dsse" \
--trust-root "${KIT_DIR}/trust/root-ca.pem" \
--expected-payload-type "application/vnd.stellaops.competitor-ingest.manifest+json"
# Extract and verify artifacts
MANIFEST=$(stellaops-verify dsse --envelope "${KIT_DIR}/manifest.dsse" --extract-payload)
for artifact in $(echo "${MANIFEST}" | jq -r '.artifacts[] | @base64'); do
path=$(echo "${artifact}" | base64 -d | jq -r '.path')
expected_blake3=$(echo "${artifact}" | base64 -d | jq -r '.blake3')
actual_blake3=$(b3sum "${KIT_DIR}/${path}" | cut -d' ' -f1)
if [[ "${actual_blake3}" != "${expected_blake3}" ]]; then
echo "FAIL: ${path}"
exit 1
fi
echo "PASS: ${path}"
done
echo "All artifacts verified"
```
## Usage
### Offline Ingest Workflow
```bash
# Initialize from kit
stellaops ingest init --kit out/offline/competitor-ingest-kit-v1
# Import SBOM with offline verification
stellaops ingest import \
--input external-sbom.json \
--tool syft \
--offline \
--trust-root out/offline/competitor-ingest-kit-v1/trust/keyring.json
# Validate against fixtures
stellaops ingest validate \
--fixtures out/offline/competitor-ingest-kit-v1/fixtures
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (CM5)
- Normalization: `docs/modules/scanner/design/competitor-ingest-normalization.md` (CM1)
- Verification: `docs/modules/scanner/design/competitor-signature-verification.md` (CM2)

View File

@@ -0,0 +1,255 @@
# Competitor SBOM/Scan Signature Verification (CM2)
Status: Draft · Date: 2025-12-04
Scope: Specify signature and provenance verification requirements for accepting external SBOM and scan outputs, including rejection/flag policies.
## Objectives
- Define acceptable signature algorithms and formats.
- Establish trust root management for external signers.
- Specify verification workflow and failure modes.
- Enable offline verification with bundled trust roots.
## Acceptable Signatures
### Signature Formats
| Format | Algorithm | Key Type | Status |
|--------|-----------|----------|--------|
| DSSE | Ed25519 | Asymmetric | Preferred |
| DSSE | ECDSA P-256 | Asymmetric | Accepted |
| DSSE | RSA-2048+ | Asymmetric | Accepted |
| COSE | EdDSA | Asymmetric | Accepted |
| JWS | ES256 | Asymmetric | Accepted |
| JWS | RS256 | Asymmetric | Deprecated |
### Hash Algorithms
| Algorithm | Usage | Status |
|-----------|-------|--------|
| SHA-256 | Primary | Required |
| BLAKE3-256 | Secondary | Preferred |
| SHA-384 | Alternative | Accepted |
| SHA-512 | Alternative | Accepted |
| SHA-1 | Legacy | Rejected |
| MD5 | Legacy | Rejected |
## Trust Root Management
### Bundled Trust Roots
```
out/offline/competitor-ingest-kit-v1/trust/
├── root-ca.pem # CA for signed SBOMs
├── keyring.json # Known signing keys
├── cosign-keys/ # Cosign public keys
│ ├── syft-release.pub
│ ├── trivy-release.pub
│ └── clair-release.pub
└── fulcio-root.pem # Sigstore Fulcio CA
```
### Keyring Format
```json
{
"keys": [
{
"id": "syft-release-2025",
"type": "ecdsa-p256",
"publicKey": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
"issuer": "https://github.com/anchore/syft",
"validFrom": "2025-01-01T00:00:00Z",
"validTo": "2026-01-01T00:00:00Z",
"purposes": ["sbom-signing", "attestation-signing"]
}
],
"trustedIssuers": [
"https://github.com/anchore/syft",
"https://github.com/aquasecurity/trivy",
"https://github.com/quay/clair"
]
}
```
## Verification Workflow
```
┌─────────────┐
│   Receive   │
│    SBOM     │
└──────┬──────┘
       ▼
┌─────────────┐      ┌─────────────┐
│  Has DSSE?  │──No─►│  Has JWS?   │──No─► Unsigned
└──────┬──────┘      └──────┬──────┘          │
      Yes                  Yes                │
       ▼                    ▼                 ▼
┌─────────────┐      ┌─────────────┐   ┌─────────────┐
│ Verify DSSE │      │ Verify JWS  │   │  Apply CM6  │
│  Signature  │      │  Signature  │   │  Fallback   │
└──────┬──────┘      └──────┬──────┘   └──────┬──────┘
       ▼                    ▼                 ▼
┌─────────────┐      ┌─────────────┐   ┌─────────────┐
│   Valid?    │      │   Valid?    │   │ Provenance: │
└──┬───────┬──┘      └──┬───────┬──┘   │   unknown   │
  Yes      No          Yes      No     └─────────────┘
   ▼       ▼            ▼       ▼
 Accept  Reject       Accept  Reject
```
### Verification Steps
1. **Format Detection**
- Check for DSSE envelope wrapper
- Check for detached signature file (`.sig`)
- Check for inline JWS header
2. **Signature Extraction**
- Parse envelope/signature structure
- Extract signer key ID and algorithm
3. **Key Lookup** (sketched after this list)
- Search bundled keyring for key ID
- Verify key is within validity period
- Check key purpose matches usage
4. **Cryptographic Verification**
- Verify signature over payload
- Verify hash matches content
- Check for signature expiry
5. **Provenance Validation**
- Extract signer identity
- Verify issuer is trusted
- Check build metadata if present
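A sketch of the key-lookup step against the bundled keyring and revocation list (field names from the keyring and revocation formats in this document; a purpose mismatch has no dedicated code in the failure table below, so it is treated as `key_unknown` here):
```python
from datetime import datetime

def _ts(value: str) -> datetime:
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def lookup_key(keyring: dict, revocation: dict, key_id: str, purpose: str, now: datetime):
    """Return (key, None) on success, or (None, rejection_code) per the failure table."""
    if key_id in {entry["keyId"] for entry in revocation.get("revoked", [])}:
        return None, "key_revoked"
    for key in keyring.get("keys", []):
        if key["id"] != key_id:
            continue
        if not (_ts(key["validFrom"]) <= now <= _ts(key["validTo"])):
            return None, "key_expired"
        if key["issuer"] not in keyring.get("trustedIssuers", []):
            return None, "issuer_untrusted"
        if purpose not in key.get("purposes", []):
            return None, "key_unknown"  # no dedicated code for purpose mismatch
        return key, None
    return None, "key_unknown"
```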
## Failure Modes
### Rejection Reasons
| Code | Reason | Action |
|------|--------|--------|
| `sig_missing` | No signature present | Apply fallback (CM6) |
| `sig_invalid` | Signature verification failed | Reject |
| `sig_expired` | Signature validity period exceeded | Reject |
| `key_unknown` | Signing key not in keyring | Reject |
| `key_expired` | Signing key validity exceeded | Reject |
| `key_revoked` | Signing key has been revoked | Reject |
| `issuer_untrusted` | Issuer not in trusted list | Reject |
| `alg_unsupported` | Algorithm not acceptable | Reject |
| `hash_mismatch` | Content hash doesn't match | Reject |
### Flag Policy
When `--allow-unsigned` is set:
| Condition | Behavior |
|-----------|----------|
| Signature missing | Accept with `provenance=unknown`, emit warning |
| Signature invalid | Reject (flag doesn't override invalid) |
| Key unknown | Accept with `provenance=unverified`, emit warning |
## Verification API
### Endpoint
```http
POST /api/v1/ingest/verify
Content-Type: application/json
{
"sbom": "<base64-encoded-sbom>",
"signature": "<base64-encoded-signature>",
"options": {
"allowUnsigned": false,
"requireProvenance": true
}
}
```
### Response
```json
{
"verification": {
"status": "valid",
"signature": {
"format": "dsse",
"algorithm": "ecdsa-p256",
"keyId": "syft-release-2025",
"signedAt": "2025-12-04T00:00:00Z"
},
"provenance": {
"issuer": "https://github.com/anchore/syft",
"buildId": "build-12345",
"sourceRepo": "https://github.com/example/app"
},
"hash": {
"algorithm": "sha256",
"value": "..."
}
}
}
```
## Offline Verification
### Requirements
- All trust roots bundled in offline kit
- No network calls during verification
- Keyring includes all expected signers
- CRL/OCSP checks disabled (use bundled revocation lists)
### Revocation List Format
```json
{
"revoked": [
{
"keyId": "compromised-key-2024",
"revokedAt": "2024-12-01T00:00:00Z",
"reason": "Key compromise"
}
],
"lastUpdated": "2025-12-04T00:00:00Z"
}
```
## Integration with Normalization (CM1)
After successful verification:
1. Extract tool metadata from signature/provenance
2. Pass to normalization adapter
3. Include verification result in normalized output
```json
{
"source": {
"tool": "syft",
"version": "1.0.0",
"hash": "sha256:..."
},
"verification": {
"status": "verified",
"keyId": "syft-release-2025",
"signedAt": "2025-12-04T00:00:00Z"
},
"components": [...],
"normalized_hash": "blake3:..."
}
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (CM2)
- Normalization: `docs/modules/scanner/design/competitor-ingest-normalization.md` (CM1)
- Fallback: See CM6 in this document series

View File

@@ -0,0 +1,263 @@
# Determinism CI Harness for New Formats (SC5)
Status: Draft · Date: 2025-12-04
Scope: Define the determinism CI harness for validating stable ordering, hash checks, golden fixtures, and RNG seeds for CVSS v4, CycloneDX 1.7/CBOM, and SLSA 1.2 outputs.
## Objectives
- Ensure Scanner outputs are reproducible across builds, platforms, and time.
- Validate that serialized SBOM/VEX/attestation outputs have deterministic ordering.
- Anchor CI validation to golden fixtures with pre-computed hashes.
- Enable offline verification without network dependencies.
## CI Pipeline Integration
### Environment Setup
```yaml
# .gitea/workflows/scanner-determinism.yml additions
env:
DOTNET_DISABLE_BUILTIN_GRAPH: "1"
TZ: "UTC"
LC_ALL: "C"
STELLAOPS_DETERMINISM_SEED: "42"
STELLAOPS_DETERMINISM_TIMESTAMP: "2025-01-01T00:00:00Z"
```
### Required Environment Variables
| Variable | Purpose | Default |
|----------|---------|---------|
| `TZ` | Force UTC timezone | `UTC` |
| `LC_ALL` | Force locale-invariant sorting | `C` |
| `STELLAOPS_DETERMINISM_SEED` | Fixed RNG seed for reproducibility | `42` |
| `STELLAOPS_DETERMINISM_TIMESTAMP` | Fixed timestamp for output | `2025-01-01T00:00:00Z` |
| `DOTNET_DISABLE_BUILTIN_GRAPH` | Disable non-deterministic graph features | `1` |
## Hash Validation Steps
### 1. Golden Fixture Verification
```bash
#!/bin/bash
# scripts/scanner/verify-determinism.sh
set -euo pipefail
FIXTURE_DIR="docs/modules/scanner/fixtures/cdx17-cbom"
HASH_FILE="${FIXTURE_DIR}/hashes.txt"
verify_fixture() {
local file="$1"
local expected_blake3="$2"
local expected_sha256="$3"
actual_blake3=$(b3sum "${file}" | cut -d' ' -f1)
actual_sha256=$(sha256sum "${file}" | cut -d' ' -f1)
if [[ "${actual_blake3}" != "${expected_blake3}" ]]; then
echo "FAIL: ${file} BLAKE3 mismatch"
echo " expected: ${expected_blake3}"
echo " actual: ${actual_blake3}"
return 1
fi
if [[ "${actual_sha256}" != "${expected_sha256}" ]]; then
echo "FAIL: ${file} SHA256 mismatch"
echo " expected: ${expected_sha256}"
echo " actual: ${actual_sha256}"
return 1
fi
echo "PASS: ${file}"
return 0
}
# Parse hashes.txt and verify each fixture
while IFS=': ' read -r filename hashes; do
blake3=$(echo "${hashes}" | grep -oP 'BLAKE3=\K[a-f0-9]+')
sha256=$(echo "${hashes}" | grep -oP 'SHA256=\K[a-f0-9]+')
verify_fixture "${FIXTURE_DIR}/${filename}" "${blake3}" "${sha256}"
done < <(grep -v '^#' "${HASH_FILE}")
```
### 2. Deterministic Serialization Test
```csharp
// src/Scanner/__Tests/StellaOps.Scanner.Determinism.Tests/CdxDeterminismTests.cs
[Fact]
public async Task Cdx17_Serialization_Is_Deterministic()
{
// Arrange
var options = new DeterminismOptions
{
Seed = 42,
Timestamp = new DateTimeOffset(2025, 1, 1, 0, 0, 0, TimeSpan.Zero),
CultureInvariant = true
};
var sbom = CreateTestSbom();
// Act - serialize twice
var json1 = await _serializer.SerializeAsync(sbom, options);
var json2 = await _serializer.SerializeAsync(sbom, options);
// Assert - must be identical
Assert.Equal(json1, json2);
// Compute and verify hash
var hash = Blake3.HashData(Encoding.UTF8.GetBytes(json1));
Assert.Equal(ExpectedHash, Convert.ToHexString(hash).ToLowerInvariant());
}
```
### 3. Downgrade Adapter Verification
```csharp
[Fact]
public async Task Cdx17_To_Cdx16_Downgrade_Is_Deterministic()
{
// Arrange
var cdx17 = await LoadFixture("sample-cdx17-cbom.json");
// Act
var cdx16_1 = await _adapter.Downgrade(cdx17);
var cdx16_2 = await _adapter.Downgrade(cdx17);
// Assert
var json1 = await _serializer.SerializeAsync(cdx16_1);
var json2 = await _serializer.SerializeAsync(cdx16_2);
Assert.Equal(json1, json2);
// Verify matches golden fixture hash
var hash = Blake3.HashData(Encoding.UTF8.GetBytes(json1));
var expectedHash = LoadExpectedHash("sample-cdx16.json");
Assert.Equal(expectedHash, Convert.ToHexString(hash).ToLowerInvariant());
}
```
## Ordering Rules
### Components (CycloneDX)
1. Sort by `purl` (case-insensitive, locale-invariant)
2. Ties: sort by `name` (case-insensitive)
3. Ties: sort by `version` (semantic version comparison; see the sort-key sketch below)
### Vulnerabilities
1. Sort by `id` (lexicographic)
2. Ties: sort by `source.name` (lexicographic)
3. Ties: sort by highest severity rating score (descending)
### Properties
1. Sort by `name` (lexicographic, locale-invariant)
### Hashes
1. Sort by `alg` (BLAKE3-256, SHA-256, SHA-512 order)
### Ratings (CVSS)
1. CVSSv4 first
2. CVSSv31 second
3. CVSSv30 third
4. Others alphabetically by method
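A sketch of the component ordering above as a Python sort key (`casefold` gives a locale-independent, case-insensitive comparison; the semantic-version tie-break is simplified so numeric parts sort numerically and anything else lexically):
```python
def _semver_key(version: str) -> tuple:
    # Simplified semver: "1.10.0" sorts after "1.9.0"; pre-release tags sort lexically.
    return tuple(
        (0, int(part), "") if part.isdigit() else (1, 0, part)
        for part in version.split(".")
    )

def component_sort_key(component: dict) -> tuple:
    return (
        component.get("purl", "").casefold(),       # 1. purl, case-insensitive
        component.get("name", "").casefold(),       # 2. name, case-insensitive
        _semver_key(component.get("version", "")),  # 3. version
    )

components = [
    {"purl": "pkg:npm/a", "name": "a", "version": "1.10.0"},
    {"purl": "pkg:npm/a", "name": "a", "version": "1.9.0"},
]
components.sort(key=component_sort_key)  # 1.9.0 sorts before 1.10.0
```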
## Fixture Requirements (SC8 Cross-Reference)
Each golden fixture must include:
| Format | Fixture File | Contents |
|--------|--------------|----------|
| CDX 1.7 + CBOM | `sample-cdx17-cbom.json` | Full SBOM with CVSS v4/v3.1, CBOM properties, SLSA Source Track, evidence |
| CDX 1.6 (downgraded) | `sample-cdx16.json` | Downgraded version with CVSS v4 removed, CBOM dropped, audit markers |
| SLSA Source Track | `source-track.sample.json` | Standalone source provenance block |
## CI Workflow Steps
```yaml
# Add to .gitea/workflows/scanner-determinism.yml
jobs:
determinism-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup .NET
uses: actions/setup-dotnet@v4
with:
dotnet-version: '10.0.x'
- name: Set determinism environment
run: |
echo "TZ=UTC" >> $GITHUB_ENV
echo "LC_ALL=C" >> $GITHUB_ENV
echo "DOTNET_DISABLE_BUILTIN_GRAPH=1" >> $GITHUB_ENV
echo "STELLAOPS_DETERMINISM_SEED=42" >> $GITHUB_ENV
- name: Verify golden fixtures
run: scripts/scanner/verify-determinism.sh
- name: Run determinism tests
run: |
dotnet test src/Scanner/__Tests/StellaOps.Scanner.Determinism.Tests \
--configuration Release \
--verbosity normal
- name: Run adapter determinism tests
run: |
dotnet test src/Scanner/__Tests/StellaOps.Scanner.Adapters.Tests \
--filter "Category=Determinism" \
--configuration Release
```
## Failure Handling
### Hash Mismatch Protocol
1. **Do not auto-update hashes** - manual review required
2. Log diff between expected and actual output
3. Capture both BLAKE3 and SHA256 for audit trail
4. Block merge until resolved
### Acceptable Reasons for Hash Update
- Schema version bump (documented in change log)
- Intentional ordering rule change (documented in adapter CSV)
- Bug fix that corrects previously non-deterministic output
- Never: cosmetic changes, timestamp updates, random salts
## Offline Verification
The harness must work completely offline:
- No network calls during serialization
- No external schema validation endpoints
- Trust roots and schemas bundled in repository
- All RNG seeded from environment variable
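For instance, sourcing all randomness from the seed variable keeps repeated runs identical:
```python
import os
import random

# One Random instance seeded from the environment; two runs with the same
# STELLAOPS_DETERMINISM_SEED make byte-identical choices.
rng = random.Random(int(os.environ.get("STELLAOPS_DETERMINISM_SEED", "42")))

sample_salt = rng.getrandbits(64)  # deterministic given the seed
```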
## Integration with SC8 Fixtures
The fixtures defined in SC8 serve as golden sources for this harness:
```
docs/modules/scanner/fixtures/
├── cdx17-cbom/
│ ├── sample-cdx17-cbom.json # CVSS v4 + v3.1, CBOM, evidence
│ ├── sample-cdx16.json # Downgraded, CVSS v3.1 only
│ ├── source-track.sample.json # SLSA Source Track
│ └── hashes.txt # BLAKE3 + SHA256 for all fixtures
├── adapters/
│ ├── mapping-cvss4-to-cvss3.csv
│ ├── mapping-cdx17-to-cdx16.csv
│ ├── mapping-slsa12-to-slsa10.csv
│ └── hashes.txt
└── competitor-adapters/
└── fixtures/
├── normalized-syft.json
├── normalized-trivy.json
└── normalized-clair.json
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (SC5)
- Roadmap: `docs/modules/scanner/design/standards-convergence-roadmap.md` (SC1)
- Contract: `docs/modules/scanner/design/cdx17-cbom-contract.md` (SC2)

View File

@@ -0,0 +1,323 @@
# Offline Kit Parity for Scanner Standards (SC10)
Status: Draft · Date: 2025-12-04
Scope: Define offline-kit contents, DSSE signing, and parity requirements for schemas, adapters, mappings, and fixtures to enable air-gapped operation.
## Objectives
- Bundle all schema/adapter/fixture artifacts for offline use.
- Sign bundles with DSSE for integrity verification.
- Ensure offline kit matches online capabilities.
- Document verification procedures for air-gapped environments.
## Bundle Structure
```
out/offline/scanner-standards-kit-v1/
├── manifest.json # Bundle manifest with all hashes
├── manifest.dsse # DSSE signature over manifest
├── schemas/
│ ├── cyclonedx-1.7.schema.json # CDX 1.7 JSON schema
│ ├── cyclonedx-1.6.schema.json # CDX 1.6 JSON schema
│ ├── spdx-3.0.1.schema.json # SPDX 3.0.1 JSON-LD schema
│ └── slsa-provenance-v1.schema.json
├── adapters/
│ ├── mapping-cvss4-to-cvss3.csv
│ ├── mapping-cdx17-to-cdx16.csv
│ ├── mapping-slsa12-to-slsa10.csv
│ └── hashes.txt
├── fixtures/
│ ├── cdx17-cbom/
│ │ ├── sample-cdx17-cbom.json
│ │ ├── sample-cdx16.json
│ │ ├── source-track.sample.json
│ │ └── hashes.txt
│ └── competitor-adapters/
│ ├── fixtures/
│ │ ├── normalized-syft.json
│ │ ├── normalized-trivy.json
│ │ └── normalized-clair.json
│ └── coverage.csv
├── tools/
│ ├── versions.json # Pinned tool versions
│ └── checksums.txt # Tool binary hashes
└── trust/
├── root-ca.pem # Trust root for signature verification
└── keyring.json # Signing key metadata
```
## Manifest Format
```json
{
"version": "1.0.0",
"created": "2025-12-04T00:00:00Z",
"creator": "stellaops-scanner",
"artifacts": [
{
"path": "schemas/cyclonedx-1.7.schema.json",
"type": "schema",
"format": "cyclonedx",
"version": "1.7",
"blake3": "a1b2c3d4...",
"sha256": "e5f6a7b8..."
},
{
"path": "adapters/mapping-cvss4-to-cvss3.csv",
"type": "adapter",
"source": "cvss4",
"target": "cvss3.1",
"blake3": "fa600b26...",
"sha256": "072b66be..."
},
{
"path": "fixtures/cdx17-cbom/sample-cdx17-cbom.json",
"type": "fixture",
"format": "cyclonedx",
"version": "1.7",
"blake3": "27c6de0c...",
"sha256": "22d8f6f8..."
}
],
"tools": {
"syft": {
"version": "1.0.0",
"blake3": "...",
"sha256": "..."
},
"trivy": {
"version": "0.50.0",
"blake3": "...",
"sha256": "..."
}
},
"manifestHash": {
"blake3": "...",
"sha256": "..."
}
}
```
## DSSE Signing
### Signature Format
```json
{
"payloadType": "application/vnd.stellaops.scanner.manifest+json",
"payload": "<base64-encoded-manifest>",
"signatures": [
{
"keyid": "stellaops-scanner-release-2025",
"sig": "<base64-signature>"
}
]
}
```
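Note that DSSE signatures cover the pre-authentication encoding (PAE) of the payload type and payload, not the raw payload bytes; a minimal sketch per the DSSE v1 spec:
```python
def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE v1 pre-authentication encoding: the exact bytes the key signs."""
    t = payload_type.encode("utf-8")
    return b" ".join(
        [b"DSSEv1", str(len(t)).encode(), t, str(len(payload)).encode(), payload]
    )

manifest_bytes = open("out/offline/scanner-standards-kit-v1/manifest.json", "rb").read()
message = pae("application/vnd.stellaops.scanner.manifest+json", manifest_bytes)
# `message` is signed with the release key; the envelope stores base64(payload) + signature.
```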
### Signing Process
```bash
#!/bin/bash
# scripts/scanner/sign-offline-kit.sh
MANIFEST="out/offline/scanner-standards-kit-v1/manifest.json"
DSSE="out/offline/scanner-standards-kit-v1/manifest.dsse"
KEY_ID="stellaops-scanner-release-2025"
# Compute manifest hash
MANIFEST_HASH=$(b3sum "${MANIFEST}" | cut -d' ' -f1)
# Sign with DSSE
stellaops-sign dsse \
--payload-type "application/vnd.stellaops.scanner.manifest+json" \
--payload "${MANIFEST}" \
--key-id "${KEY_ID}" \
--output "${DSSE}"
echo "Signed manifest: ${DSSE}"
echo "Manifest BLAKE3: ${MANIFEST_HASH}"
```
## Verification
### Offline Verification Steps
```bash
#!/bin/bash
# scripts/scanner/verify-offline-kit.sh
KIT_DIR="out/offline/scanner-standards-kit-v1"
# 1. Verify DSSE signature
stellaops-verify dsse \
--envelope "${KIT_DIR}/manifest.dsse" \
--trust-root "${KIT_DIR}/trust/root-ca.pem" \
--expected-payload-type "application/vnd.stellaops.scanner.manifest+json"
# 2. Extract manifest and verify artifacts
MANIFEST=$(stellaops-verify dsse --envelope "${KIT_DIR}/manifest.dsse" --extract-payload)
# 3. Verify each artifact hash
for artifact in $(echo "${MANIFEST}" | jq -r '.artifacts[] | @base64'); do
path=$(echo "${artifact}" | base64 -d | jq -r '.path')
expected_blake3=$(echo "${artifact}" | base64 -d | jq -r '.blake3')
actual_blake3=$(b3sum "${KIT_DIR}/${path}" | cut -d' ' -f1)
if [[ "${actual_blake3}" != "${expected_blake3}" ]]; then
echo "FAIL: ${path} hash mismatch"
exit 1
fi
echo "PASS: ${path}"
done
echo "All artifacts verified"
```
### Programmatic Verification
```csharp
// src/Scanner/StellaOps.Scanner.Offline/OfflineKitVerifier.cs
public class OfflineKitVerifier
{
public async Task<VerificationResult> VerifyAsync(
string kitPath,
ITrustRootProvider trustRoots)
{
var manifestPath = Path.Combine(kitPath, "manifest.json");
var dssePath = Path.Combine(kitPath, "manifest.dsse");
// Verify DSSE signature
var envelope = await DsseEnvelope.LoadAsync(dssePath);
var signatureValid = await _verifier.VerifyAsync(envelope, trustRoots);
if (!signatureValid)
return VerificationResult.SignatureInvalid;
// Parse manifest
var manifest = JsonSerializer.Deserialize<OfflineManifest>(
envelope.Payload);
// Verify each artifact
foreach (var artifact in manifest.Artifacts)
{
var artifactPath = Path.Combine(kitPath, artifact.Path);
var actualHash = await HashUtil.Blake3Async(artifactPath);
if (actualHash != artifact.Blake3)
return VerificationResult.ArtifactHashMismatch(artifact.Path);
}
return VerificationResult.Success(manifest);
}
}
```
## Parity Requirements
### Required Contents
| Category | Online | Offline Kit | Notes |
|----------|--------|-------------|-------|
| CDX 1.7 Schema | CDN fetch | Bundled | Schema validation |
| CDX 1.6 Schema | CDN fetch | Bundled | Downgrade validation |
| CVSS v4→v3 Adapter | API lookup | Bundled CSV | Pure function |
| CDX 1.7→1.6 Adapter | API lookup | Bundled CSV | Pure function |
| SLSA 1.2→1.0 Adapter | API lookup | Bundled CSV | Pure function |
| Golden Fixtures | Test repo | Bundled | Determinism tests |
| Tool Binaries | Package registry | Bundled/Checksum | Syft, Trivy |
| Trust Roots | Online PKI | Bundled PEM | Signature verification |
### Functional Parity
The offline kit must support:
| Operation | Online | Offline |
|-----------|--------|---------|
| Schema validation | Yes | Yes |
| SBOM serialization | Yes | Yes |
| Downgrade conversion | Yes | Yes |
| Hash verification | Yes | Yes |
| DSSE verification | Yes | Yes |
| Determinism testing | Yes | Yes |
| Rekor transparency | Yes | No* |
*Offline mode uses mirrored checkpoints instead of live Rekor
## Bundle Generation
### CI Workflow
```yaml
# .gitea/workflows/offline-kit.yml
jobs:
build-offline-kit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Collect schemas
run: |
mkdir -p out/offline/scanner-standards-kit-v1/schemas
cp schemas/cyclonedx-*.json out/offline/scanner-standards-kit-v1/schemas/
cp schemas/spdx-*.json out/offline/scanner-standards-kit-v1/schemas/
- name: Collect adapters
run: |
mkdir -p out/offline/scanner-standards-kit-v1/adapters
cp docs/modules/scanner/fixtures/adapters/*.csv out/offline/scanner-standards-kit-v1/adapters/
cp docs/modules/scanner/fixtures/adapters/hashes.txt out/offline/scanner-standards-kit-v1/adapters/
- name: Collect fixtures
run: |
mkdir -p out/offline/scanner-standards-kit-v1/fixtures
cp -r docs/modules/scanner/fixtures/cdx17-cbom out/offline/scanner-standards-kit-v1/fixtures/
cp -r docs/modules/scanner/fixtures/competitor-adapters out/offline/scanner-standards-kit-v1/fixtures/
- name: Generate manifest
run: scripts/scanner/generate-manifest.sh
- name: Sign bundle
run: scripts/scanner/sign-offline-kit.sh
env:
SIGNING_KEY: ${{ secrets.SCANNER_SIGNING_KEY }}
- name: Verify bundle
run: scripts/scanner/verify-offline-kit.sh
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: scanner-standards-kit-v1
path: out/offline/scanner-standards-kit-v1/
```
## Update Procedure
### Offline Kit Refresh
When schemas/adapters change:
1. Update source artifacts in repository
2. Run hash verification CI
3. Generate new manifest
4. Sign with release key
5. Publish new kit version
6. Document changes in release notes
### Version Compatibility
| Kit Version | Scanner Version | CDX Version | CVSS Support |
|-------------|-----------------|-------------|--------------|
| v1.0.0 | 1.x | 1.6, 1.7 | v3.1, v4.0 |
| v1.1.0 | 1.x | 1.6, 1.7, 1.8* | v3.1, v4.0 |
*Future versions
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (SC10)
- Roadmap: `docs/modules/scanner/design/standards-convergence-roadmap.md` (SC1)
- Governance: `docs/modules/scanner/design/schema-governance.md` (SC9)
- Offline Operation: `docs/24_OFFLINE_KIT.md`

View File

@@ -0,0 +1,197 @@
# Schema Governance for Scanner Outputs (SC9)
Status: Draft · Date: 2025-12-04
Scope: Define governance, approvals, RACI, and review cadence for schema bumps, downgrade adapters, and mapping table changes.
## Objectives
- Establish clear ownership and approval workflows for schema changes.
- Define RACI matrix for schema-related decisions.
- Set review cadence and change control procedures.
- Ensure adapter tables are locked with documented changes.
## RACI Matrix
### Schema Changes
| Activity | Product | Scanner TL | Sbomer TL | Policy TL | Ops | QA |
|----------|---------|------------|-----------|-----------|-----|-----|
| CycloneDX version bump | A | R | C | C | I | C |
| CVSS version support | A | R | I | C | I | C |
| SLSA version bump | A | R | C | C | I | C |
| New evidence fields | A | R | C | C | I | C |
| CBOM property additions | A | R | C | C | I | C |
### Adapter Changes
| Activity | Product | Scanner TL | Sbomer TL | Policy TL | Ops | QA |
|----------|---------|------------|-----------|-----------|-----|-----|
| Downgrade adapter update | A | R | C | I | I | R |
| Mapping table changes | A | R | C | I | I | R |
| Hash update approval | A | R | I | I | I | R |
| Fixture updates | I | R | C | I | I | R |
### Release Artifacts
| Activity | Product | Scanner TL | Sbomer TL | Policy TL | Ops | QA |
|----------|---------|------------|-----------|-----------|-----|-----|
| Schema freeze | A | R | C | C | I | I |
| DSSE signing | I | C | I | I | R | I |
| Offline kit bundling | I | I | I | I | R | C |
| Release notes | R | C | C | C | C | I |
Legend: R=Responsible, A=Accountable, C=Consulted, I=Informed
## Schema Bump Workflow
### 1. Proposal Phase
```mermaid
graph LR
A[RFC Draft] --> B[Technical Review]
B --> C{Approved?}
C -->|Yes| D[Implementation]
C -->|No| A
D --> E[Adapter Update]
E --> F[Fixture Update]
F --> G[Hash Freeze]
G --> H[DSSE Sign]
H --> I[Release]
```
### 2. Required Artifacts
| Artifact | Owner | Location |
|----------|-------|----------|
| RFC Document | Scanner TL | `docs/rfcs/scanner/` |
| Mapping CSV | Scanner TL | `docs/modules/scanner/fixtures/adapters/` |
| Golden Fixtures | QA | `docs/modules/scanner/fixtures/cdx17-cbom/` |
| Hash List | QA | `docs/modules/scanner/fixtures/*/hashes.txt` |
| DSSE Envelope | Ops | `out/offline/scanner-standards-kit-v1/` |
### 3. Approval Gates
| Gate | Approvers | Criteria |
|------|-----------|----------|
| RFC Approval | Product + Scanner TL | Technical feasibility, backwards compat |
| Adapter Approval | Scanner TL + QA | Mapping completeness, determinism tests pass |
| Hash Freeze | Scanner TL + QA | All fixtures pass hash validation |
| DSSE Sign | Ops | All hashes recorded, offline kit complete |
| Release | Product | All gates passed, release notes approved |
## Review Cadence
### Regular Reviews
| Review | Frequency | Attendees | Scope |
|--------|-----------|-----------|-------|
| Schema Sync | Monthly | Scanner, Sbomer, Policy TLs | Upcoming changes, deprecations |
| Adapter Review | Per release | Scanner TL, QA | Mapping accuracy, test coverage |
| Hash Audit | Per release | QA, Ops | All fixture hashes valid |
### Ad-hoc Reviews
Triggered by:
- Upstream schema release (CycloneDX, SPDX, SLSA)
- Security advisory requiring field changes
- Customer request for new evidence types
- Determinism test failure
## Change Control
### Acceptable Changes
| Change Type | Requires | Example |
|-------------|----------|---------|
| Add optional field | Scanner TL approval | New evidence property |
| Add required field | RFC + Product approval | New mandatory hash |
| Remove field | RFC + deprecation notice | Legacy property removal |
| Change ordering | Scanner TL + QA approval | Sort key update |
| Update hash | QA approval + documented reason | Fixture content change |
### Prohibited Changes
| Change | Reason | Alternative |
|--------|--------|-------------|
| Silent hash update | Breaks determinism validation | Document change, get approval |
| Remove required field | Breaks consumers | Deprecate with N-1 support |
| Change field type | Breaks serialization | New field with migration |
| Reorder without docs | Breaks hash validation | Update ordering rules + hashes |
## Deprecation Policy
### Deprecation Timeline
| Phase | Duration | Actions |
|-------|----------|---------|
| Announced | +0 days | Add deprecation notice to docs |
| Warning | +30 days | Emit warning in API responses |
| N-1 Support | +90 days | Old format still accepted |
| Removal | +180 days | Old format rejected |
### Deprecation Notice Format
```json
{
"deprecated": {
"field": "ratings[method=CVSSv30]",
"since": "v2.5.0",
"removal": "v3.0.0",
"replacement": "ratings[method=CVSSv31]",
"migrationGuide": "docs/migrations/cvss-v30-removal.md"
}
}
```
## Adapter Locking
### Lock Conditions
Adapters are locked when:
1. Hash recorded in `hashes.txt`
2. DSSE envelope signed
3. Offline kit bundled
### Unlock Process
To modify a locked adapter:
1. Create new version (e.g., `mapping-cvss4-to-cvss3-v2.csv`)
2. Update hash file with new entry
3. Keep old version for N-1 compatibility
4. Get Scanner TL + QA approval
5. Sign new DSSE envelope
## Audit Trail
### Required Records
| Record | Location | Retention |
|--------|----------|-----------|
| RFC decisions | `docs/rfcs/scanner/` | Permanent |
| Hash changes | Git history + `CHANGELOG.md` | Permanent |
| Approval records | PR comments | Permanent |
| DSSE envelopes | CAS + offline kit | Permanent |
### Git Commit Requirements
Schema-related commits must include:
```
feat(scanner): Add CVSS v4 support
- Add CVSSv4 rating method
- Update adapter mapping CSV
- Update golden fixtures
- New hashes recorded
Approved-By: @scanner-tl
Reviewed-By: @qa-lead
Hash-Update: mapping-cvss4-to-cvss3.csv BLAKE3=fa600b26...
Refs: RFC-2025-012, SCAN-GAP-186-SC9
```
## Links
- Sprint: `docs/implplan/SPRINT_0186_0001_0001_record_deterministic_execution.md` (SC9)
- Roadmap: `docs/modules/scanner/design/standards-convergence-roadmap.md` (SC1)
- Adapters: `docs/modules/scanner/fixtures/adapters/`

View File

@@ -0,0 +1,6 @@
7d2f7e34acd2ef5b6ec6b1d99177cef03befeef4146c6e3386f35ed5a3bf43ff confidence_decay_config.sigstore.json
170892f6a48b0aef6f426ea97a86f6cd4420bc52634f12a92f72e20f0fa12e29 ../../decay/confidence_decay_config.yaml
91ced62e93409ab6880465fd50ecf8919275447fcfd9d88fa93f04d1059f57a6 unknowns_scoring_manifest.sigstore.json
450675035928e4771cca1b9e5f9e42035dbe10b3de7b66a4077a7b729b2c5b13 ../../unknowns/unknowns_scoring_manifest.json
618e43a1517cb7205c4e1876a3c20503a685ca6ad226d335fcfcb833355c4fbb heuristics_catalog.sigstore.json
e33fa0963493252a5ac379a12f820f6b356ea94310afd1db9ad7394e8307000e ../../heuristics/heuristics.catalog.json

View File

@@ -0,0 +1 @@
{"mediaType":"application/vnd.dev.sigstore.bundle.v0.3+json","verificationMaterial":{"publicKey":{"hint":"1/nAsWLsk/yOPl4sjynn6FOCC1ixnrbxSK9UHxjF8MQ="},"tlogEntries":[{"logIndex":"741918992","logId":{"keyId":"wNI9atQGlz+VWfO6LRygH4QUfY/8W4RFwiT5i5WRgB0="},"kindVersion":{"kind":"hashedrekord","version":"0.0.1"},"integratedTime":"1764886405","inclusionPromise":{"signedEntryTimestamp":"MEYCIQC+ZHqagsrB9xcLCZoPvkT60dlHFvKAIhRvVpkbPMjxowIhAN9x92EO24+l/F6BPBlHtNRh5/4XEpeoON3EV4ZzQ/Yf"},"inclusionProof":{"logIndex":"620014730","rootHash":"eAQ3rPsqF86CSTtL/YN5hZjexS8HXUkSAUOeJHarr9k=","treeSize":"620014731","hashes":["F/LmDH+ZAjyzUmyuXub9v84E5EnQ1uWz2YJTPVST/wU=","v+L6Vg7QPzlybKDIfLl512gaoHIsGygBHZHURYEqeA0=","yXSFRkGXzx/oyI/73u4Nfp0nA1zOjlU0pxzLgH0siXQ=","ZgNIp7f8+R4ts+jXnyLtYxAjmPR6tLXiGaJA30+TJMk=","PADizUpyshrBmVEwjUe3SP6/WpGdBpEtML2NmyvSnes=","n+0Vf/51myrnoK265V6LwF37riOqw5FOAZHhbitXT7c=","TMHRsLObrpHbd4Kf5cnZismsTDSiYFbQQKDPj6XuGh4=","ADL1dqlw5HTerbkzS06E2GSWcqWOYXsS9QqmrM77njI=","Mo/+V8ftGFQQbS+XsKdaF+l1sDADl3NB/NC1OoAr9WM=","RsQ5xuBa0gKvWk53V8F8JismpQAqEf9N2nqMjFfr/KA=","etMFukD8mHOD37ceTwB1Al2nC3iIzy/CTtNjwflJmDE=","huaH1ZSkRyP4+vpmGtpmkkL845lhcmN9io8MIe6Sob0=","ZmUkYkHBy1B723JrEgiKvepTdHYrP6y2a4oODYvi5VY=","T4DqWD42hAtN+vX8jKCWqoC4meE4JekI9LxYGCcPy1M="],"checkpoint":{"envelope":"rekor.sigstore.dev - 1193050959916656506\n620014731\neAQ3rPsqF86CSTtL/YN5hZjexS8HXUkSAUOeJHarr9k=\n\n— rekor.sigstore.dev wNI9ajBFAiBiHc21523qcx0c09pKKmBo/iIHGNM13UdOklHzel1hQwIhANu9sIf2F5nJw9sNOgRCv4eprjH/aqJlUeMktVhAbIs0\n"}},"canonicalizedBody":"eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiIxNzA4OTJmNmE0OGIwYWVmNmY0MjZlYTk3YTg2ZjZjZDQ0MjBiYzUyNjM0ZjEyYTkyZjcyZTIwZjBmYTEyZTI5In19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FVUNJQ2hKYkRxZ0UzMDJVemx4dXhYWEp2dFJlaHdFMVUxQ3hLZUl3RlJueXdRZUFpRUE3aTlyQ09xS0pvMzJuSGJpWUVqTURBdTdYejFUeGtvRXRFNDJSZ1dtTkNZPSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGWm05Skt6bFNSa05VWTJacVpVMXhjRU5STTBaQmVYWkxkMEpSVlFwWlFVbE5NbU5tUkZJNFZ6azRUM2h1V0ZZcloyWldOVVJvWm05cE9IRnZaa0Z1Unk5MlF6ZEVZa0pzV0RKMEwyZFVOMGRMVlZwQlEyaEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0="}],"timestampVerificationData":{"rfc3161Timestamps":[{"signedTimestamp":"MIICyjADAgEAMIICwQYJKoZIhvcNAQcCoIICsjCCAq4CAQMxDTALBglghkgBZQMEAgEwgbgGCyqGSIb3DQEJEAEEoIGoBIGlMIGiAgEBBgkrBgEEAYO/MAIwMTANBglghkgBZQMEAgEFAAQgjct6btyQyDhEQnzkzy1RiweuLtlCSK1DeZ2km93UeVcCFQCQfrhvdNAk2det1sZXE/x5MaGGZBgPMjAyNTEyMDQyMjEzMjVaMAMCAQGgMqQwMC4xFTATBgNVBAoTDHNpZ3N0b3JlLmRldjEVMBMGA1UEAxMMc2lnc3RvcmUtdHNhoAAxggHbMIIB1wIBATBRMDkxFTATBgNVBAoTDHNpZ3N0b3JlLmRldjEgMB4GA1UEAxMXc2lnc3RvcmUtdHNhLXNlbGZzaWduZWQCFDoTVC8MkGHuvMFDL8uKjosqI4sMMAsGCWCGSAFlAwQCAaCB/DAaBgkqhkiG9w0BCQMxDQYLKoZIhvcNAQkQAQQwHAYJKoZIhvcNAQkFMQ8XDTI1MTIwNDIyMTMyNVowLwYJKoZIhvcNAQkEMSIEIM8UiAqXpUGIYxRLLpKG2JIxZq0G3QC5Ey8WIYz7VQHkMIGOBgsqhkiG9w0BCRACLzF/MH0wezB5BCCF+Se8B6tiysO0Q1bBDvyBssaIP9p6uebYcNnROs0FtzBVMD2kOzA5MRUwEwYDVQQKEwxzaWdzdG9yZS5kZXYxIDAeBgNVBAMTF3NpZ3N0b3JlLXRzYS1zZWxmc2lnbmVkAhQ6E1QvDJBh7rzBQy/Lio6LKiOLDDAKBggqhkjOPQQDAgRnMGUCMQCHae2v+FNcSZPzLV2csHqD1LU283CoUAGi0Gr0UQfhozk61jWmtKX3qx/icWiLD9ECMAgpC1DHsRvBXN4qbXbxelmBZsUT8ybRD/HTk01S9E6eo7tNjU340eA0USguyUdaHg=="}]}},"messageSignature":{"messageDigest":{"algorithm":"SHA2_256","digest":"FwiS9qSLCu9vQm6peob2zUQgvFJjTxKpL3LiDw+hLik="},"signature":"MEUCIChJbDqgE302UzlxuxXXJvtRehwE1U1CxKeIwFRnywQeAiEA7i9rCOqKJo32nHbiYEjMDAu7Xz1TxkoEtE42R
gWmNCY="}}

View File

@@ -0,0 +1 @@
{"mediaType":"application/vnd.dev.sigstore.bundle.v0.3+json","verificationMaterial":{"publicKey":{"hint":"1/nAsWLsk/yOPl4sjynn6FOCC1ixnrbxSK9UHxjF8MQ="},"tlogEntries":[{"logIndex":"741919081","logId":{"keyId":"wNI9atQGlz+VWfO6LRygH4QUfY/8W4RFwiT5i5WRgB0="},"kindVersion":{"kind":"hashedrekord","version":"0.0.1"},"integratedTime":"1764886409","inclusionPromise":{"signedEntryTimestamp":"MEUCIQDC+clBc3oYzLcCJ38MkbZKEyLlYUpjU4ZE6BPMpRB4DwIgXA0zSYlw9FhpF9LlHJAZzPxeWyddunDdfxsBNE2KTiY="},"inclusionProof":{"logIndex":"620014819","rootHash":"rTqjSnnpGY/hoar8ETHIIGp4pQTV6NSjlKBBoeR9h7c=","treeSize":"620014825","hashes":["aPPYsionEQpETmnqlO33tbRN5Ps44tzijHMFDad1UAE=","U52btC8FRhQ/XucngaCv1dsjGQwMHWOAcSub5g2MxDE=","6bAXHkxe3ld4wPw1C7H8lK6v/TsVGvtWat8YLhjaQpc=","MDrcEuVGp6HkhYnTHm48QxxcI2CO908pLKgv84aDkI4=","pdxvVKDmRgD0zo4tCuk9uEVaxf23KaJzWq2UowxNQwE=","TZvx79RM8pnA6jMfcLY7IPW+4F1q6B7iGQcjThTYsFs=","yXSFRkGXzx/oyI/73u4Nfp0nA1zOjlU0pxzLgH0siXQ=","ZgNIp7f8+R4ts+jXnyLtYxAjmPR6tLXiGaJA30+TJMk=","PADizUpyshrBmVEwjUe3SP6/WpGdBpEtML2NmyvSnes=","n+0Vf/51myrnoK265V6LwF37riOqw5FOAZHhbitXT7c=","TMHRsLObrpHbd4Kf5cnZismsTDSiYFbQQKDPj6XuGh4=","ADL1dqlw5HTerbkzS06E2GSWcqWOYXsS9QqmrM77njI=","Mo/+V8ftGFQQbS+XsKdaF+l1sDADl3NB/NC1OoAr9WM=","RsQ5xuBa0gKvWk53V8F8JismpQAqEf9N2nqMjFfr/KA=","etMFukD8mHOD37ceTwB1Al2nC3iIzy/CTtNjwflJmDE=","huaH1ZSkRyP4+vpmGtpmkkL845lhcmN9io8MIe6Sob0=","ZmUkYkHBy1B723JrEgiKvepTdHYrP6y2a4oODYvi5VY=","T4DqWD42hAtN+vX8jKCWqoC4meE4JekI9LxYGCcPy1M="],"checkpoint":{"envelope":"rekor.sigstore.dev - 1193050959916656506\n620014825\nrTqjSnnpGY/hoar8ETHIIGp4pQTV6NSjlKBBoeR9h7c=\n\n— rekor.sigstore.dev wNI9ajBEAiAm1Mg3b30wKKzTofuRGoDKNDSp4N1KUV54iiOCT/TlsgIgdR56SZ3XY388icr807OdOlkq3vZ5K7W1UDFKWVFM5FQ=\n"}},"canonicalizedBody":"eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiJlMzNmYTA5NjM0OTMyNTJhNWFjMzc5YTEyZjgyMGY2YjM1NmVhOTQzMTBhZmQxZGI5YWQ3Mzk0ZTgzMDcwMDBlIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FVUNJUUM2K2JVZWxORTZHd3hvZHRMZlBpK3MyQVNZNFBCYTR2Uno4Q3NXMzkxMkh3SWdPY0doVG5iaiswOStjT0ZGTGQveXJ5TWpXQW1rVEc3RXl2SlU5SGI3SnVrPSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGWm05Skt6bFNSa05VWTJacVpVMXhjRU5STTBaQmVYWkxkMEpSVlFwWlFVbE5NbU5tUkZJNFZ6azRUM2h1V0ZZcloyWldOVVJvWm05cE9IRnZaa0Z1Unk5MlF6ZEVZa0pzV0RKMEwyZFVOMGRMVlZwQlEyaEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0="}],"timestampVerificationData":{"rfc3161Timestamps":[{"signedTimestamp":"MIICyjADAgEAMIICwQYJKoZIhvcNAQcCoIICsjCCAq4CAQMxDTALBglghkgBZQMEAgEwgbgGCyqGSIb3DQEJEAEEoIGoBIGlMIGiAgEBBgkrBgEEAYO/MAIwMTANBglghkgBZQMEAgEFAAQg/0oSYFOhhWtP8A93mi/e6TcpgxvBGN5cH7EYmcVUfxMCFQDkXoyqUhAjBiK5SAphIeSr1b5anBgPMjAyNTEyMDQyMjEzMjlaMAMCAQGgMqQwMC4xFTATBgNVBAoTDHNpZ3N0b3JlLmRldjEVMBMGA1UEAxMMc2lnc3RvcmUtdHNhoAAxggHbMIIB1wIBATBRMDkxFTATBgNVBAoTDHNpZ3N0b3JlLmRldjEgMB4GA1UEAxMXc2lnc3RvcmUtdHNhLXNlbGZzaWduZWQCFDoTVC8MkGHuvMFDL8uKjosqI4sMMAsGCWCGSAFlAwQCAaCB/DAaBgkqhkiG9w0BCQMxDQYLKoZIhvcNAQkQAQQwHAYJKoZIhvcNAQkFMQ8XDTI1MTIwNDIyMTMyOVowLwYJKoZIhvcNAQkEMSIEIMZhkgdkM2piYdPZrQt77iqtp1tfHWRV20hSYBHxDISFMIGOBgsqhkiG9w0BCRACLzF/MH0wezB5BCCF+Se8B6tiysO0Q1bBDvyBssaIP9p6uebYcNnROs0FtzBVMD2kOzA5MRUwEwYDVQQKEwxzaWdzdG9yZS5kZXYxIDAeBgNVBAMTF3NpZ3N0b3JlLXRzYS1zZWxmc2lnbmVkAhQ6E1QvDJBh7rzBQy/Lio6LKiOLDDAKBggqhkjOPQQDAgRnMGUCMQCUDGGLE9CkwWZwiqPRBz/PtyGX6BdzgRbUzlQoU/Z/Y9H3reF4UsHF9MGyZNAQHKACMFlSZoYutcmOl6buXE5/j3dVTeb53AARKQ7TsDXZrXQjqOqad/cwmdfy/kU1ljlxcw=="}]}},"messageSignature":{"messageDig
est":{"algorithm":"SHA2_256","digest":"4z+gljSTJSpaw3mhL4IPazVuqUMQr9Hbmtc5ToMHAA4="},"signature":"MEUCIQC6+bUelNE6GwxodtLfPi+s2ASY4PBa4vRz8CsW3912HwIgOcGhTnbj+09+cOFFLd/yryMjWAmkTG7EyvJU9Hb7Juk="}}

View File

@@ -0,0 +1 @@
{"mediaType":"application/vnd.dev.sigstore.bundle.v0.3+json","verificationMaterial":{"publicKey":{"hint":"1/nAsWLsk/yOPl4sjynn6FOCC1ixnrbxSK9UHxjF8MQ="},"tlogEntries":[{"logIndex":"741919046","logId":{"keyId":"wNI9atQGlz+VWfO6LRygH4QUfY/8W4RFwiT5i5WRgB0="},"kindVersion":{"kind":"hashedrekord","version":"0.0.1"},"integratedTime":"1764886407","inclusionPromise":{"signedEntryTimestamp":"MEUCIA3/O19l/Aaj6E+vXhbwGkTsZ+22A89bIEZLlDDbBJnaAiEA6LcphYlxX16FCTUlEA3IfXlV3KKz0FzdxRFJAAYcrzc="},"inclusionProof":{"logIndex":"620014784","rootHash":"SveF4czd7DxKRnpObfYSqYghJbz1fN7IcQk3mHlaQHY=","treeSize":"620014789","hashes":["roFEPPDy4MNeYaRiGImtAYAZaSOfEqB4PMRX0QfEHj4=","NTOqAxRPc7aNHck62yTD9sh4lD9mXbh6vWLDntk1M8M=","fh3HIcufIitN45GRTE2Y2T7be0kSxC6ZDGF42r0EF6g=","TZvx79RM8pnA6jMfcLY7IPW+4F1q6B7iGQcjThTYsFs=","yXSFRkGXzx/oyI/73u4Nfp0nA1zOjlU0pxzLgH0siXQ=","ZgNIp7f8+R4ts+jXnyLtYxAjmPR6tLXiGaJA30+TJMk=","PADizUpyshrBmVEwjUe3SP6/WpGdBpEtML2NmyvSnes=","n+0Vf/51myrnoK265V6LwF37riOqw5FOAZHhbitXT7c=","TMHRsLObrpHbd4Kf5cnZismsTDSiYFbQQKDPj6XuGh4=","ADL1dqlw5HTerbkzS06E2GSWcqWOYXsS9QqmrM77njI=","Mo/+V8ftGFQQbS+XsKdaF+l1sDADl3NB/NC1OoAr9WM=","RsQ5xuBa0gKvWk53V8F8JismpQAqEf9N2nqMjFfr/KA=","etMFukD8mHOD37ceTwB1Al2nC3iIzy/CTtNjwflJmDE=","huaH1ZSkRyP4+vpmGtpmkkL845lhcmN9io8MIe6Sob0=","ZmUkYkHBy1B723JrEgiKvepTdHYrP6y2a4oODYvi5VY=","T4DqWD42hAtN+vX8jKCWqoC4meE4JekI9LxYGCcPy1M="],"checkpoint":{"envelope":"rekor.sigstore.dev - 1193050959916656506\n620014789\nSveF4czd7DxKRnpObfYSqYghJbz1fN7IcQk3mHlaQHY=\n\n— rekor.sigstore.dev wNI9ajBFAiEAk1F29XIl+MJKZjEtl7RC7+KIRu1I3J1hvpE52TchQhwCIAjcmqn55hYy4EJq30RoJcqjaa2YjzU3Fz9ZmLagRUkp\n"}},"canonicalizedBody":"eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI0NTA2NzUwMzU5MjhlNDc3MWNjYTFiOWU1ZjllNDIwMzVkYmUxMGIzZGU3YjY2YTQwNzdhN2I3MjliMmM1YjEzIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FVUNJUURnZHIvUUYyU2NRZFRLTEZiZi9NbXBFUy9FeUlQbkZ4VDZzNWdncTVHcE1RSWdZT1o2ditTVmF0NjhhNld2M2FKV2Jyc1grYmRLZXUyYzhwbkoyRkR5WmQ4PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGWm05Skt6bFNSa05VWTJacVpVMXhjRU5STTBaQmVYWkxkMEpSVlFwWlFVbE5NbU5tUkZJNFZ6azRUM2h1V0ZZcloyWldOVVJvWm05cE9IRnZaa0Z1Unk5MlF6ZEVZa0pzV0RKMEwyZFVOMGRMVlZwQlEyaEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0="}],"timestampVerificationData":{"rfc3161Timestamps":[{"signedTimestamp":"MIICyjADAgEAMIICwQYJKoZIhvcNAQcCoIICsjCCAq4CAQMxDTALBglghkgBZQMEAgEwgbcGCyqGSIb3DQEJEAEEoIGnBIGkMIGhAgEBBgkrBgEEAYO/MAIwMTANBglghkgBZQMEAgEFAAQgstEHBkhEQn5BWHaDsd12awP2UzNfXZfIEbL3APw8bYkCFGogTw2WF2bCdBT7TFpuO80U9VDMGA8yMDI1MTIwNDIyMTMyN1owAwIBAaAypDAwLjEVMBMGA1UEChMMc2lnc3RvcmUuZGV2MRUwEwYDVQQDEwxzaWdzdG9yZS10c2GgADGCAdwwggHYAgEBMFEwOTEVMBMGA1UEChMMc2lnc3RvcmUuZGV2MSAwHgYDVQQDExdzaWdzdG9yZS10c2Etc2VsZnNpZ25lZAIUOhNULwyQYe68wUMvy4qOiyojiwwwCwYJYIZIAWUDBAIBoIH8MBoGCSqGSIb3DQEJAzENBgsqhkiG9w0BCRABBDAcBgkqhkiG9w0BCQUxDxcNMjUxMjA0MjIxMzI3WjAvBgkqhkiG9w0BCQQxIgQgHtv3gRImNDY6l7k03sGgaMhYxnjaskxMDmUgPSa1aTQwgY4GCyqGSIb3DQEJEAIvMX8wfTB7MHkEIIX5J7wHq2LKw7RDVsEO/IGyxog/2nq55thw2dE6zQW3MFUwPaQ7MDkxFTATBgNVBAoTDHNpZ3N0b3JlLmRldjEgMB4GA1UEAxMXc2lnc3RvcmUtdHNhLXNlbGZzaWduZWQCFDoTVC8MkGHuvMFDL8uKjosqI4sMMAoGCCqGSM49BAMCBGgwZgIxAOqsNu05oQAdu79Xu/EUIsDG4hw7oHgyYcWWXWcPiezIYWH4xZ5OWwwzcNmK6ggH4QIxAK0Su1pzh7KmU8XV5KAxlTPOuARkaSsQqS+zmWsGO1BI36OamFJIM3+R3fuQ0oQiIA=="}]}},"messageSignature":{"messageDigest":{"algorithm":"SHA2_256","digest":"RQZ1A1ko5HccyhueX55CA12+ELPee2akB3p7cpssWxM="},"signatu
re":"MEUCIQDgdr/QF2ScQdTKLFbf/MmpES/EyIPnFxT6s5ggq5GpMQIgYOZ6v+SVat68a6Wv3aJWbrsX+bdKeu2c8pnJ2FDyZd8="}}

View File

@@ -2,46 +2,82 @@
Artifacts prepared 2025-12-01 (UTC) for DSSE signing and Evidence Locker ingest:
- Decay config: `docs/modules/signals/decay/confidence_decay_config.yaml`
- Unknowns scoring manifest: `docs/modules/signals/unknowns/unknowns_scoring_manifest.json`
- Heuristic catalog + schema + fixtures: `docs/modules/signals/heuristics/`
- Checksums: `docs/modules/signals/SHA256SUMS`
| Artifact | Path | Predicate |
|----------|------|-----------|
| Decay config | `docs/modules/signals/decay/confidence_decay_config.yaml` | `stella.ops/confidenceDecayConfig@v1` |
| Unknowns manifest | `docs/modules/signals/unknowns/unknowns_scoring_manifest.json` | `stella.ops/unknownsScoringManifest@v1` |
| Heuristics catalog | `docs/modules/signals/heuristics/heuristics.catalog.json` | `stella.ops/heuristicCatalog@v1` |
| Checksums | `docs/modules/signals/SHA256SUMS` | — |
Planned Evidence Locker paths (to fill post-signing):
- `evidence-locker/signals/decay/2025-12-01/confidence_decay_config.dsse`
- `evidence-locker/signals/unknowns/2025-12-01/unknowns_scoring_manifest.dsse`
- `evidence-locker/signals/heuristics/2025-12-01/heuristics_catalog.dsse`
- `evidence-locker/signals/heuristics/2025-12-01/fixtures/` (golden inputs/outputs)
## CI Automated Signing
Pending steps:
0) Provide a signing key: CI/ops should supply `COSIGN_PRIVATE_KEY_B64` (base64 of the private key) and, for encrypted keys, `COSIGN_PASSWORD`. Local dev can place a key at `tools/cosign/cosign.key` (see the `tools/cosign/cosign.key.example` stub) or decode the env var to `/tmp/cosign.key`. The helper script `tools/cosign/sign-signals.sh` auto-detects the key and the cosign version.
1) Sign each artifact with its predicate (cosign v3.0.2 in `/usr/local/bin` uses `--bundle`; the v2.6.0 fallback in `tools/cosign` uses `--output-signature` instead):
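A minimal sketch of the decode step (assumes GNU `base64`; `sign-signals.sh` picks up `/tmp/cosign.key` automatically):
```bash
# Sketch: materialize the CI key from the base64 secret
printf '%s' "$COSIGN_PRIVATE_KEY_B64" | base64 -d > /tmp/cosign.key
chmod 600 /tmp/cosign.key
# COSIGN_PASSWORD, if set, is read from the environment by cosign
```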
- `stella.ops/confidenceDecayConfig@v1`
- `stella.ops/unknownsScoringManifest@v1`
- `stella.ops/heuristicCatalog@v1`
Shortcut: `OUT_DIR=evidence-locker/signals/2025-12-01 tools/cosign/sign-signals.sh`
Example (v3; replace `cosign.key` with your key path):
```bash
cosign sign-blob \
--key cosign.key \
--predicate-type stella.ops/confidenceDecayConfig@v1 \
--bundle confidence_decay_config.sigstore.json \
decay/confidence_decay_config.yaml
```
v2.6.0 fallback (when `tools/cosign` is prepended to `PATH`):
```bash
cosign sign-blob \
--key cosign.key \
--predicate-type stella.ops/confidenceDecayConfig@v1 \
--output-signature confidence_decay_config.dsse \
decay/confidence_decay_config.yaml
```
2) Record SHA256 from `SHA256SUMS` in DSSE annotations (or bundle metadata); keep canonical filenames:
- v3: `confidence_decay_config.sigstore.json`, `unknowns_scoring_manifest.sigstore.json`, `heuristics_catalog.sigstore.json`
- v2 fallback: `.dsse` signatures.
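Before annotating, a quick pre-flight check that the artifacts still match the recorded digests (assumes GNU coreutils `sha256sum`):
```bash
# Sketch: fail fast if any artifact drifted from SHA256SUMS
(cd docs/modules/signals && sha256sum -c SHA256SUMS)
```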
3) Place signed envelopes + checksums in the Evidence Locker paths above; update the sprint tracker's Delivery Tracker rows 5-7 and Decisions & Risks with the final URIs.
4) Add signer/approver IDs to the sprint Execution Log once signatures are complete.
The `.gitea/workflows/signals-dsse-sign.yml` workflow automates DSSE signing.
Notes:
- Use UTC timestamps in DSSE `issuedAt`.
- Ensure offline parity by copying envelopes + SHA256SUMS into the offline kit bundle when ready.
### Prerequisites (CI Secrets)
| Secret | Description |
|--------|-------------|
| `COSIGN_PRIVATE_KEY_B64` | Base64-encoded cosign private key (required for production) |
| `COSIGN_PASSWORD` | Password for encrypted key (if applicable) |
| `CI_EVIDENCE_LOCKER_TOKEN` | Token for Evidence Locker push (optional) |
### Trigger
- **Automatic**: Push to `main` affecting `docs/modules/signals/**` or `tools/cosign/sign-signals.sh`
- **Manual**: Workflow dispatch with `allow_dev_key=1` for testing
### Output
Signed artifacts are uploaded as the workflow artifact `signals-dsse-signed-{run}` and optionally pushed to the Evidence Locker.
## Development Signing (Local Testing)
A development key pair is available for smoke tests:
```bash
# Sign with dev key
COSIGN_ALLOW_DEV_KEY=1 COSIGN_PASSWORD=stellaops-dev \
OUT_DIR=docs/modules/signals/dev-test \
tools/cosign/sign-signals.sh
# Verify signature
cosign verify-blob \
--key tools/cosign/cosign.dev.pub \
--bundle docs/modules/signals/dev-test/confidence_decay_config.sigstore.json \
docs/modules/signals/decay/confidence_decay_config.yaml
```
**Note**: Dev key signatures are NOT suitable for Evidence Locker or production use.
## Production Signing (Manual)
For production signing without CI:
```bash
# Option 1: Place key file
cp /path/to/production.key tools/cosign/cosign.key
OUT_DIR=evidence-locker/signals/2025-12-01 tools/cosign/sign-signals.sh
# Option 2: Use base64 env var
export COSIGN_PRIVATE_KEY_B64=$(base64 -w0 < production.key)
export COSIGN_PASSWORD=your-password
OUT_DIR=evidence-locker/signals/2025-12-01 tools/cosign/sign-signals.sh
```
## Evidence Locker Paths
Post-signing, artifacts go to:
- `evidence-locker/signals/2025-12-01/confidence_decay_config.sigstore.json`
- `evidence-locker/signals/2025-12-01/unknowns_scoring_manifest.sigstore.json`
- `evidence-locker/signals/2025-12-01/heuristics_catalog.sigstore.json`
- `evidence-locker/signals/2025-12-01/SHA256SUMS`
## Post-Signing Checklist
1. Verify signatures against the public key (see the sketch below)
2. Update sprint tracker (SPRINT_0140) Delivery Tracker rows 5-7
3. Add signer ID to Execution Log
4. Copy to offline kit bundle for air-gap parity
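Verification sketch (key and bundle paths are assumptions; because signing runs with `--tlog-upload=false`, offline verification typically needs `--insecure-ignore-tlog`; confirm the flag name for your cosign version):
```bash
# Sketch: verify a signed bundle against the production public key
cosign verify-blob \
  --key cosign.pub \
  --bundle evidence-locker/signals/2025-12-01/confidence_decay_config.sigstore.json \
  --insecure-ignore-tlog \
  docs/modules/signals/decay/confidence_decay_config.yaml
```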
## Notes
- All timestamps use UTC ISO-8601 format
- Signatures disable tlog upload (`--tlog-upload=false`) for offline compatibility
- See `tools/cosign/README.md` for detailed key management and CI setup

View File

@@ -0,0 +1,60 @@
# UI Micro-Interactions Component Map (MI6)
This document maps StellaOps UI components to their interaction types and motion token usage. Components must use tokens from the catalog (`tokens/motion.scss`, `motion-tokens.ts`) rather than bespoke values; a usage sketch follows the token catalog below.
## Token Catalog Reference
| Token | CSS Variable | Duration | Use Case |
| --- | --- | --- | --- |
| durationXs | --motion-duration-xs | 80ms | Focus rings, instant feedback |
| durationSm | --motion-duration-sm | 140ms | Hover states, button presses |
| durationMd | --motion-duration-md | 200ms | Modal open/close, panel slide |
| durationLg | --motion-duration-lg | 260ms | Page transitions, accordions |
| durationXl | --motion-duration-xl | 320ms | Complex sequences, onboarding |
| easeStandard | --motion-ease-standard | cubic-bezier(0.2,0,0,1) | Default for all transitions |
| easeEntrance | --motion-ease-entrance | cubic-bezier(0.18,0.89,0.32,1) | Elements appearing |
| easeExit | --motion-ease-exit | cubic-bezier(0.36,0,0.66,-0.56) | Elements leaving |
| easeBounce | --motion-ease-bounce | cubic-bezier(0.34,1.56,0.64,1) | Playful feedback only |
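For illustration, a minimal SCSS sketch wiring a primary button to catalog tokens (the class name and the `--motion-scale-sm` token are assumptions; see `_motion.scss` for the canonical definitions):
```scss
// Sketch only: .btn-primary and --motion-scale-sm are assumed names
.btn-primary {
  transition:
    transform var(--motion-duration-sm) var(--motion-ease-standard),
    box-shadow var(--motion-duration-sm) var(--motion-ease-standard);

  &:hover { transform: translateY(-1px); } // hover lift
  &:active { transform: scale(var(--motion-scale-sm, 0.97)); } // press scale
}
```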
## Component Mapping
| Component | Interaction Type | Tokens Used | Reduced-Motion Behavior |
| --- | --- | --- | --- |
| Button (primary) | Hover lift, press scale | durationSm, easeStandard, scaleSm | No transform, instant state change |
| Button (secondary) | Hover border, press | durationXs, easeStandard | Instant state change |
| Modal/Dialog | Fade + slide up | durationMd, easeEntrance/Exit, translateMd | Instant show/hide |
| Dropdown/Menu | Fade + scale | durationSm, easeStandard, scaleSm | Instant show/hide |
| Toast/Snackbar | Slide in, auto-dismiss | durationMd, easeEntrance, translateLg | Instant show/hide |
| Accordion | Height expand | durationMd, easeStandard | Instant toggle |
| Tabs | Indicator slide | durationSm, easeStandard | Instant indicator |
| Progress bar | Width fill | durationMd, easeStandard | No animation, instant fill |
| Spinner/Loader | Rotate animation | N/A (CSS keyframe) | Static icon or hidden |
| Skeleton | Shimmer pulse | durationXl, easeStandard | Static skeleton, no shimmer |
| Badge (status) | Scale pop on change | durationXs, easeBounce, scaleSm | No scale, instant |
| Focus ring | Fade in | durationXs, easeStandard | Always visible, no fade |
| Tooltip | Fade + offset | durationSm, easeEntrance, translateSm | Instant show/hide |
| Banner (info) | Slide down | durationMd, easeEntrance, translateMd | Instant show/hide |
| Error state | Shake animation | durationSm, easeStandard | No shake, border only |
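The Reduced-Motion column is usually enforced with one global override rather than per-component branches; a sketch of the common pattern:
```scss
// Sketch: collapse catalog-driven motion when the user opts out
@media (prefers-reduced-motion: reduce) {
  *, *::before, *::after {
    transition-duration: 0.01ms !important;
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
  }
}
```
Components with richer fallbacks (e.g., the spinner's static icon) still need their own handling on top of this baseline.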
## Perf Budget Compliance (MI5)
| Metric | Budget | Measurement |
| --- | --- | --- |
| Interaction response | <= 100ms | Time to visual feedback |
| Animation frame (avg) | <= 16ms | Chrome DevTools Performance |
| Animation frame (p95) | <= 50ms | Chrome DevTools Performance |
| Layout shift | <= 0.05 CLS | Lighthouse CI |
| LCP placeholder | Show within 400ms | Lighthouse CI |
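A hedged example of asserting the interaction-response budget in Playwright (the selector and `data-state` attribute are assumptions; `Date.now()` includes some driver overhead, so treat the measurement as approximate):
```ts
// Sketch: time-to-visual-feedback must stay within the 100ms budget
import { test, expect } from '@playwright/test';

test('primary button responds within budget', async ({ page }) => {
  await page.goto('/dashboard');
  const start = Date.now();
  await page.click('[data-testid="primary-button"]');
  await page.waitForSelector('[data-testid="primary-button"][data-state="pressed"]');
  expect(Date.now() - start).toBeLessThanOrEqual(100);
});
```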
## Validation Rules
1. **Lint rule**: ESLint/Stylelint blocks non-catalog easing values in `.scss` and `.ts` files (see the config sketch below)
2. **Storybook check**: All component stories must demonstrate reduced-motion variant
3. **Playwright check**: Snapshot tests run with `--disable-animations` and reduced-motion emulation
4. **CI gate**: Perf budget failures block merge
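One way to realize rule 1 with Stylelint's `declaration-property-value-allowed-list` (the regexes are assumptions; align them with the actual catalog variables):
```json
{
  "rules": {
    "declaration-property-value-allowed-list": {
      "/^(transition|animation)-timing-function$/": ["/var\\(--motion-ease-/"],
      "/^(transition|animation)-duration$/": ["/var\\(--motion-duration-/"]
    }
  }
}
```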
## Evidence
- Token catalog: `src/Web/StellaOps.Web/src/styles/tokens/_motion.scss`
- TypeScript tokens: `src/Web/StellaOps.Web/src/app/styles/motion-tokens.ts`
- Storybook stories: `src/Web/StellaOps.Web/src/stories/motion-tokens.stories.ts`

View File

@@ -0,0 +1,149 @@
# UI Micro-Interaction Theme & Contrast Guidance (MI10)
This document defines theme tokens and contrast requirements for micro-interactions across light, dark, and high-contrast modes.
## Color Contrast Requirements
| Element Type | Minimum Contrast Ratio | WCAG Level |
| --- | --- | --- |
| Normal text (< 18px) | 4.5:1 | AA |
| Large text (>= 18px bold or >= 24px) | 3:1 | AA |
| UI components (borders, icons) | 3:1 | AA |
| Focus indicators | 3:1 | AA |
| Status colors on background | 3:1 | AA |
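The ratios quoted in the token sections below follow the WCAG 2.x relative-luminance formula; a small TypeScript sketch for spot-checking token pairs:
```ts
// Sketch: WCAG 2.x contrast ratio between two sRGB hex colors
function luminance(hex: string): number {
  const c = hex.replace('#', '');
  const [r, g, b] = [0, 2, 4].map((i) => {
    const v = parseInt(c.slice(i, i + 2), 16) / 255;
    return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrast(a: string, b: string): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// e.g. contrast('#0f172a', '#ffffff') should clear the 4.5:1 text threshold
```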
## Theme Token Reference
### Light Theme (Default)
```scss
:root {
/* Backgrounds */
--theme-bg-primary: #ffffff;
--theme-bg-secondary: #f8fafc;
--theme-bg-elevated: #ffffff;
--theme-bg-overlay: rgba(15, 23, 42, 0.5);
/* Text */
--theme-text-primary: #0f172a; /* 15.42:1 on white */
--theme-text-secondary: #475569; /* 7.05:1 on white */
--theme-text-muted: #94a3b8; /* 3.21:1 on white - decorative only */
--theme-text-inverse: #ffffff;
/* Borders */
--theme-border-default: #e2e8f0;
--theme-border-focus: #3b82f6; /* 4.5:1 on white */
--theme-border-error: #ef4444;
/* Status Colors */
--theme-status-success: #16a34a; /* 4.52:1 on white */
--theme-status-warning: #d97706; /* 4.51:1 on white */
--theme-status-error: #dc2626; /* 5.92:1 on white */
--theme-status-info: #2563eb; /* 5.28:1 on white */
/* Focus Ring */
--theme-focus-ring-color: #3b82f6;
--theme-focus-ring-width: 2px;
--theme-focus-ring-offset: 2px;
}
```
### Dark Theme
```scss
[data-theme='dark'] {
/* Backgrounds */
--theme-bg-primary: #0f172a;
--theme-bg-secondary: #1e293b;
--theme-bg-elevated: #334155;
--theme-bg-overlay: rgba(0, 0, 0, 0.7);
/* Text */
--theme-text-primary: #f8fafc; /* 15.42:1 on dark bg */
--theme-text-secondary: #cbd5e1; /* 8.12:1 on dark bg */
--theme-text-muted: #64748b;
--theme-text-inverse: #0f172a;
/* Borders */
--theme-border-default: #334155;
--theme-border-focus: #60a5fa;
--theme-border-error: #f87171;
/* Status Colors */
--theme-status-success: #4ade80;
--theme-status-warning: #fbbf24;
--theme-status-error: #f87171;
--theme-status-info: #60a5fa;
/* Focus Ring */
--theme-focus-ring-color: #60a5fa;
}
```
### High Contrast Theme
```scss
[data-theme='high-contrast'] {
/* Backgrounds */
--theme-bg-primary: #000000;
--theme-bg-secondary: #000000;
--theme-bg-elevated: #1a1a1a;
--theme-bg-overlay: rgba(0, 0, 0, 0.9);
/* Text */
--theme-text-primary: #ffffff; /* 21:1 on black */
--theme-text-secondary: #ffffff;
--theme-text-muted: #e5e5e5;
--theme-text-inverse: #000000;
/* Borders - thicker and higher contrast */
--theme-border-default: #ffffff;
--theme-border-focus: #ffff00; /* Yellow for maximum visibility */
--theme-border-error: #ff0000;
/* Status Colors - bold, saturated */
--theme-status-success: #00ff00;
--theme-status-warning: #ffff00;
--theme-status-error: #ff0000;
--theme-status-info: #00ffff;
/* Focus Ring - extra visible */
--theme-focus-ring-color: #ffff00;
--theme-focus-ring-width: 3px;
--theme-focus-ring-offset: 3px;
}
```
## Focus Indicator Requirements
1. Focus ring must be visible in all themes (never `outline: none` without a replacement; see the sketch below)
2. Minimum 2px width, 3:1 contrast with adjacent colors
3. In high-contrast mode: 3px width, yellow (#ffff00) color
4. Focus must be visible when using keyboard navigation only
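A sketch of a token-driven focus ring that satisfies these rules across themes (the `:focus-visible` selector choice is an assumption):
```scss
// Sketch: theme tokens drive width, color, and offset; high-contrast overrides apply automatically
:focus-visible {
  outline: var(--theme-focus-ring-width, 2px) solid var(--theme-focus-ring-color);
  outline-offset: var(--theme-focus-ring-offset, 2px);
}
```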
## Validation Checklist
- [ ] All text passes 4.5:1 contrast ratio (use axe DevTools)
- [ ] All UI elements pass 3:1 contrast ratio
- [ ] Focus indicators visible in all themes
- [ ] No reliance on color alone for state indication
- [ ] Theme toggle persists user preference
- [ ] Reduced motion settings respected across themes
## Testing
Run axe accessibility tests in Storybook:
```bash
npm run test:a11y
```
Run contrast checks in Playwright:
```bash
npx playwright test --project=a11y
```
## Evidence
- Storybook: Theme toggle available in toolbar
- Playwright: `tests/e2e/theme-contrast.spec.ts`
- axe reports: `tests/a11y/reports/`

View File

@@ -0,0 +1,72 @@
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "https://stella-ops.org/ui/schemas/ui-micro.schema.json",
"title": "UI Micro-Interaction Telemetry Event",
"description": "Schema for ui.micro.* telemetry events (MI7)",
"type": "object",
"required": ["schema_version", "event_type", "timestamp", "tenant_id", "surface", "component", "action"],
"properties": {
"schema_version": {
"type": "string",
"pattern": "^v[0-9]+\\.[0-9]+$",
"description": "Schema version (e.g., v1.0)"
},
"event_type": {
"type": "string",
"enum": ["ui.micro.interaction", "ui.micro.error", "ui.micro.latency", "ui.micro.state_change"],
"description": "Event type category"
},
"timestamp": {
"type": "string",
"format": "date-time",
"description": "ISO-8601 UTC timestamp"
},
"tenant_id": {
"type": "string",
"minLength": 1,
"description": "Tenant identifier (required for scoping)"
},
"surface": {
"type": "string",
"enum": ["dashboard", "vulnerabilities", "graph", "exceptions", "releases", "settings", "onboarding"],
"description": "UI surface/page where interaction occurred"
},
"component": {
"type": "string",
"minLength": 1,
"description": "Component name (e.g., button, modal, dropdown)"
},
"action": {
"type": "string",
"minLength": 1,
"description": "Action taken (e.g., click, hover, focus, submit, dismiss)"
},
"latency_ms": {
"type": "number",
"minimum": 0,
"description": "Time from action to visual feedback in milliseconds"
},
"outcome": {
"type": "string",
"enum": ["success", "error", "cancelled", "timeout", "offline"],
"description": "Result of the interaction"
},
"reduced_motion": {
"type": "boolean",
"description": "True if user has prefers-reduced-motion enabled"
},
"offline_mode": {
"type": "boolean",
"description": "True if app is in offline mode"
},
"error_code": {
"type": ["string", "null"],
"description": "Error code if outcome is error (e.g., UI_ERR_001)"
},
"correlation_id": {
"type": "string",
"description": "Trace correlation ID for request tracing"
}
},
"additionalProperties": false
}