themed advisories enhanced

This commit is contained in:
StellaOps Bot
2025-12-14 21:29:44 +02:00
parent 9202cd7da8
commit 3411e825cd
10 changed files with 359 additions and 20 deletions

View File

@@ -634,6 +634,28 @@ proof_coverage_reachable = reachable_findings_with_proofs / total_reachable_find
- BF < 0.90 overall → page/block release
- Regulated BF < 0.95 → page/block release
## 15. DETERMINISTIC PACKAGING (BUNDLES)
Determinism applies to *packaging*, not only algorithms.
Rules for proof bundles and offline kits:
- Prefer `tar` with deterministic ordering; avoid formats that inject timestamps by default.
- Canonical file order: lexicographic path sort; include an `index.json` listing files and their digests in the same order.
- Normalize file metadata: fixed uid/gid, fixed mtime, stable permissions; record the chosen policy in the manifest.
- Compression must be reproducible (fixed level/settings; no embedded timestamps).
- Bundle hash is computed over the canonical archive bytes and must be DSSE-signed.
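The packaging rules above can be sketched as follows. This is a minimal illustration in Python (the product code is C#), assuming plain uncompressed `tar` and a hypothetical `build_bundle` helper; the fixed-metadata policy shown (uid/gid 0, mtime 0, mode 0644) is one example of the policy the manifest must record.

```python
import hashlib
import io
import tarfile

def build_bundle(files: dict[str, bytes]) -> bytes:
    """Pack files into a deterministic tar: lexicographic path order, fixed metadata."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tar:
        for path in sorted(files):          # canonical lexicographic file order
            data = files[path]
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            info.mtime = 0                  # fixed mtime (policy recorded in manifest)
            info.uid = info.gid = 0         # fixed uid/gid
            info.uname = info.gname = ""
            info.mode = 0o644               # stable permissions
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def bundle_digest(archive: bytes) -> str:
    """Bundle hash over the canonical archive bytes (input to DSSE signing)."""
    return "sha256:" + hashlib.sha256(archive).hexdigest()
```

Because the file set is sorted before packing and all metadata is pinned, two builds from the same inputs yield byte-identical archives and therefore the same digest, regardless of insertion order.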
## 16. BENCHMARK HARNESS (MOAT METRICS)
Use the repo benchmark harness as the single place where moat metrics are measured and enforced:
- Harness root: `bench/README.md` (layout, verifiers, comparison tools).
- Evidence contracts: `docs/benchmarks/vex-evidence-playbook.md` and `docs/replay/DETERMINISTIC_REPLAY.md`.
Developer rules:
- No feature touching scans/policy/proofs ships without at least one benchmark scenario or an extension of an existing one.
- If golden outputs change intentionally, record a short why note (which metric improved, which contract changed) and keep artifacts deterministic.
- Bench runs must record and validate `graphRevisionId` and per-verdict receipts (see `docs/product-advisories/14-Dec-2025 - Proof and Evidence Chain Technical Reference.md`).
---
**Document Version**: 1.0

View File

@@ -10,6 +10,16 @@
---
## 0. WHERE TO START (IN-REPO)
- `docs/README.md` (doc map and module dossiers)
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md` (end-to-end system model)
- `docs/18_CODING_STANDARDS.md` (C# conventions, repo rules, gates)
- `docs/19_TEST_SUITE_OVERVIEW.md` (test layers, CI expectations)
- `docs/technical/development/README.md` (developer tooling and workflows)
- `docs/10_PLUGIN_SDK_GUIDE.md` (plugin SDK + packaging)
- `LICENSE` (AGPL-3.0-or-later obligations)
## 1. CORE ENGINEERING PRINCIPLES
- **SOLID First**: Interface and dependency inversion required
@@ -20,6 +30,15 @@
- **Fail-fast Startup**: Validate configuration before web host starts
- **Hot-load Compatibility**: Avoid static singletons that survive plugin unload
### 1.1 Product Non-Negotiables
- **Determinism first**: stable ordering + canonicalization; no hidden clocks/entropy in core algorithms
- **Offline-first**: no silent network dependency; every workflow has an offline/mirrored path
- **Evidence over UI**: the API + signed artifacts must fully explain what the UI shows
- **Contracts are contracts**: version schemas; add fields with defaults; never silently change semantics
- **Golden fixtures required**: any change to scanning/policy/proofs must be covered by deterministic fixtures + replay tests
- **Respect service boundaries**: do not re-implement scanner/policy logic in downstream services or UI
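The "determinism first" rule in practice means hashing a canonical serialization, not whatever the serializer happens to emit. A minimal sketch in Python (the repo's real canonicalizer is C#; the function name is illustrative):

```python
import hashlib
import json

def canonical_digest(obj) -> str:
    """Digest a canonical JSON form: sorted keys, compact separators, UTF-8 bytes.

    Key-order-insensitive by construction, so golden fixtures stay stable.
    """
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Any two logically equal payloads hash identically; any semantic change produces a new digest, which is what "contracts are contracts" relies on.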
## 2. REPOSITORY LAYOUT RULES
- No "Module" folders or nested solution hierarchies
@@ -221,6 +240,34 @@ cosign sign --key $COSIGN_KEY out/MyPlugin.Schedule.dll
- Merge strategies named and versioned
- Artifacts record which lattice algorithm used
### 14.5 Sbomer Module
- Emit SPDX 3.0.1 and CycloneDX 1.6 with stable ordering and deterministic IDs
- Persist raw bytes + canonical form; hash canonical bytes for digest binding
- Produce DSSE attestations for SBOM linkage and generation provenance
### 14.6 Feedser Module
- Treat every feed import as a versioned snapshot (URI + time + content hashes)
- Support deterministic export/import for offline bundles
- Imports are idempotent (same snapshot digest is a no-op)
### 14.7 Concelier Module
- Never mutate evidence; attach business context and build views only
- Never re-implement scanner/policy risk logic; consume signed decisions + proofs
### 14.8 UI / Console
- UI is an explainer and navigator; the evidence chain must be retrievable via API and export
- Any UI state must be reproducible from persisted evidence + graph revision identifiers
### 14.9 Zastava / Advisory AI
- AI consumes evidence graph IDs/digests; it is never a source of truth for vulnerability states
- Pipelines must never pass/fail based on AI text; enforcement is always policy + lattice + evidence
- Any AI output must reference evidence IDs and remain optional/offline-safe
## 15. COMMON PITFALLS & SOLUTIONS
### 15.1 Avoid

View File

@@ -8,6 +8,25 @@
---
## 0. AIR-GAP PRE-SEED CHECKLIST (GOLDEN INPUTS)
Before you can verify or ingest anything offline, the air-gap must be pre-seeded with:
- **Root of trust**
- Vendor/org public keys (and chains if using Fulcio-like PKI)
- Pinned transparency log root(s) for the offline log/mirror
- **Policy bundle**
- Verification policies (Cosign/in-toto rules, allow/deny lists)
- Lattice rules for VEX merge/precedence
- Toolchain manifest with hash-pinned binaries (cosign/oras/jq/scanner, etc.)
- **Evidence bundle**
- SBOMs (CycloneDX/SPDX), DSSE-wrapped attestations (provenance/VEX/SLSA)
- Optional: frozen vendor feeds/VEX snapshots (as content-addressed inputs)
- **Offline log snapshot**
- Signed checkpoint/tree head and entry pack (leaves + proofs) for every receipt you rely on
Ship bundles on signed, write-once media (or equivalent operational controls) and keep chain-of-custody receipts alongside the bundle manifest.
## 1. OFFLINE UPDATE BUNDLE STRUCTURE
### 1.1 Directory Layout
@@ -92,6 +111,7 @@
- Trust root: pinned publisher public keys (out-of-band rotation)
- Monotonicity: only activate if `manifest.version > current.version`
- Rollback/testing: allow an explicit force-activate path for emergency validation, but record it as a non-monotonic override in state + audit logs
- Atomic switch: unpack → validate → symlink flip (`db/staging/` → `db/active/`)
- Quarantine on failure: move to `updates/quarantine/` with reason code
@@ -143,6 +163,26 @@ constraints:
allow_expired_if_timepinned: true
```
### 4.1 Offline Keyring Usage (Cosign / in-toto)
Cosign-style verification must not require any online CA, Rekor fetch, or DNS lookups. Use pinned keys and (when applicable) an offline Rekor mirror snapshot.
```bash
# Verify a DSSE attestation using a locally pinned key (no network assumptions)
cosign verify-attestation \
--key ./evidence/keys/identities/vendor_A.pub \
--policy ./evidence/policy/verify-policy.yaml \
<artifact-digest-or-ref>
```
```bash
# in-toto offline verification (layout + local keys)
in-toto-verify \
  --layout ./evidence/attestations/layout.root.json \
  --layout-keys ./evidence/keys/identities/vendor_A.pub \
  --link-dir ./evidence/attestations/links
```
## 5. DETERMINISTIC EVIDENCE RECONCILIATION ALGORITHM
```
@@ -263,7 +303,19 @@ rekorLogIndex
| Digest mismatch | Reject, quarantine | `IMPORT_FAILED_DIGEST` |
| Version not monotonic | Reject | `IMPORT_FAILED_VERSION` |
### 11.2 Reason Codes (structured logs/metrics)
Use stable, machine-readable reason codes in logs/metrics and in `ProblemDetails` payloads:
- `HASH_MISMATCH`
- `SIG_FAIL_COSIGN`
- `SIG_FAIL_MANIFEST`
- `DSSE_VERIFY_FAIL`
- `REKOR_VERIFY_FAIL`
- `SELFTEST_FAIL`
- `VERSION_NON_MONOTONIC`
- `POLICY_DENY`
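One way to keep these codes stable across logs, metrics, and `ProblemDetails` is to make the code set closed and validate at the point of emission. A Python sketch; the `urn:stella:...` type scheme and the `problem_details` helper are hypothetical, and the real service is C#:

```python
REASON_CODES = {
    "HASH_MISMATCH", "SIG_FAIL_COSIGN", "SIG_FAIL_MANIFEST", "DSSE_VERIFY_FAIL",
    "REKOR_VERIFY_FAIL", "SELFTEST_FAIL", "VERSION_NON_MONOTONIC", "POLICY_DENY",
}

def problem_details(reason: str, detail: str) -> dict:
    """Build an RFC 9457-style payload carrying a stable machine-readable code."""
    if reason not in REASON_CODES:
        raise ValueError(f"unknown reason code: {reason}")
    return {
        "type": "urn:stella:import-failure:" + reason.lower(),  # hypothetical URN scheme
        "title": "Offline import rejected",
        "status": 422,
        "detail": detail,
        "reasonCode": reason,
    }
```

Rejecting unknown codes at emission time keeps dashboards and alert rules from silently fragmenting as new failure modes are added.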
### 11.3 Quarantine Structure
```
/updates/quarantine/<timestamp>-<reason>/
@@ -285,6 +337,16 @@ stellaops offline import \
  --trust-root /evidence/keys/roots/stella-root.pub
```
```bash
# Emergency testing only (records a non-monotonic override in the audit trail)
stellaops offline import \
--bundle ./bundle-2025-12-07.tar.zst \
--verify-dsse \
--verify-rekor \
--trust-root /evidence/keys/roots/stella-root.pub \
--force-activate
```
### 12.2 Offline Kit Status
```bash

View File

@@ -8,6 +8,19 @@
---
## 0. POSTGRESQL VS MONGODB (DECISION RULE)
Default posture:
- **System of record**: PostgreSQL (JSONB-first, per-module schema isolation).
- **Queues & coordination**: PostgreSQL (`SKIP LOCKED`, advisory locks when needed).
- **Cache/acceleration only**: Valkey/Redis (ephemeral).
- **MongoDB**: only when you have a *clear* need for very large, read-optimized snapshot workloads (e.g., extremely large historical graphs), and you can regenerate those snapshots deterministically from the Postgres source-of-truth.
When MongoDB is justified:
- Interactive exploration over hundreds of millions of nodes/edges where denormalized reads beat relational joins.
- Snapshot cadence is batchy (hourly/daily) and you can re-emit snapshots deterministically.
- You need to isolate read spikes from transactional control-plane writes.
## 1. MODULE-SCHEMA MAPPING
| Module | Schema | Primary Tables |
@@ -132,6 +145,23 @@ where j.id = cte.id
returning j.*;
```
### 3.3.1 Advisory Locks (coordination / idempotency guards)
Use advisory locks for per-tenant singleton work or "at-most-once" critical sections (do not hold them while doing long-running work):
```sql
-- Acquire (per tenant, per artifact) for the duration of the transaction
select pg_try_advisory_xact_lock(hashtextextended('recalc:' || $1 || ':' || $2, 0));
```
### 3.3.2 LISTEN/NOTIFY (nudge, not a durable queue)
Use `LISTEN/NOTIFY` to wake workers quickly after inserting work into a durable table:
```sql
-- NOTIFY only accepts a string literal payload; use pg_notify() for dynamic JSON
select pg_notify('stella_scan', json_build_object('purl', $1, 'priority', 5)::text);
```
### 3.4 Temporal Pattern (Unknowns)
```sql
@@ -281,6 +311,50 @@ create index brin_scan_events_time
- Derived data modeled as projection tables or materialized views
- Idempotency enforced in DB: unique keys for imports/jobs/results
### 8.4 Materialized Views vs Projection Tables
Materialized views are acceptable when:
- You can refresh them deterministically at a defined cadence (owned by a specific worker/job).
- You can afford full refresh cost, or the dataset is bounded.
- You provide a unique index to enable `REFRESH MATERIALIZED VIEW CONCURRENTLY`.
Prefer projection tables when:
- You need incremental updates (on import/scan completion).
- You need deterministic point-in-time snapshots per scan manifest (replay/audit).
- Refresh cost would scale with the entire dataset on every change.
Checklist:
- Every derived read model declares: owner, refresh cadence/trigger, retention, and idempotency key.
- No UI/API endpoint depends on a heavy non-materialized view for hot paths.
### 8.5 Queue + Outbox Rules (avoid deadlocks)
Queue claim rules:
- Claim in a short transaction (commit immediately after lock acquisition).
- Do work outside the transaction.
- On failure: increment attempts, compute backoff into `run_after`, and release locks.
- Define a DLQ condition (`attempts > N`) that is queryable and observable.
Outbox dispatch rules:
- Dispatch is idempotent (consumer must tolerate duplicates).
- The dispatcher writes a stable delivery attempt record (`dispatched_at`, `dispatch_attempts`, `error`).
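The backoff and DLQ rules above reduce to two pure functions, which keeps them testable and replay-stable. A Python sketch, assuming hypothetical names (`next_run_after`, `is_dead_letter`) and a deterministic no-jitter policy in line with the determinism rules elsewhere in this guide:

```python
from datetime import datetime, timedelta

def next_run_after(attempts: int, now: datetime,
                   base_seconds: int = 30, cap_seconds: int = 3600) -> datetime:
    """Deterministic exponential backoff: no jitter, and the clock is passed in
    explicitly rather than read inside the algorithm."""
    delay = min(base_seconds * (2 ** max(attempts - 1, 0)), cap_seconds)
    return now + timedelta(seconds=delay)

def is_dead_letter(attempts: int, max_attempts: int = 5) -> bool:
    """DLQ condition (`attempts > N`) kept as a pure, queryable predicate."""
    return attempts > max_attempts
```

The worker computes `run_after` from these outside the claim transaction, writes it back, and releases the row; the DLQ predicate doubles as the filter for the observability query.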
### 8.6 Migration Safety Rules
- Create/drop indexes concurrently on large tables (`CREATE INDEX CONCURRENTLY`, `DROP INDEX CONCURRENTLY`).
- Add `NOT NULL` in stages: add nullable column → backfill in batches → enforce constraint → then add default (if needed).
- Avoid long-running `ALTER TABLE` on high-volume tables without a lock plan.
### 8.7 Definition of Done (new table/view)
A PR adding a table/view is incomplete unless it includes:
- Table classification (SoR / projection / queue / event).
- Primary key + idempotency unique key.
- Tenant scoping strategy (and RLS policy when applicable).
- Index plan mapped to the top 13 query patterns (include `EXPLAIN (ANALYZE, BUFFERS)` output).
- Retention plan (partitioning and drop policy for high-volume tables).
- Refresh/update plan for derived models (owner + cadence).
## 9. FEATURE FLAG SCHEMA
```sql

View File

@@ -25,6 +25,7 @@ EvidenceID = hash(canonical_evidence_json)
ReasoningID = hash(canonical_reasoning_json)
VEXVerdictID = hash(canonical_vex_json)
ProofBundleID = merkle_root(SBOMEntryID, EvidenceID[], ReasoningID, VEXVerdictID)
GraphRevisionID = merkle_root(nodes[], edges[], policyDigest, feedsDigest, toolchainDigest, paramsDigest)
TrustAnchorID = per-dependency anchor (public key + policy)
```
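A minimal sketch of the `merkle_root(...)` used for `ProofBundleID`, in Python for brevity. The doc fixes the leaf order (SBOMEntryID, EvidenceID[], ReasoningID, VEXVerdictID); the odd-level padding rule shown here (duplicate the last node) is an assumption, not a documented contract:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> str:
    """Pairwise-hash a fixed-order leaf list up to a single root digest."""
    if not leaves:
        raise ValueError("empty leaf set")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])      # assumed padding rule for odd levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()
```

Because the leaf order is part of the contract, reordering inputs changes the root, which is exactly what makes the bundle ID tamper-evident.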
@@ -61,6 +62,33 @@ public sealed record ProofSubject(
sbomId = sha256(canonical_sbom_bytes)
```
### 1.5 Graph Revision ID (graphRev)
Use `GraphRevisionID` as the stable snapshot identifier for an artifact's *decision graph* (facts + derived edges) so receipts, UI/API responses, logs, exports, and replays can be correlated without ambiguity.
Rules:
- Graph revisions are content-addressed: any input change produces a new `GraphRevisionID`.
- Inputs must be canonicalized (stable ordering, stable casing, UTC timestamps stripped/isolated) before hashing.
- A graph revision must bind to the scan manifest inputs: `sbomDigest`, `feedsDigest`, `policyDigest`, `toolchainDigest`, and `paramsDigest`.
Recommended string format:
```
graphRevisionId = "grv_sha256:" + sha256(canonical_graph_bytes)
```
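The format above can be sketched end-to-end: canonicalize the bound inputs, hash, prefix. A Python illustration; treating nodes/edges as sortable strings and the exact canonical-JSON layout are assumptions for the sketch:

```python
import hashlib
import json

def graph_revision_id(nodes, edges, policy_digest, feeds_digest,
                      toolchain_digest, params_digest) -> str:
    """Content-address the canonical graph form per the documented
    `grv_sha256:` format; any input change yields a new ID."""
    canonical = json.dumps({
        "nodes": sorted(nodes),              # stable ordering before hashing
        "edges": sorted(edges),
        "policyDigest": policy_digest,
        "feedsDigest": feeds_digest,
        "toolchainDigest": toolchain_digest,
        "paramsDigest": params_digest,
    }, sort_keys=True, separators=(",", ":"))
    return "grv_sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Reordering the node list does not change the ID, but changing any bound digest (policy, feeds, toolchain, params) does, which is what lets receipts and replays correlate unambiguously.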
### 1.6 Proof-of-Integrity Graph (build/runtime ancestry)
Treat provenance as a first-class, append-only graph so "what is running?" can be traced back to "how was it built and attested?"
Minimum model:
- Nodes: `repo`, `commit`, `build`, `sbom`, `attestation`, `image`, `container`, `host`
- Edges: `built_from`, `scanned_with`, `attests`, `deployed_as`, `executes_on`, `derived_from`
- IDs: use content digests where possible (image digest, SBOM hash, DSSE hash); stable IDs for non-content entities (run IDs, host IDs)
Operational rules:
- Never delete/overwrite nodes; mark superseded instead.
- Every UI traversal must be backed by API queries over this graph (no UI-only inference).
## 2. DSSE ENVELOPE STRUCTURES
### 2.1 Evidence Statement
@@ -165,7 +193,32 @@ Signer: VEXer key or vendor key
Signer: Authority key
### 2.5 Verdict Receipt Statement (per finding/verdict)
Use a receipt to bind the *final surfaced decision* to `graphRevisionId` and to the upstream proof objects (evidence/reasoning/VEX/spine). This is the primary export object for audit kits and benchmarks.
```json
{
"payloadType": "application/vnd.in-toto+json",
"payload": {
"_type": "https://in-toto.io/Statement/v1",
"subject": [{"name": "<ArtifactID>", "digest": {"sha256": "..."}}],
"predicateType": "verdict.stella/v1",
"predicate": {
"graphRevisionId": "<GraphRevisionID>",
"findingKey": {"sbomEntryId": "<SBOMEntryID>", "vulnerabilityId": "CVE-XXXX-YYYY"},
"rule": {"id": "POLICY-RULE-123", "version": "v2.3.1"},
"decision": {"status": "block|warn|pass", "reason": "short human-readable summary"},
"inputs": {"sbomDigest": "sha256:...", "feedsDigest": "sha256:...", "policyDigest": "sha256:..."},
"outputs": {"proofBundleId": "<ProofBundleID>", "reasoningId": "<ReasoningID>", "vexVerdictId": "<VEXVerdictID>"},
"createdAt": "2025-12-14T00:00:00Z"
}
},
"signatures": [{"keyid": "<KID>", "sig": "BASE64(SIG)"}]
}
```
### 2.6 SBOM Linkage Statement
```json
{

View File

@@ -5,6 +5,7 @@
- 13-Dec-2025 - Designing the CallStack Reachability Engine
- 03-Dec-2025 - Reachability Benchmarks and Moat Metrics
- 09-Dec-2025 - Caching Reachability the Smart Way
- 06-Dec-2025 - Reachability Methods Worth Testing This Week
- 04-Dec-2025 - Ranking Unknowns in Reachability Graphs
- 02-Dec-2025 - Designing Deterministic Reachability UX
- 05-Dec-2025 - Design Notes on SmartDiff and CallStack Analysis
@@ -69,6 +70,48 @@
- No machine-local files
- No system clock inside algorithms
### 1.4 Build Once, Query Many (avoid all-pairs precompute)
Separate *graph construction* from *reachability queries*:
1. **Build step (once per artifact/version)**
- Produce a deterministic call graph index: `CallGraph.v1.json` (or a binary format with an accompanying canonical JSON manifest and digest).
- Persist it content-addressed and bind it into the `ReplayManifest`.
2. **Query step (per entrypoint/sink query)**
- Run a bounded search (BFS / bidirectional BFS / A*) over the stored index.
- Return: `reachable`, a canonical shortest path, and `why[]` evidence (callsite/method IDs).
Rule: never compute full transitive-closure tables unless the graph is proven small and bounded; prefer per-query search + caching keyed by immutable inputs.
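The query step can be sketched as a bounded BFS with sorted neighbor expansion, so equal-length paths always tie-break the same way. A Python illustration over a plain adjacency map (the real index format is `CallGraph.v1.json`; this helper and its signature are illustrative):

```python
from collections import deque

def shortest_path(adj: dict[str, list[str]], src: str, dst: str,
                  max_depth: int = 64):
    """Bounded BFS returning one canonical shortest path, or None.

    Sorted neighbor expansion makes tie-breaks deterministic; max_depth
    bounds the search instead of precomputing transitive closure.
    """
    if src == dst:
        return [src]
    parent = {src: None}
    frontier = deque([(src, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for nxt in sorted(adj.get(node, [])):   # stable ordering => determinism
            if nxt in parent:
                continue
            parent[nxt] = node
            if nxt == dst:
                path = [dst]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            frontier.append((nxt, depth + 1))
    return None  # not reachable within the bound
```

The returned node list doubles as the `why[]` skeleton: each hop maps back to callsite/method IDs in the stored index.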
### 1.5 Deterministic Reachability Cache Key
Cache *query results*, not graph construction:
```
reachabilityCacheKey = (
graphRevisionId,
algorithmId,
fromSymbolId,
toSymbolId,
contextHash // entrypoints, flags/env, runtime profile selector
)
```
Cache requirements:
- Include the canonical path in cache entries; tie-break with stable neighbor ordering.
- Cache negative results explicitly (`reachable=false`) and represent uncertainty (`reachable=null`) without coercion.
- Invalidate by construction: when `graphRevisionId` changes, cache keys naturally change.
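The key tuple above hashes down to a single content-addressed string. A minimal Python sketch (helper names are illustrative; the tri-state entry shows `reachable` as True/False/None without coercion):

```python
import hashlib
import json

def reachability_cache_key(graph_revision_id: str, algorithm_id: str,
                           from_symbol_id: str, to_symbol_id: str,
                           context_hash: str) -> str:
    """Content-address the query inputs; a new graphRevisionId changes every
    key, so invalidation happens by construction."""
    canonical = json.dumps([graph_revision_id, algorithm_id,
                            from_symbol_id, to_symbol_id, context_hash],
                           separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def cache_entry(reachable, path, why):
    """Entry keeps tri-state reachability (True / False / None=unknown)."""
    return {"reachable": reachable, "path": path, "why": why}
```

Negative and unknown results are cached as first-class values rather than cache misses, so repeated "not reachable" queries stay cheap and auditable.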
### 1.6 Compositional Library Summaries (ReachCheck-style option)
For third-party libraries, evaluate a compositional approach:
- Precompute *library reachability summaries* once per `(purl, version, digest)` offline.
- Merge summaries with the application graph at query time.
- Store each summary as content-addressed evidence with its own digest and tool version.
If using matrix-based transitive-closure summaries, treat them as an optimization layer only: surface any unsoundness/unknowns explicitly and keep all outputs deterministic (no sampling, no nondeterministic parallel traversal ordering).
## 2. DATA CONTRACTS
### 2.1 CallGraph.v1.json Schema
@@ -79,14 +122,14 @@
  "scanKey": "uuid",
  "language": "dotnet|java|node|python|go|rust|binary",
  "artifacts": [{
    "artifactKey": "<artifactKey>",
    "kind": "assembly|jar|module|binary",
    "sha256": "<sha256>"
  }],
  "nodes": [{
    "nodeId": "<nodeId>",
    "artifactKey": "<artifactKey>",
    "symbolKey": "Namespace.Type::Method(<signature>)",
    "visibility": "public|internal|private|unknown",
    "isEntrypointCandidate": false
  }],
@@ -98,7 +141,7 @@
    "weight": 1.0
  }],
  "entrypoints": [{
    "nodeId": "<nodeId>",
    "kind": "http|grpc|cli|job|event|unknown",
    "route": "/api/orders/{id}",
    "framework": "aspnetcore|minimalapi|spring|express|unknown"
@@ -115,19 +158,19 @@
  "collectedAt": "2025-12-14T10:00:00Z",
  "environment": {
    "os": "linux|windows",
    "k8s": {"namespace": "<namespace>", "pod": "<pod>", "container": "<container>"},
    "imageDigest": "sha256:<digest>",
    "buildId": "<buildId>"
  },
  "samples": [{
    "timestamp": "<utc-iso8601>",
    "pid": 1234,
    "threadId": 77,
    "frames": ["nodeId","nodeId","nodeId"],
    "sampleWeight": 1.0
  }],
  "loadedArtifacts": [{
    "artifactKey": "<artifactKey>",
    "evidence": "loaded_module|mapped_file|jar_loaded"
  }]
}
@@ -140,12 +183,12 @@
  "schema": "stella.replaymanifest.v1",
  "scanId": "uuid",
  "inputs": {
    "sbomDigest": "sha256:<digest>",
    "callGraphs": [{"language":"dotnet","digest":"sha256:<digest>"}],
    "runtimeEvidence": [{"digest":"sha256:<digest>"}],
    "concelierSnapshot": "sha256:<digest>",
    "excititorSnapshot": "sha256:<digest>",
    "policyDigest": "sha256:<digest>"
  }
}
```

View File

@@ -249,6 +249,32 @@ score =
  + policy weight: +300 if decision BLOCK, +100 if WARN
```
## 10. PROVENANCE-RICH BINARIES (BINARY SCA + PROVENANCE + SARIF)
Smart-Diff becomes materially stronger when it can reason about *binary-level* deltas (symbols/sections/hardening), not only package versions.
Required extractors (deterministic):
- ELF/PE/Mach-O headers, sections, imports/exports, build-id, rpaths
- Symbol tables (public + demangled), string tables, debug info pointers (DWARF/PDB when present)
- Compiler/linker fingerprints (e.g., `.comment`, PE version info, toolchain IDs)
- Per-section and per-function rolling hashes (stable across identical bytes)
- Optional: Bloom filter for symbol presence proofs (binary digest + filter digest)
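The optional symbol-presence Bloom filter can be sketched as below. A minimal deterministic Python illustration (the `SymbolBloom` class, its sizing, and the use of SHA-256 slices as hash positions are assumptions for the sketch, not the extractor's actual format):

```python
import hashlib

class SymbolBloom:
    """Tiny deterministic Bloom filter for symbol-presence hints.

    False positives are possible; false negatives are not, so a miss is a
    sound proof of absence for the indexed symbol set.
    """
    def __init__(self, bits: int = 1 << 16, k: int = 4):
        self.bits, self.k = bits, k
        self.array = bytearray(bits // 8)

    def _positions(self, symbol: str):
        # Derive k positions from one SHA-256 digest: deterministic, seed-free.
        digest = hashlib.sha256(symbol.encode("utf-8")).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[i * 4:(i + 1) * 4], "big") % self.bits

    def add(self, symbol: str):
        for pos in self._positions(symbol):
            self.array[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, symbol: str) -> bool:
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(symbol))
```

Shipping the filter digest alongside the binary digest, as the bullet above suggests, lets a verifier check presence claims without re-extracting the symbol table.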
Provenance capture (per binary):
- Compiler name/version, target triple, LTO mode, linker name/version
- Hardening flags (PIE/RELRO/CFGuard/CET/FORTIFY, stack protector)
- Link inputs (libraries + order) and build materials (git commit, dependency lock digests)
Attestation output:
- Emit a DSSE-wrapped in-toto statement per binary (SLSA provenance compatible) with subject = binary sha256.
CI output (developer-facing):
- Emit SARIF 2.1.0 (`tool`: `StellaOps.BinarySCA`) so binary findings and hardening regressions can surface in code scanning.
- Each SARIF result references the binary digest, symbol/section, and the attestation digest(s) needed to verify the claim.
Smart-Diff linkage rule:
- When a binary changes, map file delta → binary digest delta → symbol delta → impacted sinks/vulns, then re-score only the impacted scope.
---
**Document Version**: 1.0

View File

@@ -449,6 +449,16 @@ jobs:
- Mutation score ≥ 70%
- Performance regressions < 10%
## 17. BENCH HARNESSES (SIGNED, REPRODUCIBLE METRICS)
Use the repo bench harness for moat-grade, reproducible comparisons and audit kits:
- Harness root: `bench/README.md`
- Signed finding bundles + verifiers live under `bench/findings/` and `bench/tools/`
- Baseline comparisons and rollups live under `bench/results/`
Guardrail:
- Any change to scanning/policy/proof logic must be covered by at least one deterministic bench scenario (or an extension of an existing one).
---
**Document Version**: 1.0

View File

@@ -24,6 +24,7 @@
3. **Provenance**: Attestation/DSSE + build ancestry (image → layer → artifact → commit)
4. **VEX/CSAF status**: affected/not-affected/under-investigation + reason
5. **Diff**: SBOM or VEX delta since last scan (smart-diff)
6. **Graph revision + receipt**: `graphRevisionId` plus the signed verdict receipt linking to upstream evidence (DSSE/Rekor when available)
## 3. KPIS

View File

@@ -356,7 +356,8 @@ interface EvidencePanel {
```
#### ProofSpine Component
- Displays: `graphRevisionId`, bundle hashes (SBOM/VEX/proof), receipt digest, and Rekor details (when present)
- Copy affordances: copy `graphRevisionId`, `proofBundleId`, and receipt digest in one click
- Verification status: `Verified` | `Unverified` | `Failed verification` | `Expired/Outdated`
- "Verify locally" copy button with exact commands