partly implemented or unimplemented features - now implemented

This commit is contained in:
master
2026-02-09 08:53:51 +02:00
parent 1bf6bbf395
commit 4bdc298ec1
674 changed files with 90194 additions and 2271 deletions

View File

@@ -177,6 +177,70 @@ Determinization scores are exposed to SPL policies via the `signals.trust.*` and
EWS weights are externalized to versioned JSON manifests in `etc/weights/`. The unified score facade (`IUnifiedScoreService`) loads weights from these manifests rather than using compiled defaults, enabling auditable weight changes without code modifications. See [Unified Score Architecture](../../technical/scoring-algebra.md) §4 for manifest schema and versioning rules.
### 3.1.1 · Trust Score Algebra Facade
The **TrustScoreAlgebraFacade** (`ITrustScoreAlgebraFacade`) provides a unified entry point composing TrustScoreAggregator + K4Lattice + ScorePolicy into a single deterministic scoring pipeline.
```csharp
public interface ITrustScoreAlgebraFacade
{
Task<TrustScoreResult> ComputeTrustScoreAsync(TrustScoreRequest request, CancellationToken ct);
TrustScoreResult ComputeTrustScore(TrustScoreRequest request);
}
```
**Pipeline steps:**
1. Calculate uncertainty entropy from signal snapshot
2. Aggregate weighted signal scores via TrustScoreAggregator
3. Compute K4 lattice verdict (Unknown/True/False/Conflict)
4. Extract dimension scores (BaseSeverity, Reachability, Evidence, Provenance)
5. Compute weighted final score in basis points (0-10000)
6. Determine risk tier (Info/Low/Medium/High/Critical)
7. Produce Score.v1 predicate for DSSE attestation
**Score.v1 Predicate Format:**
All numeric scores use **basis points (0-10000)** for bit-exact determinism:
```json
{
"predicateType": "https://stella-ops.org/predicates/score/v1",
"artifactId": "pkg:maven/com.example/mylib@1.0.0",
"vulnerabilityId": "CVE-2024-1234",
"trustScoreBps": 7250,
"tier": "High",
"latticeVerdict": "True",
"uncertaintyBps": 2500,
"dimensions": {
"baseSeverityBps": 5000,
"reachabilityBps": 10000,
"evidenceBps": 6000,
"provenanceBps": 8000,
"epssBps": 3500,
"vexBps": 10000
},
"weightsUsed": {
"baseSeverity": 1000,
"reachability": 4500,
"evidence": 3000,
"provenance": 1500
},
"policyDigest": "sha256:abc123...",
"computedAt": "2026-01-15T12:00:00Z",
"tenantId": "tenant-123"
}
```
**Risk Tier Mapping:**
| Score (bps) | Tier |
|-------------|------|
| ≥ 9000 | Critical |
| ≥ 7000 | High |
| ≥ 4000 | Medium |
| ≥ 1000 | Low |
| < 1000 | Info |
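Steps 4–6 of the pipeline and the tier table above can be sketched as follows. This is an illustrative Python sketch of the basis-point arithmetic, not the C# implementation; the uncertainty adjustment from step 1 is omitted, so the result is the pre-adjustment weighted score.

```python
def weighted_score_bps(dimensions_bps: dict[str, int], weights_bps: dict[str, int]) -> int:
    """Weighted average in basis points; weights are bps summing to 10000."""
    assert sum(weights_bps.values()) == 10000
    total = sum(dimensions_bps[name] * w for name, w in weights_bps.items())
    return total // 10000  # integer math keeps the result bit-exact

def risk_tier(score_bps: int) -> str:
    """Map a basis-point score to the tier table above."""
    for threshold, tier in ((9000, "Critical"), (7000, "High"),
                            (4000, "Medium"), (1000, "Low")):
        if score_bps >= threshold:
            return tier
    return "Info"

# Dimension scores and weights taken from the Score.v1 example above
dims = {"baseSeverity": 5000, "reachability": 10000, "evidence": 6000, "provenance": 8000}
weights = {"baseSeverity": 1000, "reachability": 4500, "evidence": 3000, "provenance": 1500}
print(weighted_score_bps(dims, weights), risk_tier(weighted_score_bps(dims, weights)))
# → 8000 High (before the uncertainty adjustment that yields trustScoreBps)
```

Because every intermediate value is an integer in basis points, the same inputs produce the same score on any platform, which is what makes the Score.v1 predicate replayable.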
### 3.2 · License compliance configuration
License compliance evaluation runs during SBOM evaluation when enabled in
@@ -871,6 +935,7 @@ The Interop Layer provides bidirectional policy exchange between Stella's native
| Format | Schema | Direction | Notes |
|--------|--------|-----------|-------|
| **PolicyPack v2 (JSON)** | `policy.stellaops.io/v2` | Import + Export | Canonical format with typed gates, environment overrides, remediation hints |
| **PolicyPack v2 (YAML)** | `policy.stellaops.io/v2` | Import + Export | Deterministic YAML with sorted keys; YAML→JSON roundtrip for validation |
| **OPA/Rego** | `package stella.release` | Export (+ Import with pattern matching) | Deny-by-default pattern, `remediation` output rules |
### 13.2 · Architecture
@@ -878,8 +943,9 @@ The Interop Layer provides bidirectional policy exchange between Stella's native
```mermaid
graph TD
subgraph Interop["StellaOps.Policy.Interop"]
Exporter[JsonPolicyExporter / RegoPolicyExporter]
Importer[JsonPolicyImporter / RegoPolicyImporter]
Exporter[JsonPolicyExporter / YamlPolicyExporter / RegoPolicyExporter]
Importer[JsonPolicyImporter / YamlPolicyImporter / RegoPolicyImporter]
DiffMerge[PolicyDiffMergeEngine]
Validator[PolicySchemaValidator]
Generator[RegoCodeGenerator]
Resolver[RemediationResolver]
@@ -887,7 +953,7 @@ graph TD
Detector[FormatDetector]
end
subgraph Consumers
CLI[stella policy export/import/validate/evaluate]
CLI[stella policy export/import/validate/evaluate/diff/merge]
API[Platform API /api/v1/policy/interop]
UI[Policy Editor UI]
end
@@ -895,9 +961,11 @@ graph TD
CLI --> Exporter
CLI --> Importer
CLI --> Validator
CLI --> DiffMerge
API --> Exporter
API --> Importer
API --> Validator
API --> DiffMerge
UI --> API
Exporter --> Generator
@@ -946,7 +1014,51 @@ All exports and evaluations are deterministic:
- No time-dependent logic in deterministic mode
- `outputDigest` in evaluation results enables replay verification
### 13.6 · Implementation Reference
### 13.6 · YAML Format Support
> **Sprint:** SPRINT_20260208_048_Policy_policy_interop_framework
YAML export/import operates on the same `PolicyPackDocument` model as JSON. The YAML format is useful for human-editable policy files and GitOps workflows.
**Export** (`YamlPolicyExporter : IPolicyYamlExporter`):
- Converts `PolicyPackDocument` to a `SortedDictionary` intermediate for deterministic key ordering
- Serializes via YamlDotNet (CamelCaseNamingConvention, DisableAliases, OmitNull)
- Produces SHA-256 digest for replay verification
- Supports environment filtering and remediation stripping (same options as JSON)
**Import** (`YamlPolicyImporter`):
- Deserializes YAML via YamlDotNet, then re-serializes as JSON
- Delegates to `JsonPolicyImporter` for validation (apiVersion, kind, duplicate gates/rules)
- Errors: `YAML_PARSE_ERROR`, `YAML_EMPTY`, `YAML_CONVERSION_ERROR`
**Format Detection** (`FormatDetector`):
- Content-based: detects `apiVersion:`, `---`, `kind:` patterns
- Extension-based: `.yaml`, `.yml` → `PolicyFormats.Yaml`
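The detection order can be sketched roughly as follows. This is an illustrative Python sketch of extension-then-content detection under the patterns listed above; the shipped `FormatDetector` heuristics may differ in detail.

```python
def detect_format(content: str, filename: str = "") -> str:
    """Crude format detection: file extension first, then content patterns."""
    if filename.endswith((".yaml", ".yml")):
        return "Yaml"
    if filename.endswith(".json"):
        return "Json"
    stripped = content.lstrip()
    if stripped.startswith("{"):
        return "Json"
    # YAML markers: document separator or top-level apiVersion:/kind: keys
    if stripped.startswith("---") or any(
        line.startswith(("apiVersion:", "kind:")) for line in stripped.splitlines()
    ):
        return "Yaml"
    return "Unknown"
```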
### 13.7 · Policy Diff/Merge Engine
> **Sprint:** SPRINT_20260208_048_Policy_policy_interop_framework
The `PolicyDiffMergeEngine` (`IPolicyDiffMerge`) compares and merges `PolicyPackDocument` instances structurally.
**Diff** produces `PolicyDiffResult` containing:
- Changes to metadata (name, version, description)
- Changes to settings (defaultAction, unknownsThreshold, stopOnFirstFailure, deterministicMode)
- Gate changes (by ID): added, removed, modified (action, type, config diffs)
- Rule changes (by Name): added, removed, modified (action, match diffs)
- Summary with counts of added/removed/modified and `HasChanges` flag
**Merge** applies one of three strategies via `PolicyMergeStrategy`:
| Strategy | Behavior |
|----------|----------|
| `OverlayWins` | Overlay values take precedence on conflict |
| `BaseWins` | Base values take precedence on conflict |
| `FailOnConflict` | Returns error with conflict details |
Merge output includes the merged `PolicyPackDocument` and a list of `PolicyMergeConflict` items (path, base value, overlay value).
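The three strategies can be sketched as a shallow merge over dict-shaped documents. This is an illustrative Python sketch, not the `PolicyDiffMergeEngine` itself; the real engine merges gates by ID and rules by Name rather than flat keys.

```python
def merge(base: dict, overlay: dict, strategy: str) -> tuple[dict, list[dict]]:
    """Shallow merge of two policy-like dicts, recording conflicts."""
    merged, conflicts = dict(base), []
    for key, overlay_value in overlay.items():
        if key in base and base[key] != overlay_value:
            conflicts.append({"path": key, "base": base[key], "overlay": overlay_value})
            if strategy == "FailOnConflict":
                raise ValueError(f"merge conflict at {key}")
            if strategy == "OverlayWins":
                merged[key] = overlay_value
            # BaseWins: keep the existing base value
        else:
            merged[key] = overlay_value
    return merged, conflicts
```

Even when a strategy resolves the conflict automatically, the conflict record is kept, mirroring the `PolicyMergeConflict` items (path, base value, overlay value) in the merge output.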
### 13.8 · Implementation Reference
| Component | Source File |
|-----------|-------------|
@@ -954,18 +1066,22 @@ All exports and evaluations are deterministic:
| Remediation Models | `src/Policy/__Libraries/StellaOps.Policy.Interop/Contracts/RemediationModels.cs` |
| Interfaces | `src/Policy/__Libraries/StellaOps.Policy.Interop/Abstractions/` |
| JSON Exporter | `src/Policy/__Libraries/StellaOps.Policy.Interop/Export/JsonPolicyExporter.cs` |
| YAML Exporter | `src/Policy/__Libraries/StellaOps.Policy.Interop/Export/YamlPolicyExporter.cs` |
| JSON Importer | `src/Policy/__Libraries/StellaOps.Policy.Interop/Import/JsonPolicyImporter.cs` |
| YAML Importer | `src/Policy/__Libraries/StellaOps.Policy.Interop/Import/YamlPolicyImporter.cs` |
| Rego Generator | `src/Policy/__Libraries/StellaOps.Policy.Interop/Rego/RegoCodeGenerator.cs` |
| Rego Importer | `src/Policy/__Libraries/StellaOps.Policy.Interop/Import/RegoPolicyImporter.cs` |
| Diff/Merge Engine | `src/Policy/__Libraries/StellaOps.Policy.Interop/DiffMerge/PolicyDiffMergeEngine.cs` |
| Embedded OPA | `src/Policy/__Libraries/StellaOps.Policy.Interop/Evaluation/EmbeddedOpaEvaluator.cs` |
| Remediation Resolver | `src/Policy/__Libraries/StellaOps.Policy.Interop/Evaluation/RemediationResolver.cs` |
| Format Detector | `src/Policy/__Libraries/StellaOps.Policy.Interop/Import/FormatDetector.cs` |
| Schema Validator | `src/Policy/__Libraries/StellaOps.Policy.Interop/Validation/PolicySchemaValidator.cs` |
| DI Registration | `src/Policy/__Libraries/StellaOps.Policy.Interop/DependencyInjection/PolicyInteropServiceCollectionExtensions.cs` |
| CLI Commands | `src/Cli/StellaOps.Cli/Commands/Policy/PolicyInteropCommandGroup.cs` |
| Platform API | `src/Platform/StellaOps.Platform.WebService/Endpoints/PolicyInteropEndpoints.cs` |
| JSON Schema | `docs/schemas/policy-pack-v2.schema.json` |
### 13.7 · CLI Interface
### 13.9 · CLI Interface
```bash
# Export to Rego
@@ -983,7 +1099,7 @@ stella policy evaluate --policy baseline.json --input evidence.json --environmen
Exit codes: `0` = success/allow, `1` = warn, `2` = block/errors, `10` = input-error, `12` = policy-error.
### 13.8 · Platform API
### 13.10 · Platform API
Group: `/api/v1/policy/interop` with tag `PolicyInterop`
@@ -995,7 +1111,7 @@ Group: `/api/v1/policy/interop` with tag `PolicyInterop`
| POST | `/evaluate` | `platform.policy.evaluate` | Evaluate policy against input |
| GET | `/formats` | `platform.policy.read` | List supported formats |
### 13.9 · OPA Supply Chain Evidence Input
### 13.11 · OPA Supply Chain Evidence Input
> **Sprint:** SPRINT_0129_001_Policy_supply_chain_evidence_input
@@ -1061,4 +1177,107 @@ allow {
---
*Last updated: 2026-01-29 (Sprint 0129_001).*
*Last updated: 2026-02-09 (Sprint 049 — Proof Studio UX).*
## 14 · Proof Studio (Explainable Confidence Scoring)
The Proof Studio UX provides visual, auditable evidence chains for every verdict decision. It bridges existing verdict rationale, score explanation, and counterfactual simulation data into composable views.
### 14.1 · Library Layout
```
StellaOps.Policy.Explainability/
├── VerdictRationale.cs # 4-line structured rationale model
├── VerdictRationaleRenderer.cs # Content-addressed render (text/md/JSON)
├── IVerdictRationaleRenderer.cs # Renderer interface
├── ProofGraphModels.cs # Proof graph DAG types
├── ProofGraphBuilder.cs # Deterministic graph builder
├── ScoreBreakdownDashboard.cs # Score breakdown dashboard model
├── ProofStudioService.cs # Composition + counterfactual integration
├── ServiceCollectionExtensions.cs # DI registration
└── GlobalUsings.cs
```
### 14.2 · Proof Graph
A proof graph is a directed acyclic graph (DAG) that visualizes the complete evidence chain from source artifacts to a final verdict decision.
| Node Type | Depth | Purpose |
|---|---|---|
| `Verdict` | 0 | Root: the final verdict + composite score |
| `PolicyRule` | 1 | Policy clause that triggered the decision |
| `Guardrail` | 1 | Score guardrail (cap/floor) that modified the score |
| `ScoreComputation` | 2 | Per-factor score contribution |
| `ReachabilityAnalysis` | 3 | Reachability evidence leaf |
| `VexStatement` | 3 | VEX attestation leaf |
| `Provenance` | 3 | Provenance attestation leaf |
| `SbomEvidence` | 3 | SBOM evidence leaf |
| `RuntimeSignal` | 3 | Runtime detection signal leaf |
| `AdvisoryData` | 3 | Advisory data leaf |
| `Counterfactual` | 0 | What-if hypothesis (overlay) |
Edge relations: `ProvidesEvidence`, `ContributesScore`, `Gates`, `Attests`, `Overrides`, `GuardrailApplied`.
Graph IDs are content-addressed (`pg:sha256:...`) from deterministic sorting of node and edge identifiers.
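Content addressing from sorted identifiers can be sketched as below. The exact canonical serialization (separator bytes, field order within nodes) is an assumption; only the shape — sort, concatenate, SHA-256, `pg:sha256:` prefix — comes from the text above.

```python
import hashlib

def graph_id(node_ids: list[str], edge_ids: list[str]) -> str:
    """Content-addressed graph ID from deterministically sorted node/edge identifiers."""
    canonical = "\n".join(sorted(node_ids)) + "\n--\n" + "\n".join(sorted(edge_ids))
    return "pg:sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the inputs are sorted before hashing, two builders that discover nodes in different orders still produce the same ID, and any change to the evidence chain produces a new one.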
### 14.3 · Score Breakdown Dashboard
The `ScoreBreakdownDashboard` exposes per-factor raw scores, weighted contributions, and percentages of the total:
```
ScoreBreakdownDashboard
├── CompositeScore (int)
├── ActionBucket (string)
├── Factors[] → FactorContribution
│ ├── FactorId / FactorName
│ ├── RawScore, Weight → WeightedContribution (computed)
│ ├── Confidence, IsSubtractive
│ └── PercentageOfTotal
├── GuardrailsApplied[] → GuardrailApplication
│ ├── ScoreBefore → ScoreAfter
│ └── Reason, Conditions
├── PreGuardrailScore
├── Entropy
└── NeedsReview
```
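The `FactorContribution` arithmetic can be sketched as below. This is an illustrative Python sketch under assumed semantics: weighted contribution = raw score × weight (negated for subtractive factors), and `PercentageOfTotal` computed over absolute weighted values.

```python
def factor_contributions(factors: list[tuple[str, float, float, bool]]) -> dict:
    """factors: list of (factor_id, raw_score, weight, is_subtractive)."""
    weighted = {
        fid: (-1 if subtractive else 1) * raw * weight
        for fid, raw, weight, subtractive in factors
    }
    total = sum(abs(v) for v in weighted.values()) or 1.0  # avoid divide-by-zero
    return {fid: {"weighted": w, "percentage": 100.0 * abs(w) / total}
            for fid, w in weighted.items()}
```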
### 14.4 · Counterfactual Explorer
The `AddCounterfactualOverlay()` method on `IProofGraphBuilder` adds hypothetical nodes to an existing proof graph. A `CounterfactualScenario` specifies factor overrides (factorId → hypothetical score) and an optional resulting composite score. The overlay:
1. Creates a `Counterfactual` node at depth 0 with the scenario label.
2. Connects overridden factor score nodes to the counterfactual node via `Overrides` edges.
3. Recomputes the content-addressed graph ID, making each scenario distinctly identifiable.
### 14.5 · Proof Studio Service (Integration)
The `IProofStudioService` is the primary integration surface:
| Method | Input | Output |
|---|---|---|
| `Compose(request)` | `ProofStudioRequest` (rationale + optional score factors + guardrails) | `ProofStudioView` (proof graph + optional score breakdown) |
| `ApplyCounterfactual(view, scenario)` | Existing view + `CounterfactualScenario` | Updated view with overlay |
The service bridges `ScoreFactorInput` (from scoring engine) to `FactorContribution` models and formats factor names for UI display.
### 14.6 · DI Registration
```csharp
services.AddVerdictExplainability();
// Registers:
// IVerdictRationaleRenderer → VerdictRationaleRenderer
// IProofGraphBuilder → ProofGraphBuilder
// IProofStudioService → ProofStudioService
```
### 14.7 · OTel Metrics
| Metric | Type | Description |
|---|---|---|
| `stellaops.proofstudio.views_composed_total` | Counter | Proof studio views composed |
| `stellaops.proofstudio.counterfactuals_applied_total` | Counter | Counterfactual scenarios applied |
### 14.8 · Tests
- `ProofGraphBuilderTests.cs`: 18 tests (graph construction, determinism, depth hierarchy, critical paths, counterfactual overlay, edge cases)
- `ProofStudioServiceTests.cs`: 10 tests (compose, score breakdown, guardrails, counterfactual, DI resolution)

View File

@@ -429,3 +429,151 @@ Policy:
MinConfidence: 0.75
MaxEntropy: 0.3
```
---
## 13. Delta-If-Present Calculations (TSF-004)
> **Sprint:** SPRINT_20260208_043_Policy_delta_if_present_calculations_for_missing_signals
The Delta-If-Present API provides "what-if" analysis for missing signals, showing hypothetical score changes if specific evidence were obtained.
### 13.1 Purpose
When making release decisions with incomplete evidence, operators need to understand:
- **Gap prioritization:** Which missing signals would have the most impact?
- **Score bounds:** What is the possible range of trust scores given current gaps?
- **Risk simulation:** What would the score be if a missing signal had a specific value?
### 13.2 API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/v1/policy/delta-if-present/signal` | POST | Calculate delta for a single signal |
| `/api/v1/policy/delta-if-present/analysis` | POST | Full gap analysis with prioritization |
| `/api/v1/policy/delta-if-present/bounds` | POST | Calculate min/max score bounds |
### 13.3 Single Signal Delta
Calculate hypothetical score change for one missing signal:
**Request:**
```json
{
"snapshot": {
"cve": "CVE-2024-1234",
"purl": "pkg:maven/org.example/lib@1.0.0",
"vex": { "state": "not_queried" },
"epss": { "state": "not_queried" },
"reachability": {
"state": "queried",
"value": { "status": "Reachable", "analyzed_at": "2026-01-15T00:00:00Z" }
},
"runtime": { "state": "not_queried" },
"backport": { "state": "not_queried" },
"sbom": {
"state": "queried",
"value": { "sbom_digest": "sha256:abc", "format": "SPDX" }
}
},
"signal_name": "VEX",
"assumed_value": 0.0
}
```
**Response:**
```json
{
"signal": "VEX",
"current_score": 0.65,
"hypothetical_score": 0.52,
"score_delta": -0.13,
"assumed_value": 0.0,
"signal_weight": 0.25,
"current_entropy": 0.60,
"hypothetical_entropy": 0.35,
"entropy_delta": -0.25
}
```
### 13.4 Full Gap Analysis
Analyze all missing signals with best/worst/prior cases:
**Response:**
```json
{
"cve": "CVE-2024-1234",
"purl": "pkg:maven/org.example/lib@1.0.0",
"current_score": 0.65,
"current_entropy": 0.60,
"gap_analysis": [
{
"signal": "VEX",
"gap_reason": "NotQueried",
"best_case": {
"assumed_value": 0.0,
"hypothetical_score": 0.52,
"score_delta": -0.13
},
"worst_case": {
"assumed_value": 1.0,
"hypothetical_score": 0.77,
"score_delta": 0.12
},
"prior_case": {
"assumed_value": 0.5,
"hypothetical_score": 0.64,
"score_delta": -0.01
},
"max_impact": 0.25
}
],
"prioritized_gaps": ["VEX", "Reachability", "EPSS", "Runtime", "Backport", "SBOMLineage"],
"computed_at": "2026-01-15T12:00:00Z"
}
```
### 13.5 Score Bounds
Calculate the possible range of trust scores:
**Response:**
```json
{
"cve": "CVE-2024-1234",
"purl": "pkg:maven/org.example/lib@1.0.0",
"current_score": 0.65,
"current_entropy": 0.60,
"minimum_score": 0.35,
"maximum_score": 0.85,
"range": 0.50,
"gap_count": 4,
"missing_weight_percentage": 65.0,
"computed_at": "2026-01-15T12:00:00Z"
}
```
### 13.6 Signal Weights
Default signal weights used in delta calculations:
| Signal | Weight | Default Prior |
|--------|--------|---------------|
| VEX | 0.25 | 0.5 |
| Reachability | 0.25 | 0.5 |
| EPSS | 0.15 | 0.3 |
| Runtime | 0.15 | 0.3 |
| Backport | 0.10 | 0.5 |
| SBOMLineage | 0.10 | 0.5 |
Custom weights can be passed in requests to override defaults.
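Using the default weights and priors above, the delta and bounds calculations can be sketched with a simple weighted-sum model. This is an illustrative Python sketch; the assumption that the score is a plain linear combination of signal values (with priors standing in for missing signals) is mine and does not reproduce the exact figures in the example responses, which include entropy effects.

```python
WEIGHTS = {"VEX": 0.25, "Reachability": 0.25, "EPSS": 0.15,
           "Runtime": 0.15, "Backport": 0.10, "SBOMLineage": 0.10}
PRIORS = {"VEX": 0.5, "Reachability": 0.5, "EPSS": 0.3,
          "Runtime": 0.3, "Backport": 0.5, "SBOMLineage": 0.5}

def score(values: dict) -> float:
    """Weighted sum; missing signals fall back to their default prior."""
    return sum(w * values.get(s, PRIORS[s]) for s, w in WEIGHTS.items())

def signal_delta(values: dict, signal: str, assumed: float) -> float:
    """Hypothetical score change if one missing signal took the assumed value."""
    return score({**values, signal: assumed}) - score(values)

def bounds(values: dict) -> tuple[float, float]:
    """Min/max score when every missing signal takes its worst/best value in [0, 1]."""
    missing = [s for s in WEIGHTS if s not in values]
    low = score({**values, **{s: 0.0 for s in missing}})
    high = score({**values, **{s: 1.0 for s in missing}})
    return low, high
```

Under this model the maximum impact of a missing signal is exactly its weight, which is why `prioritized_gaps` in the full analysis orders signals by descending weight when no other evidence distinguishes them.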
### 13.7 Use Cases
1. **Evidence Prioritization:** Determine which signals to acquire first based on maximum impact
2. **Risk Bounding:** Understand worst-case score before making release decisions
3. **Sensitivity Analysis:** Explore how different evidence values would affect outcomes
4. **Operator Guidance:** Help operators focus collection efforts on high-impact signals
5. **Audit Trail:** Document "what-if" analysis as part of release decision rationale

View File

@@ -69,7 +69,37 @@ src/Policy/__Libraries/StellaOps.Policy.Determinization/
│ ├── IDecayedConfidenceCalculator.cs
│ ├── DecayedConfidenceCalculator.cs # Half-life decay application
│ ├── SignalWeights.cs # Configurable signal weights
│   ├── PriorDistribution.cs          # Default priors for missing signals
│ ├── EvidenceWeightedScoring/ # 6-dimension EWS model
│ │ ├── EwsDimension.cs # RCH/RTS/BKP/XPL/SRC/MIT enum
│ │ ├── IEwsDimensionNormalizer.cs # Pluggable normalizer interface
│ │ ├── EwsSignalInput.cs # Raw signal inputs
│ │ ├── EwsModels.cs # Scores, weights, guardrails
│ │ ├── IGuardrailsEngine.cs # Guardrails enforcement interface
│ │ ├── GuardrailsEngine.cs # Caps/floors (KEV, backport, etc.)
│ │ ├── IEwsCalculator.cs # Unified calculator interface
│ │ ├── EwsCalculator.cs # Orchestrates normalizers + guardrails
│ │ └── Normalizers/
│ │ ├── ReachabilityNormalizer.cs
│ │ ├── RuntimeSignalsNormalizer.cs
│ │ ├── BackportEvidenceNormalizer.cs
│ │ ├── ExploitabilityNormalizer.cs
│ │ ├── SourceConfidenceNormalizer.cs
│ │ └── MitigationStatusNormalizer.cs
│   ├── Triage/                        # Decay-based triage queue (Sprint 050)
│ ├── TriageModels.cs # TriagePriority, TriageItem, TriageQueueSnapshot, TriageQueueOptions
│ ├── ITriageQueueEvaluator.cs # Batch + single evaluation interface
│ ├── TriageQueueEvaluator.cs # Priority classification, days-until-stale, OTel metrics
│ ├── ITriageObservationSource.cs # Source for observation candidates
│ ├── ITriageReanalysisSink.cs # Sink interface for re-analysis queue
│ ├── InMemoryTriageReanalysisSink.cs # ConcurrentQueue-based default sink
│ └── UnknownTriageQueueService.cs # Fetch→evaluate→enqueue cycle orchestrator
│ └── WeightManifest/ # Versioned weight manifests (Sprint 051)
│ ├── WeightManifestModels.cs # WeightManifestDocument, weights, guardrails, buckets, diff models
│ ├── WeightManifestHashComputer.cs # Deterministic SHA-256 with canonical JSON (excludes contentHash)
│ ├── IWeightManifestLoader.cs # Interface: list, load, select, validate, diff
│ ├── WeightManifestLoader.cs # File-based discovery, effectiveFrom selection, OTel metrics
│ └── WeightManifestCommands.cs # CLI backing: list, validate, diff, activate, hash
├── Policies/
│ ├── IDeterminizationPolicy.cs
│ ├── DeterminizationPolicy.cs # Allow/quarantine/escalate rules
@@ -913,6 +943,158 @@ public static class DeterminizationMetrics
}
```
## Evidence-Weighted Score (EWS) Model
The EWS model extends the Determinization subsystem with a **6-dimension scoring
pipeline** that replaces ad-hoc signal weighting with a unified, pluggable, and
guardrail-enforced composite score.
### Dimensions
Each dimension maps a family of raw signals to a **normalised risk score 0–100**
(higher = riskier) and a **confidence 0.0–1.0**:
| Code | Dimension | Key signals | Score semantics |
|------|-----------|-------------|-----------------|
| RCH | Reachability | Call-graph tier R0–R4, runtime trace | Higher = more reachable |
| RTS | RuntimeSignals | Instrumentation coverage, invocation count, APM | Higher = more actively exercised |
| BKP | BackportEvidence | Vendor confirmation, binary-analysis confidence | Higher = no backport / low confidence |
| XPL | Exploitability | EPSS, KEV, exploit-kit availability, PoC age, CVSS | Higher = more exploitable |
| SRC | SourceConfidence | SBOM completeness, signatures, attestation count | **Inverted**: high confidence = low risk |
| MIT | MitigationStatus | VEX status, workarounds, network controls | Higher = less mitigated |
### Default Weights
```
RCH 0.25 XPL 0.20 RTS 0.15
BKP 0.15 SRC 0.15 MIT 0.10
─── Total: 1.00 ───
```
A **Legacy** preset preserves backward-compatible weights aligned with the
original `SignalWeights` record.
### Guardrails
After weighted scoring, a `GuardrailsEngine` enforces hard caps and floors:
| Guardrail | Default | Trigger condition |
|-----------|---------|-------------------|
| `kev_floor` | 70 | `IsInKev == true` — floor the score |
| `backported_cap` | 20 | `BackportDetected && Confidence ≥ 0.8` — cap the score |
| `not_affected_cap` | 25 | `VexStatus == not_affected` — cap the score |
| `runtime_floor` | 30 | `RuntimeTraceConfirmed == true` — floor the score |
| `speculative_cap` | 60 | Overall confidence < `MinConfidenceThreshold` (0.3) — cap |
Guardrails are applied in priority order (KEV first). The resulting
`EwsCompositeScore` records which guardrails fired and whether the score was
adjusted up or down.
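The cap/floor mechanics can be sketched as below. This is an illustrative Python sketch of priority-ordered guardrail application using the defaults from the table above; signal field names are hypothetical, and how the real `GuardrailsEngine` resolves a floor followed by a cap is not specified here.

```python
GUARDRAILS = [  # (name, kind, value, trigger) in priority order, KEV first
    ("kev_floor", "floor", 70, lambda s: s["is_in_kev"]),
    ("backported_cap", "cap", 20, lambda s: s["backport_detected"] and s["backport_confidence"] >= 0.8),
    ("not_affected_cap", "cap", 25, lambda s: s["vex_status"] == "not_affected"),
    ("runtime_floor", "floor", 30, lambda s: s["runtime_trace_confirmed"]),
    ("speculative_cap", "cap", 60, lambda s: s["confidence"] < 0.3),
]

def apply_guardrails(score: int, signal: dict) -> tuple[int, list[str]]:
    """Apply caps/floors in priority order; return adjusted score and fired names."""
    fired = []
    for name, kind, value, trigger in GUARDRAILS:
        if not trigger(signal):
            continue
        adjusted = max(score, value) if kind == "floor" else min(score, value)
        if adjusted != score:
            fired.append(name)
        score = adjusted
    return score, fired
```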
### Calculator API
```csharp
// Via DI
IEwsCalculator calculator = serviceProvider.GetRequiredService<IEwsCalculator>();
// Or standalone
IEwsCalculator calculator = EwsCalculator.CreateDefault();
var signal = new EwsSignalInput
{
CveId = "CVE-2025-1234",
ReachabilityTier = 3, // R3
EpssProbability = 0.42,
IsInKev = false,
VexStatus = "under_investigation",
SbomCompleteness = 0.85,
};
EwsCompositeScore result = calculator.Calculate(signal);
// result.Score → 0-100 composite
// result.BasisPoints → 0-10000 (fine-grained)
// result.Confidence → weighted confidence
// result.RiskTier → Critical/High/Medium/Low/Negligible
// result.AppliedGuardrails → list of guardrail names that fired
// result.NeedsReview → true when confidence < threshold
```
### Normalizer Interface
Each dimension is implemented as an `IEwsDimensionNormalizer`:
```csharp
public interface IEwsDimensionNormalizer
{
EwsDimension Dimension { get; }
int Normalize(EwsSignalInput signal); // 0-100
double GetConfidence(EwsSignalInput signal); // 0.0-1.0
string GetExplanation(EwsSignalInput signal, int score);
}
```
Normalizers are registered via DI as `IEnumerable<IEwsDimensionNormalizer>`.
Custom normalizers can be added by registering additional implementations.
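A custom normalizer's mapping might look like the following. This is a hypothetical linear mapping of the R0–R4 reachability tier, sketched in Python for brevity; it is not the shipped `ReachabilityNormalizer`, and the confidence values are invented for illustration.

```python
def normalize_reachability(tier):
    """Map call-graph tier R0-R4 to (score 0-100, confidence 0.0-1.0)."""
    if tier is None:
        return 50, 0.0               # unknown reachability: mid score, no confidence
    tier = max(0, min(4, tier))      # clamp to the R0-R4 range
    return tier * 25, 0.9            # linear mapping, fixed confidence when known
```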
### Observability
The calculator emits two OTel metrics:
- **`stellaops_ews_score`** (Histogram) — score distribution 0–100
- **`stellaops_ews_guardrails_applied`** (Counter) — number of guardrail applications
## Unknowns Decay Triage Queue
> **Sprint:** SPRINT_20260208_050_Policy_unknowns_decay_and_triage_queue
The triage queue automatically identifies unknowns whose evidence has decayed past staleness thresholds and queues them for re-analysis, closing the gap between passive `ObservationDecay.CheckIsStale()` tracking and active re-analysis triggering.
### Triage Priority Classification
| Priority | Decay Multiplier Range | Action |
|----------|----------------------|--------|
| **None** | > 0.70 | No action — fresh |
| **Low** | 0.50–0.70 | Monitor — approaching staleness |
| **Medium** | 0.30–0.50 | Schedule re-analysis — stale |
| **High** | 0.10–0.30 | Re-analyse soon — heavily decayed |
| **Critical** | ≤ 0.10 | URGENT — evidence at floor |
Thresholds are configurable via `TriageQueueOptions` (section: `Determinization:TriageQueue`).
### Architecture
```
UnknownTriageQueueService (orchestrator)
├── ITriageObservationSource → fetch candidates
├── ITriageQueueEvaluator → classify priority, compute days-until-stale
└── ITriageReanalysisSink → enqueue Medium+ items for re-analysis
```
- **`TriageQueueEvaluator`**: Deterministic evaluator. Given the same observations and reference time, produces identical output. Calculates days-until-stale using the formula: `d = -halfLife × ln(threshold) / ln(2) - currentAgeDays`.
- **`UnknownTriageQueueService`**: Orchestrates fetch→evaluate→enqueue cycles. Designed for periodic invocation by a background host, timer, or scheduler. Also supports on-demand evaluation (CLI/API) without auto-enqueue.
- **`InMemoryTriageReanalysisSink`**: Default `ConcurrentQueue<TriageItem>` implementation for single-node and offline scenarios. Hosts can replace it with a message-bus or database-backed sink.
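The half-life decay and the days-until-stale formula quoted above can be sketched directly. This is an illustrative Python sketch; threshold boundaries follow the priority table above.

```python
import math

def decay_multiplier(age_days: float, half_life_days: float) -> float:
    """Exponential half-life decay of evidence freshness."""
    return 0.5 ** (age_days / half_life_days)

def days_until_stale(age_days: float, half_life_days: float, threshold: float) -> float:
    """d = -halfLife * ln(threshold) / ln(2) - currentAgeDays (evaluator formula)."""
    return -half_life_days * math.log(threshold) / math.log(2) - age_days

def priority(multiplier: float) -> str:
    """Classify a decay multiplier using the default thresholds."""
    if multiplier > 0.70:
        return "None"
    if multiplier > 0.50:
        return "Low"
    if multiplier > 0.30:
        return "Medium"
    if multiplier > 0.10:
        return "High"
    return "Critical"
```

For example, with a 90-day half-life and a 0.5 staleness threshold, a 30-day-old observation has 60 days until it crosses the threshold.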
### OTel Metrics
- **`stellaops_triage_items_evaluated_total`** (Counter) — observations evaluated per cycle
- **`stellaops_triage_items_queued_total`** (Counter) — items added to triage queue
- **`stellaops_triage_decay_multiplier`** (Histogram) — decay multiplier distribution
- **`stellaops_triage_cycles_total`** (Counter) — evaluation cycles executed
- **`stellaops_triage_reanalysis_enqueued_total`** (Counter) — items sent to re-analysis sink
- **`stellaops_triage_cycle_duration_seconds`** (Histogram) — cycle duration
### Configuration
```yaml
Determinization:
TriageQueue:
ApproachingThreshold: 0.70 # Multiplier below which Low priority starts
HighPriorityThreshold: 0.30 # Below this → High
CriticalPriorityThreshold: 0.10 # Below this → Critical
MaxSnapshotItems: 500 # Max items per snapshot
IncludeApproaching: true # Include Low priority in snapshots
MinEvaluationIntervalMinutes: 60
```
## Testing Strategy
| Test Category | Focus Area | Example |
@@ -934,6 +1116,90 @@ public static class DeterminizationMetrics
4. **Escalation Path:** Runtime evidence always escalates regardless of other signals
5. **Tamper Detection:** Signal snapshots hashed for integrity verification
## Versioned Weight Manifests
Weight manifests (Sprint 051) provide versioned, content-addressed configuration for
all scoring weights, guardrails, buckets, and determinization thresholds. Manifests
live in `etc/weights/` as JSON files with a `*.weights.json` extension.
### Manifest Schema (v1.0.0)
| Field | Type | Description |
| --- | --- | --- |
| `schemaVersion` | string | Must be `"1.0.0"` |
| `version` | string | Manifest version identifier (e.g. `"v2026-01-22"`) |
| `effectiveFrom` | ISO-8601 | UTC date from which this manifest is active |
| `profile` | string | Environment profile (`production`, `staging`, etc.) |
| `contentHash` | string | `sha256:<hex>` content hash or `sha256:auto` placeholder |
| `weights.legacy` | dict | 6-dimension EWS weights (must sum to 1.0) |
| `weights.advisory` | dict | Advisory-profile weights |
| `guardrails` | object | Guardrail rules (notAffectedCap, runtimeFloor, speculativeCap) |
| `buckets` | object | Action tier boundaries (actNowMin, scheduleNextMin, investigateMin) |
| `determinizationThresholds` | object | Entropy thresholds for triage |
| `signalWeightsForEntropy` | dict | Signal weights for uncertainty calculation (sum to 1.0) |
| `metadata` | object | Provenance: createdBy, createdAt, changelog, notes |
### Content Hash Computation
The `WeightManifestHashComputer` computes a deterministic SHA-256 hash over
canonical JSON (alphabetically sorted properties, `contentHash` field excluded):
```
Input JSON → parse → remove contentHash → sort keys recursively → UTF-8 → SHA-256 → "sha256:<hex>"
```
This enables tamper detection and content-addressed references. The `sha256:auto`
placeholder is replaced by `stella weights hash --write-back` or at build time.
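The hash pipeline above can be sketched as follows. This is an illustrative Python sketch; the exact canonical form (separators, escaping, unicode handling) used by `WeightManifestHashComputer` is an assumption, but the key properties — `contentHash` excluded, keys sorted recursively, UTF-8, SHA-256 — come from the description above.

```python
import hashlib
import json

def content_hash(manifest_json: str) -> str:
    """Deterministic content hash over canonical JSON, contentHash excluded."""
    doc = json.loads(manifest_json)
    doc.pop("contentHash", None)
    # json.dumps with sort_keys sorts nested objects recursively; compact
    # separators are an assumed canonicalization detail
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the `contentHash` field is removed before hashing, the same digest results whether the file carries `sha256:auto` or an already-written hash, which is what makes write-back idempotent.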
### CLI Commands (backing services)
| Command | Service Method | Description |
| --- | --- | --- |
| `stella weights list` | `WeightManifestCommands.ListAsync()` | List all manifests with version, profile, hash status |
| `stella weights validate` | `WeightManifestCommands.ValidateAsync()` | Validate schema, weight normalization, hash integrity |
| `stella weights diff` | `WeightManifestCommands.DiffAsync()` | Compare two manifests field-by-field |
| `stella weights activate` | `WeightManifestCommands.ActivateAsync()` | Select effective manifest for a reference date |
| `stella weights hash` | `WeightManifestCommands.HashAsync()` | Compute/verify content hash, optionally write back |
### EffectiveFrom Selection
`WeightManifestLoader.SelectEffectiveAsync(referenceDate)` picks the most recent
manifest where `effectiveFrom ≤ referenceDate`, enabling time-travel replay:
```
Manifests: v2026-01-01 v2026-01-22 v2026-03-01
Reference: 2026-02-15
Selected: v2026-01-22 (most recent ≤ reference date)
```
### OTel Metrics
| Metric | Type | Description |
| --- | --- | --- |
| `stellaops.weight_manifest.loaded_total` | Counter | Manifests loaded from disk |
| `stellaops.weight_manifest.validated_total` | Counter | Manifests validated |
| `stellaops.weight_manifest.hash_mismatch_total` | Counter | Content hash mismatches |
| `stellaops.weight_manifest.validation_error_total` | Counter | Validation errors |
### DI Registration
```csharp
services.AddDeterminization(); // Registers WeightManifestLoaderOptions,
// IWeightManifestLoader → WeightManifestLoader,
// WeightManifestCommands
```
### YAML Configuration
```yaml
Determinization:
WeightManifest:
ManifestDirectory: "etc/weights"
FilePattern: "*.weights.json"
RequireComputedHash: true # Reject sha256:auto in production
StrictHashVerification: true # Fail on hash mismatch
```
## References
- Product Advisory: "Unknown CVEs: graceful placeholders, not blockers"

View File

@@ -0,0 +1,283 @@
# Stella Policy DSL Grammar Specification
**Version**: stella-dsl@1.0
**Status**: Implemented
**Last Updated**: 2026-02-15
## Overview
The Stella Policy DSL is a domain-specific language for defining release policies that control software deployment decisions. Policies are evaluated against signal contexts to produce deterministic verdicts.
## File Extension
Policy files use the `.stella` extension.
## Lexical Structure
### Comments
```
// Single-line comment
/*
* Multi-line comment
*/
```
### Keywords
Reserved keywords (case-sensitive):
| Keyword | Description |
|---------|-------------|
| `policy` | Policy declaration |
| `syntax` | Syntax version declaration |
| `metadata` | Policy metadata block |
| `settings` | Policy settings block |
| `profile` | Profile declaration |
| `rule` | Rule declaration |
| `when` | Rule condition |
| `then` | Rule action (condition true) |
| `else` | Rule action (condition false) |
| `and` | Logical AND |
| `or` | Logical OR |
| `not` | Logical NOT |
| `true` | Boolean true |
| `false` | Boolean false |
| `null` | Null literal |
| `in` | Membership operator |
| `map` | Map literal |
| `env` | Environment binding |
### Operators
| Operator | Description |
|----------|-------------|
| `==` | Equality |
| `!=` | Inequality |
| `<` | Less than |
| `<=` | Less than or equal |
| `>` | Greater than |
| `>=` | Greater than or equal |
| `:=` | Definition |
| `=>` | Arrow (lambda/map) |
| `.` | Member access |
| `,` | Separator |
| `:` | Key-value separator |
| `=` | Assignment |
### Literals
#### Strings
```
"hello world"
"escaped \"quotes\""
```
#### Numbers
```
42
3.14
-1
0.5
```
#### Booleans
```
true
false
```
#### Arrays
```
[1, 2, 3]
["a", "b", "c"]
```
### Identifiers
Identifiers start with a letter or underscore, followed by letters, digits, or underscores:
```
identifier
_private
signal_name
cvss_score
```
## Grammar (EBNF)
```ebnf
document = policy-header "{" body "}" ;
policy-header = "policy" string-literal "syntax" string-literal ;
body = { metadata-block | settings-block | profile | rule } ;
metadata-block = "metadata" "{" { key-value } "}" ;
settings-block = "settings" "{" { key-value } "}" ;
key-value = identifier ":" literal ;
profile = "profile" identifier "{" { profile-item } "}" ;
profile-item = map-item | env-item | scalar-item ;
map-item = "map" identifier "=>" expression ;
env-item = "env" identifier "=>" string-literal ;
scalar-item = identifier ":=" expression ;
rule = "rule" identifier [ "(" priority ")" ] "{" rule-body "}" ;
priority = number-literal ;
rule-body = when-clause then-clause [ else-clause ] ;
when-clause = "when" expression ;
then-clause = "then" "{" { action } "}" ;
else-clause = "else" "{" { action } "}" ;
action = action-name [ "(" { argument } ")" ] ;
action-name = identifier ;
argument = expression | key-value ;
expression = or-expression ;
or-expression = and-expression { "or" and-expression } ;
and-expression = comparison { "and" comparison } ;
comparison = unary-expression [ comparison-op unary-expression ] ;
comparison-op = "==" | "!=" | "<" | "<=" | ">" | ">=" | "in" ;
unary-expression = [ "not" ] primary-expression ;
primary-expression = literal
| identifier
| member-access
| "(" expression ")" ;
member-access = identifier { "." identifier } ;
literal = string-literal
| number-literal
| boolean-literal
| array-literal
| null-literal ;
string-literal = '"' { character } '"' ;
number-literal = [ "-" | "+" ] digit { digit } [ "." digit { digit } ] ;
boolean-literal = "true" | "false" ;
array-literal = "[" [ expression { "," expression } ] "]" ;
null-literal = "null" ;
```
## Example Policy
```stella
policy "Production Release Policy" syntax "stella-dsl@1" {
metadata {
author: "security-team@example.com"
version: "1.2.0"
description: "Governs production releases"
}
settings {
default_action: "block"
audit_mode: false
}
profile production {
env target => "prod"
map severity_threshold => 7.0
}
rule critical_cve_block (100) {
when cvss.score >= 9.0 and cve.reachable == true
then {
block("Critical CVE is reachable")
notify("security-oncall")
}
}
rule high_cve_warn (50) {
when cvss.score >= 7.0 and cvss.score < 9.0
then {
warn("High severity CVE detected")
}
else {
allow()
}
}
rule sbom_required (80) {
when not sbom.present
then {
block("SBOM attestation required")
}
}
}
```
## Signal Context
Policies are evaluated against a signal context containing runtime values:
| Signal | Type | Description |
|--------|------|-------------|
| `cvss.score` | number | CVSS score of vulnerability |
| `cve.reachable` | boolean | Whether CVE is reachable |
| `cve.id` | string | CVE identifier |
| `sbom.present` | boolean | SBOM attestation exists |
| `sbom.format` | string | SBOM format (cyclonedx, spdx) |
| `artifact.digest` | string | Artifact content digest |
| `artifact.tag` | string | Container image tag |
| `environment` | string | Target environment |
| `attestation.signed` | boolean | Has signed attestation |
## Compilation
The DSL compiles to a content-addressed IR (Intermediate Representation):
1. **Tokenize**: Source → Token stream
2. **Parse**: Tokens → AST
3. **Compile**: AST → PolicyIrDocument
4. **Serialize**: IR → Canonical JSON
5. **Hash**: JSON → SHA-256 checksum
The checksum provides deterministic policy identity for audit and replay.
## CLI Commands
```bash
# Lint a policy file
stella policy lint policy.stella
# Compile to IR JSON
stella policy compile policy.stella --output policy.ir.json
# Get deterministic checksum
stella policy compile policy.stella --checksum-only
# Simulate with signals
stella policy simulate policy.stella --signals context.json
```
## See Also
- [Policy Module Architecture](architecture.md)
- [PolicyDsl Implementation](../../../src/Policy/StellaOps.PolicyDsl/)
- [Signal Context Reference](signal-context-reference.md)