Remove global.json and add extensive documentation for SBOM-first supply chain spine, diff-aware releases, binary intelligence graph, reachability proofs, smart-diff evidence, risk budget visualization, and weighted confidence for VEX sources. Introduce solution file for Concelier web service project.
@@ -1,9 +0,0 @@
{
  "solution": {
    "path": "src/Concelier/StellaOps.Concelier.sln",
    "projects": [
      "StellaOps.Concelier.WebService/StellaOps.Concelier.WebService.csproj",
      "__Tests/StellaOps.Concelier.WebService.Tests/StellaOps.Concelier.WebService.Tests.csproj"
    ]
  }
}
@@ -1,437 +0,0 @@
# Sprint 8200.0013.0002 - Interest Scoring Service

## Topic & Scope

Implement **interest scoring** that learns which advisories matter to your organization. This sprint delivers:

1. **interest_score table**: Store per-canonical scores with reasons
2. **InterestScoringService**: Compute scores from SBOM/VEX/runtime signals
3. **Scoring Job**: Periodic batch recalculation of scores
4. **Stub Degradation**: Demote low-interest advisories to lightweight stubs

**Working directory:** `src/Concelier/__Libraries/StellaOps.Concelier.Interest/` (new)

**Evidence:** Advisories intersecting org SBOMs receive high scores; unused advisories degrade to stubs.

---

## Dependencies & Concurrency

- **Depends on:** SPRINT_8200_0012_0003 (canonical service), SPRINT_8200_0013_0001 (Valkey cache)
- **Blocks:** Nothing (feature complete for Phase B)
- **Safe to run in parallel with:** SPRINT_8200_0013_0003 (SBOM scoring integration)

---

## Documentation Prerequisites

- `docs/implplan/SPRINT_8200_0012_0000_FEEDSER_master_plan.md`
- `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/` (existing scoring reference)

---
## Delivery Tracker

| # | Task ID | Status | Key dependency | Owner | Task Definition |
|---|---------|--------|----------------|-------|-----------------|
| **Wave 0: Schema & Project Setup** | | | | | |
| 0 | ISCORE-8200-000 | DONE | Canonical service | Platform Guild | Create migration `015_interest_score.sql` |
| 1 | ISCORE-8200-001 | DONE | Task 0 | Concelier Guild | Create `StellaOps.Concelier.Interest` project |
| 2 | ISCORE-8200-002 | DONE | Task 1 | Concelier Guild | Define `InterestScoreEntity` and repository interface |
| 3 | ISCORE-8200-003 | DONE | Task 2 | Concelier Guild | Implement `PostgresInterestScoreRepository` |
| 4 | ISCORE-8200-004 | DONE | Task 3 | QA Guild | Unit tests for repository CRUD |
| **Wave 1: Scoring Algorithm** | | | | | |
| 5 | ISCORE-8200-005 | DONE | Task 4 | Concelier Guild | Define `IInterestScoringService` interface |
| 6 | ISCORE-8200-006 | DONE | Task 5 | Concelier Guild | Define `InterestScoreInput` with all signal types |
| 7 | ISCORE-8200-007 | DONE | Task 6 | Concelier Guild | Implement `InterestScoreCalculator` with weighted factors |
| 8 | ISCORE-8200-008 | DONE | Task 7 | Concelier Guild | Implement SBOM intersection factor (`in_sbom`) |
| 9 | ISCORE-8200-009 | DONE | Task 8 | Concelier Guild | Implement reachability factor (`reachable`) |
| 10 | ISCORE-8200-010 | DONE | Task 9 | Concelier Guild | Implement deployment factor (`deployed`) |
| 11 | ISCORE-8200-011 | DONE | Task 10 | Concelier Guild | Implement VEX factor (`no_vex_na`) |
| 12 | ISCORE-8200-012 | DONE | Task 11 | Concelier Guild | Implement age decay factor (`recent`) |
| 13 | ISCORE-8200-013 | DONE | Tasks 8-12 | QA Guild | Unit tests for score calculation with various inputs |
| **Wave 2: Scoring Service** | | | | | |
| 14 | ISCORE-8200-014 | DONE | Task 13 | Concelier Guild | Implement `InterestScoringService.ComputeScoreAsync()` |
| 15 | ISCORE-8200-015 | DONE | Task 14 | Concelier Guild | Implement `UpdateScoreAsync()` - persist + update cache |
| 16 | ISCORE-8200-016 | DONE | Task 15 | Concelier Guild | Implement `GetScoreAsync()` - cached score retrieval |
| 17 | ISCORE-8200-017 | DONE | Task 16 | Concelier Guild | Implement `BatchUpdateAsync()` - bulk score updates |
| 18 | ISCORE-8200-018 | DONE | Task 17 | QA Guild | Integration tests with Postgres + Valkey |
| **Wave 3: Scoring Job** | | | | | |
| 19 | ISCORE-8200-019 | DONE | Task 18 | Concelier Guild | Create `InterestScoreRecalculationJob` hosted service |
| 20 | ISCORE-8200-020 | DONE | Task 19 | Concelier Guild | Implement incremental scoring (only changed advisories) |
| 21 | ISCORE-8200-021 | DONE | Task 20 | Concelier Guild | Implement full recalculation mode (nightly) |
| 22 | ISCORE-8200-022 | DONE | Task 21 | Concelier Guild | Add job metrics and OpenTelemetry tracing |
| 23 | ISCORE-8200-023 | DONE | Task 22 | QA Guild | Test job execution and score consistency |
| **Wave 4: Stub Degradation** | | | | | |
| 24 | ISCORE-8200-024 | DONE | Task 18 | Concelier Guild | Define stub degradation policy (score threshold, retention) |
| 25 | ISCORE-8200-025 | DONE | Task 24 | Concelier Guild | Implement `DegradeToStubAsync()` - convert full to stub |
| 26 | ISCORE-8200-026 | DONE | Task 25 | Concelier Guild | Implement `RestoreFromStubAsync()` - promote on score increase |
| 27 | ISCORE-8200-027 | DONE | Task 26 | Concelier Guild | Create `StubDegradationJob` for periodic cleanup |
| 28 | ISCORE-8200-028 | DONE | Task 27 | QA Guild | Test degradation/restoration cycle |
| **Wave 5: API & Integration** | | | | | |
| 29 | ISCORE-8200-029 | DONE | Task 28 | Concelier Guild | Create `GET /api/v1/canonical/{id}/score` endpoint |
| 30 | ISCORE-8200-030 | DONE | Task 29 | Concelier Guild | Add score to canonical advisory response |
| 31 | ISCORE-8200-031 | DONE | Task 30 | Concelier Guild | Create `POST /api/v1/scores/recalculate` admin endpoint |
| 32 | ISCORE-8200-032 | DONE | Task 31 | QA Guild | End-to-end test: ingest advisory, update SBOM, verify score change |
| 33 | ISCORE-8200-033 | DONE | Task 32 | Docs Guild | Document interest scoring in module README |
---

## Database Schema

```sql
-- Migration: 20250201000001_CreateInterestScore.sql

CREATE TABLE vuln.interest_score (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    canonical_id UUID NOT NULL REFERENCES vuln.advisory_canonical(id) ON DELETE CASCADE,
    score NUMERIC(3,2) NOT NULL CHECK (score >= 0 AND score <= 1),
    reasons JSONB NOT NULL DEFAULT '[]',
    last_seen_in_build UUID,
    computed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),

    CONSTRAINT uq_interest_score_canonical UNIQUE (canonical_id)
);

CREATE INDEX idx_interest_score_score ON vuln.interest_score(score DESC);
CREATE INDEX idx_interest_score_computed ON vuln.interest_score(computed_at DESC);

-- Partial index for high-interest advisories
CREATE INDEX idx_interest_score_high ON vuln.interest_score(canonical_id)
    WHERE score >= 0.7;

COMMENT ON TABLE vuln.interest_score IS 'Per-canonical interest scores based on org signals';
COMMENT ON COLUMN vuln.interest_score.reasons IS 'Array of reason codes: in_sbom, reachable, deployed, no_vex_na, recent';
```
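Given the `uq_interest_score_canonical` constraint above, score writes can be idempotent upserts. An illustrative statement (a sketch only; parameter placeholders assume a Postgres driver, and the real repository may batch these differently):

```sql
-- Illustrative upsert: one row per canonical, refreshed on each recalculation.
INSERT INTO vuln.interest_score (canonical_id, score, reasons, last_seen_in_build, computed_at)
VALUES ($1, $2, $3::jsonb, $4, NOW())
ON CONFLICT (canonical_id) DO UPDATE
SET score = EXCLUDED.score,
    reasons = EXCLUDED.reasons,
    last_seen_in_build = EXCLUDED.last_seen_in_build,
    computed_at = EXCLUDED.computed_at;
```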
---

## Scoring Algorithm

```csharp
namespace StellaOps.Concelier.Interest;

public sealed class InterestScoreCalculator
{
    private readonly InterestScoreWeights _weights;

    public InterestScoreCalculator(InterestScoreWeights weights)
    {
        _weights = weights;
    }

    public InterestScore Calculate(InterestScoreInput input)
    {
        var reasons = new List<string>();
        double score = 0.0;

        // Factor 1: In SBOM (30%)
        if (input.SbomMatches.Count > 0)
        {
            score += _weights.InSbom;
            reasons.Add("in_sbom");
        }

        // Factor 2: Reachable from entrypoint (25%)
        if (input.SbomMatches.Any(m => m.IsReachable))
        {
            score += _weights.Reachable;
            reasons.Add("reachable");
        }

        // Factor 3: Deployed in production (20%)
        if (input.SbomMatches.Any(m => m.IsDeployed))
        {
            score += _weights.Deployed;
            reasons.Add("deployed");
        }

        // Factor 4: No VEX Not-Affected (15%)
        if (!input.VexStatements.Any(v => v.Status == VexStatus.NotAffected))
        {
            score += _weights.NoVexNotAffected;
            reasons.Add("no_vex_na");
        }

        // Factor 5: Age decay (10%) - newer builds = higher score
        if (input.LastSeenInBuild.HasValue)
        {
            var age = DateTimeOffset.UtcNow - input.LastSeenInBuild.Value;
            var decayFactor = Math.Max(0, 1 - (age.TotalDays / 365));
            var ageScore = _weights.Recent * decayFactor;
            score += ageScore;
            if (decayFactor > 0.5)
            {
                reasons.Add("recent");
            }
        }

        return new InterestScore
        {
            CanonicalId = input.CanonicalId,
            Score = Math.Round(Math.Min(score, 1.0), 2),
            Reasons = reasons.ToArray(),
            ComputedAt = DateTimeOffset.UtcNow
        };
    }
}

public sealed record InterestScoreWeights
{
    public double InSbom { get; init; } = 0.30;
    public double Reachable { get; init; } = 0.25;
    public double Deployed { get; init; } = 0.20;
    public double NoVexNotAffected { get; init; } = 0.15;
    public double Recent { get; init; } = 0.10;
}
```
---

## Domain Models

```csharp
/// <summary>
/// Interest score for a canonical advisory.
/// </summary>
public sealed record InterestScore
{
    public Guid CanonicalId { get; init; }
    public double Score { get; init; }
    public IReadOnlyList<string> Reasons { get; init; } = [];
    public Guid? LastSeenInBuild { get; init; }
    public DateTimeOffset ComputedAt { get; init; }
}

/// <summary>
/// Input signals for interest score calculation.
/// </summary>
public sealed record InterestScoreInput
{
    public required Guid CanonicalId { get; init; }
    public IReadOnlyList<SbomMatch> SbomMatches { get; init; } = [];
    public IReadOnlyList<VexStatement> VexStatements { get; init; } = [];
    public IReadOnlyList<RuntimeSignal> RuntimeSignals { get; init; } = [];
    public DateTimeOffset? LastSeenInBuild { get; init; }
}

/// <summary>
/// SBOM match indicating the canonical affects a package in an org's SBOM.
/// </summary>
public sealed record SbomMatch
{
    public required string SbomDigest { get; init; }
    public required string Purl { get; init; }
    public bool IsReachable { get; init; }
    public bool IsDeployed { get; init; }
    public DateTimeOffset ScannedAt { get; init; }
}

/// <summary>
/// VEX statement affecting the canonical.
/// </summary>
public sealed record VexStatement
{
    public required string StatementId { get; init; }
    public required VexStatus Status { get; init; }
    public string? Justification { get; init; }
}

public enum VexStatus
{
    Affected,
    NotAffected,
    Fixed,
    UnderInvestigation
}
```
---

## Service Interface

```csharp
public interface IInterestScoringService
{
    /// <summary>Compute interest score for a canonical advisory.</summary>
    Task<InterestScore> ComputeScoreAsync(Guid canonicalId, CancellationToken ct = default);

    /// <summary>Get current interest score (cached).</summary>
    Task<InterestScore?> GetScoreAsync(Guid canonicalId, CancellationToken ct = default);

    /// <summary>Update interest score and persist.</summary>
    Task UpdateScoreAsync(InterestScore score, CancellationToken ct = default);

    /// <summary>Batch update scores for multiple canonicals.</summary>
    Task BatchUpdateAsync(IEnumerable<Guid> canonicalIds, CancellationToken ct = default);

    /// <summary>Trigger full recalculation for all active canonicals.</summary>
    Task RecalculateAllAsync(CancellationToken ct = default);

    /// <summary>Degrade low-interest canonicals to stub status.</summary>
    Task<int> DegradeToStubsAsync(double threshold, CancellationToken ct = default);

    /// <summary>Restore stubs to active when score increases.</summary>
    Task<int> RestoreFromStubsAsync(double threshold, CancellationToken ct = default);
}
```
---

## Stub Degradation Policy

```csharp
public sealed class StubDegradationPolicy
{
    /// <summary>Score below which canonicals become stubs.</summary>
    public double DegradationThreshold { get; init; } = 0.2;

    /// <summary>Score above which stubs are restored to active.</summary>
    public double RestorationThreshold { get; init; } = 0.4;

    /// <summary>Minimum age before degradation (days).</summary>
    public int MinAgeDays { get; init; } = 30;

    /// <summary>Maximum stubs to process per job run.</summary>
    public int BatchSize { get; init; } = 1000;
}
```

### Stub Content

When an advisory is degraded to a stub, only these fields are retained:

| Field | Retained | Reason |
|-------|----------|--------|
| `id`, `merge_hash` | Yes | Identity |
| `cve`, `affects_key` | Yes | Lookup keys |
| `severity`, `exploit_known` | Yes | Quick triage |
| `title` | Yes | Human reference |
| `summary`, `version_range` | No | Space savings |
| Source edges | First only | Reduces storage |
---

## Scoring Job

```csharp
public sealed class InterestScoreRecalculationJob : BackgroundService
{
    private readonly IServiceProvider _services;
    private readonly ILogger<InterestScoreRecalculationJob> _logger;
    private readonly InterestScoreJobOptions _options;

    public InterestScoreRecalculationJob(
        IServiceProvider services,
        IOptions<InterestScoreJobOptions> options,
        ILogger<InterestScoreRecalculationJob> logger)
    {
        _services = services;
        _options = options.Value;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                await using var scope = _services.CreateAsyncScope();
                var scoringService = scope.ServiceProvider
                    .GetRequiredService<IInterestScoringService>();

                if (IsFullRecalculationTime())
                {
                    _logger.LogInformation("Starting full interest score recalculation");
                    await scoringService.RecalculateAllAsync(stoppingToken);
                }
                else
                {
                    _logger.LogInformation("Starting incremental interest score update");
                    var changedIds = await GetChangedCanonicalIdsAsync(stoppingToken);
                    await scoringService.BatchUpdateAsync(changedIds, stoppingToken);
                }

                // Run stub degradation
                var degraded = await scoringService.DegradeToStubsAsync(
                    _options.DegradationThreshold, stoppingToken);
                _logger.LogInformation("Degraded {Count} advisories to stubs", degraded);
            }
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
            {
                break;
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Interest score job failed");
            }

            await Task.Delay(_options.Interval, stoppingToken);
        }
    }

    private bool IsFullRecalculationTime()
    {
        // Full recalculation in the first interval window after 3 AM UTC daily
        var now = DateTimeOffset.UtcNow;
        return now.Hour == 3 && now.Minute < _options.Interval.TotalMinutes;
    }
}
```
---

## API Endpoints

```csharp
// GET /api/v1/canonical/{id}/score
app.MapGet("/api/v1/canonical/{id:guid}/score", async (
    Guid id,
    IInterestScoringService scoringService,
    CancellationToken ct) =>
{
    var score = await scoringService.GetScoreAsync(id, ct);
    return score is null ? Results.NotFound() : Results.Ok(score);
})
.WithName("GetInterestScore")
.Produces<InterestScore>(200)
.Produces(404);

// POST /api/v1/scores/recalculate (admin)
app.MapPost("/api/v1/scores/recalculate", async (
    IInterestScoringService scoringService,
    CancellationToken ct) =>
{
    await scoringService.RecalculateAllAsync(ct);
    return Results.Accepted();
})
.WithName("RecalculateScores")
.RequireAuthorization("admin")
.Produces(202);
```
---

## Metrics

| Metric | Type | Labels | Description |
|--------|------|--------|-------------|
| `concelier_interest_score_computed_total` | Counter | - | Total scores computed |
| `concelier_interest_score_distribution` | Histogram | - | Score value distribution |
| `concelier_stub_degradations_total` | Counter | - | Total stub degradations |
| `concelier_stub_restorations_total` | Counter | - | Total stub restorations |
| `concelier_scoring_job_duration_seconds` | Histogram | mode | Job execution time |

---

## Test Scenarios

| Scenario | Expected Score | Reasons |
|----------|---------------|---------|
| Advisory in SBOM, reachable, deployed | 0.75+ | in_sbom, reachable, deployed |
| Advisory in SBOM only | 0.30 | in_sbom |
| Advisory with VEX not_affected | 0.00 | (none - excluded by VEX) |
| Advisory not in any SBOM | 0.00 | (none) |
| Stale advisory (> 1 year) | ~0.00-0.10 | age decay |
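The scenario table can be sanity-checked with a small sketch of the same weighting logic (Python here for brevity; the authoritative implementation is the C# `InterestScoreCalculator`, and the sketch assumes the default weights):

```python
# Sketch of the weighted-factor scoring; mirrors InterestScoreCalculator.
WEIGHTS = {"in_sbom": 0.30, "reachable": 0.25, "deployed": 0.20,
           "no_vex_na": 0.15, "recent": 0.10}

def interest_score(in_sbom, reachable, deployed, vex_not_affected,
                   days_since_build=None):
    """Return (score, reasons) for one canonical advisory."""
    score, reasons = 0.0, []
    if in_sbom:
        score += WEIGHTS["in_sbom"]; reasons.append("in_sbom")
    if reachable:
        score += WEIGHTS["reachable"]; reasons.append("reachable")
    if deployed:
        score += WEIGHTS["deployed"]; reasons.append("deployed")
    if not vex_not_affected:  # no VEX "not_affected" statement exists
        score += WEIGHTS["no_vex_na"]; reasons.append("no_vex_na")
    if days_since_build is not None:
        decay = max(0.0, 1 - days_since_build / 365)  # linear one-year decay
        score += WEIGHTS["recent"] * decay
        if decay > 0.5:
            reasons.append("recent")
    return round(min(score, 1.0), 2), reasons
```

Note that an advisory that is in an SBOM and also lacks a `not_affected` VEX statement picks up both `in_sbom` and `no_vex_na`, so the "in SBOM only" row's 0.30 assumes a VEX `not_affected` factor is present.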
---

## Execution Log

| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-24 | Sprint created from gap analysis | Project Mgmt |
| 2025-12-25 | Tasks 1-2, 5-17, 24-26 DONE: Created StellaOps.Concelier.Interest project with InterestScore models, InterestScoreInput signals, InterestScoreCalculator (5 weighted factors), IInterestScoreRepository, IInterestScoringService, InterestScoringService, StubDegradationPolicy. 19 unit tests pass. Remaining: DB migration, Postgres repo, recalculation job, API endpoints. | Claude Code |
| 2025-12-25 | Task 3 DONE: Implemented PostgresInterestScoreRepository in StellaOps.Concelier.Storage.Postgres with all CRUD operations, batch save, low/high score queries, stale detection, and score distribution aggregation. Added Interest project reference. Build passes. Remaining: DB migration (task 0), unit tests (task 4), integration tests (task 18), jobs (tasks 19-23, 27), API endpoints (tasks 29-31). | Claude Code |
| 2025-12-25 | Tasks 19-22, 27 DONE: Created InterestScoreRecalculationJob (incremental + full modes), InterestScoringMetrics (OpenTelemetry counters/histograms), StubDegradationJob (periodic cleanup). Updated ServiceCollectionExtensions with job registration. 19 tests pass. Remaining: QA tests (23, 28), API endpoints (29-31), docs (33). | Claude Code |
| 2025-12-25 | Tasks 29-31 DONE: Created InterestScoreEndpointExtensions.cs with GET /canonical/{id}/score, GET /scores, GET /scores/distribution, POST /canonical/{id}/score/compute, POST /scores/recalculate, POST /scores/degrade, POST /scores/restore endpoints. Added InterestScoreInfo to CanonicalAdvisoryResponse. Added GetAllAsync and GetScoreDistributionAsync to repository. WebService builds successfully. 19 tests pass. | Claude Code |
| 2025-12-25 | Task 0 DONE: Created 015_interest_score.sql migration with interest_score table, indexes for score DESC, computed_at DESC, and partial indexes for high/low scores. Remaining: QA tests (tasks 4, 18, 23, 28, 32), docs (task 33). | Claude Code |
| 2025-12-26 | Task 4 DONE: Created `InterestScoreRepositoryTests.cs` in Storage.Postgres.Tests with 32 integration tests covering CRUD operations (Get/Save/Delete), batch operations (SaveMany, GetByCanonicalIds), low/high score queries, stale detection, pagination (GetAll), distribution statistics, and edge cases. Tests use ConcelierPostgresFixture with Testcontainers. Build passes. | Claude Code |
| 2025-12-26 | Tasks 18, 23, 28, 32 DONE: Created `InterestScoringServiceTests.cs` with 20 tests covering integration tests (score persistence, cache retrieval), job execution (deterministic results, batch updates), and degradation/restoration cycle (threshold-based degradation, restoration, data integrity). E2E test covered by existing `SbomScoreIntegrationTests.cs`. **Sprint 100% complete - all 34 tasks DONE.** | Claude Code |
| 2025-12-26 | Tasks 32, 33 completed: Created `InterestScoreEndpointTests.cs` in WebService.Tests (E2E tests for API endpoints), created `README.md` in StellaOps.Concelier.Interest with full module documentation (usage examples, API endpoints, configuration, metrics, schema). Fixed and verified InterestScoringServiceTests (36 tests pass). Sprint complete. | Claude Code |
| 2025-12-26 | Note: WebService.Tests build blocked by pre-existing broken project references in StellaOps.Concelier.Testing.csproj (references point to wrong paths). Interest.Tests (36 tests) pass. E2E tests created but cannot execute until Testing infra is fixed (separate backlog item). | Claude Code |
@@ -0,0 +1,175 @@
Here’s a simple, practical way to think about an **SBOM-first, VEX-ready supply-chain spine** and the **evidence graph + smart-diff** you can build on top of it—starting from zero and ending with reproducible, signed decisions.

# SBOM-first spine (VEX-ready)

**Goal:** make the SBOM the canonical graph of “what’s inside,” then layer signed evidence (build, scans, policy) so every verdict is portable, replayable, and auditable across registries.

**Core choices**

* **Canonical graph:** treat **CycloneDX 1.6** and **SPDX 3.x** as first-class. Keep both in sync; normalize component IDs (PURL/CPE), hashes, licenses, and relationships.
* **Attestations:** use **in-toto + DSSE** for all lifecycle facts:
  * build (SLSA provenance),
  * scan results (vuln, secrets, IaC, reachability),
  * policy evaluation (allow/deny, risk budgets, exceptions).
* **Storage/transport:** publish everything as **OCI-attached artifacts** via **OCI Referrers**:
  * `image:tag` → SBOM (spdx/cdx), VEX, SARIF, provenance, policy verdicts, exception notes—each a referrer with a media type + signature.
* **Signatures:** cosign/sigstore (or your regional crypto: eIDAS/FIPS/GOST/SM) for **content-addressed** blobs.
**Minimum viable workflow**

1. **Build step**
   * Produce identical SBOMs in CycloneDX and SPDX.
   * Emit a SLSA-style provenance attestation.
2. **Scan step(s)**
   * OS + language deps + container layers; add **reachability proofs** where possible.
   * Emit one **scan attestation per tool** (don’t conflate).
3. **Policy step**
   * Evaluate policies (e.g., OPA/Rego or your lattice rules) **against the SBOM graph + scan evidence**.
   * Emit a **signed policy verdict attestation** (pass/fail + reasons + unknowns count).
4. **Publish**
   * Push the image, then push SBOMs, VEX, scan attestations, and policy verdicts as **OCI referrers**.
5. **Verify / consume**
   * Pull the image’s **referrer set**; verify signatures; reconstruct the graph locally; **replay** the policy evaluation deterministically.
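Steps 1 and 4 hinge on content-addressed, signed envelopes. A minimal sketch of the DSSE envelope shape (unsigned here; a real pipeline would sign via in-toto/cosign tooling rather than hand-rolling it):

```python
import base64
import hashlib
import json

def dsse_envelope(statement: dict, payload_type: str) -> dict:
    """Wrap an in-toto statement in an (unsigned) DSSE envelope skeleton."""
    body = json.dumps(statement, sort_keys=True, separators=(",", ":")).encode()
    return {
        "payloadType": payload_type,
        "payload": base64.b64encode(body).decode(),
        "signatures": [],  # filled in by the signer (cosign / regional crypto)
    }

def content_address(envelope: dict) -> str:
    """Digest used to reference the attestation blob as an OCI referrer."""
    blob = json.dumps(envelope, sort_keys=True).encode()
    return "sha256:" + hashlib.sha256(blob).hexdigest()
```

Because the payload is canonically serialized before hashing, the same statement always yields the same content address, which is what makes verdicts replayable.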
**Data model tips**

* Stable identifiers: PURLs for packages, digests for layers, Build-ID for binaries.
* Edges: `component→dependsOn`, `component→vulnerability`, `component→evidence(attestation)`, `component→policyClaim`.
* Keep **time (as-of)** and **source** on every node/edge for replay.
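These edge conventions can be sketched as a tiny record type (Python for illustration; the field names are assumptions, not the actual store schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Edge:
    """Toy evidence-graph edge: every edge carries its source and as-of time."""
    src: str         # e.g. "pkg:pypi/xyz@1.2.3"
    kind: str        # dependsOn | vulnerability | evidence | policyClaim
    dst: str         # e.g. "CVE-2025-1234" or an attestation digest
    source: str      # which tool/feed asserted this edge
    as_of: datetime  # snapshot time, enables time-travel replay

e = Edge("pkg:pypi/xyz@1.2.3", "vulnerability", "CVE-2025-1234",
         "scannerX@1.4.2", datetime(2025, 12, 26, tzinfo=timezone.utc))
```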
# Evidence graph + smart-diff

**Goal:** persist an **explainability graph** (findings ↔ components ↔ provenance ↔ policies) and compute **signed delta-verdicts** on diffs to drive precise impact analysis and quiet the noise.

**What to store**

* **Provenance:** who built it, from what, when (commit, builder, materials).
* **Findings:** CVEs, misconfigs, secrets, license flags—each with source tool, version, rule, confidence, timestamp.
* **Policies & verdicts:** rule-set version, input hashes, outcome, rationale.
* **Reachability subgraphs:** the minimal path proving exploitability (e.g., symbol → function → package → process start).
**Smart-diff algorithm (high level)**

* Compare two images (or SBOM graphs) **by component identity + version + hash**.
* For each change class:
  * **Added/removed/changed component**
  * **New/cleared/changed finding**
  * **Changed reachability path**
  * **Changed policy version/inputs**
* Re-evaluate only the affected subgraph; produce a **Delta Verdict**:
  * `status`: safer / risk-equal / risk-higher
  * `why`: list of net-new reachable vulns, removed reachable vulns, policy/exception impacts
  * `evidenceRefs`: hashes of attestations used
* **Sign the delta verdict (DSSE)** and publish it as an **OCI referrer** too.
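The change classification above can be sketched as a toy diff over two component maps (identity = PURL with version; a real implementation would also compare hashes, policy versions, and reachability paths, and would sign the result):

```python
def delta_verdict(old: dict, new: dict) -> dict:
    """Classify the risk delta between two SBOM snapshots.

    old/new map component identity -> {"reachable_vulns": set of CVE ids}.
    """
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    old_vulns = set().union(*(c["reachable_vulns"] for c in old.values()))
    new_vulns = set().union(*(c["reachable_vulns"] for c in new.values()))
    vulns_added = sorted(new_vulns - old_vulns)
    vulns_cleared = sorted(old_vulns - new_vulns)
    # Gate on reachable-vuln deltas, not raw CVE counts.
    if vulns_added:
        impact = "risk-higher"
    elif vulns_cleared:
        impact = "safer"
    else:
        impact = "risk-equal"
    return {
        "impact": impact,
        "changes": {
            "componentsAdded": added,
            "componentsRemoved": removed,
            "reachableVulnsAdded": vulns_added,
            "reachableVulnsCleared": vulns_cleared,
        },
    }
```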
**UX essentials**

* Artifact page shows an **“Evidence Stack”** (SBOM, scans, VEX, policy, provenance) with green checks for verified signatures.
* **Smart-diff view:** left vs right image → “net-new reachable CVEs (+3)”, “downgraded risk (−1)”, with drill-downs to the exact path/evidence.
* **Explain button:** expands to show **why** a CVE is (not) applicable (feature flag off, code path unreachable, kernel mitigation present, etc.).
* **Replay badge:** “Deterministic ✅” (input hashes match; verdict reproducible).
# Implementation checklist (team-ready)

**Pipelines**

* [ ] Build: emit SBOM (CDX + SPDX), SLSA provenance (in-toto/DSSE), sign all.
* [ ] Scan: OS + language + config + (optional) eBPF/runtime; one attestation per tool.
* [ ] Policy: evaluate rules → signed verdict attestation; include **unknowns count**.
* [ ] Publish: push all as OCI referrers; enable a verification gate on pull/deploy.
**Schema & IDs**

* [ ] Normalize component IDs (PURL/CPE) + strong hashes; map binaries (Build-ID → package).
* [ ] Evidence graph store: Postgres (authoritative) + cache (Valkey) for queries.
* [ ] Index by image digest; maintain **as-of** snapshots for time-travel.
**Determinism**

* [ ] Lock feeds, rule versions, and tool versions; record all **input digests**.
* [ ] Provide a `replay.yaml` manifest capturing inputs → expected verdict hash.
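The `replay.yaml` manifest might look like the following sketch; every field name here is illustrative, not a settled schema:

```yaml
# replay.yaml (sketch) - pins every input so the verdict can be recomputed
policy:
  id: prod.v1.7
  hash: sha256:…
inputs:
  sbom: sha256:…
  feeds:
    - name: nvd
      snapshot: sha256:…
  tools:
    - name: scannerX
      version: 1.4.2
expected:
  verdictHash: sha256:…
```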
**Security & sovereignty**

* [ ] Pluggable crypto: eIDAS/FIPS/GOST/SM; offline bundle export/import.
* [ ] Air-gapped profile: Postgres-only with documented trade-offs.
**APIs & types (suggested media types)**

* `application/vnd.cyclonedx+json`
* `application/spdx+json`
* `application/vnd.in-toto+json; statement=provenance|scan|policy`
* `application/vnd.stella.verdict+json` (your signed verdict/delta)
**Minimal object examples (sketches)**

*Attestation (scan)*

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "predicateType": "https://stella.dev/scan/v1",
  "subject": [{"name": "registry/app@sha256:…", "digest": {"sha256": "..."}}],
  "predicate": {
    "tool": {"name": "scannerX", "version": "1.4.2"},
    "inputs": {"sbom": "sha256:…", "db": "sha256:…"},
    "findings": [{"id": "CVE-2025-1234", "component": "pkg:pypi/xyz@1.2.3", "severity": "HIGH"}]
  }
}
```

*Policy verdict (replayable)*

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "predicateType": "https://stella.dev/verdict/v1",
  "subject": [{"name": "registry/app@sha256:…"}],
  "predicate": {
    "policy": {"id": "prod.v1.7", "hash": "sha256:…"},
    "inputs": {"sbom": "sha256:…", "scans": ["sha256:…", "sha256:…"]},
    "unknowns": 2,
    "decision": "allow",
    "reasons": [
      "CVE-2025-1234 not reachable (path pruned)",
      "License policy ok"
    ]
  }
}
```

*Delta verdict (smart-diff)*

```json
{
  "predicateType": "https://stella.dev/delta-verdict/v1",
  "predicate": {
    "from": "sha256:old",
    "to": "sha256:new",
    "impact": "risk-higher",
    "changes": {
      "componentsAdded": ["pkg:apk/openssl@3.2.1-r1"],
      "reachableVulnsAdded": ["CVE-2025-2222"]
    },
    "evidenceRefs": ["sha256:scanA", "sha256:policyV1"]
  }
}
```
# Operating rules you can adopt today

* **Everything is evidence.** If it influenced a decision, it’s an attestation you can sign and attach.
* **Same inputs → same verdict.** If not, treat it as a bug.
* **Unknowns budgeted by policy.** E.g., “fail prod if unknowns > 0; warn in dev.”
* **Diffs decide deployments.** Gate on the **delta verdict**, not raw CVE counts.
* **Portable by default.** If you move registries, your decisions move with the image via referrers.
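The “diffs decide deployments” rule can be sketched as a tiny gate that reads the delta-verdict fields from the sketch above; the `max_new_reachable` threshold and the exact field paths are assumptions for illustration:

```python
def gate_on_delta(delta_verdict: dict, max_new_reachable: int = 0) -> str:
    """Gate a release on the delta verdict, not on raw CVE counts."""
    predicate = delta_verdict.get("predicate", {})
    new_reachable = predicate.get("changes", {}).get("reachableVulnsAdded", [])
    if len(new_reachable) > max_new_reachable:
        return "FAIL"          # newly reachable vulns exceed the budget
    if predicate.get("impact") == "risk-higher":
        return "WARN"          # risk rose, but nothing newly reachable
    return "PASS"

verdict = {"predicate": {"impact": "risk-higher",
                         "changes": {"reachableVulnsAdded": ["CVE-2025-2222"]}}}
print(gate_on_delta(verdict))  # FAIL
```

Because the gate only inspects the signed delta object, it stays fast and replayable; a full rescan is never on the critical path.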
If you want, I can turn this into starter repos (SBOM/attestation schemas, OCI‑referrer publish/verify CLI, and a smart‑diff service stub in .NET 10) so your team can plug it into your current pipelines without a big rewrite.
@@ -0,0 +1,61 @@
Here’s a tight, practical pattern you can lift for Stella Ops: **make exceptions first‑class, auditable objects** and **gate releases on risk deltas (diff‑aware checks)**—mirroring what top scanners do, but with stronger evidence and auto‑revalidation.

### 1) Exceptions as auditable objects

Competitor cues

* **Snyk** lets users ignore issues with a required reason and optional expiry (UI/CLI; `.snyk` policy). Ignored items can auto‑resurface when a fix exists. ([Snyk User Docs][1])
* **Anchore** models **policy allowlists** (named sets of exceptions) applied during evaluation/mapping. ([Anchore Documentation][2])
* **Prisma Cloud** supports vulnerability rules/CVE exceptions to soften or block findings. ([Prisma Cloud][3])

What to ship (Stella Ops)

* **Exception entity**: `{scope, subject(CVE/pkg/path), reason(text), evidenceRefs[], createdBy, createdAt, expiresAt?, policyBinding, signature}`
* **Signed rationale + evidence**: require a justification plus **linked proofs** (attestation IDs, VEX note, reachability subgraph slice). Store as an **OCI‑attached attestation** to the SBOM/VEX artifact.
* **Auto‑expiry & revalidation gates**: scheduler re‑tests on expiry or when feeds mark “fix available / EPSS ↑ / reachability ↑”; on failure, **flip the gate to “needs re‑review”** and notify.
* **Audit view**: timeline of the exception lifecycle; show who/why, evidence, and re‑checks; exportable as an “audit pack.”
* **Policy hooks**: “allow only if: reason ∧ evidence present ∧ max TTL ≤ X ∧ owner = team‑Y.”
* **Inheritance**: repo→image→env scoping with explicit shadowing (surface conflicts).
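A minimal sketch of the exception entity and its auto-expiry gate, assuming the field names from the entity bullet above; the 30-day default TTL and the class/method names are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class SecurityException:
    scope: str                     # e.g. "image:repo/app:tag"
    subject: str                   # CVE / pkg / path
    reason: str
    evidence_refs: list = field(default_factory=list)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    ttl: timedelta = timedelta(days=30)

    def status(self, now: datetime) -> str:
        if not self.reason or not self.evidence_refs:
            return "invalid"          # policy hook: reason AND evidence required
        if now > self.created_at + self.ttl:
            return "needs-re-review"  # auto-expiry flips the gate, never silent
        return "active"

exc = SecurityException("image:repo/app:tag", "CVE-2025-1234",
                        "Feature disabled", ["att:sha256:abc"])
print(exc.status(datetime.now(timezone.utc)))                        # active
print(exc.status(datetime.now(timezone.utc) + timedelta(days=31)))   # needs-re-review
```

Note the design choice: expiry changes the *status*, it never deletes the object, so the audit timeline stays intact.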
### 2) Diff‑aware release gates (“delta verdicts”)

Competitor cues

* **Snyk PR Checks** scan *changes* and gate merges with a severity threshold; results show issue diffs per PR. ([Snyk User Docs][4])

What to ship (Stella Ops)

* **Graph deltas**: on each commit/image, compute `Δ(SBOM graph, reachability graph, VEX claims)`.
* **Delta verdict** (signed, replayable): `PASS | WARN | FAIL` + **proof links** to:

  * attestation bundle (in‑toto/DSSE),
  * **reachability subgraph** showing new execution paths to vulnerable symbols,
  * policy evaluation trace.
* **Side‑by‑side UI**: “before vs after” risks; highlight *newly reachable* vulns and *fixed/mitigated* ones; one‑click **Create Exception** (enforces reason+evidence+TTL).
* **Enforcement knobs**: per‑branch/env risk budgets; fail if `unknowns > N` or if any exception lacks evidence/TTL.
* **Supply chain scope**: run the same gate on base‑image bumps and dependency updates.
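The component part of a graph delta reduces to set operations over the two builds' package maps; the `{purl: version}` dict shape here is an assumption for illustration:

```python
def component_delta(base: dict, head: dict) -> dict:
    """Component-level Δ(SBOM) over {purl: version} maps for two builds."""
    return {
        "componentsAdded": sorted(set(head) - set(base)),
        "componentsRemoved": sorted(set(base) - set(head)),
        "componentsUpdated": sorted(p for p in set(base) & set(head)
                                    if base[p] != head[p]),
    }

old = {"pkg:apk/zlib": "1.3", "pkg:apk/openssl": "3.2.0-r0"}
new = {"pkg:apk/openssl": "3.2.1-r1", "pkg:apk/curl": "8.7.1"}
print(component_delta(old, new))
```

Reachability and VEX deltas follow the same pattern over edge sets and claim sets, which is what keeps the gate cheap per commit.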
### Minimal data model (sketch)

* `Exception`: id, scope, subject, reason, evidenceRefs[], ttl, status, sig.
* `DeltaVerdict`: id, baseRef, headRef, changes[], policyOutcome, proofs[], sig.
* `Proof`: type(`attestation|reachability|vex|log`), uri, hash.

### CLI / API ergonomics (examples)

* `stella exception create --cve CVE-2025-1234 --scope image:repo/app:tag --reason "Feature disabled" --evidence att:sha256:… --ttl 30d`
* `stella verify delta --from abc123 --to def456 --policy prod.json --print-proofs`

### Guardrails out of the box

* **No silent ignores**: exceptions are visible in results (action changes, not deletion)—same spirit as Anchore. ([Anchore Documentation][2])
* **Resurface on fix**: if a fix exists, force re‑review (parity with Snyk behavior). ([Snyk User Docs][1])
* **Rule‑based blocking**: allow “hard/soft fail” like Prisma enforcement. ([Prisma Cloud][5])

If you want, I can turn this into a short product spec (API + UI wireframe + policy snippets) tailored to your Stella Ops modules (Policy Engine, Vexer, Attestor).

[1]: https://docs.snyk.io/manage-risk/prioritize-issues-for-fixing/ignore-issues?utm_source=chatgpt.com "Ignore issues | Snyk User Docs"
[2]: https://docs.anchore.com/current/docs/overview/concepts/policy/policies/?utm_source=chatgpt.com "Policies and Evaluation"
[3]: https://docs.prismacloud.io/en/compute-edition/22-12/admin-guide/vulnerability-management/configure-vuln-management-rules?utm_source=chatgpt.com "Vulnerability management rules - Prisma Cloud Documentation"
[4]: https://docs.snyk.io/scan-with-snyk/pull-requests/pull-request-checks?utm_source=chatgpt.com "Pull Request checks | Snyk User Docs"
[5]: https://docs.prismacloud.io/en/enterprise-edition/content-collections/application-security/risk-management/monitor-and-manage-code-build/enforcement?utm_source=chatgpt.com "Enforcement - Prisma Cloud Documentation"
@@ -0,0 +1,145 @@
Here’s a compact blueprint for a **binary‑level knowledge base** that maps ELF Build‑IDs / PE signatures to vulnerable functions, patch lineage, and reachability hints—so your scanner can act like a provenance‑aware “binary oracle,” not just a CVE lookup.

---

# Why this matters (in plain terms)

* **Same version ≠ same risk.** Distros (and vendors) frequently **backport** fixes without bumping versions. Only the **binary** tells the truth.
* **Function‑level matching** turns noisy “package has CVE” into precise “this exact function range is vulnerable in your binary.”
* **Reachability hints** cut triage noise by ranking vulns the code path can actually hit at runtime.

---

# Minimal starter schema (MVP)

Keep it tiny so it grows with real evidence:

**artifacts**

* `id (pk)`
* `platform` (linux, windows)
* `format` (ELF, PE)
* `build_id` (ELF `.note.gnu.build-id`), `pdb_guid` / `pe_imphash` (Windows)
* `sha256` (whole‑file)
* `compiler_fingerprint` (e.g., `gcc-13.2`, `msvc-19.39`)
* `source_hint` (optional: pname/version if known)

**symbols**

* `artifact_id (fk)`
* `symbol_name`
* `addr_start`, `addr_end` (or RVA for PE)
* `section`, `file_offset` (optional)

**vuln_segments**

* `id (pk)`
* `cve_id` (CVE‑YYYY‑NNNN)
* `function_signature` (normalized name + arity)
* `byte_sig` (short stable pattern around the vulnerable hunk)
* `patch_sig` (pattern from the fixed hunk)
* `evidence_ref` (link to patch diff, commit, or NVD note)
* `backport_flag` (bool)
* `introduced_in`, `fixed_in` (semver-ish text; note “backport” when used)

**matches**

* `artifact_id (fk)`, `vuln_segment_id (fk)`
* `match_type` (`byte`, `range`, `symbol`)
* `confidence` (0–1)
* `explain` (why we think this matches)

**reachability_hints**

* `artifact_id (fk)`, `symbol_name`
* `hint_type` (`imported`, `exported`, `hot`, `ebpf_seen`, `graph_core`)
* `weight` (0–100)
---
# How the oracle answers “Am I affected?”

1. **Identify**: Look up by Build‑ID / PE signature; fall back to file hash.
2. **Locate**: Map symbols → address ranges; scan for `byte_sig`/`patch_sig`.
3. **Decide**:

   * if `patch_sig` present ⇒ **Not affected (backported)**.
   * if `byte_sig` present and reachable (weighted) ⇒ **Affected (prioritized)**.
   * if only `byte_sig` present, unreachable ⇒ **Affected (low priority)**.
   * if neither ⇒ **Unknown**.
4. **Explain**: Attach `evidence_ref`, the exact offsets, and the reason (match_type + reachability).
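The decision table in step 3 collapses to one small function; the ≥50 cut-off on the 0–100 reachability hint weight is an assumed policy knob, not part of the schema:

```python
def oracle_verdict(patch_sig_found: bool, byte_sig_found: bool,
                   reachability_weight: int = 0) -> str:
    """Collapse signature matches + reachability hints into one verdict."""
    if patch_sig_found:
        return "not-affected (backported)"   # patch_sig beats version claims
    if byte_sig_found and reachability_weight >= 50:
        return "affected (prioritized)"      # vulnerable hunk present and reachable
    if byte_sig_found:
        return "affected (low priority)"     # present but no reachability evidence
    return "unknown"

print(oracle_verdict(True, True, 90))   # not-affected (backported)
print(oracle_verdict(False, True, 90))  # affected (prioritized)
```

Ordering matters: checking `patch_sig` first encodes the backport-precedence rule below.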
---
# Ingestion pipeline (no humans in the loop)

* **Fingerprinting**: extract Build‑ID / PE GUID; compute `sha256`.
* **Symbol map**: parse DWARF/PDB if present; else fall back to heuristics (ELF `symtab`, PE exports).
* **Patch intelligence**: auto‑diff upstream commits (plus major distros) → synthesize short **byte signatures** around changed hunks (stable across relocations).
* **Evidence links**: store URLs/commit IDs for cross‑audit.
* **Noise control**: only accept a vuln signature if it hits N≥3 independent binaries across distros (tunable).
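The matcher itself can start as a naive byte-window search; real signatures need normalization to survive relocations, so treat this as a stand-in over a toy blob:

```python
def find_sig(binary: bytes, sig: bytes):
    """Return the offset of a signature pattern in the binary, or None if absent."""
    off = binary.find(sig)
    return off if off >= 0 else None

blob = b"\x00prologue\x90\x90CALL weak_cmp\x00"   # toy stand-in for a code section
print(find_sig(blob, b"CALL weak_cmp"))  # 11 -> vulnerable hunk present
print(find_sig(blob, b"CALL safe_cmp"))  # None -> patched pattern absent
```

A hit on `byte_sig` with no hit on `patch_sig` is exactly the evidence row that lands in `matches` with `match_type='byte'`.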
---
# Deterministic verdicts (fit to Stella Ops)

* **Inputs**: `(artifact fingerprint, vuln_segments@version, reachability@policy)`
* **Output**: **Signed OCI attestation** “verdict.json” (same inputs → same verdict).
* **Replay**: keep rule bundle & feed hashes for audit.
* **Backport precedence**: `patch_sig` beats package version claims every time.
---
# Fast path to MVP (2 sprints)

* Add a **Build‑ID/PE indexer** to Scanner.
* Teach Feedser/Vexer to ingest `vuln_segments` (with `byte_sig`/`patch_sig`).
* Implement matching + verdict attestation; surface **“Backported & Safe”** vs **“Affected & Reachable”** badges in UI.
* Seed the DB with 10 high‑impact CVEs (OpenSSL, zlib, xz, glibc, libxml2, curl, musl, busybox, OpenSSH, sudo).
---
# Example: SQL skeleton (Postgres)

```sql
create table artifacts(
  id bigserial primary key,
  platform text, format text,
  build_id text, pdb_guid text, pe_imphash text,
  sha256 bytea not null unique,
  compiler_fingerprint text, source_hint text
);

create table symbols(
  artifact_id bigint references artifacts(id),
  symbol_name text, addr_start bigint, addr_end bigint,
  section text, file_offset bigint
);

create table vuln_segments(
  id bigserial primary key,
  cve_id text, function_signature text,
  byte_sig bytea, patch_sig bytea,
  evidence_ref text, backport_flag boolean,
  introduced_in text, fixed_in text
);

create table matches(
  artifact_id bigint references artifacts(id),
  vuln_segment_id bigint references vuln_segments(id),
  match_type text, confidence real, explain text
);

create table reachability_hints(
  artifact_id bigint references artifacts(id),
  symbol_name text, hint_type text, weight int
);
```

---

If you want, I can:

* drop in a tiny **.NET 10** matcher (ELF/PE parsers + byte‑window scanner),
* wire verdicts as **OCI attestations** in your current pipeline,
* and prep the first **10 CVE byte/patch signatures** to seed the DB.
@@ -0,0 +1,71 @@
Here’s a crisp way to think about “reachability” that makes triage sane and auditable: **treat it like a cryptographic proof**—a minimal, reproducible chain that shows *why* a vuln can (or cannot) hit runtime.

### The idea (plain English)

* **Reachability** asks: “Could data flow from an attacker to the vulnerable code path during real execution?”
* **Proof-carrying reachability** says: “Don’t just say yes/no—hand me a *proof chain* I can re-run.”

Think: the shortest, lossless breadcrumb trail from entrypoint → sinks, with the exact build + policy context that made it true.
### What the “proof” contains

1. **Scope hash**: content digests for the artifact(s) (image layers, SBOM nodes, commit IDs, compiler flags).
2. **Policy hash**: the decision rules used (e.g., “prod disallows unknowns > 0”; “vendor VEX outranks distro unless backport tag present”).
3. **Graph snippet**: the *minimal subgraph* (call/data/control edges) that connects:

   * external entrypoint(s) → user-controlled sources → validators (if any) → vulnerable function(s)/sink(s).
4. **Conditions**: feature flags, env vars, platform guards, version ranges, eBPF-observed edges (if present).
5. **Verdict** (signed): A → {Affected | Not Affected | Under-Constrained} with reason codes.
6. **Replay manifest**: the inputs needed to recompute the same verdict (feeds, rules, versions, hashes).
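The “minimal subgraph” of step 3 is just the shortest entrypoint→sink chain, so a breadth-first search over an adjacency map is enough to sketch it; the call-graph contents mirror the example below and are illustrative:

```python
from collections import deque

def shortest_chain(edges: dict, entry: str, sink: str):
    """BFS for the shortest entry->sink path in a call graph.
    Returns the node chain, or None when no path exists (itself
    usable as evidence for a Not Affected verdict)."""
    parents, queue = {entry: None}, deque([entry])
    while queue:
        node = queue.popleft()
        if node == sink:
            chain = []
            while node is not None:        # walk parents back to the entrypoint
                chain.append(node)
                node = parents[node]
            return chain[::-1]
        for nxt in edges.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None

calls = {"POST /import": ["Parse"], "Parse": ["LoadNode"],
         "LoadNode": ["vulnerable_path"]}
print(shortest_chain(calls, "POST /import", "vulnerable_path"))
```

Storing only this chain (plus its guards) is what keeps proofs compact enough to sign and ship per verdict.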
### Why this helps

* **Auditable**: Every “Not Affected” is defensible (no hand-wavy “scanner says so”).
* **Deterministic**: Same inputs → same verdict (great for change control and regulators).
* **Compact**: You store only the *minimal subgraph*, not the whole monolith.
### Minimal proof example (sketch)

* Artifact: `svc.payments:1.4.7` (image digest `sha256:…`)
* CVE: `CVE-2024-XYZ` in `libyaml 0.2.5`
* Entry: `POST /import`, body → `YamlDeserializer.Parse`
* Guards: none (no schema/whitelist prior to parse)
* Edge chain: `HttpBody → Parse(bytes) → LoadNode() → vulnerable_path()`
* Condition: feature flag `BULK_IMPORT=true`
* Verdict: **Affected**
* Signed DSSE envelope over {scope hash, policy hash, graph snippet JSON, conditions, verdict}.
### How to build it (practical checklist)

* **During build**

  * Emit SBOM (source & binary) with function/file symbols where possible.
  * Capture compiler/linker flags; normalize paths; include feature flags’ default state.
* **During analysis**

  * Static: slice the call graph to the *shortest* source→sink chain; attach type-state facts (e.g., “validated length”).
  * Deps: map CVEs to precise symbol/ABI surfaces (not just package names).
  * Backports: require explicit evidence (patch IDs, symbol presence) before downgrading severity.
* **During runtime (optional but strong)**

  * eBPF trace to confirm edges observed; store hashes of kprobes/uprobes programs and the sampling window.
* **During decisioning**

  * Apply the merge policy (vendor VEX, distro notes, internal tests) deterministically; hash the policy.
  * Emit one DSSE/attestation per verdict; include the replay manifest.
### UI that won’t overwhelm

* **Default card**: Verdict + “Why?” (one-line chain) + “Replay” button.
* **Expand**: shows the 5–10 edge subgraph, conditions, and signed envelope.
* **Compare builds**: side-by-side proof deltas (edges added/removed, policy change, backport flip).

### Operating modes

* **Strict** (prod): Unknowns → fail-closed; proofs required for Not Affected.
* **Lenient** (dev): Unknowns tolerated; proofs optional but encouraged; allow “Under-Constrained”.

### What to measure

* Proof generation rate, median proof size (KB), replay success %, proof dedup ratio, and “unknowns” burn-down.

If you want, I can turn this into a ready-to-ship spec for Stella Ops (attestation schema, JSON examples, API routes, and a tiny .NET verifier).
@@ -0,0 +1,86 @@
Here’s a crisp idea you can put to work right away: **treat SBOM diffs as a first‑class, signed evidence object**—not just “what components changed,” but also **VEX claim deltas** and **attestation (in‑toto/DSSE) deltas**. This makes vulnerability verdicts **deterministically replayable** and **audit‑ready** across release gates.

### Why this matters (plain speak)

* **Less noise, faster go/no‑go:** Only re‑triage what truly changed (package, reachability, config, or vendor stance), not the whole universe.
* **Deterministic audits:** Same inputs → same verdict. Auditors can replay checks exactly.
* **Tighter release gates:** Policies evaluate the *delta verdict*, not raw scans.
### Evidence model (minimal but complete)

* **Subject:** OCI digest of the image/artifact.
* **Baseline:** SBOM‑G (graph hash), VEX set hash, policy + rules hash, feed snapshots (CVE JSON digests), toolchain + config hashes.
* **Delta:**

  * `components_added/removed/updated` (with semver + source/distro origin)
  * `reachability_delta` (edges added/removed in the call/file/path graph)
  * `settings_delta` (flags, env, CAPs, eBPF signals)
  * `vex_delta` (per‑CVE claim transitions: *affected → not_affected → fixed*, with reason codes)
  * `attestation_delta` (build‑provenance step or signer changes)
* **Verdict:** Signed “delta verdict” (allow/block/risk_budget_consume) with rationale pointers into the deltas.
* **Provenance:** DSSE envelope, in‑toto link to baseline + new inputs.
### Deterministic replay contract

Pin and record:

* Feed snapshots (CVE/VEX advisories) + hashes
* Scanner versions + rule packs + lattice/policy version
* SBOM generator version + mode (CycloneDX 1.6 / SPDX 3.0.1)
* Reachability engine settings (language analyzers, eBPF taps)
* Merge semantics ID (see below)

The replayer re‑hydrates these **exact** inputs and must reproduce the same verdict bit‑for‑bit.
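A minimal way to honor the contract is to hash the pinned inputs as canonical JSON, then require a replay to match both the input manifest and the verdict bytes; the field layout is an assumption for illustration:

```python
import hashlib
import json

def input_manifest(feeds: dict, tools: dict, baseline: dict) -> str:
    """Canonical digest over pinned replay inputs (sorted keys, no whitespace)."""
    blob = json.dumps({"feeds": feeds, "tools": tools, "baseline": baseline},
                      sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def replay_matches(recorded_manifest: str, recorded_verdict: bytes,
                   feeds: dict, tools: dict, baseline: dict,
                   recomputed_verdict: bytes) -> bool:
    """Replay succeeds only if inputs AND verdict reproduce bit-for-bit."""
    return (input_manifest(feeds, tools, baseline) == recorded_manifest
            and recorded_verdict == recomputed_verdict)

m = input_manifest({"cve": "sha256:aa"}, {"sbomer": "1.6.3"}, {"sbomG": "sha256:bb"})
print(replay_matches(m, b"allow",
                     {"cve": "sha256:aa"}, {"sbomer": "1.6.3"},
                     {"sbomG": "sha256:bb"}, b"allow"))  # True
```

The sorted-keys serialization is the load-bearing detail: without a canonical byte form, two honest replays of the same inputs can hash differently.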
### Merge semantics (stop “vendor > distro > internal” naïveté)

Define a policy‑controlled lattice for claims, e.g.:

* **Orderings:** `exploit_observed > affected > under_investigation > fixed > not_affected`
* **Source weights:** vendor, distro, internal SCA, runtime sensor, pentest
* **Conflict rules:** tie‑breaks, quorum, freshness windows, required evidence hooks (e.g., “not_affected because feature flag X=off, proven by config attestation Y”)
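Under the ordering above, a minimal merge takes the most severe claim as the join; source weights, quorum and freshness windows would refine the tie-breaks and are omitted here as a simplifying assumption:

```python
PRECEDENCE = ["exploit_observed", "affected", "under_investigation",
              "fixed", "not_affected"]   # most severe first, per the ordering above

def merge_claims(claims: list) -> str:
    """Join of the claim lattice: the most severe claim wins outright."""
    return min(claims, key=PRECEDENCE.index)

print(merge_claims(["fixed", "affected", "not_affected"]))    # affected
print(merge_claims(["not_affected", "under_investigation"]))  # under_investigation
```

Keeping the ordering as plain data (not code) is what lets the policy hash in the replay contract cover the merge semantics too.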
### Where it lives in the product

* **UI:** “Diff & Verdict” panel on each PR/build → shows SBOM/VEX/attestation deltas and the signed delta verdict; one‑click export of the DSSE envelope.
* **API/Artifact:** Publish as an **OCI‑attached attestation** (`application/vnd.stella.delta-verdict+json`) alongside SBOM + VEX.
* **Pipelines:** Release gate consumes only the delta verdict (fast path); full scan can run asynchronously for deep telemetry.
### Minimal schema sketch (JSON)

```json
{
  "subject": {"ociDigest": "sha256:..."},
  "inputs": {
    "feeds": [{"type":"cve","digest":"sha256:..."},{"type":"vex","digest":"sha256:..."}],
    "tools": {"sbomer":"1.6.3","reach":"0.9.0","policy":"lattice-2025.12"},
    "baseline": {"sbomG":"sha256:...","vexSet":"sha256:..."}
  },
  "delta": {
    "components": {"added":[...],"removed":[...],"updated":[...]},
    "reachability": {"edgesAdded":[...],"edgesRemoved":[...]},
    "settings": {"changed":[...]},
    "vex": [{"cve":"CVE-2025-1234","from":"affected","to":"not_affected","reason":"config_flag_off","evidenceRef":"att#cfg-42"}],
    "attestations": {"changed":[...]}
  },
  "verdict": {"decision":"allow","riskBudgetUsed":2,"policyId":"lattice-2025.12","explanationRefs":["vex[0]","reachability.edgesRemoved[3]"]},
  "signing": {"dsse":"...","signer":"stella-authority"}
}
```
### Roll‑out checklist (Stella Ops framing)

* **Sbomer:** emit a **graph‑hash** (stable canonicalization) and diff vs the previous SBOM‑G.
* **Vexer:** compute VEX claim deltas + reason codes; apply the lattice merge; expose `vexDelta[]`.
* **Attestor:** snapshot feed digests, tool/rule versions, and config; produce the DSSE bundle.
* **Policy Engine:** evaluate deltas → produce a **delta verdict** with strict replay semantics.
* **Router/Timeline:** store delta verdicts as auditable objects; enable a “replay build N” button.
* **CLI/CI:** `stella delta-verify --subject <digest> --envelope delta.json.dsse` → must return an identical verdict.

### Guardrails

* Canonicalize and sort everything before hashing.
* Record unknowns explicitly and let policy act on them (e.g., “fail if unknowns > N in prod”).
* No network during replay except to fetch pinned digests.

If you want, I can draft the precise CycloneDX extension fields + an OCI media type registration, plus .NET 10 interfaces for Sbomer/Vexer/Attestor to emit/consume this today.
@@ -0,0 +1,58 @@
Here’s a simple way to make “risk budget” feel like a real, live dashboard rather than a dusty policy—plus the one visualization that best explains “budget burn” to PMs.

### First, quick background (plain English)

* **Risk budget** = how much unresolved risk we’re willing to carry for a release (e.g., 100 “risk points”).
* **Burn** = how fast we consume that budget as unknowns/alerts pop up, minus how much we “pay back” by fixing/mitigating.
### What to show on the dashboard

1. **Heatmap of Unknowns (Where are we blind?)**

   * Rows = components/services; columns = risk categories (vulns, compliance, perf, data, supply-chain).
   * Cell value = *unknowns count × severity weight* (unknown ≠ unimportant; it’s the most dangerous).
   * Click-through reveals: last evidence timestamp, owners, next probe.

2. **Delta Table (Risk Decay per Release)**

   * Each release row compares **Before vs After**: total risk, unknowns, known-high, accepted, deferred.
   * Include a **“risk retired”** column (points dropped due to fixes/mitigations) and **“risk shifted”** (moved to exceptions).

3. **Exception Ledger (Auditable)**

   * Every accepted risk has an ID, owner, expiry, evidence note, and auto-reminder.
### The best single chart for PMs: **Risk Budget Burn-Up**

*(This is the one slide they’ll get immediately.)*

* **X-axis:** calendar dates up to code freeze.
* **Y-axis:** risk points.
* **Two lines:**

  * **Budget (flat or stepped)** = allowable risk over time (e.g., 100 pts until T‑2, then 60).
  * **Actual Risk (cumulative)** = unknowns + knowns − mitigations (daily snapshot).
* **Shaded area** between the lines = **Headroom** (green) or **Overrun** (red).
* Add **vertical markers** for major changes (feature freeze, pen-test start, dependency bump).
* Add **burn targets** (dotted) to show where you must be each week to land inside budget.
### How to compute the numbers (lightweight)

* **Risk points** = Σ(issue_severity_weight × exposure_factor × evidence_freshness_penalty).
* **Unknown penalty**: if the latest evidence is older than N days, apply a multiplier (e.g., ×1.5).
* **Decay**: when a fix lands *and* evidence is refreshed, subtract points that day.
* **Guardrail**: fail the gate if **unknowns > K** *or* **Actual Risk > Budget** within T days of release.
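The risk-points formula is small enough to sketch directly; the ×1.5 multiplier comes from the text, while the 14-day freshness window and the issue tuples are assumed sample data:

```python
from datetime import date

def risk_points(issues, today, stale_after_days=14, stale_multiplier=1.5):
    """Sum severity_weight x exposure_factor x freshness_penalty over open issues."""
    total = 0.0
    for severity_weight, exposure, evidence_date in issues:
        stale = (today - evidence_date).days > stale_after_days
        total += severity_weight * exposure * (stale_multiplier if stale else 1.0)
    return total

issues = [(8, 1.0, date(2025, 1, 1)),    # high severity, fully exposed, stale evidence
          (3, 0.5, date(2025, 1, 20))]   # medium, half exposed, fresh evidence
print(risk_points(issues, today=date(2025, 1, 21)))  # 13.5
```

Running this per component in the daily snapshot job yields the "Actual Risk" line for the burn-up chart.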
### Minimal artifacts to ship

* **Schema:** `issue_id, component, category, severity, is_unknown, exposure, evidence_date, status, owner`.
* **Daily snapshot job:** materialize totals + unknowns + mitigations per component.
* **One chart, one table, one heatmap** (don’t overdo it).
### Copy‑paste labels for the board

* **Top-left KPI:** “Headroom: 28 pts (green)”
* **Badges:** “Unknowns↑ +6 (24h)”, “Risk retired −18 (7d)”, “Exceptions expiring: 3”
* **Callout:** “At current burn, overrun in 5 days—pull forward the libX fix or scope‑cut Y.”

If you want, I can mock this with sample data (CSV → chart) so your team sees exactly how it looks.
@@ -0,0 +1,90 @@
Here’s a compact, practical way to rank conflicting vulnerability evidence (VEX) by **freshness vs. confidence**—so your system picks the best truth without hand‑holding.

---

# A scoring lattice for VEX sources

**Goal:** Given multiple signals (VEX statements, advisories, bug trackers, scanner detections), compute a single verdict with a transparent score and a proof trail.
## 1) Normalize inputs → “evidence atoms”

For every item, extract:

* **scope** (package@version, image@digest, file hash)
* **claim** (affected, not_affected, under_investigation, fixed)
* **reason** (reachable?, feature flag off, vulnerable code not present, platform not impacted)
* **provenance** (who said it, how it’s signed)
* **when** (issued_at, observed_at, expires_at)
* **supporting artifacts** (SBOM ref, in‑toto link, CVE IDs, PoC link)
## 2) Confidence (C) and Freshness (F)

**Confidence C (0–1)** (sum the applicable factors, as in the worked example below; cap at 1):

* **Signature strength:** DSSE + Sigstore/Rekor inclusion (0.35), plus hardware‑backed key or org OIDC (0.15)
* **Source reputation:** NVD (0.20), major distro PSIRT (0.20), upstream vendor (0.20), reputable CERT (0.15), small vendor (0.10)
* **Evidence quality:** reachability proof / test (0.25), code diff linking (0.20), deterministic build link (0.15), “reason” present (0.10)
* **Consensus bonus:** ≥2 independent concurring sources (+0.10)

**Freshness F (0–1)** (monotone decay):

* F = exp(−Δdays / τ) with τ tuned per source class (e.g., **τ=30** for vendor VEX, **τ=90** for NVD, **τ=14** for exploit‑active feeds).
* **Update reset:** a new attestation with the same subject resets Δdays.
* **Expiry clamp:** if `now > expires_at`, set F=0.
## 3) Claim strength (S_claim)

Map claim → base weight:

* not_affected (0.9), fixed (0.8), affected (0.7), under_investigation (0.4)
* **Reason multipliers:** reachable? (+0.15 to “affected”), “feature flag off” (+0.10 to “not_affected”), platform mismatch (+0.10), backport patch note (+0.10 if a patch commit hash is provided)
## 4) Overall score & lattice merge

Per evidence `e`:

**Score(e) = C(e) × F(e) × S_claim(e)**

Then merge in a **distributive lattice** ordered by:

1. **Claim precedence** (not_affected > fixed > affected > under_investigation)
2. Break ties by **Score(e)**
3. If competing top claims are within ε (e.g., 0.05), **escalate to “disputed”** and surface both with proofs.

**Policy hooks:** allow org‑level overrides (e.g., “prod must treat ‘under_investigation’ as affected unless a reachability=false proof is present”).
## 5) Worked example: small‑vendor Sigstore VEX vs 6‑month‑old NVD note

* **Small vendor VEX (signed, Sigstore, reason: code path unreachable, issued 7 days ago):**
  C ≈ signature (0.35) + small‑vendor (0.10) + reason (0.10) + evidence (reachability +0.25) ≈ 0.70
  F = exp(−7/30) ≈ 0.79
  S_claim (not_affected + reason) = 0.9 + 0.10 = 1.0 (cap at 1)
  **Score ≈ 0.70 × 0.79 × 1.0 = 0.55**

* **NVD entry (affected; no extra reasoning; last updated 180 days ago):**
  C ≈ NVD (0.20) = 0.20
  F = exp(−180/90) ≈ 0.14
  S_claim (affected) = 0.7
  **Score ≈ 0.20 × 0.14 × 0.7 = 0.02**

**Outcome:** the vendor VEX decisively wins; the lattice yields **not_affected** with linked proofs. If NVD updates tomorrow, its F jumps and the lattice may flip—deterministically.
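The worked example reduces to the one formula from section 4; this sketch plugs in the same C, τ and claim-weight values quoted above and reproduces the two scores:

```python
import math

def score(confidence: float, age_days: float, tau: float, claim_weight: float) -> float:
    """Score(e) = C(e) x F(e) x S_claim(e), with F = exp(-Δdays/τ)."""
    return confidence * math.exp(-age_days / tau) * claim_weight

# Values from the worked example: vendor VEX (C≈0.70, 7 days old, τ=30,
# S_claim capped at 1.0) vs the stale NVD note (C=0.20, 180 days old, τ=90).
vendor = score(confidence=0.70, age_days=7, tau=30, claim_weight=1.0)
nvd = score(confidence=0.20, age_days=180, tau=90, claim_weight=0.7)
print(round(vendor, 2), round(nvd, 2))  # 0.55 0.02
```

Because the function is pure, re-running it with updated feed timestamps is exactly the deterministic "flip" described in the outcome.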
## 6) Implementation notes (fits Stella Ops modules)

* **Where:** run in **scanner.webservice** (per your standing rule), keep Concelier/Excitors as preserve‑prune pipes.
* **Storage:** Postgres as SoR; Valkey as cache for score shards.
* **Inputs:** CycloneDX/SPDX IDs, in‑toto attestations, Rekor proofs, feed timestamps.
* **Outputs:**

  * **Signed “verdict attestation”** (OCI‑attached) with the inputs’ hashes + the chosen path in the lattice.
  * **Delta verdicts** when any input changes (freshness decay counts as a change).
* **UI:** a “Trust Algebra” panel showing (C, F, S_claim), the decay timeline, and “why this won.”
## 7) Guardrails & ops

* **Replayability:** include τ values, weights, and the source catalog in the attested policy so anyone can recompute the same score.
* **Backports:** add a “patch‑aware” booster only if the commit hash maps to the shipped build (prove via diff or package changelog).
* **Air‑gapped:** mirror Rekor; cache trust anchors; freeze decay at scan time but recompute at policy‑evaluation time.

---

If you want, I can drop this into a ready‑to‑run JSON/YAML policy bundle (with τ/weights defaults) and a tiny C# evaluator stub you can wire into **Policy Engine → Vexer** right away.
File diff suppressed because it is too large.
src/concelier-webservice.slnf (new file, 9 lines)
@@ -0,0 +1,9 @@
{
  "solution": {
    "path": "Concelier/StellaOps.Concelier.sln",
    "projects": [
      "StellaOps.Concelier.WebService\\StellaOps.Concelier.WebService.csproj",
      "__Tests\\StellaOps.Concelier.WebService.Tests\\StellaOps.Concelier.WebService.Tests.csproj"
    ]
  }
}
@@ -1,6 +0,0 @@
{
  "sdk": {
    "version": "10.0.100",
    "rollForward": "latestMinor"
  }
}