license switch agpl -> busl1, sprint work, new product advisories

This commit is contained in:
master
2026-01-20 15:32:20 +02:00
parent 4903395618
commit c32fff8f86
1835 changed files with 38630 additions and 4359 deletions

View File

@@ -34,7 +34,7 @@ Local LLM inference in air-gapped environments requires model weight bundles to
"model_family": "llama3",
"model_size": "8B",
"quantization": "Q4_K_M",
"license": "Apache-2.0",
"license": "BUSL-1.1",
"created_at": "2025-12-26T00:00:00Z",
"files": [
{

View File

@@ -29,10 +29,17 @@ See `etc/airgap.yaml.sample` for configuration options.
Key settings:
- Staleness policy (maxAgeHours, warnAgeHours, staleAction)
- Time anchor requirements (requireTimeAnchor)
- Per-content staleness budgets (advisories, VEX, packages, mitigations)
- PostgreSQL connection (schema: `airgap`)
- Export/import paths and validation rules
## Bundle manifest (v2) additions
- `canonicalManifestHash`: sha256 of canonical JSON for deterministic verification.
- `subject`: sha256 (+ optional sha512) digest of the bundle target.
- `timestamps`: RFC3161/eIDAS timestamp entries with TSA chain/OCSP/CRL refs.
- `rekorProofs`: entry body/inclusion proof paths plus signed entry timestamp for offline verification.
## Dependencies
- PostgreSQL (schema: `airgap`)

View File

@@ -60,6 +60,7 @@ AirGap Time calculates drift = `now(monotonic) - anchor.issued_at` and exposes:
- Test vectors located under `src/AirGap/StellaOps.AirGap.Time/fixtures/`.
- For offline testing, simulate monotonic clock via `ITestClock` to avoid system clock drift in CI.
- Staleness calculations use `StalenessCalculator` + `StalenessBudget`/`StalenessEvaluation` (see `src/AirGap/StellaOps.AirGap.Time/Services` and `.Models`); warning/breach thresholds must be non-negative and warning ≤ breach.
- RFC3161 verification in offline mode consumes bundle-stapled TSA chain + OCSP/CRL blobs (`tsa/chain/`, `tsa/ocsp/`, `tsa/crl/`) and fails closed when revocation evidence is missing.
## 7. References

View File

@@ -0,0 +1,136 @@
# Analytics Module
The Analytics module provides a star-schema data warehouse layer for SBOM and attestation data, enabling executive reporting, risk dashboards, and ad-hoc analysis.
## Overview
Stella Ops generates rich data through SBOM ingestion, vulnerability correlation, VEX assessments, and attestations. The Analytics module normalizes this data into a queryable warehouse schema optimized for:
- **Executive dashboards**: Risk posture, vulnerability trends, compliance status
- **Supply chain analysis**: Supplier concentration, license distribution
- **Security metrics**: CVE exposure, VEX effectiveness, MTTR tracking
- **Attestation coverage**: SLSA compliance, provenance gaps
## Key Capabilities
| Capability | Description |
|------------|-------------|
| Unified component registry | Canonical component table with normalized suppliers and licenses |
| Vulnerability correlation | Pre-joined component-vulnerability mapping with EPSS/KEV flags |
| VEX-adjusted exposure | Vulnerability counts that respect VEX overrides |
| Attestation tracking | Provenance and SLSA level coverage by environment/team |
| Time-series rollups | Daily snapshots for trend analysis |
| Materialized views | Pre-computed aggregations for dashboard performance |
## Data Model
### Star Schema Overview
```
┌─────────────────┐
│ artifacts │ (dimension)
│ container/app │
└────────┬────────┘
┌──────────────┼──────────────┐
│ │ │
┌─────────▼──────┐ ┌─────▼─────┐ ┌──────▼──────┐
│ artifact_ │ │attestations│ │vex_overrides│
│ components │ │ (fact) │ │ (fact) │
│ (bridge) │ └───────────┘ └─────────────┘
└─────────┬──────┘
┌─────────▼──────┐
│ components │ (dimension)
│ unified │
│ registry │
└─────────┬──────┘
┌─────────▼──────┐
│ component_ │
│ vulns │ (fact)
│ (bridge) │
└────────────────┘
```
### Core Tables
| Table | Type | Purpose |
|-------|------|---------|
| `components` | Dimension | Unified component registry with PURL, supplier, license |
| `artifacts` | Dimension | Container images and applications with SBOM metadata |
| `artifact_components` | Bridge | Links artifacts to their SBOM components |
| `component_vulns` | Fact | Component-to-vulnerability mapping |
| `attestations` | Fact | Attestation metadata (provenance, SBOM, VEX) |
| `vex_overrides` | Fact | VEX status overrides with justifications |
| `raw_sboms` | Audit | Raw SBOM payloads for reprocessing |
| `raw_attestations` | Audit | Raw DSSE envelopes for audit |
| `daily_vulnerability_counts` | Rollup | Daily vuln aggregations |
| `daily_component_counts` | Rollup | Daily component aggregations |
### Materialized Views
| View | Refresh | Purpose |
|------|---------|---------|
| `mv_supplier_concentration` | Daily | Top suppliers by component count |
| `mv_license_distribution` | Daily | License category distribution |
| `mv_vuln_exposure` | Daily | CVE exposure adjusted by VEX |
| `mv_attestation_coverage` | Daily | Provenance/SLSA coverage by env/team |
## Quick Start
### Day-1 Queries
**Top supplier concentration (supply chain risk):**
```sql
SELECT * FROM analytics.sp_top_suppliers(20);
```
**License risk heatmap:**
```sql
SELECT * FROM analytics.sp_license_heatmap();
```
**CVE exposure adjusted by VEX:**
```sql
SELECT * FROM analytics.sp_vuln_exposure('prod', 'high');
```
**Fixable vulnerability backlog:**
```sql
SELECT * FROM analytics.sp_fixable_backlog('prod');
```
**Attestation coverage gaps:**
```sql
SELECT * FROM analytics.sp_attestation_gaps('prod');
```
### API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/analytics/suppliers` | GET | Supplier concentration data |
| `/api/analytics/licenses` | GET | License distribution |
| `/api/analytics/vulnerabilities` | GET | CVE exposure (VEX-adjusted) |
| `/api/analytics/backlog` | GET | Fixable vulnerability backlog |
| `/api/analytics/attestation-coverage` | GET | Attestation gaps |
| `/api/analytics/trends/vulnerabilities` | GET | Vulnerability time-series |
| `/api/analytics/trends/components` | GET | Component time-series |
## Architecture
See [architecture.md](./architecture.md) for detailed design decisions, data flow, and normalization rules.
## Schema Reference
See [analytics_schema.sql](../../db/analytics_schema.sql) for complete DDL including:
- Table definitions with indexes
- Normalization functions
- Materialized views
- Stored procedures
- Refresh procedures
## Sprint Reference
Implementation tracked in: `docs/implplan/SPRINT_20260120_030_Platform_sbom_analytics_lake.md`

View File

@@ -0,0 +1,270 @@
# Analytics Module Architecture
## Design Philosophy
The Analytics module implements a **star-schema data warehouse** pattern optimized for analytical queries rather than transactional workloads. Key design principles:
1. **Separation of concerns**: Analytics schema is isolated from operational schemas (scanner, vex, proof_system)
2. **Pre-computation**: Expensive aggregations computed in advance via materialized views
3. **Audit trail**: Raw payloads preserved for reprocessing and compliance
4. **Determinism**: All normalization functions are immutable and reproducible
5. **Incremental updates**: Supports both full refresh and incremental ingestion
## Data Flow
```
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Scanner │ │ Concelier │ │ Attestor │
│ (SBOM) │ │ (Vuln) │ │ (DSSE) │
└──────┬──────┘ └──────┬──────┘ └──────┬──────┘
│ │ │
│ SBOM Ingested │ Vuln Updated │ Attestation Created
▼ ▼ ▼
┌──────────────────────────────────────────────────────┐
│ AnalyticsIngestionService │
│ - Normalize components (PURL, supplier, license) │
│ - Upsert to unified registry │
│ - Correlate with vulnerabilities │
│ - Store raw payloads │
└──────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────┐
│ analytics schema │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌────────────┐ │
│ │components│ │artifacts│ │comp_vuln│ │attestations│ │
│ └─────────┘ └─────────┘ └─────────┘ └────────────┘ │
└──────────────────────────────────────────────────────┘
│ Daily refresh
┌──────────────────────────────────────────────────────┐
│ Materialized Views │
│ mv_supplier_concentration | mv_license_distribution │
│ mv_vuln_exposure | mv_attestation_coverage │
└──────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────┐
│ Platform API Endpoints │
│ (with 5-minute caching) │
└──────────────────────────────────────────────────────┘
```
## Normalization Rules
### PURL Parsing
Package URLs (PURLs) are the canonical identifier for components. The `parse_purl()` function extracts:
| Field | Example | Notes |
|-------|---------|-------|
| `purl_type` | `maven`, `npm`, `pypi` | Ecosystem identifier |
| `purl_namespace` | `org.apache.logging` | Group/org/scope (optional) |
| `purl_name` | `log4j-core` | Package name |
| `purl_version` | `2.17.1` | Version string |
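As an illustration of the extraction above, a minimal SQL sketch is shown below. This is not the shipped `parse_purl()` implementation — the function name, regex, and return shape here are assumptions for illustration only:

```sql
-- Illustrative sketch: split a PURL such as
-- pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1
-- into type / namespace / name / version.
CREATE OR REPLACE FUNCTION analytics.parse_purl_sketch(purl TEXT)
RETURNS TABLE (purl_type TEXT, purl_namespace TEXT, purl_name TEXT, purl_version TEXT)
LANGUAGE sql IMMUTABLE AS $$
  SELECT
    m[1]               AS purl_type,
    NULLIF(m[2], '')   AS purl_namespace,  -- optional group/org/scope
    m[3]               AS purl_name,
    NULLIF(m[4], '')   AS purl_version     -- optional version
  FROM regexp_match(purl, '^pkg:([^/]+)/(?:(.+)/)?([^/@]+)(?:@(.+))?$') AS m;
$$;
```

The sketch deliberately ignores qualifiers and subpaths, which the real purl spec also defines.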
### Supplier Normalization
The `normalize_supplier()` function standardizes supplier names for consistent grouping:
1. Convert to lowercase
2. Trim whitespace
3. Remove legal suffixes: Inc., LLC, Ltd., Corp., GmbH, B.V., S.A., PLC, Co.
4. Normalize internal whitespace
**Examples:**
- `"Apache Software Foundation, Inc."` → `"apache software foundation"`
- `"Google LLC"` → `"google"`
- `" Microsoft Corp. "` → `"microsoft"`
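The four steps can be sketched as a single SQL expression. This is an illustrative assumption, not the shipped `normalize_supplier()` body (the real suffix list and step ordering may differ):

```sql
-- Sketch of the normalization pipeline described above.
CREATE OR REPLACE FUNCTION analytics.normalize_supplier_sketch(raw TEXT)
RETURNS TEXT LANGUAGE sql IMMUTABLE AS $$
  SELECT regexp_replace(               -- 4. collapse internal whitespace
    btrim(                             -- 2. trim surrounding whitespace
      regexp_replace(                  -- 3. strip trailing legal suffixes
        lower(coalesce(raw, '')),      -- 1. lowercase
        '[,\s]*(inc\.?|llc|ltd\.?|corp\.?|gmbh|b\.v\.|s\.a\.|plc|co\.?)\s*$',
        '', 'i')),
    '\s+', ' ', 'g');
$$;
```

With this sketch, `"Apache Software Foundation, Inc."`, `"Google LLC"`, and `" Microsoft Corp. "` reduce to the grouped forms shown in the examples above.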
### License Categorization
The `categorize_license()` function maps SPDX expressions to risk categories:
| Category | Examples | Risk Level |
|----------|----------|------------|
| `permissive` | MIT, Apache-2.0, BSD-3-Clause, ISC | Low |
| `copyleft-weak` | LGPL-2.1, MPL-2.0, EPL-2.0 | Medium |
| `copyleft-strong` | GPL-3.0, AGPL-3.0, SSPL | High |
| `proprietary` | Proprietary, Commercial | Review Required |
| `unknown` | Unrecognized expressions | Review Required |
**Special handling:**
- GPL with exceptions (e.g., `GPL-2.0 WITH Classpath-exception-2.0`) → `copyleft-weak`
- Dual-licensed (e.g., `MIT OR Apache-2.0`) → uses first match
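A compressed sketch of this mapping follows; the actual `categorize_license()` covers far more SPDX identifiers, and the patterns here are illustrative assumptions:

```sql
-- Sketch: map an SPDX expression to a risk category.
-- Order matters: exception handling first, then first-match wins
-- for dual-licensed "A OR B" expressions.
CREATE OR REPLACE FUNCTION analytics.categorize_license_sketch(expr TEXT)
RETURNS TEXT LANGUAGE sql IMMUTABLE AS $$
  SELECT CASE
    WHEN expr ~* 'WITH\s+Classpath-exception' THEN 'copyleft-weak'   -- exception downgrade
    WHEN expr ~* '^(MIT|Apache-2\.0|BSD-3-Clause|ISC)' THEN 'permissive'
    WHEN expr ~* '^(LGPL|MPL|EPL)' THEN 'copyleft-weak'
    WHEN expr ~* '^(GPL|AGPL|SSPL)' THEN 'copyleft-strong'
    WHEN expr ~* 'proprietary|commercial' THEN 'proprietary'
    ELSE 'unknown'
  END;
$$;
```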
## Component Deduplication
Components are deduplicated by `(purl, hash_sha256)`:
1. If same PURL and hash: existing record updated (last_seen_at, counts)
2. If same PURL but different hash: new record created (version change)
3. If same hash but different PURL: new record (aliased package)
**Upsert pattern:**
```sql
INSERT INTO analytics.components (...)
VALUES (...)
ON CONFLICT (purl, hash_sha256) DO UPDATE SET
last_seen_at = now(),
sbom_count = components.sbom_count + 1,
updated_at = now();
```
## Vulnerability Correlation
When a component is upserted, the `VulnerabilityCorrelationService` queries Concelier for matching advisories:
1. Query by PURL type + namespace + name
2. Filter by version range matching
3. Upsert to `component_vulns` with severity, EPSS, KEV flags
**Version range matching** uses Concelier's existing logic to handle:
- Semver ranges: `>=1.0.0 <2.0.0`
- Exact versions: `1.2.3`
- Wildcards: `1.x`
## VEX Override Logic
The `mv_vuln_exposure` view implements VEX-adjusted counts:
```sql
-- Effective count excludes artifacts with active VEX overrides
COUNT(DISTINCT ac.artifact_id) FILTER (
WHERE NOT EXISTS (
SELECT 1 FROM analytics.vex_overrides vo
WHERE vo.artifact_id = ac.artifact_id
AND vo.vuln_id = cv.vuln_id
AND vo.status = 'not_affected'
AND (vo.valid_until IS NULL OR vo.valid_until > now())
)
) AS effective_artifact_count
```
**Override validity:**
- `valid_from`: When the override became effective
- `valid_until`: Expiration (NULL = no expiration)
- Only `status = 'not_affected'` reduces exposure counts
## Time-Series Rollups
Daily rollups computed by `compute_daily_rollups()`:
**Vulnerability counts** (per environment/team/severity):
- `total_vulns`: All affecting vulnerabilities
- `fixable_vulns`: Vulns with `fix_available = TRUE`
- `vex_mitigated`: Vulns with active `not_affected` override
- `kev_vulns`: Vulns in CISA KEV
- `unique_cves`: Distinct CVE IDs
- `affected_artifacts`: Artifacts containing affected components
- `affected_components`: Components with affecting vulns
**Component counts** (per environment/team/license/type):
- `total_components`: Distinct components
- `unique_suppliers`: Distinct normalized suppliers
**Retention policy:** 90 days in hot storage; older data archived to cold storage.
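One way to implement the 90-day retention cut-over is a delete-returning move. The archive schema/table name below is a hypothetical assumption — any cold-storage target would do:

```sql
-- Sketch: move rollup rows older than 90 days to a cold-storage mirror
-- (assumes analytics_archive.daily_vulnerability_counts has the same shape).
WITH moved AS (
  DELETE FROM analytics.daily_vulnerability_counts
  WHERE snapshot_date < CURRENT_DATE - INTERVAL '90 days'
  RETURNING *
)
INSERT INTO analytics_archive.daily_vulnerability_counts
SELECT * FROM moved;
```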
## Materialized View Refresh
All materialized views support `REFRESH ... CONCURRENTLY` for zero-downtime updates:
```sql
-- Refresh all views (run daily via pg_cron or Scheduler)
SELECT analytics.refresh_all_views();
```
**Refresh schedule (recommended):**
- `mv_supplier_concentration`: 02:00 UTC daily
- `mv_license_distribution`: 02:15 UTC daily
- `mv_vuln_exposure`: 02:30 UTC daily
- `mv_attestation_coverage`: 02:45 UTC daily
- `compute_daily_rollups()`: 03:00 UTC daily
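If pg_cron is the scheduler, the recommended windows above translate to jobs like the following (job names are illustrative; times run in the database server's cron timezone):

```sql
-- Sketch: register the daily refresh jobs with pg_cron.
SELECT cron.schedule('refresh-supplier-concentration', '0 2 * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY analytics.mv_supplier_concentration$$);
SELECT cron.schedule('refresh-license-distribution', '15 2 * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY analytics.mv_license_distribution$$);
SELECT cron.schedule('refresh-vuln-exposure', '30 2 * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY analytics.mv_vuln_exposure$$);
SELECT cron.schedule('refresh-attestation-coverage', '45 2 * * *',
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY analytics.mv_attestation_coverage$$);
SELECT cron.schedule('daily-rollups', '0 3 * * *',
  $$SELECT analytics.compute_daily_rollups()$$);
```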
## Performance Considerations
### Indexing Strategy
| Table | Key Indexes | Query Pattern |
|-------|-------------|---------------|
| `components` | `purl`, `supplier_normalized`, `license_category` | Lookup, aggregation |
| `artifacts` | `digest`, `environment`, `team` | Lookup, filtering |
| `component_vulns` | `vuln_id`, `severity`, `fix_available` | Join, filtering |
| `attestations` | `artifact_id`, `predicate_type` | Join, aggregation |
| `vex_overrides` | `(artifact_id, vuln_id)`, `status` | Subquery exists |
### Query Performance Targets
| Query | Target | Notes |
|-------|--------|-------|
| `sp_top_suppliers(20)` | < 100ms | Uses materialized view |
| `sp_license_heatmap()` | < 100ms | Uses materialized view |
| `sp_vuln_exposure()` | < 200ms | Uses materialized view |
| `sp_fixable_backlog()` | < 500ms | Live query with indexes |
| `sp_attestation_gaps()` | < 100ms | Uses materialized view |
### Caching Strategy
Platform API endpoints use a 5-minute TTL cache:
- Cache key: endpoint + query parameters
- Invalidation: Time-based only (no event-driven invalidation)
- Storage: Valkey (in-memory)
## Security Considerations
### Schema Permissions
```sql
-- Read-only role for dashboards
GRANT USAGE ON SCHEMA analytics TO dashboard_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO dashboard_reader;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA analytics TO dashboard_reader;
-- Write role for ingestion service
GRANT USAGE ON SCHEMA analytics TO analytics_writer;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA analytics TO analytics_writer;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA analytics TO analytics_writer;
```
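Note that `GRANT ... ON ALL TABLES` covers only tables that exist at grant time. A sketch of keeping the roles in sync for tables created later, assuming the same two roles:

```sql
-- Sketch: extend the grants above to future objects in the schema.
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics
  GRANT SELECT ON TABLES TO dashboard_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics
  GRANT SELECT, INSERT, UPDATE ON TABLES TO analytics_writer;
```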
### Data Classification
| Table | Classification | Notes |
|-------|----------------|-------|
| `components` | Internal | Contains package names, versions |
| `artifacts` | Internal | Contains image names, team names |
| `component_vulns` | Internal | Vulnerability data (public CVEs) |
| `vex_overrides` | Confidential | Contains justifications, operator IDs |
| `raw_sboms` | Confidential | Full SBOM payloads |
| `raw_attestations` | Confidential | Signed attestation envelopes |
### Audit Trail
All tables include `created_at` and `updated_at` timestamps. Raw payload tables (`raw_sboms`, `raw_attestations`) are append-only with content hashes for integrity verification.
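Those content hashes enable periodic spot-checks. The sketch below assumes pgcrypto is installed and that `raw_sboms` exposes `sbom_id`, `payload` (bytea), and a hex `content_hash` column — all of these column names are assumptions:

```sql
-- Sketch: recompute hashes and flag rows whose stored hash no longer matches.
SELECT sbom_id
FROM analytics.raw_sboms
WHERE content_hash <> encode(digest(payload, 'sha256'), 'hex')
LIMIT 10;  -- any rows returned indicate corruption or tampering
```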
## Integration Points
### Upstream Dependencies
| Service | Event | Action |
|---------|-------|--------|
| Scanner | SBOM ingested | Normalize and upsert components |
| Concelier | Advisory updated | Re-correlate affected components |
| Excititor | VEX observation | Create/update vex_overrides |
| Attestor | Attestation created | Upsert attestation record |
### Downstream Consumers
| Consumer | Data | Endpoint |
|----------|------|----------|
| Console UI | Dashboard data | `/api/analytics/*` |
| Export Center | Compliance reports | Direct DB query |
| AdvisoryAI | Risk context | `/api/analytics/vulnerabilities` |
## Future Enhancements
1. **Partitioning**: Partition `daily_*` tables by date for faster queries and archival
2. **Incremental refresh**: Implement incremental materialized view refresh for large datasets
3. **Custom dimensions**: Support user-defined component groupings (business units, cost centers)
4. **Predictive analytics**: Add ML-based risk prediction using historical trends
5. **BI tool integration**: Direct connectors for Tableau, Looker, Metabase

View File

@@ -0,0 +1,418 @@
# Analytics Query Library
This document provides ready-to-use SQL queries for common analytics use cases. All queries are optimized for the analytics star schema.
## Executive Dashboard Queries
### 1. Top Supplier Concentration (Supply Chain Risk)
Identifies suppliers with the highest component footprint, indicating supply chain concentration risk.
```sql
-- Via stored procedure (recommended)
SELECT * FROM analytics.sp_top_suppliers(20);
-- Direct query
SELECT
supplier,
component_count,
artifact_count,
team_count,
critical_vuln_count,
high_vuln_count,
environments
FROM analytics.mv_supplier_concentration
ORDER BY component_count DESC
LIMIT 20;
```
**Use case**: Identify vendors that, if compromised, would affect the most artifacts.
### 2. License Risk Heatmap
Shows distribution of components by license category for compliance review.
```sql
-- Via stored procedure
SELECT * FROM analytics.sp_license_heatmap();
-- Direct query with grouping
SELECT
license_category,
SUM(component_count) AS total_components,
SUM(artifact_count) AS total_artifacts,
COUNT(DISTINCT license_concluded) AS unique_licenses
FROM analytics.mv_license_distribution
GROUP BY license_category
ORDER BY
CASE license_category
WHEN 'copyleft-strong' THEN 1
WHEN 'proprietary' THEN 2
WHEN 'unknown' THEN 3
WHEN 'copyleft-weak' THEN 4
ELSE 5
END;
```
**Use case**: Compliance review, identify components requiring legal review.
### 3. CVE Exposure Adjusted by VEX
Shows true vulnerability exposure after applying VEX mitigations.
```sql
-- Via stored procedure
SELECT * FROM analytics.sp_vuln_exposure('prod', 'high');
-- Direct query showing VEX effectiveness
SELECT
vuln_id,
severity::TEXT,
cvss_score,
epss_score,
kev_listed,
fix_available,
raw_artifact_count AS total_affected,
effective_artifact_count AS actually_affected,
raw_artifact_count - effective_artifact_count AS vex_mitigated,
ROUND(100.0 * (raw_artifact_count - effective_artifact_count) / NULLIF(raw_artifact_count, 0), 1) AS mitigation_rate
FROM analytics.mv_vuln_exposure
WHERE effective_artifact_count > 0
ORDER BY
CASE severity
WHEN 'critical' THEN 1
WHEN 'high' THEN 2
WHEN 'medium' THEN 3
ELSE 4
END,
effective_artifact_count DESC
LIMIT 50;
```
**Use case**: Show executives the "real" risk after VEX assessment.
### 4. Fixable Vulnerability Backlog
Lists vulnerabilities that can be fixed today (fix available, not VEX-mitigated).
```sql
-- Via stored procedure
SELECT * FROM analytics.sp_fixable_backlog('prod');
-- Direct query with priority scoring
SELECT
a.name AS service,
a.environment,
a.team,
c.name AS component,
c.version AS current_version,
cv.vuln_id,
cv.severity::TEXT,
cv.cvss_score,
cv.epss_score,
cv.fixed_version,
cv.kev_listed,
-- Priority score: higher = fix first
(
CASE cv.severity
WHEN 'critical' THEN 100
WHEN 'high' THEN 75
WHEN 'medium' THEN 50
ELSE 25
END
+ COALESCE(cv.epss_score * 100, 0)
+ (CASE WHEN cv.kev_listed THEN 50 ELSE 0 END)
)::INT AS priority_score
FROM analytics.component_vulns cv
JOIN analytics.components c ON c.component_id = cv.component_id
JOIN analytics.artifact_components ac ON ac.component_id = c.component_id
JOIN analytics.artifacts a ON a.artifact_id = ac.artifact_id
LEFT JOIN analytics.vex_overrides vo ON vo.artifact_id = a.artifact_id
AND vo.vuln_id = cv.vuln_id
AND vo.status = 'not_affected'
AND (vo.valid_until IS NULL OR vo.valid_until > now())
WHERE cv.affects = TRUE
AND cv.fix_available = TRUE
AND vo.override_id IS NULL
AND a.environment = 'prod'
ORDER BY priority_score DESC, a.name
LIMIT 100;
```
**Use case**: Prioritize remediation work based on risk and fixability.
### 5. Build Integrity / Attestation Coverage
Shows attestation gaps by environment and team.
```sql
-- Via stored procedure
SELECT * FROM analytics.sp_attestation_gaps('prod');
-- Direct query with gap analysis
SELECT
environment,
team,
total_artifacts,
with_provenance,
total_artifacts - with_provenance AS missing_provenance,
provenance_pct,
slsa_level_2_plus,
slsa2_pct,
with_sbom_attestation,
with_vex_attestation
FROM analytics.mv_attestation_coverage
WHERE environment = 'prod'
ORDER BY provenance_pct ASC;
```
**Use case**: Identify teams/environments not meeting attestation requirements.
## Trend Analysis Queries
### 6. Vulnerability Trend (30 Days)
```sql
SELECT
snapshot_date,
environment,
SUM(total_vulns) AS total_vulns,
SUM(fixable_vulns) AS fixable_vulns,
SUM(vex_mitigated) AS vex_mitigated,
SUM(total_vulns) - SUM(vex_mitigated) AS net_exposure,
SUM(kev_vulns) AS kev_vulns
FROM analytics.daily_vulnerability_counts
WHERE snapshot_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY snapshot_date, environment
ORDER BY environment, snapshot_date;
```
### 7. Vulnerability Trend by Severity
```sql
SELECT
snapshot_date,
severity::TEXT,
SUM(total_vulns) AS total_vulns
FROM analytics.daily_vulnerability_counts
WHERE snapshot_date >= CURRENT_DATE - INTERVAL '30 days'
AND environment = 'prod'
GROUP BY snapshot_date, severity
ORDER BY snapshot_date,
CASE severity
WHEN 'critical' THEN 1
WHEN 'high' THEN 2
WHEN 'medium' THEN 3
ELSE 4
END;
```
### 8. Component Growth Trend
```sql
SELECT
snapshot_date,
environment,
SUM(total_components) AS total_components,
SUM(unique_suppliers) AS unique_suppliers
FROM analytics.daily_component_counts
WHERE snapshot_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY snapshot_date, environment
ORDER BY environment, snapshot_date;
```
## Deep-Dive Queries
### 9. Component Impact Analysis
Find all artifacts affected by a specific component.
```sql
SELECT
a.name AS artifact,
a.version,
a.environment,
a.team,
ac.depth AS dependency_depth,
ac.introduced_via
FROM analytics.components c
JOIN analytics.artifact_components ac ON ac.component_id = c.component_id
JOIN analytics.artifacts a ON a.artifact_id = ac.artifact_id
WHERE c.purl LIKE 'pkg:maven/org.apache.logging.log4j/log4j-core%'
ORDER BY a.environment, a.name;
```
### 10. CVE Impact Analysis
Find all artifacts affected by a specific CVE.
```sql
SELECT DISTINCT
a.name AS artifact,
a.version,
a.environment,
a.team,
c.name AS component,
c.version AS component_version,
cv.cvss_score,
cv.fixed_version,
CASE
WHEN vo.status = 'not_affected' THEN 'VEX Mitigated'
WHEN cv.fix_available THEN 'Fix Available'
ELSE 'Vulnerable'
END AS status
FROM analytics.component_vulns cv
JOIN analytics.components c ON c.component_id = cv.component_id
JOIN analytics.artifact_components ac ON ac.component_id = c.component_id
JOIN analytics.artifacts a ON a.artifact_id = ac.artifact_id
LEFT JOIN analytics.vex_overrides vo ON vo.artifact_id = a.artifact_id
AND vo.vuln_id = cv.vuln_id
AND (vo.valid_until IS NULL OR vo.valid_until > now())
WHERE cv.vuln_id = 'CVE-2021-44228'
ORDER BY a.environment, a.name;
```
### 11. Supplier Vulnerability Profile
Detailed vulnerability breakdown for a specific supplier.
```sql
SELECT
c.supplier_normalized AS supplier,
c.name AS component,
c.version,
cv.vuln_id,
cv.severity::TEXT,
cv.cvss_score,
cv.kev_listed,
cv.fix_available,
cv.fixed_version
FROM analytics.components c
JOIN analytics.component_vulns cv ON cv.component_id = c.component_id
WHERE c.supplier_normalized = 'apache software foundation'
AND cv.affects = TRUE
ORDER BY
CASE cv.severity
WHEN 'critical' THEN 1
WHEN 'high' THEN 2
ELSE 3
END,
cv.cvss_score DESC;
```
### 12. License Compliance Report
Components with concerning licenses in production.
```sql
SELECT
c.name AS component,
c.version,
c.license_concluded,
c.license_category::TEXT,
c.supplier_normalized AS supplier,
COUNT(DISTINCT a.artifact_id) AS artifact_count,
ARRAY_AGG(DISTINCT a.name) AS affected_artifacts
FROM analytics.components c
JOIN analytics.artifact_components ac ON ac.component_id = c.component_id
JOIN analytics.artifacts a ON a.artifact_id = ac.artifact_id
WHERE c.license_category IN ('copyleft-strong', 'proprietary', 'unknown')
AND a.environment = 'prod'
GROUP BY c.component_id, c.name, c.version, c.license_concluded, c.license_category, c.supplier_normalized
ORDER BY c.license_category, artifact_count DESC;
```
### 13. MTTR Analysis
Mean time to remediate by severity.
```sql
SELECT
cv.severity::TEXT,
COUNT(*) AS remediated_vulns,
AVG(EXTRACT(EPOCH FROM (vo.valid_from - cv.published_at)) / 86400)::NUMERIC(10,2) AS avg_days_to_mitigate,
PERCENTILE_CONT(0.5) WITHIN GROUP (
ORDER BY EXTRACT(EPOCH FROM (vo.valid_from - cv.published_at)) / 86400
)::NUMERIC(10,2) AS median_days,
PERCENTILE_CONT(0.9) WITHIN GROUP (
ORDER BY EXTRACT(EPOCH FROM (vo.valid_from - cv.published_at)) / 86400
)::NUMERIC(10,2) AS p90_days
FROM analytics.component_vulns cv
JOIN analytics.vex_overrides vo ON vo.vuln_id = cv.vuln_id
AND vo.status = 'not_affected'
WHERE cv.published_at >= now() - INTERVAL '90 days'
AND cv.published_at IS NOT NULL
GROUP BY cv.severity
ORDER BY
CASE cv.severity
WHEN 'critical' THEN 1
WHEN 'high' THEN 2
WHEN 'medium' THEN 3
ELSE 4
END;
```
### 14. Transitive Dependency Risk
Components introduced through transitive dependencies.
```sql
SELECT
c.name AS transitive_component,
c.version,
ac.introduced_via AS direct_dependency,
ac.depth,
COUNT(DISTINCT cv.vuln_id) AS vuln_count,
SUM(CASE WHEN cv.severity = 'critical' THEN 1 ELSE 0 END) AS critical_count,
COUNT(DISTINCT a.artifact_id) AS affected_artifacts
FROM analytics.components c
JOIN analytics.artifact_components ac ON ac.component_id = c.component_id
JOIN analytics.artifacts a ON a.artifact_id = ac.artifact_id
LEFT JOIN analytics.component_vulns cv ON cv.component_id = c.component_id AND cv.affects = TRUE
WHERE ac.depth > 0 -- Transitive only
AND a.environment = 'prod'
GROUP BY c.component_id, c.name, c.version, ac.introduced_via, ac.depth
HAVING COUNT(cv.vuln_id) > 0
ORDER BY critical_count DESC, vuln_count DESC
LIMIT 50;
```
### 15. VEX Effectiveness Report
How effective is the VEX program at reducing noise?
```sql
SELECT
DATE_TRUNC('week', vo.created_at)::DATE AS week,
COUNT(*) AS total_overrides,
COUNT(*) FILTER (WHERE vo.status = 'not_affected') AS not_affected,
COUNT(*) FILTER (WHERE vo.status = 'affected') AS confirmed_affected,
COUNT(*) FILTER (WHERE vo.status = 'under_investigation') AS under_investigation,
COUNT(*) FILTER (WHERE vo.status = 'fixed') AS marked_fixed,
-- Noise reduction rate
ROUND(100.0 * COUNT(*) FILTER (WHERE vo.status = 'not_affected') / NULLIF(COUNT(*), 0), 1) AS noise_reduction_pct
FROM analytics.vex_overrides vo
WHERE vo.created_at >= now() - INTERVAL '90 days'
GROUP BY DATE_TRUNC('week', vo.created_at)
ORDER BY week;
```
## Performance Tips
1. **Use materialized views**: Views prefixed with `mv_` are pre-computed, so queries against them are fast
2. **Add environment filter**: Most queries benefit from `WHERE environment = 'prod'`
3. **Use stored procedures**: `sp_*` functions return JSON and handle caching
4. **Limit results**: Always use `LIMIT` for large result sets
5. **Check refresh times**: Views are refreshed daily; data may be up to 24h stale
## Query Parameters
Common filter parameters:
| Parameter | Type | Example | Notes |
|-----------|------|---------|-------|
| `environment` | TEXT | `'prod'`, `'stage'` | Filter by deployment environment |
| `team` | TEXT | `'platform'` | Filter by owning team |
| `severity` | TEXT | `'critical'`, `'high'` | Minimum severity level |
| `days` | INT | `30`, `90` | Lookback period |
| `limit` | INT | `20`, `100` | Max results |

View File

@@ -40,6 +40,7 @@ All predicates capture subjects, issuer metadata, policy context, materials, opt
- **Console:** Evidence browser, verification reports, chain-of-custody graph, issuer/key management, attestation workbench, and bulk verification flows.
- **CLI / SDK:** `stella attest sign|verify|list|fetch|key` commands plus language SDKs to integrate build pipelines and offline verification scripts.
- **Policy Studio:** Verification policies author required predicate types, issuers, witness requirements, and freshness windows; simulations show enforcement impact.
Reference: `docs/modules/attestor/guides/timestamp-policy.md` for RFC-3161 policy assertions.
## Storage, offline & air-gap posture
- PostgreSQL stores entry metadata, dedupe keys, and audit events; object storage optionally archives DSSE bundles.

View File

@@ -10,9 +10,12 @@ StellaOps SBOM interoperability tests ensure compatibility with third-party secu
| Format | Version | Status | Parity Target |
|--------|---------|--------|---------------|
| CycloneDX | 1.6 | ✅ Supported | 95%+ |
| CycloneDX | 1.7 | ✅ Supported | 95%+ |
| SPDX | 3.0.1 | ✅ Supported | 95%+ |
Notes:
- SPDX 3.0.1 generation currently emits:
  - JSON-LD `@context` and `spdxVersion`
  - core document/package/relationship elements
  - software package/file/snippet metadata
  - build profile elements with output relationships
  - security vulnerabilities with assessment relationships
  - `verifiedUsing` hashes/signatures
  - external references/identifiers
- Full profile coverage is tracked in SPRINT_20260119_014.
### Third-Party Tools
| Tool | Purpose | Version | Status |
@@ -162,7 +165,7 @@ If SBOMs fail schema validation:
1. Verify format version:
```bash
jq '.specVersion' sbom-cyclonedx.json # Should be "1.6"
jq '.specVersion' sbom-cyclonedx.json # Should be "1.7"
jq '.spdxVersion' sbom-spdx.json # Should be "SPDX-3.0"
```
@@ -203,7 +206,7 @@ Tools are currently installed from `latest`. To pin versions:
## References
- [CycloneDX 1.6 Specification](https://cyclonedx.org/docs/1.6/)
- [CycloneDX 1.7 Specification](https://cyclonedx.org/docs/1.7/)
- [SPDX 3.0.1 Specification](https://spdx.github.io/spdx-spec/v3.0/)
- [Syft Documentation](https://github.com/anchore/syft)
- [Grype Documentation](https://github.com/anchore/grype)

View File

@@ -607,7 +607,7 @@ stella attest verify-batch \
- [Sigstore Trust Root Specification](https://github.com/sigstore/root-signing)
- [in-toto Attestation Specification](https://github.com/in-toto/attestation)
- [SPDX 3.0.1 Specification](https://spdx.github.io/spdx-spec/v3.0.1/)
- [CycloneDX 1.6 Specification](https://cyclonedx.org/docs/1.6/)
- [CycloneDX 1.7 Specification](https://cyclonedx.org/docs/1.7/)
### StellaOps Documentation
- [Attestor Architecture](../modules/attestor/architecture.md)

View File

@@ -0,0 +1,48 @@
# Attestor Offline Verification Guide
> **Audience:** Attestor operators, AirGap owners, CI/Release engineers
>
> **Purpose:** Explain how to verify attestations and timestamp evidence in fully offline environments.
## 1. Offline Inputs
Offline verification expects all evidence to be bundled locally:
- DSSE envelopes + certificate chains.
- Rekor inclusion proofs + a pinned checkpoint.
- RFC3161 timestamp evidence with bundled TSA chain and revocation data:
- `tsa/chain/` (PEM certificates, leaf -> root)
- `tsa/ocsp/` (stapled OCSP responses)
- `tsa/crl/` (CRL snapshots when OCSP is unavailable)
## 2. Bundle Layout Expectations
Minimum paths for timestamp verification:
- `manifest.json` with `timestamps[]` entries.
- `tsa/chain/*.pem` for each RFC3161 timestamp.
- `tsa/ocsp/*.der` or `tsa/crl/*.crl` (revocation evidence).
## 3. CLI Workflow (Offline)
Use the bundle verification flow aligned to domain operations:
```bash
stella bundle verify --bundle /path/to/bundle --offline --trust-root /path/to/tsa-root.pem --rekor-checkpoint /path/to/checkpoint.json
```
Notes:
- Offline mode fails closed when revocation evidence is missing or invalid.
- Trust roots must be provided locally; no network fetches are allowed.
## 4. Verification Behavior
- TSA chain is validated against the provided trust roots.
- Revocation evidence is verified using bundled OCSP/CRL data.
- Rekor proofs are verified against the pinned checkpoint when provided.
## 5. References
- `docs/modules/attestor/guides/timestamp-policy.md`
- `docs/modules/attestor/airgap.md`
- `docs/modules/airgap/guides/staleness-and-time.md`

View File

@@ -0,0 +1,50 @@
# RFC-3161 Timestamp Policy Assertions
## Overview
Attestation timestamp policy rules validate RFC-3161 evidence alongside Rekor
inclusion proofs. The policy surface is backed by `AttestationTimestampPolicyContext`
and `TimestampPolicyEvaluator` in `StellaOps.Attestor.Timestamping`.
## Context fields
`AttestationTimestampPolicyContext` exposes the following fields:
| Field | Type | Description |
| --- | --- | --- |
| `HasValidTst` | bool | True when RFC-3161 verification succeeded. |
| `TstTime` | DateTimeOffset? | Generation time from the timestamp token. |
| `TsaName` | string? | TSA subject/name from the TST. |
| `TsaPolicyOid` | string? | TSA policy OID from the TST. |
| `TsaCertificateValid` | bool | True when TSA certificate validation passes. |
| `TsaCertificateExpires` | DateTimeOffset? | TSA signing cert expiry time. |
| `OcspStatus` | string? | OCSP status (Good/Unknown/Revoked). |
| `CrlChecked` | bool | True when CRL data was checked. |
| `RekorTime` | DateTimeOffset? | Rekor integrated time for the entry. |
| `TimeSkew` | TimeSpan? | RekorTime - TstTime, used for skew checks. |
## Example assertions
The policy engine maps the context into `evidence.tst.*` fields. Example rules:
```yaml
rules:
- id: require-rfc3161
assert: evidence.tst.valid == true
- id: time-skew
assert: abs(evidence.tst.time_skew) <= "5m"
- id: freshness
assert: evidence.tst.signing_cert.expires_at - now() > "180d"
- id: revocation-staple
assert: evidence.tst.ocsp.status in ["good","unknown"] && evidence.tst.crl.checked == true
- id: trusted-tsa
assert: evidence.tst.tsa_name in ["Example TSA", "Acme TSA"]
```
## Built-in policy defaults
`TimestampPolicy.Default` enforces:
- `RequireRfc3161 = true`
- `MaxTimeSkew = 5 minutes`
- `MinCertificateFreshness = 180 days`
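As a rough illustration (this is not the `TimestampPolicyEvaluator` implementation; variable names are assumptions), the skew and freshness defaults reduce to plain `DateTimeOffset`/`TimeSpan` arithmetic:

```csharp
// Illustrative sketch of the default checks, not production code.
TimeSpan maxTimeSkew = TimeSpan.FromMinutes(5);        // MaxTimeSkew
TimeSpan minCertFreshness = TimeSpan.FromDays(180);    // MinCertificateFreshness

DateTimeOffset tstTime = DateTimeOffset.Parse("2026-01-19T12:00:00Z");
DateTimeOffset rekorTime = DateTimeOffset.Parse("2026-01-19T12:03:30Z");
DateTimeOffset tsaCertExpires = DateTimeOffset.Parse("2026-09-01T00:00:00Z");

TimeSpan timeSkew = rekorTime - tstTime;               // maps to TimeSkew
bool skewOk = timeSkew.Duration() <= maxTimeSkew;      // abs(skew) <= 5m, true here
bool freshnessOk = tsaCertExpires - DateTimeOffset.UtcNow >= minCertFreshness;
```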
- `RequireRevocationStapling = true`
## References
- `src/Attestor/__Libraries/StellaOps.Attestor.Timestamping/AttestationTimestampPolicyContext.cs`
- `docs/modules/attestor/architecture.md`


@@ -0,0 +1,32 @@
# Attestor Implementation Plan
## Purpose
Provide a concise, living plan for Attestor feature delivery, timestamping, and offline verification workflows.
## Active work
- `docs/implplan/SPRINT_20260119_010_Attestor_tst_integration.md`
- `docs/implplan/SPRINT_20260119_013_Attestor_cyclonedx_1.7_generation.md`
- `docs/implplan/SPRINT_20260119_014_Attestor_spdx_3.0.1_generation.md`
## Near-term deliverables
- RFC-3161 timestamping integration (signing, verification, policy context).
- CycloneDX 1.7 predicate writer updates and determinism tests.
- SPDX 3.0.1 predicate writer updates and determinism tests.
- CLI workflows for attestation timestamp handling.
## Dependencies
- Authority timestamping services and TSA client integrations.
- EvidenceLocker timestamp storage and verification utilities.
- Policy evaluation integration for timestamp assertions.
## Evidence of completion
- Attestor timestamping library changes under `src/Attestor/__Libraries/`.
- Updated CLI command handlers and tests under `src/Cli/`.
- Deterministic unit tests and fixtures in `src/Attestor/__Tests/`.
- Documentation updates under `docs/modules/attestor/`.
## Reference docs
- `docs/modules/attestor/README.md`
- `docs/modules/attestor/architecture.md`
- `docs/modules/attestor/rekor-verification-design.md`
- `docs/modules/platform/architecture-overview.md`


@@ -432,25 +432,22 @@ In tile-based logs, the Merkle tree is stored in fixed-size chunks (tiles) of 25
#### 3.4.2 Log Version Configuration
StellaOps supports automatic version detection and explicit version selection:
StellaOps supports both automatic and explicit version selection:
```csharp
public enum RekorLogVersion
{
Auto = 0, // Auto-detect based on endpoint availability
V1 = 1, // Traditional Trillian-based Rekor (API proofs)
Auto = 0, // Auto-selects v2 tiles
V2 = 2 // Tile-based Sunlight format
}
```
**Version Selection Logic:**
| Version | PreferTileProofs | Result |
|---------|------------------|--------|
| V2 | (any) | Always use tile proofs |
| V1 | (any) | Always use API proofs |
| Auto | true | Prefer tile proofs if available |
| Auto | false | Use API proofs (default) |
| Version | Result |
|---------|--------|
| V2 | Always use tile proofs |
| Auto | Always use tile proofs |
#### 3.4.3 Checkpoint Format
@@ -577,14 +574,12 @@ attestor:
rekor:
primary:
url: https://rekor.sigstore.dev
# Version: Auto, V1, or V2
# Version: Auto or V2
version: Auto
# Custom tile base URL (optional, defaults to {url}/tile/)
tile_base_url: ""
# Log ID for multi-log environments (hex-encoded SHA-256)
log_id: "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"
# Prefer tile proofs when version is Auto
prefer_tile_proofs: false
```
**Environment Variables:**
@@ -592,10 +587,9 @@ attestor:
```bash
# Rekor v2 Configuration
REKOR_SERVER_URL=https://rekor.sigstore.dev
REKOR_VERSION=Auto # Auto, V1, or V2
REKOR_VERSION=Auto # Auto or V2
REKOR_TILE_BASE_URL= # Optional custom tile endpoint
REKOR_LOG_ID=c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d
REKOR_PREFER_TILE_PROOFS=false
```
#### 3.4.8 Offline Verification Benefits


@@ -402,9 +402,9 @@ Authority now understands two flavours of sender-constrained OAuth clients:
- `security.senderConstraints.dpop.allowTemporaryBypass` toggles an emergency-only bypass for sealed drills. When set to `true`, Authority logs `authority.dpop.proof.bypass`, tags `authority.dpop_result=bypass`, and issues tokens without a DPoP `cnf` claim so downstream servers know sender constraints are disabled. **Reset to `false` immediately after the exercise.**
- `security.senderConstraints.dpop.nonce.enabled` enables nonce challenges for high-value audiences (`requiredAudiences`, normalised to case-insensitive strings). When a nonce is required but missing or expired, `/token` replies with `WWW-Authenticate: DPoP error="use_dpop_nonce"` (and, when available, a fresh `DPoP-Nonce` header). Clients must retry with the issued nonce embedded in the proof.
- Refresh-token requests honour the original sender constraint (DPoP or mTLS). `/token` revalidates the proof/certificate, enforces the recorded thumbprint/JKT, and reuses that metadata so the new access/refresh tokens remain bound to the same key.
- `security.senderConstraints.dpop.nonce.store` selects `memory` (default) or `redis`. When `redis` is configured, set `security.senderConstraints.dpop.nonce.redisConnectionString` so replicas share nonce issuance and high-value clients avoid replay gaps during failover.
- `security.senderConstraints.dpop.nonce.store` selects `memory` (default) or `redis` (Valkey-backed). When `redis` is configured, set `security.senderConstraints.dpop.nonce.redisConnectionString` so replicas share nonce issuance and high-value clients avoid replay gaps during failover.
- Telemetry: every nonce challenge increments `authority_dpop_nonce_miss_total{reason=...}` while mTLS mismatches increment `authority_mtls_mismatch_total{reason=...}`.
- Example (enabling Redis-backed nonces; adjust audiences per deployment):
- Example (enabling Valkey-backed nonces; adjust audiences per deployment):
```yaml
security:
senderConstraints:


@@ -242,7 +242,7 @@ Create `devops/docker/ghidra/Dockerfile.headless`:
```dockerfile
# Copyright (c) StellaOps. All rights reserved.
# Licensed under AGPL-3.0-or-later.
# Licensed under BUSL-1.1.
FROM eclipse-temurin:17-jdk-jammy
@@ -253,7 +253,7 @@ ARG GHIDRA_SHA256=<insert-sha256-here>
LABEL org.opencontainers.image.title="StellaOps Ghidra Headless"
LABEL org.opencontainers.image.description="Ghidra headless analysis server with ghidriff for BinaryIndex"
LABEL org.opencontainers.image.version="${GHIDRA_VERSION}"
LABEL org.opencontainers.image.licenses="AGPL-3.0-or-later"
LABEL org.opencontainers.image.licenses="BUSL-1.1"
# Install dependencies
RUN apt-get update && apt-get install -y \
@@ -323,7 +323,7 @@ Create `devops/compose/docker-compose.ghidra.yml`:
```yaml
# Copyright (c) StellaOps. All rights reserved.
# Licensed under AGPL-3.0-or-later.
# Licensed under BUSL-1.1.
version: "3.9"
@@ -418,7 +418,7 @@ Create `devops/compose/init-bsim-db.sql`:
```sql
-- Copyright (c) StellaOps. All rights reserved.
-- Licensed under AGPL-3.0-or-later.
-- Licensed under BUSL-1.1.
-- BSim database initialization for Ghidra
-- This schema is managed by Ghidra's BSim tooling


@@ -9,6 +9,66 @@ stella attest verify --envelope bundle.dsse.json --policy policy.json \
```
- Offline verification uses bundled roots and checkpoints; transparency optional.
### Timestamped attestations
Create a DSSE envelope and request RFC-3161 timestamping:
```bash
stella attest sign \
--predicate ./predicate.json \
--predicate-type https://slsa.dev/provenance/v1 \
--subject oci://registry/app@sha256:abc123 \
--digest sha256:abc123 \
--key ./keys/signing.pem \
--timestamp \
--tsa https://tsa.example \
--output attestation.dsse.json
```
Request and inspect standalone timestamp tokens:
```bash
stella ts rfc3161 --hash sha256:abc123 --tsa https://tsa.example --out artifact.tst
stella ts info --tst artifact.tst
stella ts verify --tst artifact.tst --artifact ./artifact.bin --trust-root ./roots.pem
```
Store timestamp evidence alongside an attestation:
```bash
stella evidence store --artifact attestation.dsse.json \
--tst artifact.tst --rekor-bundle rekor.json \
--tsa-chain tsa-chain.pem --ocsp ocsp.der --crl crl.der
```
Evidence is stored under `~/.stellaops/evidence-store/sha256_<digest>/` by default
(the colon in the digest is replaced with an underscore).
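Continuing the example above, the stored layout for subject digest `sha256:abc123` would resemble the following (exact stored file names are illustrative):

```text
~/.stellaops/evidence-store/sha256_abc123/
├── attestation.dsse.json   # the stored artifact
├── artifact.tst            # RFC-3161 timestamp token
├── rekor.json              # Rekor bundle
├── tsa-chain.pem           # TSA certificate chain
├── ocsp.der                # OCSP response
└── crl.der                 # CRL snapshot
```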
### Timestamp requirements during verify
Require RFC-3161 evidence and enforce skew:
```bash
stella attest verify --envelope attestation.dsse.json \
--require-timestamp --max-skew 5m --format json
```
The JSON output includes a `timestamp` block:
```json
{
"timestamp": {
"required": true,
"maxSkew": "00:05:00",
"present": true,
"generationTime": "2026-01-19T12:00:00Z",
"tsaUrl": "https://tsa.example",
"tokenDigest": "sha256:...",
"withinSkew": true
}
}
```
`--max-skew` accepts relative durations (`5m`, `30s`, `2h`) or `hh:mm:ss`.
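For example, these invocations are equivalent once the flag is parsed (the `00:05:00` form matches the `maxSkew` value in the JSON output above):

```text
--max-skew 5m     =  --max-skew 00:05:00
--max-skew 30s    =  --max-skew 00:00:30
--max-skew 2h     =  --max-skew 02:00:00
```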
## List attestations
```bash
stella attest list --tenant default --issuer dev-kms --format table


@@ -680,7 +680,7 @@ wget https://releases.stella-ops.org/cli/china/latest/stella-china-linux-x64.tar
### License Compliance
All distributions are licensed under **AGPL-3.0-or-later**, with regional plugins subject to additional vendor licenses (e.g., CryptoPro CSP requires commercial license).
All distributions are licensed under **BUSL-1.1**, with regional plugins subject to additional vendor licenses (e.g., CryptoPro CSP requires commercial license).
---


@@ -299,7 +299,7 @@ stella cache lookup-purl pkg:npm/express@4.0.0
```bash
# Connect to Valkey
redis-cli -h localhost -p 6379
valkey-cli -h localhost -p 6379
# Check memory usage
INFO memory


@@ -6,6 +6,22 @@ Per SPRINT_8200_0013_0003.
The SBOM Learning API enables Concelier to learn which advisories are relevant to your organization by registering SBOMs from scanned images. When an SBOM is registered, Concelier matches its components against the canonical advisory database and updates interest scores accordingly.
## SBOM Extraction
Concelier normalizes incoming CycloneDX 1.7 and SPDX 3.0.1 documents into the internal `ParsedSbom` model for matching and downstream analysis.
Current extraction coverage (SPRINT_20260119_015):
- Document metadata: format, specVersion, serialNumber, created, name, namespace when present
- Components: bomRef, type, name, version, purl, cpe, hashes (including SPDX verifiedUsing), license IDs/expressions, license text (base64 decode), external references, properties, scope/modified, supplier/manufacturer, evidence, pedigree, cryptoProperties, modelCard (CycloneDX)
- Dependencies: component dependency edges (CycloneDX dependencies, SPDX relationships)
- Services: endpoints, authentication, crossesTrustBoundary, data flows, licenses, external references (CycloneDX)
- Formulation: components, workflows, tasks, properties (CycloneDX)
- Build metadata: buildId, buildType, timestamps, config source, environment, parameters (SPDX)
- Document properties
Notes:
- Full SPDX Licensing profile objects, vulnerabilities, and other SPDX profiles are pending in SPRINT_20260119_015.
- Matching currently uses PURL and CPE; additional fields are stored for downstream consumers.
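A minimal CycloneDX 1.7 fragment touching the component and dependency coverage above might look like this (all values illustrative):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.7",
  "serialNumber": "urn:uuid:3e671687-395b-41f5-a30f-a58921a69b79",
  "components": [
    {
      "bom-ref": "pkg:npm/express@4.0.0",
      "type": "library",
      "name": "express",
      "version": "4.0.0",
      "purl": "pkg:npm/express@4.0.0",
      "hashes": [{ "alg": "SHA-256", "content": "abc123..." }],
      "licenses": [{ "license": { "id": "MIT" } }]
    }
  ],
  "dependencies": [
    { "ref": "pkg:npm/express@4.0.0", "dependsOn": [] }
  ]
}
```

The extractor uses `purl`/`cpe` for matching and retains the remaining fields (hashes, licenses, dependency edges) for downstream consumers.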
## Flow
```


@@ -270,7 +270,7 @@ gateway:
enabled: true
requestsPerMinute: 1000
burstSize: 100
redisConnectionString: "${REDIS_URL}"
redisConnectionString: "${REDIS_URL}" # Valkey (Redis-compatible)
openapi:
enabled: true


@@ -203,7 +203,7 @@ public static JsonArray GenerateSecurityRequirement(EndpointDescriptor endpoint)
| `ServerUrl` | `string` | `"/"` | Base server URL |
| `CacheTtlSeconds` | `int` | `60` | Cache TTL |
| `Enabled` | `bool` | `true` | Enable/disable |
| `LicenseName` | `string` | `"AGPL-3.0-or-later"` | License name |
| `LicenseName` | `string` | `"BUSL-1.1"` | License name |
| `ContactName` | `string?` | `null` | Contact name |
| `ContactEmail` | `string?` | `null` | Contact email |
| `TokenUrl` | `string` | `"/auth/token"` | OAuth2 token URL |
@@ -218,7 +218,7 @@ OpenApi:
ServerUrl: "https://api.example.com"
CacheTtlSeconds: 60
Enabled: true
LicenseName: "AGPL-3.0-or-later"
LicenseName: "BUSL-1.1"
ContactName: "API Team"
ContactEmail: "api@example.com"
TokenUrl: "/auth/token"


@@ -61,7 +61,7 @@ metadata:
maintainers:
- name: Jane Doe
email: jane@example.com
license: AGPL-3.0-or-later
license: BUSL-1.1
annotations:
imposedRuleReminder: true


@@ -450,7 +450,7 @@ PolicyEngine:
HighConfidenceThreshold: 0.9
ReachabilityCache:
Type: "redis" # or "memory"
Type: "redis" # Valkey (Redis-compatible) or "memory"
RedisConnectionString: "${REDIS_URL}"
KeyPrefix: "rf:"
```


@@ -381,7 +381,7 @@ groups:
1. **Check Valkey health**
```bash
redis-cli -h valkey info stats
valkey-cli -h valkey info stats
```
2. **Check Postgres connections**


@@ -20,7 +20,7 @@ This document provides the configuration reference for the Release Orchestrator,
```bash
# Database
STELLA_DATABASE_URL=postgresql://user:pass@host:5432/stella
STELLA_REDIS_URL=redis://host:6379
STELLA_REDIS_URL=redis://host:6379 # Valkey (Redis-compatible)
STELLA_SECRET_KEY=base64-encoded-32-bytes
STELLA_LOG_LEVEL=info
STELLA_LOG_FORMAT=json
@@ -75,7 +75,7 @@ database:
ssl_mode: require
redis:
url: redis://host:6379
url: redis://host:6379 # Valkey (Redis-compatible)
prefix: stella
auth:


@@ -2661,7 +2661,7 @@ script_engine:
compilation_cache:
enabled: true
memory_cache_size_mb: 256
distributed_cache: redis
distributed_cache: redis # Valkey (Redis-compatible)
ttl_days: 7
# Warm container pool


@@ -785,7 +785,7 @@ performance:
default_ttl: "00:05:00"
l2:
enabled: true
provider: redis
provider: redis # Valkey (Redis-compatible)
connection_string: "redis://localhost:6379"
default_ttl: "01:00:00"


@@ -291,7 +291,7 @@ plugin:
id: "com.example.jenkins-connector"
version: "1.0.0"
vendor: "Example Corp"
license: "Apache-2.0"
license: "BUSL-1.1"
description: "Jenkins CI integration for Stella Ops"
capabilities:


@@ -429,6 +429,9 @@ Response:
}
```
In Valkey-backed deployments, the `redis` check reflects the Redis-compatible
Valkey cache.
### Readiness Probe
```http


@@ -196,7 +196,7 @@ RiskEngine:
Cache:
Enabled: true
Provider: "valkey"
ConnectionString: "redis://valkey:6379"
ConnectionString: "redis://valkey:6379" # Valkey (Redis-compatible)
DefaultTtl: "00:15:00"
Providers:


@@ -48,7 +48,7 @@ OpenApi:
ServerUrl: "https://api.example.com"
CacheTtlSeconds: 60
Enabled: true
LicenseName: "AGPL-3.0-or-later"
LicenseName: "BUSL-1.1"
ContactName: "API Team"
ContactEmail: "api@example.com"
TokenUrl: "/auth/token"
@@ -64,7 +64,7 @@ OpenApi:
| `ServerUrl` | `string` | `"/"` | Base server URL |
| `CacheTtlSeconds` | `int` | `60` | Cache time-to-live in seconds |
| `Enabled` | `bool` | `true` | Enable/disable OpenAPI aggregation |
| `LicenseName` | `string` | `"AGPL-3.0-or-later"` | License name in OpenAPI info |
| `LicenseName` | `string` | `"BUSL-1.1"` | License name in OpenAPI info |
| `ContactName` | `string?` | `null` | Contact name (optional) |
| `ContactEmail` | `string?` | `null` | Contact email (optional) |
| `TokenUrl` | `string` | `"/auth/token"` | OAuth2 token endpoint URL |
@@ -229,7 +229,7 @@ The aggregated OpenAPI document follows this structure:
"version": "1.0.0",
"description": "Unified API aggregating all connected microservices.",
"license": {
"name": "AGPL-3.0-or-later"
"name": "BUSL-1.1"
},
"contact": {
"name": "API Team",


@@ -294,4 +294,4 @@ Ensure the endpoint:
## License
AGPL-3.0-or-later
BUSL-1.1


@@ -250,7 +250,7 @@ Clear the binary cache if results seem stale:
stella cache clear --prefix binary
# Via Redis CLI
redis-cli KEYS "stellaops:binary:*" | xargs redis-cli DEL
valkey-cli KEYS "stellaops:binary:*" | xargs valkey-cli DEL
```
### Build-ID Missing