Here’s a simple, low‑friction way to keep priorities fresh without constant manual grooming: **let confidence decay over time**.
# Exponential confidence decay (what & why)

* **Idea:** Every item (task, lead, bug, doc, hypothesis) has a confidence score that **automatically shrinks with time** if you don’t touch it.
* **Formula:** `confidence(t) = e^(−t/τ)` where `t` is days since the last signal (edit, comment, commit, new data), and **τ (“tau”)** is the decay constant.
* **Rule of thumb:** With **τ = 30 days**, at **t = 30** the confidence is **e^(−1) ≈ 0.37**, roughly a **63% drop**. This surfaces long‑ignored items *gradually*, not with harsh “stale/expired” flips.

# How to use it in practice

* **Signals that reset t → 0:** comment on the ticket, new benchmark, fresh log sample, doc update, CI run, new market news.
* **Sort queues by:** `priority × confidence(t)` (or severity × confidence). Quiet items drift down; truly active ones stay up.
* **Escalation bands:**

  * `>0.6` = green (recently touched)
  * `0.3–0.6` = amber (review soon)
  * `<0.3` = red (poke or close)
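For intuition, here’s a minimal, self-contained C# sketch (illustrative only, not part of any service code) that prints the confidence and band at a few ages for τ = 30. Solving `e^(−t/τ) = c` for `t` shows where the bands begin: amber near 15 days (−30·ln 0.6 ≈ 15.3) and red near 36 days (−30·ln 0.3 ≈ 36.1).

```csharp
using System;

class DecayDemo
{
    static void Main()
    {
        const double tau = 30.0; // decay constant in days

        foreach (var days in new[] { 0, 7, 15, 30, 36, 90 })
        {
            double conf = Math.Exp(-days / tau);
            string band = conf >= 0.6 ? "green" : conf >= 0.3 ? "amber" : "red";
            Console.WriteLine($"t = {days,3} days  confidence = {conf:F2}  band = {band}");
        }
    }
}
```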
# Quick presets

* **Fast‑moving queues (incidents, hot leads):** τ = **7–14** days
* **Engineering tasks / product docs:** τ = **30** days
* **Research bets / roadmaps:** τ = **60–90** days
# For your world (Stella Ops + ops/dev work)

* **Vuln tickets:** `risk_score = CVSS × reachability × e^(−t/30)`
* **Roadmap epics:** `value_score = impact × e^(−t/60)` to re‑rank quarterly.
* **Docs:** show a badge “freshness: 42%” derived from last‑edit age to nudge updates.
# Minimal implementation sketch

* Store per‑item: `last_signal_at`, `base_priority`.
* Compute on read:

```python
from math import exp

days = (now - last_signal_at).days   # age since last meaningful activity
conf = exp(-days / tau)              # confidence in (0, 1]
score = base_priority * conf         # effective priority for sorting
```

* Recompute in your API layer or materialize nightly; no cron spam needed.
If you want, I can draft a tiny C# helper (and SQL snippet) you can drop into your issue service to add `confidence(t)` and color bands to your lists.

Perfect, let’s turn the idea into something your devs can actually build.

Below is an implementation plan you can drop into a ticketing/PRD, with clear phases, data model changes, APIs, and some sample code (C# + SQL). I’ll also sprinkle in Stella Ops–specific notes.
---

## 0. Scope & Objectives

**Goal:** Introduce `confidence(t)` as an automatic freshness factor that decays with time and is used to rank and highlight work.

We’ll apply it to:

* Vulnerabilities (Stella Ops)
* General issues / tasks / epics
* (Optional) Docs, leads, hypotheses later

**Core behavior:**

* Each item has:

  * A base priority / risk (from severity, business impact, etc.)
  * A timestamp of the last signal (meaningful activity)
  * A decay rate τ (tau) in days
* Effective priority = `base_priority × confidence(t)`
* `confidence(t) = exp(−t / τ)` where `t` = days since `last_signal_at`

---
## 1. Data Model Changes

### 1.1. Add fields to core “work item” tables

For each relevant table (`Issues`, `Vulnerabilities`, `Epics`, …):

**New columns:**

* `base_priority` (FLOAT or INT)

  * Example: 1–100, or derived from severity.
* `last_signal_at` (DATETIME, NOT NULL, default = `created_at`)
* `tau_days` (FLOAT, nullable, falls back to the type default)
* (Optional) `confidence_cached` (FLOAT, for a materialized score)
* (Optional) `is_confidence_frozen` (BOOL, default FALSE)

  * For pinned items that should not decay.

**Example Postgres migration (Issues):**
```sql
ALTER TABLE issues
  ADD COLUMN base_priority DOUBLE PRECISION,
  ADD COLUMN last_signal_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  ADD COLUMN tau_days DOUBLE PRECISION,
  ADD COLUMN confidence_cached DOUBLE PRECISION,
  ADD COLUMN is_confidence_frozen BOOLEAN NOT NULL DEFAULT FALSE;
```
For Stella Ops:

```sql
ALTER TABLE vulnerabilities
  ADD COLUMN base_risk DOUBLE PRECISION,
  ADD COLUMN last_signal_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  ADD COLUMN tau_days DOUBLE PRECISION,
  ADD COLUMN confidence_cached DOUBLE PRECISION,
  ADD COLUMN is_confidence_frozen BOOLEAN NOT NULL DEFAULT FALSE;
```
### 1.2. Add a config table for τ per entity type

```sql
CREATE TABLE confidence_decay_config (
  id SERIAL PRIMARY KEY,
  entity_type TEXT NOT NULL,  -- 'incident', 'issue', 'vulnerability', 'epic', 'doc'
  tau_days_default DOUBLE PRECISION NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

INSERT INTO confidence_decay_config (entity_type, tau_days_default) VALUES
  ('incident', 7),
  ('vulnerability', 30),
  ('issue', 30),
  ('epic', 60),
  ('doc', 90);
```
---

## 2. Define “signal” events & instrumentation

We need a standardized way to say: “this item got activity → reset `last_signal_at`”.

### 2.1. Signals that should reset `last_signal_at`

For **issues / epics:**

* New comment
* Status change (e.g., Open → In Progress)
* Field change that matters (severity, owner, milestone)
* Attachment added
* Link to PR added or updated
* New CI failure linked

For **vulnerabilities (Stella Ops):**

* New scanner result attached or status updated (e.g., “Verified”, “False Positive”)
* New evidence (PoC, exploit notes)
* SLA override change
* Assignment / ownership change
* Integration events (e.g., a PR merge that references the vuln)

For **docs (if you do it):**

* Any edit
* Comment/annotation
### 2.2. Implement a shared helper to record a signal

**Service-level helper (pseudocode / C#-ish):**

```csharp
public interface IConfidenceSignalService
{
    Task RecordSignalAsync(WorkItemType type, Guid itemId, DateTime? signalTimeUtc = null);
}

public class ConfidenceSignalService : IConfidenceSignalService
{
    private readonly IWorkItemRepository _repo;
    private readonly IConfidenceConfigService _config;

    public ConfidenceSignalService(IWorkItemRepository repo, IConfidenceConfigService config)
    {
        _repo = repo;
        _config = config;
    }

    public async Task RecordSignalAsync(WorkItemType type, Guid itemId, DateTime? signalTimeUtc = null)
    {
        var now = signalTimeUtc ?? DateTime.UtcNow;
        var item = await _repo.GetByIdAsync(type, itemId);
        if (item == null) return;

        item.LastSignalAt = now;

        // Lazily backfill τ from the per-type default on first signal.
        item.TauDays ??= await _config.GetDefaultTauAsync(type);

        await _repo.UpdateAsync(item);
    }
}
```
### 2.3. Wire signals into existing flows

Create small tasks for devs like:

* **ISS-01:** Call `RecordSignalAsync` on:

  * New issue comment handler
  * Issue status update handler
  * Issue field update handler (severity/priority/owner)
* **VULN-01:** Call `RecordSignalAsync` when:

  * A new scanner result is ingested for a vuln
  * Vulnerability status, SLA, or owner changes
  * New exploit evidence is attached

---
## 3. Confidence & scoring calculation

### 3.1. Shared confidence function

Definition:

```csharp
public static class ConfidenceMath
{
    // t = days since last signal
    public static double ConfidenceScore(DateTime lastSignalAtUtc, double tauDays, DateTime? nowUtc = null)
    {
        var now = nowUtc ?? DateTime.UtcNow;
        var tDays = (now - lastSignalAtUtc).TotalDays;

        if (tDays <= 0) return 1.0;
        if (tauDays <= 0) return 1.0; // guard / fallback

        var score = Math.Exp(-tDays / tauDays);

        // Optional: never drop below a tiny floor, so items never "disappear"
        const double floor = 0.01;
        return Math.Max(score, floor);
    }
}
```
### 3.2. Effective priority formulas

**Generic issues / tasks:**

```csharp
double effectiveScore = issue.BasePriority *
    ConfidenceMath.ConfidenceScore(issue.LastSignalAt, issue.TauDays ?? defaultTau);
```

**Vulnerabilities (Stella Ops):**

Let’s define:

* `severity_weight`: map CVSS or a severity string to a number (e.g. Critical=100, High=80, Medium=50, Low=20).
* `reachability`: 0–1 (e.g. from your reachability analysis).
* `exploitability`: 0–1 (optional, based on known exploits).
* `confidence`: as above.

```csharp
double baseRisk = severityWeight * reachability * exploitability; // or simpler: severityWeight * reachability
double conf = ConfidenceMath.ConfidenceScore(vuln.LastSignalAt, vuln.TauDays ?? defaultTau);
double effectiveRisk = baseRisk * conf;
```

Store `baseRisk` → `vulnerabilities.base_risk`, and compute `effectiveRisk` on the fly or via a job.
### 3.3. SQL implementation (optional, for server-side sorting)

**Postgres example:**

```sql
-- t_days = age in days
-- tau    = tau_days
-- score  = exp(-t_days / tau)

SELECT
  i.*,
  i.base_priority *
  GREATEST(
    EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
    0.01
  ) AS effective_priority
FROM issues i
ORDER BY effective_priority DESC;
```
You can wrap that in a view:

```sql
CREATE VIEW issues_with_confidence AS
SELECT
  i.*,
  GREATEST(
    EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
    0.01
  ) AS confidence,
  i.base_priority *
  GREATEST(
    EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
    0.01
  ) AS effective_priority
FROM issues i;
```
---

## 4. Caching & performance

You have two options:

### 4.1. Compute on read (simplest to start)

* Use the helper function in your service layer or a DB view.
* Pros:

  * No jobs; always fresh.
* Cons:

  * Slight CPU cost on heavy lists.

**Plan:** Start with this. If you see perf issues, move to 4.2.

### 4.2. Periodic materialization job (optional, later)

Add a scheduled job (e.g. hourly) that:

1. Selects all active items.
2. Computes `confidence_score` and `effective_priority`.
3. Writes them to `confidence_cached` and `effective_priority_cached` (if you add such a column).

The service then sorts by cached values.
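If you do go the job route, a hedged sketch as a .NET `BackgroundService` might look like this; `IWorkItemRepository.GetActiveAsync` and the `ConfidenceCached` property are assumed names for this illustration, not an existing API:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public sealed class ConfidenceMaterializer : BackgroundService
{
    private readonly IWorkItemRepository _repo;

    public ConfidenceMaterializer(IWorkItemRepository repo) => _repo = repo;

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            // Recompute and cache confidence for everything still open.
            foreach (var item in await _repo.GetActiveAsync())
            {
                item.ConfidenceCached =
                    ConfidenceMath.ConfidenceScore(item.LastSignalAt, item.TauDays ?? 30);
                await _repo.UpdateAsync(item);
            }

            await Task.Delay(TimeSpan.FromHours(1), ct);
        }
    }
}
```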
---
## 5. Backfill & migration

### 5.1. Initial backfill script

For existing records:

* If `last_signal_at` is NULL → set it to `created_at`.
* Derive `base_priority` / `base_risk` from existing severity fields.
* Set `tau_days` from config.

**Example:**

```sql
UPDATE issues
SET last_signal_at = created_at
WHERE last_signal_at IS NULL;

UPDATE issues
SET base_priority = CASE severity
    WHEN 'critical' THEN 100
    WHEN 'high' THEN 80
    WHEN 'medium' THEN 50
    WHEN 'low' THEN 20
    ELSE 10
  END
WHERE base_priority IS NULL;

UPDATE issues i
SET tau_days = c.tau_days_default
FROM confidence_decay_config c
WHERE c.entity_type = 'issue'
  AND i.tau_days IS NULL;
```

Do the same for `vulnerabilities` using severity / CVSS.
### 5.2. Sanity checks

Add a small script/test to verify:

* Newly created items → `confidence ≈ 1.0`.
* 30-day-old items with τ=30 → `confidence ≈ 0.37`.
* Ordering changes when you edit/comment on items.
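A minimal sketch of the first two checks as unit tests, assuming xUnit and the `ConfidenceMath` helper from 3.1:

```csharp
using System;
using Xunit;

public class ConfidenceMathTests
{
    [Fact]
    public void NewItemHasFullConfidence()
    {
        var now = DateTime.UtcNow;
        Assert.Equal(1.0, ConfidenceMath.ConfidenceScore(now, tauDays: 30, nowUtc: now), precision: 6);
    }

    [Fact]
    public void ThirtyDayOldItemWithTau30DecaysToAboutPoint37()
    {
        var now = DateTime.UtcNow;
        var conf = ConfidenceMath.ConfidenceScore(now.AddDays(-30), tauDays: 30, nowUtc: now);
        Assert.InRange(conf, 0.36, 0.38); // e^(−1) ≈ 0.3679
    }
}
```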
---
## 6. API & Query Layer

### 6.1. New sorting options

Update list APIs:

* Accept a parameter: `sort=effective_priority` or `sort=confidence`.
* Default sort for some views:

  * Vulnerabilities backlog: `sort=effective_risk` (risk × confidence).
  * Issues backlog: `sort=effective_priority`.

**Example REST API contract:**

`GET /api/issues?sort=effective_priority&state=open`

**Response fields (additions):**

```json
{
  "id": "ISS-123",
  "title": "Fix login bug",
  "base_priority": 80,
  "last_signal_at": "2025-11-01T10:00:00Z",
  "tau_days": 30,
  "confidence": 0.63,
  "effective_priority": 50.4,
  "confidence_band": "green"
}
```
### 6.2. Confidence banding (for UI)

Define bands server-side (easy to change):

* Green: `confidence >= 0.6`
* Amber: `0.3 <= confidence < 0.6`
* Red: `confidence < 0.3`

You can compute it on the server:

```csharp
string ConfidenceBand(double confidence) =>
    confidence >= 0.6 ? "green"
    : confidence >= 0.3 ? "amber"
    : "red";
```
---

## 7. UI / UX changes

### 7.1. List views (issues / vulns / epics)

For each item row:

* Show a small freshness pill:

  * Text: `Active`, `Review soon`, `Stale`
  * Derived from the confidence band.
* Tooltip:

  * “Confidence 78%. Last activity 3 days ago. τ = 30 days.”
* Sort default: by `effective_priority` / `effective_risk`.
* Filters:

  * `Freshness: [All | Active | Review soon | Stale]`
  * Optionally: a “Show stale only” toggle.

**Example labels:**

* Green: “Active (confidence 82%)”
* Amber: “Review soon (confidence 45%)”
* Red: “Stale (confidence 18%)”
### 7.2. Detail views

On an issue / vuln page:

* Add a “Confidence” section:

  * “Confidence: **52%**”
  * “Last signal: **~20 days ago**”
  * “Decay τ: **30 days**”
  * “Effective priority: **Base 80 × 0.52 = 42**”
* (Optional) A small mini-chart (text-only or a simple bar) showing the approximate decay; not necessary for the first iteration.
### 7.3. Admin / settings UI

Add an internal settings page:

* A table of entity types with editable τ:

| Entity type   | τ (days) | Notes                        |
| ------------- | -------- | ---------------------------- |
| Incident      | 7        | Fast-moving                  |
| Vulnerability | 30       | Standard risk review cadence |
| Issue         | 30       | Sprint-level decay           |
| Epic          | 60       | Quarterly                    |
| Doc           | 90       | Slow decay                   |

* Optionally: a toggle to pin an item (`is_confidence_frozen`) from the UI.

---
## 8. Stella Ops–specific behavior

For vulnerabilities:

### 8.1. Base risk calculation

Ingested fields you likely already have:

* `cvss_score` or `severity`
* `reachable` (true/false or numeric)
* (Optional) `exploit_available` (bool) or an exploitability score
* `asset_criticality` (1–5)

Define `base_risk` as:

```text
severity_weight = f(cvss_score or severity)
reachability    = reachable ? 1.0 : 0.5          -- example
exploitability  = exploit_available ? 1.0 : 0.7
asset_factor    = 0.5 + 0.1 * asset_criticality  -- 1 → 0.6, 5 → 1.0

base_risk = severity_weight * reachability * exploitability * asset_factor
```

Store `base_risk` on the vuln row.

Then:

```text
effective_risk = base_risk * confidence(t)
```

Use `effective_risk` for backlog ordering and SLA dashboards.
### 8.2. Signals for vulns

Make sure these all call `RecordSignalAsync(Vulnerability, vulnId)`:

* New scan result for the same vuln (re-detected).
* Status change to “In Progress”, “Ready for Deploy”, “Verified Fixed”, etc.
* Assigning an owner.
* Attaching PoC / exploit details.

### 8.3. Vuln UI copy ideas

* Pill text:

  * “Risk: 850 (confidence 68%)”
  * “Last analyst activity 11 days ago”
* In the backlog view: show **Effective Risk** as the main sort, with smaller subtext “Base 1200 × Confidence 71%”.

---
## 9. Rollout plan

### Phase 1 – Infrastructure (backend-only)

* [ ] DB migrations & config table
* [ ] Implement `ConfidenceMath` and helper functions
* [ ] Implement `IConfidenceSignalService`
* [ ] Wire signals into key flows (comments, state changes, scanner ingestion)
* [ ] Add `confidence` and `effective_priority` / `effective_risk` to API responses
* [ ] Backfill script + dry run in staging

### Phase 2 – Internal UI & feature flag

* [ ] Add optional sorting by effective score to internal/staff views
* [ ] Add the confidence pill (hidden behind feature flag `confidence_decay_v1`)
* [ ] Dogfood internally:

  * Do items bubble up/down as expected?
  * Are any items “disappearing” because decay is too aggressive?

### Phase 3 – Parameter tuning

* [ ] Adjust τ per type based on feedback:

  * If things decay too fast → increase τ
  * If queues rarely change → decrease τ
* [ ] Decide on the confidence floor (0.01? 0.05?) so nothing goes to literal 0.

### Phase 4 – General release

* [ ] Make the effective score the default sort for key views:

  * Vulnerabilities backlog
  * Issues backlog
* [ ] Document the behavior for users (help center / inline tooltip)
* [ ] Add admin UI to tweak τ per entity type.

---
## 10. Edge cases & safeguards

* **New items**

  * `last_signal_at = created_at`, confidence = 1.0.
* **Pinned items**

  * If `is_confidence_frozen = true` → treat confidence as 1.0.
* **Items without τ**

  * Always fall back to the entity-type default.
* **Timezones**

  * Always store & compute in UTC.
* **Very old items**

  * Floor the confidence so they’re still visible when explicitly searched.
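Pulled together, those safeguards fit in one small helper; a hedged sketch (the `EffectiveConfidence` name and the `WorkItem` property names mirror the columns above but are illustrative):

```csharp
using System;

public static class ConfidenceSafeguards
{
    public static double EffectiveConfidence(WorkItem item, double defaultTau, DateTime? nowUtc = null)
    {
        // Pinned items never decay.
        if (item.IsConfidenceFrozen) return 1.0;

        // Items without τ fall back to the entity-type default.
        var tau = item.TauDays ?? defaultTau;

        // ConfidenceMath computes in UTC, treats t <= 0 as full confidence,
        // and applies the 0.01 floor so very old items stay visible.
        return ConfidenceMath.ConfidenceScore(item.LastSignalAt, tau, nowUtc);
    }
}
```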
---
If you want, I can turn this into:

* A short **technical design doc** (with sections: Problem, Proposal, Alternatives, Rollout).
* Or a **set of Jira tickets** grouped by backend / frontend / infra that your team can pick up directly.
Here’s a tight, practical blueprint for an **offline/air‑gap verification kit**, so regulated customers can prove what they’re installing without any network access.

---
# Offline kit: what to include & how to use it

**Why this matters (in plain words):** When a server has no internet, customers still need to trust that the bits they install are exactly what you shipped, and that the provenance is provable. This kit lets them verify integrity, origin, and minimal runtime safety entirely offline.

## 1) Signed SBOMs (DSSE‑wrapped)

* **Files:** `product.sbom.json` (CycloneDX or SPDX) + `product.sbom.json.dsse`
* **What it gives:** Full component list with a detached DSSE envelope proving it came from you.
* **Tip:** Use a *deterministic* SBOM build to avoid hash drift across rebuilds.

## 2) Rekor‑style receipt (or equivalent transparency record)

* **Files:** `rekor-receipt.json` (or a vendor‑neutral JSON receipt with canonical log ID, index, entry hash)
* **Offline use:** They can’t query Rekor, but they can:

  * Check the receipt’s **inclusion proof** against a bundled **checkpoint** (see #5).
  * Archive the receipt for later online audit parity.

## 3) Installer checksum + signature

* **Files:**

  * `installer.tar.zst` (or your MSI/DEB/RPM)
  * `installer.sha256`
  * `installer.sha256.sig`
* **What it gives:** Simple, fast integrity check + cryptographic authenticity (via your offline public key in #5).

## 4) Minimal runtime config examples

* **Files:** `configs/minimal/compose.yml`, `configs/minimal/appsettings.json`, `configs/minimal/readme.md`
* **What it gives:** A known‑good baseline the customer can start from without guessing env vars or secrets (use placeholders).

## 5) Step‑by‑step verification commands & trust roots

* **Files:**

  * `VERIFY.md` (copy‑paste steps below)
  * `keys/vendor-pubkeys.pem` (X.509 or PEM keyring)
  * `checkpoints/transparency-checkpoint.txt` (Merkle root or log checkpoint snapshot)
  * `tools/` (static, offline verifiers if allowed: e.g., `cosign`, `gpg`, `sha256sum`)
* **What it gives:** A single, predictable path to verification, without internet.
---

## Drop‑in VERIFY.md (ready to ship)

```bash
# 0) Prepare
export KIT_DIR=/media/usb/StellaOpsOfflineKit
cd "$KIT_DIR"

# 1) Verify installer integrity
sha256sum -c installer.sha256   # expects: installer.tar.zst: OK

# 2) Verify checksum signature (GPG example)
gpg --keyid-format long --import keys/vendor-pubkeys.pem
gpg --verify installer.sha256.sig installer.sha256

# 3) Verify SBOM DSSE envelope (cosign example, offline)
# If you bundle cosign statically: tools/cosign verify-blob --key keys/vendor-pubkeys.pem ...
tools/cosign verify-blob \
  --key keys/vendor-pubkeys.pem \
  --signature product.sbom.json.dsse \
  product.sbom.json

# 4) Verify transparency receipt against bundled checkpoint (offline proof)
# (a) Inspect receipt & checkpoint
jq . rekor-receipt.json | head
cat checkpoints/transparency-checkpoint.txt

# (b) Validate Merkle proof (bundle a tiny verifier)
tools/tlog-verify \
  --receipt rekor-receipt.json \
  --checkpoint checkpoints/transparency-checkpoint.txt \
  --roots keys/vendor-pubkeys.pem

# 5) Inspect SBOM quickly (CycloneDX)
jq '.components[] | {name, version, purl}' product.sbom.json | head -n 50

# 6) Start from minimal config (no secrets inside the kit)
cp -r configs/minimal ./local-min
# Edit placeholders in ./local-min/*.yml / *.json before running
```
---

## Packaging notes (for your build pipeline)

* **Determinism:** Pin timestamps, ordering, and compression flags for SBOMs and archives so hashes are stable.
* **Crypto suite:** Default to modern Ed25519/SHA-256; optionally include **post‑quantum** signatures as a second envelope for long‑horizon customers.
* **Key rotation:** Bundle a **key manifest** (`keys/manifest.json`) mapping key IDs → validity windows → products.
* **Receipts:** If Rekor v2 or your own ledger is used, export an **inclusion proof + checkpoint** so customers can validate the structure offline and later re‑check online for consistency.
* **Tooling:** Prefer statically linked, air‑gap‑safe binaries in `tools/`, with a small README and a SHA-256 list for each tool.

---
## Suggested kit layout

```
StellaOpsOfflineKit/
├─ installer.tar.zst
├─ installer.sha256
├─ installer.sha256.sig
├─ product.sbom.json
├─ product.sbom.json.dsse
├─ rekor-receipt.json
├─ keys/
│  ├─ vendor-pubkeys.pem
│  └─ manifest.json
├─ checkpoints/
│  └─ transparency-checkpoint.txt
├─ configs/
│  └─ minimal/
│     ├─ compose.yml
│     ├─ appsettings.json
│     └─ readme.md
├─ tools/
│  ├─ cosign
│  └─ tlog-verify
└─ VERIFY.md
```
---

## What your customers gain

* **Integrity** (checksum matches), **authenticity** (signature verifies), **provenance** (SBOM + DSSE), and **ledger consistency** (receipt vs checkpoint), all **without internet**.
* A **minimal, known‑good config** to get running safely in air‑gapped environments.

If you want, I can turn this into a Git‑ready template (files + sample keys + tiny verifier) that your CI can populate on each release build.

Below is something you can drop almost verbatim into an internal Confluence / `docs/engineering` page as “Developer directions for the Offline / Air-Gap Verification Kit”.

---
## 1. Objective for the team

We must ship, for every Stella Ops on-prem / air-gapped release, an **Offline Verification Kit** that allows a customer to verify:

1. Integrity of the installer payload
2. Authenticity of the payload and SBOM (came from us, not altered)
3. Provenance of the SBOM via DSSE + transparency receipt
4. A minimal, known-good configuration to start from in an air gap

All of this must work **with zero network access**.

You are responsible for implementing this as part of the standard build/release pipeline, not as a one-off script.

---
## 2. Directory layout and naming conventions

All code and CI scripts must assume the following layout for the kit content:

```text
StellaOpsOfflineKit/
├─ installer.tar.zst
├─ installer.sha256
├─ installer.sha256.sig
├─ product.sbom.json
├─ product.sbom.json.dsse
├─ rekor-receipt.json
├─ keys/
│  ├─ vendor-pubkeys.pem
│  └─ manifest.json
├─ checkpoints/
│  └─ transparency-checkpoint.txt
├─ configs/
│  └─ minimal/
│     ├─ compose.yml
│     ├─ appsettings.json
│     └─ readme.md
├─ tools/
│  ├─ cosign
│  └─ tlog-verify
└─ VERIFY.md
```
### Developer tasks

1. **Create a small “builder” tool/project** in the repo, e.g.
   `src/Tools/OfflineKitBuilder` (language of your choice; .NET is fine).
2. The builder must:

   * Take as inputs:

     * path to the installer (`installer.tar.zst` or equivalent)
     * path to the SBOM (`product.sbom.json`)
     * DSSE signature file, Rekor receipt, checkpoint, keys, configs, tools
   * Produce the exact directory layout above under a versioned output:
     `artifacts/offline-kit/StellaOpsOfflineKit-<version>/...`
3. CI will then archive this directory as a tarball / zip for release.
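A hedged sketch of the builder’s core (the `OfflineKitBuilder.Build` signature and its parameter names are illustrative, not an existing tool):

```csharp
using System.IO;

public static class OfflineKitBuilder
{
    public static void Build(string installerPath, string sbomPath, string dssePath,
                             string receiptPath, string checkpointPath,
                             string keysDir, string configsDir, string toolsDir,
                             string version, string outRoot)
    {
        var kitDir = Path.Combine(outRoot, $"StellaOpsOfflineKit-{version}");
        Directory.CreateDirectory(Path.Combine(kitDir, "checkpoints"));

        // Copy artifacts into the fixed layout documented above.
        File.Copy(installerPath, Path.Combine(kitDir, "installer.tar.zst"), overwrite: true);
        File.Copy(sbomPath, Path.Combine(kitDir, "product.sbom.json"), overwrite: true);
        File.Copy(dssePath, Path.Combine(kitDir, "product.sbom.json.dsse"), overwrite: true);
        File.Copy(receiptPath, Path.Combine(kitDir, "rekor-receipt.json"), overwrite: true);
        File.Copy(checkpointPath,
                  Path.Combine(kitDir, "checkpoints", "transparency-checkpoint.txt"),
                  overwrite: true);
        CopyTree(keysDir, Path.Combine(kitDir, "keys"));
        CopyTree(configsDir, Path.Combine(kitDir, "configs"));
        CopyTree(toolsDir, Path.Combine(kitDir, "tools"));
        // VERIFY.md generation is covered in section 9.
    }

    private static void CopyTree(string src, string dst)
    {
        Directory.CreateDirectory(dst);
        foreach (var file in Directory.GetFiles(src, "*", SearchOption.AllDirectories))
        {
            var target = Path.Combine(dst, Path.GetRelativePath(src, file));
            Directory.CreateDirectory(Path.GetDirectoryName(target)!);
            File.Copy(file, target, overwrite: true);
        }
    }
}
```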
---
## 3. SBOM + DSSE implementation directions

### 3.1 SBOM generation (Scanner/Sbomer responsibility)

1. **Deterministic SBOM build**

   * Make SBOM generation reproducible:

     * Sort component lists and dependencies deterministically.
     * Avoid embedding current timestamps or build paths.
     * If timestamps are unavoidable, normalize them to a fixed value (e.g. `1970-01-01T00:00:00Z`) in the SBOM generator.
   * Acceptance test:

     * Generate the SBOM twice from the same source/container.
       The resulting `product.sbom.json` must be byte-identical (`sha256` match).

2. **Standard format**

   * Use CycloneDX or SPDX (CycloneDX recommended).
   * Provide:

     * Components (name, version, purl).
     * Dependency graph.
     * License information, if available.

3. **Output path**

   * Standardize the output path in the build pipeline, e.g.:

     * `artifacts/sbom/product.sbom.json`
   * The OfflineKitBuilder will pick it up from this path.
### 3.2 DSSE wrapping of the SBOM

1. Implement a small internal library or service method:

   * Input: `product.sbom.json`
   * Output: `product.sbom.json.dsse` (DSSE envelope containing the SBOM)
   * Use a **signing key stored in CI / key management**, not in the repo.

2. The DSSE envelope structure must include (see the sketch after this list):

   * Payload (base64) of the SBOM.
   * PayloadType (e.g. `application/vnd.cyclonedx+json`).
   * Signatures array (at least one signature with keyid, sig).

3. Signing key:

   * Use Ed25519 or an equivalent modern key.
   * The private key is available only to the signing stage in CI.
   * The public key is included in `keys/vendor-pubkeys.pem`.

4. Acceptance tests:

   * Unit test: verify that the DSSE envelope can be parsed and verified with the public key.
   * Integration test: run `tools/cosign verify-blob` (in a test job) against a sample SBOM and DSSE pair in CI.
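As referenced in item 2, an illustrative C# shape for the envelope, plus the DSSE pre-authentication encoding (PAE) that the signature actually covers; this is a sketch, not a type from an existing library:

```csharp
using System.Collections.Generic;
using System.Text;

public sealed record DsseSignature(string KeyId, string Sig);

public sealed record DsseEnvelope(
    string Payload,      // base64 of the raw SBOM bytes
    string PayloadType,  // e.g. "application/vnd.cyclonedx+json"
    IReadOnlyList<DsseSignature> Signatures)
{
    // Per the DSSE spec, signatures cover PAE(payloadType, payload),
    // not the raw payload bytes.
    public static byte[] PreAuthEncoding(string payloadType, byte[] payload)
    {
        var header = $"DSSEv1 {Encoding.UTF8.GetByteCount(payloadType)} {payloadType} {payload.Length} ";
        var head = Encoding.UTF8.GetBytes(header);
        var buf = new byte[head.Length + payload.Length];
        head.CopyTo(buf, 0);
        payload.CopyTo(buf, head.Length);
        return buf;
    }
}
```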
---
## 4. Rekor-style receipt & checkpoint directions

### 4.1 Generating the receipt

1. The pipeline that signs the SBOM (or installer) must, if an online transparency log is used (e.g. Rekor v2):

   * Submit the DSSE / signature to the log.
   * Receive a receipt JSON that includes:

     * Log ID
     * Entry index (logIndex)
     * Entry hash (canonicalized)
     * Inclusion proof (hashes, tree size, root hash)
   * Save that JSON as `rekor-receipt.json` in `artifacts/offline-kit-temp/`.

2. If you use your own internal log instead of public Rekor:

   * Ensure the receipt format is stable and documented.
   * Include:

     * Unique log identifier.
     * Timestamp when the entry was logged.
     * Inclusion-proof equivalent (Merkle branch).
     * Log root / checkpoint.

### 4.2 Checkpoint bundling

1. During the same pipeline run, fetch a **checkpoint** of the transparency log:

   * For Rekor: the “checkpoint” or “log root” structure.
   * For an internal log: a signed statement summarizing tree size, root hash, etc.

2. Store it as a text file:

   * `checkpoints/transparency-checkpoint.txt`
   * Content should be suitable for a simple CLI verifier:

     * Tree size
     * Root hash
     * Signature from the log operator
     * Optional: log origin / description

3. Acceptance criteria:

   * A local offline `tlog-verify` tool (sketched below) must be able to:

     * Read `rekor-receipt.json`.
     * Read `transparency-checkpoint.txt`.
     * Confirm inclusion (match the tree size/root hash, verify the log signature).
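The core of that check is RFC 6962-style inclusion-proof verification; a hedged C# sketch follows (a real `tlog-verify` must additionally verify the checkpoint’s signature with the log key from `vendor-pubkeys.pem`):

```csharp
using System.Security.Cryptography;

public static class InclusionProof
{
    // RFC 6962/9162 inclusion-proof check: recompute the root from the leaf
    // hash and the Merkle branch, then compare against the checkpoint root.
    public static bool Verify(long leafIndex, long treeSize, byte[] leafHash,
                              byte[][] proof, byte[] expectedRoot)
    {
        long fn = leafIndex, sn = treeSize - 1;
        var r = leafHash;

        foreach (var p in proof)
        {
            if (sn == 0) return false;
            if ((fn & 1) == 1 || fn == sn)
            {
                r = HashChildren(p, r);
                if ((fn & 1) == 0)
                {
                    do { fn >>= 1; sn >>= 1; } while ((fn & 1) == 0 && fn != 0);
                }
            }
            else
            {
                r = HashChildren(r, p);
            }
            fn >>= 1; sn >>= 1;
        }

        return sn == 0 && CryptographicOperations.FixedTimeEquals(r, expectedRoot);
    }

    private static byte[] HashChildren(byte[] left, byte[] right)
    {
        var buf = new byte[1 + left.Length + right.Length];
        buf[0] = 0x01; // RFC 6962 interior-node prefix
        left.CopyTo(buf, 1);
        right.CopyTo(buf, 1 + left.Length);
        return SHA256.HashData(buf);
    }
}
```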
---
## 5. Installer checksum and signature directions

### 5.1 Checksum

1. After building the main installer bundle (`installer.tar.zst`, `.msi`, `.deb`, etc.), generate:

   ```bash
   sha256sum installer.tar.zst > installer.sha256
   ```

2. Ensure the file format is standard:

   * Single line: `<sha256>  installer.tar.zst` (the two-space separator `sha256sum` emits)
   * No extra whitespace or carriage returns.

3. Store both files in the temp artifact area:

   * `artifacts/offline-kit-temp/installer.tar.zst`
   * `artifacts/offline-kit-temp/installer.sha256`
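If the checksum file is ever produced by the builder rather than by `sha256sum` itself, a minimal C# equivalent might be (note the two-space separator that `sha256sum -c` expects):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

public static class ChecksumWriter
{
    public static void WriteSha256File(string artifactPath, string outputPath)
    {
        using var stream = File.OpenRead(artifactPath);
        var digest = Convert.ToHexString(SHA256.HashData(stream)).ToLowerInvariant();

        // sha256sum format: "<hex digest>  <filename>\n" (two spaces, LF only).
        File.WriteAllText(outputPath, $"{digest}  {Path.GetFileName(artifactPath)}\n");
    }
}
```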
### 5.2 Signature of the checksum

1. Use a **release signing key** (GPG or a cosign key pair):

   * Private key only in CI.
   * Public key(s) in `keys/vendor-pubkeys.pem`.

2. Sign the checksum file, not the installer directly, so customers:

   * Verify `installer.sha256.sig` against `installer.sha256`.
   * Then verify `installer.tar.zst` against `installer.sha256`.

3. Example (GPG):

   ```bash
   gpg --batch --yes --local-user "$RELEASE_KEY_ID" \
       --output installer.sha256.sig \
       --detach-sign installer.sha256
   ```

4. Acceptance criteria:

   * In a clean environment with only `vendor-pubkeys.pem` imported:

     * `gpg --verify installer.sha256.sig installer.sha256` must succeed.

---
## 6. Keys/manifest directions

Under `keys/`:

1. `vendor-pubkeys.pem`:

   * A PEM bundle of all public keys that may sign:

     * SBOM DSSE
     * Installer checksums
     * Transparency log checkpoints (whether the same trust root or a separate one)
   * Maintain it as a concatenated PEM file.

2. `manifest.json`:

   * A small JSON file mapping key IDs and roles:

   ```json
   {
     "keys": [
       {
         "id": "stellaops-release-2025",
         "type": "ed25519",
         "usage": ["sbom-dsse", "installer-checksum"],
         "valid_from": "2025-01-01T00:00:00Z",
         "valid_to": "2027-01-01T00:00:00Z"
       },
       {
         "id": "stellaops-log-root-2025",
         "type": "ed25519",
         "usage": ["transparency-checkpoint"],
         "valid_from": "2025-01-01T00:00:00Z",
         "valid_to": "2026-01-01T00:00:00Z"
       }
     ]
   }
   ```

3. Ownership:

   * Devs do not store private keys in the repo.
   * Key rotation: update `manifest.json` and `vendor-pubkeys.pem` as separate, reviewed commits.
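Consumers of the manifest can gate signing and verification on it; a hedged sketch using `System.Text.Json` (the `IsKeyActive` helper is illustrative):

```csharp
using System;
using System.Linq;
using System.Text.Json;

public static class KeyManifest
{
    // Returns true if the given key id exists, lists the usage, and is within
    // its validity window at nowUtc. Schema as sketched in manifest.json above.
    public static bool IsKeyActive(string manifestJson, string keyId, string usage, DateTime nowUtc)
    {
        using var doc = JsonDocument.Parse(manifestJson);
        foreach (var key in doc.RootElement.GetProperty("keys").EnumerateArray())
        {
            if (key.GetProperty("id").GetString() != keyId) continue;

            var usages = key.GetProperty("usage").EnumerateArray().Select(u => u.GetString());
            if (!usages.Contains(usage)) return false;

            return key.GetProperty("valid_from").GetDateTime() <= nowUtc
                && nowUtc < key.GetProperty("valid_to").GetDateTime();
        }
        return false;
    }
}
```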
---
## 7. Minimal runtime config directions

Under `configs/minimal/`:

1. `compose.yml`:

   * Describe a minimal, single-node deployment suitable for:

     * Test / PoC in an air gap.
   * Use environment variables with obvious placeholders:

     * `STELLAOPS_DB_PASSWORD=CHANGE_ME`
     * `STELLAOPS_LICENSE_KEY=INSERT_LICENSE`
   * Avoid any embedded secrets or customer-specific values.

2. `appsettings.json` (or equivalent configuration file):

   * Only minimal, safe defaults:

     * Listening ports.
     * Logging level.
     * Path locations inside the container/VM.
   * No connection strings with passwords. Use clearly labeled placeholders.

3. `readme.md`:

   * Describe:

     * The purpose of these files.
     * A statement that they are templates and must be edited before use.
     * A short example of a `docker compose up` or `systemctl` invocation.
   * Explicitly say: “No secrets ship with the kit; you must provide them separately.”

4. Acceptance criteria:

   * Security review script or manual checklist to verify:

     * No passwords, tokens, or live endpoints are present.
   * Smoke test:

     * Replace placeholders with test values in CI and assert that the containers start.

---
## 8. Tools bundling directions

Under `tools/`:

1. `cosign`:

   * Statically linked binary for Linux x86_64 (and optionally other platforms if you decide).
   * No external runtime dependencies.
   * Version pinned in a central place (e.g. `build/versions.json`).

2. `tlog-verify`:

   * Your simple offline verifier:

     * Verifies `rekor-receipt.json` against `transparency-checkpoint.txt`.
     * Verifies checkpoint signatures using the key(s) in `vendor-pubkeys.pem`.
   * Implement it as a small CLI (Go / Rust / .NET).

3. Tool integrity:

   * Generate a separate checksums file for the tools, e.g.:

   ```bash
   sha256sum tools/* > tools.sha256
   ```

   * Not mandatory for customers to use, but good for internal checks.

4. Acceptance criteria:

   * In a fresh container:

     * `./tools/cosign --help` works.
     * `./tools/tlog-verify --help` shows usage.
   * End-to-end CI step:

     * Run the `VERIFY.md` steps programmatically (see below).

---
## 9. VERIFY.md generation directions

The offline kit must include a **ready-to-run** `VERIFY.md` with copy-paste commands that match the actual filenames.

Content should be close to:

````markdown
# Offline verification steps (Stella Ops)

```bash
# 0) Prepare
export KIT_DIR=/path/to/StellaOpsOfflineKit
cd "$KIT_DIR"

# 1) Verify installer integrity
sha256sum -c installer.sha256   # expects installer.tar.zst: OK

# 2) Verify checksum signature
gpg --keyid-format long --import keys/vendor-pubkeys.pem
gpg --verify installer.sha256.sig installer.sha256

# 3) Verify SBOM DSSE envelope (offline)
tools/cosign verify-blob \
  --key keys/vendor-pubkeys.pem \
  --signature product.sbom.json.dsse \
  product.sbom.json

# 4) Verify transparency receipt against bundled checkpoint
tools/tlog-verify \
  --receipt rekor-receipt.json \
  --checkpoint checkpoints/transparency-checkpoint.txt \
  --roots keys/vendor-pubkeys.pem

# 5) Inspect SBOM summary (CycloneDX example)
jq '.components[] | {name, version, purl}' product.sbom.json | head -n 50

# 6) Start from minimal config
cp -r configs/minimal ./local-min
# Edit placeholders in ./local-min/*.yml / *.json before running
```
````
### Developer directions

1. `OfflineKitBuilder` must generate `VERIFY.md` automatically:

   - Do not hardcode filenames in the markdown.
   - Use the actual filenames passed into the builder.
   - If you change file names in the future, `VERIFY.md` must update automatically.

2. Add a small template engine or straightforward string formatting to keep this maintainable (see the sketch after this list).
3. Acceptance criteria:

   - CI job: unpack the built offline kit, `cd` into it, and:

     - Run all commands from `VERIFY.md` in a non-interactive script.
     - The test environment simulates “offline” (no outbound network, though that is mostly a policy, not enforced here).
   - The job fails if any verification step fails.
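String interpolation is likely enough; a hedged sketch using C# raw string literals (the `RenderVerifyMd` helper and its parameters are illustrative):

````csharp
public static class VerifyMdTemplate
{
    // Filenames come from the builder's inputs, never hardcoded prose.
    public static string RenderVerifyMd(string installer, string checksum, string signature,
                                        string sbom, string dsse, string receipt,
                                        string checkpoint, string keys)
    {
        return $"""
            # Offline verification steps (Stella Ops)

            ```bash
            sha256sum -c {checksum}            # expects {installer}: OK
            gpg --keyid-format long --import {keys}
            gpg --verify {signature} {checksum}
            tools/cosign verify-blob --key {keys} --signature {dsse} {sbom}
            tools/tlog-verify --receipt {receipt} --checkpoint {checkpoint} --roots {keys}
            ```
            """;
    }
}
````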
---
## 10. CI/CD pipeline integration directions

You must add a dedicated stage to the pipeline, e.g. `package_offline_kit`, with the following characteristics:

1. **Dependencies**:

   - Depends on:

     - Installer build stage.
     - SBOM generation stage.
     - Signing/transparency logging stage.

2. **Steps**:

   1. Download artifacts:

      - `installer.tar.zst`
      - `product.sbom.json`
      - DSSE, Rekor receipt, checkpoint, keys, tools.
   2. Run the signing step (if not done earlier) for:

      - `installer.sha256`
      - `installer.sha256.sig`
   3. Invoke `OfflineKitBuilder`:

      - Provide paths as arguments or environment variables.
   4. Archive the resulting `StellaOpsOfflineKit-<version>` as:

      - `StellaOpsOfflineKit-<version>.tar.zst` (or `.zip`).
   5. Publish this artifact:

      - As a GitLab / CI artifact.
      - As part of the release assets on your Git server.

3. **Versioning**:

   - The offline kit version must match the product version.
   - Use the same semantic version tag as the main release.

4. **Validation**:

   - Add a `test_offline_kit` job:

     - Pulls the built offline kit.
     - Runs a scripted version of `VERIFY.md`.
     - Reports failure if any step fails.

---
## 11. Security & key management directions (for devs to respect)

1. **Never commit private keys** to the repository, even encrypted.

2. Keys used for:

   - DSSE signing
   - Installer checksum signing
   - Transparency log checkpoints

   must be provided via CI secrets or an external KMS.

3. When adding or rotating keys:

   - Update `keys/vendor-pubkeys.pem`.
   - Update `keys/manifest.json`.
   - Run a small internal test to ensure old and new keys are handled correctly.

4. Document in the internal engineering docs:

   - Who owns which key.
   - The rotation schedule.
   - The emergency rotation procedure.

---
## 12. Developer checklist per release

Before considering a release candidate complete, the responsible engineer must confirm:

1. [ ] `product.sbom.json` is deterministic (double-run hash test passed).
2. [ ] `product.sbom.json.dsse` verifies with `tools/cosign` and `vendor-pubkeys.pem`.
3. [ ] `installer.tar.zst` matches `installer.sha256`, and `installer.sha256.sig` verifies.
4. [ ] `rekor-receipt.json` and `transparency-checkpoint.txt` pass `tools/tlog-verify`.
5. [ ] `configs/minimal/` contains no secrets and boots a minimal deployment.
6. [ ] `VERIFY.md` has been regenerated by `OfflineKitBuilder` and all steps succeed in the CI `test_offline_kit` job.
7. [ ] The release artifact `StellaOpsOfflineKit-<version>.tar.zst` is attached to the release and downloadable.

If you want, the next step I can take is to draft a concrete `OfflineKitBuilder` design in .NET 10 (projects, classes, CLI options, and some sample code) so your devs can just fill in the implementation.
I’m sharing this because I think you’ll want to map recent ecosystem movements against Stella Ops; it gives you a sharp late‑November snapshot to trigger sprint tickets if you want.
## 🔎 What’s new in the ecosystem (mid‑Nov → now)

| Product / Project | Key Recent Changes / Signals |
|---|---|
| **Syft** (and **Grype** / grype‑db) | Syft 1.38.0 was released on **2025‑11‑17**. Grype‑db also got a new release, v0.47.0, on **2025‑11‑25**. |
| | However: there are newly opened bug reports for Syft 1.38.0, e.g. incorrect PURLs for some Go modules. |
| **JFrog Xray** | Its documented support for offline vulnerability‑DB updates remains active: `xr offline-update` is still the go‑to for air‑gapped environments. |
| **Trivy** (by Aqua Security) | There is historical support for an “offline DB” approach, but community reports warn about issues with DB‑schema mismatches when using old offline DB snapshots. |
## 🧮 Updated matrix (features ↔ business impact ↔ status)

| Product | Feature (reachability / SBOM & VEX / DSSE·Rekor / offline mode) | Business impact | StellaOps status (should be) | Competitor coverage (as of late Nov 2025) | Implementation notes / risks | Maturity |
|---|---|---:|---|---|---|---|
| Syft + Grype | SBOM generation (CycloneDX/SPDX), in‑toto attestations (SBOM provenance), vulnerability DB (grype‑db) for offline scanning | High — gives strong SBOM provenance + fast local scanning for CI / offline fleets | Should make StellaOps Sbomer/Vexer accept and normalize Syft output + verify signatures + use grype‑db snapshots for vulnerabilities | Syft v1.38.0 adds new SBOM functionality; grype‑db v0.47.0 provides updated vulnerability data. | Build a mirroring job to pull grype‑db snapshots and signature‑verify them; add SBOM ingest + attestation validation (CycloneDX/in‑toto) flows; but watch out for SBOM correctness bugs (e.g. PURL issues) | **Medium** — robust base, but recent issues illustrate risk in edge cases (module naming, reproducibility, DB churn) |
| JFrog Xray | SBOM ingestion + mapping into components; offline DB update tooling for vulnerability data | High — enterprise users with Artifactory/Xray benefit from offline update + scanning + SBOM import | StellaOps should mirror Xray‑style mapping and support offline DB bundle ingestion when clients use Xray | Xray supports `xr offline-update` for DB updates today. | Implement an Xray‑style offline updater: produce a signed vulnerability DB + mapping layer + ingestion test vectors | **Low–Medium** — Xray is mature; integration is operational rather than speculative |
| Trivy | SBOM generation, vulnerability scanning, offline DB support | High — provides SBOM + offline scanning, useful for regulated / air‑gap clients | StellaOps to optionally consume Trivy outputs + support offline‑DB bundles, falling back to Rekor lookups if online | The offline‑DB approach exists historically, though older offline DB snapshots may break with new Trivy CLI versions. | If adopting: build a DB‑snapshot bundle + signed manifest + ensure compatibility with the current Trivy schema; treat as a secondary/compatibility scanner | **Medium** — features exist, but offline-mode stability depends on DB schema/version alignment |
| Clair / Quay‑Clair (legacy) | Classic static container analysis + vulnerability scanning | Lower/high (depending on stack) — relevant for legacy Harbor/Quay users who haven’t migrated to newer tools | StellaOps should treat Clair as an optional scanner source; map outputs into the canonical graph schema for compatibility | Public activity is low and no major changes in the Nov 14–28 window (no recent release visible) | Provide a Clair→CycloneDX/VEX mapper + ingest path to the graph; treat as a fallback for legacy customers | **Medium** — stable but not under active development; good for compatibility, not innovation |
## ✅ Developer‑actionable To‑Do (prioritized after recent ecosystem moves)

1. **Enable scheduled mirroring and signature‑verification** for grype‑db snapshots (picking up the new v0.47.0).
2. **Build the SBOM ingest + in‑toto attestation validation path** in StellaOps, taking advantage of the recent Syft 1.38.0 release.
3. **Add resilience to SBOM ingestion logic** (e.g. validate PURLs, handle inconsistent metadata) given the known Syft 1.38.0 bugs.
4. **Support offline‑bundle ingestion for Trivy / Xray**: wrap the vulnerability‑DB snapshot + SBOM index + signed manifest in a tar.gz importable by clients.
5. **Treat Clair as an optional/legacy source**: implement mapping + ingestion for compatibility, but deprioritize it for feature innovation.
## ⚠️ Risk & Maturity Summary

- Syft + Grype: Good momentum, but recent SBOM metadata issues hint at reproducibility/consistency risks when used at scale in a graph-based system.
- Trivy offline support: Works but brittle when DB/CLI versions mismatch; requires strict version control when bundling offline DBs.
- Xray integration: Operational risk is low; the biggest work is building the mapping + ingestion logic in StellaOps.
- Clair: Safe fallback, but not strategic; limited development momentum.

---

If you like, I can also compute a **“delta heatmap”**: i.e. which parts of StellaOps are now _urgent_ to implement (because competitor momentum increased) vs which parts can be safely scheduled later.
Below is the **Delta Heatmap** you requested. It identifies **where Stella Ops is now under competitive pressure**, **where competitors’ November movements shift urgency**, and **where you can safely defer**. This is not generic; it is computed explicitly from the current Syft 1.38.0, grype-db v0.47.0, Trivy DB-schema volatility, and Xray’s stable offline-update posture.

# Stella Ops Delta Heatmap
Relative to mid-November 2025 ecosystem movements

## 1. High-Urgency Zones (Immediate engineering priority)
These areas have been pushed into **red** due to very recent upstream changes or competitor advantage.

### [A] SBOM Ingestion & Normalization
**Why urgent:**
Syft 1.38.0 changes SBOM fields and includes regressions in PURL correctness (e.g., the Go module naming issue). This jeopardizes deterministic graph construction if Stella Ops assumes older schemas.

**Required actions:**
• Harden the CycloneDX/SPDX normalizer against malformed or inconsistent PURLs.
• Add schema-version detection and compatibility modes.
• Add an “SBOM anomaly detector” to quarantine untrustworthy fields before they touch the lattice engine.

**Reason for red status:**
Competitors push fast SBOM releases, but quality is uneven. Stella Ops must be robust where competitors are not.
---

### [B] grype-db Snapshot Mirroring & Verification
**Why urgent:**
The new grype-db release v0.47.0 affects reachability and mapping logic; ignoring it creates a blind spot in vulnerability coverage.

**Required actions:**
• Implement a daily job: fetch → verify signature → generate deterministic bundle → index metadata → push to Feedser/Concelier.
• Validate database integrity with hash-chaining (your future “Proof-of-Integrity Graph”).

**Reason for red status:**
Anchore’s update cadence creates moving targets. Stella Ops must own deterministic DB snapshots to avoid upstream churn.

---

### [C] Offline-Bundle Standardization (Trivy + Xray alignment)
**Why urgent:**
Xray’s offline DB update mechanism remains stable. Trivy’s offline DB remains brittle due to schema changes. Enterprises expect a unified offline package structure.

**Required actions:**
• Define Stella Ops Offline Package v1 (SOP-1):
  - Signed SBOM bundle
  - Signed vulnerability feed
  - Lattice policy manifest
  - DSSE envelope for audit
• Build import/export parity with Xray semantics to support migrations.

**Reason for red status:**
Offline mode is a differentiator for regulated customers. Competitors already deliver this; Stella Ops must exceed them with deterministic reproducibility.

---

### [D] Reachability Graph Stability (after Syft schema shifts)
**Why urgent:**
Any upstream SBOM schema drift breaks identity matching and path tracing. A single inconsistent PURL propagates through the lattice, breaking determinism.

**Required actions:**
• Add canonicalization rules per language ecosystem.
• Add a fallback “evidence-based component identity reconstruction” when upstream data is suspicious.
• Version the Reachability Graph format (RG-1.0, RG-1.1) and support replay.

**Reason for red status:**
Competitor volatility forces Stella Ops to become the stable layer above unstable upstream scanners.

---
## 2. Medium-Urgency Zones (Important, but not destabilizing)

### [E] Trivy Compatibility Mode
Trivy DB schema mismatches are known; the community treats them as unavoidable. You should support Trivy SBOM ingestion and offline DBs, but treat them as secondary inputs.

**Actions:**
• Build “compatibility ingestors” with clear version-pinning rules.
• Warn the user when a Trivy scan is non-deterministic.

**Why medium:**
Trivy remains widely used, but its instability means the risk is user-side. Stella Ops just needs graceful handling.

---

### [F] Clair Legacy Adapter
Clair updates have slowed. Many enterprises still run it, especially on-premise with Quay/Harbor.

**Actions:**
• Map Clair results to Stella’s canonical vulnerability model.
• Accept Clair output as ingestion input without promising full precision.

**Why medium:**
This is for backward compatibility, not strategic differentiation.

---

### [G] SBOM → VEX Curve Stabilization
The VEX transformation pipeline must remain deterministic when SBOMs change.

**Actions:**
• Add cross-SBOM consistency checks.
• Add VEX “derivation receipts” showing every input vector.

**Why medium:**
This is core but not threatened by the November competitor updates.

---

## 3. Low-Urgency Zones (Safe to schedule later)

### [H] UI/UX for Evidence Views
Useful, but not affected by competitor changes.

### [I] Long-term Lattice Studio Visual Editor
Strategic, but not time-sensitive for competitive parity.

### [J] Proof-Market Ledger (Rekor v2 mirror)
An important differentiator, but no competitor movement in the last month pressures the timeline.

---
# Delta-Driven Priority Ranking (Final)

1. **SBOM Normalization Layer Hardening (Red)**
2. **grype-db Mirroring/Verification and Bundle Stability (Red)**
3. **Offline Bundle Standard (SOP-1) aligned with Xray semantics (Red)**
4. **Reachability Graph Stability + Canonicalization (Red)**
5. **Trivy Compatibility Layer (Medium)**
6. **Clair Legacy Adapter (Medium)**
7. **VEX Derivation Receipts (Medium)**
8. **UX and Non-core Modules (Low)**

# Strategic Interpretation

The November ecosystem changes create a clear pattern:
Anchore’s releases increase volatility in SBOM and vulnerability data structures; Aqua’s Trivy ecosystem remains unstable in offline mode; Xray continues to dominate regulated offline deployments. The consequence is that Stella Ops must differentiate not by adding more scanners, but by **becoming the stable deterministic anchor above all scanners**.

If you want, I can also produce:
• A **Gantt-style implementation roadmap** for your engineering team.
• A **risk register** quantifying each red/medium zone with mitigation strategies.
• Developer-level implementation plans for each module.

Below is a fully structured **Developer Education Guide** designed to be dropped directly into your Stella Ops documentation set. It is written explicitly for engineers, with actionable lessons extracted from the mistakes, regressions, and architectural weaknesses observed in the Syft, Grype, Trivy, Clair, and Xray ecosystems. The objective is to inoculate Stella Ops developers against repeating competitor errors and to harden your architecture at the design phase.

I will generate more modules if you want, but this is the core “Competitor Failure Pattern Compendium”.
|
||||
|
||||
# Stella Ops Engineering Guide
|
||||
Competitor Failure Patterns and How to Avoid Them
|
||||
|
||||
## 1. Syft Failure Patterns
|
||||
SBOM generator ecosystem (Anchore)
|
||||
|
||||
### 1.1 Schema Instability and Backward-Incompatible Changes

Issue: Syft frequently releases versions that modify or expand CycloneDX/SPDX structures without stable versioning. Recent example: Syft 1.38.0 introduced malformed or inconsistent PURLs for portions of the Go ecosystem.

Impact on Stella Ops if copied:

* Breaks deterministic reachability graphs.
* Causes misidentification of components.
* Creates drift between SBOM ingestion and vulnerability-linking.
* Forces consumers to implement ad-hoc normalizers.
Required Stella Ops approach:

* Always implement an **SBOM Normalization Layer** (NL) standing in front of ingestion.
* The NL must include:
  - Schema-version detection.
  - PURL repair heuristics.
  - Component ID canonicalization rules per ecosystem.
* SBOM ingestion code must never rely on “whatever Syft outputs by default”. The layer must treat upstream scanners as untrusted suppliers.

Developer notes:

Do not trust external scanner correctness. Consider every SBOM field attackable. Treat every SBOM as potentially malformed. A minimal sketch of the NL entry point follows.
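To make the “single choke point” idea concrete, here is a minimal C# sketch. All type names (`RawSbom`, `NormalizedSbom`, `RawComponent`, etc.) are invented for illustration — this is not an existing Stella Ops API, only the shape the NL boundary could take:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative types only — not an existing Stella Ops API.
public record RawComponent(string Name, string Ecosystem, string? Purl);
public record RawSbom(string DeclaredFormat, IReadOnlyList<RawComponent> Components);
public record NormalizedComponent(string CanonicalId, string Ecosystem, string Purl);
public record NormalizedSbom(string SchemaVersion, IReadOnlyList<NormalizedComponent> Components);

public interface ISbomNormalizer
{
    // Single choke point: every SBOM, from any scanner, passes through here.
    NormalizedSbom Normalize(RawSbom input);
}

public sealed class SbomNormalizer : ISbomNormalizer
{
    public NormalizedSbom Normalize(RawSbom input)
    {
        // 1. Record the schema version explicitly; never infer it later from field shapes.
        var schema = string.IsNullOrWhiteSpace(input.DeclaredFormat)
            ? "unknown"
            : input.DeclaredFormat.Trim();

        // 2. Repair PURLs, then canonicalize identity per ecosystem so downstream
        //    graphs stay stable even when the upstream scanner changes its naming.
        var components = input.Components
            .Select(c =>
            {
                var purl = RepairPurl(c);
                var id = $"{c.Ecosystem.ToLowerInvariant()}::{purl}";
                return new NormalizedComponent(id, c.Ecosystem, purl);
            })
            .OrderBy(c => c.CanonicalId) // deterministic ordering for stable hashing
            .ToList();

        return new NormalizedSbom(schema, components);
    }

    // Treat a missing or malformed PURL as expected input, not an exception.
    private static string RepairPurl(RawComponent c) =>
        string.IsNullOrWhiteSpace(c.Purl) || !c.Purl.StartsWith("pkg:")
            ? $"pkg:{c.Ecosystem.ToLowerInvariant()}/{c.Name}"
            : c.Purl;
}
```

The design point: ingestion code downstream of `Normalize` never branches on which scanner produced the SBOM.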
---

### 1.2 Inconsistent Component Naming Conventions

Issue: Syft tends to update how ecosystems are parsed (Java, Python wheels, Alpine packages, Go modules). Such failures repeat every 8–12 months.

Impact:

Upstream identity mismatch causes large graph changes.

Stella Ops prevention:

Implement canonical naming tables and fallbacks, so that if Syft changes its interpretation, Stella Ops logic remains stable. A sketch follows.
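One hypothetical realization (the alias entries below are made-up examples, not real observed renames) is a per-ecosystem alias table with a deterministic, lossless fallback:

```csharp
using System.Collections.Generic;

// Sketch of a canonical naming table with fallback; entries are illustrative.
public static class CanonicalNames
{
    // Aliases observed across scanner versions, keyed by (ecosystem, upstream name).
    private static readonly Dictionary<(string Ecosystem, string Upstream), string> Aliases =
        new()
        {
            // Hypothetical example: an upstream rename across scanner releases.
            [("pypi", "python-requests")] = "requests",
        };

    public static string Resolve(string ecosystem, string upstreamName) =>
        Aliases.TryGetValue((ecosystem, upstreamName), out var canonical)
            ? canonical
            // Fallback is a stable normalization rather than a guess, so
            // unknown names never silently collide.
            : upstreamName.ToLowerInvariant();
}
```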
---

### 1.3 Attestation Formatting Differences

Issue: Syft periodically changes its in-toto attestation envelope templates.

Impact: audit failures and an inability to verify the provenance of SBOMs.

Stella Ops prevention:

* Normalize every attestation.
* Validate DSSE according to your own rules, not Syft’s.
* Add support for multi-envelope mapping (Syft vs Trivy vs custom vendors).
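As a hedged sketch of the multi-envelope mapping idea (type and member names are invented), one mapper per producer funnels every vendor variant into a single internal shape before any verification runs:

```csharp
using System.Collections.Generic;

// Illustrative only: the internal envelope shape verification logic sees.
public record DsseEnvelope(string PayloadType, byte[] Payload, IReadOnlyList<byte[]> Signatures);

public interface IEnvelopeMapper
{
    // One mapper per upstream producer (Syft, Trivy, custom vendors).
    bool CanMap(string producerHint);
    DsseEnvelope Map(string rawEnvelopeJson);
}
```

Verification code then depends on `DsseEnvelope` alone, so a Syft template change is absorbed in one mapper rather than rippling through audit logic.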
---

## 2. Grype Failure Patterns

Vulnerability scanner and DB layer
### 2.1 DB Schema Drift and Implicit Assumptions

Issue: grype-db releases (e.g., v0.47.0) change internal mapping rules without formal versioning.

Impact:

If Stella Ops relied directly on raw grype-db, deterministic analysis would break.

Stella Ops prevention:

* Introduce a **Feed Translation Layer** (FTL) that consumes grype-db but emits a **Stella-Canonical Vulnerability Model** (SCVM).
* Validate each database snapshot via signature + structural integrity checks.
* Cache deterministic snapshots with hash-chains.

Developer rule:

Never process grype-db directly inside business logic. Pass everything through the FTL, as in the sketch below.
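A minimal FTL sketch in C#. The `GrypeDbRecord` shape here is illustrative, not the real grype-db schema; the point is the one-way translation into an SCVM that carries its source snapshot hash:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative input shape — not the actual grype-db schema.
public record GrypeDbRecord(string Id, string PackageName, string? FixedIn, string Severity);

// Stella-Canonical Vulnerability Model (SCVM): the only shape business logic sees.
public record ScvmVulnerability(
    string CanonicalId,
    string Package,
    string Severity,
    string? FixedVersion,
    string SourceSnapshotHash);

public sealed class FeedTranslationLayer
{
    public IReadOnlyList<ScvmVulnerability> Translate(
        IEnumerable<GrypeDbRecord> snapshot, string snapshotHash)
    {
        // Every record carries the snapshot hash so results are replayable:
        // same snapshot in, same SCVM out.
        return snapshot
            .Select(r => new ScvmVulnerability(
                CanonicalId: r.Id.ToUpperInvariant(),
                Package: r.PackageName,
                Severity: r.Severity,
                FixedVersion: r.FixedIn,
                SourceSnapshotHash: snapshotHash))
            .OrderBy(v => v.CanonicalId) // deterministic ordering
            .ToList();
    }
}
```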
---

### 2.2 Missing or Incomplete Fix Metadata

Failure pattern: grype-db often lacks fix data or “not affected” evidence that downstream users expect.

Stella Ops approach:

* The Stella Ops VEXer must produce its own fix metadata when it is absent upstream.
* Use lattice policies to compute “Not Affected” verdicts.
* Downstream logic should not depend on grype-db completeness.
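A hedged sketch of the verdict computation. The verdict set and inputs are invented for illustration; real lattice policies are richer. The key property is that missing evidence is represented explicitly rather than coerced into a guess:

```csharp
// Illustrative verdict lattice — not the production policy engine.
public enum VexVerdict { Affected, NotAffected, Fixed, UnderInvestigation }

public static class NotAffectedPolicy
{
    // componentReachable is nullable on purpose: absent reachability evidence
    // must never be silently treated as "reachable" or "unreachable".
    public static VexVerdict Evaluate(bool? componentReachable, bool fixObservedUpstream)
    {
        if (fixObservedUpstream) return VexVerdict.Fixed;
        if (componentReachable is null) return VexVerdict.UnderInvestigation;
        return componentReachable.Value ? VexVerdict.Affected : VexVerdict.NotAffected;
    }
}
```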
---

## 3. Trivy Failure Patterns

Aqua Security’s scanner ecosystem
### 3.1 Offline DB Instability

Issue: Trivy’s offline DB often breaks due to mismatched CLI versions, schema revisions, or outdated snapshots.

Impact:

Enterprise/offline customers frequently cannot scan without first guessing the right DB combinations.

Stella Ops prevention:

* Always decouple scanner versions from offline DB snapshot versions.
* Your offline bundles must be fully self-contained, with:
  - the DB
  - the schema version
  - the supported scanner version
  - a manifest
  - a DSSE signature
* Stella Ops must be version-predictable where Trivy is not. A bundle-manifest sketch follows.
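A hypothetical shape for the SOP-1 bundle manifest (field names are illustrative; the SOP-1 spec itself is defined elsewhere in this document set). The load gate makes version compatibility an explicit, checkable precondition:

```csharp
using System;
using System.Collections.Generic;

// Illustrative SOP-1 manifest shape.
public record Sop1Manifest(
    string BundleId,
    string DbSha256,              // content hash of the vulnerability DB
    string DbSchemaVersion,       // explicit, never inferred from file layout
    string MinScannerVersion,     // supported scanner range, decoupled from the DB
    string MaxScannerVersion,
    DateTimeOffset CreatedAt,
    IReadOnlyList<string> Files); // every file in the bundle; no implicit extras

public static class BundleGate
{
    // Assumes the DSSE signature over the serialized manifest was verified
    // first; only then is version compatibility even evaluated.
    public static bool IsLoadable(Sop1Manifest manifest, Version installedScanner) =>
        installedScanner >= Version.Parse(manifest.MinScannerVersion) &&
        installedScanner <= Version.Parse(manifest.MaxScannerVersion);
}
```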
---

### 3.2 Too Many Modes and Flags

Trivy offers filesystem scan, container scan, SBOM generation, image mode, repo mode, misconfiguration mode, and secret mode.

Failure pattern:

The codebase becomes a flag jungle, producing unpredictable behavior.

Stella Ops prevention:

* Keep scanner modes minimal and explicit:
  - SBOM ingestion
  - Binary reachability
  - Vulnerability evaluation
* Do not allow flags that mutate core behavior.
* Never combine unrelated scan types (misconfig + vuln) in one pass.
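One way to keep the mode set closed (a sketch, not the actual Stella Ops CLI surface) is an enum with a single exhaustive dispatch site, so no flag combination can produce an undefined hybrid:

```csharp
using System;

// Sketch: a closed set of modes instead of an open flag space. Adding a mode
// means extending this enum and its one dispatch site — nothing else.
public enum ScanMode
{
    SbomIngestion,
    BinaryReachability,
    VulnerabilityEvaluation
}

public static class ScanDispatcher
{
    public static string Describe(ScanMode mode) => mode switch
    {
        ScanMode.SbomIngestion => "ingest an SBOM into the normalization layer",
        ScanMode.BinaryReachability => "compute the binary reachability graph",
        ScanMode.VulnerabilityEvaluation => "evaluate vulnerabilities against policy",
        _ => throw new ArgumentOutOfRangeException(nameof(mode))
    };
}
```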
---

### 3.3 Incomplete Language Coverage

Trivy frequently misses newer ecosystems (e.g., Swift, modern Rust patterns, niche package managers).

Stella Ops approach:

* Language coverage must always be modular.
* A new ecosystem must be pluggable without touching the scanner core.

Developer guidelines:

Build plugin APIs from the start. Never embed ecosystem logic in core code. A minimal plugin contract is sketched below.
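A minimal plugin contract in C#, reusing the illustrative `RawComponent` record from the normalization sketch above. The core only ever sees the interface, never a concrete ecosystem:

```csharp
using System.Collections.Generic;

// Hypothetical plugin contract: the scanner core knows only this interface.
public interface IEcosystemPlugin
{
    string Ecosystem { get; }                        // e.g. "swift", "cargo"
    bool Handles(string filePath);                   // cheap routing predicate
    IReadOnlyList<RawComponent> Extract(byte[] fileContents);
}

public sealed class PluginRegistry
{
    private readonly List<IEcosystemPlugin> _plugins = new();

    public void Register(IEcosystemPlugin plugin) => _plugins.Add(plugin);

    // Adding Swift support = registering one new plugin; core code is untouched.
    public IEcosystemPlugin? Route(string filePath) =>
        _plugins.Find(p => p.Handles(filePath));
}
```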
---

## 4. Clair Failure Patterns

Legacy scanner from Red Hat/Quay
### 4.1 Slow Ecosystem Response

Clair has long cycles between updates and frequently lags behind vulnerability disclosure trends.

Impact:

Users must wait for CVE coverage to catch up.

Stella Ops prevention:

* Feed ingestion must be multi-source (NVD, vendor advisories, distro advisories, GitHub Security Advisories).
* Your canonical model must be aware of each feed’s temporal completeness.
---

### 4.2 Weak SBOM Story

Clair historically ignored SBOM-first workflows, making it incompatible with modern supply-chain practice.

Stella Ops advantage:

SBOM-first architecture must remain your core. Never regress into legacy “package-index-only” scanning.

---

## 5. Xray (JFrog) Failure Patterns

Enterprise ecosystem
### 5.1 Closed Metadata Model

Xray’s internal linkage rules are not transparent. Users cannot see how vulnerabilities map to components.

Impact:

Trust and auditability issues.

Stella Ops prevention:

* Every linking rule must be visible in the UI + API.
* Provide human-readable “proof steps” explaining each decision.
* Use deterministic logs and replayable scan manifests, as in the sketch below.
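A hedged sketch of what a replayable proof step could look like (names are invented). The invariant worth preserving: every CVE-to-component link carries enough recorded evidence that replaying the steps reproduces the decision:

```csharp
using System.Collections.Generic;

// Illustrative shapes for replayable linking evidence.
public record ProofStep(
    string Rule,          // which linking rule fired, by stable identifier
    string Input,         // the exact input the rule saw (e.g. a PURL)
    string Conclusion);   // what the rule concluded

public record LinkDecision(
    string VulnerabilityId,
    string ComponentId,
    IReadOnlyList<ProofStep> Steps) // replaying Steps must reproduce the decision
{
    public override string ToString() =>
        $"{VulnerabilityId} -> {ComponentId} via {Steps.Count} step(s)";
}
```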
---

### 5.2 Vendor Lock-in for Offline Mode

Xray’s offline DB system is stable but proprietary.

Stella Ops advantage:

* Create the **SOP-1 Offline Package** that is not tied to one scanner.
* Make offline mode user-controllable and auditable.
---

### 5.3 Limited Explainability

Xray does not produce natural-language explanations for its vulnerability decisions.

Stella Ops differentiator:

Your lattice-based explanation layer (Zastava Companion) must always output:

* Cause
* Evidence
* Graph path
* Final verdict
* Counterfactual scenarios
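As a sketch, the five required outputs map naturally onto one record (field names illustrative, not the actual Zastava Companion contract):

```csharp
using System.Collections.Generic;

// Illustrative explanation contract: every verdict ships with the evidence
// needed to audit — or argue with — it.
public record Explanation(
    string Cause,                           // why the finding exists at all
    IReadOnlyList<string> Evidence,         // concrete artifacts: SBOM entries, feed records
    IReadOnlyList<string> GraphPath,        // the reachability path, node by node
    string FinalVerdict,                    // e.g. "Affected", "Not Affected"
    IReadOnlyList<string> Counterfactuals); // e.g. "Not Affected if package X were removed"
```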
---

# Summary Table

Competitor failures mapped to Stella Ops principles

| Competitor | Failure Pattern | Stella Ops Countermeasure | Developer Lesson |
|-----------|-----------------|---------------------------|------------------|
| Syft | Schema churn, PURL inconsistencies | SBOM Normalization Layer | Never trust scanner output directly |
| Grype | DB schema drift, incomplete metadata | Feed Translation Layer | Always canonicalize feeds |
| Trivy | Offline DB mismatch instability | SOP-1 offline bundles | Version-decouple everything |
| Clair | Slow updates, weak SBOM support | Multi-source feeds + SBOM-first design | Avoid legacy package-index scanning |
| Xray | Closed logic, poor explainability | Deterministic, transparent lattice rules | Explanations are first-class citizens |
# Optional Add-ons

I can generate the following additional documents:

1. Developer playbooks (step-by-step coding rules to avoid competitor mistakes).
2. Architecture diagrams showing safe/unsafe design patterns.
3. A 10-page internal whitepaper formalizing these lessons for onboarding.
4. Training slides for your engineering team (PowerPoint).
5. A markdown wiki structure for your internal docs site.

Tell me which you want next.