Merge branch 'main' of https://git.stella-ops.org/stella-ops.org/git.stella-ops.org

docs/modules/advisory-ai/architecture-detail.md (new file, 168 lines)
@@ -0,0 +1,168 @@

> **Imposed rule:** Work of this type on this component must also be applied everywhere else it should be applied.

# Advisory AI Architecture

_Updated: 2025-11-03 • Owner: Docs Guild & Advisory AI Guild • Status: Draft_

This document breaks down how Advisory AI transforms immutable evidence into deterministic, explainable outputs. It complements `docs/modules/advisory-ai/architecture.md` with service-level views, data flows, and integration checklists for Sprint 110.

## 1. High-level flow

```
Concelier / Excititor / SBOM / Policy
        |  (retrievers)
        v
+-------------------------------+
| AdvisoryPipelineOrchestrator  |
| (plan generation)             |
+-------------------------------+
        |  plan + cache key
        v
+-------------------------------+
| Guarded Prompt Runtime        |
| (profile-specific)            |
+-------------------------------+
        |  validated output + citations
        v
+-------------------------------+
| Cache & Provenance            |
| (PostgreSQL + DSSE opt.)      |
+-------------------------------+
        |            \
        v             v
    REST API     CLI / Console
```

Key stages:
1. **Retrieval** – deterministic chunkers pull AOC-compliant data: Concelier advisories, Excititor VEX statements, SBOM context, Policy explain traces, optional runtime telemetry.
2. **Plan generation** – the orchestrator builds an `AdvisoryTaskPlan` (Summary / Conflict / Remediation) containing budgets, prompt template IDs, cache keys, and metadata.
3. **Guarded inference** – profile-specific prompt runners execute with guardrails (redaction, injection defence, citation enforcement). Failures are logged and downstream consumers receive deterministic errors.
4. **Persistence** – outputs are hashed (`outputHash`), referenced with `inputDigest`, optionally sealed with DSSE, and exposed for CLI/Console consumption.

## 2. Component responsibilities

| Component | Description | Notes |
|-----------|-------------|-------|
| `AdvisoryRetrievalService` | Facade that composes Concelier/Excititor/SBOM/Policy clients into context packs. | Deterministic ordering; per-source limits enforced. |
| `AdvisoryPipelineOrchestrator` | Builds task plans, selects prompt templates, allocates token budgets. | Tenant-scoped; memoises by cache key. |
| `GuardrailService` | Applies redaction filters, prompt allowlists, validation schemas, and DSSE sealing. | Shares configuration with Security Guild. |
| `ProfileRegistry` | Maps profile IDs to runtime implementations (local model, remote connector). | Enforces tenant consent and allowlists. |
| `AdvisoryOutputStore` | PostgreSQL table storing cached artefacts plus provenance manifest. | TTL defaults 24h; DSSE metadata optional. |
| `AdvisoryPipelineWorker` | Background executor for queued jobs (future sprint once 004A wires queue). | Consumes `advisory.pipeline.execute` messages. |

## 3. Data contracts

### 3.1 `AdvisoryTaskRequest`

```json
{
  "taskType": "Summary",
  "advisoryKey": "csaf:redhat:RHSA-2025:1001",
  "artifactId": "registry.stella-ops.internal/runtime/api",
  "artifactPurl": "pkg:oci/runtime-api@sha256:d2c3...",
  "policyVersion": "2025.10.1",
  "profile": "fips-local",
  "preferredSections": ["Summary", "Remediation"],
  "forceRefresh": false
}
```

- `taskType` ∈ `Summary|Conflict|Remediation`.
- Provide either `artifactId` or `artifactPurl` for remediation tasks (unlocks dependency analysis).
- `forceRefresh` bypasses cache and regenerates output (deterministic with identical inputs); the cache-key derivation is sketched below.

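To make the idempotency behaviour concrete, here is a minimal sketch of how a client could reproduce the documented plan cache key and an input digest. The helper names (`plan_cache_key`, `input_digest`) and the exact canonicalisation are illustrative assumptions; the authoritative derivation lives in `AdvisoryPipelineOrchestrator`, which also folds the full idempotency tuple (policy version, artifact, preferred sections) into the stored plan metadata.

```python
import hashlib
import json


def plan_cache_key(req: dict) -> str:
    # Mirrors the documented examples such as
    # "adv-summary:csaf:redhat:RHSA-2025:1001:fips-local" (assumption).
    return ":".join([
        f"adv-{req['taskType'].lower()}",
        req["advisoryKey"],
        req.get("profile", "default"),
    ])


def input_digest(context_pack: dict) -> str:
    # sha256 over a canonical JSON rendering of the context pack (assumption);
    # the service exposes the real value as `inputDigest` in every response.
    canonical = json.dumps(context_pack, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()


request = {
    "taskType": "Summary",
    "advisoryKey": "csaf:redhat:RHSA-2025:1001",
    "profile": "fips-local",
}
print(plan_cache_key(request))  # adv-summary:csaf:redhat:RHSA-2025:1001:fips-local
```
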
### 3.2 `AdvisoryPipelinePlanResponse`

Returned when plan preview is enabled; summarises chunk and vector usage so operators can verify evidence.

```json
{
  "taskType": "Summary",
  "cacheKey": "adv-summary:csaf:redhat:RHSA-2025:1001:fips-local",
  "budget": { "promptTokens": 1024, "completionTokens": 256 },
  "chunks": [{"documentId": "doc-1", "chunkId": "doc-1:0001", "section": "Summary"}],
  "vectors": [{"query": "Summary query", "matches": [{"chunkId": "doc-1:0001", "score": 0.92}]}],
  "sbom": {
    "artifactId": "registry.stella-ops.internal/runtime/api",
    "versionTimelineCount": 8,
    "dependencyPathCount": 5,
    "dependencyNodeCount": 17
  }
}
```

### 3.3 Output envelope

See `docs/modules/advisory-ai/guides/api.md` §6. Each response includes `inputDigest`, `outputHash`, Markdown content, citations, TTL, and a context summary to support offline replay.

## 4. Profiles & runtime selection

| Profile | Runtime | Crypto posture | Default availability |
|---------|---------|----------------|----------------------|
| `default` / `fips-local` | On-prem model (GPU/CPU) | FIPS-validated primitives | Enabled |
| `gost-local` | Sovereign local model | GOST algorithms | Opt-in |
| `cloud-openai` | Remote connector via secure gateway | Depends on hosting region | Disabled (requires tenant consent) |
| Custom | Operator-supplied | Matches declared policy | Disabled until Authority admin approves |

Profile selection is controlled via Authority configuration (`advisoryAi.allowedProfiles`). Remote profiles require tenant consent, allowlisted endpoints, and custom SLIs to track latency/error budgets.

## 5. Guardrails & validation pipeline

1. **Prompt preparation** – sanitized context injected into templated prompts (Liquid/Handlebars). Sensitive tokens scrubbed before render.
2. **Prompt allowlist** – each template fingerprinted; runtime rejects prompts whose hash is not documented.
3. **Response schema** – JSON validator ensures sections, severity tags, and citation arrays meet contract.
4. **Citation resolution** – referenced `[n]` items must map to context chunk identifiers (see the sketch after this list).
5. **DSSE sealing (optional)** – outputs can be sealed with the Advisory AI signing key; DSSE bundle stored alongside cache artefact.
6. **Audit trail** – guardrail results logged (`advisory_ai.guardrail.blocked|passed`) with tenant and trace IDs.

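The citation-resolution step can be pictured with a small check like the one below. This is a simplified sketch, not the `GuardrailService` implementation: it assumes each citation carries the `chunkId` it resolved from (mirroring the `prompt.citations[*].chunkId` field used by the Console) and that the plan lists the context chunk identifiers.

```python
import re


def check_citations(markdown_body: str, citations: list[dict], plan_chunk_ids: set[str]) -> list[str]:
    """Return guardrail violations; an empty list means the output passes this check."""
    violations = []
    referenced = {int(n) for n in re.findall(r"\[(\d+)\]", markdown_body)}
    declared = {c["index"] for c in citations}

    # Every bracketed [n] reference must be declared as a citation...
    for index in sorted(referenced - declared):
        violations.append(f"reference [{index}] has no citation entry")

    # ...and every citation must resolve to a chunk that was actually in the context pack.
    for citation in citations:
        if citation.get("chunkId") not in plan_chunk_ids:
            violations.append(f"citation [{citation['index']}] does not map to a context chunk")
    return violations
```
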
## 6. Caching & storage model

| Field | Description |
|-------|-------------|
| `_id` | `outputHash` (sha256 of content body). |
| `inputDigest` | sha256 of canonical context pack. |
| `taskType` | Summary/Conflict/Remediation. |
| `profile` | Inference profile used. |
| `content` | Markdown/JSON body and format metadata. |
| `citations` | Array of `{index, kind, sourceId, uri}`. |
| `generatedAt` | UTC timestamp. |
| `ttlSeconds` | Derived from tenant configuration (default 86400). |
| `dsse` | Optional DSSE bundle metadata. |

Cache misses trigger orchestration and inference; hits return stored artefacts immediately. TTL expiry removes entries unless `forceRefresh` has already regenerated them.

## 7. Telemetry & SLOs

Metrics (registered in Observability backlog):
- `advisory_ai_requests_total{tenant,task,profile}`
- `advisory_ai_latency_seconds_bucket`
- `advisory_ai_guardrail_blocks_total`
- `advisory_ai_cache_hits_total`
- `advisory_ai_remote_profile_requests_total`

Logs include `traceId`, `tenant`, `task`, `profile`, `outputHash`, `cacheStatus` (`hit|miss|bypass`). Prompt bodies are never logged; guardrail violations log sanitized excerpts only.

Suggested SLOs:
- **Latency:** P95 ≤ 3s (local), ≤ 8s (remote).
- **Availability:** 99.5% successful responses per tenant over 7 days.
- **Guardrail block rate:** ≤ 1%; investigate higher values.

## 8. Deployment & offline guidance

- Package prompts, guardrail configs, profile manifests, and local model weights in the Offline Kit.
- Remote profiles remain disabled until Authority admins set `advisoryAi.remoteProfiles` and record tenant consent.
- Export Center reads cached outputs using `advisory-ai:view` and benefits from DSSE sealing when enabled.

## 9. Checklist

- [ ] `AdvisoryRetrievalService` wired to the SBOM context client (AIAI-31-002).
- [ ] Authority scopes (`advisory-ai:*`, `aoc:verify`) validated in staging.
- [ ] Guardrail library reviewed by Security Guild (AIAI-31-005).
- [ ] Cache TTLs/DSSE policy signed off by Platform & Compliance.
- [ ] Observability dashboards published (DOCS-OBS backlog).
- [ ] Offline Kit bundle updated with prompts, guardrails, local profile assets.

---

_For questions or contributions, contact the Advisory AI Guild (Slack #guild-advisory-ai) and tag Docs Guild reviewers._

@@ -1,7 +1,7 @@
 # Advisory AI architecture

 > Captures the retrieval, guardrail, and inference packaging requirements defined in the Advisory AI implementation plan and related module guides.
-> Configuration knobs (inference modes, guardrails, cache/queue budgets) now live in [`docs/policy/assistant-parameters.md`](../../policy/assistant-parameters.md) per DOCS-AIAI-31-006.
+> Configuration knobs (inference modes, guardrails, cache/queue budgets) now live in [`docs/modules/policy/guides/assistant-parameters.md`](../policy/guides/assistant-parameters.md) per DOCS-AIAI-31-006.

 ## 1) Goals

docs/modules/advisory-ai/guides/api.md (new file, 210 lines)
@@ -0,0 +1,210 @@

> **Imposed rule:** Work of this type on this component must also be applied everywhere else it should be applied.

# Advisory AI API Reference (Sprint 110 Preview)

_Updated: 2025-11-03 • Owner: Docs Guild & Advisory AI Guild • Status: In progress_

## 1. Overview

The Advisory AI service exposes deterministic, guardrail-enforced endpoints for generating advisory summaries, conflict explanations, and remediation plans. Each request is backed by the Aggregation-Only Contract (AOC); inputs originate from immutable Concelier/Excititor evidence and SBOM context, and every output ships with verifiable citations and cache digests.

This document captures the API surface targeted for Sprint 110. The surface is gated behind Authority scopes and designed to operate identically online or offline (local inference profiles).

## 2. Base conventions

| Item | Value |
|------|-------|
| Base path | `/v1/advisory-ai` |
| Media types | `application/json` (request + response) |
| Authentication | OAuth2 access token (JWT, DPoP-bound or mTLS as per tenant policy) |
| Required scopes | See [Authentication & scopes](#3-authentication--scopes) |
| Idempotency | Requests are cached by `(taskType, advisoryKey, policyVersion, profile, artifactId/purl, preferredSections)` unless `forceRefresh` is `true` |
| Determinism | Guardrails reject outputs lacking citations; cache digests allow replay and offline verification |

## 3. Authentication & scopes

Advisory AI calls must include `aoc:verify` plus an Advisory AI scope. Authority enforces tenant binding for all combinations.

| Scope | Purpose | Typical principals |
|-------|---------|--------------------|
| `advisory-ai:view` | Read cached artefacts (`GET /outputs/{hash}`) | Console backend, evidence exporters |
| `advisory-ai:operate` | Submit inference jobs (`POST /summaries`, `/conflicts`, `/remediation`) | Platform services, CLI automation |
| `advisory-ai:admin` | Manage profiles & policy (`PATCH /profiles`, future) | Platform operators |

Requests without `aoc:verify` are rejected with `invalid_scope`. Tokens aimed at remote inference profiles must also satisfy tenant consent (`requireTenantConsent` in Authority config).

## 4. Profiles & inference selection

Profiles determine which model backend and guardrail stack execute the request. The `profile` field defaults to `default` (`fips-local`).

| Profile | Description |
|---------|-------------|
| `default` / `fips-local` | Local deterministic model packaged with Offline Kit; FIPS-compliant crypto |
| `gost-local` | Local profile using GOST-approved crypto stack |
| `cloud-openai` | Remote inference via cloud connector (disabled unless tenant consent flag set) |
| Custom | Installations may register additional profiles via Authority `advisory-ai` admin APIs |

## 5. Common request envelope

All task endpoints accept the same JSON payload; `taskType` is implied by the route.

```json
{
  "advisoryKey": "csaf:redhat:RHSA-2025:1001",
  "artifactId": "registry.stella-ops.internal/runtime/api",
  "artifactPurl": "pkg:oci/runtime-api@sha256:d2c3...",
  "policyVersion": "2025.10.1",
  "profile": "fips-local",
  "preferredSections": ["Summary", "Remediation"],
  "forceRefresh": false
}
```

Field notes:

- `advisoryKey` **required**. Matches the Concelier advisory identifier or VEX statement key.
- `artifactId` / `artifactPurl` optional but recommended for remediation tasks (enables SBOM context).
- `policyVersion` locks evaluation to a specific Policy Engine digest. Omit for "current".
- `profile` selects the inference profile (see §4). Unknown values return `400`.
- `preferredSections` prioritises advisory sections; the orchestrator still enforces AOC.
- `forceRefresh` bypasses cache, regenerating output and resealing the DSSE bundle.

## 6. Responses & caching

Successful responses share a common envelope:

```json
{
  "taskType": "Summary",
  "profile": "fips-local",
  "generatedAt": "2025-11-03T18:22:43Z",
  "inputDigest": "sha256:6f3b...",
  "outputHash": "sha256:1d7e...",
  "ttlSeconds": 86400,
  "content": {
    "format": "markdown",
    "body": "### Summary\n1. [Vendor statement][1] ..."
  },
  "citations": [
    {
      "index": 1,
      "kind": "advisory",
      "sourceId": "concelier:csaf:redhat:RHSA-2025:1001:paragraph:12",
      "uri": "https://access.redhat.com/errata/RHSA-2025:1001"
    }
  ],
  "context": {
    "planCacheKey": "adv-summary:csaf:redhat:RHSA-2025:1001:fips-local",
    "chunks": 42,
    "vectorMatches": 12,
    "sbom": {
      "artifactId": "registry.stella-ops.internal/runtime/api",
      "versionTimeline": 8,
      "dependencyPaths": 5,
      "dependencyNodes": 17
    }
  }
}
```

- `content.format` is `markdown` for human-readable payloads; machine-readable JSON attachments will use `json`. The CLI and Console render Markdown directly.
- `citations[].index` values correspond to the bracketed references in the Markdown body.
- `context.planCacheKey` lets operators resubmit the same request or inspect the plan (`GET /v1/advisory-ai/plans/{cacheKey}`, available when plan preview is enabled).
- Cached copies honour tenant-specific TTLs (default 24h). Exceeding the TTL triggers regeneration on the next request.

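Because the envelope carries both the content and its digest, auditors can re-verify a stored response without calling the service. The sketch below assumes `outputHash` is the sha256 of the `content.body` text, as described in the architecture guide's caching model; if the canonicalisation differs in your deployment, adjust accordingly.

```python
import hashlib
import json


def verify_output_hash(envelope: dict) -> bool:
    # Recompute the digest of the Markdown/JSON body and compare it with the
    # advertised outputHash (assumed to be "sha256:<hex of content body>").
    body = envelope["content"]["body"]
    recomputed = "sha256:" + hashlib.sha256(body.encode("utf-8")).hexdigest()
    return recomputed == envelope["outputHash"]


with open("summary-envelope.json", "r", encoding="utf-8") as handle:  # a saved §6 response
    envelope = json.load(handle)

print("output hash verified:", verify_output_hash(envelope))
```
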
## 7. Endpoints

### 7.1 `POST /v1/advisory-ai/summaries`

Generate or retrieve a cached advisory summary. Requires `advisory-ai:operate`.

- **Request body:** Common envelope (preferred sections default to `Summary`).
- **Response:** Summary output (see §6 example).
- **Errors:**
  - `400 advisory.summary.missingAdvisoryKey` – empty or malformed `advisoryKey`.
  - `404 advisory.summary.advisoryNotFound` – Concelier cannot resolve the advisory, or the tenant is forbidden.
  - `409 advisory.summary.contextUnavailable` – SBOM context still indexing; retry later.

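For orientation, here is one way to call the endpoint from a script. It is a sketch only: the gateway host and token handling are placeholders, and production callers should follow their tenant's DPoP/mTLS policy rather than a plain bearer header.

```python
import json
import os
import urllib.request

BASE_URL = os.environ.get("ADVISORY_AI_BASE_URL", "https://advisory-ai.internal")  # placeholder host
TOKEN = os.environ["ADVISORY_AI_TOKEN"]  # access token with advisory-ai:operate + aoc:verify

payload = {
    "advisoryKey": "csaf:redhat:RHSA-2025:1001",
    "profile": "fips-local",
    "preferredSections": ["Summary"],
    "forceRefresh": False,
}

request = urllib.request.Request(
    f"{BASE_URL}/v1/advisory-ai/summaries",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    envelope = json.load(response)

print(envelope["outputHash"], envelope["context"]["planCacheKey"])
```
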
### 7.2 `POST /v1/advisory-ai/conflicts`

Explain conflicting VEX statements, ranked by trust metadata.

- **Additional payload hints:** Set `preferredSections` to include `Conflicts` or targeted statement IDs.
- **Response extensions:** `content.format` remains Markdown; the `context.conflicts` array highlights conflicting statement IDs and trust scores.
- **Errors:** include `404 advisory.conflict.vexNotFound` and `409 advisory.conflict.trustDataPending` (waiting on Excititor linksets).

### 7.3 `POST /v1/advisory-ai/remediation`

Produce a remediation plan with fix versions and verification steps.

- **Additional payload hints:** Provide `artifactId` or `artifactPurl` to unlock SBOM timeline + dependency analysis.
- **Response extensions:** `content.format` Markdown plus `context.remediation` with recommended fix versions (`package`, `fixedVersion`, `rationale`).
- **Errors:** `422 advisory.remediation.noFixAvailable` (vendor has not published a fix), `409 advisory.remediation.policyHold` (policy forbids automated remediation).

### 7.4 `GET /v1/advisory-ai/outputs/{outputHash}`

Fetch a cached artefact (same envelope as §6). Requires `advisory-ai:view`.

- **Headers:** Supports `If-None-Match` with the `outputHash` (ETag) for cache validation.
- **Errors:** `404 advisory.output.notFound` if the cache expired or the tenant lacks access.

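A conditional fetch, sketched with stdlib HTTP below, keeps exporters from re-downloading artefacts they already hold; the host and token handling are placeholders as in the earlier example.

```python
import os
import urllib.error
import urllib.request

BASE_URL = os.environ.get("ADVISORY_AI_BASE_URL", "https://advisory-ai.internal")  # placeholder host
TOKEN = os.environ["ADVISORY_AI_TOKEN"]  # access token with advisory-ai:view + aoc:verify

output_hash = "sha256:1d7e..."  # taken from a previously stored envelope

request = urllib.request.Request(
    f"{BASE_URL}/v1/advisory-ai/outputs/{output_hash}",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "If-None-Match": output_hash,  # server answers 304 when the artefact is unchanged
    },
)

try:
    with urllib.request.urlopen(request) as response:
        print("artefact refreshed, status", response.status)
except urllib.error.HTTPError as error:
    if error.code == 304:
        print("cached copy is still current")
    else:
        raise
```
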
### 7.5 `GET /v1/advisory-ai/plans/{cacheKey}` (optional)

When plan preview is enabled (feature flag `advisoryAi.planPreview.enabled`), this endpoint returns the orchestration plan using `AdvisoryPipelinePlanResponse` (task metadata, chunk/vector counts). Requires `advisory-ai:operate`.

## 8. Error model

Errors follow a standard problem+JSON envelope:

```json
{
  "status": 400,
  "code": "advisory.summary.missingAdvisoryKey",
  "message": "advisoryKey must be provided",
  "traceId": "01HECAJ6RE8T5H4P6Q0XZ7ZD4T",
  "retryAfter": 30
}
```

| HTTP | Code prefix | Meaning |
|------|-------------|---------|
| 400 | `advisory.summary.*`, `advisory.remediation.*` | Validation failures or unsupported profile/task combinations |
| 401 | `auth.invalid_token` | Token expired/invalid; ensure DPoP proof matches access token |
| 403 | `auth.insufficient_scope` | Missing `advisory-ai` scope or tenant consent |
| 404 | `advisory.*.notFound` | Advisory/key not available for tenant |
| 409 | `advisory.*.contextUnavailable` | Dependencies (SBOM, VEX, policy) not ready; retry after indicated seconds |
| 422 | `advisory.*.noFixAvailable` | Remediation cannot be produced given current evidence |
| 429 | `rate_limit.exceeded` | Caller breached tenant or profile rate limit; examine `Retry-After` |
| 503 | `advisory.backend.unavailable` | Model backend offline or remote profile disabled |

All errors include `traceId` for cross-service correlation and log search.

## 9. Rate limiting & quotas

Advisory AI honours per-tenant quotas configured under `advisoryAi.rateLimits`:

- Default: 30 summary/conflict requests per minute per tenant & profile.
- Remediation requests default to 10/minute due to heavier SBOM analysis.
- Cached `GET /outputs/{hash}` calls share the `advisory-ai:view` bucket (60/minute).

Limits are enforced at the gateway; the API returns `429` with standard `Retry-After` seconds. Operators can adjust limits via Authority configuration bundles and propagate them offline using the Offline Kit.

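Automation that may hit these limits should back off on `429` (and on `409` context waits) using the advertised delay. The helper below is a sketch around the stdlib client; swap in your HTTP library of choice.

```python
import time
import urllib.error
import urllib.request


def call_with_backoff(request: urllib.request.Request, max_attempts: int = 5):
    """Retry on 429/409 using Retry-After (falling back to 30 s when absent)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return urllib.request.urlopen(request)
        except urllib.error.HTTPError as error:
            if error.code not in (409, 429) or attempt == max_attempts:
                raise
            delay = int(error.headers.get("Retry-After", "30"))
            time.sleep(delay)
    raise RuntimeError("unreachable")
```
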
## 10. Observability & audit

- Metrics: `advisory_ai_requests_total{tenant,task,profile}`, `advisory_ai_latency_seconds`, `advisory_ai_validation_failures_total`, `advisory_ai_cache_hits_total`.
- Logs: Structured with `traceId`, `tenant`, `task`, `profile`, `outputHash`, `cacheStatus` (`hit`|`miss`|`bypass`). Prompt bodies are **never** logged; guardrail violations emit sanitized snippets only.
- Audit events: `advisory_ai.output.generated`, `advisory_ai.output.accessed`, `advisory_ai.guardrail.blocked` ship to the Authority audit stream with tenant + actor metadata.

## 11. Offline & sovereignty considerations

- Offline installations bundle prompt templates, guardrail configs, and local model weights. Remote profiles (`cloud-openai`) remain disabled unless operators explicitly enable them and record consent per tenant.
- Cached outputs include DSSE attestations when DSSE mode is enabled. Export Center ingests cached artefacts via `GET /outputs/{hash}` using `advisory-ai:view`.
- Force-refresh regenerates outputs using the same cache key, allowing auditors to replay evidence during compliance reviews.

## 12. Change log

| Date (UTC) | Change |
|------------|--------|
| 2025-11-03 | Initial sprint-110 preview covering summary/conflict/remediation endpoints, cache retrieval, plan preview, and error/rate limit model. |

docs/modules/advisory-ai/guides/cli.md (new file, 70 lines)
@@ -0,0 +1,70 @@

# Advisory AI CLI Usage (DOCS-AIAI-31-005)

_Updated: 2025-11-24 · Owners: Docs Guild · DevEx/CLI Guild · Sprint 0111_

This guide shows how to drive Advisory AI from the StellaOps CLI using the `advise run` verb, with deterministic fixtures published on 2025-11-19 (`CLI-VULN-29-001`, `CLI-VEX-30-001`). It is designed for CI/offline use and mirrors the guardrail/policy contracts captured in `docs/modules/advisory-ai/guides/guardrails-and-evidence.md` and `docs/modules/policy/guides/assistant-parameters.md`.

## Prerequisites
- CLI binary from Sprint 205 (`stella`), logged in with scopes `advisory-ai:operate` + `aoc:verify`.
- Base URL pointed at the Advisory AI gateway: `export STELLAOPS_ADVISORYAI_URL=https://advisory-ai.internal` (falls back to the main backend base address when unset).
- Evidence fixtures available locally (offline friendly):
  - `out/console/guardrails/cli-vuln-29-001/sample-vuln-output.ndjson` (SHA256 `e5aecfba5cee8d412408fb449f12fa4d5bf0a7cb7e5b316b99da3b9019897186`).
  - `out/console/guardrails/cli-vuln-29-001/sample-sbom-context.json` (SHA256 `421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18`).
  - `out/console/guardrails/cli-vex-30-001/sample-vex-output.ndjson` (SHA256 `2b11b1e2043c2ec1b0cb832c29577ad1c5cbc3fbd0b379b0ca0dee46c1bc32f6`).
- Policy hash pinned: set `ADVISORYAI__POLICYVERSION=2025.11.19` (or the bundle hash shipped in the Offline Kit).

## Quickstart
```bash
stella advise run summary \
  --advisory-key csaf:redhat:RHSA-2025:1001 \
  --artifact-id registry.stella-ops.internal/runtime/api \
  --policy-version "$ADVISORYAI__POLICYVERSION" \
  --profile fips-local \
  --timeout 30 \
  --json
```
- Use `--timeout 0` for cache-only probes in CI; add `--force-refresh` to bypass the cache.
- `--profile cloud-openai` remains disabled unless tenant consent is recorded in Authority; guardrails reject it with exit code 12 when disabled.
- Guardrail fixtures (`sample-vuln-output.ndjson`, `sample-vex-output.ndjson`, `sample-sbom-context.json`) live in Offline Kits and feed the backend self-tests; the CLI fetches evidence from backend services automatically.

## Exit codes
| Code | Meaning | Notes |
| --- | --- | --- |
| 0 | Success (hit or miss; output cached or freshly generated) | Includes `outputHash` and citations. |
| 2 | Validation error (missing advisory key, bad profile) | Mirrors HTTP 400. |
| 3 | Context unavailable (SBOM/LNM/policy missing) | Mirrors HTTP 409 `advisory.contextUnavailable`. |
| 4 | Guardrail block (PII, citation gap, prompt too large) | Mirrors HTTP 422 `advisory.guardrail.blocked`. |
| 5 | Timeout waiting for output | Respect `--timeout` in seconds (0 = no wait). |
| 7 | Transport/auth failure | Network/TLS/token issues. |
| 12 | Remote profile disabled | Returned when `cloud-openai` is selected without consent. |

## Scripting patterns
- **Cache-only probes (CI smoke):** `stella advise run summary --advisory-key ... --timeout 0 --json > cache.json` (fails fast if evidence is missing).
- **Batch mode:** pipe advisory keys: `cat advisories.txt | xargs -n1 -I{} stella advise run summary --advisory-key {} --timeout 0 --json` (a wrapper that maps exit codes to actions is sketched after this list).
- **Profile gating:** set `--profile fips-local` for offline; use `--profile cloud-openai` only after Authority consent and when `ADVISORYAI__INFERENCE__MODE=Remote`.
- **Policy pinning:** always pass `--policy-version` (matches the Offline Kit bundle hash); outputs include the policy hash in `context.planCacheKey`.

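A minimal batch wrapper, sketched below, shows how the exit-code table can drive CI behaviour. The retry/alert policy is an example, not a prescribed workflow.

```python
import subprocess
import sys

# Exit codes from the table above.
RETRYABLE = {3, 5}        # context unavailable, timeout: try again on the next CI run
BLOCKING = {2, 4, 7, 12}  # validation, guardrail, transport, disabled profile: fail the job


def advise_summary(advisory_key: str) -> int:
    result = subprocess.run(
        [
            "stella", "advise", "run", "summary",
            "--advisory-key", advisory_key,
            "--timeout", "0",
            "--json",
        ],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        sys.stdout.write(result.stdout)
    return result.returncode


failures = 0
for key in sys.stdin.read().split():
    code = advise_summary(key)
    if code in BLOCKING:
        failures += 1
    elif code in RETRYABLE:
        print(f"deferred: {key} (exit {code})", file=sys.stderr)

sys.exit(1 if failures else 0)
```
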
## Sample output (trimmed)
```json
{
  "taskType": "Summary",
  "profile": "fips-local",
  "generatedAt": "2025-11-24T00:00:00Z",
  "outputHash": "sha256:cafe...babe",
  "citations": [{"index":1,"kind":"advisory","sourceId":"concelier:csaf:redhat:RHSA-2025:1001:paragraph:12"}],
  "context": {
    "planCacheKey": "adv-summary:csaf:redhat:RHSA-2025:1001:fips-local",
    "sbom": {"artifactId":"registry.stella-ops.internal/runtime/api","versionTimeline":8,"dependencyPaths":5}
  }
}
```

## Offline kit notes
- Copy the three CLI guardrail artefact bundles and their `hashes.sha256` files into `offline-kit/advisory-ai/fixtures/` and record them in `SHA256SUMS`.
- Set `ADVISORYAI__SBOM__BASEADDRESS` to the SBOM Service endpoint packaged in the kit; leave unset to fall back to `NullSbomContextClient` (Advisory AI will still respond deterministically with context counts set to 0).
- Keep `profiles.catalog.json` and `prompts.manifest` hashes aligned with the guardrail pack referenced in the Offline Kit manifest.

## Troubleshooting
- `contextUnavailable`: ensure the SBOM service is reachable or provide the `--sbom-context` fixture; verify LNM linkset IDs and hashes.
- `guardrail.blocked`: check the blocked phrase list (`docs/modules/policy/guides/assistant-parameters.md`) and payload size; remove PII or reduce SBOM clamps.
- `timeout`: raise `--timeout` or run cache-only mode to avoid long waits in CI.

docs/modules/advisory-ai/guides/console-fixtures.sha256 (new file, 8 lines)
@@ -0,0 +1,8 @@

bd85eb2ab4528825c17cd0549b547c2d1a6a5e8ee697a6b4615119245665cc02  docs/api/console/samples/advisory-ai-guardrail-banner.json
57d7bf9ab226b561e19b3e23e3c8d6c88a3a1252c1ea471ef03bf7a237de8079  docs/api/console/samples/vex-statement-sse.ndjson
af3459e8cf7179c264d1ac1f82a968e26e273e7e45cd103c8966d0dd261c3029  docs/api/console/samples/vuln-findings-sample.json
336c55d72abea77bf4557f1e3dcaa4ab8366d79008670d87020f900dcfc833c0  docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json
c55217e8526700c2d303677a66351a706007381219adab99773d4728cc61f293  docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.svg
9bc89861ba873c7f470c5a30c97fb2cd089d6af23b085fba2095e88f8d1f8ede  docs/assets/advisory-ai/console/evidence-drawer-b1820ad.svg
f6093257134f38033abb88c940d36f7985b48f4f79870d5b6310d70de5a586f9  docs/samples/console/console-vex-30-001.json
921bcb360454e801bb006a3df17f62e1fcfecaaccda471ae66f167147539ad1e  docs/samples/console/console-vuln-29-001.json

docs/modules/advisory-ai/guides/console.md (new file, 297 lines)
@@ -0,0 +1,297 @@

# Advisory AI Console Workflows

_Last updated: 2025-12-04_

This guide documents the forthcoming Advisory AI console experience so that the console, docs, and QA guilds share a single reference while the new endpoints finish landing.

## 1. Entry points & navigation
- **Dashboard tile**: the `Advisory AI` card on the console overview routes to `/console/vuln/advisory-ai` once CONSOLE-VULN-29-001 ships. The tile must include the current model build stamp and data freshness time.
- **Deep links**: Copy-as-ticket payloads link back into the console using `/console/vex/{statementId}` (CONSOLE-VEX-30-001). Provide fallbacks that open the Evidence modal with a toast if the workspace is still loading.

## 2. Evidence surfacing
| Workflow | Required API | Notes |
| --- | --- | --- |
| Findings overview | `GET /console/vuln/findings` | Must include policy verdict badge, VEX justification summary, and last-seen timestamps. |
| Evidence drawer | `GET /console/vex/statements/{id}` | Stream SSE chunk descriptions so long-form provenance renders progressively. |
| Copy as ticket | `POST /console/vuln/tickets` | Returns signed payload + attachment list for JIRA/ServiceNow templates. |

### 2.1 Plan composer vs response panel
- **Plan inspector** (left rail) mirrors the orchestrator output: structured chunks, SBOM summary, dependency counts, and cache key. Surface cache hits with the “Reused plan” badge that reads from `plan.planFromCache`.
- **Prompt preview** must show the sanitized prompt _and_ the raw inference response side-by-side once CONSOLE-VULN-29-001 exposes `/console/vuln/advisory-ai/{cacheKey}`. Always label the sanitized prompt “Guardrail-safe prompt”.
- **Citations**: render as `[n] Source Name` chips that scroll the evidence drawer to the matching chunk. Use the chunk ID from `prompt.citations[*].chunkId` to keep navigation deterministic.
- **Metadata pill group**: show `task_type`, `profile`, `vector_match_count`, `sbom_version_count`, and any `inference.*` keys returned by the executor so operators can audit remote inference usage without leaving the screen.

Deterministic fixture snapshot (command output, replaces inline screenshot):

```bash
python - <<'PY'
import json, pathlib
payload_path = pathlib.Path('docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json')
data = json.loads(payload_path.read_text())
metrics = data.get('metrics', {})
guard = data.get('guardrail', {})
violations = guard.get('violations', [])
print(f"# Advisory AI list view fixture (build {data.get('build')})")
print(f"- workspace: {data.get('workspace')} | generated: {data.get('generatedAtUtc')} | profile: {data.get('profile')} | cacheHit: {str(metrics.get('cacheHit', False)).lower()}")
meta = guard.get('metadata', {})
print(f"- guardrail: state={guard.get('state')} blocked={str(guard.get('blocked', False)).lower()} violations={len(violations)} promptLength={meta.get('promptLength')} blockedPhraseFile={meta.get('blockedPhraseFile')}")
print("\n| severity | policy | summary | reachability | vex | lastSeen | sbom |")
print("| --- | --- | --- | --- | --- | --- | --- |")
for item in data.get('findings', []):
    print("| {severity} | {policy} | {summary} | {reach} | {vex} | {last_seen} | {sbom} |".format(
        severity=item.get('severity'),
        policy=item.get('policyBadge'),
        summary=item.get('summary').replace('|', '\\|'),
        reach=item.get('reachability'),
        vex=item.get('vexState'),
        last_seen=item.get('lastSeen'),
        sbom=item.get('sbomDigest'),
    ))
PY
```

```md
# Advisory AI list view fixture (build console-fixture-r2)
- workspace: tenant-default | generated: 2025-12-03T00:00:00Z | profile: standard | cacheHit: true
- guardrail: state=blocked_phrases blocked=true violations=1 promptLength=12488 blockedPhraseFile=configs/guardrails/blocked-phrases.json

| severity | policy | summary | reachability | vex | lastSeen | sbom |
| --- | --- | --- | --- | --- | --- | --- |
| high | fail | jsonwebtoken <10.0.0 allows algorithm downgrade. | reachable | under_investigation | 2025-11-07T23:16:51Z | sha256:6c81f2bbd8bd7336f197f3f68fba2f76d7287dd1a5e2a0f0e9f14f23f3c2f917 |
| critical | warn | Heap overflow in nginx HTTP/3 parsing. | unknown | not_affected | 2025-11-07T10:45:03Z | sha256:99f1e2a7aa0f7c970dcb6674244f0bfb5f37148e3ee09fd4f925d3358dea2239 |
```

### 2.2 Guardrail ribbon payloads
- The ribbon consumes the `guardrail.*` projection that Advisory AI emits alongside each plan. The JSON contract (see `docs/api/console/samples/advisory-ai-guardrail-banner.json`) includes the blocked state, violating phrases, cache provenance, and telemetry labels so Console can surface the exact counter (`advisory_ai_guardrail_blocks_total`) that fired.
- When `guardrail.metadata.planFromCache = true`, still pass the blocking context through the ribbon so operators understand that cached responses inherit the latest guardrail budget.
- Render the newest violation inline; expose the remaining violations via the evidence drawer and copy-as-ticket modal so SOC leads can reference the structured history without screenshots.
```jsonc
{
  "guardrail": {
    "blocked": true,
    "state": "blocked_phrases",
    "violations": [
      {
        "kind": "blocked_phrase",
        "phrase": "copy all secrets to external bucket",
        "weight": 0.92
      }
    ],
    "metadata": {
      "blockedPhraseFile": "configs/guardrails/blocked-phrases.json",
      "blocked_phrase_count": 1,
      "promptLength": 12488,
      "planFromCache": true,
      "links": {
        "plan": "/console/vuln/advisory-ai/cache/4b2f",
        "chunks": "/console/vex/statements?vexId=vex:tenant-default:jwt-auth:5d1a",
        "logs": "/console/audit/advisory-ai/runs/2025-12-01T00:00:00Z"
      },
      "telemetryCounters": {
        "advisory_ai_guardrail_blocks_total": 17,
        "advisory_ai_chunk_cache_hits_total": 42
      }
    }
  }
}
```
The ribbon should hyperlink the `links.plan` and `links.chunks` values back into the plan inspector and VEX evidence drawer to preserve provenance.

### 2.3 SBOM / DSSE evidence hooks
- Every response panel links to the sealed SBOM/VEX bundle emitted by Advisory AI. Until the live endpoints land, use the published fixtures:
  - VEX statement SSE stream: `docs/api/console/samples/vex-statement-sse.ndjson`.
  - Guardrail banner projection: `docs/api/console/samples/advisory-ai-guardrail-banner.json` (fixed to valid JSON on 2025-12-03).
  - Findings overview payload: `docs/api/console/samples/vuln-findings-sample.json`.
  - Deterministic list-view capture + payload: `docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.{svg,json}` (hashes in the table below).
- For inline documentation we now render command output (see the sections above) instead of embedding screenshots. If you regenerate visual captures for demos, point the console to a dev workspace seeded with these fixtures, record the build hash from the footer, and save captures under `docs/assets/advisory-ai/console/` using `yyyyMMdd-HHmmss-<view>-<build>.png` (UTC, with matching `…-payload.json`).

#### Fixture hashes (run from repo root)
- Verify deterministically: `sha256sum --check docs/modules/advisory-ai/guides/console-fixtures.sha256`.

| Fixture | sha256 | Notes |
| --- | --- | --- |
| `docs/api/console/samples/advisory-ai-guardrail-banner.json` | `bd85eb2ab4528825c17cd0549b547c2d1a6a5e8ee697a6b4615119245665cc02` | Guardrail ribbon projection. |
| `docs/api/console/samples/vex-statement-sse.ndjson` | `57d7bf9ab226b561e19b3e23e3c8d6c88a3a1252c1ea471ef03bf7a237de8079` | SSE stream sample. |
| `docs/api/console/samples/vuln-findings-sample.json` | `af3459e8cf7179c264d1ac1f82a968e26e273e7e45cd103c8966d0dd261c3029` | Findings overview payload. |
| `docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json` | `336c55d72abea77bf4557f1e3dcaa4ab8366d79008670d87020f900dcfc833c0` | List-view sealed payload. |
| `docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.svg` | `c55217e8526700c2d303677a66351a706007381219adab99773d4728cc61f293` | Deterministic list-view capture. |
| `docs/assets/advisory-ai/console/evidence-drawer-b1820ad.svg` | `9bc89861ba873c7f470c5a30c97fb2cd089d6af23b085fba2095e88f8d1f8ede` | Evidence drawer mock (keep until live capture). |
| `docs/samples/console/console-vex-30-001.json` | `f6093257134f38033abb88c940d36f7985b48f4f79870d5b6310d70de5a586f9` | Console VEX search fixture. |
| `docs/samples/console/console-vuln-29-001.json` | `921bcb360454e801bb006a3df17f62e1fcfecaaccda471ae66f167147539ad1e` | Console vuln search fixture. |

## 3. Accessibility & offline requirements
- Console screens must pass WCAG 2.2 AA contrast and provide a focus order that matches the keyboard shortcuts planned for Advisory AI (see `docs/modules/advisory-ai/overview.md`).
- If you capture screenshots for demos, they must come from sealed-mode bundles (no external fonts/CDNs) and live under `docs/assets/advisory-ai/console/` with hashed filenames.
- Modal dialogs need `aria-describedby` attributes referencing the explanation text returned by the API; translation strings must live with existing locale packs.

### 3.1 Guardrail & inference status
- Display a **guardrail ribbon** at the top of the response panel with three states:
  - `Blocked` (red) when `guardrail.blocked = true` → show the blocked phrase count and require the operator to acknowledge before the response JSON is revealed.
  - `Warnings` (amber) when `guardrail.violations.length > 0` but not blocked.
  - `Clean` (green) otherwise.
- If the executor falls back to sanitized prompts (`inference.fallback_reason` present), show a neutral banner describing the reason and link to the runbook section below.
- Surface `inference.model_id`, prompt/completion token counts, and the latency histogram from `advisory_ai_latency_seconds_bucket` next to the response so ops can correlate user impact with remote/local mode toggles (`ADVISORYAI__Inference__Mode`).

Guardrail ribbon projection (command output, replaces mock screenshot):

```bash
python - <<'PY'
import json, pathlib
p = pathlib.Path('docs/api/console/samples/advisory-ai-guardrail-banner.json')
obj = json.loads(p.read_text())
guard = obj['guardrail']
meta = guard['metadata']
print('# Guardrail ribbon projection (banner sample)')
print(f"- blocked: {guard['blocked']} | state: {guard['state']} | violations: {len(guard['violations'])}")
print(f"- planFromCache: {meta.get('planFromCache')} | blockedPhraseFile: {meta.get('blockedPhraseFile')} | promptLength: {meta.get('promptLength')}")
print('- telemetry counters: ' + ', '.join(f"{k}={v}" for k,v in meta['telemetryCounters'].items()))
print('- links: plan={plan} | chunks={chunks} | logs={logs}'.format(
    plan=meta['links'].get('plan'),
    chunks=meta['links'].get('chunks'),
    logs=meta['links'].get('logs'),
))
print('\nViolations:')
for idx, v in enumerate(guard['violations'], 1):
    print(f"{idx}. {v['kind']} · phrase='{v['phrase']}' · weight={v.get('weight')}")
PY
```

```md
# Guardrail ribbon projection (banner sample)
- blocked: True | state: blocked_phrases | violations: 1
- planFromCache: True | blockedPhraseFile: configs/guardrails/blocked-phrases.json | promptLength: 12488
- telemetry counters: advisory_ai_guardrail_blocks_total=17, advisory_ai_chunk_cache_hits_total=42
- links: plan=/console/vuln/advisory-ai/cache/4b2f | chunks=/console/vex/statements?vexId=vex:tenant-default:jwt-auth:5d1a | logs=/console/audit/advisory-ai/runs/2025-12-01T00:00:00Z

Violations:
1. blocked_phrase · phrase='copy all secrets to external bucket' · weight=0.92
```

## 4. Copy-as-ticket guidance
1. Operators select one or more VEX-backed findings.
2. Console renders the sanitized payload (JSON) plus a context summary for the receiving system.
3. Users can download the payload or send it via webhook; both flows must log `console.ticket.export` events for audit.

## 5. Offline & air-gapped console behaviour
1. **Volume readiness** – confirm the RWX volume (`/var/lib/advisory-ai/{queue,plans,outputs}`) is mounted; the console should poll `/api/v1/advisory-ai/health` and surface “Queue not available” if the worker is offline.
2. **Cached responses** – when running air-gapped, highlight that only cached plans/responses are available by showing the `planFromCache` badge plus the `generatedAtUtc` timestamp.
3. **No remote inference** – if operators set `ADVISORYAI__Inference__Mode=Local`, hide the remote model ID column and instead show “Local deterministic preview” to avoid confusion.
4. **Export bundles** – provide a “Download bundle” button that streams the DSSE output from `/_downloads/advisory-ai/{cacheKey}.json` so operators can carry it into Offline Kit workflows documented in `docs/OFFLINE_KIT.md`. While staging endpoints are pending, reuse the Evidence Bundle v1 sample at `docs/samples/evidence-bundle/evidence-bundle-v1.tar.gz` (hash in `evidence-bundle-v1.tar.gz.sha256`) to validate wiring and any optional visual captures.

## 6. Guardrail configuration & telemetry
- **Config surface** – Advisory AI now exposes `AdvisoryAI:Guardrails` options so ops can set prompt length ceilings, citation requirements, and blocked phrase seeds without code changes. Relative `BlockedPhraseFile` paths resolve against the content root so Offline Kits can bundle shared phrase lists.
- **Sample**

```json
{
  "AdvisoryAI": {
    "Guardrails": {
      "MaxPromptLength": 32000,
      "RequireCitations": true,
      "BlockedPhraseFile": "configs/guardrail-blocked-phrases.json",
      "BlockedPhrases": [
        "copy all secrets to"
      ]
    }
  }
}
```

- **Console wiring** – the guardrail ribbon pulls `guardrail.blocked`, `guardrail.violations`, and `guardrail.metadata.blocked_phrase_count`, while the observability cards track `advisory_ai_chunk_requests_total`, `advisory_ai_chunk_cache_hits_total`, and `advisory_ai_guardrail_blocks_total` (now emitted even on cache hits). Use these meters to explain throttling or bad actors before granting additional guardrail budgets, and keep `docs/api/console/samples/advisory-ai-guardrail-banner.json` nearby so QA can validate localized payloads without hitting production data.

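To make the option semantics concrete, the sketch below evaluates a prompt against the three documented knobs (`MaxPromptLength`, `RequireCitations`, `BlockedPhraseFile`/`BlockedPhrases`). It is an illustration of the contract, not the service's guardrail implementation; the blocked-phrase file is assumed to be a JSON array of strings, and the violation shape simply mirrors the ribbon payload above.

```python
import json
import pathlib


def evaluate_guardrails(prompt: str, citations: list, options: dict, content_root: pathlib.Path) -> dict:
    violations = []

    phrases = list(options.get("BlockedPhrases", []))
    phrase_file = options.get("BlockedPhraseFile")
    if phrase_file:
        # Relative paths resolve against the content root, as documented.
        # Assumption: the file contains a JSON array of phrases.
        phrases += json.loads((content_root / phrase_file).read_text())

    for phrase in phrases:
        if phrase.lower() in prompt.lower():
            violations.append({"kind": "blocked_phrase", "phrase": phrase})

    if len(prompt) > options.get("MaxPromptLength", 32000):
        violations.append({"kind": "prompt_too_large", "promptLength": len(prompt)})

    if options.get("RequireCitations", True) and not citations:
        violations.append({"kind": "missing_citations"})

    return {"blocked": bool(violations), "violations": violations}
```
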
## 7. Publication state
- [x] Fixture-backed payloads and captures committed (`20251203-0000-list-view-build-r2.svg`, `evidence-drawer-b1820ad.svg`).
- [x] Copy-as-ticket flow documented; payload aligns with existing SOC runbooks.
- [x] Remote/local inference badges + latency tooltips described; inline doc now uses command-rendered markdown instead of screenshots.
- [x] SBOM/VEX bundle example attached (Evidence Bundle v1 sample).
- [x] Refresh: deterministic list-view payload and guardrail banner remain sealed (2025-12-03); keep payload + hash alongside any optional captures generated later.

### Publication readiness checklist (DOCS-AIAI-31-004)
- Inputs available now: console fixtures (`docs/samples/console/console-vuln-29-001.json`, `console-vex-30-001.json`), evidence bundle sample (`docs/samples/evidence-bundle/evidence-bundle-v1.tar.gz`), guardrail ribbon contract.
- Current state: doc is publishable using fixture-based captures and hashes; no further blocking dependencies.
- Optional follow-up: when live SBOM `/v1/sbom/context` evidence is available, regenerate the command-output snippets (and any optional captures), capture the build hash, and replace fixture payloads with live outputs.

> Tracking: DOCS-AIAI-31-004 (Docs Guild, Console Guild)

### Guardrail console fixtures (unchecked-integration)

- Vulnerability search sample: `docs/samples/console/console-vuln-29-001.json` (maps to CONSOLE-VULN-29-001).
- VEX search sample: `docs/samples/console/console-vex-30-001.json` (maps to CONSOLE-VEX-30-001).
- Use these until live endpoints are exposed; replace with real captures when staging is available.

### Fixture bundle regeneration (deterministic)

- Rebuild the fixture capture deterministically from the sealed payload:

```bash
python - <<'PY'
import html, json
from pathlib import Path
root = Path('docs/assets/advisory-ai/console')
payload = json.loads((root/'20251203-0000-list-view-build-r2-payload.json').read_text())
guard = payload['guardrail']; metrics = payload['metrics']; items = payload['findings']

def color_sev(sev):
    return {'critical':'#b3261e','high':'#d05c00','medium':'#c38f00','low':'#00695c'}.get(sev.lower(), '#0f172a')
def color_policy(val):
    return {'fail':'#b3261e','warn':'#d97706','pass':'#0f5b3a'}.get(val.lower(), '#0f172a')

rows = []
for idx, item in enumerate(items):
    y = 210 + idx * 120
    rows.append(f"""
<g transform=\"translate(32,{y})\">
<rect width=\"888\" height=\"104\" rx=\"10\" fill=\"#ffffff\" stroke=\"#e2e8f0\" />
<text x=\"20\" y=\"30\" class=\"title\">{html.escape(item['summary'])}</text>
<text x=\"20\" y=\"52\" class=\"mono subtle\">{html.escape(item['package'])} · {html.escape(item['component'])} · {html.escape(item['image'])}</text>
<text x=\"20\" y=\"72\" class=\"mono subtle\">reachability={html.escape(str(item.get('reachability')))} · vex={html.escape(str(item.get('vexState')))} · lastSeen={html.escape(str(item.get('lastSeen')))}</text>
<text x=\"20\" y=\"92\" class=\"mono faint\">sbom={html.escape(str(item.get('sbomDigest')))}</text>
<rect x=\"748\" y=\"14\" width=\"120\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{color_sev(item['severity'])}\" opacity=\"0.12\" />
<text x=\"758\" y=\"33\" class=\"mono\" fill=\"{color_sev(item['severity'])}\">sev:{html.escape(item['severity'])}</text>
<rect x=\"732\" y=\"50\" width=\"140\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{color_policy(item.get('policyBadge',''))}\" opacity=\"0.12\" />
<text x=\"742\" y=\"69\" class=\"mono\" fill=\"{color_policy(item.get('policyBadge',''))}\">policy:{html.escape(item.get('policyBadge',''))}</text>
</g>
""")

rows_svg = "\n".join(rows)
banner = '#b3261e' if guard.get('blocked') else '#0f5b3a'
svg = f"""<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"1280\" height=\"720\" viewBox=\"0 0 1280 720\">
<style>
.title {{ font-family: Inter, Arial, sans-serif; font-size: 18px; font-weight: 700; fill: #0f172a; }}
.mono {{ font-family: Menlo, monospace; font-size: 13px; fill: #0f172a; }}
.mono.subtle {{ fill: #475569; }}
.mono.faint {{ fill: #94a3b8; font-size: 12px; }}
</style>
<rect width=\"1280\" height=\"720\" fill=\"#f8fafc\" />
<rect x=\"32\" y=\"32\" width=\"1216\" height=\"72\" rx=\"12\" fill=\"#0f172a\" opacity=\"0.05\" />
<text x=\"48\" y=\"76\" class=\"title\">Advisory AI · Console fixture</text>
<text x=\"48\" y=\"104\" class=\"mono\" fill=\"#475569\">build={html.escape(payload['build'])} · generated={html.escape(payload['generatedAtUtc'])} · workspace={html.escape(payload['workspace'])} · profile={html.escape(payload['profile'])} · cacheHit={str(metrics.get('cacheHit', False)).lower()}</text>
<rect x=\"32\" y=\"120\" width=\"1216\" height=\"72\" rx=\"12\" fill=\"#fff1f0\" stroke=\"#f87171\" stroke-width=\"1\" />
<text x=\"48\" y=\"156\" class=\"title\" fill=\"{banner}\">Guardrail: {html.escape(guard.get('state','unknown'))}</text>
<text x=\"48\" y=\"176\" class=\"mono\" fill=\"#0f172a\">{html.escape(guard['metadata'].get('blockedPhraseFile',''))} · violations={len(guard.get('violations',[]))} · promptLength={guard['metadata'].get('promptLength')}</text>
<rect x=\"1080\" y=\"138\" width=\"96\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{banner}\" opacity=\"0.12\" />
<text x=\"1090\" y=\"157\" class=\"mono\" fill=\"{banner}\">blocked</text>
<rect x=\"944\" y=\"210\" width=\"304\" height=\"428\" rx=\"12\" fill=\"#0f172a\" opacity=\"0.04\" />
<text x=\"964\" y=\"244\" class=\"title\">Runtime metrics</text>
<text x=\"964\" y=\"272\" class=\"mono\">p50 latency: {metrics.get('latencyMsP50') or 'n/a'} ms</text>
<text x=\"964\" y=\"292\" class=\"mono\">p95 latency: {metrics.get('latencyMsP95') or 'n/a'} ms</text>
<text x=\"964\" y=\"312\" class=\"mono\">SBOM ctx: {html.escape(payload.get('sbomContextDigest',''))}</text>
<text x=\"964\" y=\"332\" class=\"mono\">Guardrail blocks: {guard['metadata']['telemetryCounters'].get('advisory_ai_guardrail_blocks_total')}</text>
<text x=\"964\" y=\"352\" class=\"mono\">Chunk cache hits: {guard['metadata']['telemetryCounters'].get('advisory_ai_chunk_cache_hits_total')}</text>
{rows_svg}
</svg>"""

(root/'20251203-0000-list-view-build-r2.svg').write_text(svg)
PY
```

- Verify the regenerated outputs match the sealed fixtures before publishing:

```bash
sha256sum \
  docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.svg \
  docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json
# expected:
# c55217e8526700c2d303677a66351a706007381219adab99773d4728cc61f293 ...-build-r2.svg
# 336c55d72abea77bf4557f1e3dcaa4ab8366d79008670d87020f900dcfc833c0 ...-build-r2-payload.json
```

**Reference**: API contracts and sample payloads live in `docs/api/console/workspaces.md` (see `/console/vuln/*` and `/console/vex/*` sections) plus the JSON fixtures under `docs/api/console/samples/`.

docs/modules/advisory-ai/guides/evidence-payloads.md (new file, 104 lines)
@@ -0,0 +1,104 @@

# Advisory AI Evidence Payloads (LNM-Aligned)

_Updated: 2025-11-24 · Owner: Advisory AI Docs Guild · Sprint: 0111 (AIAI-RAG-31-003)_

This document defines how Advisory AI consumes Link-Not-Merge (LNM) observations and linksets for Retrieval-Augmented Generation (RAG). It aligns payloads with the frozen LNM v1 schema (`docs/modules/concelier/link-not-merge-schema.md`, 2025-11-17) and replaces prior draft payloads. CLI/Policy artefacts (`CLI-VULN-29-001`, `CLI-VEX-30-001`, `policyVersion` digests) are referenced but optional at runtime; missing artefacts trigger deterministic `409 advisory.contextUnavailable` responses rather than fallback merging. A deterministic SBOM context fixture lives at `out/console/guardrails/cli-vuln-29-001/sample-sbom-context.json` (SHA256 `421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18`) and is used in the examples below.

## 1) Input envelope (per task)

```json
{
  "advisoryKey": "csaf:redhat:RHSA-2025:1001",
  "profile": "fips-local",
  "policyVersion": "2025.10.1",
  "lnm": {
    "observationIds": ["6561e41b3e3f4a6e9d3b91c1", "6561e41b3e3f4a6e9d3b91c2"],
    "linksetId": "6561e41b3e3f4a6e9d3b91d0",
    "provenanceHash": "sha256:0f7c...9ad3"
  },
  "sbom": {
    "artifactId": "registry.stella-ops.internal/runtime/api",
    "purl": "pkg:oci/runtime-api@sha256:d2c3...",
    "timelineClamp": 500,
    "dependencyPathClamp": 200
  }
}
```

Rules:
- `lnm.linksetId` and `lnm.observationIds` are **required**. Missing values → `409 advisory.contextUnavailable`.
- `provenanceHash` must match the hash list embedded in the LNM linkset; Advisory AI refuses linksets whose hashes mismatch.
- SBOM fields are optional; if absent, remediation tasks skip SBOM deltas and still return deterministic outputs.

## 2) Canonical chunk mapping

| LNM source | Advisory AI chunk | Transformation |
| --- | --- | --- |
| `advisory_observations._id` | `source_id` | Stored verbatim; used for citations. |
| `advisory_observations.advisoryId` | `advisory_key` | Also populates the `content_hash` seed. |
| `advisory_observations.summary` | `text` | Trimmed, Markdown-safe. |
| `advisory_observations.affected[].purl` | `purl` | Lowercased, deduped; no range merging. |
| `advisory_observations.severities[]` | `severity` | Passed through; multiple severities allowed. |
| `advisory_observations.references[]` | `references` | Sorted for determinism. |
| `advisory_observations.relationships[]` | `relationships` | Surface upstream `type/source/target/provenance`; no merge. |
| `advisory_observations.provenance.sourceArtifactSha` | `content_hash` | Drives dedup + cache key. |
| `advisory_linksets.conflicts[]` | `conflicts` | Serialized verbatim for conflict tasks. |
| `advisory_linksets.normalized` (`purls`, `versions`, `ranges`, `severities`) | `normalized` | Used as hints only; never overwrite observation fields. |

Chunk ordering: observations are sorted by `(source, advisoryId, provenance.fetchedAt)` per the LNM invariant; chunks are emitted in the same order to keep cache keys stable. SBOM deltas, when present, append after observations but before conflict echoes to keep hashes reproducible with and without SBOM context.

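The ordering rule can be exercised with a few lines of code. This is a sketch of the invariant, not the retriever implementation: `source`, `advisoryId`, and `provenance.fetchedAt` are the documented sort keys, and the bundle digest below is only a stand-in for whatever canonical digest the service derives from the `content_hash` values.

```python
import hashlib
import json


def order_observations(observations: list[dict]) -> list[dict]:
    # Sort by (source, advisoryId, provenance.fetchedAt), the LNM invariant,
    # so repeated runs over the same linkset emit chunks in the same order.
    return sorted(
        observations,
        key=lambda obs: (
            obs["source"],
            obs["advisoryId"],
            obs["provenance"]["fetchedAt"],
        ),
    )


def bundle_digest(ordered: list[dict]) -> str:
    # Illustrative stand-in for the cache-key contribution: hash the ordered
    # content hashes so any reorder or content change shifts the digest.
    canonical = json.dumps(
        [obs["provenance"]["sourceArtifactSha"] for obs in ordered],
        separators=(",", ":"),
    )
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```
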
## 3) Output citation rules
|
||||
|
||||
- `citations[n].sourceId` points to the LNM `source_id`; `citations[n].uri` must remain the upstream reference URI when present.
|
||||
- If SBOM deltas are included, they appear as separate citations with `kind: "sbom"` and `sourceId` built from SBOM context digest (`sbom:{artifactId}:{digest}`).
|
||||
- Conflict outputs must echo `linkset.conflicts[].reason` in the Markdown body with matching citation indexes; guardrails block outputs where a conflict reason lacks a citation.
|
||||
|
||||
## 4) Error conditions (aligned to LNM)
|
||||
|
||||
| Condition | Code | Notes |
|
||||
| --- | --- | --- |
|
||||
| Missing `lnm.linksetId` or `lnm.observationIds` | `409 advisory.contextUnavailable` | Caller should pass LNM IDs; retry once upstream emits them. |
|
||||
| Hash mismatch between `provenanceHash` and linkset | `409 advisory.contextHashMismatch` | Indicates tampering or stale payload; retry after refreshing linkset. |
|
||||
| Observation count exceeds clamp (defaults: 200 obs, 600 chunks) | `413 advisory.contextTooLarge` | Caller may request narrower `preferredSections` or reduce obs set. |
|
||||
| Conflicts array empty for conflict task | `422 advisory.conflict.noConflicts` | Signals upstream data gap; reported to Concelier. |
|
||||
|
||||
## 5) Sample normalized RAG bundle
|
||||
|
||||
```json
|
||||
{
|
||||
"taskType": "Summary",
|
||||
"advisoryKey": "csaf:redhat:RHSA-2025:1001",
|
||||
"lnmBundle": {
|
||||
"linksetId": "6561e41b3e3f4a6e9d3b91d0",
|
||||
"provenanceHash": "sha256:0f7c...9ad3",
|
||||
"chunks": [
|
||||
{
|
||||
"source_id": "concelier:ghsa:GHSA-xxxx:obs:6561e41b3e3f4a6e9d3b91c1",
|
||||
"content_hash": "sha256:1234...",
|
||||
"advisory_key": "csaf:redhat:RHSA-2025:1001",
|
||||
"purl": "pkg:maven/org.example/foo@1.2.3",
|
||||
"severity": [{"system":"cvssv3","score":7.8,"vector":"AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"}],
|
||||
"references": ["https://access.redhat.com/errata/RHSA-2025:1001"],
|
||||
"relationships": [{"type":"affects","source":"nvd","target":"cpe:/o:redhat:enterprise_linux:9"}]
|
||||
}
|
||||
],
|
||||
"conflicts": [
|
||||
{"field":"affected.versions","reason":"vendor_range_differs","values":["<1.2.0","<=1.2.3"]}
|
||||
]
|
||||
},
|
||||
"sbomSummary": {
|
||||
"artifactId": "registry.stella-ops.internal/runtime/api",
|
||||
"versionTimeline": 8,
|
||||
"dependencyPaths": 5
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Operators can store this bundle alongside plan cache entries; the `lnmBundle.provenanceHash` proves the evidence set matches the frozen Concelier linkset.
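
The sketch below shows what that check can look like. It assumes `provenanceHash` is a SHA-256 fold over the lexicographically sorted chunk `content_hash` values — the authoritative construction is defined by the Concelier linkset schema, so treat this as a shape for the verification step rather than the normative algorithm.

```python
import hashlib
import json

def recompute_provenance_hash(bundle):
    """Fold the sorted chunk content hashes into one digest (assumed scheme)."""
    hashes = sorted(chunk["content_hash"] for chunk in bundle["lnmBundle"]["chunks"])
    digest = hashlib.sha256()
    for value in hashes:
        digest.update(value.encode("utf-8"))
    return "sha256:" + digest.hexdigest()

def verify_bundle(path):
    """Fail closed when the recorded provenanceHash does not match the evidence."""
    with open(path, "r", encoding="utf-8") as handle:
        bundle = json.load(handle)
    recorded = bundle["lnmBundle"]["provenanceHash"]
    recomputed = recompute_provenance_hash(bundle)
    if recorded != recomputed:
        raise ValueError(f"provenanceHash mismatch: {recorded} != {recomputed}")
    return recomputed
```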

## 6) Operator validation steps

- Verify LNM collections at schema v1 (2025-11-17 freeze) before enabling Advisory AI tasks.
- Ensure `lnm.provenanceHash` matches linkset `observationHashes` before calling Advisory AI.
- Keep clamps deterministic: observations ≤200, chunks ≤600, timeline entries ≤500, dependency paths ≤200 (defaults; override only if documented).
- When running offline, include LNM linkset exports in the Offline Kit to preserve citation replay.
76
docs/modules/advisory-ai/guides/guardrails-and-evidence.md
Normal file
76
docs/modules/advisory-ai/guides/guardrails-and-evidence.md
Normal file
@@ -0,0 +1,76 @@

# Advisory AI Guardrails & Evidence Intake

_Updated: 2025-12-09 | Owner: Advisory AI Docs Guild | Status: Ready to publish (Sprint 0111 / AIAI-DOCS-31-001)_

This note captures the guardrail behaviors and evidence intake boundaries required by Sprint 0111 tasks (`AIAI-DOCS-31-001`, `AIAI-RAG-31-003`). It binds Advisory AI guardrails to upstream evidence sources and clarifies how Link-Not-Merge (LNM) documents flow into Retrieval-Augmented Generation (RAG) payloads.

## 1) Evidence sources and contracts

**Upstream readiness gates (now satisfied)**

- CLI guardrail artefacts (2025-11-19) are sealed at `out/console/guardrails/cli-vuln-29-001/` and `out/console/guardrails/cli-vex-30-001/`; hashes live in `docs/modules/cli/artefacts/guardrails-artefacts-2025-11-19.md`.
- Policy pin: set `policyVersion=2025.11.19` per `docs/modules/policy/guides/assistant-parameters.md` before enabling non-default profiles.
- SBOM context service is live: the 2025-12-08 smoke against `/sbom/context` produced `sha256:0c705259fdf984bf300baba0abf484fc3bbae977cf8a0a2d1877481f552d600d` with evidence in `evidence-locker/sbom-context/2025-12-08-response.json` and offline mirror `offline-kit/advisory-ai/fixtures/sbom-context/2025-12-08/`.
- DEVOPS-AIAI-31-001 landed: deterministic CI harness at `ops/devops/advisoryai-ci-runner/run-advisoryai-ci.sh` emits binlog/TRX/hashes for Advisory AI.

**Evidence feeds**

- Advisory observations (LNM) - consume immutable `advisory_observations` and `advisory_linksets` produced per `docs/modules/concelier/link-not-merge-schema.md` (frozen v1, 2025-11-17).
- VEX statements - Excititor + VEX Lens linksets with trust weights; treated as structured chunks with `source_id` and `confidence`.
- SBOM context - `SBOM-AIAI-31-001` contract: timelines and dependency paths retrieved via `ISbomContextRetriever` (`AddSbomContextHttpClient`), default clamps 500 timeline entries / 200 paths.
- Policy explain traces - Policy Engine digests referenced by `policyVersion`; cache keys include policy hash to keep outputs replayable.
- Runtime posture (optional) - Zastava signals (`exposure`, `admissionStatus`) when provided by Link-Not-Merge-enabled tenants; optional chunks tagged `runtime`.

All evidence items must carry `content_hash` + `source_id`; Advisory AI never mutates or merges upstream facts (Aggregation-Only Contract).

## 2) Guardrail stages

1. **Pre-flight sanitization**
   - Redact secrets (AWS-style keys, PEM blobs, generic tokens).
   - Strip prompt-injection phrases; enforce a 16 kB max input payload (configurable default).
   - Reject requests missing `advisoryKey` or linkset-backed evidence (LNM guard).
2. **Prompt assembly**
   - Deterministic section order: advisory excerpts -> VEX statements -> SBOM deltas -> policy traces -> runtime hints.
   - Vector previews capped at 600 chars + ellipsis; section budgets fixed per profile (`default`, `fips-local`, `gost-local`, `cloud-openai`) in `profiles.catalog.json` and hashed into DSSE provenance.
3. **LLM invocation (local/remote)**
   - Profiles selected via `profile` field; remote profiles require Authority tenant consent plus `advisory-ai:operate` and `aoc:verify`.
4. **Validation & citation enforcement**
   - Every emitted fact must map to an input chunk (`source_id` + `content_hash`); citations serialized as `[n]` in Markdown (see the sketch after this list).
   - Block outputs lacking citations, exceeding section budgets, or including unredacted PII.
5. **Output sealing**
   - Store `outputHash`, `inputDigest`, `provenanceHash`; wrap in DSSE when configured.
   - Cache TTL defaults to 24h; regenerate only when inputs change or `forceRefresh=true`.
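
Stage 4 is the check downstream validators most often re-implement, so here is a minimal illustrative sketch in Python (assumed shapes only: `facts` carry the citation indexes extracted from the rendered Markdown, `valid_indexes` is the set of chunk indexes the prompt actually contained; budget and PII checks are omitted).

```python
def check_citation_coverage(facts, valid_indexes):
    """Every emitted fact must cite at least one chunk, and every cited
    index must be one the prompt actually contained."""
    violations = []
    for fact in facts:
        cited = fact.get("citations", [])
        if not cited:
            violations.append(f"uncited fact: {fact['text'][:80]!r}")
        violations.extend(
            f"citation [{n}] does not resolve to an input chunk"
            for n in cited
            if n not in valid_indexes
        )
    return violations

# Usage: a non-empty violation list means the output is blocked before sealing.
```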

Metrics: `advisory_ai_guardrail_blocks_total`, `advisory_ai_outputs_stored_total`, `advisory_ai_citation_coverage_ratio`. Logs carry `output_hash`, `profile`, and block reason; no secrets or raw prompt bodies are logged.

## 3) RAG payload mapping to LNM (summary)

| LNM field | RAG chunk field | Notes |
| --- | --- | --- |
| `observation._id` | `source_id` | Used for citations and conflict surfacing. |
| `observation.advisoryId` | `advisory_key` | Keyed alongside task type in cache. |
| `observation.affected[].purl` | `purl` | Included for remediation + SBOM joins. |
| `observation.severities[]` | `severity` | Passed through unmerged; multiple severities allowed. |
| `linkset.conflicts[]` | `conflicts` | Rendered verbatim for conflict tasks; no inference merges. |
| `provenance.sourceArtifactSha` | `content_hash` | Drives determinism and replay. |

See `docs/modules/advisory-ai/guides/evidence-payloads.md` for full JSON examples and alignment rules.

## 4) Compliance with upstream artefacts and verification

- References: `CONSOLE-VULN-29-001`, `CONSOLE-VEX-30-001`, `CLI-VULN-29-001`, `CLI-VEX-30-001`, `EXCITITOR-CONSOLE-23-001`, `DEVOPS-AIAI-31-001`, `SBOM-AIAI-31-001`.
- CLI fixtures: expected hashes `421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18` (sample SBOM context) and `e5aecfba5cee8d412408fb449f12fa4d5bf0a7cb7e5b316b99da3b9019897186` / `2b11b1e2043c2ec1b0cb832c29577ad1c5cbc3fbd0b379b0ca0dee46c1bc32f6` (sample vuln/vex outputs). Verify with `sha256sum --check docs/modules/cli/artefacts/guardrails-artefacts-2025-11-19.md`.
- SBOM context: fixture hash `sha256:421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18`; live SbomService smoke (2025-12-08) hash `sha256:0c705259fdf984bf300baba0abf484fc3bbae977cf8a0a2d1877481f552d600d` stored in `evidence-locker/sbom-context/2025-12-08-response.json` and mirrored under `offline-kit/advisory-ai/fixtures/sbom-context/2025-12-08/`.
- CI harness: `ops/devops/advisoryai-ci-runner/run-advisoryai-ci.sh` emits `ops/devops/artifacts/advisoryai-ci/<UTC>/build.binlog`, `tests/advisoryai.trx`, and `summary.json` with SHA256s; include the latest run when shipping Offline Kits.
- Policy compatibility: guardrails must remain compatible with `docs/modules/policy/guides/assistant-parameters.md`; configuration knobs documented there are authoritative for env vars and defaults.
- Packaging tasks (AIAI-PACKAGING-31-002) must include this guardrail summary in DSSE metadata to keep Offline Kit parity.

## 5) Operator checklist

- LNM feed enabled and Concelier schemas at v1 (2025-11-17).
- SBOM retriever configured or `NullSbomContextClient` left as safe default; verify latest context hash (`sha256:0c705259f...d600d`) or fixture hash (`sha256:421af53f9...9d18`) before enabling remediation tasks.
- Policy hash pinned via `policyVersion` when reproducibility is required.
- CLI guardrail artefact hashes verified against `docs/modules/cli/artefacts/guardrails-artefacts-2025-11-19.md` and mirrored into Offline Kits.
- CI harness run captured from `ops/devops/advisoryai-ci-runner/run-advisoryai-ci.sh`; store `summary.json` alongside doc promotion.
- Remote profiles only after Authority consent and profile allowlist are set.
- Cache directories shared between web + worker hosts for DSSE sealing.
66
docs/modules/advisory-ai/guides/packaging.md
Normal file
66
docs/modules/advisory-ai/guides/packaging.md
Normal file
@@ -0,0 +1,66 @@

# Advisory AI Packaging & SBOM Bundle (AIAI-PACKAGING-31-002)

_Updated: 2025-11-22 · Owner: Advisory AI Release · Status: Draft_

Defines the artefacts and provenance required to ship Advisory AI in Sprint 0111, covering offline kits and on-prem deployments.

## 1) Bundle contents

| Artefact | Purpose | Provenance |
| --- | --- | --- |
| `advisory-ai-web` image | API surface + plan cache | SBOM: `SBOM-AIAI-31-001:web`; DSSE attestation signed by Release key |
| `advisory-ai-worker` image | Queue + inference executor | SBOM: `SBOM-AIAI-31-001:worker`; DSSE attestation |
| Prompt + guardrail pack | Deterministic prompts, redaction lists, validation rules | DSSE sealed; hash recorded in `prompts.manifest` |
| Profile catalog | `default`, `fips-local`, `gost-local`, `cloud-openai` (disabled) | Versioned JSON, hashed; tenant consent flags captured |
| Policy bundle | `policyVersion` digest for baseline evaluation; Authority importable | DSSE + provenance to Policy Engine digests |
| LNM evidence export (optional) | Concelier `advisory_linksets` + `advisory_observations` for air-gap replay | Hash list aligned to `provenanceHash` in RAG bundles |
| SBOM context client config | Example `AddSbomContextHttpClient` settings (`BaseAddress`, `Endpoint`, `ApiKey`) | Signed `sbom-context.example.json` |

## 2) Directory layout (Offline Kit)

```
/offline-kit/advisory-ai/
  images/
    advisory-ai-web.tar.zst
    advisory-ai-worker.tar.zst
  sboms/
    SBOM-AIAI-31-001-web.json
    SBOM-AIAI-31-001-worker.json
  provenance/
    advisory-ai-web.intoto.jsonl
    advisory-ai-worker.intoto.jsonl
    prompts.manifest.dsse
    profiles.catalog.json
    policy-bundle.intoto.jsonl
  config/
    advisoryai.appsettings.example.json
    sbom-context.example.json
  evidence/
    lnm-linksets.ndjson        # optional; aligns to linkset hashes in RAG bundles
    lnm-observations.ndjson    # optional; immutable raw docs
```

- All files hashed into `SHA256SUMS` with DSSE signature (`SHA256SUMS.dsse`); a verification sketch follows below.
- Profiles catalog and prompt pack hashes must be propagated into `AdvisoryAI:Provenance` settings for runtime verification.
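
A minimal verification sketch for that layout, assuming `SHA256SUMS` follows the standard `sha256sum` line format (`<hex>  <relative path>`); verifying the detached `SHA256SUMS.dsse` signature is out of scope here and should use the Release guild's DSSE tooling.

```python
import hashlib
from pathlib import Path

def verify_sha256sums(kit_root):
    """Recompute each listed file's SHA-256 and report mismatches or missing files."""
    root = Path(kit_root)
    failures = []
    for line in (root / "SHA256SUMS").read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        expected, rel_path = line.split(maxsplit=1)
        target = root / rel_path.strip().lstrip("*")  # tolerate sha256sum binary-mode marker
        if not target.is_file():
            failures.append(f"missing: {rel_path}")
            continue
        actual = hashlib.sha256(target.read_bytes()).hexdigest()
        if actual != expected:
            failures.append(f"hash mismatch: {rel_path}")
    return failures

# Usage:
# problems = verify_sha256sums("/offline-kit/advisory-ai")
# if problems:
#     raise SystemExit("\n".join(problems))
```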

## 3) SBOM & provenance rules

- SBOMs must follow SPDX 3.0.1; embed image digest (`sha256:<...>`) and build args.
- Attestations use DSSE + SPDX predicate; signer key matches Release guild key referenced in `DEVOPS-AIAI-31-001`.
- For air-gapped installs, operators verify: `slsa-verifier verify-attestation --source=stellaops/advisory-ai-web --bundle advisory-ai-web.intoto.jsonl --digest <image-digest>`.

## 4) Deployment checklist

- [ ] Import `advisory-ai-web` and `advisory-ai-worker` images to registry.
- [ ] Apply `profiles.catalog.json`; ensure remote profiles disabled unless Authority consent granted.
- [ ] Load prompt pack and set `AdvisoryAI:Prompts:ManifestHash` to `prompts.manifest`.
- [ ] Configure SBOM client (or keep `NullSbomContextClient` default).
- [ ] If shipping LNM evidence, seed `advisory_linksets` and `advisory_observations` collections before enabling inference.
- [ ] Record hashes in deployment log; surface in Authority audit via `advisory_ai.output.generated` events.

## 5) Update obligations

- Any change to prompts, guardrails, or profiles → bump manifest hash and regenerate DSSE.
- SBOM updates follow the same `SBOM-AIAI-31-001` idempotent contract; replace files, update `SHA256SUMS`, resign.
- Link all changes into the sprint Execution Log and Decisions & Risks sections.
- CLI/Policy artefacts must be present before enabling `cloud-openai` or `default` profiles for tenants; if missing, keep profiles disabled and record the reason in `Decisions & Risks`.
61
docs/modules/advisory-ai/guides/sbom-context-hand-off.md
Normal file
61
docs/modules/advisory-ai/guides/sbom-context-hand-off.md
Normal file
@@ -0,0 +1,61 @@

# SBOM Context Hand-off for Advisory AI (SBOM-AIAI-31-003)

_Updated: 2025-11-24 · Owners: Advisory AI Guild · SBOM Service Guild · Sprint 0111_

Defines the contract and smoke test for passing SBOM context from SBOM Service to Advisory AI `/v1/sbom/context` consumers. Aligns with `SBOM-AIAI-31-001` (paths/timelines) and the CLI fixtures published on 2025-11-19.

## Status & Next Steps (2025-12-08)

- ✅ 2025-12-08: Real SbomService `/sbom/context` run (`dotnet run --no-build` on `http://127.0.0.1:5090`) using `sample-sbom-context.json` scope. Response hash `sha256:0c705259fdf984bf300baba0abf484fc3bbae977cf8a0a2d1877481f552d600d` captured with timeline + dependency paths.
- Evidence: `evidence-locker/sbom-context/2025-12-05-smoke.ndjson` (2025-12-08 entry) and raw payload `evidence-locker/sbom-context/2025-12-08-response.json`.
- Offline kit mirror: `offline-kit/advisory-ai/fixtures/sbom-context/2025-12-08/` (CLI guardrail fixtures, new `sbom-context-response.json`, and `SHA256SUMS` manifest).
- 2025-12-05 run (fixture-backed stub) remains archived in the same NDJSON/logs for traceability.

## Contract

- **Endpoint** (SBOM Service): `/sbom/context`
- **Request** (minimal):

```json
{
  "artifactId": "registry.stella-ops.internal/runtime/api",
  "purl": "pkg:oci/runtime-api@sha256:d2c3...",
  "timelineClamp": 500,
  "dependencyPathClamp": 200
}
```

- **Response** (summarised):

```json
{
  "schema": "stellaops.sbom.context/1.0",
  "generated": "2025-11-19T00:00:00Z",
  "packages": [
    {"name":"openssl","version":"1.1.1w","purl":"pkg:deb/openssl@1.1.1w"},
    {"name":"zlib","version":"1.2.11","purl":"pkg:deb/zlib@1.2.11"}
  ],
  "timeline": 8,
  "dependencyPaths": 5,
  "hash": "sha256:421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18"
}
```

- **Determinism**: clamp values fixed unless overridden; `generated` timestamp frozen per fixture when offline.
- **Headers**: `X-StellaOps-Tenant` required; `X-StellaOps-ApiKey` optional for bootstrap.

## Smoke test (tenants/offline)

1. Start SBOM Service with fixture data loaded (or use `sample-sbom-context.json`).
2. Run: `curl -s -H "X-StellaOps-Tenant: demo" -H "Content-Type: application/json" -d @out/console/guardrails/cli-vuln-29-001/sample-sbom-context.json http://localhost:8080/sbom/context | jq .hash` (expect `sha256:421a...9d18`).
3. Configure Advisory AI:
   - `AdvisoryAI:SBOM:BaseAddress=http://localhost:8080`
   - `AdvisoryAI:SBOM:ApiKey=<key-if-required>`
4. Call Advisory AI cache-only: `stella advise run remediation --advisory-key csaf:redhat:RHSA-2025:1001 --artifact-id registry.stella-ops.internal/runtime/api --timeout 0 --json`.
   - Expect exit 0 and `sbomSummary.dependencyPaths=5` in response.
5. Record the hash and endpoint in ops log; mirror fixture + hashes into Offline Kit under `offline-kit/advisory-ai/fixtures/sbom-context/`.

## Failure modes

- `409 advisory.contextHashMismatch` — occurs when the returned `hash` differs from the LNM linkset `provenanceHash`; refresh context or re-export.
- `403` — tenant/api key mismatch; check `X-StellaOps-Tenant` and API key.
- `429` — clamp exceeded; reduce `timelineClamp`/`dependencyPathClamp` or narrow `artifactId`.

## References

- `docs/modules/sbom-service/guides/remediation-heuristics.md` (blast-radius scoring).
- `docs/modules/advisory-ai/guides/guardrails-and-evidence.md` (evidence contract).
- `docs/modules/cli/artefacts/guardrails-artefacts-2025-11-19.md` (hashes for fixtures).
@@ -34,7 +34,7 @@ Wire the deterministic pipeline (Summary / Conflict / Remediation flows) into th
- Persist `AdvisoryTaskPlan` metadata + generated output keyed by cache key + policy version.
- Expose TTL/force-refresh semantics.
4. **Docs & QA**
- Publish API spec (`docs/advisory-ai/api.md`) + CLI docs.
- Publish API spec (`docs/modules/advisory-ai/guides/api.md`) + CLI docs.
- Add golden outputs for deterministic runs; property tests for cache key stability (unit coverage landed for cache hashing + option clamps).

## 4. Task Breakdown
102
docs/modules/advisory-ai/overview.md
Normal file
102
docs/modules/advisory-ai/overview.md
Normal file
@@ -0,0 +1,102 @@

> **Imposed rule:** Work of this type or tasks of this type on this component must also be applied everywhere else it should be applied.

# Advisory AI Overview

_Updated: 2025-11-03 • Owner: Docs Guild & Advisory AI Guild • Status: Draft_

Advisory AI is the retrieval-augmented assistant that synthesises Conseiller (advisory) and Excititor (VEX) evidence, Policy Engine context, and SBOM insights into explainable outputs. It operates under the Aggregation-Only Contract (AOC): no derived intelligence alters or mutates raw facts, and every generated recommendation is paired with verifiable provenance.

## 1. Value proposition

- **Summaries on demand** – deterministically produce advisory briefs that highlight impact, exploitability, and mitigation steps with paragraph-level citations.
- **Conflict explainers** – reconcile diverging VEX statements by exposing supplier trust metadata, confidence weights, and precedence logic.
- **Remediation planning** – merge SBOM timelines, dependency paths, and policy thresholds to propose actionable remediation plans tailored to the requesting tenant.
- **Offline parity** – the same workflows execute in air-gapped deployments using local inference profiles; cache artefacts can be exported as DSSE bundles for audits.

## 2. Architectural highlights

| Layer | Responsibilities | Key dependencies |
|-------|------------------|------------------|
| Retrievers | Fetch deterministic advisory/VEX/SBOM context, guardrail inputs, policy digests. | Conseiller, Excititor, SBOM Service, Policy Engine |
| Orchestrator | Builds `AdvisoryTaskPlan` objects (summary/conflict/remediation) with budgets and cache keys. | Deterministic toolset (AIAI-31-003), Authority scopes |
| Guardrails | Enforce redaction, structured prompts, citation validation, injection defence, and DSSE sealing. | Security Guild guardrail library |
| Outputs | Persist cache entries (hash + context manifest), expose via API/CLI/Console, emit telemetry. | PostgreSQL cache store, Export Center, Observability stack |

See `docs/modules/advisory-ai/architecture.md` for deep technical diagrams and sequence flows.

## 3. Guardrails & compliance

1. **Aggregation-only** – only raw facts from authorised sources are consumed; no on-the-fly enrichment beyond deterministic tooling.
2. **Citation-first** – every sentence referencing external evidence must cite a canonical paragraph/statement identifier.
3. **Content filters** – redaction, policy-based profanity filters, and prompt allowlists are applied before model invocation.
4. **Deterministic cache** – outputs are stored with `inputDigest` and `outputHash`; force-refresh regenerates the same output unless upstream evidence changes.
5. **Audit & scope** – Authority scopes (`advisory-ai:view|operate|admin`) plus `aoc:verify` are mandatory; audit events (`advisory_ai.output.generated`, etc.) flow to the Authority ledger.

## 4. Supported personas & surfaces

| Persona | Typical role | Access |
|---------|--------------|--------|
| **Operations engineer** | Reviews summaries/remediation recommendations during incident triage. | Console + `advisory-ai:view` |
| **Platform engineer** | Automates remediation planning via CI/CD or CLI. | CLI + API + `advisory-ai:operate` |
| **Security/Compliance** | Audits guardrail decisions, exports outputs for evidence lockers. | API/Export Center + `advisory-ai:view` |
| **Service owner** | Tunes profiles, remote inference settings, and rate limits. | Authority admin + `advisory-ai:admin` |

Surfaces:
- **Console**: dashboard widgets (pending in CONSOLE-AIAI backlog) render cached summaries and conflicts.
- **CLI**: `stella advise run <task>` (AIAI-31-004C) for automation scripts.
- **API**: `/v1/advisory-ai/*` endpoints documented in `docs/modules/advisory-ai/guides/api.md`.

## 5. Data sources & provenance

- **Advisories** – Conseiller raw observations (CSAF/OSV) with paragraph anchors and supersedes chains.
- **VEX statements** – Excititor VEX observations plus trust weights provided by VEX Lens.
- **SBOM context** – SBOM Service timelines and dependency graphs (requires AIAI-31-002 completion).
- **Policy** – Policy Engine explain traces, waivers, and risk ratings used to contextualise responses.
- **Runtime posture** – Optional Zastava signals (exposure, admission status) when available.

All sources are referenced via content hashes (`content_hash`, `statement_id`, `timeline_entry_id`) ensuring reproducibility.

## 6. Profiles & deployment options

| Profile | Location | Notes |
|---------|----------|-------|
| `default` / `fips-local` | On-prem GPU/CPU | Packaged with Offline Kit; FIPS-approved crypto. |
| `gost-local` | Sovereign clusters | GOST-compliant crypto & model pipeline. |
| `cloud-openai` | Remote (optional) | Disabled by default; requires tenant consent and policy alignment. |
| Custom profiles | Operator-defined | Managed via Authority `advisory-ai` admin APIs and documented policy bundles. |

Offline deployments mirror prompts, guardrails, and weights within Offline Kits. Remote profiles must pass through Authority consent enforcement and strict allowlists.

## 7. Observability & SLOs

Metrics (pre-registered in Observability backlog):
- `advisory_ai_requests_total{tenant,task,profile}`
- `advisory_ai_latency_seconds_bucket`
- `advisory_ai_guardrail_blocks_total`
- `advisory_ai_cache_hits_total`

Suggested SLOs (subject to Observability sprint sign-off):
- P95 latency ≤ 3s for local profiles, ≤ 8s for remote profiles.
- Guardrail block rate < 1% (investigate above threshold).
- Cache hit ratio ≥ 60% for repeated advisory requests per tenant.

## 8. Roadmap & dependencies

| Area | Key tasks |
|------|----------|
| API delivery | DOCS-AIAI-31-003 (completed), AIAI-31-004A (service wiring), AIAI-31-006 (public endpoints). |
| Guardrails | AIAI-31-005, Security Guild reviews, DSSE provenance wiring (AIAI-31-004B). |
| CLI & Console | AIAI-31-004C (CLI), CONSOLE-AIAI tasks (dashboards, widgets). |
| Docs | DOCS-AIAI-31-002 (architecture deep-dive), DOCS-AIAI-31-004 (console guide), DOCS-AIAI-31-005 (CLI guide). |

## 9. Checklist

- [ ] SBOM context retriever (AIAI-31-002) completed and tested across ecosystems.
- [ ] Guardrail library integrated and security-reviewed.
- [ ] Authority scopes and consent toggles validated in staging.
- [ ] Telemetry dashboard reviewed with Observability guild.
- [ ] Offline kit bundle includes prompts, guardrail configs, local profile weights.

---

_For questions or contributions, contact the Advisory AI Guild (Slack #guild-advisory-ai) and tag Docs Guild reviewers._
@@ -4,6 +4,8 @@

**Source:** `src/AirGap/`
**Owner:** Platform Team

> **Note:** This is the module dossier with architecture and implementation details. For operational guides and workflows, see [docs/modules/airgap/guides/](./guides/).

## Purpose

AirGap manages sealed knowledge snapshot export and import for offline/air-gapped deployments. Provides time-anchored snapshots with staleness policies, deterministic bundle creation, and secure import validation for complete offline operation.
35
docs/modules/airgap/gaps/AG1-AG12-remediation.md
Normal file
35
docs/modules/airgap/gaps/AG1-AG12-remediation.md
Normal file
@@ -0,0 +1,35 @@

# Remediation plan for AG1–AG12 (Air‑gap deployment playbook gaps)

Source: `31-Nov-2025 FINDINGS.md` (AG1–AG12). Scope: sprint `SPRINT_0510_0001_0001_airgap`.

## Summary of actions

- **AG1 Trust roots & key custody:** Define per-profile root hierarchy (FIPS/eIDAS/GOST/SM + optional PQ). Require M-of-N custody for offline signer keys; dual-sign (ECDSA+PQ) where regionally allowed. Add rotation cadence (quarterly PQ, annual classical) and HSM/offline signer paths. Manifest fields: `trustRoots[] {id, profile, algo, fingerprint, rotationDue}`.
- **AG2 Rekor mirror integrity:** Standardize mirror format as DSSE-signed CAR with `mirror.manifest` (root hash, start/end index, freshness ts, signature). Include staleness window hours and reconciliation steps (prefer upstream Rekor if available, else fail closed when stale > window).
- **AG3 Feed freezing & provenance:** Extend offline kit manifest with `feeds[] {name, source, snapshotId, sha256, validFrom, validTo, dsse}`. Replay must refuse newer/older feeds unless override DSSE is supplied.
- **AG4 Deterministic tooling versions:** Add `tools[] {name, version, sha256, imageDigest}` to manifest; CLI verifies before replay. Require `--offline`/`--disable-telemetry` flags in runner scripts.
- **AG5 Size/resource limits:** Add kit chunking spec (`zstd` chunks, 256 MiB max, per-chunk SHA256) and max kit size (10 GiB). Provide streaming verifier script path (`scripts/verify-kit.sh`) and fail on missing/invalid chunks (see the sketch after this list).
- **AG6 Malware/content scanning:** Require pre-publish AV/YARA scan with signed report hash in manifest (`scans[] {tool, version, result, reportSha256}`) and post-ingest scan before registry load. Scanner defaults to offline sigs.
- **AG7 Policy/graph alignment:** Manifest must carry policy bundle hash and graph revision hash (DSSE references). Replay fails closed on mismatch. Controller status surfaces hashes and drift seconds.
- **AG8 Tenant/env scoping:** Manifest includes `tenant`, `environment`; importer enforces equality and tenant-scoped storage paths. DSSE annotations must carry tenant/env; reject mismatches.
- **AG9 Ingress/egress audit trail:** Add signed ingress/egress receipts (`ingress_receipt.dsse`, `egress_receipt.dsse`) capturing kit hash, operator ID, decision, timestamp. Store in Proof Graph (or local CAS mirror when offline).
- **AG10 Replay validation depth:** Define levels: `hash-only`, `recompute`, `recompute+policy-freeze`. Manifest states required level; replay script enforces and emits evidence bundle (`replay_evidence.dsse`) with success criteria.
- **AG11 Observability in air-gap:** Provide OTLP-to-file/SQLite exporter in kit; default retention 7d/5 GiB cap; redaction allowlist documented. No external sinks. Controller/Importer log to local file + optional JSON lines.
- **AG12 Operational runbooks:** Add `docs/airgap/runbooks/` covering: signature failure, missing gateway headers, stale mirror, policy mismatch, chunk verification failure. Include required approvals and fail-closed guidance.
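
For AG5, the sketch below illustrates streaming chunk verification. It is written in Python rather than the referenced `scripts/verify-kit.sh`, and it assumes a hypothetical per-chunk manifest entry of the form `{"path": ..., "sha256": ...}`; the real manifest fields come from the offline-kit schema work listed below.

```python
import hashlib
import json
from pathlib import Path

CHUNK_READ = 4 * 1024 * 1024         # stream in 4 MiB slices
MAX_CHUNK_BYTES = 256 * 1024 * 1024  # AG5: 256 MiB per-chunk ceiling

def verify_kit(kit_dir, manifest_name="chunks.manifest.json"):
    """Verify every chunk listed in the manifest: present, within the size
    limit, and matching its recorded SHA-256. Fail closed on any gap."""
    root = Path(kit_dir)
    manifest = json.loads((root / manifest_name).read_text(encoding="utf-8"))
    errors = []
    for entry in manifest["chunks"]:
        chunk = root / entry["path"]
        if not chunk.is_file():
            errors.append(f"missing chunk: {entry['path']}")
            continue
        if chunk.stat().st_size > MAX_CHUNK_BYTES:
            errors.append(f"chunk exceeds 256 MiB: {entry['path']}")
        digest = hashlib.sha256()
        with chunk.open("rb") as handle:
            while block := handle.read(CHUNK_READ):
                digest.update(block)
        if digest.hexdigest() != entry["sha256"]:
            errors.append(f"sha256 mismatch: {entry['path']}")
    if errors:
        raise SystemExit("kit verification failed:\n" + "\n".join(errors))
```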

## Files to update (next steps)

- Offline kit manifest schema (`docs/airgap/offline-kit-manifest.schema.json`, new) with fields above.
- Runner scripts: `scripts/verify-kit.sh`, `scripts/replay-kit.sh` (enforce hash/tool checks, replay levels).
- Add AV/YARA guidance to `docs/airgap/offline-kit/README.md` and integrate into CI.
- Update controller/importer status APIs to surface policy/graph hash and scan results.
- Add ingress/egress receipt DSSE templates (`docs/airgap/templates/receipt.ingress.json`).

## Owners & timelines

- Schema & manifest updates: AirGap Importer Guild (due 2025-12-05).
- Key custody/rotation doc + dual-sign flows: Authority Guild (due 2025-12-06).
- Mirror/feeds/tool hashing + scripts: DevOps Guild (due 2025-12-06).
- Runbooks + observability defaults: Ops Guild (due 2025-12-07).

## Acceptance

- All new schema fields documented with examples; DSSE signatures validated in CI.
- Replay and verify scripts fail-closed on mismatch/staleness; tests cover chunking and hash drift.
- Ingress/egress receipts produced during CI dry-run and verified against Proof Graph mirror.
@@ -0,0 +1,384 @@

# VEX Signature Verification: Offline Mode

**Sprint:** SPRINT_1227_0004_0001_BE_signature_verification
**Task:** T11 - Document offline mode with bundled trust anchors
**Date:** 2025-12-28

---

## Overview

This document describes how to configure VEX signature verification for air-gapped (offline) deployments where network access to public trust infrastructure (Sigstore, Fulcio, Rekor) is unavailable.

---

## Offline Mode Architecture

```
┌──────────────────────────────────────────────────────────────┐
│                    Air-Gapped Environment                    │
│                                                              │
│  ┌───────────────┐      ┌────────────────────────────────┐   │
│  │ VEX Documents │─────▶│ ProductionVexSignatureVerifier │   │
│  └───────────────┘      └────────────────────────────────┘   │
│                                     │                        │
│                    ┌────────────────┴───────────────┐        │
│                    ▼                                ▼        │
│  ┌─────────────────────────┐        ┌─────────────────────┐  │
│  │ Bundled Trust Anchors   │        │ Bundled Issuer Dir  │  │
│  │ /var/stellaops/trust/   │        │ /var/stellaops/     │  │
│  │  ├── fulcio-root.pem    │        │ bundles/issuers.json│  │
│  │  ├── sigstore-root.pem  │        └─────────────────────┘  │
│  │  └── internal-ca.pem    │                                 │
│  └─────────────────────────┘                                 │
│                                                              │
└──────────────────────────────────────────────────────────────┘
```

---

## Configuration

### 1. Enable Offline Mode

**File:** `etc/excititor.yaml`

```yaml
VexSignatureVerification:
  Enabled: true
  DefaultProfile: "world"
  OfflineMode: true  # Critical: Enable offline verification

  # Offline-specific settings
  OfflineBundle:
    Enabled: true
    BundlePath: "/var/stellaops/bundles"
    RefreshOnStartup: false

  # Trust anchors for signature verification
  TrustAnchors:
    Fulcio:
      - "/var/stellaops/trust/fulcio-root.pem"
      - "/var/stellaops/trust/fulcio-intermediate.pem"
    Sigstore:
      - "/var/stellaops/trust/sigstore-root.pem"
    Internal:
      - "/var/stellaops/trust/internal-ca.pem"
      - "/var/stellaops/trust/internal-intermediate.pem"

  # IssuerDirectory in offline mode
  IssuerDirectory:
    OfflineBundle: "/var/stellaops/bundles/issuers.json"
    FallbackToBundle: true
    # ServiceUrl not needed in offline mode
```

### 2. Directory Structure

```
/var/stellaops/
├── bundles/
│   ├── issuers.json              # Issuer directory bundle
│   ├── revocations.json          # Key revocation data
│   └── tuf-metadata/             # TUF metadata for updates
│       ├── root.json
│       ├── targets.json
│       └── snapshot.json
├── trust/
│   ├── fulcio-root.pem           # Sigstore Fulcio root CA
│   ├── fulcio-intermediate.pem
│   ├── sigstore-root.pem         # Sigstore root
│   ├── rekor-pubkey.pem          # Rekor public key
│   ├── internal-ca.pem           # Internal enterprise CA
│   └── internal-intermediate.pem
└── cache/
    └── verification-cache.db     # Local verification cache
```

---

## Bundle Preparation

### 1. Download Trust Anchors

Run this on a connected machine to prepare the bundle:

```bash
#!/bin/bash
# prepare-offline-bundle.sh

BUNDLE_DIR="./offline-bundle"
mkdir -p "$BUNDLE_DIR/trust" "$BUNDLE_DIR/bundles"

# Download Sigstore trust anchors
echo "Downloading Sigstore trust anchors..."
curl -sSL https://fulcio.sigstore.dev/api/v2/trustBundle \
  -o "$BUNDLE_DIR/trust/fulcio-root.pem"

curl -sSL https://rekor.sigstore.dev/api/v1/log/publicKey \
  -o "$BUNDLE_DIR/trust/rekor-pubkey.pem"

# Download TUF metadata
echo "Downloading TUF metadata..."
cosign initialize --mirror=https://tuf-repo.sigstore.dev \
  --root="$BUNDLE_DIR/bundles/tuf-metadata"

# Export issuer directory
echo "Exporting issuer directory..."
stellaops-cli issuer-directory export \
  --format json \
  --output "$BUNDLE_DIR/bundles/issuers.json"

# Export revocation data
echo "Exporting revocation data..."
stellaops-cli revocations export \
  --format json \
  --output "$BUNDLE_DIR/bundles/revocations.json"

# Create manifest
echo "Creating bundle manifest..."
# Checksum covers every bundled file except the manifest itself; entries are
# sorted so the digest stays deterministic across runs.
BUNDLE_CHECKSUM="$(find "$BUNDLE_DIR" -type f ! -name manifest.json -exec sha256sum {} \; | sort -k 2 | sha256sum | cut -d' ' -f1)"
cat > "$BUNDLE_DIR/manifest.json" <<EOF
{
  "version": "1.0.0",
  "createdAt": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "expiresAt": "$(date -u -d '+90 days' +%Y-%m-%dT%H:%M:%SZ)",
  "contents": {
    "trustAnchors": ["fulcio-root.pem", "rekor-pubkey.pem"],
    "bundles": ["issuers.json", "revocations.json"],
    "tufMetadata": true
  },
  "checksum": "$BUNDLE_CHECKSUM"
}
EOF

# Package bundle
echo "Creating tarball..."
tar -czvf "stellaops-trust-bundle-$(date +%Y%m%d).tar.gz" -C "$BUNDLE_DIR" .

echo "Bundle ready: stellaops-trust-bundle-$(date +%Y%m%d).tar.gz"
```

### 2. Transfer to Air-Gapped Environment

```bash
# On air-gapped machine
sudo mkdir -p /var/stellaops/{trust,bundles,cache}
sudo tar -xzvf stellaops-trust-bundle-20250128.tar.gz -C /var/stellaops/

# Verify bundle integrity
stellaops-cli bundle verify /var/stellaops/manifest.json
```

---

## Issuer Directory Bundle Format

**File:** `/var/stellaops/bundles/issuers.json`

```json
{
  "version": "1.0.0",
  "exportedAt": "2025-01-28T10:30:00Z",
  "issuers": [
    {
      "id": "redhat-security",
      "name": "Red Hat Product Security",
      "description": "Official Red Hat security advisories",
      "jurisdiction": "us",
      "trustLevel": "high",
      "keys": [
        {
          "keyId": "rh-vex-signing-key-2024",
          "algorithm": "ECDSA-P256",
          "publicKey": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0...\n-----END PUBLIC KEY-----",
          "notBefore": "2024-01-01T00:00:00Z",
          "notAfter": "2026-01-01T00:00:00Z",
          "revoked": false
        }
      ],
      "csafPublisher": {
        "providerMetadataUrl": "https://access.redhat.com/.well-known/csaf/provider-metadata.json",
        "tlpWhite": true
      }
    },
    {
      "id": "internal-security",
      "name": "Internal Security Team",
      "description": "Internal VEX attestations",
      "jurisdiction": "internal",
      "trustLevel": "high",
      "keys": [
        {
          "keyId": "internal-vex-key-001",
          "algorithm": "Ed25519",
          "publicKey": "-----BEGIN PUBLIC KEY-----\nMCowBQYDK2VwAyEA...\n-----END PUBLIC KEY-----",
          "notBefore": "2024-06-01T00:00:00Z",
          "notAfter": "2025-06-01T00:00:00Z",
          "revoked": false
        }
      ]
    }
  ],
  "revokedKeys": [
    {
      "keyId": "old-compromised-key",
      "revokedAt": "2024-03-15T00:00:00Z",
      "reason": "key_compromise"
    }
  ]
}
```

---

## Verification Behavior in Offline Mode

### Supported Verification Methods

| Method | Offline Support | Notes |
|--------|-----------------|-------|
| DSSE | Full | Uses bundled keys |
| PGP | Full | Uses bundled keyrings |
| X.509 | Partial | Requires bundled CA chain |
| Cosign (keyed) | Full | Uses bundled public keys |
| Cosign (keyless) | Limited | Requires bundled Fulcio root |
| Rekor Verification | No | Transparency log unavailable |

### Fallback Behavior

```yaml
VexSignatureVerification:
  OfflineFallback:
    # When Rekor is unavailable
    SkipRekorVerification: true
    WarnOnMissingTransparency: true

    # When issuer key not in bundle
    UnknownIssuerAction: "warn"   # warn | block | allow

    # When certificate chain incomplete
    IncompleteChainAction: "warn"
```

### Verification Result Fields

```json
{
  "verified": true,
  "method": "dsse",
  "mode": "offline",
  "warnings": [
    "transparency_log_skipped"
  ],
  "issuerName": "Red Hat Product Security",
  "keyId": "rh-vex-signing-key-2024",
  "bundleVersion": "2025.01.28",
  "bundleAge": "P3D"
}
```
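
The fallback knobs combine with these result fields roughly as sketched below (illustrative Python only; the verifier itself lives in Excititor, and warning codes other than `transparency_log_skipped` are hypothetical placeholders).

```python
def decide(result, options):
    """Map an offline verification result onto allow / warn notes / block.
    `options` mirrors the OfflineFallback settings shown above, e.g.
    {"UnknownIssuerAction": "warn", "IncompleteChainAction": "warn",
     "WarnOnMissingTransparency": True}."""
    if not result.get("verified", False):
        return "block", ["signature verification failed"]

    notes = []
    warnings = result.get("warnings", [])
    if "transparency_log_skipped" in warnings and options.get("WarnOnMissingTransparency", True):
        notes.append("Rekor inclusion not checked (offline mode)")
    if "unknown_issuer" in warnings:  # hypothetical warning code
        action = options.get("UnknownIssuerAction", "warn")
        if action == "block":
            return "block", ["issuer key not present in bundled issuer directory"]
        if action == "warn":
            notes.append("issuer not found in offline bundle")
    if "incomplete_chain" in warnings:  # hypothetical warning code
        if options.get("IncompleteChainAction", "warn") == "block":
            return "block", ["certificate chain incomplete"]
        notes.append("certificate chain incomplete")
    return "allow", notes
```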

---

## Bundle Updates

### Manual Update Process

1. **Export new bundle** on connected machine
2. **Transfer** via secure media (USB, CD)
3. **Verify** bundle signature on air-gapped machine
4. **Deploy** with rollback capability

```bash
# On air-gapped machine
cd /var/stellaops

# Backup current bundle
sudo cp -r bundles bundles.backup-$(date +%Y%m%d)

# Deploy new bundle
sudo mkdir -p /tmp/new-bundle
sudo tar -xzvf new-bundle.tar.gz -C /tmp/new-bundle
sudo stellaops-cli bundle verify /tmp/new-bundle/manifest.json

# Apply with verification
sudo stellaops-cli bundle apply /tmp/new-bundle --verify
sudo systemctl restart stellaops-excititor

# Rollback if needed
# sudo stellaops-cli bundle rollback --to bundles.backup-20250115
```

### Recommended Update Frequency

| Component | Recommended Frequency | Criticality |
|-----------|----------------------|-------------|
| Trust anchors | Quarterly | High |
| Issuer directory | Monthly | Medium |
| Revocation data | Weekly | Critical |
| TUF metadata | Monthly | Medium |

---

## Monitoring and Alerts

### Bundle Expiration Warning

```yaml
# prometheus-alerts.yaml
groups:
  - name: stellaops-verification
    rules:
      - alert: TrustBundleExpiringSoon
        expr: stellaops_trust_bundle_expiry_days < 30
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Trust bundle expires in {{ $value }} days"

      - alert: TrustBundleExpired
        expr: stellaops_trust_bundle_expiry_days <= 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Trust bundle has expired - verification may fail"
```

### Metrics

| Metric | Description |
|--------|-------------|
| `stellaops_trust_bundle_expiry_days` | Days until bundle expiration |
| `stellaops_verification_offline_mode` | 1 if running in offline mode |
| `stellaops_verification_bundle_key_count` | Number of issuer keys in bundle |
| `stellaops_verification_revoked_key_count` | Number of revoked keys |

---

## Troubleshooting

### Common Issues

1. **"Unknown issuer" for known vendor**
   - Update issuer directory bundle
   - Add vendor's keys to bundle

2. **"Expired certificate" for recent VEX**
   - Certificate may have rotated after bundle export
   - Update trust anchors bundle

3. **"Chain validation failed"**
   - Missing intermediate certificate
   - Add intermediate to bundle

4. **Stale revocation data**
   - Key may be compromised but bundle doesn't know
   - Update revocation bundle urgently

---

## See Also

- [VEX Signature Verification Configuration](../operations/vex-verification-config.md)
- [Air-Gap Deployment Guide](../airgap/deployment-guide.md)
- [TUF Repository Management](../operations/tuf-repository.md)
339
docs/modules/airgap/guides/advisory-implementation-roadmap.md
Normal file
339
docs/modules/airgap/guides/advisory-implementation-roadmap.md
Normal file
@@ -0,0 +1,339 @@

# Offline and Air-Gap Advisory Implementation Roadmap

**Source Advisory:** 14-Dec-2025 - Offline and Air-Gap Technical Reference
**Document Version:** 1.0
**Last Updated:** 2025-12-15

---

## Executive Summary

This document outlines the implementation roadmap for gaps identified between the 14-Dec-2025 Offline and Air-Gap Technical Reference advisory and the current StellaOps codebase. The implementation is organized into 5 sprints addressing security-critical, high-priority, and enhancement-level improvements.

---

## Implementation Overview

### Sprint Summary

| Sprint | Topic | Priority | Gaps | Effort | Dependencies |
|--------|-------|----------|------|--------|--------------|
| [0338](../implplan/SPRINT_0338_0001_0001_airgap_importer_core.md) | AirGap Importer Core | P0 | G6, G7 | Medium | None |
| [0339](../implplan/SPRINT_0339_0001_0001_cli_offline_commands.md) | CLI Offline Commands | P1 | G4 | Medium | 0338 |
| [0340](../implplan/SPRINT_0340_0001_0001_scanner_offline_config.md) | Scanner Offline Config | P2 | G5 | Medium | 0338 |
| [0341](../implplan/SPRINT_0341_0001_0001_observability_audit.md) | Observability & Audit | P1-P2 | G11-G14 | Medium | 0338 |
| [0342](../implplan/SPRINT_0342_0001_0001_evidence_reconciliation.md) | Evidence Reconciliation | P3 | G10 | High | 0338, 0340 |

### Dependency Graph

```
┌─────────────────────────────────────────────┐
│                                             │
│  Sprint 0338: AirGap Importer Core (P0)     │
│  - Monotonicity enforcement (G6)            │
│  - Quarantine handling (G7)                 │
│                                             │
└──────────────────┬──────────────────────────┘
                   │
     ┌─────────────┼─────────────────────┐
     │             │                     │
     ▼             ▼                     ▼
┌────────────────┐ ┌────────────────┐ ┌────────────────┐
│  Sprint 0339   │ │  Sprint 0340   │ │  Sprint 0341   │
│  CLI Commands  │ │ Scanner Config │ │ Observability  │
│      (P1)      │ │      (P2)      │ │    (P1-P2)     │
│      - G4      │ │      - G5      │ │   - G11-G14    │
└────────────────┘ └───────┬────────┘ └────────────────┘
                           │
                           ▼
                   ┌────────────────┐
                   │  Sprint 0342   │
                   │ Evidence Recon │
                   │      (P3)      │
                   │      - G10     │
                   └────────────────┘
```

---

## Gap-to-Sprint Mapping

### P0 - Critical (Must Implement First)

| Gap ID | Description | Sprint | Rationale |
|--------|-------------|--------|-----------|
| **G6** | Monotonicity enforcement | 0338 | Rollback prevention is security-critical; prevents replay attacks |
| **G7** | Quarantine directory handling | 0338 | Essential for forensic analysis of failed imports |

### P1 - High Priority

| Gap ID | Description | Sprint | Rationale |
|--------|-------------|--------|-----------|
| **G4** | CLI `offline` command group | 0339 | Primary operator interface; competitive parity |
| **G11** | Prometheus metrics | 0341 | Operational visibility in air-gap environments |
| **G13** | Error reason codes | 0341 | Automation and troubleshooting |

### P2 - Important

| Gap ID | Description | Sprint | Rationale |
|--------|-------------|--------|-----------|
| **G5** | Scanner offline config surface | 0340 | Enterprise trust anchor management |
| **G12** | Structured logging fields | 0341 | Log aggregation and correlation |
| **G14** | Audit schema enhancement | 0341 | Compliance and chain-of-custody |

### P3 - Lower Priority

| Gap ID | Description | Sprint | Rationale |
|--------|-------------|--------|-----------|
| **G10** | Evidence reconciliation algorithm | 0342 | Complex but valuable; VEX-first decisioning |

### Deferred (Not Implementing)

| Gap ID | Description | Rationale |
|--------|-------------|-----------|
| **G9** | YAML verification policy schema | Over-engineering; existing JSON/code config sufficient |

---

## Technical Architecture

### New Components

```
src/AirGap/
├── StellaOps.AirGap.Importer/
│   ├── Versioning/
│   │   ├── BundleVersion.cs                 # Sprint 0338
│   │   ├── IVersionMonotonicityChecker.cs   # Sprint 0338
│   │   └── IBundleVersionStore.cs           # Sprint 0338
│   ├── Quarantine/
│   │   ├── IQuarantineService.cs            # Sprint 0338
│   │   ├── FileSystemQuarantineService.cs   # Sprint 0338
│   │   └── QuarantineOptions.cs             # Sprint 0338
│   ├── Telemetry/
│   │   ├── OfflineKitMetrics.cs             # Sprint 0341
│   │   ├── OfflineKitLogFields.cs           # Sprint 0341
│   │   └── OfflineKitLogScopes.cs           # Sprint 0341
│   ├── Reconciliation/
│   │   ├── ArtifactIndex.cs                 # Sprint 0342
│   │   ├── EvidenceCollector.cs             # Sprint 0342
│   │   ├── DocumentNormalizer.cs            # Sprint 0342
│   │   ├── PrecedenceLattice.cs             # Sprint 0342
│   │   └── EvidenceGraphEmitter.cs          # Sprint 0342

src/Scanner/
├── __Libraries/StellaOps.Scanner.Core/
│   ├── Configuration/
│   │   ├── OfflineKitOptions.cs             # Sprint 0340
│   │   ├── TrustAnchorConfig.cs             # Sprint 0340
│   │   └── OfflineKitOptionsValidator.cs    # Sprint 0340
│   └── TrustAnchors/
│       ├── PurlPatternMatcher.cs            # Sprint 0340
│       ├── ITrustAnchorRegistry.cs          # Sprint 0340
│       └── TrustAnchorRegistry.cs           # Sprint 0340

src/Cli/
├── StellaOps.Cli/
│   ├── Commands/
│   │   ├── Offline/
│   │   │   ├── OfflineCommandGroup.cs       # Sprint 0339
│   │   │   ├── OfflineImportHandler.cs      # Sprint 0339
│   │   │   ├── OfflineStatusHandler.cs      # Sprint 0339
│   │   │   └── OfflineExitCodes.cs          # Sprint 0339
│   │   └── Verify/
│   │       └── VerifyOfflineHandler.cs      # Sprint 0339
│   └── Output/
│       └── OfflineKitReasonCodes.cs         # Sprint 0341

src/Authority/
├── __Libraries/StellaOps.Authority.Storage.Postgres/
│   └── Migrations/
│       └── 004_offline_kit_audit.sql        # Sprint 0341
```

### Database Changes

| Table | Schema | Sprint | Purpose |
|-------|--------|--------|---------|
| `airgap.bundle_versions` | New | 0338 | Track active bundle versions per tenant/type |
| `airgap.bundle_version_history` | New | 0338 | Version history for audit trail |
| `authority.offline_kit_audit` | New | 0341 | Enhanced audit with Rekor/DSSE fields |

### Configuration Changes

| Section | Sprint | Fields |
|---------|--------|--------|
| `AirGap:Quarantine` | 0338 | `QuarantineRoot`, `RetentionPeriod`, `MaxQuarantineSizeBytes` |
| `Scanner:OfflineKit` | 0340 | `RequireDsse`, `RekorOfflineMode`, `TrustAnchors[]` |

### CLI Commands

| Command | Sprint | Description |
|---------|--------|-------------|
| `stellaops offline import` | 0339 | Import offline kit with verification |
| `stellaops offline status` | 0339 | Display current kit status |
| `stellaops verify offline` | 0339 | Offline evidence verification |

### Metrics

| Metric | Type | Sprint | Labels |
|--------|------|--------|--------|
| `offlinekit_import_total` | Counter | 0341 | `status`, `tenant_id` |
| `offlinekit_attestation_verify_latency_seconds` | Histogram | 0341 | `attestation_type`, `success` |
| `attestor_rekor_success_total` | Counter | 0341 | `mode` |
| `attestor_rekor_retry_total` | Counter | 0341 | `reason` |
| `rekor_inclusion_latency` | Histogram | 0341 | `success` |

---

## Implementation Sequence

### Phase 1: Foundation (Sprint 0338)
**Duration:** 1 sprint
**Focus:** Security-critical infrastructure

1. Implement `BundleVersion` model with semver parsing
2. Create `IVersionMonotonicityChecker` and Postgres store (see the sketch below)
3. Integrate monotonicity check into `ImportValidator`
4. Implement `--force-activate` with audit trail
5. Create `IQuarantineService` and file-system implementation
6. Integrate quarantine into all import failure paths
7. Write comprehensive tests

**Exit Criteria:**
- [ ] Rollback attacks are prevented
- [ ] Failed bundles are preserved for investigation
- [ ] Force activation requires justification
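
A minimal sketch of the monotonicity rule from steps 2–4 (illustrative Python; the shipped `IVersionMonotonicityChecker` is C# backed by Postgres, and version strings are assumed here to be plain `major.minor.patch`):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BundleVersion:
    major: int
    minor: int
    patch: int

    @classmethod
    def parse(cls, text):
        major, minor, patch = (int(part) for part in text.split("."))
        return cls(major, minor, patch)

    def __lt__(self, other):
        return (self.major, self.minor, self.patch) < (other.major, other.minor, other.patch)

def check_monotonicity(active, candidate, force_activate=False):
    """Reject rollback to an older (or identical) bundle unless the operator
    explicitly forces activation, which must leave an audit-trail entry."""
    if active is None or active < candidate:
        return "accept"
    if force_activate:
        return "accept-forced"  # caller records the justification in the audit log
    raise ValueError(
        f"rollback rejected: candidate {candidate} is not newer than active {active}"
    )
```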

### Phase 2: Operator Experience (Sprints 0339, 0341)
**Duration:** 1-2 sprints (can parallelize)
**Focus:** CLI and observability

**Sprint 0339 (CLI):**
1. Create `offline` command group
2. Implement `offline import` with all flags
3. Implement `offline status` with output formats
4. Implement `verify offline` with policy loading
5. Add exit code standardization
6. Write CLI integration tests

**Sprint 0341 (Observability):**
1. Add Prometheus metrics infrastructure
2. Implement offline kit metrics
3. Standardize structured logging fields
4. Complete error reason codes
5. Create audit schema migration
6. Implement audit repository and emitter
7. Create Grafana dashboard

> Blockers: Prometheus `/metrics` endpoint hosting and audit emitter call-sites await an owning Offline Kit import/activation flow (`POST /api/offline-kit/import`).

**Exit Criteria:**
- [ ] Operators can import/verify kits via CLI
- [ ] Metrics are visible in Prometheus/Grafana
- [ ] All operations are auditable

### Phase 3: Configuration (Sprint 0340)
**Duration:** 1 sprint
**Focus:** Trust anchor management

1. Create `OfflineKitOptions` configuration class
2. Implement PURL pattern matcher (see the sketch below)
3. Create `TrustAnchorRegistry` with precedence resolution
4. Add options validation
5. Integrate trust anchors with DSSE verification
6. Update Helm chart values
7. Write configuration tests

**Exit Criteria:**
- [ ] Trust anchors configurable per ecosystem
- [ ] DSSE verification uses configured anchors
- [ ] Invalid configuration fails startup
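
A hedged sketch of the PURL pattern matching in step 2, assuming glob-style patterns with longest-match precedence — the real precedence resolution belongs to `TrustAnchorRegistry`, and the anchor names below are hypothetical:

```python
import fnmatch

def resolve_trust_anchor(purl, anchors):
    """anchors: mapping of glob pattern -> anchor config.
    Return the config whose pattern matches the PURL; when several match,
    prefer the longest (most specific) pattern so resolution is deterministic."""
    matches = [
        (pattern, config)
        for pattern, config in anchors.items()
        if fnmatch.fnmatchcase(purl, pattern)
    ]
    if not matches:
        return None
    matches.sort(key=lambda item: (len(item[0]), item[0]), reverse=True)
    return matches[0][1]

# Example (hypothetical anchors):
# anchors = {"pkg:deb/*": {"key": "debian-root"}, "pkg:deb/openssl@*": {"key": "openssl-vendor"}}
# resolve_trust_anchor("pkg:deb/openssl@1.1.1w", anchors)  # -> {"key": "openssl-vendor"}
```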

### Phase 4: Advanced Features (Sprint 0342)
**Duration:** 1-2 sprints
**Focus:** Evidence reconciliation

1. Design artifact indexing
2. Implement evidence collection
3. Create document normalization
4. Implement VEX precedence lattice (see the sketch below)
5. Create evidence graph emitter
6. Integrate with CLI `verify offline`
7. Write golden-file determinism tests

**Exit Criteria:**
- [ ] Evidence reconciliation is deterministic
- [ ] VEX conflicts resolved by precedence
- [ ] Graph output is signed and verifiable
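
A minimal sketch of the precedence resolution in step 4. The ranking (vendor over distro over internal over community, with the newest statement breaking ties) is an illustrative assumption, not the final Sprint 0342 lattice:

```python
# Higher rank wins; ties broken by the newest statement timestamp (assumed ordering).
SOURCE_RANK = {"vendor": 3, "distro": 2, "internal": 1, "community": 0}

def resolve_vex_status(statements):
    """statements: iterable of dicts with 'source_tier', 'timestamp' (ISO-8601 UTC),
    and 'status'. Deterministically pick one winning statement."""
    ranked = sorted(
        statements,
        key=lambda s: (SOURCE_RANK.get(s["source_tier"], -1), s["timestamp"]),
        reverse=True,
    )
    if not ranked:
        raise ValueError("no VEX statements to reconcile")
    winner = ranked[0]
    return winner["status"], winner

# Example:
# resolve_vex_status([
#     {"source_tier": "community", "timestamp": "2025-11-01T00:00:00Z", "status": "affected"},
#     {"source_tier": "vendor", "timestamp": "2025-10-20T00:00:00Z", "status": "not_affected"},
# ])  # -> ("not_affected", <vendor statement>)
```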

---

## Testing Strategy

### Unit Tests
- All new classes have corresponding test classes
- Mock dependencies for isolation
- Property-based tests for lattice operations

### Integration Tests
- Testcontainers for PostgreSQL
- Full import → verification → audit flow
- CLI command execution tests

### Determinism Tests
- Golden-file tests for evidence reconciliation
- Cross-platform validation (Windows, Linux, macOS)
- Reproducibility across runs

### Security Tests
- Monotonicity bypass attempts
- Signature verification edge cases
- Trust anchor configuration validation

---

## Documentation Updates

| Document | Sprint | Updates |
|----------|--------|---------|
| `docs/airgap/importer.md` | 0338 | Monotonicity + quarantine reference |
| `docs/airgap/runbooks/quarantine-investigation.md` | 0338 | New runbook |
| `docs/modules/cli/commands/offline.md` | 0339 | New command reference |
| `docs/modules/cli/guides/airgap.md` | 0339 | Update with CLI examples |
| `docs/modules/scanner/configuration.md` | 0340 | Add offline kit config section |
| `docs/airgap/observability.md` | 0341 | Metrics and logging reference |
| `docs/airgap/evidence-reconciliation.md` | 0342 | Algorithm documentation |

---

## Risk Register

| Risk | Impact | Mitigation |
|------|--------|------------|
| Monotonicity breaks existing workflows | High | Provide `--force-activate` escape hatch |
| Quarantine disk exhaustion | Medium | Implement quota and TTL cleanup |
| Trust anchor config complexity | Medium | Provide sensible defaults, validate at startup |
| Evidence reconciliation performance | Medium | Streaming processing, caching |
| Cross-platform determinism failures | High | CI matrix, golden-file tests |

---

## Success Metrics

| Metric | Target | Sprint |
|--------|--------|--------|
| Rollback attack prevention | 100% | 0338 |
| Failed bundle quarantine rate | 100% | 0338 |
| CLI command adoption | 50% operators | 0339 |
| Metric collection uptime | 99.9% | 0341 |
| Audit completeness | 100% events | 0341 |
| Reconciliation determinism | 100% | 0342 |

---

## References

- [14-Dec-2025 Offline and Air-Gap Technical Reference](../product-advisories/14-Dec-2025%20-%20Offline%20and%20Air-Gap%20Technical%20Reference.md)
- [Air-Gap Mode Playbook](./airgap-mode.md)
- [Offline Kit Documentation](../OFFLINE_KIT.md)
- [Importer](./importer.md)
77
docs/modules/airgap/guides/airgap-mode.md
Normal file
77
docs/modules/airgap/guides/airgap-mode.md
Normal file
@@ -0,0 +1,77 @@

# Air-Gapped Mode Playbook

> Work of this type or tasks of this type on this component must also be applied everywhere else it should be applied.

## Overview

Air-Gapped Mode is the supported operating profile for deployments with **zero external egress**. All inputs arrive via signed mirror bundles, and every surface (CLI, Console, APIs, schedulers, scanners) operates under sealed-network constraints while preserving Aggregation-Only Contract invariants.

- **Primary components:** Web Services API, Console, CLI, Orchestrator, Task Runner, Concelier (formerly Feedser), Excititor (formerly Vexer), Policy Engine, Findings Ledger, Export Center, Authority & Tenancy, Notifications, Observability & Forensics.
- **Surfaces:** offline bootstrap, mirror ingestion, deterministic jobs, offline advisories/VEX/policy packs/notifications, evidence exports.
- **Dependencies:** Export Center, Containerized Distribution, Authority-backed scopes & tenancy, Observability & Forensics, Policy Studio.

## Guiding principles

1. **Zero egress:** all outbound network calls are disabled unless explicitly allowed. Any feature requiring online data must degrade gracefully with clear UX messaging.
2. **Deterministic inputs:** the platform accepts only signed Mirror Bundles (advisories, VEX, policy packs, vendor feeds, images, dashboards). Bundles carry provenance attestations and chain-of-custody manifests.
3. **Auditable exchange:** every import/export records provenance, signatures, and operator identity. Evidence bundles and reports remain verifiable offline.
4. **Aggregation-Only Contract compliance:** Concelier and Excititor continue to aggregate without mutating source records, even when ingesting mirrored feeds.
5. **Operator ergonomics:** offline bootstrap, upgrade, and verification steps are reproducible and scripted.

## Lifecycle & modes

| Mode | Description | Tooling |
| --- | --- | --- |
| Connected | Standard deployment with online feeds. Operators use Export Center to build mirror bundles for offline environments. | `stella export bundle create --profile mirror:full` |
| Staging mirror | Sealed host that fetches upstream feeds, runs validation, and signs mirror bundles. | Export Center, cosign, bundle validation scripts |
| Air-gapped | Production cluster with egress sealed, consuming validated bundles, issuing provenance for inward/outward transfers. | Mirror import CLI, sealed-mode runtime flags |

### Installation & bootstrap

1. Prepare mirror bundles (images, charts, advisories/VEX, policy packs, dashboards, telemetry configs).
2. Transfer bundles via approved media and validate signatures (`cosign verify`, bundle manifest hash).
3. Deploy the platform using offline artefacts (`helm install --set airgap.enabled=true`), referencing local registry/object storage.

### Updates

1. Staging host generates incremental bundles (mirror delta) with provenance.
2. Offline site imports bundles via the CLI (`stella airgap import --bundle`) and records chain-of-custody.
3. Scheduler triggers replay jobs with deterministic timelines; results remain reproducible across imports.

## Component responsibilities

| Component | Offline duties |
| --- | --- |
| Export Center | Produce full/delta mirror bundles, signed manifests, provenance attestations. |
| Authority & Tenancy | Provide offline scope enforcement, short-lived tokens, revocation via local CRLs. |
| Concelier / Excititor | Ingest mirrored advisories/VEX, enforce AOC, versioned observations. |
| Policy Engine & Findings Ledger | Replay evaluations using offline feeds, emit explain traces, support sealed-mode hints. |
| Notifications | Deliver locally via approved channels (email relay, webhook proxies) or queue for manual export. |
| Observability | Collect metrics/logs/traces locally, generate forensic bundles for external analysis. |

## Operational guardrails

- **Network policy:** enforce allowlists (`airgap.egressAllowlist=[]`). Any unexpected outbound request raises an alert.
- **Bundle validation:** double-sign manifests (bundle signer + site-specific cosign key); reject on mismatch.
- **Time synchronization:** rely on local NTP or manual clock audits; signature and staleness checks depend on trustworthy, monotonic time.
- **Key rotation:** plan for offline key ceremonies; Export Center and Authority document rotation playbooks.
- **Authority scopes:** enforce `airgap:status:read`, `airgap:import`, and `airgap:seal` via tenant-scoped roles; require operator reason/ticket metadata for sealing.
- **AirGap controller API:** requires tenant identity (`x-tenant-id` header or tenant claim) plus the matching scope; requests without tenant context are rejected.
- **Incident response:** maintain scripts for replaying imports, regenerating manifests, and exporting forensic data without egress.
- **EgressPolicy facade:** all services route outbound calls through `StellaOps.AirGap.Policy`. In sealed mode `EgressPolicy` enforces the `airgap.egressAllowlist`, auto-permits loopback targets, and raises `AIRGAP_EGRESS_BLOCKED` exceptions with remediation text (add the host to the allowlist or coordinate break-glass). Unsealed mode logs intents but does not block, giving operators a single toggle for rehearsals. Task Runner now feeds every `run.egress` declaration and runtime network hint into the shared policy during planning, preventing sealed-mode packs from executing unless destinations are declared and allow-listed. A usage sketch follows this list.
- **CLI guard:** the CLI now routes outbound HTTP through the shared egress policy. When sealed, commands that would dial external endpoints (for example, `scanner download` or remote `sources ingest` URIs) are refused with `AIRGAP_EGRESS_BLOCKED` messaging and remediation guidance instead of attempting the network call.
- **Observability exporters:** `StellaOps.Telemetry.Core` now binds OTLP exporters to the configured egress policy. When sealed, any collector endpoint that is not loopback or allow-listed is skipped at startup and a structured warning is written so operators see the remediation guidance without leaving sealed mode.
- **Linting/CI:** enable the `StellaOps.AirGap.Policy.Analyzers` package in solution-level analyzers so CI fails on raw `HttpClient` usage. The analyzer emits `AIRGAP001` and the bundled code fix rewrites to `EgressHttpClientFactory.Create(...)`; treat analyzer warnings as errors in sealed-mode pipelines.
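A minimal sketch of how a service might consult the shared egress policy before dialing out. The `IEgressPolicy` shape below is an assumption; only the `StellaOps.AirGap.Policy` package name, the allowlist behaviour, and the `AIRGAP_EGRESS_BLOCKED` code come from this guide.

```csharp
// Illustrative only: the IEgressPolicy interface is assumed; the real
// StellaOps.AirGap.Policy API may differ in shape and naming.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public interface IEgressPolicy
{
    // Expected semantics: true when the destination is loopback or on
    // airgap.egressAllowlist (sealed mode); unsealed mode logs and allows.
    bool IsAllowed(Uri destination, out string reason);
}

public sealed class GuardedFetcher
{
    private readonly IEgressPolicy _policy;
    private readonly HttpClient _client;

    public GuardedFetcher(IEgressPolicy policy, HttpClient client)
        => (_policy, _client) = (policy, client);

    public async Task<string> FetchAsync(Uri destination)
    {
        if (!_policy.IsAllowed(destination, out var reason))
        {
            // Fail closed with remediation text instead of attempting the call.
            throw new InvalidOperationException(
                $"AIRGAP_EGRESS_BLOCKED: {destination.Host} is not on the egress allowlist ({reason}). " +
                "Add the host to airgap.egressAllowlist or coordinate break-glass.");
        }

        return await _client.GetStringAsync(destination);
    }
}
```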

## Testing & verification

- Integration tests mimic offline installs by running with `AIRGAP_ENABLED=true` in CI.
- Mirror bundles include validation scripts to compare hash manifests across staging and production.
- Sealed-mode smoke tests ensure services fail closed when attempting egress.

## References

- Export workflows: `docs/modules/export-center/overview.md`
- Policy sealed-mode hints: `docs/modules/policy/guides/overview.md`
- Observability forensic bundles: `docs/modules/telemetry/architecture.md`
- Runtime posture enforcement: `docs/modules/zastava/operations/runtime.md`
33
docs/modules/airgap/guides/bootstrap.md
Normal file
33
docs/modules/airgap/guides/bootstrap.md
Normal file
@@ -0,0 +1,33 @@

# Bootstrap Pack (Airgap 56-004)

Guidance to build and install the bootstrap pack that primes sealed environments.

## Contents
- Core images/charts for platform services (Authority, Excititor, Concelier, Export Center, Scheduler) with digests.
- Offline NuGet/npm caches (if permitted) with checksum manifest.
- Configuration defaults: sealed-mode toggles, trust roots, time-anchor bundle, network policy presets.
- Verification scripts: hash check, DSSE verification (if available), and connectivity probes to local mirrors.

## Build steps
1. Gather image digests and charts from trusted registry/mirror.
2. Create `bootstrap-manifest.json` (a minimal builder sketch follows this list) with:
   - `bundleId`, `createdAt` (UTC), `producer`, `mirrorGeneration`
   - `files[]` (path, sha256, size, mediaType)
   - optional `dsseEnvelopeHash`
3. Package into tarball with deterministic ordering (POSIX tar, sorted paths, numeric owner 0:0).
4. Compute sha256 for the tarball; record it alongside the bundle (the tarball hash cannot live inside the tarball itself).
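A minimal manifest-building sketch under the field names from step 2. The record types, the flat `mediaType` default, and the JSON shape are assumptions for illustration, not a shipped StellaOps API.

```csharp
// Hypothetical manifest builder: field names follow step 2 above; everything
// else (types, serializer settings) is illustrative.
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text.Json;

public sealed record BootstrapFile(string Path, string Sha256, long Size, string MediaType);

public sealed record BootstrapManifest(
    string BundleId, DateTimeOffset CreatedAt, string Producer, int MirrorGeneration,
    BootstrapFile[] Files, string? DsseEnvelopeHash);

public static class BootstrapManifestBuilder
{
    public static BootstrapManifest Build(string root, string bundleId, string producer, int mirrorGeneration)
    {
        var files = Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories)
            .Select(p => Path.GetRelativePath(root, p).Replace('\\', '/'))
            .OrderBy(p => p, StringComparer.Ordinal)        // deterministic, locale-free ordering
            .Select(rel => new BootstrapFile(
                rel,
                // Hashing whole files in memory keeps the sketch short; stream for large artefacts.
                Convert.ToHexString(SHA256.HashData(File.ReadAllBytes(Path.Combine(root, rel)))).ToLowerInvariant(),
                new FileInfo(Path.Combine(root, rel)).Length,
                "application/octet-stream"))                // mediaType resolution is out of scope here
            .ToArray();

        return new BootstrapManifest(bundleId, DateTimeOffset.UtcNow, producer, mirrorGeneration, files, null);
    }

    public static string ToJson(BootstrapManifest manifest) =>
        JsonSerializer.Serialize(manifest, new JsonSerializerOptions { WriteIndented = true });
}
```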

## Install steps
1. Transfer pack to sealed site (removable media).
2. Verify tarball hash and DSSE (if present) using offline trust roots.
3. Load images/charts into local registry; preload caches to `local-nugets/` etc.
4. Apply network policies (deny-all) and sealed-mode config.
5. Register bootstrap manifest and mirrorGeneration with Excititor/Export Center.

## Determinism & rollback
- Keep manifests in ISO-8601 UTC; no host-specific metadata in tar headers.
- For rollback, retain previous bootstrap tarball + manifest; restore registry contents and config snapshots.

## Related
- `docs/modules/airgap/guides/mirror-bundles.md` — mirror pack format and validation.
- `docs/modules/airgap/guides/sealing-and-egress.md` — egress enforcement used during install.
38
docs/modules/airgap/guides/bundle-repositories.md
Normal file
38
docs/modules/airgap/guides/bundle-repositories.md
Normal file
@@ -0,0 +1,38 @@

# Bundle Catalog & Items Repositories (prep for AIRGAP-IMP-57-001)

## Scope
- Deterministic storage for offline bundle metadata with tenant isolation (RLS) and stable ordering.
- Ready for PostgreSQL-backed implementation while providing in-memory deterministic reference behavior.

## Schema (logical)
- `bundle_catalog`:
  - `tenant_id` (string, PK part, RLS partition)
  - `bundle_id` (string, PK part)
  - `digest` (hex string)
  - `imported_at_utc` (datetime)
  - `content_paths` (array of strings, sorted ordinal)
- `bundle_items`:
  - `tenant_id` (string, PK part, RLS partition)
  - `bundle_id` (string, PK part)
  - `path` (string, PK part)
  - `digest` (hex string)
  - `size_bytes` (long)

## Implementation delivered (2025-11-20)
- In-memory repositories enforcing tenant isolation and deterministic ordering:
  - `InMemoryBundleCatalogRepository` (upsert + list ordered by `bundle_id`).
  - `InMemoryBundleItemRepository` (bulk upsert + list ordered by `path`).
- Models: `BundleCatalogEntry`, `BundleItem`.
- Tests cover upsert overwrite semantics, tenant isolation, and deterministic ordering (`tests/AirGap/StellaOps.AirGap.Importer.Tests/InMemoryBundleRepositoriesTests.cs`).

## Migration notes (for PostgreSQL backends)
- Create compound unique indexes on (`tenant_id`, `bundle_id`) for catalog; (`tenant_id`, `bundle_id`, `path`) for items.
- Enforce RLS by always scoping queries to `tenant_id` and validating it at the repository boundary (as done in the in-memory reference implementation).
- Keep paths lowercased or use ordinal comparisons to avoid locale drift; sort before persistence to preserve determinism (see the ordering sketch after this list).
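A simplified illustration of the tenancy and ordering rules above. The shipped `InMemoryBundleCatalogRepository` lives in the repo and may differ in shape; the record and class below are a sketch.

```csharp
// Sketch only: demonstrates ordinal sorting + tenant scoping, not the real repository.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public sealed record BundleCatalogEntry(
    string TenantId, string BundleId, string Digest, DateTime ImportedAtUtc, IReadOnlyList<string> ContentPaths);

public sealed class InMemoryBundleCatalog
{
    // Keyed by (tenant, bundle) so upsert overwrites and tenants stay isolated.
    private readonly ConcurrentDictionary<(string Tenant, string Bundle), BundleCatalogEntry> _entries = new();

    public void Upsert(BundleCatalogEntry entry)
    {
        var normalized = entry with
        {
            // Sort content paths ordinally before persisting, per the migration notes above.
            ContentPaths = entry.ContentPaths.OrderBy(p => p, StringComparer.Ordinal).ToArray()
        };
        _entries[(entry.TenantId, entry.BundleId)] = normalized;
    }

    public IReadOnlyList<BundleCatalogEntry> List(string tenantId) =>
        _entries.Values
            .Where(e => string.Equals(e.TenantId, tenantId, StringComparison.Ordinal)) // tenant isolation
            .OrderBy(e => e.BundleId, StringComparer.Ordinal)                           // stable listing order
            .ToList();
}
```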

## Next steps
- Implement PostgreSQL-backed repositories mirroring the deterministic behavior and indexes above.
- Wire repositories into importer service/CLI once storage provider is selected.

## Owners
- AirGap Importer Guild.
6
docs/modules/airgap/guides/console-airgap-tasks.md
Normal file
6
docs/modules/airgap/guides/console-airgap-tasks.md
Normal file
@@ -0,0 +1,6 @@

# Console Airgap Implementation Tasks (link to DOCS-AIRGAP-57-002)

- Implement sealed badge + staleness indicators using `staleness-and-time.md` rules.
- Hook import wizard to backend once mirror bundle schema and timeline event API are available.
- Ensure admin-only import; read-only view otherwise.
- Emit telemetry for imports (success/failure) and denied attempts.
100
docs/modules/airgap/guides/controller.md
Normal file
100
docs/modules/airgap/guides/controller.md
Normal file
@@ -0,0 +1,100 @@

# AirGap Controller

The AirGap Controller is the tenant-scoped state keeper for sealed-mode operation. It records whether an installation is sealed, what policy hash is active, which time anchor is in force, and what staleness budgets apply.

For workflow context, start at `docs/modules/airgap/guides/overview.md` and `docs/modules/airgap/guides/airgap-mode.md`.

## Responsibilities

- Maintain the current AirGap state per tenant (sealed/unsealed, policy hash, time anchor, staleness budgets).
- Provide a deterministic, auditable status snapshot for operators and automation.
- Enforce sealed/unsealed transitions via Authority scopes.
- Emit telemetry signals suitable for dashboards and forensics timelines.

Non-goals:

- Bundle signature validation and import staging (owned by the importer; see `docs/modules/airgap/guides/importer.md`).
- Cryptographic signing (Signer/Attestor).

## API

Base route group: `/system/airgap` (requires authorization).

### `GET /system/airgap/status`

Required scope: `airgap:status:read`

Response: `AirGapStatusResponse` (current state + staleness evaluation).

Notes:

- Tenant routing uses `x-tenant-id` (defaults to `default` if absent).
- `driftSeconds` and `secondsRemaining` are derived from the active time anchor and staleness budget evaluation.
- `contentStaleness` contains per-category staleness evaluations (clients should treat keys as case-insensitive).

### `POST /system/airgap/seal`

Required scope: `airgap:seal`

Body: `SealRequest`

- `policyHash` (required): binds the sealed state to a specific policy revision.
- `timeAnchor` (optional): time anchor record (from the AirGap Time service).
- `stalenessBudget` (optional): default staleness budget.
- `contentBudgets` (optional): per-category staleness budgets (e.g., `advisories`, `vex`, `scanner`).

Behavior:

- Rejects requests missing `policyHash` (`400 { "error": "policy_hash_required" }`).
- Records the sealed state and returns an updated status snapshot. A request sketch follows.
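A hedged client-side sketch of sealing a tenant. The route, scope, `x-tenant-id` header, and required `policyHash` come from this document; the JSON property casing and token acquisition are assumptions.

```csharp
// Illustrative seal call; property casing and auth wiring are assumptions.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

public static class AirGapSealClient
{
    public static async Task SealAsync(HttpClient client, string tenantId, string tokenWithSealScope, string policyHash)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, "/system/airgap/seal")
        {
            // SealRequest: policyHash is required; timeAnchor/stalenessBudget/contentBudgets are optional.
            Content = JsonContent.Create(new { policyHash })
        };
        request.Headers.Add("x-tenant-id", tenantId);                          // tenant routing
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", tokenWithSealScope);       // must carry airgap:seal

        using var response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();                                    // 400 policy_hash_required when the hash is missing
        Console.WriteLine(await response.Content.ReadAsStringAsync());        // updated AirGapStatusResponse snapshot
    }
}
```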

### `POST /system/airgap/unseal`

Required scope: `airgap:seal`

Behavior:

- Clears the sealed state and returns an updated status snapshot.
- Staleness is returned as `Unknown` after unseal (clients should treat this as "not applicable").

### `POST /system/airgap/verify`

Required scope: `airgap:verify`

Purpose: verify replay / bundle verification requests against the currently active AirGap state.

## State model (per tenant)

Canonical fields captured by the controller (see `src/AirGap/StellaOps.AirGap.Controller`):

- `tenantId`
- `sealed`
- `policyHash` (nullable)
- `timeAnchor` (`TimeAnchor`, may be `Unknown`)
- `stalenessBudget` (`StalenessBudget`)
- `contentBudgets` (`Dictionary<string, StalenessBudget>`)
- `driftBaselineSeconds` (baseline used to keep drift evaluation stable across transitions)
- `lastTransitionAt` (UTC)

Determinism requirements:

- Use UTC timestamps only.
- Use ordinal comparisons for keys and stable serialization settings for JSON responses.
- Never infer state from wall-clock behavior other than the injected `TimeProvider`.

## Telemetry

The controller emits:

- Structured logs: `airgap.status.read`, `airgap.sealed`, `airgap.unsealed`, `airgap.verify` (include `tenant_id`, `policy_hash`, and drift/staleness).
- Metrics: `airgap_seal_total`, `airgap_unseal_total`, `airgap_status_read_total`, and gauges for drift/budget/remaining seconds.
- Timeline events (optional): `airgap.sealed`, `airgap.unsealed`, `airgap.staleness.warning`, `airgap.staleness.breach`.

## References

- `docs/modules/airgap/guides/overview.md`
- `docs/modules/airgap/guides/sealed-startup-diagnostics.md`
- `docs/modules/airgap/guides/staleness-and-time.md`
- `docs/modules/airgap/guides/time-api.md`
- `docs/modules/airgap/guides/importer.md`
19
docs/modules/airgap/guides/degradation-matrix.md
Normal file
19
docs/modules/airgap/guides/degradation-matrix.md
Normal file
@@ -0,0 +1,19 @@

# Airgap Degradation Matrix (DOCS-AIRGAP-58-001)

What works and what degrades across modes (connected → constrained → sealed).

| Capability | Connected | Constrained | Sealed | Notes |
| --- | --- | --- | --- | --- |
| Mirror imports | ✓ | ✓ | ✓ | Sealed requires preloaded media + offline validation. |
| Time anchors (external NTP) | ✓ | ✓ (allowlisted) | ✗ | Sealed relies on signed time anchors. |
| Transparency log lookups | ✓ | ✓ (if allowlisted) | ✗ | Sealed skips; rely on bundled checkpoints. |
| Rekor witness | ✓ | optional | ✗ | Disabled in sealed; log locally. |
| SBOM feed refresh | ✓ | limited mirrors | offline only | Use mirror bundles. |
| CLI plugin downloads | ✓ | allowlisted | ✗ | Must ship in bootstrap pack. |
| Telemetry export | ✓ | optional | optional/log-only | Sealed may use console exporter only. |
| Webhook callbacks | ✓ | allowlisted internal only | ✗ | Use internal queue instead. |
| OTA updates | ✓ | partial | ✗ | Use mirrorGeneration refresh. |

## Remediation guidance
- If a capability is degraded in sealed mode, provide an offline substitute (mirror bundles, time anchors, console exporter).
- When moving to constrained/connected, re-enable trust roots and transparency checks gradually; verify hashes first.
23
docs/modules/airgap/guides/devportal-offline.md
Normal file
23
docs/modules/airgap/guides/devportal-offline.md
Normal file
@@ -0,0 +1,23 @@

# DevPortal Offline (DOCS-AIRGAP-DEVPORT-64-001)

How to use the developer portal in fully offline/sealed environments.

## Serving the portal
- Host the static build from a local object store or file server; no CDN.
- Set `DEVPORTAL_OFFLINE=true` to disable external analytics/fonts.

## Auth
- Use Authority in offline mode with pre-provisioned tenants; cache JWKS locally.

## Bundles
- Provide mirror/bootstrap bundles via the offline download page with hashes and DSSE (if available).
- Offer time-anchor downloads; display staleness and mirrorGeneration in the UI header.

## Search/docs
- Bundle the docs and search index; disable remote doc fetch.

## Telemetry
- Disable remote telemetry; keep console logs only or send to a local OTLP endpoint.

## Verification
- On load, run a self-check to confirm that no external requests are made; fail with a clear banner if any are detected.
732
docs/modules/airgap/guides/epss-bundles.md
Normal file
732
docs/modules/airgap/guides/epss-bundles.md
Normal file
@@ -0,0 +1,732 @@

# EPSS Air-Gapped Bundles Guide

## Overview

This guide describes how to create, distribute, and import EPSS (Exploit Prediction Scoring System) data bundles for air-gapped StellaOps deployments. EPSS bundles enable offline vulnerability risk scoring with the same probabilistic threat intelligence available to online deployments.

**Key Concepts**:
- **Risk Bundle**: Aggregated security data (EPSS + KEV + advisories) for offline import
- **EPSS Snapshot**: Single-day EPSS scores for all CVEs (~300k rows)
- **Staleness Threshold**: How old EPSS data can be before falling back to CVSS-only scoring
- **Deterministic Import**: The same bundle imported twice yields identical database state

---

## Bundle Structure

### Standard Risk Bundle Layout

```
risk-bundle-2025-12-17/
├── manifest.json                        # Bundle metadata and checksums
├── epss/
│   ├── epss_scores-2025-12-17.csv.zst   # EPSS data (ZSTD compressed)
│   └── epss_metadata.json               # EPSS provenance
├── kev/
│   └── kev-catalog.json                 # CISA KEV catalog
├── advisories/
│   ├── nvd-updates.ndjson.zst
│   └── ghsa-updates.ndjson.zst
└── signatures/
    ├── bundle.dsse.json                 # DSSE signature (optional)
    └── bundle.sha256sums                # File integrity checksums
```

### manifest.json

```json
{
  "bundle_id": "risk-bundle-2025-12-17",
  "created_at": "2025-12-17T00:00:00Z",
  "created_by": "stellaops-bundler-v1.2.3",
  "bundle_type": "risk",
  "schema_version": "v1",
  "contents": {
    "epss": {
      "model_date": "2025-12-17",
      "file": "epss/epss_scores-2025-12-17.csv.zst",
      "sha256": "abc123...",
      "size_bytes": 15728640,
      "row_count": 231417
    },
    "kev": {
      "catalog_version": "2025-12-17",
      "file": "kev/kev-catalog.json",
      "sha256": "def456...",
      "known_exploited_count": 1247
    },
    "advisories": {
      "nvd": {
        "file": "advisories/nvd-updates.ndjson.zst",
        "sha256": "ghi789...",
        "record_count": 1523
      },
      "ghsa": {
        "file": "advisories/ghsa-updates.ndjson.zst",
        "sha256": "jkl012...",
        "record_count": 8734
      }
    }
  },
  "signature": {
    "type": "dsse",
    "file": "signatures/bundle.dsse.json",
    "key_id": "stellaops-bundler-2025",
    "algorithm": "ed25519"
  }
}
```

### epss/epss_metadata.json

```json
{
  "model_date": "2025-12-17",
  "model_version": "v2025.12.17",
  "published_date": "2025-12-17",
  "row_count": 231417,
  "source_uri": "https://epss.empiricalsecurity.com/epss_scores-2025-12-17.csv.gz",
  "retrieved_at": "2025-12-17T00:05:32Z",
  "file_sha256": "abc123...",
  "decompressed_sha256": "xyz789...",
  "compression": "zstd",
  "compression_level": 19
}
```

---

## Creating EPSS Bundles

### Prerequisites

**Build System Requirements**:
- Internet access (for fetching FIRST.org data)
- StellaOps Bundler CLI: `stellaops-bundler`
- ZSTD compression: `zstd` (v1.5+)
- Python 3.10+ (for verification scripts)

**Permissions**:
- Read access to FIRST.org EPSS API/CSV endpoints
- Write access to the bundle staging directory
- (Optional) Signing key for DSSE signatures

### Daily Bundle Creation (Automated)

**Recommended Schedule**: Daily at 01:00 UTC (after FIRST publishes at ~00:00 UTC)

**Script**: `scripts/create-risk-bundle.sh`

```bash
#!/bin/bash
set -euo pipefail

BUNDLE_DATE=$(date -u +%Y-%m-%d)
BUNDLE_DIR="risk-bundle-${BUNDLE_DATE}"
STAGING_DIR="/tmp/stellaops-bundles/${BUNDLE_DIR}"

echo "Creating risk bundle for ${BUNDLE_DATE}..."

# 1. Create staging directory
mkdir -p "${STAGING_DIR}"/{epss,kev,advisories,signatures}

# 2. Fetch EPSS data from FIRST.org
echo "Fetching EPSS data..."
curl -sL "https://epss.empiricalsecurity.com/epss_scores-${BUNDLE_DATE}.csv.gz" \
  -o "${STAGING_DIR}/epss/epss_scores-${BUNDLE_DATE}.csv.gz"

# 3. Decompress and re-compress with ZSTD (better compression for offline)
gunzip "${STAGING_DIR}/epss/epss_scores-${BUNDLE_DATE}.csv.gz"
zstd -19 -q "${STAGING_DIR}/epss/epss_scores-${BUNDLE_DATE}.csv" \
  -o "${STAGING_DIR}/epss/epss_scores-${BUNDLE_DATE}.csv.zst"
rm "${STAGING_DIR}/epss/epss_scores-${BUNDLE_DATE}.csv"

# 4. Generate EPSS metadata
stellaops-bundler epss metadata \
  --file "${STAGING_DIR}/epss/epss_scores-${BUNDLE_DATE}.csv.zst" \
  --model-date "${BUNDLE_DATE}" \
  --output "${STAGING_DIR}/epss/epss_metadata.json"

# 5. Fetch KEV catalog
echo "Fetching KEV catalog..."
curl -sL "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json" \
  -o "${STAGING_DIR}/kev/kev-catalog.json"

# 6. Fetch advisory updates (optional, for comprehensive bundles)
# stellaops-bundler advisories fetch ...

# 7. Generate checksums
echo "Generating checksums..."
(cd "${STAGING_DIR}" && find . -type f ! -name "*.sha256sums" -exec sha256sum {} \;) \
  > "${STAGING_DIR}/signatures/bundle.sha256sums"

# 8. Generate manifest
stellaops-bundler manifest create \
  --bundle-dir "${STAGING_DIR}" \
  --bundle-id "${BUNDLE_DIR}" \
  --output "${STAGING_DIR}/manifest.json"

# 9. Sign bundle (if signing key available)
if [ -n "${SIGNING_KEY:-}" ]; then
  echo "Signing bundle..."
  stellaops-bundler sign \
    --manifest "${STAGING_DIR}/manifest.json" \
    --key "${SIGNING_KEY}" \
    --output "${STAGING_DIR}/signatures/bundle.dsse.json"
fi

# 10. Create tarball
echo "Creating tarball..."
tar -C "$(dirname "${STAGING_DIR}")" -czf "/var/stellaops/bundles/${BUNDLE_DIR}.tar.gz" \
  "$(basename "${STAGING_DIR}")"

echo "Bundle created: /var/stellaops/bundles/${BUNDLE_DIR}.tar.gz"
echo "Size: $(du -h /var/stellaops/bundles/${BUNDLE_DIR}.tar.gz | cut -f1)"

# 11. Verify bundle
stellaops-bundler verify "/var/stellaops/bundles/${BUNDLE_DIR}.tar.gz"
```

**Cron Schedule**:
```cron
# Daily at 01:00 UTC (after FIRST publishes EPSS at ~00:00 UTC)
0 1 * * * /opt/stellaops/scripts/create-risk-bundle.sh >> /var/log/stellaops/bundler.log 2>&1
```

---

## Distributing Bundles

### Transfer Methods

#### 1. Physical Media (Highest Security)

```bash
# Copy to USB drive
cp /var/stellaops/bundles/risk-bundle-2025-12-17.tar.gz /media/usb/stellaops/

# Verify checksum
sha256sum /media/usb/stellaops/risk-bundle-2025-12-17.tar.gz
```

#### 2. Secure File Transfer (Network Isolation)

```bash
# SCP over dedicated management network
scp /var/stellaops/bundles/risk-bundle-2025-12-17.tar.gz \
  admin@airgap-gateway.internal:/incoming/

# Verify after transfer
ssh admin@airgap-gateway.internal \
  "sha256sum /incoming/risk-bundle-2025-12-17.tar.gz"
```

#### 3. Offline Bundle Repository (CD/DVD)

```bash
# Burn to CD/DVD (for regulated industries)
growisofs -Z /dev/sr0 \
  -R -J -joliet-long \
  -V "StellaOps Risk Bundle 2025-12-17" \
  /var/stellaops/bundles/risk-bundle-2025-12-17.tar.gz

# Verify disc
md5sum /dev/sr0 > risk-bundle-2025-12-17.md5
```

### Storage Recommendations

**Bundle Retention**:
- **Online bundler**: Keep last 90 days (rolling cleanup)
- **Air-gapped system**: Keep last 30 days minimum (for rollback)

**Naming Convention**:
- Pattern: `risk-bundle-YYYY-MM-DD.tar.gz`
- Example: `risk-bundle-2025-12-17.tar.gz`

**Directory Structure** (air-gapped system):
```
/opt/stellaops/bundles/
├── incoming/     # Transfer staging area
├── verified/     # Verified, ready to import
├── imported/     # Successfully imported (archive)
└── failed/       # Failed verification/import (quarantine)
```

---

## Importing Bundles (Air-Gapped System)

### Pre-Import Verification

**Step 1: Transfer to Verified Directory**

```bash
# Transfer from incoming to verified (manual approval gate)
sudo mv /opt/stellaops/bundles/incoming/risk-bundle-2025-12-17.tar.gz \
  /opt/stellaops/bundles/verified/
```

**Step 2: Verify Bundle Integrity**

```bash
# Extract bundle
cd /opt/stellaops/bundles/verified
tar -xzf risk-bundle-2025-12-17.tar.gz

# Verify checksums
cd risk-bundle-2025-12-17
sha256sum -c signatures/bundle.sha256sums

# Expected output:
# epss/epss_scores-2025-12-17.csv.zst: OK
# epss/epss_metadata.json: OK
# kev/kev-catalog.json: OK
# manifest.json: OK
```

**Step 3: Verify DSSE Signature (if signed)**

```bash
stellaops-bundler verify-signature \
  --manifest manifest.json \
  --signature signatures/bundle.dsse.json \
  --trusted-keys /etc/stellaops/trusted-keys.json

# Expected output:
# ✓ Signature valid
# ✓ Key ID: stellaops-bundler-2025
# ✓ Signed at: 2025-12-17T01:05:00Z
```

### Import Procedure

**Step 4: Import Bundle**

```bash
# Import using stellaops CLI
stellaops offline import \
  --bundle /opt/stellaops/bundles/verified/risk-bundle-2025-12-17.tar.gz \
  --verify \
  --dry-run

# Review dry-run output, then execute
stellaops offline import \
  --bundle /opt/stellaops/bundles/verified/risk-bundle-2025-12-17.tar.gz \
  --verify
```

**Import Output**:
```
Importing risk bundle: risk-bundle-2025-12-17
✓ Manifest validated
✓ Checksums verified
✓ Signature verified

Importing EPSS data...
  Model Date: 2025-12-17
  Row Count: 231,417
  ✓ epss_import_runs created (import_run_id: 550e8400-...)
  ✓ epss_scores inserted (231,417 rows, 23.4s)
  ✓ epss_changes computed (12,345 changes, 8.1s)
  ✓ epss_current upserted (231,417 rows, 5.2s)
  ✓ Event emitted: epss.updated

Importing KEV catalog...
  Known Exploited Count: 1,247
  ✓ kev_catalog updated

Import completed successfully in 41.2s
```

**Step 5: Verify Import**

```bash
# Check EPSS status
stellaops epss status

# Expected output:
# EPSS Status:
#   Latest Model Date: 2025-12-17
#   Source: bundle://risk-bundle-2025-12-17
#   CVE Count: 231,417
#   Staleness: FRESH (0 days)
#   Import Time: 2025-12-17T10:30:00Z

# Query specific CVE to verify
stellaops epss get CVE-2024-12345

# Expected output:
# CVE-2024-12345
#   Score: 0.42357
#   Percentile: 88.2th
#   Model Date: 2025-12-17
#   Source: bundle://risk-bundle-2025-12-17
```

**Step 6: Archive Imported Bundle**

```bash
# Move to imported archive
sudo mv /opt/stellaops/bundles/verified/risk-bundle-2025-12-17.tar.gz \
  /opt/stellaops/bundles/imported/
```

---

## Automation (Air-Gapped System)

### Automated Import on Arrival

**Script**: `/opt/stellaops/scripts/auto-import-bundle.sh`

```bash
#!/bin/bash
set -euo pipefail

INCOMING_DIR="/opt/stellaops/bundles/incoming"
VERIFIED_DIR="/opt/stellaops/bundles/verified"
IMPORTED_DIR="/opt/stellaops/bundles/imported"
FAILED_DIR="/opt/stellaops/bundles/failed"
LOG_FILE="/var/log/stellaops/auto-import.log"

log() {
  echo "[$(date -Iseconds)] $*" | tee -a "${LOG_FILE}"
}

# Watch for new bundles in incoming/
for bundle in "${INCOMING_DIR}"/risk-bundle-*.tar.gz; do
  [ -f "${bundle}" ] || continue

  BUNDLE_NAME=$(basename "${bundle}")
  log "Detected new bundle: ${BUNDLE_NAME}"

  # Extract
  EXTRACT_DIR="${VERIFIED_DIR}/${BUNDLE_NAME%.tar.gz}"
  mkdir -p "${EXTRACT_DIR}"
  tar -xzf "${bundle}" -C "${VERIFIED_DIR}"

  # Verify checksums
  if ! (cd "${EXTRACT_DIR}" && sha256sum -c signatures/bundle.sha256sums > /dev/null 2>&1); then
    log "ERROR: Checksum verification failed for ${BUNDLE_NAME}"
    mv "${bundle}" "${FAILED_DIR}/"
    rm -rf "${EXTRACT_DIR}"
    continue
  fi

  log "Checksum verification passed"

  # Verify signature (if present)
  if [ -f "${EXTRACT_DIR}/signatures/bundle.dsse.json" ]; then
    if ! stellaops-bundler verify-signature \
        --manifest "${EXTRACT_DIR}/manifest.json" \
        --signature "${EXTRACT_DIR}/signatures/bundle.dsse.json" \
        --trusted-keys /etc/stellaops/trusted-keys.json > /dev/null 2>&1; then
      log "ERROR: Signature verification failed for ${BUNDLE_NAME}"
      mv "${bundle}" "${FAILED_DIR}/"
      rm -rf "${EXTRACT_DIR}"
      continue
    fi
    log "Signature verification passed"
  fi

  # Import
  if stellaops offline import --bundle "${bundle}" --verify >> "${LOG_FILE}" 2>&1; then
    log "Import successful for ${BUNDLE_NAME}"
    mv "${bundle}" "${IMPORTED_DIR}/"
    rm -rf "${EXTRACT_DIR}"
  else
    log "ERROR: Import failed for ${BUNDLE_NAME}"
    mv "${bundle}" "${FAILED_DIR}/"
  fi
done
```

**Systemd Service**: `/etc/systemd/system/stellaops-bundle-watcher.service`

```ini
[Unit]
Description=StellaOps Bundle Auto-Import Watcher
After=network.target

[Service]
Type=simple
# systemd does not interpret pipes or shell loops, so wrap the watcher in a shell;
# %% is the systemd escape for a literal percent sign.
ExecStart=/bin/bash -c '/usr/bin/inotifywait -m -e close_write --format "%%w%%f" /opt/stellaops/bundles/incoming | while read file; do /opt/stellaops/scripts/auto-import-bundle.sh; done'
Restart=always
RestartSec=10
User=stellaops
Group=stellaops

[Install]
WantedBy=multi-user.target
```

**Enable Service**:
```bash
sudo systemctl enable stellaops-bundle-watcher
sudo systemctl start stellaops-bundle-watcher
```

---

## Staleness Handling

### Staleness Thresholds

| Days Since Model Date | Status | Action |
|-----------------------|--------|--------|
| 0-1 | FRESH | Normal operation |
| 2-7 | ACCEPTABLE | Continue, low-priority alert |
| 8-14 | STALE | Alert, plan bundle import |
| 15+ | VERY_STALE | Fallback to CVSS-only, urgent alert |
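The thresholds map directly to a status value. A minimal sketch of that mapping, assuming the illustrative enum and method names below (they are not a published StellaOps API):

```csharp
// Direct translation of the staleness table above; names are illustrative.
using System;

public enum EpssStaleness { Fresh, Acceptable, Stale, VeryStale }

public static class EpssStalenessEvaluator
{
    public static EpssStaleness Evaluate(DateOnly modelDate, DateOnly today)
    {
        var daysStale = today.DayNumber - modelDate.DayNumber;
        return daysStale switch
        {
            <= 1  => EpssStaleness.Fresh,       // 0-1 days: normal operation
            <= 7  => EpssStaleness.Acceptable,  // 2-7 days: continue, low-priority alert
            <= 14 => EpssStaleness.Stale,       // 8-14 days: alert, plan bundle import
            _     => EpssStaleness.VeryStale    // 15+ days: fall back to CVSS-only, urgent alert
        };
    }
}

// Example: Evaluate(new DateOnly(2025, 12, 10), new DateOnly(2025, 12, 17)) returns Acceptable (7 days).
```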

### Monitoring Staleness

**SQL Query**:
```sql
SELECT * FROM concelier.epss_model_staleness;

-- Output:
-- latest_model_date | latest_import_at       | days_stale | staleness_status
-- 2025-12-10        | 2025-12-10 10:30:00+00 | 7          | ACCEPTABLE
```

**Prometheus Metric**:
```promql
epss_model_staleness_days{instance="airgap-prod"}
```

**Alert Rule**:
```yaml
- alert: EpssDataStale
  expr: epss_model_staleness_days > 7
  for: 1h
  labels:
    severity: warning
  annotations:
    summary: "EPSS data is stale ({{ $value }} days old)"
```

### Fallback Behavior

When EPSS data is VERY_STALE (>14 days):

**Automatic Fallback**:
- Scanner: Skip EPSS evidence, log warning
- Policy: Use CVSS-only scoring (no EPSS bonus)
- Notifications: Disable EPSS-based alerts
- UI: Show staleness banner, disable EPSS filters

**Manual Override** (force continue using stale data):
```yaml
# etc/scanner.yaml
scanner:
  epss:
    staleness_policy: continue   # Options: fallback, continue, error
    max_staleness_days: 30       # Override 14-day default
```

---

## Troubleshooting

### Bundle Import Failed: Checksum Mismatch

**Symptom**:
```
ERROR: Checksum verification failed
epss/epss_scores-2025-12-17.csv.zst: FAILED
```

**Diagnosis**:
1. Verify the bundle was not corrupted during transfer:
   ```bash
   # Compare with original
   sha256sum risk-bundle-2025-12-17.tar.gz
   ```

2. Re-transfer the bundle from the source

**Resolution**:
- Delete the corrupted bundle: `rm risk-bundle-2025-12-17.tar.gz`
- Re-download/re-transfer from the bundler system

### Bundle Import Failed: Signature Invalid

**Symptom**:
```
ERROR: Signature verification failed
Invalid signature or untrusted key
```

**Diagnosis**:
1. Check that trusted keys are configured:
   ```bash
   cat /etc/stellaops/trusted-keys.json
   ```

2. Verify the key ID in the bundle signature matches:
   ```bash
   jq '.signature.key_id' manifest.json
   ```

**Resolution**:
- Update the trusted keys file with the current bundler public key
- Or: skip signature verification (if signatures are optional):
  ```bash
  stellaops offline import --bundle risk-bundle-2025-12-17.tar.gz --skip-signature-verify
  ```

### No EPSS Data After Import

**Symptom**:
- Import succeeded, but `stellaops epss status` shows "No EPSS data"

**Diagnosis**:
```sql
-- Check import runs
SELECT * FROM concelier.epss_import_runs ORDER BY created_at DESC LIMIT 1;

-- Check epss_current count
SELECT COUNT(*) FROM concelier.epss_current;
```

**Resolution**:
1. If import_runs shows FAILED status:
   - Check the error column: `SELECT error FROM concelier.epss_import_runs WHERE status = 'FAILED'`
   - Re-run the import with verbose logging

2. If epss_current is empty, manually trigger the upsert:
   ```sql
   -- Re-run upsert for latest model_date
   -- (This SQL is safe to re-run)
   INSERT INTO concelier.epss_current (cve_id, epss_score, percentile, model_date, import_run_id, updated_at)
   SELECT s.cve_id, s.epss_score, s.percentile, s.model_date, s.import_run_id, NOW()
   FROM concelier.epss_scores s
   WHERE s.model_date = (SELECT MAX(model_date) FROM concelier.epss_import_runs WHERE status = 'SUCCEEDED')
   ON CONFLICT (cve_id) DO UPDATE SET
     epss_score = EXCLUDED.epss_score,
     percentile = EXCLUDED.percentile,
     model_date = EXCLUDED.model_date,
     import_run_id = EXCLUDED.import_run_id,
     updated_at = NOW();
   ```

---

## Best Practices

### 1. Weekly Bundle Import Cadence

**Recommended Schedule**:
- **Minimum**: Weekly (every Monday)
- **Preferred**: Twice weekly (Monday & Thursday)
- **Ideal**: Daily (if transfer logistics allow)

### 2. Bundle Verification Checklist

Before importing:
- [ ] Checksum verification passed
- [ ] Signature verification passed (if signed)
- [ ] Model date within acceptable staleness window
- [ ] Disk space available (estimate: 500MB per bundle)
- [ ] Backup current EPSS data (for rollback)

### 3. Rollback Plan

If a new bundle causes issues:

```sql
-- 1. Identify the problematic import_run_id
SELECT import_run_id, model_date, status
FROM concelier.epss_import_runs
ORDER BY created_at DESC LIMIT 5;

-- 2. Delete the problematic import (cascades to epss_scores, epss_changes)
DELETE FROM concelier.epss_import_runs
WHERE import_run_id = '550e8400-...';

-- 3. Restore epss_current from the previous day
-- (Upsert from the previous model_date as shown in troubleshooting)
```

```bash
# 4. Verify rollback
stellaops epss status
```

### 4. Audit Trail

Log all bundle imports for compliance:

**Audit Log Format** (`/var/log/stellaops/bundle-audit.log`):
```json
{
  "timestamp": "2025-12-17T10:30:00Z",
  "action": "import",
  "bundle_id": "risk-bundle-2025-12-17",
  "bundle_sha256": "abc123...",
  "imported_by": "admin@example.com",
  "import_run_id": "550e8400-e29b-41d4-a716-446655440000",
  "result": "SUCCESS",
  "row_count": 231417,
  "duration_seconds": 41.2
}
```

---

## Appendix: Bundle Creation Tools

### stellaops-bundler CLI Reference

```bash
# Create EPSS metadata
stellaops-bundler epss metadata \
  --file epss_scores-2025-12-17.csv.zst \
  --model-date 2025-12-17 \
  --output epss_metadata.json

# Create manifest
stellaops-bundler manifest create \
  --bundle-dir risk-bundle-2025-12-17 \
  --bundle-id risk-bundle-2025-12-17 \
  --output manifest.json

# Sign bundle
stellaops-bundler sign \
  --manifest manifest.json \
  --key /path/to/signing-key.pem \
  --output bundle.dsse.json

# Verify bundle
stellaops-bundler verify risk-bundle-2025-12-17.tar.gz
```

### Custom Bundle Scripts

Example for creating weekly bundles (7-day snapshots):

```bash
#!/bin/bash
# create-weekly-bundle.sh

WEEK_START=$(date -u -d "last monday" +%Y-%m-%d)
WEEK_END=$(date -u +%Y-%m-%d)
BUNDLE_ID="risk-bundle-weekly-${WEEK_START}"

echo "Creating weekly bundle: ${BUNDLE_ID}"
mkdir -p epss kev

for day in $(seq 0 6); do
  CURRENT_DATE=$(date -u -d "${WEEK_START} + ${day} days" +%Y-%m-%d)
  # Fetch EPSS for each day...
  curl -sL "https://epss.empiricalsecurity.com/epss_scores-${CURRENT_DATE}.csv.gz" \
    -o "epss/epss_scores-${CURRENT_DATE}.csv.gz"
done

# Compress and bundle...
tar -czf "${BUNDLE_ID}.tar.gz" epss/ kev/ manifest.json
```

---

**Last Updated**: 2025-12-17
**Version**: 1.0
**Maintainer**: StellaOps Operations Team
77
docs/modules/airgap/guides/importer.md
Normal file
77
docs/modules/airgap/guides/importer.md
Normal file
@@ -0,0 +1,77 @@

# AirGap Importer

The AirGap Importer verifies and ingests offline bundles (mirror, bootstrap, evidence kits) into a sealed or constrained deployment. It fails closed by default: imports are rejected when verification fails, and failures are diagnosable offline.

This document describes importer behavior and its key building blocks. For bundle formats and operational workflow, see `docs/modules/airgap/guides/offline-bundle-format.md`, `docs/modules/airgap/guides/mirror-bundles.md`, and `docs/modules/airgap/guides/operations.md`.

## Responsibilities

- Verify bundle integrity and authenticity (DSSE signatures; optional TUF metadata where applicable).
- Enforce monotonicity (prevent version rollback unless explicitly force-activated with a recorded reason).
- Stage verified content into deterministic layouts (catalog + item repository + object-store paths).
- Quarantine failed bundles for forensic analysis with deterministic logs and metadata.
- Emit an audit trail for every dry-run and import attempt (success or failure).

## Verification pipeline (conceptual)

1. **Plan**: build an ordered list of validation/ingest steps for the bundle (`BundleImportPlanner`).
2. **Validate signatures**: verify DSSE envelopes and trusted key fingerprints.
3. **Validate metadata** (when present): verify TUF root/snapshot/timestamp consistency against trust roots.
4. **Compute deterministic roots**: compute a Merkle root over staged bundle items (stable ordering; see the sketch after this list).
5. **Check monotonicity**: ensure the incoming bundle version is newer than the currently active version.
6. **Quarantine on failure**: preserve the bundle + verification log and emit a stable failure reason code.
7. **Commit**: write catalog/item entries and activation record; emit audit/timeline events.

The step order must remain stable; if steps change, treat it as a contract change and update CLI/UI guidance.
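A sketch of the deterministic Merkle-root step (step 4), assuming ordinal path ordering and SHA-256 leaves; the real importer's node encoding and domain separation may differ.

```csharp
// Illustrative Merkle-root computation over (path -> digest) pairs with stable ordering.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class BundleMerkle
{
    public static string ComputeRoot(IReadOnlyDictionary<string, string> items)
    {
        var leaves = items
            .OrderBy(kv => kv.Key, StringComparer.Ordinal)   // deterministic leaf order
            .Select(kv => SHA256.HashData(Encoding.UTF8.GetBytes($"{kv.Key}\n{kv.Value}")))
            .ToList();

        if (leaves.Count == 0)
            return Convert.ToHexString(SHA256.HashData(Array.Empty<byte>())).ToLowerInvariant();

        while (leaves.Count > 1)
        {
            var next = new List<byte[]>();
            for (var i = 0; i < leaves.Count; i += 2)
            {
                var left = leaves[i];
                var right = i + 1 < leaves.Count ? leaves[i + 1] : leaves[i]; // duplicate last leaf on odd counts
                next.Add(SHA256.HashData(left.Concat(right).ToArray()));
            }
            leaves = next;
        }

        return Convert.ToHexString(leaves[0]).ToLowerInvariant();
    }
}
```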

## Quarantine

When verification fails, the importer quarantines the bundle with enough information to debug offline.

Typical structure:

- `/updates/quarantine/<tenantId>/<timestamp>-<reason>-<id>/`
  - `bundle.tar.zst` (original)
  - `manifest.json` (if extracted)
  - `verification.log` (deterministic, no machine-specific paths)
  - `failure-reason.txt` (human-readable)
  - `quarantine.json` (structured metadata: tenant, reason, timestamps, sizes, hashes)

Operational expectations:

- Quarantine is bounded: enforce per-tenant quota + TTL cleanup.
- Listing is deterministic: sort by `quarantined_at` then `quarantine_id` (ordinal).

## Version monotonicity

Rollback resistance is enforced via:

- A per-tenant version store (`IBundleVersionStore`) backed by Postgres in production.
- A monotonicity checker (`IVersionMonotonicityChecker`) that compares incoming bundle versions to the active version (see the sketch after this list).
- Optional force-activate path requiring a human reason, stored alongside the activation record.
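A simplified sketch of the monotonicity decision. The reason codes echo the telemetry section below; the decision type and version representation are assumptions, not the shipped `IVersionMonotonicityChecker` contract.

```csharp
// Sketch of the rollback-blocking decision; the real checker's types may differ.
using System;

public sealed record MonotonicityDecision(bool Allowed, string ReasonCode);

public static class VersionMonotonicity
{
    public static MonotonicityDecision Check(Version? activeVersion, Version incomingVersion, string? forceReason)
    {
        // First import for the tenant: nothing to compare against.
        if (activeVersion is null)
            return new MonotonicityDecision(true, "first_activation");

        if (incomingVersion > activeVersion)
            return new MonotonicityDecision(true, "version_advanced");

        // Rollback or replay: allowed only when an operator supplies a recorded reason.
        if (!string.IsNullOrWhiteSpace(forceReason))
            return new MonotonicityDecision(true, "force_activated"); // reason stored with the activation record

        return new MonotonicityDecision(false, "version_rollback_blocked");
    }
}
```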

## Storage model

The importer writes deterministic metadata that other components can query:

- **Bundle catalog**: (tenant, bundle_id, digest, imported_at_utc, content paths).
- **Bundle items**: (tenant, bundle_id, path, digest, size).

For the logical schema and deterministic ordering rules, see `docs/modules/airgap/guides/bundle-repositories.md`.

## Telemetry and auditing

Minimum signals:

- Counters: imports attempted/succeeded/failed, dry-runs, quarantines created, monotonicity failures, force-activations.
- Structured logs with stable reason codes (e.g., `dsse_signature_invalid`, `tuf_root_invalid`, `merkle_mismatch`, `version_rollback_blocked`).
- Audit emission: include tenant, bundle_id, digest, operator identity, and whether sealed mode was active.

## References

- `docs/modules/airgap/guides/offline-bundle-format.md`
- `docs/modules/airgap/guides/mirror-bundles.md`
- `docs/modules/airgap/guides/bundle-repositories.md`
- `docs/modules/airgap/guides/operations.md`
- `docs/modules/airgap/guides/controller.md`
218
docs/modules/airgap/guides/job-sync-offline.md
Normal file
218
docs/modules/airgap/guides/job-sync-offline.md
Normal file
@@ -0,0 +1,218 @@

# HLC Job Sync Offline Operations

Sprint: SPRINT_20260105_002_003_ROUTER

This document describes the offline job synchronization mechanism using Hybrid Logical Clock (HLC) ordering for air-gap scenarios.

## Overview

When nodes operate in disconnected/offline mode, scheduled jobs are enqueued locally with HLC timestamps. Upon reconnection or air-gap transfer, these job logs are merged deterministically to maintain global ordering.

Key features:
- **Deterministic ordering**: Jobs merge by HLC total order `(T_hlc.PhysicalTime, T_hlc.LogicalCounter, NodeId, JobId)`
- **Chain integrity**: Each entry links to the previous via `link = Hash(prev_link || job_id || t_hlc || payload_hash)` (see the sketch after this list)
- **Conflict-free**: Same payload = same JobId (deterministic), so duplicates are safely dropped
- **Audit trail**: Source node ID and original links preserved for traceability
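A minimal sketch of the link formula from the list above. SHA-256 and the byte encoding of each field are assumptions; the formula itself (prev_link, job_id, t_hlc, payload_hash concatenated and hashed) is as stated.

```csharp
// link = Hash(prev_link || job_id || t_hlc || payload_hash), per the list above.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class JobChain
{
    public static byte[] ComputeLink(byte[]? prevLink, Guid jobId, string tHlc, byte[] payloadHash)
    {
        using var buffer = new MemoryStream();

        buffer.Write(prevLink ?? Array.Empty<byte>());   // null/empty for the first entry in a node log
        buffer.Write(jobId.ToByteArray());
        buffer.Write(Encoding.UTF8.GetBytes(tHlc));       // HLC timestamp string as serialized in the bundle
        buffer.Write(payloadHash);

        return SHA256.HashData(buffer.ToArray());
    }
}
```

Recomputing each link this way and comparing it to the stored value is what the "Link verification" validation step later in this document performs.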

## CLI Commands

### Export Job Logs

Export offline job logs to a sync bundle for air-gap transfer:

```bash
# Export job logs for a tenant
stella airgap jobs export --tenant my-tenant -o job-sync-bundle.json

# Export with verbose output
stella airgap jobs export --tenant my-tenant -o bundle.json --verbose

# Export as JSON for automation
stella airgap jobs export --tenant my-tenant --json
```

Options:
- `--tenant, -t` - Tenant ID (defaults to "default")
- `--output, -o` - Output file path
- `--node` - Export specific node only (default: current node)
- `--sign` - Sign bundle with DSSE
- `--json` - Output result as JSON
- `--verbose` - Enable verbose logging

### Import Job Logs

Import a job sync bundle from air-gap transfer:

```bash
# Verify bundle without importing
stella airgap jobs import bundle.json --verify-only

# Import bundle
stella airgap jobs import bundle.json

# Force import despite validation issues
stella airgap jobs import bundle.json --force

# Import with JSON output for automation
stella airgap jobs import bundle.json --json
```

Options:
- `bundle` - Path to job sync bundle file (required)
- `--verify-only` - Only verify the bundle without importing
- `--force` - Force import even if validation fails
- `--json` - Output result as JSON
- `--verbose` - Enable verbose logging

### List Available Bundles

List job sync bundles in a directory:

```bash
# List bundles in current directory
stella airgap jobs list

# List bundles in specific directory
stella airgap jobs list --source /path/to/bundles

# Output as JSON
stella airgap jobs list --json
```

Options:
- `--source, -s` - Source directory (default: current directory)
- `--json` - Output result as JSON
- `--verbose` - Enable verbose logging

## Bundle Format

Job sync bundles are JSON files with the following structure:

```json
{
  "bundleId": "guid",
  "tenantId": "string",
  "createdAt": "ISO8601",
  "createdByNodeId": "string",
  "manifestDigest": "sha256:hex",
  "signature": "base64 (optional)",
  "signedBy": "keyId (optional)",
  "jobLogs": [
    {
      "nodeId": "string",
      "lastHlc": "HLC timestamp string",
      "chainHead": "base64",
      "entries": [
        {
          "nodeId": "string",
          "tHlc": "HLC timestamp string",
          "jobId": "guid",
          "partitionKey": "string (optional)",
          "payload": "JSON string",
          "payloadHash": "base64",
          "prevLink": "base64 (null for first)",
          "link": "base64",
          "enqueuedAt": "ISO8601"
        }
      ]
    }
  ]
}
```

## Validation

Bundle validation checks:
1. **Manifest digest**: Recomputes the digest from the job logs and compares
2. **Chain integrity**: Verifies each entry's prev_link matches the expected value
3. **Link verification**: Recomputes links and verifies against stored values
4. **Chain head**: Verifies the last entry's link matches the node's chain head

## Merge Algorithm

When importing bundles from multiple nodes:

1. **Collect**: Gather all entries from all node logs
2. **Sort**: Order by HLC total order `(PhysicalTime, LogicalCounter, NodeId, JobId)`
3. **Deduplicate**: Same JobId = same payload (drop later duplicates)
4. **Recompute chain**: Build a unified chain from the merged entries

This produces a deterministic ordering regardless of import sequence; a minimal sketch follows.
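A sketch of steps 1–3 under a simplified entry type (the real entry carries the full set of fields shown in the bundle format above); chain recomputation (step 4) would run over the merged result using the link formula from the overview.

```csharp
// Simplified merge following the four steps above.
using System;
using System.Collections.Generic;
using System.Linq;

public sealed record JobLogEntry(long PhysicalTime, int LogicalCounter, string NodeId, Guid JobId, string Payload);

public static class JobLogMerge
{
    public static (IReadOnlyList<JobLogEntry> Merged, int DuplicatesDropped) Merge(
        IEnumerable<IEnumerable<JobLogEntry>> nodeLogs)
    {
        // 1. Collect + 2. Sort by HLC total order (PhysicalTime, LogicalCounter, NodeId, JobId).
        var ordered = nodeLogs
            .SelectMany(log => log)
            .OrderBy(e => e.PhysicalTime)
            .ThenBy(e => e.LogicalCounter)
            .ThenBy(e => e.NodeId, StringComparer.Ordinal)
            .ThenBy(e => e.JobId)
            .ToList();

        // 3. Deduplicate: the same JobId implies the same payload, keep the earliest occurrence.
        var merged = new List<JobLogEntry>();
        var seen = new HashSet<Guid>();
        var dropped = 0;
        foreach (var entry in ordered)
        {
            if (seen.Add(entry.JobId)) merged.Add(entry);
            else dropped++;
        }

        // 4. The unified chain would be recomputed over `merged` here.
        return (merged, dropped);
    }
}
```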

## Conflict Resolution

| Scenario | Resolution |
|----------|------------|
| Same JobId, same payload, different HLC | Take earliest HLC, drop duplicates |
| Same JobId, different payloads | Error - indicates a bug in deterministic ID computation |

## Metrics

The following metrics are emitted:

| Metric | Type | Description |
|--------|------|-------------|
| `airgap_bundles_exported_total` | Counter | Total bundles exported |
| `airgap_bundles_imported_total` | Counter | Total bundles imported |
| `airgap_jobs_synced_total` | Counter | Total jobs synced |
| `airgap_duplicates_dropped_total` | Counter | Duplicates dropped during merge |
| `airgap_merge_conflicts_total` | Counter | Merge conflicts by type |
| `airgap_offline_enqueues_total` | Counter | Offline enqueue operations |
| `airgap_bundle_size_bytes` | Histogram | Bundle size distribution |
| `airgap_sync_duration_seconds` | Histogram | Sync operation duration |
| `airgap_merge_entries_count` | Histogram | Entries per merge operation |

## Service Registration

To use job sync in your application:

```csharp
// Register core services
services.AddAirGapSyncServices(nodeId: "my-node-id");

// Register file-based transport (for air-gap)
services.AddFileBasedJobSyncTransport();

// Or router-based transport (for connected scenarios)
services.AddRouterJobSyncTransport();

// Register sync service (requires ISyncSchedulerLogRepository)
services.AddAirGapSyncImportService();
```

## Operational Runbook

### Pre-Export Checklist
- [ ] Node has offline job logs to export
- [ ] Target path is writable
- [ ] Signing key available (if --sign used)

### Pre-Import Checklist
- [ ] Bundle file accessible
- [ ] Bundle signature verified (if signed)
- [ ] Scheduler database accessible
- [ ] Sufficient disk space

### Recovery Procedures

**Chain validation failure:**
1. Identify which entry has the chain break
2. Check for data corruption in the bundle
3. Re-export from the source node if possible
4. Use `--force` only if data loss is acceptable

**Duplicate conflict:**
1. This is expected - duplicates are safely dropped
2. Check the duplicate count in the output
3. Verify merged jobs match the expected count

**Payload mismatch (same JobId, different payloads):**
1. This indicates a bug - the same idempotency key should produce the same payload
2. Review the job generation logic
3. Do not force import - fix the root cause

## See Also

- [Air-Gap Operations](operations.md)
- [Mirror Bundles](mirror-bundles.md)
- [Staleness and Time](staleness-and-time.md)
210
docs/modules/airgap/guides/macos-offline.md
Normal file
210
docs/modules/airgap/guides/macos-offline.md
Normal file
@@ -0,0 +1,210 @@
|
||||
# macOS Offline Kit Integration
|
||||
|
||||
> Owner: Scanner Guild, Offline Kit Guild
|
||||
> Related tasks: SCANNER-ENG-0020..0023
|
||||
|
||||
## Overview
|
||||
|
||||
This document describes the offline operation requirements for macOS package scanning, including Homebrew formula metadata, pkgutil receipts, and application bundle analysis.
|
||||
|
||||
## Homebrew Offline Mirroring
|
||||
|
||||
### Required Tap Mirrors
|
||||
|
||||
For comprehensive macOS scanning in offline environments, mirror the following Homebrew taps:
|
||||
|
||||
| Tap | Path | Est. Size | Update Frequency |
|
||||
|-----|------|-----------|------------------|
|
||||
| `homebrew/core` | `/opt/stellaops/mirror/homebrew-core` | ~400MB | Weekly |
|
||||
| `homebrew/cask` | `/opt/stellaops/mirror/homebrew-cask` | ~150MB | Weekly |
|
||||
| Custom taps | As configured | Varies | As needed |
|
||||
|
||||
### Mirroring Procedure
|
||||
|
||||
```bash
|
||||
# Clone or update homebrew-core
|
||||
git clone --depth 1 https://github.com/Homebrew/homebrew-core.git \
|
||||
/opt/stellaops/mirror/homebrew-core
|
||||
|
||||
# Clone or update homebrew-cask
|
||||
git clone --depth 1 https://github.com/Homebrew/homebrew-cask.git \
|
||||
/opt/stellaops/mirror/homebrew-cask
|
||||
|
||||
# Create manifest for offline verification
|
||||
stellaops-cli offline create-manifest \
|
||||
--source /opt/stellaops/mirror/homebrew-core \
|
||||
--output /opt/stellaops/mirror/homebrew-core.manifest.json
|
||||
```
|
||||
|
||||
### Formula Metadata Extraction
|
||||
|
||||
The scanner extracts metadata from `INSTALL_RECEIPT.json` files in the Cellar. For policy evaluation, ensure the following fields are preserved (a jq spot check follows the list):
|
||||
|
||||
- `tap` - Source tap identifier
|
||||
- `version` and `revision` - Package version info
|
||||
- `poured_from_bottle` - Build source indicator
|
||||
- `source.url`, `source.checksum` - Provenance data
|
||||
- `runtime_dependencies`, `build_dependencies` - Dependency graph
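
As a quick spot check, the following jq sketch prints those fields from a single receipt; the Cellar path and the `source.url`/`source.checksum` nesting are assumptions to adjust against your Homebrew layout.

```bash
# Print the policy-relevant fields from one Homebrew install receipt.
# RECEIPT is a hypothetical path; point it at any keg on your mirror host.
RECEIPT="/opt/homebrew/Cellar/jq/1.7.1/INSTALL_RECEIPT.json"

jq '{tap, version, revision, poured_from_bottle,
     source_url: .source.url, source_checksum: .source.checksum,
     runtime_dependencies, build_dependencies}' "$RECEIPT"
```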
|
||||
|
||||
## pkgutil Receipt Data
|
||||
|
||||
### Receipt Location
|
||||
|
||||
macOS pkgutil receipts are stored in `/var/db/receipts/`. The scanner reads:
|
||||
|
||||
- `*.plist` - Receipt metadata (installer, version, date)
|
||||
- `*.bom` - Bill of Materials (installed files)
|
||||
|
||||
### Offline Considerations
|
||||
|
||||
pkgutil receipts are system-local and don't require external mirroring. However, for policy enforcement against known package identifiers, maintain a reference database of the following (a seeding sketch follows the list):
|
||||
|
||||
- Apple system package identifiers (`com.apple.pkg.*`)
|
||||
- Xcode component identifiers
|
||||
- Third-party installer identifiers
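
A minimal seeding sketch using the standard `pkgutil --pkgs` and `pkgutil --pkg-info` commands on a known-good host; the output location is an example.

```bash
# Dump every receipt identifier plus its version into a reference file.
OUT="/opt/stellaops/reference/macos-pkg-identifiers.tsv"   # example location

pkgutil --pkgs | while read -r pkg_id; do
  # --pkg-info prints "version:", "install-time:", etc. for one identifier
  info=$(pkgutil --pkg-info "$pkg_id")
  version=$(printf '%s\n' "$info" | awk -F': ' '/^version:/ {print $2}')
  printf '%s\t%s\n' "$pkg_id" "$version"
done > "$OUT"
```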
|
||||
|
||||
## Application Bundle Inspection
|
||||
|
||||
### Code Signing & Notarization
|
||||
|
||||
For offline notarization verification, prefetch:
|
||||
|
||||
1. **Apple Root Certificates**
|
||||
- Apple Root CA
|
||||
- Apple Root CA - G2
|
||||
- Apple Root CA - G3
|
||||
|
||||
2. **WWDR Certificates**
|
||||
- Apple Worldwide Developer Relations Certification Authority
|
||||
- Developer ID Certification Authority
|
||||
|
||||
3. **CRL/OCSP Caches**
|
||||
```bash
|
||||
# Prefetch Apple CRLs
|
||||
curl -o /opt/stellaops/cache/apple-crl/root.crl \
|
||||
https://www.apple.com/appleca/root.crl
|
||||
```
|
||||
|
||||
### Entitlement Taxonomy
|
||||
|
||||
The scanner classifies entitlements into capability categories for policy evaluation:
|
||||
|
||||
| Category | Entitlements | Risk Level |
|
||||
|----------|--------------|------------|
|
||||
| `network` | `com.apple.security.network.client`, `.server` | Low |
|
||||
| `camera` | `com.apple.security.device.camera` | High |
|
||||
| `microphone` | `com.apple.security.device.microphone` | High |
|
||||
| `filesystem` | `com.apple.security.files.*` | Medium |
|
||||
| `automation` | `com.apple.security.automation.apple-events` | High |
|
||||
| `code-execution` | `com.apple.security.cs.allow-*` | Critical |
|
||||
| `debugging` | `com.apple.security.get-task-allow` | High |
|
||||
|
||||
### High-Risk Entitlement Alerting
|
||||
|
||||
The following entitlements trigger elevated policy warnings by default:
|
||||
|
||||
```
|
||||
com.apple.security.device.camera
|
||||
com.apple.security.device.microphone
|
||||
com.apple.security.cs.allow-unsigned-executable-memory
|
||||
com.apple.security.cs.disable-library-validation
|
||||
com.apple.security.get-task-allow
|
||||
com.apple.security.files.all
|
||||
com.apple.security.automation.apple-events
|
||||
```
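
To spot these on a specific bundle during triage, one option is to dump its entitlements with Apple's `codesign` tool and filter for the identifiers above; the application path is illustrative.

```bash
# Dump entitlements (as an XML plist) and list any high-risk keys present.
APP="/Applications/Example.app"   # illustrative path

codesign -d --entitlements :- "$APP" 2>/dev/null |
  grep -E -o 'com\.apple\.security\.[A-Za-z.-]+' |
  grep -E 'device\.(camera|microphone)|cs\.allow-unsigned-executable-memory|cs\.disable-library-validation|get-task-allow|files\.all|automation\.apple-events' |
  sort -u
```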
|
||||
|
||||
## Policy Predicates
|
||||
|
||||
### Available Predicates
|
||||
|
||||
The following SPL predicates are available for macOS components:
|
||||
|
||||
```spl
|
||||
# Bundle signing predicates
|
||||
macos.signed # Bundle has code signature
|
||||
macos.signed("TEAMID123") # Signed by specific team
|
||||
macos.signed("TEAMID123", true) # Signed with hardened runtime
|
||||
macos.sandboxed # App sandbox enabled
|
||||
macos.hardened_runtime # Hardened runtime enabled
|
||||
|
||||
# Entitlement predicates
|
||||
macos.entitlement("com.apple.security.network.client")
|
||||
macos.entitlement_any(["com.apple.security.device.camera", "..."])
|
||||
macos.category("network") # Has any network entitlement
|
||||
macos.category_any(["camera", "microphone"])
|
||||
macos.high_risk_entitlements # Has any high-risk entitlement
|
||||
|
||||
# Package receipt predicates
|
||||
macos.pkg_receipt("com.apple.pkg.Safari")
|
||||
macos.pkg_receipt("com.apple.pkg.Safari", "17.1")
|
||||
|
||||
# Metadata accessors
|
||||
macos.bundle_id # CFBundleIdentifier
|
||||
macos.team_id # Code signing team ID
|
||||
macos.min_os_version # LSMinimumSystemVersion
|
||||
```
|
||||
|
||||
### Example Policy Rules
|
||||
|
||||
```spl
|
||||
# Block unsigned third-party apps
|
||||
rule block_unsigned_apps priority 3 {
|
||||
when sbom.any_component(
|
||||
macos.bundle_id != "" and
|
||||
not macos.signed and
|
||||
not macos.bundle_id.startswith("com.apple.")
|
||||
)
|
||||
then status := "blocked"
|
||||
because "Unsigned third-party macOS applications are not permitted."
|
||||
}
|
||||
|
||||
# Warn on high-risk entitlements
|
||||
rule warn_high_risk_entitlements priority 4 {
|
||||
when sbom.any_component(macos.high_risk_entitlements)
|
||||
then status := "warn"
|
||||
because "Application requests high-risk entitlements (camera, microphone, etc.)."
|
||||
}
|
||||
|
||||
# Require hardened runtime for non-Apple apps
|
||||
rule require_hardened_runtime priority 5 {
|
||||
when sbom.any_component(
|
||||
macos.signed and
|
||||
not macos.hardened_runtime and
|
||||
not macos.bundle_id.startswith("com.apple.")
|
||||
)
|
||||
then status := "warn"
|
||||
because "Third-party apps should enable hardened runtime."
|
||||
}
|
||||
```
|
||||
|
||||
## Disk Space Requirements
|
||||
|
||||
| Component | Estimated Size | Notes |
|
||||
|-----------|---------------|-------|
|
||||
| Homebrew core tap snapshot | ~400MB | Compressed git clone |
|
||||
| Homebrew cask tap snapshot | ~150MB | Compressed git clone |
|
||||
| Apple certificate cache | ~5MB | Root + WWDR chains |
|
||||
| CRL/OCSP cache | ~10MB | Periodic refresh needed |
|
||||
| **Total** | ~565MB | Per release cycle |
|
||||
|
||||
## Validation Scripts
|
||||
|
||||
### Verify Offline Readiness
|
||||
|
||||
```bash
|
||||
# Check Homebrew mirror integrity
|
||||
stellaops-cli offline verify-homebrew \
|
||||
--mirror /opt/stellaops/mirror/homebrew-core \
|
||||
--manifest /opt/stellaops/mirror/homebrew-core.manifest.json
|
||||
|
||||
# Verify Apple certificate chain
|
||||
stellaops-cli offline verify-apple-certs \
|
||||
--cache /opt/stellaops/cache/apple-certs \
|
||||
--require wwdr
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- `docs/modules/scanner/design/macos-analyzer.md` - Analyzer design specification
|
||||
- `docs/modules/airgap/guides/mirror-bundles.md` - General mirroring patterns
|
||||
- Apple Developer Documentation: Code Signing Guide

docs/modules/airgap/guides/mirror-bundles.md (new file, 28 lines)

# Mirror Bundles (Airgap 56-003)
|
||||
|
||||
Defines the mirror bundle format and validation workflow for sealed deployments.
|
||||
|
||||
## Contents
|
||||
- Images/charts: OCI artifacts exported with digests + SBOMs.
|
||||
- Manifests: `manifest.json` with entries:
|
||||
- `bundleId`, `mirrorGeneration`, `createdAt`, `producer` (export center), `hashes` (sha256 list)
|
||||
- `dsseEnvelopeHash` for signed manifest (if available)
|
||||
- `files[]`: path, sha256, size, mediaType
|
||||
- Transparency: optional TUF metadata (`timestamp.json`, `snapshot.json`) for replay protection.
|
||||
|
||||
## Validation steps
|
||||
1. Verify `manifest.json` sha256 matches provided hash.
|
||||
2. If DSSE present, verify signature against offline trust roots.
|
||||
3. Validate Merkle root (if included) over `files[]` hashes.
|
||||
4. For each OCI artifact, confirm digest matches and SBOM present.
|
||||
5. Record `mirrorGeneration` and manifest hash; store in audit log and timeline event.
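
A minimal shell sketch for step 1 and the per-file hash portion of step 4, assuming the `files[]` fields listed above; DSSE and Merkle checks (steps 2-3) still need the signing tooling referenced elsewhere in these guides.

```bash
# Step 1: manifest hash must match the value provided out-of-band with the bundle.
EXPECTED_SHA="<provided-sha256>"
ACTUAL_SHA=$(sha256sum manifest.json | cut -d' ' -f1)
[ "$EXPECTED_SHA" = "$ACTUAL_SHA" ] || { echo "manifest hash mismatch" >&2; exit 1; }

# Step 4 (hash portion): every files[] entry must match the artifact on disk.
jq -r '.files[] | "\(.sha256)  \(.path)"' manifest.json | sha256sum -c -
```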
|
||||
|
||||
## Workflow
|
||||
- Export Center produces bundle + manifest; Attestor/Excititor importers validate before ingest.
|
||||
- Bundle consumers must refuse imports if any hash/signature fails.
|
||||
- Keep format stable; any schema change bumps `manifestVersion` in `manifest.json`.
|
||||
|
||||
## Determinism
|
||||
- Sort `files[]` by path; compute hashes with UTF-8 canonical paths.
|
||||
- Use ISO-8601 UTC timestamps in manifests.
|
||||
- Do not include host-specific paths or timestamps in tar layers.

docs/modules/airgap/guides/offline-bundle-format.md (new file, 213 lines)

# Offline Bundle Format (.stella.bundle.tgz)
|
||||
|
||||
> Sprint: SPRINT_3603_0001_0001
|
||||
> Module: ExportCenter
|
||||
|
||||
This document describes the `.stella.bundle.tgz` format for portable, signed, verifiable evidence packages.
|
||||
|
||||
## Overview
|
||||
|
||||
The offline bundle is a self-contained archive containing all evidence and artifacts needed for offline triage of security findings. Bundles are:
|
||||
|
||||
- **Portable**: Single file that can be transferred to air-gapped environments
|
||||
- **Signed**: DSSE-signed manifest for authenticity verification
|
||||
- **Verifiable**: Content-addressable with SHA-256 hashes for integrity
|
||||
- **Complete**: Contains all data needed for offline decision-making
|
||||
|
||||
## File Format
|
||||
|
||||
```
|
||||
{alert-id}.stella.bundle.tgz
|
||||
├── manifest.json # Bundle manifest (DSSE-signed)
|
||||
├── metadata/
|
||||
│ ├── alert.json # Alert metadata snapshot
|
||||
│ └── generation-info.json # Bundle generation metadata
|
||||
├── evidence/
|
||||
│ ├── reachability-proof.json # Call-graph reachability evidence
|
||||
│ ├── callstack.json # Exploitability call stacks
|
||||
│ └── provenance.json # Build provenance attestations
|
||||
├── vex/
|
||||
│ ├── decisions.ndjson # VEX decision history (NDJSON)
|
||||
│ └── current-status.json # Current VEX status
|
||||
├── sbom/
|
||||
│ ├── current.cdx.json # Current SBOM slice (CycloneDX)
|
||||
│ └── baseline.cdx.json # Baseline SBOM for diff
|
||||
├── diff/
|
||||
│ └── sbom-delta.json # SBOM delta changes
|
||||
└── attestations/
|
||||
├── bundle.dsse.json # DSSE envelope for bundle
|
||||
└── evidence.dsse.json # Evidence attestation chain
|
||||
```
|
||||
|
||||
## Manifest Schema
|
||||
|
||||
The `manifest.json` file follows this schema:
|
||||
|
||||
```json
|
||||
{
|
||||
"bundle_format_version": "1.0.0",
|
||||
"bundle_id": "abc123def456...",
|
||||
"alert_id": "alert-789",
|
||||
"created_at": "2024-12-15T10:00:00Z",
|
||||
"created_by": "user@example.com",
|
||||
"stellaops_version": "1.5.0",
|
||||
"entries": [
|
||||
{
|
||||
"path": "metadata/alert.json",
|
||||
"hash": "sha256:...",
|
||||
"size": 1234,
|
||||
"content_type": "application/json"
|
||||
}
|
||||
],
|
||||
"root_hash": "sha256:...",
|
||||
"signature": {
|
||||
"algorithm": "ES256",
|
||||
"key_id": "signing-key-001",
|
||||
"value": "..."
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Manifest Fields
|
||||
|
||||
| Field | Type | Required | Description |
|
||||
|-------|------|----------|-------------|
|
||||
| `bundle_format_version` | string | Yes | Format version (semver) |
|
||||
| `bundle_id` | string | Yes | Unique bundle identifier |
|
||||
| `alert_id` | string | Yes | Source alert identifier |
|
||||
| `created_at` | ISO 8601 | Yes | Bundle creation timestamp (UTC) |
|
||||
| `created_by` | string | Yes | Actor who created the bundle |
|
||||
| `stellaops_version` | string | Yes | StellaOps version that created bundle |
|
||||
| `entries` | array | Yes | List of content entries with hashes |
|
||||
| `root_hash` | string | Yes | Merkle root of all entry hashes |
|
||||
| `signature` | object | No | DSSE signature (if signed) |
|
||||
|
||||
## Entry Schema
|
||||
|
||||
Each entry in the manifest:
|
||||
|
||||
```json
|
||||
{
|
||||
"path": "evidence/reachability-proof.json",
|
||||
"hash": "sha256:abc123...",
|
||||
"size": 2048,
|
||||
"content_type": "application/json",
|
||||
"compression": null
|
||||
}
|
||||
```
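
After extracting a bundle, each entry can be checked against the manifest with a short loop; this sketch assumes the `path` and `hash` fields shown above and a working directory containing the extracted contents.

```bash
# Verify every manifest entry hash after extracting the bundle into ./bundle
cd bundle
jq -r '.entries[] | "\(.hash)  \(.path)"' manifest.json \
  | sed 's/^sha256://' \
  | sha256sum -c -
```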
|
||||
|
||||
## DSSE Signing
|
||||
|
||||
Bundles support DSSE (Dead Simple Signing Envelope) signing:
|
||||
|
||||
```json
|
||||
{
|
||||
"payloadType": "application/vnd.stellaops.bundle.manifest+json",
|
||||
"payload": "<base64-encoded manifest>",
|
||||
"signatures": [
|
||||
{
|
||||
"keyid": "signing-key-001",
|
||||
"sig": "<base64-encoded signature>"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Creation
|
||||
|
||||
### API Endpoint
|
||||
|
||||
```http
|
||||
GET /v1/alerts/{alertId}/bundle
|
||||
Authorization: Bearer <token>
|
||||
|
||||
Response: application/gzip
|
||||
Content-Disposition: attachment; filename="alert-123.stella.bundle.tgz"
|
||||
```
|
||||
|
||||
### Programmatic
|
||||
|
||||
```csharp
|
||||
var packager = services.GetRequiredService<IOfflineBundlePackager>();
|
||||
|
||||
var result = await packager.CreateBundleAsync(new BundleRequest
|
||||
{
|
||||
AlertId = "alert-123",
|
||||
ActorId = "user@example.com",
|
||||
IncludeVexHistory = true,
|
||||
IncludeSbomSlice = true
|
||||
});
|
||||
|
||||
// result.Content contains the tarball stream
|
||||
// result.ManifestHash contains the verification hash
|
||||
```
|
||||
|
||||
## Verification
|
||||
|
||||
### API Endpoint
|
||||
|
||||
```http
|
||||
POST /v1/alerts/{alertId}/bundle/verify
|
||||
Content-Type: application/json
|
||||
|
||||
{
|
||||
"bundle_hash": "sha256:abc123...",
|
||||
"signature": "<optional DSSE signature>"
|
||||
}
|
||||
|
||||
Response:
|
||||
{
|
||||
"is_valid": true,
|
||||
"hash_valid": true,
|
||||
"chain_valid": true,
|
||||
"signature_valid": true,
|
||||
"verified_at": "2024-12-15T10:00:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
### Programmatic
|
||||
|
||||
```csharp
|
||||
var verification = await packager.VerifyBundleAsync(
|
||||
bundlePath: "/path/to/bundle.stella.bundle.tgz",
|
||||
expectedHash: "sha256:abc123...");
|
||||
|
||||
if (!verification.IsValid)
|
||||
{
|
||||
Console.WriteLine($"Verification failed: {string.Join(", ", verification.Errors)}");
|
||||
}
|
||||
```
|
||||
|
||||
## CLI Usage
|
||||
|
||||
```bash
|
||||
# Export bundle
|
||||
stellaops alert bundle export --alert-id alert-123 --output ./bundles/
|
||||
|
||||
# Verify bundle
|
||||
stellaops alert bundle verify --file ./bundles/alert-123.stella.bundle.tgz
|
||||
|
||||
# Import bundle (air-gapped instance)
|
||||
stellaops alert bundle import --file ./bundles/alert-123.stella.bundle.tgz
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. **Hash Verification**: Always verify bundle hash before processing
|
||||
2. **Signature Validation**: Verify DSSE signature if present
|
||||
3. **Content Validation**: Validate JSON schemas after extraction
|
||||
4. **Size Limits**: Enforce maximum bundle size limits (default: 100MB)
|
||||
5. **Path Traversal**: Tarball extraction must prevent path traversal attacks
|
||||
|
||||
## Versioning
|
||||
|
||||
| Format Version | Changes | Min StellaOps Version |
|
||||
|----------------|---------|----------------------|
|
||||
| 1.0.0 | Initial format | 1.0.0 |
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Evidence Bundle Envelope](./evidence-bundle-envelope.md)
|
||||
- [DSSE Signing Guide](./dsse-signing.md)
|
||||
- [Offline Kit Guide](../OFFLINE_KIT.md)
|
||||
- [API Reference](../api/evidence-decision-api.openapi.yaml)

docs/modules/airgap/guides/offline-parity-verification.md (new file, 518 lines)

# Offline Parity Verification
|
||||
|
||||
**Last Updated:** 2025-12-14
|
||||
**Next Review:** 2026-03-14
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
This document defines the methodology for verifying that StellaOps scanner produces **identical results** in offline/air-gapped environments compared to connected deployments. Parity verification ensures that security decisions made in disconnected environments are equivalent to those made with full network access.
|
||||
|
||||
---
|
||||
|
||||
## 1. PARITY VERIFICATION OBJECTIVES
|
||||
|
||||
### 1.1 Core Guarantees
|
||||
|
||||
| Guarantee | Description | Target |
|
||||
|-----------|-------------|--------|
|
||||
| **Bitwise Fidelity** | Scan outputs are byte-identical offline vs online | 100% |
|
||||
| **Semantic Fidelity** | Same vulnerabilities, severities, and verdicts | 100% |
|
||||
| **Temporal Parity** | Same results given identical feed snapshots | 100% |
|
||||
| **Policy Parity** | Same pass/fail decisions with identical policies | 100% |
|
||||
|
||||
### 1.2 What Parity Does NOT Cover
|
||||
|
||||
- **Feed freshness**: Offline feeds may be hours/days behind live feeds (by design)
|
||||
- **Network-only enrichment**: EPSS lookups, live KEV checks (graceful degradation applies)
|
||||
- **Transparency log submission**: Rekor entries created only when connected
|
||||
|
||||
---
|
||||
|
||||
## 2. TEST METHODOLOGY
|
||||
|
||||
### 2.1 Environment Configuration
|
||||
|
||||
#### Connected Environment
|
||||
|
||||
```yaml
|
||||
environment:
|
||||
mode: connected
|
||||
network: enabled
|
||||
feeds:
|
||||
sources: [osv, ghsa, nvd]
|
||||
refresh: live
|
||||
rekor: enabled
|
||||
epss: enabled
|
||||
timestamp_source: ntp
|
||||
```
|
||||
|
||||
#### Offline Environment
|
||||
|
||||
```yaml
|
||||
environment:
|
||||
mode: offline
|
||||
network: disabled
|
||||
feeds:
|
||||
sources: [local-bundle]
|
||||
refresh: none
|
||||
rekor: offline-snapshot
|
||||
epss: bundled-cache
|
||||
timestamp_source: frozen
|
||||
timestamp_value: "2025-12-14T00:00:00Z"
|
||||
```
|
||||
|
||||
### 2.2 Test Procedure
|
||||
|
||||
```
|
||||
PARITY VERIFICATION PROCEDURE v1.0
|
||||
══════════════════════════════════
|
||||
|
||||
PHASE 1: BUNDLE CAPTURE (Connected Environment)
|
||||
─────────────────────────────────────────────────
|
||||
1. Capture current feed state:
|
||||
- Record feed version/digest
|
||||
- Snapshot EPSS scores (top 1000 CVEs)
|
||||
- Record KEV list state
|
||||
|
||||
2. Run connected scan:
|
||||
stellaops scan --image <test-image> \
|
||||
--format json \
|
||||
--output connected-scan.json \
|
||||
--receipt connected-receipt.json
|
||||
|
||||
3. Export offline bundle:
|
||||
stellaops offline bundle export \
|
||||
--feeds-snapshot \
|
||||
--epss-cache \
|
||||
--output parity-bundle-$(date +%Y%m%d).tar.zst
|
||||
|
||||
PHASE 2: OFFLINE SCAN (Air-Gapped Environment)
|
||||
───────────────────────────────────────────────
|
||||
1. Import bundle:
|
||||
stellaops offline bundle import parity-bundle-*.tar.zst
|
||||
|
||||
2. Freeze clock to bundle timestamp:
|
||||
export STELLAOPS_DETERMINISM_TIMESTAMP="2025-12-14T00:00:00Z"
|
||||
|
||||
3. Run offline scan:
|
||||
stellaops scan --image <test-image> \
|
||||
--format json \
|
||||
--output offline-scan.json \
|
||||
--receipt offline-receipt.json \
|
||||
--offline-mode
|
||||
|
||||
PHASE 3: PARITY COMPARISON
|
||||
──────────────────────────
|
||||
1. Compare findings digests:
|
||||
diff <(jq -S '.findings | sort_by(.id)' connected-scan.json) \
|
||||
<(jq -S '.findings | sort_by(.id)' offline-scan.json)
|
||||
|
||||
2. Compare policy decisions:
|
||||
diff <(jq -S '.policyDecision' connected-scan.json) \
|
||||
<(jq -S '.policyDecision' offline-scan.json)
|
||||
|
||||
3. Compare receipt input hashes:
|
||||
jq '.inputHash' connected-receipt.json
|
||||
jq '.inputHash' offline-receipt.json
|
||||
# MUST be identical if same bundle used
|
||||
|
||||
PHASE 4: RECORD RESULTS
|
||||
───────────────────────
|
||||
1. Generate parity report:
|
||||
stellaops parity report \
|
||||
--connected connected-scan.json \
|
||||
--offline offline-scan.json \
|
||||
--output parity-report-$(date +%Y%m%d).json
|
||||
```
|
||||
|
||||
### 2.3 Test Image Matrix
|
||||
|
||||
Run parity tests against this representative image set:
|
||||
|
||||
| Image | Category | Expected Vulns | Notes |
|
||||
|-------|----------|----------------|-------|
|
||||
| `alpine:3.19` | Minimal | ~5 | Fast baseline |
|
||||
| `debian:12-slim` | Standard | ~40 | OS package focus |
|
||||
| `node:20-alpine` | Application | ~100 | npm + OS packages |
|
||||
| `python:3.12` | Application | ~150 | pip + OS packages |
|
||||
| `dotnet/aspnet:8.0` | Application | ~75 | NuGet + OS packages |
|
||||
| `postgres:16-alpine` | Database | ~70 | Database + OS |
|
||||
|
||||
---
|
||||
|
||||
## 3. COMPARISON CRITERIA
|
||||
|
||||
### 3.1 Bitwise Comparison
|
||||
|
||||
Compare canonical JSON outputs after normalization:
|
||||
|
||||
```bash
|
||||
# Canonical comparison script
|
||||
canonical_compare() {
|
||||
local connected="$1"
|
||||
local offline="$2"
|
||||
|
||||
# Normalize both outputs
|
||||
jq -S . "$connected" > /tmp/connected-canonical.json
|
||||
jq -S . "$offline" > /tmp/offline-canonical.json
|
||||
|
||||
# Compute hashes
|
||||
CONNECTED_HASH=$(sha256sum /tmp/connected-canonical.json | cut -d' ' -f1)
|
||||
OFFLINE_HASH=$(sha256sum /tmp/offline-canonical.json | cut -d' ' -f1)
|
||||
|
||||
if [[ "$CONNECTED_HASH" == "$OFFLINE_HASH" ]]; then
|
||||
echo "PASS: Bitwise identical"
|
||||
return 0
|
||||
else
|
||||
echo "FAIL: Hash mismatch"
|
||||
echo " Connected: $CONNECTED_HASH"
|
||||
echo " Offline: $OFFLINE_HASH"
|
||||
diff --color /tmp/connected-canonical.json /tmp/offline-canonical.json
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
```
|
||||
|
||||
### 3.2 Semantic Comparison
|
||||
|
||||
When bitwise comparison fails, perform semantic comparison:
|
||||
|
||||
| Field | Comparison Rule | Allowed Variance |
|
||||
|-------|-----------------|------------------|
|
||||
| `findings[].id` | Exact match | None |
|
||||
| `findings[].severity` | Exact match | None |
|
||||
| `findings[].cvss.score` | Exact match | None |
|
||||
| `findings[].cvss.vector` | Exact match | None |
|
||||
| `findings[].affected` | Exact match | None |
|
||||
| `findings[].reachability` | Exact match | None |
|
||||
| `sbom.components[].purl` | Exact match | None |
|
||||
| `sbom.components[].version` | Exact match | None |
|
||||
| `metadata.timestamp` | Ignored | Expected to differ |
|
||||
| `metadata.scanId` | Ignored | Expected to differ |
|
||||
| `metadata.environment` | Ignored | Expected to differ |
|
||||
|
||||
### 3.3 Fields Excluded from Comparison
|
||||
|
||||
These fields are expected to differ and are excluded from parity checks:
|
||||
|
||||
```json
|
||||
{
|
||||
"excludedFields": [
|
||||
"$.metadata.scanId",
|
||||
"$.metadata.timestamp",
|
||||
"$.metadata.hostname",
|
||||
"$.metadata.environment.network",
|
||||
"$.attestations[*].rekorEntry",
|
||||
"$.metadata.epssEnrichedAt"
|
||||
]
|
||||
}
|
||||
```
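
A sketch of how this exclusion list can be applied before comparing: the jq `del` paths mirror the JSONPath expressions above, with the wildcard attestation entry handled via `map`.

```bash
# Strip fields that are expected to differ, then diff the canonical forms.
normalize() {
  jq -S 'del(.metadata.scanId,
             .metadata.timestamp,
             .metadata.hostname,
             .metadata.environment.network,
             .metadata.epssEnrichedAt)
         | if .attestations then .attestations |= map(del(.rekorEntry)) else . end' "$1"
}

diff <(normalize connected-scan.json) <(normalize offline-scan.json)
```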
|
||||
|
||||
### 3.4 Graceful Degradation Fields
|
||||
|
||||
Fields that may be absent in offline mode (acceptable):
|
||||
|
||||
| Field | Online | Offline | Parity Rule |
|
||||
|-------|--------|---------|-------------|
|
||||
| `epssScore` | Present | May be stale/absent | Check if bundled |
|
||||
| `kevStatus` | Live | Bundled snapshot | Compare against bundle date |
|
||||
| `rekorEntry` | Present | Absent | Exclude from comparison |
|
||||
| `fulcioChain` | Present | Absent | Exclude from comparison |
|
||||
|
||||
---
|
||||
|
||||
## 4. AUTOMATED PARITY CI
|
||||
|
||||
### 4.1 CI Workflow
|
||||
|
||||
```yaml
|
||||
# .gitea/workflows/offline-parity.yml
|
||||
name: Offline Parity Verification
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: '0 3 * * 1' # Weekly Monday 3am
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
parity-test:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Setup .NET
|
||||
uses: actions/setup-dotnet@v4
|
||||
with:
|
||||
dotnet-version: '10.0.x'
|
||||
|
||||
- name: Set determinism environment
|
||||
run: |
|
||||
echo "TZ=UTC" >> $GITHUB_ENV
|
||||
echo "LC_ALL=C" >> $GITHUB_ENV
|
||||
echo "STELLAOPS_DETERMINISM_SEED=42" >> $GITHUB_ENV
|
||||
|
||||
- name: Capture connected baseline
|
||||
run: scripts/parity/capture-connected.sh
|
||||
|
||||
- name: Export offline bundle
|
||||
run: scripts/parity/export-bundle.sh
|
||||
|
||||
- name: Run offline scan (sandboxed)
|
||||
run: |
|
||||
docker run --network none \
|
||||
-v $(pwd)/bundle:/bundle:ro \
|
||||
-v $(pwd)/results:/results \
|
||||
stellaops/scanner:latest \
|
||||
scan --offline-mode --bundle /bundle
|
||||
|
||||
- name: Compare parity
|
||||
run: scripts/parity/compare-parity.sh
|
||||
|
||||
- name: Upload parity report
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: parity-report
|
||||
path: results/parity-report-*.json
|
||||
```
|
||||
|
||||
### 4.2 Parity Test Script
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# scripts/parity/compare-parity.sh
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
CONNECTED_DIR="results/connected"
|
||||
OFFLINE_DIR="results/offline"
|
||||
REPORT_FILE="results/parity-report-$(date +%Y%m%d).json"
|
||||
|
||||
declare -a IMAGES=(
|
||||
"alpine:3.19"
|
||||
"debian:12-slim"
|
||||
"node:20-alpine"
|
||||
"python:3.12"
|
||||
"mcr.microsoft.com/dotnet/aspnet:8.0"
|
||||
"postgres:16-alpine"
|
||||
)
|
||||
|
||||
TOTAL=0
|
||||
PASSED=0
|
||||
FAILED=0
|
||||
RESULTS=()
|
||||
|
||||
for image in "${IMAGES[@]}"; do
|
||||
TOTAL=$((TOTAL + 1))
|
||||
image_hash=$(echo "$image" | sha256sum | cut -c1-12)
|
||||
|
||||
connected_file="${CONNECTED_DIR}/${image_hash}-scan.json"
|
||||
offline_file="${OFFLINE_DIR}/${image_hash}-scan.json"
|
||||
|
||||
# Compare findings
|
||||
connected_findings=$(jq -S '.findings | sort_by(.id) | map(del(.metadata.timestamp))' "$connected_file")
|
||||
offline_findings=$(jq -S '.findings | sort_by(.id) | map(del(.metadata.timestamp))' "$offline_file")
|
||||
|
||||
connected_hash=$(echo "$connected_findings" | sha256sum | cut -d' ' -f1)
|
||||
offline_hash=$(echo "$offline_findings" | sha256sum | cut -d' ' -f1)
|
||||
|
||||
if [[ "$connected_hash" == "$offline_hash" ]]; then
|
||||
PASSED=$((PASSED + 1))
|
||||
status="PASS"
|
||||
else
|
||||
FAILED=$((FAILED + 1))
|
||||
status="FAIL"
|
||||
fi
|
||||
|
||||
RESULTS+=("{\"image\":\"$image\",\"status\":\"$status\",\"connectedHash\":\"$connected_hash\",\"offlineHash\":\"$offline_hash\"}")
|
||||
done
|
||||
|
||||
# Generate report
|
||||
cat > "$REPORT_FILE" <<EOF
|
||||
{
|
||||
"reportDate": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"bundleVersion": "$(cat bundle/version.txt)",
|
||||
"summary": {
|
||||
"total": $TOTAL,
|
||||
"passed": $PASSED,
|
||||
"failed": $FAILED,
|
||||
"parityRate": $(echo "scale=4; $PASSED / $TOTAL" | bc)
|
||||
},
|
||||
"results": [$(IFS=,; echo "${RESULTS[*]}")]
|
||||
}
|
||||
EOF
|
||||
|
||||
echo "Parity Report: $PASSED/$TOTAL passed ($(echo "scale=2; $PASSED * 100 / $TOTAL" | bc)%)"
|
||||
|
||||
if [[ $FAILED -gt 0 ]]; then
|
||||
echo "PARITY VERIFICATION FAILED"
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. PARITY RESULTS
|
||||
|
||||
### 5.1 Latest Verification Results
|
||||
|
||||
| Date | Bundle Version | Images Tested | Parity Rate | Notes |
|
||||
|------|---------------|---------------|-------------|-------|
|
||||
| 2025-12-14 | 2025.12.0 | 6 | 100% | Baseline established |
|
||||
| — | — | — | — | — |
|
||||
|
||||
### 5.2 Historical Parity Tracking
|
||||
|
||||
```sql
|
||||
-- Query for parity trend analysis
|
||||
SELECT
|
||||
date_trunc('week', report_date) AS week,
|
||||
AVG(parity_rate) AS avg_parity,
|
||||
MIN(parity_rate) AS min_parity,
|
||||
COUNT(*) AS test_runs
|
||||
FROM parity_reports
|
||||
WHERE report_date >= NOW() - INTERVAL '90 days'
|
||||
GROUP BY 1
|
||||
ORDER BY 1 DESC;
|
||||
```
|
||||
|
||||
### 5.3 Parity Database Schema
|
||||
|
||||
```sql
|
||||
CREATE TABLE scanner.parity_reports (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
report_date TIMESTAMPTZ NOT NULL,
|
||||
bundle_version TEXT NOT NULL,
|
||||
bundle_digest TEXT NOT NULL,
|
||||
total_images INT NOT NULL,
|
||||
passed_images INT NOT NULL,
|
||||
failed_images INT NOT NULL,
|
||||
parity_rate NUMERIC(5,4) NOT NULL,
|
||||
results JSONB NOT NULL,
|
||||
ci_run_id TEXT,
|
||||
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
|
||||
);
|
||||
|
||||
CREATE INDEX idx_parity_reports_date ON scanner.parity_reports(report_date DESC);
|
||||
CREATE INDEX idx_parity_reports_bundle ON scanner.parity_reports(bundle_version);
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. KNOWN LIMITATIONS
|
||||
|
||||
### 6.1 Acceptable Differences
|
||||
|
||||
| Scenario | Expected Behavior | Parity Impact |
|
||||
|----------|-------------------|---------------|
|
||||
| **EPSS scores** | Use bundled cache (may be stale) | None if cache bundled |
|
||||
| **KEV status** | Use bundled snapshot | None if snapshot bundled |
|
||||
| **Rekor entries** | Not created offline | Excluded from comparison |
|
||||
| **Timestamp fields** | Differ by design | Excluded from comparison |
|
||||
| **Network-only advisories** | Not available offline | Feed drift (documented) |
|
||||
|
||||
### 6.2 Known Edge Cases
|
||||
|
||||
1. **Race conditions during bundle capture**: If feeds update between the connected scan and the bundle export, the two runs evaluate different feed states and parity breaks. Mitigation: capture the bundle first, then run the connected scan so both use the same feed state.
|
||||
|
||||
2. **Clock drift**: Offline environments with drifted clocks may compute different freshness scores. Mitigation: Always use frozen timestamps from bundle.
|
||||
|
||||
3. **Locale differences**: String sorting may differ across locales. Mitigation: Force `LC_ALL=C` in both environments.
|
||||
|
||||
4. **Floating point rounding**: CVSS v4 MacroVector interpolation may have micro-differences. Mitigation: Use integer basis points throughout.
|
||||
|
||||
### 6.3 Out of Scope
|
||||
|
||||
The following are intentionally NOT covered by parity verification:
|
||||
|
||||
- Real-time threat intelligence (requires network)
|
||||
- Live vulnerability disclosure (requires network)
|
||||
- Transparency log inclusion proofs (requires Rekor)
|
||||
- OIDC/Fulcio certificate chains (requires network)
|
||||
|
||||
---
|
||||
|
||||
## 7. TROUBLESHOOTING
|
||||
|
||||
### 7.1 Common Parity Failures
|
||||
|
||||
| Symptom | Likely Cause | Resolution |
|
||||
|---------|--------------|------------|
|
||||
| Different vulnerability counts | Feed version mismatch | Verify bundle digest matches |
|
||||
| Different CVSS scores | CVSS v4 calculation issue | Check MacroVector lookup parity |
|
||||
| Different severity labels | Threshold configuration | Compare policy bundles |
|
||||
| Missing EPSS data | EPSS cache not bundled | Re-export with `--epss-cache` |
|
||||
| Different component counts | SBOM generation variance | Check analyzer versions |
|
||||
|
||||
### 7.2 Debug Commands
|
||||
|
||||
```bash
|
||||
# Compare feed versions
|
||||
stellaops feeds version --connected
|
||||
stellaops feeds version --offline --bundle ./bundle
|
||||
|
||||
# Compare policy digests
|
||||
stellaops policy digest --connected
|
||||
stellaops policy digest --offline --bundle ./bundle
|
||||
|
||||
# Detailed diff of findings
|
||||
stellaops parity diff \
|
||||
--connected connected-scan.json \
|
||||
--offline offline-scan.json \
|
||||
--verbose
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 8. METRICS AND MONITORING
|
||||
|
||||
### 8.1 Prometheus Metrics
|
||||
|
||||
```
|
||||
# Parity verification metrics
|
||||
parity_test_total{status="pass|fail"}
|
||||
parity_test_duration_seconds (histogram)
|
||||
parity_bundle_age_seconds (gauge)
|
||||
parity_findings_diff_count (gauge)
|
||||
```
|
||||
|
||||
### 8.2 Alerting Rules
|
||||
|
||||
```yaml
|
||||
groups:
|
||||
- name: offline-parity
|
||||
rules:
|
||||
- alert: ParityTestFailed
|
||||
expr: parity_test_total{status="fail"} > 0
|
||||
for: 0m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "Offline parity test failed"
|
||||
|
||||
- alert: ParityRateDegraded
|
||||
expr: |
|
||||
(sum(parity_test_total{status="pass"}) /
|
||||
sum(parity_test_total)) < 0.95
|
||||
for: 1h
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: "Parity rate below 95%"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 9. REFERENCES
|
||||
|
||||
- [Offline Update Kit (OUK)](../OFFLINE_KIT.md)
|
||||
- [Offline and Air-Gap Technical Reference](../product-advisories/14-Dec-2025%20-%20Offline%20and%20Air-Gap%20Technical%20Reference.md)
|
||||
- [Determinism and Reproducibility Technical Reference](../product-advisories/14-Dec-2025%20-%20Determinism%20and%20Reproducibility%20Technical%20Reference.md)
|
||||
- [Determinism CI Harness](../modules/scanner/design/determinism-ci-harness.md)
|
||||
- [Performance Baselines](../benchmarks/performance-baselines.md)
|
||||
|
||||
---
|
||||
|
||||
**Document Version**: 1.0
|
||||
**Target Platform**: .NET 10, PostgreSQL >=16

docs/modules/airgap/guides/operations.md (new file, 34 lines)

# Airgap Operations (DOCS-AIRGAP-57-004)
|
||||
|
||||
Runbooks for imports, failure recovery, and auditing in sealed/constrained modes.
|
||||
|
||||
## Imports
|
||||
1) Verify bundle hash/DSSE (see `mirror-bundles.md`).
|
||||
2) `stella airgap import --bundle ... --generation N --dry-run` (optional).
|
||||
3) Apply network policy: ensure sealed/constrained mode set correctly.
|
||||
4) Import with `stella airgap import ...` and watch logs.
|
||||
5) Confirm timeline event emitted (bundleId, mirrorGeneration, actor).
|
||||
|
||||
## Failure recovery
|
||||
- Hash/signature mismatch: reject bundle; re-request export; log incident.
|
||||
- Partial import: rerun with `--force` after cleaning registry/cache; keep previous generation for rollback.
|
||||
- Staleness breach: if imports unavailable, raise amber alert; if >72h, go red and halt new ingest until refreshed.
|
||||
- Time anchor expired: apply new anchor from trusted media before continuing operations.
|
||||
|
||||
## Auditing
|
||||
- Record every import in audit log: `{tenant, mirrorGeneration, manifestHash, actor, sealed}`.
|
||||
- Preserve manifests and hashes for at least two generations.
|
||||
- Periodically (daily) run `stella airgap list --format json` and archive the output (see the sketch below).
|
||||
- Ensure logs are immutable (append-only) in sealed environments.
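
The daily listing step above can be wrapped in a small cron-friendly script; only the `stella airgap list --format json` command comes from this runbook, the archive location is an example.

```bash
#!/usr/bin/env bash
# Archive the daily airgap inventory with a content hash for the audit trail.
set -euo pipefail

ARCHIVE_DIR="/var/lib/stellaops/airgap-audit"   # example location
STAMP=$(date -u +%Y%m%d)

mkdir -p "$ARCHIVE_DIR"
stella airgap list --format json > "$ARCHIVE_DIR/airgap-list-$STAMP.json"
sha256sum "$ARCHIVE_DIR/airgap-list-$STAMP.json" >> "$ARCHIVE_DIR/airgap-list.sha256"
```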
|
||||
|
||||
## Observability
|
||||
- Monitor counters for denied egress, import success/failure, and staleness alerts.
|
||||
- Expose `/obs/airgap/status` (if available) to scrape bundle freshness.
|
||||
|
||||
## Checklist (per import)
|
||||
- [ ] Hash/DSSE verified
|
||||
- [ ] Sealed/constrained mode configured
|
||||
- [ ] Registry/cache reachable
|
||||
- [ ] Import succeeded
|
||||
- [ ] Timeline/audit recorded
|
||||
- [ ] Staleness dashboard updated

docs/modules/airgap/guides/overview.md (new file, 32 lines)

# Airgap Overview
|
||||
|
||||
This page orients teams before diving into per-component runbooks. It summarises modes, lifecycle, and governance responsibilities for sealed deployments.
|
||||
|
||||
## Modes
|
||||
- **Sealed**: deny-all egress; only preloaded bundles (mirror + bootstrap) allowed. Requires exported time anchors and offline trust roots.
|
||||
- **Constrained**: limited egress to allowlisted registries and NTP; mirror bundles still preferred.
|
||||
- **Connected**: full egress for staging; must remain policy-compatible with sealed mode.
|
||||
|
||||
## Lifecycle
|
||||
1. **Prepare bundles**: export mirror + bootstrap packs (images/charts, SBOMs, DSSE metadata) signed and hashed.
|
||||
2. **Stage & verify**: load bundles into the offline store, verify hashes/DSSE, record mirrorGeneration.
|
||||
3. **Activate**: flip sealed toggle; enforce deny-all egress and policy banners; register bundles with Excititor/Export Center.
|
||||
4. **Operate**: run periodic staleness checks, apply time anchors, and audit imports via timeline events.
|
||||
5. **Refresh/rollback**: import next mirrorGeneration or roll back using previous manifest + hashes.
|
||||
|
||||
## Responsibilities
|
||||
- **AirGap Controller Guild**: owns network posture (deny-all, allowlists), sealed-mode policy banners, and change control.
|
||||
- **Export Center / Evidence Locker Guilds**: produce and verify bundle manifests, DSSE envelopes, and Merkle roots.
|
||||
- **Module owners** (Excititor, Concelier, etc.): honor sealed-mode toggles, emit staleness headers, and refuse unsigned/unknown bundles.
|
||||
- **Ops/Signals Guild**: maintain time anchors and observability sinks compatible with sealed deployments.
|
||||
|
||||
## Rule banner (sealed mode)
|
||||
Display a top-of-console banner when `sealed=true`:
|
||||
- "Sealed mode: no external egress. Only registered bundles permitted. Imports logged; violations trigger audit."
|
||||
- Include current `mirrorGeneration`, bundle manifest hash, and time-anchor status.
|
||||
|
||||
## Related docs
|
||||
- `docs/modules/airgap/guides/airgap-mode.md` — deeper policy shapes per mode.
|
||||
- `docs/modules/airgap/guides/bundle-repositories.md` — mirror/bootstrap bundle structure.
|
||||
- `docs/modules/airgap/guides/staleness-and-time.md` — time anchors and staleness checks.
|
||||
- `docs/modules/airgap/guides/controller.md` / `docs/modules/airgap/guides/importer.md` — controller + importer references.

(new file, 254 lines)

# Portable Evidence Bundle Verification Guide
|
||||
|
||||
This document describes how Advisory AI teams can verify the integrity and authenticity of portable evidence bundles produced by StellaOps Excititor for sealed deployments.
|
||||
|
||||
## Overview
|
||||
|
||||
Portable evidence bundles are self-contained ZIP archives that include:
|
||||
- Evidence locker manifest with cryptographic Merkle root
|
||||
- DSSE attestation envelope (when signing is enabled)
|
||||
- Raw evidence items organized by provider
|
||||
- Audit timeline events
|
||||
- Bundle manifest with content index
|
||||
|
||||
## Bundle Structure
|
||||
|
||||
```
|
||||
evidence-bundle-{tenant}-{timestamp}.zip
|
||||
├── manifest.json # VexLockerManifest with Merkle root
|
||||
├── attestation.json # DSSE envelope (optional)
|
||||
├── evidence/
|
||||
│ └── {provider}/
|
||||
│ └── sha256_{digest}.json
|
||||
├── timeline.json # Audit timeline events
|
||||
├── bundle-manifest.json # Index of all contents
|
||||
└── VERIFY.md # Verification instructions
|
||||
```
|
||||
|
||||
## Verification Steps
|
||||
|
||||
### Step 1: Extract and Validate Structure
|
||||
|
||||
```bash
|
||||
# Extract the bundle
|
||||
unzip evidence-bundle-*.zip -d evidence-bundle/
|
||||
|
||||
# Verify expected files exist
|
||||
ls -la evidence-bundle/
|
||||
# Should see: manifest.json, bundle-manifest.json, evidence/, timeline.json, VERIFY.md
|
||||
```
|
||||
|
||||
### Step 2: Verify Evidence Item Integrity
|
||||
|
||||
Each evidence item's content hash must match its filename:
|
||||
|
||||
```bash
|
||||
cd evidence-bundle/evidence
|
||||
|
||||
# For each provider directory
|
||||
for provider in */; do
|
||||
for file in "$provider"*.json; do
|
||||
# Extract expected hash from filename (sha256_xxxx.json -> xxxx)
|
||||
expected=$(basename "$file" .json | sed 's/sha256_//')
|
||||
# Compute actual hash
|
||||
actual=$(sha256sum "$file" | cut -d' ' -f1)
|
||||
if [ "$expected" != "$actual" ]; then
|
||||
echo "MISMATCH: $file"
|
||||
fi
|
||||
done
|
||||
done
|
||||
```
|
||||
|
||||
### Step 3: Verify Merkle Root
|
||||
|
||||
The Merkle root provides cryptographic proof that all evidence items are included without modification.
|
||||
|
||||
#### Python Verification Script
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
import json
|
||||
import hashlib
|
||||
from pathlib import Path
|
||||
|
||||
def compute_merkle_root(hashes):
|
||||
"""Compute Merkle root from list of hex hashes."""
|
||||
if len(hashes) == 0:
|
||||
return hashlib.sha256(b'').hexdigest()
|
||||
if len(hashes) == 1:
|
||||
return hashes[0]
|
||||
|
||||
# Pad to even number
|
||||
if len(hashes) % 2 != 0:
|
||||
hashes = hashes + [hashes[-1]]
|
||||
|
||||
# Compute next level
|
||||
next_level = []
|
||||
for i in range(0, len(hashes), 2):
|
||||
combined = bytes.fromhex(hashes[i] + hashes[i+1])
|
||||
next_level.append(hashlib.sha256(combined).hexdigest())
|
||||
|
||||
return compute_merkle_root(next_level)
|
||||
|
||||
def verify_bundle(bundle_path):
|
||||
"""Verify a portable evidence bundle."""
|
||||
bundle_path = Path(bundle_path)
|
||||
|
||||
# Load manifest
|
||||
with open(bundle_path / 'manifest.json') as f:
|
||||
manifest = json.load(f)
|
||||
|
||||
# Extract hashes, sorted by observationId then providerId
|
||||
items = sorted(manifest['items'],
|
||||
key=lambda x: (x['observationId'], x['providerId'].lower()))
|
||||
|
||||
hashes = []
|
||||
for item in items:
|
||||
content_hash = item['contentHash']
|
||||
# Strip sha256: prefix if present
|
||||
if content_hash.startswith('sha256:'):
|
||||
content_hash = content_hash[7:]
|
||||
hashes.append(content_hash.lower())
|
||||
|
||||
# Compute Merkle root
|
||||
computed_root = 'sha256:' + compute_merkle_root(hashes)
|
||||
expected_root = manifest['merkleRoot']
|
||||
|
||||
if computed_root == expected_root:
|
||||
print(f"✓ Merkle root verified: {computed_root}")
|
||||
return True
|
||||
else:
|
||||
print(f"✗ Merkle root mismatch!")
|
||||
print(f" Expected: {expected_root}")
|
||||
print(f" Computed: {computed_root}")
|
||||
return False
|
||||
|
||||
if __name__ == '__main__':
|
||||
import sys
|
||||
if len(sys.argv) != 2:
|
||||
print(f"Usage: {sys.argv[0]} <bundle-directory>")
|
||||
sys.exit(1)
|
||||
|
||||
success = verify_bundle(sys.argv[1])
|
||||
sys.exit(0 if success else 1)
|
||||
```
|
||||
|
||||
### Step 4: Verify Attestation (if present)
|
||||
|
||||
When `attestation.json` exists, verify the DSSE envelope:
|
||||
|
||||
```bash
|
||||
# Check if attestation exists
|
||||
if [ -f "evidence-bundle/attestation.json" ]; then
|
||||
# Extract attestation metadata
|
||||
jq '.' evidence-bundle/attestation.json
|
||||
|
||||
# Verify signature using appropriate tool
|
||||
# For Sigstore/cosign attestations:
|
||||
# cosign verify-attestation --type custom ...
|
||||
fi
|
||||
```
|
||||
|
||||
#### Attestation Fields
|
||||
|
||||
| Field | Description |
|
||||
|-------|-------------|
|
||||
| `dsseEnvelope` | Base64-encoded DSSE envelope |
|
||||
| `envelopeDigest` | SHA-256 hash of the envelope |
|
||||
| `predicateType` | in-toto predicate type URI |
|
||||
| `signatureType` | Signature algorithm (e.g., "ES256") |
|
||||
| `keyId` | Signing key identifier |
|
||||
| `issuer` | Certificate issuer |
|
||||
| `subject` | Certificate subject |
|
||||
| `signedAt` | Signing timestamp (ISO-8601) |
|
||||
| `transparencyLogRef` | Rekor transparency log entry URL |
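
Building on the fields above, the recorded envelope digest can be cross-checked offline; this sketch assumes `dsseEnvelope` is base64-encoded and `envelopeDigest` is a SHA-256 hex value (with or without a `sha256:` prefix).

```bash
# Cross-check the recorded envelope digest against the embedded envelope.
claimed=$(jq -r '.envelopeDigest' evidence-bundle/attestation.json | sed 's/^sha256://')
computed=$(jq -r '.dsseEnvelope' evidence-bundle/attestation.json | base64 -d | sha256sum | cut -d' ' -f1)

if [ "$claimed" = "$computed" ]; then
  echo "envelope digest OK"
else
  echo "envelope digest mismatch" >&2
fi
```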
|
||||
|
||||
### Step 5: Validate Timeline
|
||||
|
||||
The timeline provides audit trail of bundle creation:
|
||||
|
||||
```bash
|
||||
# View timeline events
|
||||
jq '.' evidence-bundle/timeline.json
|
||||
|
||||
# Check for any failed events
|
||||
jq '.[] | select(.errorCode != null)' evidence-bundle/timeline.json
|
||||
```
|
||||
|
||||
#### Timeline Event Types
|
||||
|
||||
| Event Type | Description |
|
||||
|------------|-------------|
|
||||
| `airgap.import.started` | Bundle import initiated |
|
||||
| `airgap.import.completed` | Import succeeded |
|
||||
| `airgap.import.failed` | Import failed (check errorCode) |
|
||||
|
||||
## Error Codes Reference
|
||||
|
||||
| Code | Description | Resolution |
|
||||
|------|-------------|------------|
|
||||
| `AIRGAP_EGRESS_BLOCKED` | External URL blocked in sealed mode | Use mirror/portable media |
|
||||
| `AIRGAP_SOURCE_UNTRUSTED` | Publisher not allowlisted | Contact administrator |
|
||||
| `AIRGAP_SIGNATURE_MISSING` | Required signature absent | Re-export with signing |
|
||||
| `AIRGAP_SIGNATURE_INVALID` | Signature verification failed | Check key/certificate |
|
||||
| `AIRGAP_PAYLOAD_STALE` | Timestamp exceeds tolerance | Re-create bundle |
|
||||
| `AIRGAP_PAYLOAD_MISMATCH` | Hash doesn't match metadata | Verify transfer integrity |
|
||||
|
||||
## Advisory AI Integration
|
||||
|
||||
### Quick Integrity Check
|
||||
|
||||
For automated pipelines, use the bundle manifest:
|
||||
|
||||
```python
|
||||
import json
|
||||
|
||||
with open('bundle-manifest.json') as f:
|
||||
manifest = json.load(f)
|
||||
|
||||
# Key fields for Advisory AI
|
||||
print(f"Bundle ID: {manifest['bundleId']}")
|
||||
print(f"Merkle Root: {manifest['merkleRoot']}")
|
||||
print(f"Item Count: {manifest['itemCount']}")
|
||||
print(f"Has Attestation: {manifest['hasAttestation']}")
|
||||
```
|
||||
|
||||
### Evidence Lookup
|
||||
|
||||
Find evidence for specific observations:
|
||||
|
||||
```python
|
||||
# Index evidence by observation ID
|
||||
evidence_index = {e['observationId']: e for e in manifest['evidence']}
|
||||
|
||||
# Lookup specific observation
|
||||
obs_id = 'obs-123-abc'
|
||||
if obs_id in evidence_index:
|
||||
entry = evidence_index[obs_id]
|
||||
file_path = f"evidence/{entry['providerId']}/sha256_{entry['contentHash'][7:]}.json"
|
||||
```
|
||||
|
||||
### Provenance Chain
|
||||
|
||||
Build the complete provenance chain from the bundle (a jq walk-through follows the list):
|
||||
|
||||
1. `bundle-manifest.json` → Bundle creation metadata
|
||||
2. `manifest.json` → Evidence locker snapshot
|
||||
3. `attestation.json` → Cryptographic attestation
|
||||
4. `timeline.json` → Audit trail
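
One way to walk this chain with jq from an extracted bundle directory; the field names come from the tables and examples earlier in this guide, so treat any that differ in your bundle version as assumptions.

```bash
# Walk the provenance chain from an extracted bundle directory.
cd evidence-bundle

jq -r '"bundle: \(.bundleId)  items: \(.itemCount)"' bundle-manifest.json
jq -r '"locker merkle root: \(.merkleRoot)"' manifest.json
[ -f attestation.json ] && jq -r '"signed at: \(.signedAt) by \(.keyId)"' attestation.json
jq -r 'length as $n | "timeline events: \($n)"' timeline.json
```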
|
||||
|
||||
## Offline Verification
|
||||
|
||||
For fully air-gapped environments:
|
||||
|
||||
1. Transfer bundle via approved media
|
||||
2. Extract to isolated verification system
|
||||
3. Run verification scripts without network
|
||||
4. Document verification results for audit
|
||||
|
||||
## Support
|
||||
|
||||
For questions or issues:
|
||||
- Review bundle contents with `jq` and standard Unix tools
|
||||
- Check timeline for error codes and messages
|
||||
- Contact StellaOps support with bundle ID and merkle root

docs/modules/airgap/guides/portable-evidence.md (new file, 27 lines)

# Portable Evidence Bundles (DOCS-AIRGAP-58-004)
|
||||
|
||||
Guidance for exporting/importing portable evidence bundles across enclaves.
|
||||
|
||||
## Bundle contents
|
||||
- Evidence payloads (VEX observations/linksets) as NDJSON.
|
||||
- Timeline events and attestation DSSE envelopes.
|
||||
- Manifest with `bundleId`, `source`, `tenant`, `createdAt`, `files[]`, `dsseEnvelopeHash` (optional).
|
||||
|
||||
## Export
|
||||
- Produce from Evidence Locker/Excititor with deterministic ordering and SHA-256 hashes.
|
||||
- Include Merkle root over evidence files; store in manifest.
|
||||
- Sign manifest (DSSE) when trust roots available.
|
||||
|
||||
## Import
|
||||
- Verify manifest hash, Merkle root, and DSSE signature offline.
|
||||
- Enforce tenant scoping; refuse cross-tenant bundles.
|
||||
- Emit timeline event upon successful import.
|
||||
|
||||
## Constraints
|
||||
- No external lookups; verification uses bundled roots.
|
||||
- Max size per bundle configurable; default 500 MB.
|
||||
- Keep file paths UTF-8 and slash-separated; avoid host-specific metadata.
|
||||
|
||||
## Determinism
|
||||
- Sort files lexicographically; use ISO-8601 UTC timestamps.
|
||||
- Avoid re-compressing files; if tar is used, set deterministic headers (uid/gid=0, mtime=0), as in the sketch below.
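
If tar is used for evidence payloads, a GNU tar invocation along these lines keeps the archive byte-stable across hosts; the archive name and pinned mtime are examples.

```bash
# Deterministic tar: stable ordering, neutral ownership, pinned mtime.
tar --sort=name \
    --owner=0 --group=0 --numeric-owner \
    --mtime='UTC 1970-01-01' \
    -cf evidence-payloads.tar evidence/
```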

docs/modules/airgap/guides/proof-chain-verification.md (new file, 584 lines)

# Proof Chain Verification in Air-Gap Mode
|
||||
|
||||
> **Version**: 1.0.0
|
||||
> **Last Updated**: 2025-12-17
|
||||
> **Related**: [Proof Chain API](../api/proofs.md), [Key Rotation Runbook](../operations/key-rotation-runbook.md)
|
||||
|
||||
This document describes how to verify proof chains in air-gapped (offline) environments where Rekor transparency log access is unavailable.
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Proof chains in StellaOps consist of cryptographically-linked attestations:
|
||||
1. **Evidence statements** - Raw vulnerability findings
|
||||
2. **Reasoning statements** - Policy evaluation traces
|
||||
3. **VEX verdict statements** - Final vulnerability status determinations
|
||||
4. **Graph root statements** - Merkle root commitments to graph analysis results
|
||||
5. **Proof spine** - Merkle tree aggregating all components
|
||||
|
||||
In online mode, proof chains include Rekor inclusion proofs for transparency. In air-gap mode, verification proceeds without Rekor but maintains cryptographic integrity.
|
||||
|
||||
---
|
||||
|
||||
## Verification Levels
|
||||
|
||||
### Level 1: Content-Addressed ID Verification
|
||||
Verifies that content-addressed IDs match payload hashes.
|
||||
|
||||
```bash
|
||||
# Verify a proof bundle ID
|
||||
stellaops proof verify --offline \
|
||||
--proof-bundle sha256:1a2b3c4d... \
|
||||
--level content-id
|
||||
|
||||
# Expected output:
|
||||
# ✓ Content-addressed ID verified
|
||||
# ✓ Payload hash: sha256:1a2b3c4d...
|
||||
```
|
||||
|
||||
### Level 2: DSSE Signature Verification
|
||||
Verifies DSSE envelope signatures against trust anchors.
|
||||
|
||||
```bash
|
||||
# Verify signatures with local trust anchors
|
||||
stellaops proof verify --offline \
|
||||
--proof-bundle sha256:1a2b3c4d... \
|
||||
--anchor-file /path/to/trust-anchors.json \
|
||||
--level signature
|
||||
|
||||
# Expected output:
|
||||
# ✓ DSSE signature valid
|
||||
# ✓ Signer: key-2025-prod
|
||||
# ✓ Trust anchor: 550e8400-e29b-41d4-a716-446655440000
|
||||
```
|
||||
|
||||
### Level 3: Merkle Path Verification
|
||||
Verifies the proof spine merkle tree structure.
|
||||
|
||||
```bash
|
||||
# Verify merkle paths
|
||||
stellaops proof verify --offline \
|
||||
--proof-bundle sha256:1a2b3c4d... \
|
||||
--level merkle
|
||||
|
||||
# Expected output:
|
||||
# ✓ Merkle root verified
|
||||
# ✓ Evidence paths: 3/3 valid
|
||||
# ✓ Reasoning path: valid
|
||||
# ✓ VEX verdict path: valid
|
||||
```
|
||||
|
||||
### Level 4: Full Verification (Offline)
|
||||
Performs all verification steps except Rekor.
|
||||
|
||||
```bash
|
||||
# Full offline verification
|
||||
stellaops proof verify --offline \
|
||||
--proof-bundle sha256:1a2b3c4d... \
|
||||
--anchor-file /path/to/trust-anchors.json
|
||||
|
||||
# Expected output:
|
||||
# Proof Chain Verification
|
||||
# ═══════════════════════
|
||||
# ✓ Content-addressed IDs verified
|
||||
# ✓ DSSE signatures verified (3 envelopes)
|
||||
# ✓ Merkle paths verified
|
||||
# ⊘ Rekor verification skipped (offline mode)
|
||||
#
|
||||
# Overall: VERIFIED (offline)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Trust Anchor Distribution
|
||||
|
||||
In air-gap environments, trust anchors must be distributed out-of-band.
|
||||
|
||||
### Export Trust Anchors
|
||||
|
||||
```bash
|
||||
# On the online system, export trust anchors
|
||||
stellaops anchor export --format json > trust-anchors.json
|
||||
|
||||
# Verify export integrity
|
||||
sha256sum trust-anchors.json > trust-anchors.sha256
|
||||
```
|
||||
|
||||
### Trust Anchor File Format
|
||||
|
||||
```json
|
||||
{
|
||||
"version": "1.0",
|
||||
"exportedAt": "2025-12-17T00:00:00Z",
|
||||
"anchors": [
|
||||
{
|
||||
"trustAnchorId": "550e8400-e29b-41d4-a716-446655440000",
|
||||
"purlPattern": "pkg:*",
|
||||
"allowedKeyids": ["key-2024-prod", "key-2025-prod"],
|
||||
"allowedPredicateTypes": [
|
||||
"evidence.stella/v1",
|
||||
"reasoning.stella/v1",
|
||||
"cdx-vex.stella/v1",
|
||||
"proofspine.stella/v1"
|
||||
],
|
||||
"revokedKeys": ["key-2023-prod"],
|
||||
"keyMaterial": {
|
||||
"key-2024-prod": {
|
||||
"algorithm": "ECDSA-P256",
|
||||
"publicKey": "-----BEGIN PUBLIC KEY-----\n..."
|
||||
},
|
||||
"key-2025-prod": {
|
||||
"algorithm": "ECDSA-P256",
|
||||
"publicKey": "-----BEGIN PUBLIC KEY-----\n..."
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Import Trust Anchors
|
||||
|
||||
```bash
|
||||
# On the air-gapped system
|
||||
stellaops anchor import --file trust-anchors.json
|
||||
|
||||
# Verify import
|
||||
stellaops anchor list
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Proof Bundle Distribution
|
||||
|
||||
### Export Proof Bundles
|
||||
|
||||
```bash
|
||||
# Export a proof bundle for offline transfer
|
||||
stellaops proof export \
|
||||
--entry sha256:abc123:pkg:npm/lodash@4.17.21 \
|
||||
--output proof-bundle.zip
|
||||
|
||||
# Bundle contents:
|
||||
# proof-bundle.zip
|
||||
# ├── proof-spine.json # The proof spine
|
||||
# ├── evidence/ # Evidence statements
|
||||
# │ ├── sha256_e1.json
|
||||
# │ └── sha256_e2.json
|
||||
# ├── reasoning.json # Reasoning statement
|
||||
# ├── vex-verdict.json # VEX verdict statement
|
||||
# ├── envelopes/ # DSSE envelopes
|
||||
# │ ├── evidence-e1.dsse
|
||||
# │ ├── evidence-e2.dsse
|
||||
# │ ├── reasoning.dsse
|
||||
# │ ├── vex-verdict.dsse
|
||||
# │ └── proof-spine.dsse
|
||||
# └── VERIFY.md # Verification instructions
|
||||
```
|
||||
|
||||
### Verify Exported Bundle
|
||||
|
||||
```bash
|
||||
# On the air-gapped system
|
||||
stellaops proof verify --offline \
|
||||
--bundle-file proof-bundle.zip \
|
||||
--anchor-file trust-anchors.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Batch Verification
|
||||
|
||||
For audits, verify multiple proof bundles efficiently:
|
||||
|
||||
```bash
|
||||
# Create a verification manifest
|
||||
cat > verify-manifest.json << 'EOF'
|
||||
{
|
||||
"bundles": [
|
||||
"sha256:1a2b3c4d...",
|
||||
"sha256:5e6f7g8h...",
|
||||
"sha256:9i0j1k2l..."
|
||||
],
|
||||
"options": {
|
||||
"checkRekor": false,
|
||||
"failFast": false
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
# Run batch verification
|
||||
stellaops proof verify-batch \
|
||||
--manifest verify-manifest.json \
|
||||
--anchor-file trust-anchors.json \
|
||||
--output verification-report.json
|
||||
```
|
||||
|
||||
### Verification Report Format
|
||||
|
||||
```json
|
||||
{
|
||||
"verifiedAt": "2025-12-17T10:00:00Z",
|
||||
"mode": "offline",
|
||||
"anchorsUsed": ["550e8400..."],
|
||||
"results": [
|
||||
{
|
||||
"proofBundleId": "sha256:1a2b3c4d...",
|
||||
"verified": true,
|
||||
"checks": {
|
||||
"contentId": true,
|
||||
"signature": true,
|
||||
"merklePath": true,
|
||||
"rekorInclusion": null
|
||||
}
|
||||
}
|
||||
],
|
||||
"summary": {
|
||||
"total": 3,
|
||||
"verified": 3,
|
||||
"failed": 0,
|
||||
"skipped": 0
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Graph Root Attestation Verification (Offline)
|
||||
|
||||
Graph root attestations provide tamper-evident commitment to graph analysis results. In air-gap mode, these attestations can be verified without network access.
|
||||
|
||||
### Verify Graph Root Attestation
|
||||
|
||||
```bash
|
||||
# Verify a single graph root attestation
|
||||
stellaops graph-root verify --offline \
|
||||
--envelope graph-root.dsse \
|
||||
--anchor-file trust-anchors.json
|
||||
|
||||
# Expected output:
|
||||
# Graph Root Verification
|
||||
# ═══════════════════════
|
||||
# ✓ DSSE signature verified
|
||||
# ✓ Predicate type: graph-root.stella/v1
|
||||
# ✓ Graph type: ReachabilityGraph
|
||||
# ✓ Canon version: stella:canon:v1
|
||||
# ⊘ Rekor verification skipped (offline mode)
|
||||
#
|
||||
# Overall: VERIFIED (offline)
|
||||
```
|
||||
|
||||
### Verify with Node/Edge Reconstruction
|
||||
|
||||
When you have the original graph data, you can recompute and verify the Merkle root:
|
||||
|
||||
```bash
|
||||
# Verify with reconstruction
|
||||
stellaops graph-root verify --offline \
|
||||
--envelope graph-root.dsse \
|
||||
--nodes nodes.json \
|
||||
--edges edges.json \
|
||||
--anchor-file trust-anchors.json
|
||||
|
||||
# Expected output:
|
||||
# Graph Root Verification (with reconstruction)
|
||||
# ═════════════════════════════════════════════
|
||||
# ✓ DSSE signature verified
|
||||
# ✓ Nodes canonicalized: 1234 entries
|
||||
# ✓ Edges canonicalized: 5678 entries
|
||||
# ✓ Merkle root recomputed: sha256:abc123...
|
||||
# ✓ Merkle root matches claimed: sha256:abc123...
|
||||
#
|
||||
# Overall: VERIFIED (reconstructed)
|
||||
```
|
||||
|
||||
### Graph Data File Formats
|
||||
|
||||
**nodes.json** - Array of node identifiers:
|
||||
```json
|
||||
{
|
||||
"canonVersion": "stella:canon:v1",
|
||||
"nodes": [
|
||||
"pkg:npm/lodash@4.17.21",
|
||||
"pkg:npm/express@4.18.2",
|
||||
"pkg:npm/body-parser@1.20.0"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**edges.json** - Array of edge identifiers:
|
||||
```json
|
||||
{
|
||||
"canonVersion": "stella:canon:v1",
|
||||
"edges": [
|
||||
"pkg:npm/express@4.18.2->pkg:npm/body-parser@1.20.0",
|
||||
"pkg:npm/express@4.18.2->pkg:npm/lodash@4.17.21"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Verification Steps (Detailed)
|
||||
|
||||
The offline graph root verification algorithm:
|
||||
|
||||
1. **Parse DSSE envelope** - Extract payload and signatures
|
||||
2. **Decode in-toto statement** - Parse subject and predicate
|
||||
3. **Verify signature** - Check DSSE signature against trust anchor allowed keys
|
||||
4. **Validate predicate type** - Confirm `graph-root.stella/v1`
|
||||
5. **Extract Merkle root** - Get claimed root from predicate
|
||||
6. **If reconstruction requested**:
|
||||
- Load nodes.json and edges.json
|
||||
- Verify canon version matches predicate
|
||||
- Sort nodes lexicographically
|
||||
- Sort edges lexicographically
|
||||
- Concatenate sorted lists
|
||||
- Build SHA-256 Merkle tree
|
||||
- Compare computed root to claimed root
|
||||
7. **Emit verification result**
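
To make step 6 concrete, the sketch below (Python) recomputes the root from the graph data files shown earlier. The leaf encoding and the handling of an odd-sized level are assumptions (leaves are SHA-256 digests of the UTF-8 identifiers; an unpaired node is carried up unchanged); the CLI and .NET verifiers remain authoritative.

```python
#!/usr/bin/env python3
"""Recompute a graph Merkle root from nodes.json / edges.json (illustrative sketch)."""
import hashlib
import json


def merkle_root(leaves: list) -> bytes:
    """Fold a list of leaf digests into a single SHA-256 Merkle root."""
    if not leaves:
        return hashlib.sha256(b"").digest()
    level = leaves
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(hashlib.sha256(level[i] + level[i + 1]).digest())
            else:
                nxt.append(level[i])  # assumption: odd node promoted unchanged
        level = nxt
    return level[0]


with open("nodes.json") as f:
    nodes = json.load(f)["nodes"]
with open("edges.json") as f:
    edges = json.load(f)["edges"]

# Steps 4-6 of the reconstruction: lexicographic sort, concatenation, one leaf per identifier.
identifiers = sorted(nodes) + sorted(edges)
leaves = [hashlib.sha256(ident.encode("utf-8")).digest() for ident in identifiers]

print("computed root: sha256:" + merkle_root(leaves).hex())
```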
|
||||
|
||||
### Programmatic Verification (.NET)
|
||||
|
||||
```csharp
|
||||
using StellaOps.Attestor.GraphRoot;
|
||||
|
||||
// Load trust anchors
|
||||
var anchors = await TrustAnchors.LoadFromFileAsync("trust-anchors.json");
|
||||
|
||||
// Create verifier
|
||||
var verifier = new GraphRootAttestor(signer, canonicalJsonSerializer);
|
||||
|
||||
// Load envelope
|
||||
var envelope = await DsseEnvelope.LoadAsync("graph-root.dsse");
|
||||
|
||||
// Verify without reconstruction
|
||||
var result = await verifier.VerifyAsync(
|
||||
envelope,
|
||||
trustAnchors: anchors,
|
||||
verifyRekor: false);
|
||||
|
||||
// Verify with reconstruction
|
||||
var nodeIds = new[] { "pkg:npm/lodash@4.17.21", "pkg:npm/express@4.18.2" };
|
||||
var edgeIds = new[] { "pkg:npm/express@4.18.2->pkg:npm/lodash@4.17.21" };
|
||||
|
||||
var fullResult = await verifier.VerifyAsync(
|
||||
envelope,
|
||||
nodeIds: nodeIds,
|
||||
edgeIds: edgeIds,
|
||||
trustAnchors: anchors,
|
||||
verifyRekor: false);
|
||||
|
||||
Console.WriteLine($"Verified: {fullResult.IsValid}");
|
||||
Console.WriteLine($"Merkle root: {fullResult.MerkleRoot}");
|
||||
```
|
||||
|
||||
### Integration with Proof Spine
|
||||
|
||||
Graph roots can be included in proof spines for comprehensive verification:
|
||||
|
||||
```bash
|
||||
# Export proof bundle with graph roots
|
||||
stellaops proof export \
|
||||
--entry sha256:abc123:pkg:npm/lodash@4.17.21 \
|
||||
--include-graph-roots \
|
||||
--output proof-bundle.zip
|
||||
|
||||
# Bundle now includes:
|
||||
# proof-bundle.zip
|
||||
# ├── proof-spine.json
|
||||
# ├── evidence/
|
||||
# ├── reasoning.json
|
||||
# ├── vex-verdict.json
|
||||
# ├── graph-roots/ # Graph root attestations
|
||||
# │ ├── reachability.dsse
|
||||
# │ └── dependency.dsse
|
||||
# ├── envelopes/
|
||||
# └── VERIFY.md
|
||||
|
||||
# Verify with graph roots
|
||||
stellaops proof verify --offline \
|
||||
--bundle-file proof-bundle.zip \
|
||||
--verify-graph-roots \
|
||||
--anchor-file trust-anchors.json
|
||||
```
|
||||
|
||||
### Determinism Requirements
|
||||
|
||||
For offline verification to succeed:
|
||||
|
||||
1. **Same canonicalization** - Use `stella:canon:v1` consistently
|
||||
2. **Same ordering** - Lexicographic sort for nodes and edges
|
||||
3. **Same encoding** - UTF-8 for all string operations
|
||||
4. **Same hash algorithm** - SHA-256 for Merkle tree
|
||||
|
||||
---
|
||||
|
||||
## Key Rotation in Air-Gap Mode
|
||||
|
||||
When keys are rotated, trust anchor updates must be distributed:
|
||||
|
||||
### 1. Export Updated Anchors
|
||||
|
||||
```bash
|
||||
# On online system after key rotation
|
||||
stellaops anchor export --since 2025-01-01 > anchor-update.json
|
||||
sha256sum anchor-update.json > anchor-update.sha256
|
||||
```
|
||||
|
||||
### 2. Verify and Import Update
|
||||
|
||||
```bash
|
||||
# On air-gapped system
|
||||
sha256sum -c anchor-update.sha256
|
||||
stellaops anchor import --file anchor-update.json --merge
|
||||
|
||||
# Verify key history
|
||||
stellaops anchor show --anchor-id 550e8400... --show-history
|
||||
```
|
||||
|
||||
### 3. Temporal Verification
|
||||
|
||||
When verifying old proofs after key rotation:
|
||||
|
||||
```bash
|
||||
# Verify proof signed with now-revoked key
|
||||
stellaops proof verify --offline \
|
||||
--proof-bundle sha256:old-proof... \
|
||||
--anchor-file trust-anchors.json \
|
||||
--at-time "2024-06-15T12:00:00Z"
|
||||
|
||||
# The verification uses key validity at the specified time
|
||||
```
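
Conceptually, temporal verification asks whether the signing key was valid at the claimed signing time rather than at verification time. The sketch below (Python) illustrates that containment check only; the `notBefore`/`revokedAt` field names are assumptions made for illustration, not the actual trust-anchor schema.

```python
#!/usr/bin/env python3
"""Illustrative key-validity-at-time check (field names are assumed, not normative)."""
from datetime import datetime, timezone


def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00")).astimezone(timezone.utc)


def key_valid_at(key: dict, at_time: str) -> bool:
    t = parse(at_time)
    if key.get("notBefore") and t < parse(key["notBefore"]):
        return False
    if key.get("revokedAt") and t >= parse(key["revokedAt"]):
        return False
    return True


key = {"keyid": "key-2024-prod", "notBefore": "2024-01-01T00:00:00Z", "revokedAt": "2024-12-01T00:00:00Z"}
print(key_valid_at(key, "2024-06-15T12:00:00Z"))  # True: signed while the key was live
print(key_valid_at(key, "2025-01-15T00:00:00Z"))  # False: maps to the "Key Not Valid at Time" error
```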
|
||||
|
||||
---
|
||||
|
||||
## Manual Verification (No CLI)
|
||||
|
||||
For environments without the StellaOps CLI, manual verification is possible:
|
||||
|
||||
### 1. Verify Content-Addressed ID
|
||||
|
||||
```bash
|
||||
# Extract payload from DSSE envelope
|
||||
jq -r '.payload' proof-spine.dsse | base64 -d > payload.json
|
||||
|
||||
# Compute hash
|
||||
sha256sum payload.json
|
||||
# Compare with proof bundle ID
|
||||
```
|
||||
|
||||
### 2. Verify DSSE Signature
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
import json
|
||||
import base64
|
||||
from cryptography.hazmat.primitives import hashes
|
||||
from cryptography.hazmat.primitives.asymmetric import ec
|
||||
from cryptography.hazmat.primitives.serialization import load_pem_public_key
|
||||
|
||||
def verify_dsse(envelope_path, public_key_pem):
|
||||
"""Verify a DSSE envelope signature."""
|
||||
with open(envelope_path) as f:
|
||||
envelope = json.load(f)
|
||||
|
||||
payload_type = envelope['payloadType']
|
||||
payload = base64.b64decode(envelope['payload'])
|
||||
|
||||
# Build PAE (Pre-Authentication Encoding)
|
||||
pae = f"DSSEv1 {len(payload_type)} {payload_type} {len(payload)} ".encode() + payload
|
||||
|
||||
public_key = load_pem_public_key(public_key_pem.encode())
|
||||
|
||||
for sig in envelope['signatures']:
|
||||
signature = base64.b64decode(sig['sig'])
|
||||
try:
|
||||
public_key.verify(signature, pae, ec.ECDSA(hashes.SHA256()))
|
||||
print(f"✓ Signature valid for keyid: {sig['keyid']}")
|
||||
return True
|
||||
except Exception as e:
|
||||
print(f"✗ Signature invalid: {e}")
|
||||
|
||||
return False
|
||||
```
|
||||
|
||||
### 3. Verify Merkle Path
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
import json
|
||||
import hashlib
|
||||
|
||||
def verify_merkle_path(leaf_hash, path, root_hash, leaf_index):
|
||||
"""Verify a Merkle inclusion path."""
|
||||
current = bytes.fromhex(leaf_hash)
|
||||
index = leaf_index
|
||||
|
||||
for sibling in path:
|
||||
sibling_bytes = bytes.fromhex(sibling)
|
||||
if index % 2 == 0:
|
||||
# Current is left child
|
||||
combined = current + sibling_bytes
|
||||
else:
|
||||
# Current is right child
|
||||
combined = sibling_bytes + current
|
||||
current = hashlib.sha256(combined).digest()
|
||||
index //= 2
|
||||
|
||||
computed_root = current.hex()
|
||||
if computed_root == root_hash:
|
||||
print("✓ Merkle path verified")
|
||||
return True
|
||||
else:
|
||||
print(f"✗ Merkle root mismatch: {computed_root} != {root_hash}")
|
||||
return False
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Exit Codes

Offline verification uses the same exit codes as online:

| Code | Meaning | CI/CD Action |
|------|---------|--------------|
| 0 | Verification passed | Proceed |
| 1 | Verification failed | Block |
| 2 | System error | Retry/investigate |

---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Missing Trust Anchor
|
||||
|
||||
```
|
||||
Error: No trust anchor found for keyid "key-2025-prod"
|
||||
```
|
||||
|
||||
**Solution**: Import updated trust anchors from online system.
|
||||
|
||||
### Key Not Valid at Time
|
||||
|
||||
```
|
||||
Error: Key "key-2024-prod" was revoked at 2024-12-01, before proof signature at 2025-01-15
|
||||
```
|
||||
|
||||
**Solution**: This indicates the proof was signed after key revocation. Investigate the signature timestamp.
|
||||
|
||||
### Merkle Path Invalid
|
||||
|
||||
```
|
||||
Error: Merkle path verification failed for evidence sha256:e1...
|
||||
```
|
||||
|
||||
**Solution**: The proof bundle may be corrupted. Re-export from online system.
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Proof Chain API Reference](../api/proofs.md)
|
||||
- [Key Rotation Runbook](../operations/key-rotation-runbook.md)
|
||||
- [Portable Evidence Bundle Verification](portable-evidence-bundle-verification.md)
|
||||
- [Offline Bundle Format](offline-bundle-format.md)
|
||||
@@ -0,0 +1,368 @@
|
||||
# Reachability Drift Air-Gap Workflows
|
||||
|
||||
**Sprint:** SPRINT_3600_0001_0001
|
||||
**Task:** RDRIFT-MASTER-0006 - Document air-gap workflows for reachability drift
|
||||
|
||||
## Overview
|
||||
|
||||
Reachability Drift Detection can operate in fully air-gapped environments using offline bundles. This document describes the workflows for running reachability drift analysis without network connectivity, building on the Smart-Diff air-gap patterns.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. **Offline Kit** - Downloaded and verified (`stellaops offline kit download`)
|
||||
2. **Feed Snapshots** - Pre-staged vulnerability feeds and surfaces
|
||||
3. **Call Graph Cache** - Pre-extracted call graphs for target artifacts
|
||||
4. **Vulnerability Surface Bundles** - Pre-computed trigger method mappings
|
||||
|
||||
## Key Differences from Online Mode
|
||||
|
||||
| Aspect | Online Mode | Air-Gap Mode |
|
||||
|--------|-------------|--------------|
|
||||
| Surface Queries | Real-time API | Local bundle lookup |
|
||||
| Call Graph Extraction | On-demand | Pre-computed + cached |
|
||||
| Graph Diff | Direct comparison | Bundle-to-bundle |
|
||||
| Attestation | Online transparency log | Offline DSSE bundle |
|
||||
| Metrics | Telemetry enabled | Local-only metrics |
|
||||
|
||||
---
|
||||
|
||||
## Workflow 1: Offline Reachability Drift Analysis
|
||||
|
||||
### Step 1: Prepare Offline Bundle with Call Graphs
|
||||
|
||||
On a connected machine:
|
||||
|
||||
```bash
|
||||
# Download offline kit with reachability bundles
|
||||
stellaops offline kit download \
|
||||
--output /path/to/offline-bundle \
|
||||
--include-feeds nvd,osv,epss \
|
||||
--include-surfaces \
|
||||
--feed-date 2025-01-15
|
||||
|
||||
# Pre-extract call graphs for known artifacts
|
||||
stellaops callgraph extract \
|
||||
--artifact registry.example.com/app:v1 \
|
||||
--artifact registry.example.com/app:v2 \
|
||||
--output /path/to/offline-bundle/callgraphs \
|
||||
--languages dotnet,nodejs,java,go,python
|
||||
|
||||
# Include vulnerability surface bundles
|
||||
stellaops surfaces export \
|
||||
--cve-list /path/to/known-cves.txt \
|
||||
--output /path/to/offline-bundle/surfaces \
|
||||
--format ndjson
|
||||
|
||||
# Package for transfer
|
||||
stellaops offline kit package \
|
||||
--input /path/to/offline-bundle \
|
||||
--output stellaops-reach-offline-2025-01-15.tar.gz \
|
||||
--sign
|
||||
```
|
||||
|
||||
### Step 2: Transfer to Air-Gapped Environment
|
||||
|
||||
Transfer the bundle using approved media:
|
||||
- USB drive (scanned and approved)
|
||||
- Optical media (DVD/Blu-ray)
|
||||
- Data diode
|
||||
|
||||
### Step 3: Import Bundle
|
||||
|
||||
On the air-gapped machine:
|
||||
|
||||
```bash
|
||||
# Verify bundle signature
|
||||
stellaops offline kit verify \
|
||||
--input stellaops-reach-offline-2025-01-15.tar.gz \
|
||||
--public-key /path/to/signing-key.pub
|
||||
|
||||
# Extract and configure
|
||||
stellaops offline kit import \
|
||||
--input stellaops-reach-offline-2025-01-15.tar.gz \
|
||||
--data-dir /opt/stellaops/data
|
||||
```
|
||||
|
||||
### Step 4: Run Reachability Drift Analysis
|
||||
|
||||
```bash
|
||||
# Set offline mode
|
||||
export STELLAOPS_OFFLINE=true
|
||||
export STELLAOPS_DATA_DIR=/opt/stellaops/data
|
||||
export STELLAOPS_SURFACES_DIR=/opt/stellaops/data/surfaces
|
||||
export STELLAOPS_CALLGRAPH_CACHE=/opt/stellaops/data/callgraphs
|
||||
|
||||
# Run reachability drift
|
||||
stellaops reach-drift \
|
||||
--base-scan scan-v1.json \
|
||||
--current-scan scan-v2.json \
|
||||
--base-callgraph callgraph-v1.json \
|
||||
--current-callgraph callgraph-v2.json \
|
||||
--output drift-report.json \
|
||||
--format json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Workflow 2: Pre-Computed Drift Export
|
||||
|
||||
For environments that cannot run the full analysis, pre-compute drift results on a connected machine and export them for review.
|
||||
|
||||
### Step 1: Pre-Compute Drift Results
|
||||
|
||||
```bash
|
||||
# On connected machine: compute drift
|
||||
stellaops reach-drift \
|
||||
--base-scan scan-v1.json \
|
||||
--current-scan scan-v2.json \
|
||||
--output drift-results.json \
|
||||
--include-witnesses \
|
||||
--include-paths
|
||||
|
||||
# Generate offline viewer bundle
|
||||
stellaops offline viewer export \
|
||||
--drift-report drift-results.json \
|
||||
--output drift-viewer-bundle.html \
|
||||
--self-contained
|
||||
```
|
||||
|
||||
### Step 2: Transfer and Review
|
||||
|
||||
The self-contained HTML viewer can be opened in any browser on the air-gapped machine without additional dependencies.
|
||||
|
||||
---
|
||||
|
||||
## Workflow 3: Incremental Call Graph Updates
|
||||
|
||||
For environments that need to update call graphs without full re-extraction.
|
||||
|
||||
### Step 1: Export Graph Delta
|
||||
|
||||
On connected machine after code changes:
|
||||
|
||||
```bash
|
||||
# Extract delta since last snapshot
|
||||
stellaops callgraph delta \
|
||||
--base-snapshot callgraph-v1.json \
|
||||
--current-source /path/to/code \
|
||||
--output graph-delta.json
|
||||
```
|
||||
|
||||
### Step 2: Apply Delta in Air-Gap
|
||||
|
||||
```bash
|
||||
# Merge delta into existing graph
|
||||
stellaops callgraph merge \
|
||||
--base /opt/stellaops/data/callgraphs/app-v1.json \
|
||||
--delta graph-delta.json \
|
||||
--output /opt/stellaops/data/callgraphs/app-v2.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Bundle Contents
|
||||
|
||||
### Call Graph Bundle Structure
|
||||
|
||||
```
|
||||
callgraphs/
|
||||
├── manifest.json # Bundle metadata
|
||||
├── checksums.sha256 # Content hashes
|
||||
├── app-v1/
|
||||
│ ├── snapshot.json # CallGraphSnapshot
|
||||
│ ├── entrypoints.json # Entrypoint index
|
||||
│ └── sinks.json # Sink index
|
||||
└── app-v2/
|
||||
├── snapshot.json
|
||||
├── entrypoints.json
|
||||
└── sinks.json
|
||||
```
|
||||
|
||||
### Surface Bundle Structure
|
||||
|
||||
```
|
||||
surfaces/
|
||||
├── manifest.json # Bundle metadata
|
||||
├── checksums.sha256 # Content hashes
|
||||
├── by-cve/
|
||||
│ ├── CVE-2024-1234.json # Surface + triggers
|
||||
│ └── CVE-2024-5678.json
|
||||
└── by-package/
|
||||
├── nuget/
|
||||
│ └── Newtonsoft.Json/
|
||||
│ └── surfaces.ndjson
|
||||
└── npm/
|
||||
└── lodash/
|
||||
└── surfaces.ndjson
|
||||
```
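
To make the lookup path concrete, here is a hypothetical sketch (Python) of a local query against this layout; the production lookup is the .NET `OfflineSurfaceQueryService` shown in the next section, and the `triggers` field name is an assumption for illustration.

```python
#!/usr/bin/env python3
"""Resolve a vulnerability surface from a local bundle (illustrative sketch)."""
import json
import pathlib

# Matches STELLAOPS_SURFACES_DIR from the workflow above.
SURFACES_DIR = pathlib.Path("/opt/stellaops/data/surfaces")


def surface_for_cve(cve_id: str):
    path = SURFACES_DIR / "by-cve" / f"{cve_id}.json"
    if not path.exists():
        return None  # caller may fall back to package-API-level reachability
    return json.loads(path.read_text())


surface = surface_for_cve("CVE-2024-1234")
print("trigger methods:", 0 if surface is None else len(surface.get("triggers", [])))
```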
|
||||
|
||||
---
|
||||
|
||||
## Offline Surface Query
|
||||
|
||||
When running in air-gap mode, the surface query service automatically uses local bundles:
|
||||
|
||||
```csharp
|
||||
// Configuration for air-gap mode
|
||||
services.AddSingleton<ISurfaceQueryService>(sp =>
|
||||
{
|
||||
var options = sp.GetRequiredService<IOptions<AirGapOptions>>().Value;
|
||||
|
||||
if (options.Enabled)
|
||||
{
|
||||
return new OfflineSurfaceQueryService(
|
||||
options.SurfacesBundlePath,
|
||||
sp.GetRequiredService<ILogger<OfflineSurfaceQueryService>>());
|
||||
}
|
||||
|
||||
return sp.GetRequiredService<OnlineSurfaceQueryService>();
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Attestation in Air-Gap Mode
|
||||
|
||||
Reachability drift results can be attested even in offline mode using pre-provisioned signing keys:
|
||||
|
||||
```bash
|
||||
# Sign drift results with offline key
|
||||
stellaops attest sign \
|
||||
--input drift-results.json \
|
||||
--predicate-type https://stellaops.io/attestation/reachability-drift/v1 \
|
||||
--key /opt/stellaops/keys/signing-key.pem \
|
||||
--output drift-attestation.dsse.json
|
||||
|
||||
# Verify attestation (offline)
|
||||
stellaops attest verify \
|
||||
--input drift-attestation.dsse.json \
|
||||
--trust-root /opt/stellaops/keys/trust-root.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Staleness Considerations
|
||||
|
||||
### Call Graph Freshness
|
||||
|
||||
Call graphs should be re-extracted when:
|
||||
- Source code changes significantly
|
||||
- Dependencies are updated
|
||||
- Framework versions change
|
||||
|
||||
Maximum recommended staleness: **7 days** for active development, **30 days** for stable releases.
|
||||
|
||||
### Surface Bundle Freshness
|
||||
|
||||
Surface bundles should be updated when:
|
||||
- New CVEs are published
|
||||
- Vulnerability details are refined
|
||||
- Trigger methods are updated
|
||||
|
||||
Maximum recommended staleness: **24 hours** for high-security environments, **7 days** for standard environments.
|
||||
|
||||
### Staleness Indicators
|
||||
|
||||
```bash
|
||||
# Check bundle freshness
|
||||
stellaops offline status \
|
||||
--data-dir /opt/stellaops/data
|
||||
|
||||
# Output:
|
||||
# Bundle Type | Last Updated | Age | Status
|
||||
# -----------------|---------------------|--------|--------
|
||||
# NVD Feed | 2025-01-15T00:00:00 | 3 days | OK
|
||||
# OSV Feed | 2025-01-15T00:00:00 | 3 days | OK
|
||||
# Surfaces | 2025-01-14T12:00:00 | 4 days | WARNING
|
||||
# Call Graphs (v1) | 2025-01-10T08:00:00 | 8 days | STALE
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Determinism Requirements
|
||||
|
||||
All offline workflows must produce deterministic results:
|
||||
|
||||
1. **Call Graph Extraction** - Same source produces identical graph hash
|
||||
2. **Drift Detection** - Same inputs produce identical drift report
|
||||
3. **Path Witnesses** - Same reachability query produces identical paths
|
||||
4. **Attestation** - Signature over canonical JSON (sorted keys, no whitespace)
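
For requirement 4, "canonical JSON" here means sorted keys and no insignificant whitespace. A minimal sketch (Python) of producing such a payload before signing; the payload content is a hypothetical fragment:

```python
import hashlib
import json

payload = {"driftId": "example", "findings": []}  # hypothetical drift-result fragment
canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"), ensure_ascii=False).encode("utf-8")
print("sha256:" + hashlib.sha256(canonical).hexdigest())
```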
|
||||
|
||||
Verification:
|
||||
|
||||
```bash
|
||||
# Verify determinism
|
||||
stellaops reach-drift \
|
||||
--base-scan scan-v1.json \
|
||||
--current-scan scan-v2.json \
|
||||
--output drift-1.json
|
||||
|
||||
stellaops reach-drift \
|
||||
--base-scan scan-v1.json \
|
||||
--current-scan scan-v2.json \
|
||||
--output drift-2.json
|
||||
|
||||
# Must be identical
|
||||
diff drift-1.json drift-2.json
|
||||
# (no output = identical)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Missing Surface Data
|
||||
|
||||
```
|
||||
Error: No surface found for CVE-2024-1234 in package pkg:nuget/Newtonsoft.Json@12.0.1
|
||||
```
|
||||
|
||||
**Resolution:** Update surface bundle or fall back to package-API-level reachability:
|
||||
|
||||
```bash
|
||||
stellaops reach-drift \
|
||||
--fallback-mode package-api \
|
||||
...
|
||||
```
|
||||
|
||||
### Call Graph Extraction Failure
|
||||
|
||||
```
|
||||
Error: Failed to extract call graph - missing language support for 'rust'
|
||||
```
|
||||
|
||||
**Resolution:** Pre-extract call graphs on a machine with required tooling, or skip unsupported languages:
|
||||
|
||||
```bash
|
||||
stellaops callgraph extract \
|
||||
--skip-unsupported \
|
||||
...
|
||||
```
|
||||
|
||||
### Bundle Signature Verification Failure
|
||||
|
||||
```
|
||||
Error: Bundle signature invalid - public key mismatch
|
||||
```
|
||||
|
||||
**Resolution:** Ensure correct public key is used, or re-download bundle:
|
||||
|
||||
```bash
|
||||
# List available trust roots
|
||||
stellaops offline trust-roots list
|
||||
|
||||
# Import new trust root (requires approval)
|
||||
stellaops offline trust-roots import \
|
||||
--key new-signing-key.pub \
|
||||
--fingerprint <expected-fingerprint>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Smart-Diff Air-Gap Workflows](smart-diff-airgap-workflows.md)
|
||||
- [Offline Bundle Format](offline-bundle-format.md)
|
||||
- [Air-Gap Operations](operations.md)
|
||||
- [Staleness and Time](staleness-and-time.md)
|
||||
- [Sealing and Egress](sealing-and-egress.md)
|
||||
389
docs/modules/airgap/guides/risk-bundles.md
Normal file
@@ -0,0 +1,389 @@
|
||||
# Risk Bundles (Airgap)
|
||||
|
||||
Risk bundles package vulnerability intelligence data for offline/air-gapped environments. They provide deterministic, signed archives containing provider datasets (CISA KEV, FIRST EPSS, OSV) that can be verified and imported without network connectivity.
|
||||
|
||||
## Bundle Structure
|
||||
|
||||
A risk bundle is a gzip-compressed tar archive (`risk-bundle.tar.gz`) with the following structure:
|
||||
|
||||
```
|
||||
risk-bundle.tar.gz
|
||||
├── manifests/
|
||||
│ └── provider-manifest.json # Bundle metadata and provider entries
|
||||
├── providers/
|
||||
│ ├── cisa-kev/
|
||||
│ │ └── snapshot # CISA Known Exploited Vulnerabilities JSON
|
||||
│ ├── first-epss/
|
||||
│ │ └── snapshot # FIRST EPSS scores CSV/JSON
|
||||
│ └── osv/ # (optional) OpenSSF OSV bulk JSON
|
||||
│ └── snapshot
|
||||
└── signatures/
|
||||
└── provider-manifest.dsse # DSSE envelope for manifest
|
||||
```
|
||||
|
||||
## Provider Manifest
|
||||
|
||||
The `provider-manifest.json` contains bundle metadata and per-provider entries:
|
||||
|
||||
```json
|
||||
{
|
||||
"version": "1.0.0",
|
||||
"bundleId": "risk-bundle-20241211-120000",
|
||||
"createdAt": "2024-12-11T12:00:00Z",
|
||||
"inputsHash": "sha256:abc123...",
|
||||
"providers": [
|
||||
{
|
||||
"providerId": "cisa-kev",
|
||||
"digest": "sha256:def456...",
|
||||
"snapshotDate": "2024-12-11T00:00:00Z",
|
||||
"optional": false
|
||||
},
|
||||
{
|
||||
"providerId": "first-epss",
|
||||
"digest": "sha256:789abc...",
|
||||
"snapshotDate": "2024-12-11T00:00:00Z",
|
||||
"optional": true
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
| Field | Description |
|
||||
|-------|-------------|
|
||||
| `version` | Manifest schema version (currently `1.0.0`) |
|
||||
| `bundleId` | Unique identifier for this bundle |
|
||||
| `createdAt` | ISO-8601 UTC timestamp of bundle creation |
|
||||
| `inputsHash` | SHA-256 hash of concatenated provider digests (deterministic ordering) |
|
||||
| `providers[]` | Array of provider entries sorted by `providerId` |
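
The `inputsHash` is meant to be recomputable from the manifest itself. A sketch (Python) of one plausible recomputation; the exact concatenation format (separators, whether the `sha256:` prefixes are included) is an assumption, so treat the bundle builder as authoritative if values disagree:

```python
import hashlib
import json

with open("manifests/provider-manifest.json") as f:
    manifest = json.load(f)

# Deterministic ordering: providers sorted by providerId, digests concatenated in that order.
digests = [p["digest"] for p in sorted(manifest["providers"], key=lambda p: p["providerId"])]
computed = "sha256:" + hashlib.sha256("".join(digests).encode("utf-8")).hexdigest()

print("computed:", computed)
print("claimed: ", manifest["inputsHash"])
```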
|
||||
|
||||
### Provider Entry Fields
|
||||
|
||||
| Field | Description |
|
||||
|-------|-------------|
|
||||
| `providerId` | Provider identifier (`cisa-kev`, `first-epss`, `osv`) |
|
||||
| `digest` | SHA-256 hash of snapshot file (`sha256:<hex>`) |
|
||||
| `snapshotDate` | ISO-8601 timestamp of provider data snapshot |
|
||||
| `optional` | Whether provider is required for bundle validity |
|
||||
|
||||
## Provider Catalog
|
||||
|
||||
| Provider | Source | Coverage | Refresh | Required |
|
||||
|----------|--------|----------|---------|----------|
|
||||
| `cisa-kev` | CISA Known Exploited Vulnerabilities | Exploited CVEs with KEV flag | Daily | Yes |
|
||||
| `first-epss` | FIRST EPSS scores | Exploitation probability per CVE | Daily | No |
|
||||
| `osv` | OpenSSF OSV | OSS advisories with affected ranges | Weekly | No (opt-in) |
|
||||
|
||||
## Building Risk Bundles
|
||||
|
||||
### Using the Export Worker
|
||||
|
||||
The ExportCenter worker can build risk bundles via the `stella export risk-bundle` job:
|
||||
|
||||
```bash
|
||||
# Build bundle with default providers (CISA KEV + EPSS)
|
||||
stella export risk-bundle --output /path/to/output
|
||||
|
||||
# Include OSV providers (larger bundle)
|
||||
stella export risk-bundle --output /path/to/output --include-osv
|
||||
|
||||
# Build with specific bundle ID
|
||||
stella export risk-bundle --output /path/to/output --bundle-id "custom-bundle-id"
|
||||
```
|
||||
|
||||
### Using the CI Build Script
|
||||
|
||||
For CI pipelines and deterministic testing, use the shell scripts:
|
||||
|
||||
```bash
|
||||
# Build fixture bundle for CI testing (deterministic)
|
||||
ops/devops/risk-bundle/build-bundle.sh --output /tmp/bundle --fixtures-only
|
||||
|
||||
# Build with OSV
|
||||
ops/devops/risk-bundle/build-bundle.sh --output /tmp/bundle --fixtures-only --include-osv
|
||||
|
||||
# Build with custom bundle ID
|
||||
ops/devops/risk-bundle/build-bundle.sh --output /tmp/bundle --fixtures-only --bundle-id "ci-test-bundle"
|
||||
```
|
||||
|
||||
### Build Script Options
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--output <dir>` | Output directory for bundle artifacts (required) |
|
||||
| `--fixtures-only` | Use fixture data instead of live provider downloads |
|
||||
| `--include-osv` | Include OSV providers (increases bundle size) |
|
||||
| `--bundle-id <id>` | Custom bundle ID (default: auto-generated with timestamp) |
|
||||
|
||||
### Build Outputs
|
||||
|
||||
After building, the output directory contains:
|
||||
|
||||
```
|
||||
output/
|
||||
├── risk-bundle.tar.gz # The bundle archive
|
||||
├── risk-bundle.tar.gz.sha256 # SHA-256 checksum
|
||||
└── manifest.json # Copy of provider-manifest.json
|
||||
```
|
||||
|
||||
## Verifying Risk Bundles
|
||||
|
||||
### Using the CLI
|
||||
|
||||
```bash
|
||||
# Basic verification
|
||||
stella risk bundle verify --bundle-path ./risk-bundle.tar.gz
|
||||
|
||||
# With detached signature
|
||||
stella risk bundle verify --bundle-path ./risk-bundle.tar.gz --signature-path ./bundle.sig
|
||||
|
||||
# Check Sigstore Rekor transparency log
|
||||
stella risk bundle verify --bundle-path ./risk-bundle.tar.gz --check-rekor
|
||||
|
||||
# JSON output for automation
|
||||
stella risk bundle verify --bundle-path ./risk-bundle.tar.gz --json
|
||||
|
||||
# Verbose output with warnings
|
||||
stella risk bundle verify --bundle-path ./risk-bundle.tar.gz --verbose
|
||||
```
|
||||
|
||||
### CLI Options
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--bundle-path, -b` | Path to risk bundle file (required) |
|
||||
| `--signature-path, -s` | Path to detached signature file |
|
||||
| `--check-rekor` | Verify transparency log entry in Sigstore Rekor |
|
||||
| `--json` | Output results as JSON |
|
||||
| `--tenant` | Tenant context for verification |
|
||||
| `--verbose` | Show detailed output including warnings |
|
||||
|
||||
### Using the Verification Script
|
||||
|
||||
For offline/air-gap verification without the CLI:
|
||||
|
||||
```bash
|
||||
# Basic verification
|
||||
ops/devops/risk-bundle/verify-bundle.sh /path/to/risk-bundle.tar.gz
|
||||
|
||||
# With detached signature
|
||||
ops/devops/risk-bundle/verify-bundle.sh /path/to/risk-bundle.tar.gz --signature /path/to/bundle.sig
|
||||
|
||||
# Strict mode (warnings are errors)
|
||||
ops/devops/risk-bundle/verify-bundle.sh /path/to/risk-bundle.tar.gz --strict
|
||||
|
||||
# JSON output
|
||||
ops/devops/risk-bundle/verify-bundle.sh /path/to/risk-bundle.tar.gz --json
|
||||
```
|
||||
|
||||
### Verification Steps
|
||||
|
||||
The verification process performs these checks:
|
||||
|
||||
1. **Archive integrity** - Bundle is a valid tar.gz archive
|
||||
2. **Structure validation** - Required files present (`manifests/provider-manifest.json`)
|
||||
3. **Manifest parsing** - Valid JSON with required fields (`bundleId`, `version`, `providers`)
|
||||
4. **Provider hash verification** - Each provider snapshot matches its declared digest
|
||||
5. **Mandatory provider check** - `cisa-kev` must be present and valid
|
||||
6. **DSSE signature validation** - Manifest signature verified (if present)
|
||||
7. **Detached signature** - Bundle archive signature verified (if provided)
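
Steps 2-5 can also be reproduced by hand against an extracted bundle. A minimal sketch (Python), assuming the archive has been unpacked into the current directory:

```python
#!/usr/bin/env python3
"""Re-check provider snapshot digests against provider-manifest.json (sketch)."""
import hashlib
import json
import pathlib
import sys

manifest = json.loads(pathlib.Path("manifests/provider-manifest.json").read_text())

# Mandatory provider check: cisa-kev must be declared in the manifest.
ok = any(p["providerId"] == "cisa-kev" for p in manifest["providers"])
if not ok:
    print("missing mandatory provider: cisa-kev")

for provider in manifest["providers"]:
    snapshot = pathlib.Path("providers") / provider["providerId"] / "snapshot"
    if not snapshot.exists():
        if not provider.get("optional", False):
            print(f"missing mandatory provider snapshot: {provider['providerId']}")
            ok = False
        continue
    digest = "sha256:" + hashlib.sha256(snapshot.read_bytes()).hexdigest()
    if digest != provider["digest"]:
        print(f"hash mismatch: {provider['providerId']}")
        ok = False

sys.exit(0 if ok else 1)
```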
|
||||
|
||||
### Exit Codes
|
||||
|
||||
| Code | Meaning |
|
||||
|------|---------|
|
||||
| 0 | Bundle is valid |
|
||||
| 1 | Bundle is invalid or verification failed |
|
||||
| 2 | Input error (missing file, bad arguments) |
|
||||
|
||||
### JSON Output Format
|
||||
|
||||
```json
|
||||
{
|
||||
"valid": true,
|
||||
"bundleId": "risk-bundle-20241211-120000",
|
||||
"version": "1.0.0",
|
||||
"providerCount": 2,
|
||||
"mandatoryProviderFound": true,
|
||||
"errorCount": 0,
|
||||
"warningCount": 1,
|
||||
"errors": [],
|
||||
"warnings": ["Optional provider not found: osv"]
|
||||
}
|
||||
```
|
||||
|
||||
## Importing Risk Bundles
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. Verify the bundle before import (see above)
|
||||
2. Ensure the target system has sufficient storage
|
||||
3. Back up existing provider data if replacing
|
||||
|
||||
### Import Steps
|
||||
|
||||
1. **Transfer the bundle** to the air-gapped environment via approved media
|
||||
2. **Verify the bundle** using the CLI or verification script
|
||||
3. **Extract to staging**:
|
||||
```bash
|
||||
mkdir -p /staging/risk-bundle
|
||||
tar -xzf risk-bundle.tar.gz -C /staging/risk-bundle
|
||||
```
|
||||
4. **Validate provider data**:
|
||||
```bash
|
||||
# Verify individual provider hashes
|
||||
sha256sum /staging/risk-bundle/providers/cisa-kev/snapshot
|
||||
sha256sum /staging/risk-bundle/providers/first-epss/snapshot
|
||||
```
|
||||
5. **Import into Concelier**:
|
||||
```bash
|
||||
stella concelier import-risk-bundle --path /staging/risk-bundle
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| "Bundle is not a valid tar.gz archive" | Corrupted download/transfer | Re-download and verify checksum |
|
||||
| "Missing required file: manifests/provider-manifest.json" | Incomplete bundle | Rebuild bundle |
|
||||
| "Missing mandatory provider: cisa-kev" | KEV snapshot missing | Rebuild with valid provider data |
|
||||
| "Hash mismatch: cisa-kev" | Corrupted provider data | Re-download provider snapshot |
|
||||
| "DSSE signature validation failed" | Tampered manifest | Investigate chain of custody |
|
||||
|
||||
## CI/CD Integration
|
||||
|
||||
### GitHub Actions / Gitea Workflow
|
||||
|
||||
The `.gitea/workflows/risk-bundle-ci.yml` workflow:
|
||||
|
||||
1. **Build job**: Compiles RiskBundles library, runs tests, builds fixture bundle
|
||||
2. **Offline kit job**: Packages bundle for offline kit distribution
|
||||
3. **Publish checksums job**: Publishes checksums to artifact store (main branch only)
|
||||
|
||||
```yaml
|
||||
# Trigger manually or on push to relevant paths
|
||||
on:
|
||||
push:
|
||||
paths:
|
||||
- 'src/ExportCenter/StellaOps.ExportCenter.RiskBundles/**'
|
||||
- 'ops/devops/risk-bundle/**'
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
include_osv:
|
||||
type: boolean
|
||||
default: false
|
||||
```
|
||||
|
||||
### Offline Kit Integration
|
||||
|
||||
Risk bundles are included in the Offline Update Kit:
|
||||
|
||||
```
|
||||
offline-kit/
|
||||
└── risk-bundles/
|
||||
├── risk-bundle.tar.gz
|
||||
├── risk-bundle.tar.gz.sha256
|
||||
├── manifest.json
|
||||
├── checksums.txt
|
||||
└── kit-manifest.json
|
||||
```
|
||||
|
||||
The `kit-manifest.json` provides metadata for offline kit consumers:
|
||||
|
||||
```json
|
||||
{
|
||||
"component": "risk-bundle",
|
||||
"version": "20241211-120000",
|
||||
"files": [
|
||||
{"path": "risk-bundle.tar.gz", "checksum_file": "risk-bundle.tar.gz.sha256"},
|
||||
{"path": "manifest.json", "checksum_file": "manifest.json.sha256"}
|
||||
],
|
||||
"verification": {
|
||||
"checksums": "checksums.txt",
|
||||
"signature": "risk-bundle.tar.gz.sig"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Signing and Trust
|
||||
|
||||
### DSSE Manifest Signature
|
||||
|
||||
The `signatures/provider-manifest.dsse` file contains a Dead Simple Signing Envelope:
|
||||
|
||||
```json
|
||||
{
|
||||
"payloadType": "application/vnd.stellaops.risk-bundle.manifest+json",
|
||||
"payload": "<base64-encoded-manifest>",
|
||||
"signatures": [
|
||||
{
|
||||
"keyid": "risk-bundle-signing-key",
|
||||
"sig": "<signature>"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Offline Trust Roots
|
||||
|
||||
For air-gapped verification, include public keys in the bundle:
|
||||
|
||||
```
|
||||
signatures/
|
||||
├── provider-manifest.dsse
|
||||
└── pubkeys/
|
||||
└── <tenant>.pem
|
||||
```
|
||||
|
||||
### Sigstore/Rekor Integration
|
||||
|
||||
When `--check-rekor` is specified, verification queries the Sigstore Rekor transparency log to confirm the bundle was published to the public ledger.
|
||||
|
||||
## Determinism Checklist
|
||||
|
||||
Risk bundles are designed for reproducible builds:
|
||||
|
||||
- [x] Fixed timestamps for tar entries (`--mtime="@<epoch>"`)
|
||||
- [x] Sorted file ordering (`--sort=name`)
|
||||
- [x] Numeric owner/group (`--owner=0 --group=0 --numeric-owner`)
|
||||
- [x] Deterministic gzip compression (`gzip -n`)
|
||||
- [x] Providers sorted by `providerId` in manifest
|
||||
- [x] Files sorted lexicographically in bundle
|
||||
- [x] UTF-8 canonical paths
|
||||
- [x] ISO-8601 UTC timestamps
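
The same properties can be reproduced outside the build script; the sketch below (Python) packs a staging directory with fixed metadata. It only illustrates the checklist: `ops/devops/risk-bundle/build-bundle.sh` remains the source of truth, and the `bundle-staging` path is a placeholder.

```python
#!/usr/bin/env python3
"""Pack a directory into a byte-reproducible tar.gz (illustrative sketch)."""
import gzip
import os
import tarfile


def normalize(info: tarfile.TarInfo) -> tarfile.TarInfo:
    info.uid = info.gid = 0          # numeric owner/group
    info.uname = info.gname = ""
    info.mtime = 0                   # fixed timestamp for every entry
    return info


def pack(src_dir: str, out_path: str) -> None:
    # filename="" and mtime=0 mimic `gzip -n`; the sorted walk mimics `tar --sort=name`.
    with open(out_path, "wb") as raw, \
            gzip.GzipFile(filename="", mode="wb", fileobj=raw, mtime=0) as gz, \
            tarfile.open(fileobj=gz, mode="w") as tar:
        for root, dirs, files in os.walk(src_dir):
            dirs.sort()
            for name in sorted(files):
                full = os.path.join(root, name)
                tar.add(full, arcname=os.path.relpath(full, src_dir), filter=normalize)


pack("bundle-staging", "risk-bundle.tar.gz")
```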
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
**Q: Bundle verification fails with "jq not available"**
|
||||
|
||||
A: The verification script uses `jq` for JSON parsing. Install it or use the CLI (`stella risk bundle verify`) which has built-in JSON support.
|
||||
|
||||
**Q: Hash mismatch after transfer**
|
||||
|
||||
A: Binary transfers can corrupt files. Use checksums:
|
||||
```bash
|
||||
# On source system
|
||||
sha256sum risk-bundle.tar.gz > checksum.txt
|
||||
|
||||
# On target system
|
||||
sha256sum -c checksum.txt
|
||||
```
|
||||
|
||||
**Q: "Optional provider not found" warning**
|
||||
|
||||
A: This is informational. Optional providers (EPSS, OSV) enhance risk analysis but aren't required. Use `--strict` if you want to enforce their presence.
|
||||
|
||||
**Q: DSSE signature validation fails in air-gap**
|
||||
|
||||
A: Ensure the offline trust root is configured:
|
||||
```bash
|
||||
stella config set risk-bundle.trust-root /path/to/pubkey.pem
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Offline Update Kit](../OFFLINE_KIT.md) - Complete offline kit documentation
|
||||
- [Mirror Bundles](./mirror-bundles.md) - OCI artifact bundles for air-gap
|
||||
- [Provider Matrix](../modules/export-center/operations/risk-bundle-provider-matrix.md) - Detailed provider specifications
|
||||
- [ExportCenter Architecture](../modules/export-center/architecture.md) - Export service design
|
||||
@@ -0,0 +1,616 @@
|
||||
# Air-Gap Operations Runbook for Score Proofs & Reachability
|
||||
|
||||
> **Version**: 1.0.0
|
||||
> **Sprint**: 3500.0004.0004
|
||||
> **Last Updated**: 2025-12-20
|
||||
|
||||
This runbook covers air-gapped operations for Score Proofs and Reachability features, including offline kit deployment, proof verification, and bundle management.
|
||||
|
||||
---
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Overview](#1-overview)
|
||||
2. [Offline Kit Deployment](#2-offline-kit-deployment)
|
||||
3. [Score Proofs in Air-Gap Mode](#3-score-proofs-in-air-gap-mode)
|
||||
4. [Reachability in Air-Gap Mode](#4-reachability-in-air-gap-mode)
|
||||
5. [Bundle Import Operations](#5-bundle-import-operations)
|
||||
6. [Proof Verification Offline](#6-proof-verification-offline)
|
||||
7. [Troubleshooting](#7-troubleshooting)
|
||||
8. [Monitoring & Alerting](#8-monitoring--alerting)
|
||||
|
||||
---
|
||||
|
||||
## 1. Overview
|
||||
|
||||
### Air-Gap Modes
|
||||
|
||||
| Mode | Network | Use Case |
|
||||
|------|---------|----------|
|
||||
| **Sealed** | No external connectivity | Classified environments |
|
||||
| **Constrained** | Limited egress (allowlist) | Regulated networks |
|
||||
| **Hybrid** | Selective connectivity | Standard enterprise |
|
||||
|
||||
### Score Proofs Air-Gap Capabilities
|
||||
|
||||
| Feature | Sealed | Constrained | Hybrid |
|
||||
|---------|--------|-------------|--------|
|
||||
| Score computation | ✅ | ✅ | ✅ |
|
||||
| Score replay | ✅ | ✅ | ✅ |
|
||||
| Proof generation | ✅ | ✅ | ✅ |
|
||||
| Proof verification | ✅ | ✅ | ✅ |
|
||||
| Rekor logging | ❌ | 🔶 (optional) | ✅ |
|
||||
| Feed updates | Bundle | Bundle | Online |
|
||||
|
||||
### Reachability Air-Gap Capabilities
|
||||
|
||||
| Feature | Sealed | Constrained | Hybrid |
|
||||
|---------|--------|-------------|--------|
|
||||
| Call graph upload | ✅ | ✅ | ✅ |
|
||||
| Reachability compute | ✅ | ✅ | ✅ |
|
||||
| Explain queries | ✅ | ✅ | ✅ |
|
||||
| Symbol resolution | Bundle | Bundle | Online |
|
||||
|
||||
---
|
||||
|
||||
## 2. Offline Kit Deployment
|
||||
|
||||
### 2.1 Offline Kit Contents
|
||||
|
||||
The offline kit contains everything needed for air-gapped Score Proofs and Reachability:
|
||||
|
||||
```
|
||||
offline-kit/
|
||||
├── manifests/
|
||||
│ ├── kit-manifest.json # Kit metadata and versions
|
||||
│ ├── feed-manifest.json # Advisory feed snapshot
|
||||
│ └── vex-manifest.json # VEX data snapshot
|
||||
├── feeds/
|
||||
│ ├── concelier/ # Advisory feed data
|
||||
│ │ ├── advisories.ndjson
|
||||
│ │ └── snapshot.dsse.json
|
||||
│ └── excititor/ # VEX data
|
||||
│ ├── vex-statements.ndjson
|
||||
│ └── snapshot.dsse.json
|
||||
├── policies/
|
||||
│ ├── scoring-policy.yaml
|
||||
│ └── policy.dsse.json
|
||||
├── trust/
|
||||
│ ├── trust-anchors.json # Public keys for verification
|
||||
│ └── time-anchor.json # Time attestation
|
||||
├── symbols/
|
||||
│ └── symbol-index.db # Symbol resolution database
|
||||
└── tools/
|
||||
├── stella-cli # CLI binary
|
||||
└── verify-kit.sh # Kit verification script
|
||||
```
|
||||
|
||||
### 2.2 Verify Kit Integrity
|
||||
|
||||
Before deployment, always verify the offline kit:
|
||||
|
||||
```bash
|
||||
# Verify kit signature
|
||||
stella airgap verify-kit --kit /path/to/offline-kit
|
||||
|
||||
# Output:
|
||||
# Kit manifest: VALID
|
||||
# Feed snapshot: VALID (sha256:feed123...)
|
||||
# VEX snapshot: VALID (sha256:vex456...)
|
||||
# Policy: VALID (sha256:policy789...)
|
||||
# Trust anchors: VALID (3 anchors)
|
||||
# Time anchor: VALID (expires: 2025-12-31T00:00:00Z)
|
||||
#
|
||||
# Kit verification: PASSED
|
||||
|
||||
# Verify individual components
|
||||
stella airgap verify --file feeds/concelier/snapshot.dsse.json
|
||||
stella airgap verify --file feeds/excititor/snapshot.dsse.json
|
||||
stella airgap verify --file policies/policy.dsse.json
|
||||
```
|
||||
|
||||
### 2.3 Deploy Offline Kit
|
||||
|
||||
```bash
|
||||
# Deploy kit (sealed mode)
|
||||
stella airgap deploy --kit /path/to/offline-kit --mode sealed
|
||||
|
||||
# Deploy kit (constrained mode with limited egress)
|
||||
stella airgap deploy --kit /path/to/offline-kit \
|
||||
--mode constrained \
|
||||
--egress-allowlist https://rekor.sigstore.dev
|
||||
|
||||
# Verify deployment
|
||||
stella airgap status
|
||||
|
||||
# Output:
|
||||
# Mode: sealed
|
||||
# Kit version: 2025.12.20
|
||||
# Feed snapshot: sha256:feed123... (2025-12-20)
|
||||
# VEX snapshot: sha256:vex456... (2025-12-20)
|
||||
# Policy: sha256:policy789... (v1.2.3)
|
||||
# Trust anchors: 3 active
|
||||
# Time anchor: Valid until 2025-12-31
|
||||
# Staleness: OK (0 days)
|
||||
```
|
||||
|
||||
### 2.4 Kit Updates
|
||||
|
||||
```bash
|
||||
# Check for kit updates (requires external access or new media)
|
||||
stella airgap check-update --current-kit /path/to/current-kit
|
||||
|
||||
# Import new kit
|
||||
stella airgap import-kit --kit /path/to/new-kit --validate
|
||||
|
||||
# Rollback to previous kit
|
||||
stella airgap rollback --generation previous
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. Score Proofs in Air-Gap Mode
|
||||
|
||||
### 3.1 Create Scan (Air-Gap)
|
||||
|
||||
```bash
|
||||
# Create scan referencing offline kit snapshots
|
||||
stella scan create --artifact sha256:abc123... \
|
||||
--airgap \
|
||||
--feed-snapshot sha256:feed123... \
|
||||
--vex-snapshot sha256:vex456... \
|
||||
--policy-snapshot sha256:policy789...
|
||||
|
||||
# Or auto-detect from deployed kit
|
||||
stella scan create --artifact sha256:abc123... --use-offline-kit
|
||||
```
|
||||
|
||||
### 3.2 Score Replay (Air-Gap)
|
||||
|
||||
```bash
|
||||
# Replay with offline kit snapshots
|
||||
stella score replay --scan-id $SCAN_ID --offline
|
||||
|
||||
# Replay with specific bundle
|
||||
stella score replay --scan-id $SCAN_ID \
|
||||
--offline \
|
||||
--bundle /path/to/proof-bundle.zip
|
||||
|
||||
# Compare with different kit versions
|
||||
stella score replay --scan-id $SCAN_ID \
|
||||
--offline \
|
||||
--feed-snapshot sha256:newfeed... \
|
||||
--diff
|
||||
```
|
||||
|
||||
### 3.3 Generate Proof Bundle (Air-Gap)
|
||||
|
||||
```bash
|
||||
# Generate proof bundle for export
|
||||
stella proof export --scan-id $SCAN_ID \
|
||||
--include-manifest \
|
||||
--include-chain \
|
||||
--output proof-bundle.zip
|
||||
|
||||
# Proof bundle contents (air-gap safe):
|
||||
# - manifest.json (canonical)
|
||||
# - manifest.dsse.json
|
||||
# - score_proof.json
|
||||
# - proof_root.dsse.json
|
||||
# - meta.json
|
||||
# - NO external references
|
||||
|
||||
# Generate portable bundle
|
||||
stella proof export --scan-id $SCAN_ID \
|
||||
--portable \
|
||||
--include-trust-anchors \
|
||||
--output portable-proof.zip
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. Reachability in Air-Gap Mode
|
||||
|
||||
### 4.1 Call Graph Operations (Air-Gap)
|
||||
|
||||
```bash
|
||||
# Upload call graph (works identically)
|
||||
stella scan graph upload --scan-id $SCAN_ID --file callgraph.json
|
||||
|
||||
# Call graph processing is fully local
|
||||
# No external network required
|
||||
```
|
||||
|
||||
### 4.2 Compute Reachability (Air-Gap)
|
||||
|
||||
```bash
|
||||
# Compute reachability (fully offline)
|
||||
stella reachability compute --scan-id $SCAN_ID --offline
|
||||
|
||||
# Symbol resolution uses offline database
|
||||
stella reachability compute --scan-id $SCAN_ID \
|
||||
--offline \
|
||||
--symbol-db /path/to/offline-kit/symbols/symbol-index.db
|
||||
```
|
||||
|
||||
### 4.3 Explain Queries (Air-Gap)
|
||||
|
||||
```bash
|
||||
# Explain queries work offline
|
||||
stella reachability explain --scan-id $SCAN_ID \
|
||||
--cve CVE-2024-1234 \
|
||||
--purl "pkg:npm/lodash@4.17.20" \
|
||||
--offline
|
||||
|
||||
# Export explanations for external review
|
||||
stella reachability explain-all --scan-id $SCAN_ID \
|
||||
--output explanations.json \
|
||||
--offline
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. Bundle Import Operations
|
||||
|
||||
### 5.1 Import Feed Updates
|
||||
|
||||
```bash
|
||||
# Verify feed bundle before import
|
||||
stella airgap verify --bundle feed-update.zip
|
||||
|
||||
# Dry-run import
|
||||
stella airgap import --bundle feed-update.zip \
|
||||
--type feed \
|
||||
--dry-run
|
||||
|
||||
# Import feed bundle
|
||||
stella airgap import --bundle feed-update.zip \
|
||||
--type feed \
|
||||
--generation 2025.12.21
|
||||
|
||||
# Verify import
|
||||
stella airgap verify-import --generation 2025.12.21
|
||||
```
|
||||
|
||||
### 5.2 Import VEX Updates
|
||||
|
||||
```bash
|
||||
# Import VEX bundle
|
||||
stella airgap import --bundle vex-update.zip \
|
||||
--type vex \
|
||||
--generation 2025.12.21
|
||||
|
||||
# Verify VEX statements
|
||||
stella airgap vex-status
|
||||
|
||||
# Output:
|
||||
# VEX statements: 15,432
|
||||
# Last update: 2025-12-21
|
||||
# Generation: 2025.12.21
|
||||
# Signature: VALID
|
||||
```
|
||||
|
||||
### 5.3 Import Trust Anchors
|
||||
|
||||
```bash
|
||||
# Import new trust anchor (requires approval)
|
||||
stella airgap import-anchor --file new-anchor.json \
|
||||
--reason "Key rotation Q4 2025" \
|
||||
--approver admin@example.com
|
||||
|
||||
# Verify anchor chain
|
||||
stella airgap verify-anchors
|
||||
|
||||
# List active anchors
|
||||
stella airgap anchors list
|
||||
```
|
||||
|
||||
### 5.4 Import Checklist
|
||||
|
||||
**Pre-Import**:
|
||||
- [ ] Verify bundle signature (DSSE)
|
||||
- [ ] Verify bundle hash matches manifest
|
||||
- [ ] Confirm sealed/constrained mode is set
|
||||
- [ ] Backup current generation
|
||||
|
||||
**Import**:
|
||||
- [ ] Run dry-run import
|
||||
- [ ] Apply import
|
||||
- [ ] Verify import succeeded
|
||||
|
||||
**Post-Import**:
|
||||
- [ ] Verify timeline event emitted
|
||||
- [ ] Update staleness dashboard
|
||||
- [ ] Archive import manifest
|
||||
- [ ] Update audit log
|
||||
|
||||
---
|
||||
|
||||
## 6. Proof Verification Offline
|
||||
|
||||
### 6.1 Verify Proof Bundle (Full Offline)
|
||||
|
||||
```bash
|
||||
# Verify proof bundle without any network access
|
||||
stella proof verify --bundle proof-bundle.zip \
|
||||
--offline \
|
||||
--trust-anchor /path/to/trust-anchors.json
|
||||
|
||||
# Verification checks (offline):
|
||||
# ✅ Signature valid (DSSE)
|
||||
# ✅ Content-addressed ID matches
|
||||
# ✅ Merkle path valid
|
||||
# ⏭️ Rekor inclusion (SKIPPED - offline mode)
|
||||
# ✅ Time anchor valid
|
||||
```
|
||||
|
||||
### 6.2 Verify with Portable Bundle
|
||||
|
||||
```bash
|
||||
# Portable bundles include trust anchors
|
||||
stella proof verify --bundle portable-proof.zip \
|
||||
--offline \
|
||||
--self-contained
|
||||
|
||||
# Output:
|
||||
# Using embedded trust anchors
|
||||
# Signature verification: PASS
|
||||
# ID recomputation: PASS
|
||||
# Merkle path: PASS
|
||||
# Time anchor: VALID
|
||||
# Overall: VERIFIED
|
||||
```
|
||||
|
||||
### 6.3 Batch Verification
|
||||
|
||||
```bash
|
||||
# Verify multiple bundles
|
||||
stella proof verify-batch --dir /path/to/bundles/ \
|
||||
--offline \
|
||||
--trust-anchor /path/to/trust-anchors.json \
|
||||
--output verification-report.json
|
||||
|
||||
# Generate verification report
|
||||
cat verification-report.json | jq '.summary'
|
||||
# Output:
|
||||
# {
|
||||
# "total": 100,
|
||||
# "verified": 98,
|
||||
# "failed": 2,
|
||||
# "skipped": 0
|
||||
# }
|
||||
```
|
||||
|
||||
### 6.4 Verification Without CLI
|
||||
|
||||
For environments without the CLI, manual verification is possible:
|
||||
|
||||
```bash
# 1. Extract bundle
unzip proof-bundle.zip -d ./verify/

# 2. Verify DSSE signature (using openssl)
# Extract payload from DSSE envelope
cat ./verify/manifest.dsse.json | jq -r '.payload' | base64 -d > payload.json

# Verify signature (DSSE signs the PAE encoding, not the raw payload)
cat ./verify/manifest.dsse.json | jq -r '.signatures[0].sig' | base64 -d > signature.bin
PAYLOAD_TYPE=$(jq -r '.payloadType' ./verify/manifest.dsse.json)
printf 'DSSEv1 %d %s %d ' "${#PAYLOAD_TYPE}" "$PAYLOAD_TYPE" "$(wc -c < payload.json)" > pae.bin
cat payload.json >> pae.bin
openssl dgst -sha256 -verify trust-anchor-pubkey.pem -signature signature.bin pae.bin

# 3. Verify content-addressed ID
# Compute canonical hash
cat ./verify/manifest.json | jq -cS . | sha256sum
# Compare with manifestHash in bundle

# 4. Verify merkle path
# (See docs/airgap/proof-chain-verification.md for algorithm)
```
|
||||
|
||||
---
|
||||
|
||||
## 7. Troubleshooting
|
||||
|
||||
### 7.1 Kit Verification Failed
|
||||
|
||||
**Symptom**: `stella airgap verify-kit` fails.
|
||||
|
||||
**Diagnosis**:
|
||||
|
||||
```bash
|
||||
# Check specific component
|
||||
stella airgap verify-kit --verbose --component feeds
|
||||
|
||||
# Common errors:
|
||||
# - "Signature verification failed": Key mismatch
|
||||
# - "Hash mismatch": Bundle corrupted during transfer
|
||||
# - "Time anchor expired": Anchor needs refresh
|
||||
```
|
||||
|
||||
**Resolution**:
|
||||
|
||||
| Error | Cause | Resolution |
|
||||
|-------|-------|------------|
|
||||
| Signature failed | Wrong trust anchor | Import correct anchor |
|
||||
| Hash mismatch | Corruption | Re-transfer bundle |
|
||||
| Time anchor expired | Clock drift or expired | Import new time anchor |
|
||||
|
||||
### 7.2 Staleness Alert
|
||||
|
||||
**Symptom**: Staleness warning/alert.
|
||||
|
||||
**Diagnosis**:
|
||||
|
||||
```bash
|
||||
# Check staleness status
|
||||
stella airgap staleness-status
|
||||
|
||||
# Output:
|
||||
# Feed age: 5 days (threshold: 7 days)
|
||||
# VEX age: 3 days (threshold: 7 days)
|
||||
# Policy age: 30 days (threshold: 90 days)
|
||||
# Status: AMBER (approaching threshold)
|
||||
```
|
||||
|
||||
**Resolution**:
|
||||
|
||||
```bash
|
||||
# Import updated bundles
|
||||
stella airgap import --bundle /path/to/latest-feed.zip --type feed
|
||||
|
||||
# If bundles unavailable and breach imminent:
|
||||
# - Raise amber alert (5-7 days)
|
||||
# - If >7 days, raise red alert and halt new ingests
|
||||
# - Request emergency bundle via secure channel
|
||||
```
|
||||
|
||||
### 7.3 Proof Verification Fails Offline
|
||||
|
||||
**Symptom**: Proof verification fails in sealed mode.
|
||||
|
||||
**Diagnosis**:
|
||||
|
||||
```bash
|
||||
# Check verification error
|
||||
stella proof verify --bundle proof.zip --offline --verbose
|
||||
|
||||
# Common errors:
|
||||
# - "Trust anchor not found": Missing anchor in offline kit
|
||||
# - "Time anchor expired": Time validation failed
|
||||
# - "Unsupported algorithm": Key algorithm not supported
|
||||
```
|
||||
|
||||
**Resolution**:
|
||||
|
||||
```bash
|
||||
# For missing trust anchor:
|
||||
# Import the required anchor
|
||||
stella airgap import-anchor --file required-anchor.json
|
||||
|
||||
# For expired time anchor:
|
||||
# Import new time anchor
|
||||
stella airgap import-time-anchor --file new-time-anchor.json
|
||||
|
||||
# For algorithm issues:
|
||||
# Regenerate proof with supported algorithm
|
||||
stella proof regenerate --scan-id $SCAN_ID --algorithm ECDSA-P256
|
||||
```
|
||||
|
||||
### 7.4 Symbol Resolution Fails
|
||||
|
||||
**Symptom**: Reachability shows "symbol not found" errors.
|
||||
|
||||
**Diagnosis**:
|
||||
|
||||
```bash
|
||||
# Check symbol database status
|
||||
stella airgap symbols-status
|
||||
|
||||
# Output:
|
||||
# Symbol DB: /path/to/symbols/symbol-index.db
|
||||
# Version: 2025.12.15
|
||||
# Entries: 5,234,567
|
||||
# Coverage: Java, .NET, Python
|
||||
```
|
||||
|
||||
**Resolution**:
|
||||
|
||||
```bash
|
||||
# Import updated symbol database
|
||||
stella airgap import --bundle symbol-update.zip --type symbols
|
||||
|
||||
# Recompute reachability with new symbols
|
||||
stella reachability compute --scan-id $SCAN_ID --offline --force
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 8. Monitoring & Alerting
|
||||
|
||||
### 8.1 Key Metrics (Air-Gap)
|
||||
|
||||
| Metric | Description | Alert Threshold |
|
||||
|--------|-------------|-----------------|
|
||||
| `airgap_staleness_days` | Days since last bundle import | > 5 (amber), > 7 (red) |
|
||||
| `airgap_time_anchor_validity_days` | Days until time anchor expires | < 7 |
|
||||
| `airgap_verification_failures` | Offline verification failures | > 0 |
|
||||
| `airgap_import_failures` | Bundle import failures | > 0 |
|
||||
|
||||
### 8.2 Alerting Rules
|
||||
|
||||
```yaml
|
||||
groups:
|
||||
- name: airgap-operations
|
||||
rules:
|
||||
- alert: AirgapStalenessAmber
|
||||
expr: airgap_staleness_days > 5
|
||||
for: 1h
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: "Air-gap feed staleness approaching threshold"
|
||||
|
||||
- alert: AirgapStalenessRed
|
||||
expr: airgap_staleness_days > 7
|
||||
for: 1h
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "Air-gap feed staleness breach - halt new ingests"
|
||||
|
||||
- alert: AirgapTimeAnchorExpiring
|
||||
expr: airgap_time_anchor_validity_days < 7
|
||||
for: 1h
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: "Time anchor expiring in {{ $value }} days"
|
||||
|
||||
- alert: AirgapVerificationFailure
|
||||
expr: increase(airgap_verification_failures_total[1h]) > 0
|
||||
for: 5m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "Air-gap verification failures detected"
|
||||
```
|
||||
|
||||
### 8.3 Audit Requirements
|
||||
|
||||
For air-gapped environments, maintain strict audit trails:
|
||||
|
||||
```bash
|
||||
# Record every import
|
||||
{
|
||||
"timestamp": "2025-12-20T10:00:00Z",
|
||||
"action": "import",
|
||||
"bundleType": "feed",
|
||||
"bundleHash": "sha256:feed123...",
|
||||
"generation": "2025.12.20",
|
||||
"actor": "operator@example.com",
|
||||
"mode": "sealed",
|
||||
"verification": "PASS"
|
||||
}
|
||||
|
||||
# Daily audit log export
|
||||
stella airgap audit-export --date today --output audit-$(date +%Y%m%d).json
|
||||
|
||||
# Verify audit log integrity
|
||||
stella airgap audit-verify --log audit-20251220.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Air-Gap Overview](./overview.md)
|
||||
- [Offline Bundle Format](./offline-bundle-format.md)
|
||||
- [Proof Chain Verification](./proof-chain-verification.md)
|
||||
- [Time Anchor Schema](./time-anchor-schema.md)
|
||||
- [Score Proofs Runbook](../operations/score-proofs-runbook.md)
|
||||
- [Reachability Runbook](../operations/reachability-runbook.md)
|
||||
|
||||
---
|
||||
|
||||
**Last Updated**: 2025-12-20
|
||||
**Version**: 1.0.0
|
||||
**Sprint**: 3500.0004.0004
|
||||
32
docs/modules/airgap/guides/sealed-startup-diagnostics.md
Normal file
@@ -0,0 +1,32 @@

# AirGap Sealed-Mode Startup Diagnostics (prep for AIRGAP-CTL-57-001/57-002)

## Goal
Prevent services from running when sealed-mode requirements are unmet and emit auditable diagnostics + telemetry.

## Pre-flight checks
1) `airgap_state` indicates `sealed=true`.
2) Egress allowlist configured (non-empty or explicitly `[]`).
3) Trust root bundle + TUF metadata present and unexpired.
4) Time anchor available (see `TimeAnchor` schema) and staleness budget not exceeded.
5) Pending root rotations either applied or flagged with approver IDs.

## On failure
- Abort host startup with structured error code: `AIRGAP_STARTUP_MISSING_<ITEM>` (implemented as `sealed-startup-blocked:<reason>` in controller host).
- Emit structured log fields: `airgap.startup.check`, `status=failure`, `reason`, `bundlePath`, `trustRootVersion`, `timeAnchorDigest`.
- Increment counter `airgap_startup_blocked_total{reason}` and gauge `airgap_time_anchor_age_seconds` if anchor missing/stale.

## Telemetry hooks
- Trace event `airgap.startup.validation` with attributes: `sealed`, `allowlist.count`, `trust_roots.count`, `time_anchor.age_seconds`, `rotation.pending`.
- Timeline events (for 57-002): `airgap.sealed` and `airgap.unsealed` include startup validation results and pending rotations.

## Integration points
- Controller: run checks during `IHostApplicationLifetime.ApplicationStarted` before exposing endpoints.
- Importer: reuse `ImportValidator` to ensure bundles + trust rotation are valid before proceeding.
- Time component: provide anchor + staleness calculations to the controller checks.

## Artefacts
- This document (deterministic guardrails for startup diagnostics).
- Code references: `src/AirGap/StellaOps.AirGap.Importer/Validation/*` for trust + bundle validation primitives; `src/AirGap/StellaOps.AirGap.Time/*` for anchors.

## Owners
- AirGap Controller Guild · Observability Guild.

30
docs/modules/airgap/guides/sealing-and-egress.md
Normal file
@@ -0,0 +1,30 @@
|
||||
# Sealing and Egress (Airgap 56-002)
|
||||
|
||||
Guidance for enforcing deny-all egress and validating sealed-mode posture.
|
||||
|
||||
## Network policies
|
||||
- Kubernetes: apply namespace-scoped `NetworkPolicy` with default deny; allow only:
|
||||
- DNS to internal resolver
|
||||
- Object storage/mirror endpoints on allowlist
|
||||
- OTLP/observability endpoints if permitted for sealed monitoring
|
||||
- Docker Compose: use firewall rules or `extra_hosts` to block outbound except mirrors; ship `iptables` template in ops bundle.
|
||||
|
||||
## EgressPolicy facade
|
||||
- Services MUST read `Excititor:Network:EgressPolicy` (or module equivalent) to decide runtime behavior:
|
||||
- `sealed` → deny outbound HTTP/S except allowlist; fail fast on unexpected hosts.
|
||||
- `constrained` → allow allowlist + time/NTP if required.
|
||||
- Log policy decisions and surface `X-Sealed-Mode: true|false` on HTTP responses for diagnostics.
|
||||
|
||||
## Verification checklist
|
||||
1. Confirm policy manifests applied (kubectl/compose diff) and pods restarted.
|
||||
2. Run connectivity probe from each pod:
|
||||
- Allowed endpoints respond (200/OK or 403 expected).
|
||||
- Disallowed domains return immediate failure.
|
||||
3. Attempt bundle import; verify timeline event emitted with `sealed=true`.
|
||||
4. Check observability: counters for denied egress should increment (export or console log).
|
||||
5. Record mirrorGeneration + manifest hash in audit log.
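A minimal probe matching step 2, assuming `curl` is present in the pod image; the namespace, pod, and allowed mirror host are placeholders, and `example.com` stands in for a disallowed domain:

```bash
# Allowed endpoint: expect an HTTP status code back (200, or 403 where auth applies).
kubectl exec -n <namespace> <pod> -- \
  curl -sS -o /dev/null -w '%{http_code}\n' --max-time 5 https://<allowed-mirror-host>/

# Disallowed endpoint: expect an immediate failure.
kubectl exec -n <namespace> <pod> -- curl -sS --max-time 5 https://example.com/ \
  || echo "egress denied (expected)"
```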
|
||||
|
||||
## Determinism & offline posture
|
||||
- No external CRLs/OCSP in sealed mode; rely on bundled trust roots.
|
||||
- Keep allowlist minimal and declared in config; no implicit fallbacks.
|
||||
- All timestamps UTC; avoid calling external time APIs.
|
||||
288
docs/modules/airgap/guides/smart-diff-airgap-workflows.md
Normal file
288
docs/modules/airgap/guides/smart-diff-airgap-workflows.md
Normal file
@@ -0,0 +1,288 @@
|
||||
# Smart-Diff Air-Gap Workflows
|
||||
|
||||
**Sprint:** SPRINT_3500_0001_0001
|
||||
**Task:** SDIFF-MASTER-0006 - Document air-gap workflows for smart-diff
|
||||
|
||||
## Overview
|
||||
|
||||
Smart-Diff can operate in fully air-gapped environments using offline bundles. This document describes the workflows for running smart-diff analysis without network connectivity.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. **Offline Kit** - Downloaded and verified (`stellaops offline kit download`)
|
||||
2. **Feed Snapshots** - Pre-staged vulnerability feeds
|
||||
3. **SBOM Cache** - Pre-generated SBOMs for target artifacts
|
||||
|
||||
## Workflow 1: Offline Smart-Diff Analysis
|
||||
|
||||
### Step 1: Prepare Offline Bundle
|
||||
|
||||
On a connected machine:
|
||||
|
||||
```bash
|
||||
# Download offline kit with feeds
|
||||
stellaops offline kit download \
|
||||
--output /path/to/offline-bundle \
|
||||
--include-feeds nvd,osv,epss \
|
||||
--feed-date 2025-01-15
|
||||
|
||||
# Include SBOMs for known artifacts
|
||||
stellaops offline sbom generate \
|
||||
--artifact registry.example.com/app:v1 \
|
||||
--artifact registry.example.com/app:v2 \
|
||||
--output /path/to/offline-bundle/sboms
|
||||
|
||||
# Package for transfer
|
||||
stellaops offline kit package \
|
||||
--input /path/to/offline-bundle \
|
||||
--output stellaops-offline-2025-01-15.tar.gz \
|
||||
--sign
|
||||
```
|
||||
|
||||
### Step 2: Transfer to Air-Gapped Environment
|
||||
|
||||
Transfer the bundle using approved media:
|
||||
- USB drive (scanned and approved)
|
||||
- Optical media (DVD/Blu-ray)
|
||||
- Data diode
|
||||
|
||||
### Step 3: Import Bundle
|
||||
|
||||
On the air-gapped machine:
|
||||
|
||||
```bash
|
||||
# Verify bundle signature
|
||||
stellaops offline kit verify \
|
||||
--input stellaops-offline-2025-01-15.tar.gz \
|
||||
--public-key /path/to/signing-key.pub
|
||||
|
||||
# Extract and configure
|
||||
stellaops offline kit import \
|
||||
--input stellaops-offline-2025-01-15.tar.gz \
|
||||
--data-dir /opt/stellaops/data
|
||||
```
|
||||
|
||||
### Step 4: Run Smart-Diff
|
||||
|
||||
```bash
|
||||
# Set offline mode
|
||||
export STELLAOPS_OFFLINE=true
|
||||
export STELLAOPS_DATA_DIR=/opt/stellaops/data
|
||||
|
||||
# Run smart-diff
|
||||
stellaops smart-diff \
|
||||
--base sbom:app-v1.json \
|
||||
--target sbom:app-v2.json \
|
||||
--output smart-diff-report.json
|
||||
```
|
||||
|
||||
## Workflow 2: Pre-Computed Smart-Diff Export
|
||||
|
||||
For environments where even running analysis tools is restricted.
|
||||
|
||||
### Step 1: Prepare Artifacts (Connected Machine)
|
||||
|
||||
```bash
|
||||
# Generate SBOMs
|
||||
stellaops sbom generate --artifact app:v1 --output app-v1-sbom.json
|
||||
stellaops sbom generate --artifact app:v2 --output app-v2-sbom.json
|
||||
|
||||
# Run smart-diff with full proof bundle
|
||||
stellaops smart-diff \
|
||||
--base app-v1-sbom.json \
|
||||
--target app-v2-sbom.json \
|
||||
--output-dir ./smart-diff-export \
|
||||
--include-proofs \
|
||||
--include-evidence \
|
||||
--format bundle
|
||||
```
|
||||
|
||||
### Step 2: Verify Export Contents
|
||||
|
||||
The export bundle contains:
|
||||
```
|
||||
smart-diff-export/
|
||||
├── manifest.json # Signed manifest
|
||||
├── base-sbom.json # Base SBOM (hash verified)
|
||||
├── target-sbom.json # Target SBOM (hash verified)
|
||||
├── diff-results.json # Smart-diff findings
|
||||
├── sarif-report.json # SARIF formatted output
|
||||
├── proofs/
|
||||
│ ├── ledger.json # Proof ledger
|
||||
│ └── nodes/ # Individual proof nodes
|
||||
├── evidence/
|
||||
│ ├── reachability.json # Reachability evidence
|
||||
│ ├── vex-statements.json # VEX statements
|
||||
│ └── hardening.json # Binary hardening data
|
||||
└── signature.dsse # DSSE envelope
|
||||
```
|
||||
|
||||
### Step 3: Import and Verify (Air-Gapped Machine)
|
||||
|
||||
```bash
|
||||
# Verify bundle integrity
|
||||
stellaops verify-bundle \
|
||||
--input smart-diff-export \
|
||||
--public-key /path/to/trusted-key.pub
|
||||
|
||||
# View results
|
||||
stellaops smart-diff show \
|
||||
--bundle smart-diff-export \
|
||||
--format table
|
||||
```
|
||||
|
||||
## Workflow 3: Incremental Feed Updates
|
||||
|
||||
### Step 1: Generate Delta Feed
|
||||
|
||||
On connected machine:
|
||||
|
||||
```bash
|
||||
# Generate delta since last sync
|
||||
stellaops offline feed delta \
|
||||
--since 2025-01-10 \
|
||||
--output feed-delta-2025-01-15.tar.gz \
|
||||
--sign
|
||||
```
|
||||
|
||||
### Step 2: Apply Delta (Air-Gapped)
|
||||
|
||||
```bash
|
||||
# Import delta
|
||||
stellaops offline feed apply \
|
||||
--input feed-delta-2025-01-15.tar.gz \
|
||||
--verify
|
||||
|
||||
# Trigger score replay for affected scans
|
||||
stellaops score replay-all \
|
||||
--trigger feed-update \
|
||||
--dry-run
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
| Variable | Description | Default |
|----------|-------------|---------|
| `STELLAOPS_OFFLINE` | Enable offline mode | `false` |
| `STELLAOPS_DATA_DIR` | Local data directory | `~/.stellaops` |
| `STELLAOPS_FEED_DIR` | Feed snapshot directory | `$DATA_DIR/feeds` |
| `STELLAOPS_SBOM_CACHE` | SBOM cache directory | `$DATA_DIR/sboms` |
| `STELLAOPS_SKIP_NETWORK` | Block network requests | `false` |
| `STELLAOPS_REQUIRE_SIGNATURES` | Require signed data | `true` |
|
||||
|
||||
### Config File
|
||||
|
||||
```yaml
|
||||
# ~/.stellaops/config.yaml
|
||||
offline:
|
||||
enabled: true
|
||||
data_dir: /opt/stellaops/data
|
||||
require_signatures: true
|
||||
|
||||
feeds:
|
||||
source: local
|
||||
path: /opt/stellaops/data/feeds
|
||||
|
||||
sbom:
|
||||
cache_dir: /opt/stellaops/data/sboms
|
||||
|
||||
network:
|
||||
allow_list: [] # Empty = no network
|
||||
```
|
||||
|
||||
## Verification
|
||||
|
||||
### Verify Feed Freshness
|
||||
|
||||
```bash
|
||||
# Check feed dates
|
||||
stellaops offline status
|
||||
|
||||
# Output:
|
||||
# Feed Status (Offline Mode)
|
||||
# ─────────────────────────────
|
||||
# NVD: 2025-01-15 (2 days old)
|
||||
# OSV: 2025-01-15 (2 days old)
|
||||
# EPSS: 2025-01-14 (3 days old)
|
||||
# KEV: 2025-01-15 (2 days old)
|
||||
```
|
||||
|
||||
### Verify Proof Integrity
|
||||
|
||||
```bash
|
||||
# Verify smart-diff proofs
|
||||
stellaops smart-diff verify \
|
||||
--input smart-diff-report.json \
|
||||
--proof-bundle ./proofs
|
||||
|
||||
# Output:
|
||||
# ✓ Manifest hash verified
|
||||
# ✓ All proof nodes valid
|
||||
# ✓ Root hash matches: sha256:abc123...
|
||||
```
|
||||
|
||||
## Determinism Guarantees
|
||||
|
||||
Offline smart-diff maintains determinism by:
|
||||
|
||||
1. **Content-addressed feeds** - Same feed hash = same results
|
||||
2. **Frozen timestamps** - All timestamps use manifest creation time
|
||||
3. **No network randomness** - No external API calls
|
||||
4. **Stable sorting** - Deterministic output ordering
|
||||
|
||||
### Reproducibility Test
|
||||
|
||||
```bash
|
||||
# Run twice and compare
|
||||
stellaops smart-diff --base a.json --target b.json --output run1.json
|
||||
stellaops smart-diff --base a.json --target b.json --output run2.json
|
||||
|
||||
# Compare hashes
|
||||
sha256sum run1.json run2.json
|
||||
# abc123... run1.json
|
||||
# abc123... run2.json (identical)
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Error: Feed not found
|
||||
|
||||
```
|
||||
Error: Feed 'nvd' not found in offline data directory
|
||||
```
|
||||
|
||||
**Solution:** Ensure feed was included in offline kit:
|
||||
```bash
|
||||
stellaops offline kit status
|
||||
ls $STELLAOPS_FEED_DIR/nvd/
|
||||
```
|
||||
|
||||
### Error: Network request blocked
|
||||
|
||||
```
|
||||
Error: Network request blocked in offline mode: api.osv.dev
|
||||
```
|
||||
|
||||
**Solution:** This is expected behavior in offline mode. Ensure all required data is included in the offline bundle.
|
||||
|
||||
### Error: Signature verification failed
|
||||
|
||||
```
|
||||
Error: Bundle signature verification failed
|
||||
```
|
||||
|
||||
**Solution:** Ensure correct public key is configured:
|
||||
```bash
|
||||
stellaops offline kit verify \
|
||||
--input bundle.tar.gz \
|
||||
--public-key /path/to/correct-key.pub
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Offline Kit Guide](../OFFLINE_KIT.md)
|
||||
- [Smart-Diff CLI](../cli/smart-diff-cli.md)
|
||||
- [Smart-Diff types](../api/smart-diff-types.md)
|
||||
- [Determinism gates](../testing/determinism-gates.md)
|
||||
69
docs/modules/airgap/guides/staleness-and-time.md
Normal file
69
docs/modules/airgap/guides/staleness-and-time.md
Normal file
@@ -0,0 +1,69 @@
|
||||
# Air-Gapped Time Anchors & Staleness Budgets
|
||||
|
||||
> **Audience:** AirGap Time/Controller/Policy guilds, DevOps
|
||||
>
|
||||
> **Purpose:** Document how air-gapped installations maintain trusted time anchors, compute staleness windows, and expose drift telemetry.
|
||||
|
||||
## 1. Overview
|
||||
|
||||
Air-gapped clusters cannot contact external NTP servers. StellaOps distributes signed time anchor tokens alongside mirror bundles so services can reason about freshness and seal state without external clocks.
|
||||
|
||||
Key goals:
|
||||
|
||||
- Provide deterministic time anchors signed by the mirror authority.
|
||||
- Track drift and staleness budgets for scanner reports, advisories, and runtime evidence.
|
||||
- Surface warnings to operators (UI/CLI/Notifier) before anchors expire.
|
||||
|
||||
## 2. Components
|
||||
|
||||
| Component | Responsibility |
|-----------|----------------|
| AirGap Controller | Stores the active `time_anchor` token and enforces sealed/unsealed transitions. |
| AirGap Time service | Parses anchor bundles, validates signatures, records monotonic offsets, and exposes drift metrics. |
| Scheduler & Policy Engine | Query the time service to gate scheduled runs and evidence evaluation. |
| UI / Notifier | Display remaining budget and raise alerts when thresholds are crossed. |
|
||||
|
||||
## 3. Time Anchor Tokens
|
||||
|
||||
- Distributed as part of mirror/offline bundles (`airgap/time-anchor.json`).
|
||||
- Signed with mirror key; includes issuance time, validity window, and monotonic counter.
|
||||
- Validation steps:
|
||||
1. Verify detached signature.
|
||||
2. Compare bundle counter to previously applied anchors.
|
||||
3. Persist anchor with checksum for audit.
|
||||
|
||||
## 4. Staleness Budgets
|
||||
|
||||
Each tenant/configuration defines budgets:
|
||||
|
||||
- **Advisory freshness** – maximum age of advisory/VEX data before rescans are required.
|
||||
- **Scanner evidence** – acceptable drift between last scan and current anchor.
|
||||
- **Runtime posture** – tolerated drift before Notifier raises incidents.
|
||||
|
||||
AirGap Time calculates drift = `now(monotonic) - anchor.issued_at` and exposes:
|
||||
|
||||
- `/api/v1/time/status` – current anchor metadata, drift, remaining budget.
|
||||
- `/api/v1/time/metrics` – Prometheus counters (`airgap_anchor_drift_seconds`, `airgap_anchor_expiry_seconds`).
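For example, an operator can poll the status endpoint and inspect drift directly; the host is a placeholder, and the `tenantId` query parameter follows the Time API guide:

```bash
curl -s "http://<airgap-time-host>/api/v1/time/status?tenantId=tenant-default" | jq .
curl -s "http://<airgap-time-host>/api/v1/time/metrics" | grep 'airgap_anchor_'
```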
|
||||
|
||||
## 5. Operator Workflow
|
||||
|
||||
1. Import new mirror bundle (includes time anchor).
|
||||
2. AirGap Time validates and stores the anchor; Controller records audit entry.
|
||||
3. Services subscribe to change events and recompute drift.
|
||||
4. UI displays badge (green/amber/red) based on thresholds.
|
||||
5. Notifier sends alerts when drift exceeds warning or expiry limits.
|
||||
|
||||
## 6. Implementation Notes
|
||||
|
||||
- Use `IAirGapTimeStore` for persistence; default implementation relies on PostgreSQL with tenant partitioning.
|
||||
- Ensure deterministic JSON serialization (UTC ISO-8601 timestamps, sorted keys).
|
||||
- Test vectors located under `src/AirGap/StellaOps.AirGap.Time/fixtures/`.
|
||||
- For offline testing, simulate monotonic clock via `ITestClock` to avoid system clock drift in CI.
|
||||
- Staleness calculations use `StalenessCalculator` + `StalenessBudget`/`StalenessEvaluation` (see `src/AirGap/StellaOps.AirGap.Time/Services` and `.Models`); warning/breach thresholds must be non-negative and warning ≤ breach.
|
||||
|
||||
## 7. References
|
||||
|
||||
- `docs/modules/airgap/guides/airgap-mode.md`
|
||||
- `src/AirGap/StellaOps.AirGap.Time`
|
||||
- `src/AirGap/StellaOps.AirGap.Controller`
|
||||
- `src/AirGap/StellaOps.AirGap.Policy`
|
||||
316
docs/modules/airgap/guides/symbol-bundles.md
Normal file
316
docs/modules/airgap/guides/symbol-bundles.md
Normal file
@@ -0,0 +1,316 @@
|
||||
# Symbol Bundles for Air-Gapped Installations
|
||||
|
||||
**Reference:** SYMS-BUNDLE-401-014
|
||||
|
||||
This document describes how to create, verify, and deploy deterministic symbol bundles for air-gapped StellaOps installations.
|
||||
|
||||
## Overview
|
||||
|
||||
Symbol bundles package debug symbols (PDBs, DWARF, etc.) into a single archive with:
|
||||
- **Deterministic ordering** for reproducible builds
|
||||
- **BLAKE3 hashes** for content verification
|
||||
- **DSSE signatures** for authenticity
|
||||
- **Rekor checkpoints** for transparency log integration
|
||||
- **Merkle inclusion proofs** for offline verification
|
||||
|
||||
## Bundle Structure
|
||||
|
||||
```
|
||||
bundle-name-1.0.0.symbols.zip
|
||||
├── manifest.json # Bundle manifest with all metadata
|
||||
├── symbols/
|
||||
│ ├── {debug-id-1}/
|
||||
│ │ ├── myapp.exe.symbols # Symbol blob
|
||||
│ │ └── myapp.exe.symbols.json # Symbol manifest
|
||||
│ ├── {debug-id-2}/
|
||||
│ │ ├── libcrypto.so.symbols
|
||||
│ │ └── libcrypto.so.symbols.json
|
||||
│ └── ...
|
||||
```
|
||||
|
||||
## Creating a Bundle
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. Collect symbol manifests from CI builds or ingest tools
|
||||
2. Ensure all manifests follow the `*.symbols.json` naming convention
|
||||
3. Have signing keys available (if signing is required)
|
||||
|
||||
### Build Command
|
||||
|
||||
```bash
|
||||
# Basic bundle creation
|
||||
stella symbols bundle \
|
||||
--name "product-symbols" \
|
||||
--version "1.0.0" \
|
||||
--source ./symbols-dir \
|
||||
--output ./bundles
|
||||
|
||||
# With signing and Rekor submission
|
||||
stella symbols bundle \
|
||||
--name "product-symbols" \
|
||||
--version "1.0.0" \
|
||||
--source ./symbols-dir \
|
||||
--output ./bundles \
|
||||
--sign \
|
||||
--key ./signing-key.pem \
|
||||
--key-id "release-key-2025" \
|
||||
--rekor \
|
||||
--rekor-url https://rekor.sigstore.dev
|
||||
|
||||
# Filter by platform
|
||||
stella symbols bundle \
|
||||
--name "linux-symbols" \
|
||||
--version "1.0.0" \
|
||||
--source ./symbols-dir \
|
||||
--output ./bundles \
|
||||
--platform linux-x64
|
||||
```
|
||||
|
||||
### Bundle Options
|
||||
|
||||
| Option | Description |
|--------|-------------|
| `--name` | Bundle name (required) |
| `--version` | Bundle version in SemVer format (required) |
| `--source` | Source directory containing symbol manifests (required) |
| `--output` | Output directory for bundle archive (required) |
| `--platform` | Filter symbols by platform (e.g., linux-x64, win-x64) |
| `--tenant` | Filter symbols by tenant ID |
| `--sign` | Sign bundle with DSSE |
| `--key` | Path to signing key (PEM-encoded private key) |
| `--key-id` | Key ID for DSSE signature |
| `--algorithm` | Signing algorithm (ecdsa-p256, ed25519, rsa-pss-sha256) |
| `--rekor` | Submit to Rekor transparency log |
| `--rekor-url` | Rekor server URL |
| `--format` | Archive format: zip (default) or tar.gz |
| `--compression` | Compression level (0-9, default: 6) |
|
||||
|
||||
## Verifying a Bundle
|
||||
|
||||
### Online Verification
|
||||
|
||||
```bash
|
||||
stella symbols verify --bundle ./product-symbols-1.0.0.symbols.zip
|
||||
```
|
||||
|
||||
### Offline Verification
|
||||
|
||||
For air-gapped environments, include the Rekor public key:
|
||||
|
||||
```bash
|
||||
stella symbols verify \
|
||||
--bundle ./product-symbols-1.0.0.symbols.zip \
|
||||
--public-key ./signing-public-key.pem \
|
||||
--rekor-offline \
|
||||
--rekor-key ./rekor-public-key.pem
|
||||
```
|
||||
|
||||
### Verification Output
|
||||
|
||||
```
|
||||
Bundle verification successful!
|
||||
Bundle ID: a1b2c3d4e5f6g7h8
|
||||
Name: product-symbols-1.0.0.symbols
|
||||
Version: 1.0.0
|
||||
Signature: valid (ecdsa-p256)
|
||||
Hash verification: 42/42 valid
|
||||
```
|
||||
|
||||
## Extracting Symbols
|
||||
|
||||
### Full Extraction
|
||||
|
||||
```bash
|
||||
stella symbols extract \
|
||||
--bundle ./product-symbols-1.0.0.symbols.zip \
|
||||
--output ./extracted-symbols
|
||||
```
|
||||
|
||||
### Platform-Filtered Extraction
|
||||
|
||||
```bash
|
||||
stella symbols extract \
|
||||
--bundle ./product-symbols-1.0.0.symbols.zip \
|
||||
--output ./linux-symbols \
|
||||
--platform linux-x64
|
||||
```
|
||||
|
||||
### Manifests Only
|
||||
|
||||
```bash
|
||||
stella symbols extract \
|
||||
--bundle ./product-symbols-1.0.0.symbols.zip \
|
||||
--output ./manifests-only \
|
||||
--manifests-only
|
||||
```
|
||||
|
||||
## Inspecting Bundles
|
||||
|
||||
```bash
|
||||
# Basic info
|
||||
stella symbols inspect --bundle ./product-symbols-1.0.0.symbols.zip
|
||||
|
||||
# With entry listing
|
||||
stella symbols inspect --bundle ./product-symbols-1.0.0.symbols.zip --entries
|
||||
```
|
||||
|
||||
## Bundle Manifest Schema
|
||||
|
||||
The bundle manifest (`manifest.json`) follows this schema:
|
||||
|
||||
```json
|
||||
{
|
||||
"schemaVersion": "stellaops.symbols.bundle/v1",
|
||||
"bundleId": "blake3-hash-of-content",
|
||||
"name": "product-symbols",
|
||||
"version": "1.0.0",
|
||||
"createdAt": "2025-12-14T10:30:00Z",
|
||||
"platform": null,
|
||||
"tenantId": null,
|
||||
"entries": [
|
||||
{
|
||||
"debugId": "abc123def456",
|
||||
"codeId": "...",
|
||||
"binaryName": "myapp.exe",
|
||||
"platform": "win-x64",
|
||||
"format": "pe",
|
||||
"manifestHash": "blake3...",
|
||||
"blobHash": "blake3...",
|
||||
"blobSizeBytes": 102400,
|
||||
"archivePath": "symbols/abc123def456/myapp.exe.symbols",
|
||||
"symbolCount": 5000
|
||||
}
|
||||
],
|
||||
"totalSizeBytes": 10485760,
|
||||
"signature": {
|
||||
"signed": true,
|
||||
"algorithm": "ecdsa-p256",
|
||||
"keyId": "release-key-2025",
|
||||
"dsseDigest": "sha256:...",
|
||||
"signedAt": "2025-12-14T10:30:00Z",
|
||||
"publicKey": "-----BEGIN PUBLIC KEY-----..."
|
||||
},
|
||||
"rekorCheckpoint": {
|
||||
"rekorUrl": "https://rekor.sigstore.dev",
|
||||
"logEntryId": "...",
|
||||
"logIndex": 12345678,
|
||||
"integratedTime": "2025-12-14T10:30:01Z",
|
||||
"rootHash": "sha256:...",
|
||||
"treeSize": 987654321,
|
||||
"inclusionProof": {
|
||||
"logIndex": 12345678,
|
||||
"rootHash": "sha256:...",
|
||||
"treeSize": 987654321,
|
||||
"hashes": ["sha256:...", "sha256:..."]
|
||||
},
|
||||
"logPublicKey": "-----BEGIN PUBLIC KEY-----..."
|
||||
},
|
||||
"hashAlgorithm": "blake3"
|
||||
}
|
||||
```
|
||||
|
||||
## Air-Gap Deployment Workflow
|
||||
|
||||
### 1. Create Bundle (Online Environment)
|
||||
|
||||
```bash
|
||||
# On the online build server
|
||||
stella symbols bundle \
|
||||
--name "release-v2.0.0-symbols" \
|
||||
--version "2.0.0" \
|
||||
--source /build/symbols \
|
||||
--output /export \
|
||||
--sign --key /keys/release.pem \
|
||||
--rekor
|
||||
```
|
||||
|
||||
### 2. Transfer to Air-Gapped Environment
|
||||
|
||||
Copy the following files to the air-gapped environment:
|
||||
- `release-v2.0.0-symbols-2.0.0.symbols.zip`
|
||||
- `release-v2.0.0-symbols-2.0.0.manifest.json`
|
||||
- `signing-public-key.pem` (if not already present)
|
||||
- `rekor-public-key.pem` (for Rekor offline verification)
|
||||
|
||||
### 3. Verify (Air-Gapped Environment)
|
||||
|
||||
```bash
|
||||
# On the air-gapped server
|
||||
stella symbols verify \
|
||||
--bundle ./release-v2.0.0-symbols-2.0.0.symbols.zip \
|
||||
--public-key ./signing-public-key.pem \
|
||||
--rekor-offline \
|
||||
--rekor-key ./rekor-public-key.pem
|
||||
```
|
||||
|
||||
### 4. Extract and Deploy
|
||||
|
||||
```bash
|
||||
# Extract to symbols server directory
|
||||
stella symbols extract \
|
||||
--bundle ./release-v2.0.0-symbols-2.0.0.symbols.zip \
|
||||
--output /var/stellaops/symbols \
|
||||
--verify
|
||||
```
|
||||
|
||||
## Determinism Guarantees
|
||||
|
||||
Symbol bundles are deterministic:
|
||||
|
||||
1. **Entry ordering**: Entries sorted by debug ID, then binary name (lexicographic)
|
||||
2. **Hash algorithm**: BLAKE3 for all content hashes
|
||||
3. **Timestamps**: UTC ISO-8601 format
|
||||
4. **JSON serialization**: Canonical form (no whitespace, sorted keys)
|
||||
5. **Archive entries**: Sorted by path within archive
|
||||
|
||||
This ensures that given the same input manifests, the same bundle (excluding signatures) is produced.
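A simple way to exercise this, mirroring the smart-diff reproducibility check, is to build the same unsigned bundle twice and compare digests; paths below are placeholders:

```bash
stella symbols bundle --name "product-symbols" --version "1.0.0" \
  --source ./symbols-dir --output ./out-a
stella symbols bundle --name "product-symbols" --version "1.0.0" \
  --source ./symbols-dir --output ./out-b

# Unsigned builds from identical inputs should produce identical archives.
sha256sum ./out-a/product-symbols-1.0.0.symbols.zip ./out-b/product-symbols-1.0.0.symbols.zip
```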
|
||||
|
||||
## CI Integration
|
||||
|
||||
### GitHub Actions Example
|
||||
|
||||
```yaml
|
||||
- name: Build symbol bundle
|
||||
run: |
|
||||
stella symbols bundle \
|
||||
--name "${{ github.repository }}-symbols" \
|
||||
--version "${{ github.ref_name }}" \
|
||||
--source ./build/symbols \
|
||||
--output ./dist \
|
||||
--sign --key ${{ secrets.SIGNING_KEY }} \
|
||||
--rekor
|
||||
|
||||
- name: Upload bundle artifact
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: symbol-bundle
|
||||
path: ./dist/*.symbols.zip
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### "No symbol manifests found"
|
||||
|
||||
Ensure manifests follow the `*.symbols.json` naming convention and are not DSSE envelopes (`*.dsse.json`).
|
||||
|
||||
### "Signature verification failed"
|
||||
|
||||
Check that:
|
||||
1. The public key matches the signing key
|
||||
2. The bundle has not been modified after signing
|
||||
3. The key ID matches what was used during signing
|
||||
|
||||
### "Rekor inclusion proof invalid"
|
||||
|
||||
For offline verification:
|
||||
1. Ensure the Rekor public key is current
|
||||
2. The checkpoint was created when the log was online
|
||||
3. The tree size hasn't changed since the checkpoint
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Offline Kit Guide](../OFFLINE_KIT.md)
|
||||
- [Symbol Server Architecture](../modules/scanner/architecture.md)
|
||||
- [DSSE Signing Guide](../modules/signer/architecture.md)
|
||||
- [Rekor Integration](../modules/attestor/architecture.md)
|
||||
15
docs/modules/airgap/guides/time-anchor-schema.md
Normal file
15
docs/modules/airgap/guides/time-anchor-schema.md
Normal file
@@ -0,0 +1,15 @@
|
||||
# Time Anchor JSON schema (prep for AIRGAP-TIME-57-001)
|
||||
|
||||
Artifact: `docs/modules/airgap/schemas/time-anchor-schema.json`
|
||||
|
||||
Highlights:
|
||||
- Required: `anchorTime` (RFC3339), `source` (`roughtime`|`rfc3161`), `format` string, `tokenDigest` (sha256 hex of token bytes).
|
||||
- Optional: `signatureFingerprint` (hex), `verification.status` (`unknown|passed|failed`) + `reason`.
|
||||
- No additional properties to keep payload deterministic.
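An illustrative payload using only the fields above; every value, including the `format` string, is a placeholder rather than a normative example:

```bash
# Write an example anchor document matching the required fields above.
cat > time-anchor.example.json <<'JSON'
{
  "anchorTime": "2025-11-20T00:00:00Z",
  "source": "roughtime",
  "format": "roughtime",
  "tokenDigest": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
}
JSON
```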
|
||||
|
||||
Intended use:
|
||||
- AirGap Time Guild can embed this in sealed-mode configs and validation endpoints.
|
||||
- Mirror/OCI timelines can cite the digest + source without needing full token parsing.
|
||||
|
||||
Notes:
|
||||
- Trust roots and final signature fingerprint rules stay TBD; placeholders remain optional to avoid blocking until roots are issued.
|
||||
48
docs/modules/airgap/guides/time-anchor-trust-roots.md
Normal file
48
docs/modules/airgap/guides/time-anchor-trust-roots.md
Normal file
@@ -0,0 +1,48 @@
|
||||
# Time Anchor Trust Roots (draft) — for AIRGAP-TIME-57-001
|
||||
|
||||
Provides a minimal, deterministic format for distributing trust roots used to validate time tokens (Roughtime and RFC3161) in sealed/offline environments.
|
||||
|
||||
## Artefacts
|
||||
- JSON schema: `docs/modules/airgap/schemas/time-anchor-schema.json`
|
||||
- Trust roots bundle (draft): `docs/modules/airgap/samples/time-anchor-trust-roots.json`
|
||||
|
||||
## Bundle format (`time-anchor-trust-roots.json`)
|
||||
```json
|
||||
{
|
||||
"version": 1,
|
||||
"roughtime": [
|
||||
{
|
||||
"name": "stellaops-test-roughtime",
|
||||
"publicKeyBase64": "BASE64_ED25519_PUBLIC_KEY",
|
||||
"validFrom": "2025-01-01T00:00:00Z",
|
||||
"validTo": "2026-01-01T00:00:00Z"
|
||||
}
|
||||
],
|
||||
"rfc3161": [
|
||||
{
|
||||
"name": "stellaops-test-tsa",
|
||||
"certificatePem": "-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----",
|
||||
"validFrom": "2025-01-01T00:00:00Z",
|
||||
"validTo": "2026-01-01T00:00:00Z",
|
||||
"fingerprintSha256": "HEX_SHA256"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
- All times are UTC ISO-8601.
|
||||
- Fields are deterministic; there are no optional properties (each list may simply contain multiple entries).
|
||||
- Consumers must reject expired roots and enforce matching token format (Roughtime vs RFC3161).
|
||||
|
||||
## Usage guidance
|
||||
- Ship the bundle with the air-gapped deployment alongside the time-anchor schema.
|
||||
- Configure AirGap Time service to load roots from a sealed path; do not fetch over network.
|
||||
- Rotate by bumping `version`, adding new entries, and setting `validFrom/validTo`; keep prior roots until all deployments roll.
|
||||
|
||||
## Next steps
|
||||
- Replace placeholder values with production Roughtime public keys and TSA certificates once issued by Security.
|
||||
- Add regression tests in `StellaOps.AirGap.Time.Tests` that load this bundle and validate sample tokens once real roots are present.
|
||||
- CI/Dev unblock: you can test end-to-end with a throwaway root by:
|
||||
1. Generate Ed25519 key for Roughtime: `openssl genpkey -algorithm Ed25519 -out rtime-dev.pem && openssl pkey -in rtime-dev.pem -pubout -out rtime-dev.pub`.
|
||||
2. Base64-encode the public key (`base64 -w0 rtime-dev.pub`) and place into `publicKeyBase64`; set validity to a short window.
|
||||
3. Point `AirGap:TrustRootFile` at your edited bundle and set `AirGap:AllowUntrustedAnchors=true` only in dev.
|
||||
4. Run `scripts/mirror/verify_thin_bundle.py --time-root docs/modules/airgap/samples/time-anchor-trust-roots.json` to ensure bundle is parsable.
|
||||
21
docs/modules/airgap/guides/time-anchor-verification-gap.md
Normal file
21
docs/modules/airgap/guides/time-anchor-verification-gap.md
Normal file
@@ -0,0 +1,21 @@
|
||||
# Time Anchor Verification Gap (AIRGAP-TIME-57-001 follow-up)
|
||||
|
||||
## Status (2025-11-20)
|
||||
- Parser: Roughtime verifier now checks Ed25519 signature; RFC3161 verifier uses SignedCms signature validation and signing time attribute. Still needs final trust root bundle + fixture alignment.
|
||||
- Staleness: calculator + budgets landed; loader accepts hex fixtures.
|
||||
- Verification: pipeline (`TimeVerificationService`) active; awaiting guild-provided trust roots (format + key IDs) for production readiness and to update tests/fixtures.
|
||||
|
||||
## What’s missing
|
||||
- Roughtime parser: parse signed responses, extract `timestamp`, `radius`, `verifier` public key; verify signature.
|
||||
- RFC3161 parser: decode ASN.1 TimeStampToken, verify signer chain against provided trust roots, extract nonce/ts.
|
||||
- Trust roots: final format (JWK vs PEM) and key IDs to align with `TrustRootConfig`/Time service.
|
||||
|
||||
## Proposed plan
|
||||
1) Receive finalized token format + trust-root bundle from Time Guild.
|
||||
2) Implement format-specific verifiers with validating tests using provided fixtures.
|
||||
3) Expose `/api/v1/time/status` returning anchor metadata + staleness; wire telemetry counters/alerts per sealed diagnostics doc.
|
||||
|
||||
## Owners
|
||||
- AirGap Time Guild (format decision + trust roots)
|
||||
- AirGap Importer Guild (bundle delivery of anchors)
|
||||
- Observability Guild (telemetry wiring)
|
||||
60
docs/modules/airgap/guides/time-api.md
Normal file
60
docs/modules/airgap/guides/time-api.md
Normal file
@@ -0,0 +1,60 @@
|
||||
# AirGap Time API (status + anchor ingest)
|
||||
|
||||
## Endpoints
|
||||
|
||||
- `POST /api/v1/time/anchor`
|
||||
- Body (JSON):
|
||||
- `tenantId` (string, required)
|
||||
- `hexToken` (string, required) — hex-encoded Roughtime or RFC3161 token.
|
||||
- `format` (string, required) — `Roughtime` or `Rfc3161`.
|
||||
- `trustRootKeyId` (string, required)
|
||||
- `trustRootAlgorithm` (string, required)
|
||||
- `trustRootPublicKeyBase64` (string, required) — pubkey (Ed25519 for Roughtime, RSA for RFC3161).
|
||||
- `warningSeconds` (number, optional)
|
||||
- `breachSeconds` (number, optional)
|
||||
- Response: `TimeStatusDto` (anchor + staleness snapshot) or 400 with reason (`token-hex-invalid`, `roughtime-signature-invalid`, `rfc3161-verify-failed:*`, etc.).
|
||||
- Example:
|
||||
```bash
|
||||
curl -s -X POST http://localhost:5000/api/v1/time/anchor \
|
||||
-H 'content-type: application/json' \
|
||||
-d '{
|
||||
"tenantId":"tenant-default",
|
||||
"hexToken":"01020304deadbeef",
|
||||
"format":"Roughtime",
|
||||
"trustRootKeyId":"root-1",
|
||||
"trustRootAlgorithm":"ed25519",
|
||||
"trustRootPublicKeyBase64":"<base64-ed25519-public-key>",
|
||||
"warningSeconds":3600,
|
||||
"breachSeconds":7200
|
||||
}'
|
||||
```
|
||||
|
||||
- `GET /api/v1/time/status?tenantId=<id>`
|
||||
- Returns `TimeStatusDto` with anchor metadata and staleness flags. 400 if `tenantId` missing.
|
||||
|
||||
- `GET /healthz/ready`
|
||||
- Health check: `Healthy` when anchor present and not stale; `Degraded` when warning threshold crossed; `Unhealthy` when missing/stale. Uses configured tenant/budgets.
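Example read-side calls, matching the base URL used in the POST example above:

```bash
curl -s "http://localhost:5000/api/v1/time/status?tenantId=tenant-default" | jq .
curl -si "http://localhost:5000/healthz/ready"
```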
|
||||
|
||||
## Config
|
||||
|
||||
`appsettings.json` (see `docs/modules/airgap/samples/time-config-sample.json`):
|
||||
```json
|
||||
{
|
||||
"AirGap": {
|
||||
"TenantId": "tenant-default",
|
||||
"Staleness": {
|
||||
"WarningSeconds": 3600,
|
||||
"BreachSeconds": 7200
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Startup validation
|
||||
- The host runs sealed-mode validation at startup using the configured tenant and budgets.
|
||||
- Fails closed with `sealed-startup-blocked:<reason>` if anchor is missing/stale or budgets mismatch.
|
||||
|
||||
## Notes
|
||||
- Roughtime verifier checks Ed25519 signatures (message||signature framing).
|
||||
- RFC3161 verifier uses SignedCms signature verification and signing-time attribute for anchor time.
|
||||
- DTO serialization is stable (ISO-8601 UTC timestamps, fields fixed).
|
||||
367
docs/modules/airgap/guides/triage-airgap-workflows.md
Normal file
367
docs/modules/airgap/guides/triage-airgap-workflows.md
Normal file
@@ -0,0 +1,367 @@
|
||||
# Triage Air-Gap Workflows
|
||||
|
||||
**Sprint:** SPRINT_3600_0001_0001
|
||||
**Task:** TRI-MASTER-0006 - Document air-gap triage workflows
|
||||
|
||||
## Overview
|
||||
|
||||
This document describes how to perform vulnerability triage in fully air-gapped environments. The triage workflow supports offline evidence bundles, decision capture, and replay token generation.
|
||||
|
||||
## Workflow 1: Offline Triage with Evidence Bundles
|
||||
|
||||
### Step 1: Export Evidence Bundle (Connected Machine)
|
||||
|
||||
```bash
|
||||
# Export triage bundle for specific findings
|
||||
stellaops triage export \
|
||||
--scan-id scan-12345678 \
|
||||
--findings CVE-2024-1234,CVE-2024-5678 \
|
||||
--include-evidence \
|
||||
--include-graph \
|
||||
--output triage-bundle.stella.bundle.tgz
|
||||
|
||||
# Export entire scan for offline review
|
||||
stellaops triage export \
|
||||
--scan-id scan-12345678 \
|
||||
--all-findings \
|
||||
--output full-triage-bundle.stella.bundle.tgz
|
||||
```
|
||||
|
||||
### Step 2: Bundle Contents
|
||||
|
||||
The `.stella.bundle.tgz` archive contains:
|
||||
|
||||
```
|
||||
triage-bundle.stella.bundle.tgz/
|
||||
├── manifest.json # Signed bundle manifest
|
||||
├── findings/
|
||||
│ ├── index.json # Finding list with IDs
|
||||
│ ├── CVE-2024-1234.json # Finding details
|
||||
│ └── CVE-2024-5678.json
|
||||
├── evidence/
|
||||
│ ├── reachability/ # Reachability proofs
|
||||
│ ├── callstack/ # Call stack snippets
|
||||
│ ├── vex/ # VEX/CSAF statements
|
||||
│ └── provenance/ # Provenance data
|
||||
├── graph/
|
||||
│ ├── nodes.ndjson # Dependency graph nodes
|
||||
│ └── edges.ndjson # Graph edges
|
||||
├── feeds/
|
||||
│ └── snapshot.json # Feed snapshot metadata
|
||||
└── signature.dsse # DSSE envelope
|
||||
```
|
||||
|
||||
### Step 3: Transfer to Air-Gapped Environment
|
||||
|
||||
Transfer using approved methods:
|
||||
- USB media (security scanned)
|
||||
- Optical media
|
||||
- Data diode
|
||||
|
||||
### Step 4: Import and Verify
|
||||
|
||||
On the air-gapped machine:
|
||||
|
||||
```bash
|
||||
# Verify bundle integrity
|
||||
stellaops triage verify-bundle \
|
||||
--input triage-bundle.stella.bundle.tgz \
|
||||
--public-key /path/to/signing-key.pub
|
||||
|
||||
# Import for offline triage
|
||||
stellaops triage import \
|
||||
--input triage-bundle.stella.bundle.tgz \
|
||||
--workspace /opt/stellaops/triage
|
||||
```
|
||||
|
||||
### Step 5: Perform Offline Triage
|
||||
|
||||
```bash
|
||||
# List findings in bundle
|
||||
stellaops triage list \
|
||||
--workspace /opt/stellaops/triage
|
||||
|
||||
# View finding with evidence
|
||||
stellaops triage show CVE-2024-1234 \
|
||||
--workspace /opt/stellaops/triage \
|
||||
--show-evidence
|
||||
|
||||
# Make triage decision
|
||||
stellaops triage decide CVE-2024-1234 \
|
||||
--workspace /opt/stellaops/triage \
|
||||
--status not_affected \
|
||||
--justification "Code path is unreachable due to config gating" \
|
||||
--reviewer "security-team"
|
||||
```
|
||||
|
||||
### Step 6: Export Decisions
|
||||
|
||||
```bash
|
||||
# Export decisions for sync back
|
||||
stellaops triage export-decisions \
|
||||
--workspace /opt/stellaops/triage \
|
||||
--output decisions-2025-01-15.json \
|
||||
--sign
|
||||
```
|
||||
|
||||
### Step 7: Sync Decisions (Connected Machine)
|
||||
|
||||
```bash
|
||||
# Import and apply decisions
|
||||
stellaops triage import-decisions \
|
||||
--input decisions-2025-01-15.json \
|
||||
--verify \
|
||||
--apply
|
||||
```
|
||||
|
||||
## Workflow 2: Batch Offline Triage
|
||||
|
||||
For high-volume environments.
|
||||
|
||||
### Step 1: Export Batch Bundle
|
||||
|
||||
```bash
|
||||
# Export all untriaged findings
|
||||
stellaops triage export-batch \
|
||||
--query "status=untriaged AND priority>=0.7" \
|
||||
--limit 100 \
|
||||
--output batch-triage-2025-01-15.stella.bundle.tgz
|
||||
```
|
||||
|
||||
### Step 2: Offline Batch Processing
|
||||
|
||||
```bash
|
||||
# Interactive batch triage
|
||||
stellaops triage batch \
|
||||
--workspace /opt/stellaops/triage \
|
||||
--input batch-triage-2025-01-15.stella.bundle.tgz
|
||||
|
||||
# Keyboard shortcuts enabled:
|
||||
# j/k - Next/Previous finding
|
||||
# a - Accept (affected)
|
||||
# n - Not affected
|
||||
# w - Will not fix
|
||||
# f - False positive
|
||||
# u - Undo last decision
|
||||
# q - Quit (saves progress)
|
||||
```
|
||||
|
||||
### Step 3: Export and Sync
|
||||
|
||||
```bash
|
||||
# Export batch decisions
|
||||
stellaops triage export-decisions \
|
||||
--workspace /opt/stellaops/triage \
|
||||
--format json \
|
||||
--sign \
|
||||
--output batch-decisions.json
|
||||
```
|
||||
|
||||
## Workflow 3: Evidence-First Offline Review
|
||||
|
||||
### Step 1: Pre-compute Evidence
|
||||
|
||||
On connected machine:
|
||||
|
||||
```bash
|
||||
# Generate evidence for all high-priority findings
|
||||
stellaops evidence generate \
|
||||
--scan-id scan-12345678 \
|
||||
--priority-min 0.7 \
|
||||
--output-dir ./evidence-pack
|
||||
|
||||
# Include:
|
||||
# - Reachability analysis
|
||||
# - Call stack traces
|
||||
# - VEX lookups
|
||||
# - Dependency graph snippets
|
||||
```
|
||||
|
||||
### Step 2: Package with Findings
|
||||
|
||||
```bash
|
||||
stellaops triage package \
|
||||
--scan-id scan-12345678 \
|
||||
--evidence-dir ./evidence-pack \
|
||||
--output evidence-triage.stella.bundle.tgz
|
||||
```
|
||||
|
||||
### Step 3: Offline Review with Evidence
|
||||
|
||||
```bash
|
||||
# Evidence-first view
|
||||
stellaops triage show CVE-2024-1234 \
|
||||
--workspace /opt/stellaops/triage \
|
||||
--evidence-first
|
||||
|
||||
# Output:
|
||||
# ═══════════════════════════════════════════
|
||||
# CVE-2024-1234 · lodash@4.17.20
|
||||
# ═══════════════════════════════════════════
|
||||
#
|
||||
# EVIDENCE SUMMARY
|
||||
# ────────────────
|
||||
# Reachability: EXECUTED (tier 2/3)
|
||||
# └─ main.js:42 → utils.js:15 → lodash/merge
|
||||
#
|
||||
# Call Stack:
|
||||
# 1. main.js:42 handleRequest()
|
||||
# 2. utils.js:15 mergeConfig()
|
||||
# 3. lodash:merge <vulnerable>
|
||||
#
|
||||
# VEX Status: No statement found
|
||||
# EPSS: 0.45 (Medium)
|
||||
# KEV: No
|
||||
#
|
||||
# ─────────────────────────────────────────────
|
||||
# Press [a]ffected, [n]ot affected, [s]kip...
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
| Variable | Description | Default |
|----------|-------------|---------|
| `STELLAOPS_OFFLINE` | Enable offline mode | `false` |
| `STELLAOPS_TRIAGE_WORKSPACE` | Triage workspace path | `~/.stellaops/triage` |
| `STELLAOPS_BUNDLE_VERIFY` | Verify bundle signatures | `true` |
| `STELLAOPS_DECISION_SIGN` | Sign exported decisions | `true` |
|
||||
|
||||
### Config File
|
||||
|
||||
```yaml
|
||||
# ~/.stellaops/triage.yaml
|
||||
offline:
|
||||
enabled: true
|
||||
workspace: /opt/stellaops/triage
|
||||
bundle_verify: true
|
||||
|
||||
decisions:
|
||||
require_justification: true
|
||||
sign_exports: true
|
||||
|
||||
keyboard:
|
||||
enabled: true
|
||||
vim_mode: true
|
||||
```
|
||||
|
||||
## Bundle Format Specification
|
||||
|
||||
### manifest.json
|
||||
|
||||
```json
|
||||
{
|
||||
"version": "1.0",
|
||||
"type": "triage-bundle",
|
||||
"created_at": "2025-01-15T10:00:00Z",
|
||||
"scan_id": "scan-12345678",
|
||||
"finding_count": 25,
|
||||
"feed_snapshot": "sha256:abc123...",
|
||||
"graph_revision": "sha256:def456...",
|
||||
"signatures": {
|
||||
"manifest": "sha256:ghi789...",
|
||||
"dsse_envelope": "signature.dsse"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Decision Format
|
||||
|
||||
```json
|
||||
{
|
||||
"finding_id": "finding-12345678",
|
||||
"vuln_key": "CVE-2024-1234:pkg:npm/lodash@4.17.20",
|
||||
"status": "not_affected",
|
||||
"justification": "Code path gated by feature flag",
|
||||
"reviewer": "security-team",
|
||||
"decided_at": "2025-01-15T14:30:00Z",
|
||||
"replay_token": "rt_abc123...",
|
||||
"evidence_refs": [
|
||||
"evidence/reachability/CVE-2024-1234.json"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Replay Tokens
|
||||
|
||||
Each decision generates a replay token for audit trail:
|
||||
|
||||
```bash
|
||||
# View replay token
|
||||
stellaops triage show-token rt_abc123...
|
||||
|
||||
# Output:
|
||||
# Replay Token: rt_abc123...
|
||||
# ─────────────────────────────
|
||||
# Finding: CVE-2024-1234
|
||||
# Decision: not_affected
|
||||
# Evidence Hash: sha256:xyz789...
|
||||
# Feed Snapshot: sha256:abc123...
|
||||
# Decided: 2025-01-15T14:30:00Z
|
||||
# Reviewer: security-team
|
||||
```
|
||||
|
||||
### Verify Token
|
||||
|
||||
```bash
|
||||
stellaops triage verify-token rt_abc123... \
|
||||
--public-key /path/to/key.pub
|
||||
|
||||
# ✓ Token signature valid
|
||||
# ✓ Evidence hash matches
|
||||
# ✓ Feed snapshot verified
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Error: Bundle signature invalid
|
||||
|
||||
```
|
||||
Error: Bundle signature verification failed
|
||||
```
|
||||
|
||||
**Solution:** Ensure the correct public key is used:
|
||||
```bash
|
||||
stellaops triage verify-bundle \
|
||||
--input bundle.tgz \
|
||||
--public-key /path/to/correct-key.pub \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Error: Evidence not found
|
||||
|
||||
```
|
||||
Error: Evidence for CVE-2024-1234 not included in bundle
|
||||
```
|
||||
|
||||
**Solution:** Re-export with evidence:
|
||||
```bash
|
||||
stellaops triage export \
|
||||
--scan-id scan-12345678 \
|
||||
--findings CVE-2024-1234 \
|
||||
--include-evidence \
|
||||
--output bundle.tgz
|
||||
```
|
||||
|
||||
### Error: Decision sync conflict
|
||||
|
||||
```
|
||||
Error: Finding CVE-2024-1234 has newer decision on server
|
||||
```
|
||||
|
||||
**Solution:** Review and resolve:
|
||||
```bash
|
||||
stellaops triage import-decisions \
|
||||
--input decisions.json \
|
||||
--conflict-mode review
|
||||
|
||||
# Options: keep-local, keep-server, newest, review
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Offline Kit Guide](../OFFLINE_KIT.md)
|
||||
- [Vulnerability Explorer guide](../VULNERABILITY_EXPLORER_GUIDE.md)
|
||||
- [Triage contract](../api/triage.contract.v1.md)
|
||||
- [Console accessibility](../accessibility.md)
|
||||
60
docs/modules/airgap/runbooks/av-scan.md
Normal file
60
docs/modules/airgap/runbooks/av-scan.md
Normal file
@@ -0,0 +1,60 @@
|
||||
# AV/YARA Scan Runbook (AIRGAP-AV-510-011)
|
||||
|
||||
Purpose: ensure every offline-kit bundle is scanned pre-publish and post-ingest, with deterministic reports and optional signatures.
|
||||
|
||||
## Inputs
|
||||
- Bundle directory containing `manifest.json` and payload files.
|
||||
- AV scanner (e.g., ClamAV) and optional YARA rule set available locally (no network).
|
||||
|
||||
## Steps (offline)
|
||||
1. Scan all bundle files:
|
||||
```bash
|
||||
clamscan -r --max-filesize=2G --max-scansize=4G --no-summary bundle/ > reports/av-scan.txt
|
||||
```
|
||||
2. Convert to structured report:
|
||||
```bash
|
||||
python - <<'PY' > reports/av-report.json
|
||||
import hashlib, json, pathlib, sys
|
||||
root = pathlib.Path("bundle")
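# NOTE: scanner version, timestamps, and the per-file "clean" results below are
# placeholders for this template; populate them from the actual scan output.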
|
||||
report = {
|
||||
"scanner": "clamav",
|
||||
"scannerVersion": "1.4.1",
|
||||
"startedAt": "2025-12-02T00:02:00Z",
|
||||
"completedAt": "2025-12-02T00:04:30Z",
|
||||
"status": "clean",
|
||||
"artifacts": [],
|
||||
"errors": []
|
||||
}
|
||||
for path in sorted(root.glob("**/*")):
|
||||
if path.is_file():
|
||||
h = hashlib.sha256(path.read_bytes()).hexdigest()
|
||||
report["artifacts"].append({
|
||||
"path": str(path.relative_to(root)),
|
||||
"sha256": h,
|
||||
"result": "clean",
|
||||
"yaraRules": []
|
||||
})
|
||||
json.dump(report, sys.stdout, indent=2)
|
||||
PY
|
||||
```
|
||||
3. Validate the report (the schema is `docs/modules/airgap/schemas/av-report.schema.json`; `jq` only confirms well-formed JSON, so also run an offline JSON Schema validator such as `check-jsonschema` against it where available):
```bash
jq empty reports/av-report.json
```
|
||||
4. Optionally sign report (detached):
|
||||
```bash
|
||||
openssl dgst -sha256 -sign airgap-av-key.pem reports/av-report.json > reports/av-report.sig
|
||||
```
|
||||
5. Update `manifest.json`:
|
||||
- Set `avScan.status` to `clean` or `findings`.
|
||||
- `avScan.reportPath` and `avScan.reportSha256` must match the generated report.
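A hedged sketch of that update using `jq`; the field names come from the manifest schema, `"clean"` assumes the scan found nothing, and the temp-file rewrite is just one way to apply it:
```bash
REPORT_SHA=$(sha256sum reports/av-report.json | cut -d' ' -f1)
jq --arg sha "$REPORT_SHA" \
   '.avScan.status = "clean"
    | .avScan.reportPath = "reports/av-report.json"
    | .avScan.reportSha256 = $sha' \
   bundle/manifest.json > bundle/manifest.json.tmp && mv bundle/manifest.json.tmp bundle/manifest.json
```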
|
||||
|
||||
## Acceptance checks
|
||||
- Report validates against `docs/modules/airgap/schemas/av-report.schema.json`.
|
||||
- `manifest.json` hashes updated and verified via `src/AirGap/scripts/verify-manifest.sh`.
|
||||
- If any artifact result is `malicious`/`suspicious`, the bundle must be rejected and re-scanned after remediation.
|
||||
|
||||
## References
|
||||
- Manifest schema: `docs/modules/airgap/schemas/manifest.schema.json`
|
||||
- Sample report: `docs/modules/airgap/samples/av-report.sample.json`
|
||||
- Manifest verifier: `src/AirGap/scripts/verify-manifest.sh`
|
||||
57
docs/modules/airgap/runbooks/import-verify.md
Normal file
57
docs/modules/airgap/runbooks/import-verify.md
Normal file
@@ -0,0 +1,57 @@
|
||||
# Offline Kit Import Verification Runbook
|
||||
|
||||
This runbook supports AIRGAP-MANIFEST-510-010, AIRGAP-REPLAY-510-013, and AIRGAP-VERIFY-510-014. It validates bundles fully offline and enforces replay depth.
|
||||
|
||||
## Replay depth levels (manifest `replayPolicy`)
|
||||
- `hash-only`: verify manifest/bundle digests, staleness window, optional signature.
|
||||
- `full-recompute`: hash-only + every chunk hash + AV report hash.
|
||||
- `policy-freeze`: full-recompute + manifest policies must include the sealed policy hash (prevents imports with drifting policy/graph material).
|
||||
|
||||
## Quick steps
|
||||
|
||||
```bash
|
||||
src/AirGap/scripts/verify-kit.sh \
|
||||
--manifest offline-kit/manifest.json \
|
||||
--bundle offline-kit/bundle.tar.gz \
|
||||
--signature offline-kit/manifest.sig --pubkey offline-kit/manifest.pub.pem \
|
||||
--av-report offline-kit/reports/av-report.json \
|
||||
--receipt offline-kit/receipts/ingress.json \
|
||||
--sealed-policy-hash "aa55..." \
|
||||
--depth policy-freeze
|
||||
```
|
||||
|
||||
## What the script enforces
|
||||
1) Manifest & bundle digests match (`hashes.*`).
|
||||
2) Optional manifest signature is valid (OpenSSL).
|
||||
3) Staleness: `createdAt` must be within `stalenessWindowHours` of `--now` (defaults to UTC now).
|
||||
4) AV: `avScan.status` must not be `findings`; if `reportSha256` is present, the provided report hash must match.
|
||||
5) Chunks (full-recompute/policy-freeze): every `chunks[].path` exists relative to the manifest and matches its recorded SHA-256.
|
||||
6) Policy-freeze: `--sealed-policy-hash` must appear in `policies[].sha256`.
|
||||
7) Optional: `--expected-graph-sha` checks the graph chunk hash; `--receipt` reuses `verify-receipt.sh` to bind the receipt to the manifest/bundle hashes.
|
||||
|
||||
Exit codes: hash mismatch (3/4), staleness (5), AV issues (6–8), chunk drift (9–10), graph mismatch (11), policy drift (12–13), bad depth (14).
|
||||
|
||||
## Controller verify endpoint (server-side guard)
|
||||
|
||||
`POST /system/airgap/verify` (scope `airgap:verify`) expects `VerifyRequest`:
|
||||
|
||||
```jsonc
|
||||
{
|
||||
"depth": "PolicyFreeze",
|
||||
"manifestSha256": "...",
|
||||
"bundleSha256": "...",
|
||||
"computedManifestSha256": "...", // from offline verifier
|
||||
"computedBundleSha256": "...",
|
||||
"manifestCreatedAt": "2025-12-02T00:00:00Z",
|
||||
"stalenessWindowHours": 168,
|
||||
"bundlePolicyHash": "aa55...",
|
||||
"sealedPolicyHash": "aa55..." // optional, controller fills from state if omitted
|
||||
}
|
||||
```
|
||||
|
||||
The controller applies the same replay rules and returns `{ "valid": true|false, "reason": "..." }`.
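A request sketch, assuming the `VerifyRequest` body above is saved to `verify-request.json` and a bearer token carrying the `airgap:verify` scope is available (token acquisition is deployment-specific):

```bash
curl -s -X POST "https://<controller-host>/system/airgap/verify" \
  -H "Authorization: Bearer $AIRGAP_VERIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d @verify-request.json
```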
|
||||
|
||||
## References
|
||||
- Schema: `docs/modules/airgap/schemas/manifest.schema.json`
|
||||
- Samples: `docs/modules/airgap/samples/offline-kit-manifest.sample.json`, `docs/modules/airgap/samples/av-report.sample.json`, `docs/modules/airgap/samples/receipt.sample.json`
|
||||
- Scripts: `src/AirGap/scripts/verify-kit.sh`, `src/AirGap/scripts/verify-manifest.sh`, `src/AirGap/scripts/verify-receipt.sh`
|
||||
39
docs/modules/airgap/runbooks/quarantine-investigation.md
Normal file
39
docs/modules/airgap/runbooks/quarantine-investigation.md
Normal file
@@ -0,0 +1,39 @@
|
||||
# AirGap Quarantine Investigation Runbook
|
||||
|
||||
## Purpose
|
||||
Quarantine preserves failed bundle imports for offline forensic analysis. It keeps the original bundle and the verification context (reason + logs) so operators can diagnose tampering, trust-root drift, or packaging issues without re-running in an online environment.
|
||||
|
||||
## Location & Structure
|
||||
Default root: `/updates/quarantine`
|
||||
|
||||
Per-tenant layout:
|
||||
`/updates/quarantine/<tenantId>/<timestamp>-<reason>-<id>/`
|
||||
|
||||
Removal staging:
|
||||
`/updates/quarantine/<tenantId>/.removed/<quarantineId>/`
|
||||
|
||||
## Files in a quarantine entry
|
||||
- `bundle.tar.zst` - the original bundle as provided
|
||||
- `manifest.json` - bundle manifest (when available)
|
||||
- `verification.log` - validation step output (TUF/DSSE/Merkle/rotation/monotonicity, etc.)
|
||||
- `failure-reason.txt` - human-readable failure summary (reason + timestamp + metadata)
|
||||
- `quarantine.json` - structured metadata for listing/automation
|
||||
|
||||
## Investigation steps (offline)
|
||||
1. Identify the tenant and locate the quarantine root on the importer host.
|
||||
2. Pick the newest quarantine entry for the tenant (timestamp prefix).
|
||||
3. Read `failure-reason.txt` first to capture the top-level reason and metadata.
|
||||
4. Review `verification.log` for the precise failing step.
|
||||
5. If needed, extract and inspect `bundle.tar.zst` in an isolated workspace (no network).
|
||||
6. Decide whether the entry should be retained (for audit) or removed after investigation.
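A minimal offline walk-through of those steps, assuming GNU `tar` with zstd support; the tenant ID and review workspace are placeholders, and the newest-entry selection relies on the timestamp prefix sorting lexically:

```bash
TENANT=tenant-default
ROOT=/updates/quarantine/$TENANT

# Newest entry last when sorted by the timestamp prefix (".removed/" is skipped by the glob).
ENTRY=$(ls -1d "$ROOT"/*/ | sort | tail -n1)

cat "$ENTRY/failure-reason.txt"
less "$ENTRY/verification.log"

# Inspect the bundle in an isolated workspace (no network).
mkdir -p /tmp/quarantine-review && tar --zstd -xf "$ENTRY/bundle.tar.zst" -C /tmp/quarantine-review
```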
|
||||
|
||||
## Removal & Retention
|
||||
- Removal requires a human-provided reason (audit trail). Implementations should use the quarantine service’s remove operation which moves entries under `.removed/`.
|
||||
- Retention and quota controls are configured via `AirGap:Quarantine` settings (root, TTL, max size); TTL cleanup can remove entries older than the retention period.
|
||||
|
||||
## Common failure categories
|
||||
- `tuf:*` - invalid/expired metadata or snapshot hash mismatch
|
||||
- `dsse:*` - signature invalid or trust root mismatch
|
||||
- `merkle-*` - payload entry set invalid or empty
|
||||
- `rotation:*` - root rotation policy failure (dual approval, no-op rotation, etc.)
|
||||
- `version-non-monotonic:*` - rollback prevention triggered (force activation requires a justification)
|
||||
23
docs/modules/airgap/samples/av-report.sample.json
Normal file
23
docs/modules/airgap/samples/av-report.sample.json
Normal file
@@ -0,0 +1,23 @@
|
||||
{
|
||||
"$schema": "../av-report.schema.json",
|
||||
"scanner": "clamav",
|
||||
"scannerVersion": "1.4.1",
|
||||
"startedAt": "2025-12-02T00:02:00Z",
|
||||
"completedAt": "2025-12-02T00:04:30Z",
|
||||
"status": "clean",
|
||||
"artifacts": [
|
||||
{
|
||||
"path": "chunks/advisories-0001.tzst",
|
||||
"sha256": "1234123412341234123412341234123412341234123412341234123412341234",
|
||||
"result": "clean",
|
||||
"yaraRules": []
|
||||
},
|
||||
{
|
||||
"path": "chunks/vex-0001.tzst",
|
||||
"sha256": "4321432143214321432143214321432143214321432143214321432143214321",
|
||||
"result": "clean",
|
||||
"yaraRules": []
|
||||
}
|
||||
],
|
||||
"errors": []
|
||||
}
|
||||
@@ -0,0 +1,3 @@
|
||||
a:1
|
||||
b:2
|
||||
c:3
|
||||
44
docs/modules/airgap/samples/offline-kit-manifest.sample.json
Normal file
44
docs/modules/airgap/samples/offline-kit-manifest.sample.json
Normal file
@@ -0,0 +1,44 @@
|
||||
{
|
||||
"$schema": "../manifest.schema.json",
|
||||
"schemaVersion": "1.0.0",
|
||||
"bundleId": "offline-kit:concelier:2025-12-02",
|
||||
"tenant": "default",
|
||||
"environment": "prod",
|
||||
"createdAt": "2025-12-02T00:00:00Z",
|
||||
"stalenessWindowHours": 168,
|
||||
"replayPolicy": "policy-freeze",
|
||||
"tools": [
|
||||
{ "name": "concelier-exporter", "version": "2.5.0", "sha256": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcd" },
|
||||
{ "name": "trivy-db", "version": "0.48.0", "sha256": "89abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234567" }
|
||||
],
|
||||
"feeds": [
|
||||
{ "name": "redhat-csaf", "snapshot": "2025-12-01", "sha256": "fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210", "stalenessHours": 72 },
|
||||
{ "name": "osv", "snapshot": "2025-12-01T23:00:00Z", "sha256": "0f0e0d0c0b0a09080706050403020100ffeeddccbbaa99887766554433221100", "stalenessHours": 24 }
|
||||
],
|
||||
"policies": [
|
||||
{ "name": "policy-bundle", "version": "1.4.2", "sha256": "aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55" }
|
||||
],
|
||||
"chunks": [
|
||||
{ "path": "chunks/advisories-0001.tzst", "sha256": "1234123412341234123412341234123412341234123412341234123412341234", "size": 1048576, "kind": "advisory" },
|
||||
{ "path": "chunks/vex-0001.tzst", "sha256": "4321432143214321432143214321432143214321432143214321432143214321", "size": 524288, "kind": "vex" }
|
||||
],
|
||||
"avScan": {
|
||||
"status": "clean",
|
||||
"scanner": "clamav 1.4.1",
|
||||
"scanAt": "2025-12-02T00:05:00Z",
|
||||
"reportPath": "reports/av-scan.txt",
|
||||
"reportSha256": "bb66bb66bb66bb66bb66bb66bb66bb66bb66bb66bb66bb66bb66bb66bb66bb66"
|
||||
},
|
||||
"hashes": {
|
||||
"manifestSha256": "29d58b9fdc5c4e65b26c03f3bd9f442ff0c7f8514b8a9225f8b6417ffabc0101",
|
||||
"bundleSha256": "d3c3f6c75c6a3f0906bcee457cc77a2d6d7c0f9d1a1d7da78c0d2ab8e0dba111"
|
||||
},
|
||||
"signatures": [
|
||||
{
|
||||
"type": "dsse",
|
||||
"keyId": "airgap-manifest-dev",
|
||||
"signature": "MEQCIGVyb3JrZXktc2lnbmF0dXJlLXNob3J0",
|
||||
"envelopeDigest": "sha256:cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77"
|
||||
}
|
||||
]
|
||||
}
|
||||
21
docs/modules/airgap/samples/receipt.sample.json
Normal file
21
docs/modules/airgap/samples/receipt.sample.json
Normal file
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"$schema": "../receipt.schema.json",
|
||||
"schemaVersion": "1.0.0",
|
||||
"receiptId": "receipt:ingress:2025-12-02T00-00Z",
|
||||
"direction": "ingress",
|
||||
"bundleId": "offline-kit:concelier:2025-12-02",
|
||||
"tenant": "default",
|
||||
"operator": { "id": "op-123", "role": "airgap-controller" },
|
||||
"occurredAt": "2025-12-02T00:06:00Z",
|
||||
"decision": "allow",
|
||||
"hashes": {
|
||||
"bundleSha256": "d3c3f6c75c6a3f0906bcee457cc77a2d6d7c0f9d1a1d7da78c0d2ab8e0dba111",
|
||||
"manifestSha256": "29d58b9fdc5c4e65b26c03f3bd9f442ff0c7f8514b8a9225f8b6417ffabc0101"
|
||||
},
|
||||
"dsse": {
|
||||
"envelopeDigest": "sha256:cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77cc77",
|
||||
"signer": "airgap-receipts-dev",
|
||||
"rekorUuid": "11111111-2222-3333-4444-555555555555"
|
||||
},
|
||||
"notes": "Ingress verified, AV clean, staleness within window."
|
||||
}
|
||||
36
docs/modules/airgap/schemas/av-report.schema.json
Normal file
36
docs/modules/airgap/schemas/av-report.schema.json
Normal file
@@ -0,0 +1,36 @@
|
||||
{
|
||||
"$schema": "https://json-schema.org/draft/2020-12/schema",
|
||||
"$id": "https://stellaops.local/airgap/av-report.schema.json",
|
||||
"title": "Offline AV/YARA Scan Report",
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["scanner", "scannerVersion", "startedAt", "completedAt", "status", "artifacts"],
|
||||
"properties": {
|
||||
"scanner": { "type": "string" },
|
||||
"scannerVersion": { "type": "string" },
|
||||
"startedAt": { "type": "string", "format": "date-time" },
|
||||
"completedAt": { "type": "string", "format": "date-time" },
|
||||
"status": { "type": "string", "enum": ["clean", "findings", "error"] },
|
||||
"signature": { "type": "string", "description": "Optional detached signature over this report (base64)" },
|
||||
"artifacts": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["path", "sha256", "result"],
|
||||
"properties": {
|
||||
"path": { "type": "string" },
|
||||
"sha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" },
|
||||
"result": { "type": "string", "enum": ["clean", "suspicious", "malicious", "error"] },
|
||||
"yaraRules": { "type": "array", "items": { "type": "string" }, "uniqueItems": true },
|
||||
"notes": { "type": "string" }
|
||||
}
|
||||
},
|
||||
"uniqueItems": true
|
||||
},
|
||||
"errors": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" }
|
||||
}
|
||||
}
|
||||
}
|
||||
123
docs/modules/airgap/schemas/manifest.schema.json
Normal file
123
docs/modules/airgap/schemas/manifest.schema.json
Normal file
@@ -0,0 +1,123 @@
|
||||
{
|
||||
"$schema": "https://json-schema.org/draft/2020-12/schema",
|
||||
"$id": "https://stellaops.local/airgap/manifest.schema.json",
|
||||
"title": "Offline Kit Manifest",
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": [
|
||||
"schemaVersion",
|
||||
"bundleId",
|
||||
"tenant",
|
||||
"environment",
|
||||
"createdAt",
|
||||
"stalenessWindowHours",
|
||||
"replayPolicy",
|
||||
"tools",
|
||||
"feeds",
|
||||
"policies",
|
||||
"chunks",
|
||||
"hashes"
|
||||
],
|
||||
"properties": {
|
||||
"schemaVersion": { "type": "string", "pattern": "^1\\.\\d+\\.\\d+$" },
|
||||
"bundleId": { "type": "string", "pattern": "^offline-kit:[A-Za-z0-9._:-]+$" },
|
||||
"tenant": { "type": "string", "minLength": 1 },
|
||||
"environment": { "type": "string", "enum": ["prod", "stage", "dev", "test"] },
|
||||
"createdAt": { "type": "string", "format": "date-time" },
|
||||
"stalenessWindowHours": { "type": "integer", "minimum": 0 },
|
||||
"replayPolicy": { "type": "string", "enum": ["hash-only", "full-recompute", "policy-freeze"] },
|
||||
"tools": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["name", "version", "sha256"],
|
||||
"properties": {
|
||||
"name": { "type": "string" },
|
||||
"version": { "type": "string" },
|
||||
"sha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" }
|
||||
}
|
||||
},
|
||||
"uniqueItems": true
|
||||
},
|
||||
"feeds": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["name", "snapshot", "sha256"],
|
||||
"properties": {
|
||||
"name": { "type": "string" },
|
||||
"snapshot": { "type": "string" },
|
||||
"sha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" },
|
||||
"stalenessHours": { "type": "integer", "minimum": 0 }
|
||||
}
|
||||
},
|
||||
"uniqueItems": true
|
||||
},
|
||||
"policies": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["name", "version", "sha256"],
|
||||
"properties": {
|
||||
"name": { "type": "string" },
|
||||
"version": { "type": "string" },
|
||||
"sha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" }
|
||||
}
|
||||
},
|
||||
"uniqueItems": true
|
||||
},
|
||||
"chunks": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["path", "sha256", "size"],
|
||||
"properties": {
|
||||
"path": { "type": "string" },
|
||||
"sha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" },
|
||||
"size": { "type": "integer", "minimum": 0 },
|
||||
"kind": { "type": "string", "enum": ["advisory", "sbom", "vex", "policy", "graph", "tooling", "other"] }
|
||||
}
|
||||
},
|
||||
"uniqueItems": true
|
||||
},
|
||||
"avScan": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["status"],
|
||||
"properties": {
|
||||
"status": { "type": "string", "enum": ["not_run", "clean", "findings"] },
|
||||
"scanner": { "type": "string" },
|
||||
"scanAt": { "type": "string", "format": "date-time" },
|
||||
"reportPath": { "type": "string" },
|
||||
"reportSha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" }
|
||||
}
|
||||
},
|
||||
"hashes": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["manifestSha256", "bundleSha256"],
|
||||
"properties": {
|
||||
"manifestSha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" },
|
||||
"bundleSha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" }
|
||||
}
|
||||
},
|
||||
"signatures": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["type", "keyId", "signature"],
|
||||
"properties": {
|
||||
"type": { "type": "string", "enum": ["dsse", "jws-detached"] },
|
||||
"keyId": { "type": "string" },
|
||||
"signature": { "type": "string" },
|
||||
"envelopeDigest": { "type": "string", "pattern": "^sha256:[A-Fa-f0-9]{64}$" }
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
55
docs/modules/airgap/schemas/receipt.schema.json
Normal file
55
docs/modules/airgap/schemas/receipt.schema.json
Normal file
@@ -0,0 +1,55 @@
|
||||
{
|
||||
"$schema": "https://json-schema.org/draft/2020-12/schema",
|
||||
"$id": "https://stellaops.local/airgap/receipt.schema.json",
|
||||
"title": "AirGap Ingress/Egress Receipt",
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": [
|
||||
"schemaVersion",
|
||||
"receiptId",
|
||||
"direction",
|
||||
"bundleId",
|
||||
"tenant",
|
||||
"operator",
|
||||
"occurredAt",
|
||||
"decision",
|
||||
"hashes"
|
||||
],
|
||||
"properties": {
|
||||
"schemaVersion": { "type": "string", "pattern": "^1\\.\\d+\\.\\d+$" },
|
||||
"receiptId": { "type": "string", "pattern": "^receipt:[A-Za-z0-9._:-]+$" },
|
||||
"direction": { "type": "string", "enum": ["ingress", "egress"] },
|
||||
"bundleId": { "type": "string", "pattern": "^offline-kit:[A-Za-z0-9._:-]+$" },
|
||||
"tenant": { "type": "string", "minLength": 1 },
|
||||
"operator": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["id", "role"],
|
||||
"properties": {
|
||||
"id": { "type": "string" },
|
||||
"role": { "type": "string" }
|
||||
}
|
||||
},
|
||||
"occurredAt": { "type": "string", "format": "date-time" },
|
||||
"decision": { "type": "string", "enum": ["allow", "deny", "quarantine"] },
|
||||
"hashes": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["bundleSha256", "manifestSha256"],
|
||||
"properties": {
|
||||
"bundleSha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" },
|
||||
"manifestSha256": { "type": "string", "pattern": "^[A-Fa-f0-9]{64}$" }
|
||||
}
|
||||
},
|
||||
"dsse": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"envelopeDigest": { "type": "string", "pattern": "^sha256:[A-Fa-f0-9]{64}$" },
|
||||
"signer": { "type": "string" },
|
||||
"rekorUuid": { "type": "string" }
|
||||
}
|
||||
},
|
||||
"notes": { "type": "string" }
|
||||
}
|
||||
}
|
||||
43
docs/modules/airgap/schemas/time-anchor-schema.json
Normal file
43
docs/modules/airgap/schemas/time-anchor-schema.json
Normal file
@@ -0,0 +1,43 @@
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "StellaOps Time Anchor",
  "type": "object",
  "required": ["anchorTime", "source", "format", "tokenDigest"],
  "properties": {
    "anchorTime": {
      "description": "UTC timestamp asserted by the time token (RFC3339/ISO-8601)",
      "type": "string",
      "format": "date-time"
    },
    "source": {
      "description": "Logical source of the time token (e.g., roughtime, rfc3161)",
      "type": "string",
      "enum": ["roughtime", "rfc3161"]
    },
    "format": {
      "description": "Payload format identifier (e.g., draft-roughtime-v1, rfc3161)",
      "type": "string"
    },
    "tokenDigest": {
      "description": "SHA-256 of the raw time token bytes, hex-encoded",
      "type": "string",
      "pattern": "^[0-9a-fA-F]{64}$"
    },
    "signatureFingerprint": {
      "description": "Fingerprint of the signer key (hex); optional until trust roots finalized",
      "type": "string",
      "pattern": "^[0-9a-fA-F]{16,128}$"
    },
    "verification": {
      "description": "Result of local verification (if performed)",
      "type": "object",
      "properties": {
        "status": {"type": "string", "enum": ["unknown", "passed", "failed"]},
        "reason": {"type": "string"}
      },
      "required": ["status"],
      "additionalProperties": false
    }
  },
  "additionalProperties": false
}
20
docs/modules/airgap/schemas/time-anchor-trust-roots.json
Normal file
20
docs/modules/airgap/schemas/time-anchor-trust-roots.json
Normal file
@@ -0,0 +1,20 @@
|
||||
{
|
||||
"version": 1,
|
||||
"roughtime": [
|
||||
{
|
||||
"name": "stellaops-test-roughtime",
|
||||
"publicKeyBase64": "dGVzdC1yb3VnaHRpbWUtcHViLWtleQ==",
|
||||
"validFrom": "2025-01-01T00:00:00Z",
|
||||
"validTo": "2026-01-01T00:00:00Z"
|
||||
}
|
||||
],
|
||||
"rfc3161": [
|
||||
{
|
||||
"name": "stellaops-test-tsa",
|
||||
"certificatePem": "-----BEGIN CERTIFICATE-----\nMIIBszCCAVmgAwIBAgIUYPXPLACEHOLDERKEYm7ri5bzsYqvSwwDQYJKoZIhvcNAQELBQAwETEPMA0GA1UEAwwGU3RlbGxhMB4XDTI1MDEwMTAwMDAwMFoXDTI2MDEwMTAwMDAwMFowETEPMA0GA1UEAwwGU3RlbGxhMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEPLACEHOLDERuQjVekA7gQtaQ6UiI4bYbw2bG8xwDthQqLehCDXXWix9TAAEbnII1xF4Zk12Y0wUjiJB82H4x6HTDY0Hes74AUFyi0A39p0Y0ffSZlnzCwzmxrSYzYHbpbb8WZKGa+jUzBRMB0GA1UdDgQWBBSPLACEHOLDERRoKdqaLKv8Bf+FfoUzAfBgNVHSMEGDAWgBSPLACEHOLDERRoKdqaLKv8Bf+FfoUzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQCPLACEHOLDER\n-----END CERTIFICATE-----",
|
||||
"validFrom": "2025-01-01T00:00:00Z",
|
||||
"validTo": "2026-01-01T00:00:00Z",
|
||||
"fingerprintSha256": "0000000000000000000000000000000000000000000000000000000000000000"
|
||||
}
|
||||
]
|
||||
}
|
||||
9
docs/modules/airgap/schemas/time-config-sample.json
Normal file
9
docs/modules/airgap/schemas/time-config-sample.json
Normal file
@@ -0,0 +1,9 @@
{
  "AirGap": {
    "TenantId": "tenant-default",
    "Staleness": {
      "WarningSeconds": 3600,
      "BreachSeconds": 7200
    }
  }
}
13
docs/modules/aoc/guides/aoc-guardrails.md
Normal file
13
docs/modules/aoc/guides/aoc-guardrails.md
Normal file
@@ -0,0 +1,13 @@
# Aggregation-Only Contract (AOC) Guardrails

The Aggregation-Only Contract keeps ingestion services deterministic and policy-neutral. Use these checkpoints whenever you add or modify backlog items:

1. **Ingestion writes raw facts only.** Concelier and Excititor append immutable observations/linksets. No precedence, severity, suppression, or "safe fix" hints may be computed at ingest time.
2. **Derived semantics live elsewhere.** Policy Engine overlays, Vuln Explorer composition, and downstream reporting layers attach severity, precedence, policy verdicts, and UI hints.
3. **Provenance is mandatory.** Every ingestion write must include original source metadata, digests, and signing/provenance evidence when available. Reject writes lacking provenance.
4. **Deterministic outputs.** Given the same inputs, ingestion must produce identical documents, hashes, and event payloads across reruns.
5. **Guardrails everywhere.** Roslyn analyzers, schema validators, and CI smoke tests should fail builds that attempt forbidden writes.
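
As a concrete illustration of rule 4, the sketch below hashes a canonical serialization of the raw document so that reruns over identical input always land on the same digest. It is a minimal example assuming `System.Text.Json` payloads; the real services should rely on their shared canonicalizer rather than this hypothetical helper.

```csharp
// A minimal sketch of rule 4 (deterministic outputs), assuming raw documents are handled
// as System.Text.Json payloads. The helper name is illustrative, not the shared
// canonicalizer the ingestion services actually use.
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text.Json;

public static class DeterministicHash
{
    public static string Compute(JsonElement document)
    {
        using var stream = new MemoryStream();
        using (var writer = new Utf8JsonWriter(stream)) // compact output, no indentation
        {
            WriteCanonical(document, writer);
        }

        // Identical inputs must always produce this identical lowercase hex digest across reruns.
        return Convert.ToHexString(SHA256.HashData(stream.ToArray())).ToLowerInvariant();
    }

    private static void WriteCanonical(JsonElement element, Utf8JsonWriter writer)
    {
        switch (element.ValueKind)
        {
            case JsonValueKind.Object:
                writer.WriteStartObject();
                // Order keys ordinally so upstream key order cannot change the hash.
                foreach (var property in element.EnumerateObject().OrderBy(p => p.Name, StringComparer.Ordinal))
                {
                    writer.WritePropertyName(property.Name);
                    WriteCanonical(property.Value, writer);
                }
                writer.WriteEndObject();
                break;
            case JsonValueKind.Array:
                writer.WriteStartArray();
                foreach (var item in element.EnumerateArray())
                {
                    WriteCanonical(item, writer);
                }
                writer.WriteEndArray();
                break;
            default:
                element.WriteTo(writer); // strings, numbers, booleans, null pass through unchanged
                break;
        }
    }
}
```

Sorting object keys before hashing is what decouples the digest from upstream key order; number formatting and other normalization details are left to the production canonicalizer.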

For detailed roles and ownership boundaries, see `AGENTS.md` at the repo root and the module-specific dossiers under `docs/modules/<module>/architecture.md`.

Need the full contract? Read the [Aggregation-Only Contract reference](aggregation-only-contract.md) for schemas, error codes, and migration guidance.
130
docs/modules/aoc/guides/guard-library.md
Normal file
130
docs/modules/aoc/guides/guard-library.md
Normal file
@@ -0,0 +1,130 @@
|
||||
# Aggregation-Only Guard Library Reference
|
||||
|
||||
> **Packages:** `StellaOps.Aoc`, `StellaOps.Aoc.AspNetCore`
|
||||
> **Related tasks:** `WEB-AOC-19-001`, `WEB-AOC-19-003`, `DEVOPS-AOC-19-001`
|
||||
> **Audience:** Concelier/Excititor service owners, Platform guild, QA
|
||||
|
||||
The Aggregation-Only Contract (AOC) guard library enforces the canonical ingestion
|
||||
rules described in `docs/modules/concelier/guides/aggregation-only-contract.md`. Service owners
|
||||
should use the guard whenever raw advisory or VEX payloads are accepted so that
|
||||
forbidden fields are rejected long before they reach PostgreSQL.
|
||||
|
||||
## Packages
|
||||
|
||||
### `StellaOps.Aoc`
|
||||
- `IAocGuard` / `AocWriteGuard` — validate JSON payloads and emit `AocGuardResult`.
|
||||
- `AocGuardOptions` — toggles for signature enforcement, tenant requirements, and required top-level fields.
|
||||
- `AocViolation` / `AocViolationCode` — structured violations surfaced to callers.
|
||||
- `AocError` — canonical error DTO (`code`, `message`, `violations[]`) re-used by HTTP helpers, CLI tooling, and telemetry.
|
||||
- `ServiceCollectionExtensions.AddAocGuard()` — DI helper that registers the singleton guard.
|
||||
- `AocGuardExtensions.ValidateOrThrow()` — throws `AocGuardException` when validation fails.
|
||||
|
||||
### `StellaOps.Aoc.AspNetCore`
|
||||
- `AocGuardEndpointFilter<TRequest>` — Minimal API endpoint filter that evaluates request payloads through the guard before invoking handlers.
|
||||
- `AocHttpResults.Problem()` — Produces an RFC 7807 payload that includes violation codes, suitable for API responses.
|
||||
|
||||
## Minimal API integration
|
||||
|
||||
```csharp
|
||||
using StellaOps.Aoc;
|
||||
using StellaOps.Aoc.AspNetCore.Routing;
|
||||
using StellaOps.Aoc.AspNetCore.Results;
using Microsoft.AspNetCore.Diagnostics; // for IExceptionHandlerFeature in the exception handler below
|
||||
|
||||
var builder = WebApplication.CreateBuilder(args);
|
||||
|
||||
builder.Services.AddAocGuard();
|
||||
builder.Services.Configure<AocGuardOptions>(options =>
|
||||
{
|
||||
options.RequireSignatureMetadata = true;
|
||||
options.RequireTenant = true;
|
||||
});
|
||||
|
||||
var app = builder.Build();
|
||||
|
||||
app.MapPost("/ingest", async (IngestionRequest request, IAocGuard guard, ILogger<Program> logger) =>
|
||||
{
|
||||
// additional application logic
|
||||
return Results.Accepted();
|
||||
})
|
||||
.AddEndpointFilter(new AocGuardEndpointFilter<IngestionRequest>(
|
||||
request => new object?[] { request.Payload },
|
||||
serializerOptions: null,
|
||||
guardOptions: null))
|
||||
.ProducesProblem(StatusCodes.Status400BadRequest)
|
||||
.WithTags("AOC");
|
||||
|
||||
app.UseExceptionHandler(errorApp =>
|
||||
{
|
||||
errorApp.Run(async context =>
|
||||
{
|
||||
var exceptionHandler = context.Features.Get<IExceptionHandlerFeature>();
|
||||
if (exceptionHandler?.Error is AocGuardException guardException)
|
||||
{
|
||||
var result = AocHttpResults.Problem(context, guardException);
|
||||
await result.ExecuteAsync(context);
|
||||
return;
|
||||
}
|
||||
|
||||
context.Response.StatusCode = StatusCodes.Status500InternalServerError;
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
Key points:
|
||||
- Register the guard singleton before wiring repositories or worker services.
|
||||
- Use `AocGuardEndpointFilter<TRequest>` to protect Minimal API endpoints. The `payloadSelector`
|
||||
can yield multiple payloads (e.g. batch ingestion) and the filter will validate each one.
|
||||
- Prefer the `RequireAocGuard` extension when wiring endpoints; it wraps `AddEndpointFilter`
|
||||
and handles single-payload scenarios without additional boilerplate.
|
||||
- Wrap guard exceptions with `AocHttpResults.Problem` to ensure clients receive machine-readable codes (`ERR_AOC_00x`). The helper now emits the serialized `AocError` under the `error` extension for consumers that want a typed payload.
|
||||
|
||||
### Allowed top-level fields
|
||||
|
||||
`AocWriteGuard` enforces the contract’s top-level allowlist: `_id`, `tenant`, `source`, `upstream`,
|
||||
`content`, `identifiers`, `linkset`, `supersedes`, `createdAt`/`created_at`, `ingestedAt`/`ingested_at`, and `attributes`.
|
||||
Unknown fields produce `ERR_AOC_007` violations. When staging schema changes, extend the allowlist through
|
||||
`AocGuardOptions.AllowedTopLevelFields`:
|
||||
|
||||
```csharp
|
||||
builder.Services.Configure<AocGuardOptions>(options =>
|
||||
{
|
||||
options.AllowedTopLevelFields =
|
||||
options.AllowedTopLevelFields.Add("experimental_field");
|
||||
});
|
||||
```
|
||||
|
||||
## Worker / repository usage
|
||||
|
||||
Inject `IAocGuard` (or a module-specific wrapper such as `IVexRawWriteGuard`) anywhere documents
|
||||
are persisted. Call `ValidateOrThrow` before writes to guarantee fail-fast behaviour, for example:
|
||||
|
||||
```csharp
|
||||
public sealed class AdvisoryRawRepository
|
||||
{
|
||||
private readonly IAocGuard _guard;
|
||||
|
||||
public AdvisoryRawRepository(IAocGuard guard) => _guard = guard;
|
||||
|
||||
public Task WriteAsync(JsonDocument document, CancellationToken cancellationToken)
|
||||
{
|
||||
_guard.ValidateOrThrow(document.RootElement);
|
||||
// proceed with storage logic
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Configuration tips
|
||||
|
||||
- Adjust `AocGuardOptions.RequiredTopLevelFields` when staging new schema changes. All configured names are case-insensitive.
|
||||
- Extend `AllowedTopLevelFields` for temporary schema experiments so that guard runs stay clean while the contract is updated.
|
||||
- Set `RequireSignatureMetadata = false` for legacy feeds that do not provide signature envelopes yet; track the waiver in the module backlog.
|
||||
- Use module-specific wrappers (`AddConcelierAocGuards`, `AddExcititorAocGuards`) to combine guard registration with domain exceptions and metrics.
|
||||
|
||||
## Testing guidance
|
||||
|
||||
- Unit-test guard behaviour with fixture payloads (see `src/Aoc/__Tests`).
|
||||
- Service-level tests should assert that ingestion endpoints return `ERR_AOC_*` codes via `AocHttpResults`.
|
||||
- CI must run `stella aoc verify` once CLI support lands (`DEVOPS-AOC-19-002`).
|
||||
- Roslyn analyzer enforcement (`WEB-AOC-19-003`) will ensure the guard is registered; keep services wired through the shared extensions to prepare for that gate.
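
As a starting point for the first bullet, here is a minimal xUnit sketch that uses only the surface described in this guide (`AddAocGuard`, `IAocGuard`, `ValidateOrThrow`, `AocGuardException`); the payload shape is an assumption, and the canonical fixtures live under `src/Aoc/__Tests`.

```csharp
// Illustrative xUnit sketch only. It relies on the package surface described in this
// guide (AddAocGuard, IAocGuard, ValidateOrThrow, AocGuardException); the payload shape
// is an assumption, and the canonical fixtures live under src/Aoc/__Tests.
using System.Text.Json;
using Microsoft.Extensions.DependencyInjection;
using StellaOps.Aoc;
using Xunit;

public class AocWriteGuardSketchTests
{
    [Fact]
    public void Rejects_payload_that_smuggles_a_severity_field()
    {
        var services = new ServiceCollection();
        services.AddAocGuard(); // the documented DI helper registers the singleton guard

        using var provider = services.BuildServiceProvider();
        var guard = provider.GetRequiredService<IAocGuard>();

        // "severity" is not on the top-level allowlist, so the write must be refused.
        using var payload = JsonDocument.Parse(
            "{\"tenant\":\"default\",\"source\":{\"vendor\":\"osv\"},\"severity\":\"high\"}");

        Assert.Throws<AocGuardException>(() => guard.ValidateOrThrow(payload.RootElement));
    }
}
```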
|
||||
|
||||
For questions or updates, coordinate with the BE‑Base Platform guild and reference `WEB-AOC-19-001`.
|
||||
308
docs/modules/attestor/cosign-interop.md
Normal file
308
docs/modules/attestor/cosign-interop.md
Normal file
@@ -0,0 +1,308 @@
|
||||
# Cosign Interoperability Guide
|
||||
|
||||
This document describes how to verify StellaOps attestations using [cosign](https://github.com/sigstore/cosign) and how to import cosign-created attestations into StellaOps.
|
||||
|
||||
## Overview
|
||||
|
||||
StellaOps attestations use the [DSSE (Dead Simple Signing Envelope)](https://github.com/secure-systems-lab/dsse) format and OCI Distribution Spec 1.1 referrers API for attachment, which is compatible with cosign's attestation workflow.
|
||||
|
||||
**Sprint Reference:** `SPRINT_20251228_002_BE_oci_attestation_attach` (T6)
|
||||
|
||||
## Verifying StellaOps Attestations with Cosign
|
||||
|
||||
### Basic Verification
|
||||
|
||||
```bash
|
||||
# Verify any attestation attached to an image
|
||||
cosign verify-attestation \
|
||||
--type custom \
|
||||
--certificate-identity-regexp '.*' \
|
||||
--certificate-oidc-issuer-regexp '.*' \
|
||||
registry.example.com/app:v1.0.0
|
||||
|
||||
# Verify a specific predicate type
|
||||
cosign verify-attestation \
|
||||
--type stellaops.io/predicates/scan-result@v1 \
|
||||
--certificate-identity-regexp '.*' \
|
||||
--certificate-oidc-issuer-regexp '.*' \
|
||||
registry.example.com/app:v1.0.0
|
||||
```
|
||||
|
||||
### Verification with Trust Roots
|
||||
|
||||
StellaOps supports both keyless (Sigstore Fulcio) and key-based signing:
|
||||
|
||||
#### Keyless Verification (Sigstore)
|
||||
```bash
|
||||
# Verify attestation signed with keyless mode
|
||||
cosign verify-attestation \
|
||||
--type stellaops.io/predicates/scan-result@v1 \
|
||||
--certificate-identity 'scanner@stellaops.io' \
|
||||
--certificate-oidc-issuer 'https://oauth2.sigstore.dev/auth' \
|
||||
registry.example.com/app:v1.0.0
|
||||
```
|
||||
|
||||
#### Key-Based Verification
|
||||
```bash
|
||||
# Verify attestation signed with a specific key
|
||||
cosign verify-attestation \
|
||||
--type stellaops.io/predicates/scan-result@v1 \
|
||||
--key /path/to/public-key.pem \
|
||||
registry.example.com/app:v1.0.0
|
||||
```
|
||||
|
||||
### Rekor Transparency Log Verification
|
||||
|
||||
When StellaOps attestations are recorded in Rekor, cosign automatically verifies the inclusion proof:
|
||||
|
||||
```bash
|
||||
# Verify with Rekor inclusion proof
|
||||
cosign verify-attestation \
|
||||
--type stellaops.io/predicates/scan-result@v1 \
|
||||
--certificate-identity-regexp '.*' \
|
||||
--certificate-oidc-issuer-regexp '.*' \
|
||||
--rekor-url https://rekor.sigstore.dev \
|
||||
registry.example.com/app:v1.0.0
|
||||
|
||||
# Skip Rekor verification (offline environments)
|
||||
cosign verify-attestation \
|
||||
--type stellaops.io/predicates/scan-result@v1 \
|
||||
--key /path/to/public-key.pem \
|
||||
--insecure-ignore-tlog \
|
||||
registry.example.com/app:v1.0.0
|
||||
```
|
||||
|
||||
## StellaOps Predicate Types
|
||||
|
||||
StellaOps uses the following predicate type URIs:
|
||||
|
||||
| Predicate Type | Description | cosign `--type` |
|
||||
|----------------|-------------|-----------------|
|
||||
| `stellaops.io/predicates/scan-result@v1` | Vulnerability scan results | `stellaops.io/predicates/scan-result@v1` |
|
||||
| `stellaops.io/predicates/sbom@v1` | Software Bill of Materials | `stellaops.io/predicates/sbom@v1` |
|
||||
| `stellaops.io/predicates/vex@v1` | Vulnerability Exploitability eXchange | `stellaops.io/predicates/vex@v1` |
|
||||
| `https://slsa.dev/provenance/v1` | SLSA Provenance | `slsaprovenance` |
|
||||
|
||||
### Predicate Type Aliases
|
||||
|
||||
For convenience, cosign supports type aliases:
|
||||
|
||||
```bash
|
||||
# These are equivalent for SLSA provenance
|
||||
cosign verify-attestation --type slsaprovenance ...
|
||||
cosign verify-attestation --type https://slsa.dev/provenance/v1 ...
|
||||
```
|
||||
|
||||
## Importing Cosign Attestations into StellaOps
|
||||
|
||||
StellaOps can consume attestations created by cosign:
|
||||
|
||||
### CLI Import
|
||||
|
||||
```bash
|
||||
# Fetch cosign attestation and import to StellaOps
|
||||
cosign download attestation registry.example.com/app:v1.0.0 > attestation.json
|
||||
|
||||
# Import into StellaOps
|
||||
stella attest import \
|
||||
--envelope attestation.json \
|
||||
--image registry.example.com/app:v1.0.0
|
||||
```
|
||||
|
||||
### API Import
|
||||
|
||||
```bash
|
||||
curl -X POST https://stellaops.example.com/api/v1/attestations/import \
|
||||
-H "Content-Type: application/json" \
|
||||
-d @attestation.json
|
||||
```
|
||||
|
||||
## Annotation Compatibility
|
||||
|
||||
StellaOps uses the following annotations on attestation manifests:
|
||||
|
||||
| Annotation Key | Description | Cosign Equivalent |
|
||||
|----------------|-------------|-------------------|
|
||||
| `org.opencontainers.image.created` | Creation timestamp | Standard OCI |
|
||||
| `dev.stellaops/predicate-type` | Predicate type URI | `dev.cosignproject.cosign/predicateType` |
|
||||
| `dev.stellaops/tenant` | StellaOps tenant ID | Custom |
|
||||
| `dev.stellaops/scan-id` | Associated scan ID | Custom |
|
||||
| `dev.sigstore.cosign/signature` | Signature placeholder | Standard Sigstore |
|
||||
|
||||
### Custom Annotations
|
||||
|
||||
You can add custom annotations when attaching attestations:
|
||||
|
||||
```bash
|
||||
# Stella CLI with custom annotations
|
||||
stella attest attach \
|
||||
--image registry.example.com/app:v1.0.0 \
|
||||
--attestation scan.json \
|
||||
--annotation "org.example/team=security" \
|
||||
--annotation "org.example/policy-version=2.0"
|
||||
```
|
||||
|
||||
## Media Types
|
||||
|
||||
StellaOps attestations use standard media types:
|
||||
|
||||
| Media Type | Usage |
|
||||
|------------|-------|
|
||||
| `application/vnd.dsse.envelope.v1+json` | DSSE envelope containing attestation |
|
||||
| `application/vnd.in-toto+json` | In-toto attestation payload |
|
||||
| `application/vnd.oci.image.manifest.v1+json` | OCI manifest for referrers |
|
||||
|
||||
## Trust Root Configuration
|
||||
|
||||
### Sigstore Trust Roots
|
||||
|
||||
For keyless verification, configure the Sigstore trust bundle:
|
||||
|
||||
```yaml
|
||||
# stellaops.yaml
|
||||
attestation:
|
||||
trustRoots:
|
||||
sigstore:
|
||||
enabled: true
|
||||
fulcioUrl: https://fulcio.sigstore.dev
|
||||
rekorUrl: https://rekor.sigstore.dev
|
||||
ctlogUrl: https://ctfe.sigstore.dev
|
||||
```
|
||||
|
||||
### Custom Trust Roots
|
||||
|
||||
For enterprise deployments with private Sigstore instances:
|
||||
|
||||
```yaml
|
||||
# stellaops.yaml
|
||||
attestation:
|
||||
trustRoots:
|
||||
sigstore:
|
||||
enabled: true
|
||||
fulcioUrl: https://fulcio.internal.example.com
|
||||
rekorUrl: https://rekor.internal.example.com
|
||||
trustedRootPem: /etc/stellaops/sigstore-root.pem
|
||||
```
|
||||
|
||||
### Air-Gapped Environments
|
||||
|
||||
For offline verification:
|
||||
|
||||
```yaml
|
||||
# stellaops.yaml
|
||||
attestation:
|
||||
trustRoots:
|
||||
offline: true
|
||||
bundlePath: /etc/stellaops/trust-bundle.json
|
||||
```
|
||||
|
||||
## Policy Integration
|
||||
|
||||
Attestation verification can be integrated into admission control policies:
|
||||
|
||||
### Gatekeeper/OPA Policy Example
|
||||
|
||||
```rego
|
||||
package stellaops.attestation
|
||||
|
||||
deny[msg] {
|
||||
input.kind == "Pod"
|
||||
container := input.spec.containers[_]
|
||||
image := container.image
|
||||
|
||||
# Require scan attestation
|
||||
not has_valid_attestation(image, "stellaops.io/predicates/scan-result@v1")
|
||||
|
||||
msg := sprintf("Image %v missing valid scan attestation", [image])
|
||||
}
|
||||
|
||||
has_valid_attestation(image, predicate_type) {
|
||||
attestation := stellaops.get_attestation(image, predicate_type)
|
||||
stellaops.verify_attestation(attestation)
|
||||
}
|
||||
```
|
||||
|
||||
### Kyverno Policy Example
|
||||
|
||||
```yaml
|
||||
apiVersion: kyverno.io/v1
|
||||
kind: ClusterPolicy
|
||||
metadata:
|
||||
name: require-stellaops-attestation
|
||||
spec:
|
||||
validationFailureAction: Enforce
|
||||
rules:
|
||||
- name: check-scan-attestation
|
||||
match:
|
||||
resources:
|
||||
kinds:
|
||||
- Pod
|
||||
verifyImages:
|
||||
- imageReferences:
|
||||
- "*"
|
||||
attestations:
|
||||
- predicateType: stellaops.io/predicates/scan-result@v1
|
||||
attestors:
|
||||
- entries:
|
||||
- keyless:
|
||||
issuer: https://oauth2.sigstore.dev/auth
|
||||
subject: scanner@stellaops.io
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
#### No Attestations Found
|
||||
|
||||
```bash
|
||||
# List all attestations attached to an image
|
||||
cosign tree registry.example.com/app:v1.0.0
|
||||
|
||||
# Or use stella CLI
|
||||
stella attest oci-list --image registry.example.com/app:v1.0.0
|
||||
```
|
||||
|
||||
#### Signature Verification Failed
|
||||
|
||||
Check that you're using the correct verification key or identity:
|
||||
|
||||
```bash
|
||||
# Inspect the attestation to see signer identity
|
||||
cosign verify-attestation \
|
||||
--type stellaops.io/predicates/scan-result@v1 \
|
||||
--certificate-identity-regexp '.*' \
|
||||
--certificate-oidc-issuer-regexp '.*' \
|
||||
--output text \
|
||||
registry.example.com/app:v1.0.0 | jq '.optional.Issuer, .optional.Subject'
|
||||
```
|
||||
|
||||
#### Rekor Entry Not Found
|
||||
|
||||
If the attestation was created without Rekor submission:
|
||||
|
||||
```bash
|
||||
cosign verify-attestation \
|
||||
--insecure-ignore-tlog \
|
||||
--key /path/to/public-key.pem \
|
||||
registry.example.com/app:v1.0.0
|
||||
```
|
||||
|
||||
### Debug Mode
|
||||
|
||||
Enable verbose output for troubleshooting:
|
||||
|
||||
```bash
|
||||
COSIGN_EXPERIMENTAL=1 cosign verify-attestation \
|
||||
--verbose \
|
||||
--type stellaops.io/predicates/scan-result@v1 \
|
||||
registry.example.com/app:v1.0.0
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- [Cosign Documentation](https://docs.sigstore.dev/cosign/overview/)
|
||||
- [DSSE Specification](https://github.com/secure-systems-lab/dsse)
|
||||
- [In-toto Attestation Framework](https://in-toto.io/)
|
||||
- [OCI Distribution Spec 1.1 Referrers](https://github.com/opencontainers/distribution-spec/blob/main/spec.md#referrers)
|
||||
- [StellaOps Attestor Architecture](./architecture.md)
|
||||
217
docs/modules/attestor/guides/README.md
Normal file
217
docs/modules/attestor/guides/README.md
Normal file
@@ -0,0 +1,217 @@
|
||||
# SBOM Interoperability Testing
|
||||
|
||||
## Overview
|
||||
|
||||
StellaOps SBOM interoperability tests ensure compatibility with third-party security tools in the ecosystem. The tests validate that StellaOps-generated SBOMs can be consumed by popular tools like Grype, and that findings parity remains above 95%.
|
||||
|
||||
## Test Coverage
|
||||
|
||||
### SBOM Formats
|
||||
|
||||
| Format | Version | Status | Parity Target |
|
||||
|--------|---------|--------|---------------|
|
||||
| CycloneDX | 1.6 | ✅ Supported | 95%+ |
|
||||
| SPDX | 3.0.1 | ✅ Supported | 95%+ |
|
||||
|
||||
### Third-Party Tools
|
||||
|
||||
| Tool | Purpose | Version | Status |
|
||||
|------|---------|---------|--------|
|
||||
| Syft | SBOM Generation | Latest | ✅ Compatible |
|
||||
| Grype | Vulnerability Scanning | Latest | ✅ Compatible |
|
||||
| cosign | Attestation | Latest | ✅ Compatible |
|
||||
|
||||
## Parity Expectations
|
||||
|
||||
### What is Parity?
|
||||
|
||||
Parity measures how closely StellaOps vulnerability findings match those from third-party tools like Grype when scanning the same SBOM.
|
||||
|
||||
**Formula:**
|
||||
```
|
||||
Parity % = (Matching Findings / Total Unique Findings) × 100
|
||||
```
|
||||
|
||||
**Target:** ≥95% parity for both CycloneDX and SPDX formats
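
For reference, the percentage can be computed directly from two findings sets keyed by vulnerability ID and package URL. This is a minimal sketch, not the `FindingsParityAnalyzer` implementation used by the test suite.

```csharp
// Illustrative only: computes the parity percentage from two findings sets keyed by
// (vulnerability ID, package URL). This is not the FindingsParityAnalyzer API itself.
using System.Collections.Generic;
using System.Linq;

public static class ParityCalculator
{
    public static double Compute(
        IEnumerable<(string VulnId, string Purl)> stellaOpsFindings,
        IEnumerable<(string VulnId, string Purl)> grypeFindings)
    {
        var left = stellaOpsFindings.ToHashSet();
        var right = grypeFindings.ToHashSet();

        var matching = left.Intersect(right).Count();
        var totalUnique = left.Union(right).Count();

        // Parity % = (Matching Findings / Total Unique Findings) × 100
        return totalUnique == 0 ? 100.0 : 100.0 * matching / totalUnique;
    }
}
```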
|
||||
|
||||
### Known Differences
|
||||
|
||||
The following differences are **acceptable** and expected:
|
||||
|
||||
#### 1. VEX Application
|
||||
- **Difference:** StellaOps applies VEX documents, Grype may not
|
||||
- **Impact:** StellaOps may show fewer vulnerabilities
|
||||
- **Acceptable:** Yes - this is a feature, not a bug
|
||||
|
||||
#### 2. Feed Coverage
|
||||
- **Difference:** Tool-specific vulnerability databases
|
||||
- **Examples:**
|
||||
- StellaOps may have distro-specific feeds Grype lacks
|
||||
- Grype may have GitHub Advisory feeds StellaOps doesn't prioritize
|
||||
- **Acceptable:** Within 5% tolerance
|
||||
|
||||
#### 3. Version Matching Semantics
|
||||
- **Difference:** Interpretation of version ranges
|
||||
- **Examples:**
|
||||
- SemVer vs non-SemVer handling
|
||||
- Epoch handling in RPM/Debian packages
|
||||
- **Acceptable:** When using distro-native comparators
|
||||
|
||||
#### 4. Package Identification (PURL)
|
||||
- **Difference:** PURL generation strategies
|
||||
- **Examples:**
|
||||
- `pkg:npm/package` vs `pkg:npm/package@version`
|
||||
- Namespace handling
|
||||
- **Acceptable:** When functionally equivalent
|
||||
|
||||
## Running Interop Tests
|
||||
|
||||
### Prerequisites
|
||||
|
||||
Install required tools:
|
||||
|
||||
```bash
|
||||
# Install Syft
|
||||
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
|
||||
|
||||
# Install Grype
|
||||
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
|
||||
|
||||
# Install cosign
|
||||
curl -sSfL https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64 -o /usr/local/bin/cosign
|
||||
chmod +x /usr/local/bin/cosign
|
||||
```
|
||||
|
||||
### Local Execution
|
||||
|
||||
```bash
|
||||
# Run all interop tests
|
||||
dotnet test tests/interop/StellaOps.Interop.Tests
|
||||
|
||||
# Run CycloneDX tests only
|
||||
dotnet test tests/interop/StellaOps.Interop.Tests --filter "Format=CycloneDX"
|
||||
|
||||
# Run SPDX tests only
|
||||
dotnet test tests/interop/StellaOps.Interop.Tests --filter "Format=SPDX"
|
||||
|
||||
# Run parity tests
|
||||
dotnet test tests/interop/StellaOps.Interop.Tests --filter "Category=Parity"
|
||||
```
|
||||
|
||||
### CI Execution
|
||||
|
||||
Interop tests run automatically on:
|
||||
- Pull requests affecting scanner or SBOM code
|
||||
- Nightly schedule (6 AM UTC)
|
||||
- Manual workflow dispatch
|
||||
|
||||
See `.gitea/workflows/interop-e2e.yml` for CI configuration.
|
||||
|
||||
## Test Images
|
||||
|
||||
The following container images are used for interop testing:
|
||||
|
||||
| Image | Purpose | Characteristics |
|
||||
|-------|---------|-----------------|
|
||||
| `alpine:3.18` | Distro packages | APK packages, minimal |
|
||||
| `debian:12-slim` | Distro packages | DEB packages, medium |
|
||||
| `ubuntu:22.04` | Distro packages | DEB packages, larger |
|
||||
| `node:20-alpine` | Language packages | NPM packages |
|
||||
| `python:3.12-slim` | Language packages | Pip packages |
|
||||
| `golang:1.22-alpine` | Language packages | Go modules |
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Parity Below Threshold
|
||||
|
||||
If parity drops below 95%:
|
||||
|
||||
1. **Check for feed updates**
|
||||
- Grype may have newer vulnerability data
|
||||
- Update StellaOps feeds
|
||||
|
||||
2. **Review differences**
|
||||
- Run parity analysis: `dotnet test --filter "Category=Parity" --logger "console;verbosity=detailed"`
|
||||
- Categorize differences using `FindingsParityAnalyzer`
|
||||
|
||||
3. **Validate with golden corpus**
|
||||
- Compare against known-good results in `bench/golden-corpus/categories/interop/`
|
||||
|
||||
4. **Update acceptable differences**
|
||||
- Document new acceptable differences in this README
|
||||
- Adjust tolerance if justified
|
||||
|
||||
### Tool Installation Failures
|
||||
|
||||
If Syft/Grype/cosign fail to install:
|
||||
|
||||
```bash
|
||||
# Check versions
|
||||
syft --version
|
||||
grype --version
|
||||
cosign version
|
||||
|
||||
# Reinstall if needed
|
||||
rm /usr/local/bin/{syft,grype,cosign}
|
||||
# Re-run installation commands
|
||||
```
|
||||
|
||||
### SBOM Validation Failures
|
||||
|
||||
If SBOMs fail schema validation:
|
||||
|
||||
1. Verify format version:
|
||||
```bash
|
||||
jq '.specVersion' sbom-cyclonedx.json # Should be "1.6"
|
||||
jq '.spdxVersion' sbom-spdx.json # Should be "SPDX-3.0"
|
||||
```
|
||||
|
||||
2. Validate against official schemas:
|
||||
```bash
|
||||
# CycloneDX
|
||||
npm install -g @cyclonedx/cdx-cli
|
||||
cdx-cli validate --input-file sbom-cyclonedx.json
|
||||
|
||||
# SPDX (TODO: Add SPDX validation tool)
|
||||
```
|
||||
|
||||
## Continuous Improvement
|
||||
|
||||
### Adding New Test Cases
|
||||
|
||||
1. Add new image to test matrix in `*RoundTripTests.cs`
|
||||
2. Update `TestImages` member data
|
||||
3. Run locally to verify
|
||||
4. Submit PR with updated tests
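
A hypothetical shape for steps 1–2 is sketched below; the real classes are the `*RoundTripTests.cs` files and may differ in naming and structure.

```csharp
// Hypothetical shape of the TestImages member data from step 2; the real classes are the
// *RoundTripTests.cs files and may differ in naming and structure.
using System.Collections.Generic;
using Xunit;

public class ExampleRoundTripTests
{
    public static IEnumerable<object[]> TestImages =>
        new List<object[]>
        {
            new object[] { "alpine:3.18" },
            new object[] { "debian:12-slim" },
            new object[] { "node:20-alpine" },
            // Step 1: add the new image here, then run the suite locally (step 3).
        };

    [Theory]
    [MemberData(nameof(TestImages))]
    public void RoundTrip_covers_image(string image)
    {
        // Placeholder assertion; the real tests generate, attest, and re-scan the SBOM.
        Assert.False(string.IsNullOrWhiteSpace(image));
    }
}
```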
|
||||
|
||||
### Updating Parity Thresholds
|
||||
|
||||
Current threshold: **95%**
|
||||
|
||||
To adjust:
|
||||
1. Document justification in sprint file
|
||||
2. Update `tolerancePercent` parameter in test calls
|
||||
3. Update this README
|
||||
|
||||
### Tool Version Pinning
|
||||
|
||||
Tools are currently installed from `latest`. To pin versions:
|
||||
|
||||
1. Update `.gitea/workflows/interop-e2e.yml`
|
||||
2. Specify version in install commands
|
||||
3. Document version compatibility in this README
|
||||
|
||||
## References
|
||||
|
||||
- [CycloneDX 1.6 Specification](https://cyclonedx.org/docs/1.6/)
|
||||
- [SPDX 3.0.1 Specification](https://spdx.github.io/spdx-spec/v3.0/)
|
||||
- [Syft Documentation](https://github.com/anchore/syft)
|
||||
- [Grype Documentation](https://github.com/anchore/grype)
|
||||
- [cosign Documentation](https://github.com/sigstore/cosign)
|
||||
|
||||
## Contacts
|
||||
|
||||
For questions about interop testing:
|
||||
- **Sprint:** SPRINT_5100_0003_0001
|
||||
- **Owner:** QA Team
|
||||
- **Dependencies:** Sprint 5100.0001.0002 (Evidence Index)
|
||||
630
docs/modules/attestor/guides/cosign-integration.md
Normal file
630
docs/modules/attestor/guides/cosign-integration.md
Normal file
@@ -0,0 +1,630 @@
|
||||
# Cosign Integration Guide
|
||||
|
||||
> **Status:** Ready for Production
|
||||
> **Last Updated:** 2025-12-23
|
||||
> **Prerequisites:** Cosign v2.x, StellaOps CLI v1.5+
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
This guide explains how to integrate StellaOps with [Cosign](https://docs.sigstore.dev/cosign/overview/), the signing tool from the Sigstore project. StellaOps can verify and ingest SBOM attestations signed with Cosign, enabling seamless interoperability with the broader supply chain security ecosystem.
|
||||
|
||||
**Key Capabilities:**
|
||||
- ✅ Verify Cosign-signed attestations (SPDX + CycloneDX)
|
||||
- ✅ Extract SBOMs from Cosign DSSE envelopes
|
||||
- ✅ Upload attested SBOMs to StellaOps for scanning
|
||||
- ✅ Offline verification with bundled trust roots
|
||||
- ✅ Custom trust root configuration for air-gapped environments
|
||||
|
||||
---
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Verify a Cosign-Signed Attestation
|
||||
|
||||
```bash
|
||||
# Verify attestation and extract SBOM
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--root /path/to/fulcio-root.pem \
|
||||
--extract-sbom sbom.json
|
||||
|
||||
# Upload extracted SBOM for scanning
|
||||
stella sbom upload \
|
||||
--file sbom.json \
|
||||
--artifact myapp:v1.2.3
|
||||
```
|
||||
|
||||
### 2. End-to-End: Cosign → StellaOps
|
||||
|
||||
```bash
|
||||
# Step 1: Sign SBOM with Cosign
|
||||
cosign attest --predicate sbom.spdx.json \
|
||||
--type spdx \
|
||||
--key cosign.key \
|
||||
myregistry/myapp:v1.2.3
|
||||
|
||||
# Step 2: Fetch attestation
|
||||
cosign verify-attestation myregistry/myapp:v1.2.3 \
|
||||
--key cosign.pub \
|
||||
--type spdx \
|
||||
--output-file attestation.dsse.json
|
||||
|
||||
# Step 3: Verify with StellaOps
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--extract-sbom sbom.spdx.json
|
||||
|
||||
# Step 4: Scan with StellaOps
|
||||
stella scan sbom sbom.spdx.json \
|
||||
--output results.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Supported Predicate Types
|
||||
|
||||
StellaOps supports standard SBOM predicate types used by Cosign:
|
||||
|
||||
| Predicate Type | Format | Cosign Flag | StellaOps Support |
|
||||
|----------------|--------|-------------|-------------------|
|
||||
| `https://spdx.dev/Document` | SPDX 3.0.1 | `--type spdx` | ✅ Full support |
|
||||
| `https://spdx.org/spdxdocs/spdx-v2.3-*` | SPDX 2.3 | `--type spdx` | ✅ Full support |
|
||||
| `https://cyclonedx.org/bom` | CycloneDX 1.4-1.7 | `--type cyclonedx` | ✅ Full support |
|
||||
| `https://slsa.dev/provenance/v1` | SLSA v1.0 | `--type slsaprovenance` | ✅ Metadata only |
|
||||
|
||||
---
|
||||
|
||||
## Common Workflows
|
||||
|
||||
### Workflow 1: Keyless Signing (Fulcio)
|
||||
|
||||
**Use Case:** Sign attestations using ephemeral keys from Fulcio (requires OIDC).
|
||||
|
||||
```bash
|
||||
# Step 1: Generate SBOM (using Syft as example)
|
||||
syft myregistry/myapp:v1.2.3 -o spdx-json=sbom.spdx.json
|
||||
|
||||
# Step 2: Sign with Cosign (keyless)
|
||||
cosign attest --predicate sbom.spdx.json \
|
||||
--type spdx \
|
||||
myregistry/myapp:v1.2.3
|
||||
|
||||
# Step 3: Verify with StellaOps (uses Sigstore public instance)
|
||||
stella attest verify-image myregistry/myapp:v1.2.3 \
|
||||
--type spdx \
|
||||
--extract-sbom sbom-verified.spdx.json
|
||||
|
||||
# Step 4: Scan
|
||||
stella scan sbom sbom-verified.spdx.json
|
||||
```
|
||||
|
||||
**Trust Configuration:**
|
||||
StellaOps defaults to the Sigstore public instance trust roots:
|
||||
- Fulcio root: https://fulcio.sigstore.dev/api/v2/trustBundle
|
||||
- Rekor instance: https://rekor.sigstore.dev
|
||||
|
||||
### Workflow 2: Key-Based Signing
|
||||
|
||||
**Use Case:** Sign attestations with your own keys (air-gapped environments).
|
||||
|
||||
```bash
|
||||
# Step 1: Generate key pair (one-time)
|
||||
cosign generate-key-pair
|
||||
|
||||
# Step 2: Sign SBOM
|
||||
cosign attest --predicate sbom.spdx.json \
|
||||
--type spdx \
|
||||
--key cosign.key \
|
||||
myregistry/myapp:v1.2.3
|
||||
|
||||
# Step 3: Export attestation
|
||||
cosign verify-attestation myregistry/myapp:v1.2.3 \
|
||||
--key cosign.pub \
|
||||
--type spdx \
|
||||
--output-file attestation.dsse.json
|
||||
|
||||
# Step 4: Verify with StellaOps (custom public key)
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--public-key cosign.pub \
|
||||
--extract-sbom sbom.spdx.json
|
||||
|
||||
# Step 5: Upload to StellaOps
|
||||
stella sbom upload --file sbom.spdx.json --artifact myapp:v1.2.3
|
||||
```
|
||||
|
||||
### Workflow 3: CycloneDX Attestations
|
||||
|
||||
**Use Case:** Work with CycloneDX BOMs from Trivy.
|
||||
|
||||
```bash
|
||||
# Step 1: Generate CycloneDX SBOM with Trivy
|
||||
trivy image myregistry/myapp:v1.2.3 \
|
||||
--format cyclonedx \
|
||||
--output sbom.cdx.json
|
||||
|
||||
# Step 2: Sign with Cosign
|
||||
cosign attest --predicate sbom.cdx.json \
|
||||
--type cyclonedx \
|
||||
--key cosign.key \
|
||||
myregistry/myapp:v1.2.3
|
||||
|
||||
# Step 3: Fetch and verify
|
||||
cosign verify-attestation myregistry/myapp:v1.2.3 \
|
||||
--key cosign.pub \
|
||||
--type cyclonedx \
|
||||
--output-file attestation.dsse.json
|
||||
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--public-key cosign.pub \
|
||||
--extract-sbom sbom.cdx.json
|
||||
|
||||
# Step 4: Scan
|
||||
stella scan sbom sbom.cdx.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## CLI Reference
|
||||
|
||||
### `stella attest verify`
|
||||
|
||||
Verify a Cosign-signed attestation and optionally extract the SBOM.
|
||||
|
||||
```bash
|
||||
stella attest verify [OPTIONS]
|
||||
|
||||
Options:
|
||||
--envelope FILE DSSE envelope file (required)
|
||||
--root FILE Fulcio root certificate (for keyless)
|
||||
--public-key FILE Public key file (for key-based)
|
||||
--extract-sbom FILE Extract SBOM to file
|
||||
--offline Offline verification mode
|
||||
--checkpoint FILE Rekor checkpoint for offline verification
|
||||
--trust-root DIR Directory with trust roots
|
||||
--output FILE Verification report output (JSON)
|
||||
|
||||
Examples:
|
||||
# Keyless verification (Sigstore public instance)
|
||||
stella attest verify --envelope attestation.dsse.json
|
||||
|
||||
# Key-based verification
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--public-key cosign.pub
|
||||
|
||||
# Extract SBOM during verification
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--extract-sbom sbom.json
|
||||
|
||||
# Offline verification
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--offline \
|
||||
--trust-root /opt/stellaops/trust-roots \
|
||||
--checkpoint rekor-checkpoint.json
|
||||
```
|
||||
|
||||
### `stella attest extract-sbom`
|
||||
|
||||
Extract SBOM from a DSSE envelope without verification.
|
||||
|
||||
```bash
|
||||
stella attest extract-sbom [OPTIONS]
|
||||
|
||||
Options:
|
||||
--envelope FILE DSSE envelope file (required)
|
||||
--output FILE Output SBOM file (required)
|
||||
--format FORMAT Force format (spdx|cyclonedx)
|
||||
|
||||
Example:
|
||||
stella attest extract-sbom \
|
||||
--envelope attestation.dsse.json \
|
||||
--output sbom.spdx.json
|
||||
```
|
||||
|
||||
### `stella attest verify-image`
|
||||
|
||||
Verify attestations attached to an OCI image.
|
||||
|
||||
```bash
|
||||
stella attest verify-image IMAGE [OPTIONS]
|
||||
|
||||
Options:
|
||||
--type TYPE Predicate type (spdx|cyclonedx|slsaprovenance)
|
||||
--extract-sbom FILE Extract SBOM to file
|
||||
--public-key FILE Public key (for key-based signing)
|
||||
--offline Offline mode
|
||||
|
||||
Example:
|
||||
stella attest verify-image myregistry/myapp:v1.2.3 \
|
||||
--type spdx \
|
||||
--extract-sbom sbom.spdx.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Trust Configuration
|
||||
|
||||
### Default Trust Roots (Public Sigstore)
|
||||
|
||||
StellaOps defaults to the Sigstore public instance:
|
||||
|
||||
```yaml
|
||||
# Default configuration (built-in)
|
||||
attestor:
|
||||
trustRoots:
|
||||
sigstore:
|
||||
enabled: true
|
||||
fulcioRootUrl: https://fulcio.sigstore.dev/api/v2/trustBundle
|
||||
rekorInstanceUrl: https://rekor.sigstore.dev
|
||||
cacheTTL: 24h
|
||||
```
|
||||
|
||||
### Custom Trust Roots (Air-Gapped)
|
||||
|
||||
For air-gapped environments, provide trust roots offline:
|
||||
|
||||
```yaml
|
||||
# /etc/stellaops/attestor.yaml
|
||||
attestor:
|
||||
trustRoots:
|
||||
custom:
|
||||
enabled: true
|
||||
fulcioRoots:
|
||||
- /opt/stellaops/trust-roots/fulcio-root.pem
|
||||
- /opt/stellaops/trust-roots/fulcio-intermediate.pem
|
||||
rekorPublicKeys:
|
||||
- /opt/stellaops/trust-roots/rekor.pub
|
||||
ctfePublicKeys:
|
||||
- /opt/stellaops/trust-roots/ctfe.pub
|
||||
```
|
||||
|
||||
**Trust Root Bundle Structure:**
|
||||
```
|
||||
/opt/stellaops/trust-roots/
|
||||
├── fulcio-root.pem # Fulcio root CA
|
||||
├── fulcio-intermediate.pem # Fulcio intermediate CA (optional)
|
||||
├── rekor.pub # Rekor public key
|
||||
├── ctfe.pub # Certificate Transparency log public key
|
||||
└── checkpoints/ # Cached Rekor checkpoints
|
||||
└── rekor-checkpoint.json
|
||||
```
|
||||
|
||||
### Downloading Trust Roots
|
||||
|
||||
```bash
|
||||
# Download Sigstore public trust bundle
|
||||
curl -o trust-bundle.json \
|
||||
https://tuf.sigstore.dev/targets/trusted_root.json
|
||||
|
||||
# Extract Fulcio roots
|
||||
stella trust extract-fulcio-roots \
|
||||
--bundle trust-bundle.json \
|
||||
--output /opt/stellaops/trust-roots/
|
||||
|
||||
# Extract Rekor public keys
|
||||
stella trust extract-rekor-keys \
|
||||
--bundle trust-bundle.json \
|
||||
--output /opt/stellaops/trust-roots/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Offline Verification
|
||||
|
||||
### Prerequisites
|
||||
|
||||
1. Trust roots downloaded and extracted
|
||||
2. Rekor checkpoint bundle downloaded
|
||||
3. Attestation DSSE envelope available locally
|
||||
|
||||
### Workflow
|
||||
|
||||
```bash
|
||||
# Step 1: Download trust bundle (online, one-time)
|
||||
stella trust download --output /opt/stellaops/trust-roots/
|
||||
|
||||
# Step 2: Download Rekor checkpoint (online, periodic)
|
||||
stella trust checkpoint download \
|
||||
--output /opt/stellaops/trust-roots/checkpoints/rekor-checkpoint.json
|
||||
|
||||
# Step 3: Verify offline (air-gapped environment)
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--offline \
|
||||
--trust-root /opt/stellaops/trust-roots \
|
||||
--checkpoint /opt/stellaops/trust-roots/checkpoints/rekor-checkpoint.json \
|
||||
--extract-sbom sbom.json
|
||||
|
||||
# Step 4: Scan offline
|
||||
stella scan sbom sbom.json --offline
|
||||
```
|
||||
|
||||
### Checkpoint Freshness
|
||||
|
||||
Rekor checkpoints should be refreshed periodically:
|
||||
- **High Security:** Daily updates
|
||||
- **Standard:** Weekly updates
|
||||
- **Low Risk:** Monthly updates
|
||||
|
||||
Set a reminder to refresh checkpoints:
|
||||
```bash
|
||||
# Cron job (daily at 2 AM)
|
||||
0 2 * * * /usr/local/bin/stella trust checkpoint download --output /opt/stellaops/trust-roots/checkpoints/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Error: "Unsupported predicate type"
|
||||
|
||||
**Cause:** The DSSE envelope contains a predicate type not supported by StellaOps.
|
||||
|
||||
**Solution:** Check the predicate type:
|
||||
```bash
|
||||
stella attest inspect --envelope attestation.dsse.json
|
||||
|
||||
# Output will show:
|
||||
# Predicate Type: https://example.com/custom-type
|
||||
# Supported: false
|
||||
```
|
||||
|
||||
If the predicate is SPDX or CycloneDX, ensure you're using StellaOps CLI v1.5+.
|
||||
|
||||
### Error: "Signature verification failed"
|
||||
|
||||
**Cause:** The signature cannot be verified against the provided trust roots.
|
||||
|
||||
**Troubleshooting Steps:**
|
||||
1. Check trust root configuration:
|
||||
```bash
|
||||
stella attest verify --envelope attestation.dsse.json --debug
|
||||
```
|
||||
|
||||
2. Verify the public key matches:
|
||||
```bash
|
||||
# Extract public key from certificate in DSSE envelope
|
||||
stella attest inspect --envelope attestation.dsse.json --show-cert
|
||||
|
||||
# Compare with your public key
|
||||
cat cosign.pub
|
||||
```
|
||||
|
||||
3. For keyless signing, ensure Fulcio root is correct:
|
||||
```bash
|
||||
# Test Fulcio connectivity
|
||||
curl -v https://fulcio.sigstore.dev/api/v2/trustBundle
|
||||
```
|
||||
|
||||
### Error: "Failed to extract SBOM"
|
||||
|
||||
**Cause:** The predicate payload is not a valid SBOM.
|
||||
|
||||
**Solution:** Inspect the predicate:
|
||||
```bash
|
||||
stella attest inspect --envelope attestation.dsse.json --show-predicate
|
||||
```
|
||||
|
||||
Check if the predicate type matches the actual content:
|
||||
- `https://spdx.dev/Document` should contain SPDX JSON
|
||||
- `https://cyclonedx.org/bom` should contain CycloneDX JSON
|
||||
|
||||
### Warning: "Checkpoint is stale"
|
||||
|
||||
**Cause:** The Rekor checkpoint is older than the freshness threshold (default: 7 days).
|
||||
|
||||
**Solution:** Download a fresh checkpoint:
|
||||
```bash
|
||||
stella trust checkpoint download \
|
||||
--output /opt/stellaops/trust-roots/checkpoints/rekor-checkpoint.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Verify Before Extraction
|
||||
|
||||
Always verify the attestation signature before extracting the SBOM:
|
||||
|
||||
```bash
|
||||
# ✅ GOOD: Verify then extract
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--extract-sbom sbom.json
|
||||
|
||||
# ❌ BAD: Extract without verification
|
||||
stella attest extract-sbom \
|
||||
--envelope attestation.dsse.json \
|
||||
--output sbom.json
|
||||
```
|
||||
|
||||
### 2. Use Keyless Signing for Public Images
|
||||
|
||||
For public container images, use keyless signing (Fulcio):
|
||||
- No key management overhead
|
||||
- Identity verified via OIDC
|
||||
- Transparent via Rekor
|
||||
|
||||
```bash
|
||||
# Keyless signing (recommended for public images)
|
||||
cosign attest --predicate sbom.spdx.json \
|
||||
--type spdx \
|
||||
myregistry/publicapp:v1.0.0
|
||||
```
|
||||
|
||||
### 3. Use Key-Based Signing for Private/Air-Gapped
|
||||
|
||||
For private registries or air-gapped environments, use key-based signing:
|
||||
- Full control over keys
|
||||
- No external dependencies
|
||||
- Works offline
|
||||
|
||||
```bash
|
||||
# Key-based signing (recommended for private/air-gapped)
|
||||
cosign attest --predicate sbom.spdx.json \
|
||||
--type spdx \
|
||||
--key cosign.key \
|
||||
myregistry/privateapp:v1.0.0
|
||||
```
|
||||
|
||||
### 4. Automate Trust Root Updates
|
||||
|
||||
Set up automated trust root updates:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# /usr/local/bin/update-trust-roots.sh
|
||||
|
||||
set -e
|
||||
|
||||
TRUST_DIR=/opt/stellaops/trust-roots
|
||||
|
||||
# Download latest trust bundle
|
||||
stella trust download --output $TRUST_DIR --force
|
||||
|
||||
# Download fresh checkpoint
|
||||
stella trust checkpoint download --output $TRUST_DIR/checkpoints/
|
||||
|
||||
# Verify trust roots
|
||||
stella trust verify --trust-root $TRUST_DIR
|
||||
|
||||
echo "Trust roots updated successfully"
|
||||
```
|
||||
|
||||
### 5. Include Attestation in CI/CD
|
||||
|
||||
Integrate attestation verification into your CI/CD pipeline:
|
||||
|
||||
**GitHub Actions Example:**
|
||||
```yaml
|
||||
name: Verify SBOM Attestation
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [main]
|
||||
|
||||
jobs:
|
||||
verify:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Install StellaOps CLI
|
||||
run: |
|
||||
curl -sSfL https://cli.stellaops.io/install.sh | sh
|
||||
sudo mv stella /usr/local/bin/
|
||||
|
||||
- name: Verify attestation
|
||||
run: |
|
||||
stella attest verify-image \
|
||||
${{ env.IMAGE_REF }} \
|
||||
--type spdx \
|
||||
--extract-sbom sbom.spdx.json
|
||||
|
||||
- name: Scan SBOM
|
||||
run: |
|
||||
stella scan sbom sbom.spdx.json \
|
||||
--policy production \
|
||||
--fail-on blocked
|
||||
|
||||
- name: Upload results
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: scan-results
|
||||
path: sbom.spdx.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Advanced Topics
|
||||
|
||||
### Multi-Signature Verification
|
||||
|
||||
Cosign supports multiple signatures on a single attestation. StellaOps verifies all signatures:
|
||||
|
||||
```bash
|
||||
# Attestation with multiple signatures
|
||||
cosign verify-attestation myregistry/myapp:v1.2.3 \
|
||||
--key cosign-key1.pub \
|
||||
--key cosign-key2.pub \
|
||||
--type spdx \
|
||||
--output-file attestation.dsse.json
|
||||
|
||||
# StellaOps verifies all signatures
|
||||
stella attest verify \
|
||||
--envelope attestation.dsse.json \
|
||||
--public-key cosign-key1.pub \
|
||||
--public-key cosign-key2.pub \
|
||||
--require-all-signatures
|
||||
```
|
||||
|
||||
### Custom Predicate Types
|
||||
|
||||
If you have custom predicate types, register them with StellaOps:
|
||||
|
||||
```yaml
|
||||
# /etc/stellaops/attestor.yaml
|
||||
attestor:
|
||||
predicates:
|
||||
custom:
|
||||
- type: https://example.com/custom-sbom@v1
|
||||
parser: custom-sbom-parser
|
||||
schema: /opt/stellaops/schemas/custom-sbom.schema.json
|
||||
```
|
||||
|
||||
### Batch Verification
|
||||
|
||||
Verify multiple attestations in batch:
|
||||
|
||||
```bash
|
||||
# Create batch file
|
||||
cat > batch.txt <<EOF
|
||||
attestation1.dsse.json
|
||||
attestation2.dsse.json
|
||||
attestation3.dsse.json
|
||||
EOF
|
||||
|
||||
# Batch verify
|
||||
stella attest verify-batch \
|
||||
--input batch.txt \
|
||||
--public-key cosign.pub \
|
||||
--output-dir verified-sboms/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## References
|
||||
|
||||
### External Documentation
|
||||
- [Cosign Documentation](https://docs.sigstore.dev/cosign/overview/)
|
||||
- [Sigstore Trust Root Specification](https://github.com/sigstore/root-signing)
|
||||
- [in-toto Attestation Specification](https://github.com/in-toto/attestation)
|
||||
- [SPDX 3.0.1 Specification](https://spdx.github.io/spdx-spec/v3.0.1/)
|
||||
- [CycloneDX 1.6 Specification](https://cyclonedx.org/docs/1.6/)
|
||||
|
||||
### StellaOps Documentation
|
||||
- [Attestor Architecture](../modules/attestor/architecture.md)
|
||||
- [Standard Predicate Types](../modules/attestor/predicate-parsers.md)
|
||||
- [CLI Reference](../API_CLI_REFERENCE.md)
|
||||
- [Offline Kit Guide](../OFFLINE_KIT.md)
|
||||
|
||||
---
|
||||
|
||||
## Feedback
|
||||
|
||||
Found an issue or have a suggestion? Please report it:
|
||||
- GitHub: https://github.com/stella-ops/stella-ops/issues
|
||||
- Docs: https://docs.stellaops.io/integrations/cosign
|
||||
- Community: https://community.stellaops.io/c/integrations
|
||||
|
||||
---
|
||||
|
||||
**Last Updated:** 2025-12-23
|
||||
**Applies To:** StellaOps CLI v1.5+, Cosign v2.x
|
||||
49
docs/modules/attestor/schemas/artifacts.md
Normal file
49
docs/modules/attestor/schemas/artifacts.md
Normal file
@@ -0,0 +1,49 @@
# Artifacts Schema (DOCS-ORCH-34-004)

Last updated: 2025-11-25

## Purpose
Describe artifact kinds produced by Orchestrator runs and how they are stored, hashed, and referenced.

## Artifact kinds
- **log**: NDJSON log fragment for a step/run.
- **metrics**: Prometheus/OpenMetrics snapshot for a step/run.
- **output**: arbitrary task output (JSON, NDJSON, binary), content-addressed.
- **manifest**: bundle manifest listing artifacts and hashes.

## Schema (common fields)
```json
{
  "kind": "log|metrics|output|manifest",
  "tenant": "acme",
  "dagId": "string",
  "runId": "string",
  "stepId": "string",
  "contentType": "application/json",
  "hash": "sha256:<hex>",
  "size": 1234,
  "createdUtc": "2025-11-25T00:00:00Z",
  "traceId": "optional",
  "encryption": "none|aes256-gcm",
  "compression": "none|gzip"
}
```

## Storage rules
- Content-addressed by `sha256` (lowercase hex). Filenames may use `<hash>`; metadata kept in Mongo with tenant scoping.
- Immutable; new versions create new hashes.
- Optional encryption: AES-256-GCM with keys from Authority `secretRef`; never store keys alongside artifacts.
- Compression optional (gzip) but hash is computed on compressed bytes; record `compression`.
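
A minimal sketch of the content-addressing rule above, assuming artifacts arrive as byte arrays; `ArtifactAddress` is an illustrative helper, not the Orchestrator implementation. The point it shows is that when compression is enabled the digest is taken over the compressed bytes, exactly as recorded in `hash`, `size`, and `compression`.

```csharp
// Minimal sketch of the storage rules above; ArtifactAddress is an illustrative helper,
// not the Orchestrator implementation. When gzip is enabled the digest is taken over the
// compressed bytes, i.e. over the artifact exactly as stored.
using System;
using System.IO;
using System.IO.Compression;
using System.Security.Cryptography;

public static class ArtifactAddress
{
    public static (string Hash, long Size, string Compression) Store(byte[] payload, bool gzip)
    {
        byte[] stored = payload;
        if (gzip)
        {
            using var buffer = new MemoryStream();
            using (var gz = new GZipStream(buffer, CompressionLevel.Optimal))
            {
                gz.Write(payload, 0, payload.Length);
            }
            stored = buffer.ToArray();
        }

        // Lowercase hex, prefixed exactly as recorded in the artifact's `hash` field.
        var hash = "sha256:" + Convert.ToHexString(SHA256.HashData(stored)).ToLowerInvariant();
        return (hash, stored.LongLength, gzip ? "gzip" : "none");
    }
}
```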
|
||||
|
||||
## Access & security
|
||||
- Tenant-scoped reads; artifacts cannot be shared across tenants.
|
||||
- No secrets stored; redact before writing. Logs/metrics already redacted at source.
|
||||
- Access control enforced via orchestrator scopes; audit log every download/export.
|
||||
|
||||
## Offline posture
|
||||
- Artifacts may be exported as tarball with manifest (`manifest` kind) that lists hash, size, compression/encryption flags.
|
||||
- Imports verify the manifest hash and each per-artifact hash before accepting the bundle, as sketched below.
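
A hedged sketch of the import-side check, assuming the exported manifest is NDJSON with `hash` plus a hypothetical `path` field per artifact; the real layout is whatever the `manifest` artifact kind defines:

```bash
# Sketch only: verify per-artifact hashes listed in an exported bundle manifest.
# Assumes one JSON object per line with "hash" ("sha256:<hex>") and a hypothetical "path" field.
jq -r '"\(.hash | sub("^sha256:"; ""))  \(.path)"' manifest.ndjson | sha256sum -c -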
|
||||
|
||||
## Determinism
|
||||
- Hash and size recorded at creation; manifests sorted by `kind`, then `dagId`, `runId`, `stepId`, `hash` (see the sketch after this list).
|
||||
- Timestamps UTC ISO-8601; NDJSON ordering stable.
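
For illustration, the ordering rule above could be imposed on a JSON array of manifest entries like so (the input file name is hypothetical):

```bash
# Sketch: sort manifest entries by the documented key order.
jq 'sort_by(.kind, .dagId, .runId, .stepId, .hash)' manifest-entries.json
```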
|
||||
169
docs/modules/attestor/schemas/calibration-manifest.schema.json
Normal file
@@ -0,0 +1,169 @@
|
||||
{
|
||||
"$schema": "https://json-schema.org/draft/2020-12/schema",
|
||||
"$id": "https://stella-ops.org/schemas/calibration-manifest/1.0.0",
|
||||
"title": "Calibration Manifest",
|
||||
"description": "Record of trust vector calibration based on post-mortem truth comparison",
|
||||
"type": "object",
|
||||
"required": ["manifestId", "sourceId", "epochNumber", "calibratedAt"],
|
||||
"properties": {
|
||||
"manifestId": {
|
||||
"type": "string",
|
||||
"description": "Unique identifier for this calibration record"
|
||||
},
|
||||
"sourceId": {
|
||||
"type": "string",
|
||||
"description": "VEX source being calibrated"
|
||||
},
|
||||
"tenant": {
|
||||
"type": "string",
|
||||
"description": "Tenant scope (optional for global calibration)"
|
||||
},
|
||||
"epochNumber": {
|
||||
"type": "integer",
|
||||
"description": "Calibration epoch number",
|
||||
"minimum": 1
|
||||
},
|
||||
"previousVector": {
|
||||
"$ref": "#/$defs/TrustVectorValues"
|
||||
},
|
||||
"calibratedVector": {
|
||||
"$ref": "#/$defs/TrustVectorValues"
|
||||
},
|
||||
"delta": {
|
||||
"$ref": "#/$defs/CalibrationDelta"
|
||||
},
|
||||
"comparison": {
|
||||
"$ref": "#/$defs/ComparisonResult"
|
||||
},
|
||||
"detectedBias": {
|
||||
"type": "string",
|
||||
"description": "Detected bias type, if any",
|
||||
"enum": ["optimistic_bias", "pessimistic_bias", "scope_bias", "none"]
|
||||
},
|
||||
"configuration": {
|
||||
"$ref": "#/$defs/CalibrationConfiguration"
|
||||
},
|
||||
"calibratedAt": {
|
||||
"type": "string",
|
||||
"description": "When calibration was performed",
|
||||
"format": "date-time"
|
||||
},
|
||||
"manifestDigest": {
|
||||
"type": "string",
|
||||
"description": "SHA256 digest of this manifest",
|
||||
"pattern": "^sha256:[a-f0-9]{64}$"
|
||||
}
|
||||
},
|
||||
"$defs": {
|
||||
"TrustVectorValues": {
|
||||
"type": "object",
|
||||
"description": "Trust vector component values",
|
||||
"required": ["provenance", "coverage", "replayability"],
|
||||
"properties": {
|
||||
"provenance": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"coverage": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"replayability": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
}
|
||||
}
|
||||
},
|
||||
"CalibrationDelta": {
|
||||
"type": "object",
|
||||
"description": "Adjustment applied to trust vector",
|
||||
"properties": {
|
||||
"deltaP": {
|
||||
"type": "number",
|
||||
"description": "Change in provenance score"
|
||||
},
|
||||
"deltaC": {
|
||||
"type": "number",
|
||||
"description": "Change in coverage score"
|
||||
},
|
||||
"deltaR": {
|
||||
"type": "number",
|
||||
"description": "Change in replayability score"
|
||||
}
|
||||
}
|
||||
},
|
||||
"ComparisonResult": {
|
||||
"type": "object",
|
||||
"description": "Result of comparing claims to post-mortem truth",
|
||||
"required": ["sourceId", "accuracy"],
|
||||
"properties": {
|
||||
"sourceId": {
|
||||
"type": "string"
|
||||
},
|
||||
"accuracy": {
|
||||
"type": "number",
|
||||
"description": "Accuracy score (0-1)",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"totalClaims": {
|
||||
"type": "integer",
|
||||
"description": "Total claims evaluated",
|
||||
"minimum": 0
|
||||
},
|
||||
"correctClaims": {
|
||||
"type": "integer",
|
||||
"description": "Claims matching post-mortem truth",
|
||||
"minimum": 0
|
||||
},
|
||||
"evaluationPeriodStart": {
|
||||
"type": "string",
|
||||
"format": "date-time"
|
||||
},
|
||||
"evaluationPeriodEnd": {
|
||||
"type": "string",
|
||||
"format": "date-time"
|
||||
}
|
||||
}
|
||||
},
|
||||
"CalibrationConfiguration": {
|
||||
"type": "object",
|
||||
"description": "Configuration used for calibration",
|
||||
"properties": {
|
||||
"learningRate": {
|
||||
"type": "number",
|
||||
"description": "Learning rate per epoch",
|
||||
"default": 0.02
|
||||
},
|
||||
"maxAdjustmentPerEpoch": {
|
||||
"type": "number",
|
||||
"description": "Maximum adjustment per epoch",
|
||||
"default": 0.05
|
||||
},
|
||||
"minValue": {
|
||||
"type": "number",
|
||||
"description": "Minimum trust component value",
|
||||
"default": 0.10
|
||||
},
|
||||
"maxValue": {
|
||||
"type": "number",
|
||||
"description": "Maximum trust component value",
|
||||
"default": 1.00
|
||||
},
|
||||
"momentumFactor": {
|
||||
"type": "number",
|
||||
"description": "Momentum factor for smoothing",
|
||||
"default": 0.9
|
||||
},
|
||||
"accuracyThreshold": {
|
||||
"type": "number",
|
||||
"description": "Threshold above which no calibration is needed",
|
||||
"default": 0.95
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
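
The schema records the calibration knobs but not the update rule itself. Purely as an illustration of how the bounds interact, here is a sketch of one clamped adjustment step; the error-driven update toward `accuracyThreshold` is an assumption, and `momentumFactor` is ignored for brevity:

```bash
# Illustrative only: apply one bounded calibration step to a single trust component.
awk -v prev=0.80 -v accuracy=0.72 -v lr=0.02 -v maxAdj=0.05 -v minV=0.10 -v maxV=1.00 'BEGIN {
  delta = lr * (accuracy - 0.95)           # assumed error-driven update toward accuracyThreshold
  if (delta >  maxAdj) delta =  maxAdj     # cap by maxAdjustmentPerEpoch
  if (delta < -maxAdj) delta = -maxAdj
  v = prev + delta
  if (v < minV) v = minV                   # clamp to [minValue, maxValue]
  if (v > maxV) v = maxV
  printf "previous=%.2f delta=%+.4f calibrated=%.4f\n", prev, delta, v
}'
```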
|
||||
137
docs/modules/attestor/schemas/claim-score.schema.json
Normal file
@@ -0,0 +1,137 @@
|
||||
{
|
||||
"$schema": "https://json-schema.org/draft/2020-12/schema",
|
||||
"$id": "https://stella-ops.org/schemas/claim-score/1.0.0",
|
||||
"title": "Claim Score",
|
||||
"description": "VEX claim scoring result using the trust lattice formula: ClaimScore = BaseTrust * M * F",
|
||||
"type": "object",
|
||||
"required": ["sourceId", "status", "claimScore"],
|
||||
"properties": {
|
||||
"sourceId": {
|
||||
"type": "string",
|
||||
"description": "Identifier of the VEX source"
|
||||
},
|
||||
"status": {
|
||||
"type": "string",
|
||||
"description": "VEX status claimed",
|
||||
"enum": ["affected", "not_affected", "fixed", "under_investigation"]
|
||||
},
|
||||
"trustVector": {
|
||||
"$ref": "#/$defs/TrustVectorScores"
|
||||
},
|
||||
"baseTrust": {
|
||||
"type": "number",
|
||||
"description": "Computed base trust from trust vector",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"claimStrength": {
|
||||
"$ref": "#/$defs/ClaimStrength"
|
||||
},
|
||||
"strengthMultiplier": {
|
||||
"type": "number",
|
||||
"description": "Strength multiplier (M) based on evidence quality",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"freshnessMultiplier": {
|
||||
"type": "number",
|
||||
"description": "Freshness decay multiplier (F)",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"freshnessDetails": {
|
||||
"$ref": "#/$defs/FreshnessDetails"
|
||||
},
|
||||
"claimScore": {
|
||||
"type": "number",
|
||||
"description": "Final claim score = BaseTrust * M * F",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"scopeSpecificity": {
|
||||
"type": "integer",
|
||||
"description": "Scope specificity level (higher = more specific)",
|
||||
"minimum": 0
|
||||
},
|
||||
"issuedAt": {
|
||||
"type": "string",
|
||||
"description": "When the VEX claim was issued",
|
||||
"format": "date-time"
|
||||
},
|
||||
"evaluatedAt": {
|
||||
"type": "string",
|
||||
"description": "When the score was computed",
|
||||
"format": "date-time"
|
||||
}
|
||||
},
|
||||
"$defs": {
|
||||
"TrustVectorScores": {
|
||||
"type": "object",
|
||||
"description": "Trust vector component scores",
|
||||
"properties": {
|
||||
"provenance": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"coverage": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"replayability": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
}
|
||||
}
|
||||
},
|
||||
"ClaimStrength": {
|
||||
"type": "object",
|
||||
"description": "Claim strength evidence classification",
|
||||
"properties": {
|
||||
"level": {
|
||||
"type": "string",
|
||||
"description": "Strength level",
|
||||
"enum": [
|
||||
"exploitability_with_reachability",
|
||||
"config_with_evidence",
|
||||
"vendor_blanket",
|
||||
"under_investigation"
|
||||
]
|
||||
},
|
||||
"multiplier": {
|
||||
"type": "number",
|
||||
"description": "Corresponding multiplier value",
|
||||
"enum": [1.00, 0.80, 0.60, 0.40]
|
||||
}
|
||||
}
|
||||
},
|
||||
"FreshnessDetails": {
|
||||
"type": "object",
|
||||
"description": "Freshness decay calculation details",
|
||||
"properties": {
|
||||
"ageDays": {
|
||||
"type": "number",
|
||||
"description": "Age of the claim in days"
|
||||
},
|
||||
"halfLifeDays": {
|
||||
"type": "number",
|
||||
"description": "Half-life used for decay calculation",
|
||||
"default": 90
|
||||
},
|
||||
"floor": {
|
||||
"type": "number",
|
||||
"description": "Minimum freshness value",
|
||||
"default": 0.35
|
||||
},
|
||||
"decayValue": {
|
||||
"type": "number",
|
||||
"description": "Computed decay value before floor application",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
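
A worked sketch of the formula in the description, `ClaimScore = BaseTrust * M * F`. The weights 0.45/0.35/0.20 are the defaults from the trust-vector schema, and the exponential form of the freshness decay is an assumption; only the 90-day half-life and the 0.35 floor are stated here:

```bash
# Illustrative only: ClaimScore = BaseTrust * M * F with an assumed exponential half-life decay.
awk -v P=0.90 -v C=0.80 -v R=0.70 -v M=0.80 -v ageDays=120 -v halfLife=90 -v fl=0.35 'BEGIN {
  base  = 0.45*P + 0.35*C + 0.20*R           # default weights wP/wC/wR
  decay = exp(-log(2) * ageDays / halfLife)  # assumed decay shape; M=0.80 ~ config_with_evidence
  F = (decay < fl) ? fl : decay              # apply freshness floor
  printf "baseTrust=%.3f F=%.3f claimScore=%.3f\n", base, F, base * M * F
}'
```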
|
||||
84
docs/modules/attestor/schemas/trust-vector.schema.json
Normal file
@@ -0,0 +1,84 @@
|
||||
{
|
||||
"$schema": "https://json-schema.org/draft/2020-12/schema",
|
||||
"$id": "https://stella-ops.org/schemas/trust-vector/1.0.0",
|
||||
"title": "Trust Vector",
|
||||
"description": "3-component trust vector for VEX sources (Provenance, Coverage, Replayability)",
|
||||
"type": "object",
|
||||
"required": ["provenance", "coverage", "replayability"],
|
||||
"properties": {
|
||||
"sourceId": {
|
||||
"type": "string",
|
||||
"description": "Identifier of the VEX source"
|
||||
},
|
||||
"sourceClass": {
|
||||
"type": "string",
|
||||
"description": "Classification of the source",
|
||||
"enum": ["vendor", "distro", "internal", "hub", "attestation"]
|
||||
},
|
||||
"provenance": {
|
||||
"type": "number",
|
||||
"description": "Cryptographic and process integrity score [0..1]",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"coverage": {
|
||||
"type": "number",
|
||||
"description": "Scope match precision score [0..1]",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"replayability": {
|
||||
"type": "number",
|
||||
"description": "Determinism and input pinning score [0..1]",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"weights": {
|
||||
"$ref": "#/$defs/TrustWeights"
|
||||
},
|
||||
"baseTrust": {
|
||||
"type": "number",
|
||||
"description": "Computed base trust: wP*P + wC*C + wR*R",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"computedAt": {
|
||||
"type": "string",
|
||||
"description": "Timestamp when this vector was computed",
|
||||
"format": "date-time"
|
||||
},
|
||||
"version": {
|
||||
"type": "string",
|
||||
"description": "Version of the trust vector configuration"
|
||||
}
|
||||
},
|
||||
"$defs": {
|
||||
"TrustWeights": {
|
||||
"type": "object",
|
||||
"description": "Weights for trust vector components",
|
||||
"properties": {
|
||||
"provenance": {
|
||||
"type": "number",
|
||||
"description": "Weight for provenance component (wP)",
|
||||
"minimum": 0,
|
||||
"maximum": 1,
|
||||
"default": 0.45
|
||||
},
|
||||
"coverage": {
|
||||
"type": "number",
|
||||
"description": "Weight for coverage component (wC)",
|
||||
"minimum": 0,
|
||||
"maximum": 1,
|
||||
"default": 0.35
|
||||
},
|
||||
"replayability": {
|
||||
"type": "number",
|
||||
"description": "Weight for replayability component (wR)",
|
||||
"minimum": 0,
|
||||
"maximum": 1,
|
||||
"default": 0.20
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
194
docs/modules/attestor/schemas/verdict-manifest.schema.json
Normal file
@@ -0,0 +1,194 @@
|
||||
{
|
||||
"$schema": "https://json-schema.org/draft/2020-12/schema",
|
||||
"$id": "https://stella-ops.org/schemas/verdict-manifest/1.0.0",
|
||||
"title": "Verdict Manifest",
|
||||
"description": "A signed, immutable record of a VEX decisioning outcome that enables deterministic replay and audit compliance.",
|
||||
"type": "object",
|
||||
"required": [
|
||||
"manifestId",
|
||||
"tenant",
|
||||
"assetDigest",
|
||||
"vulnerabilityId",
|
||||
"inputs",
|
||||
"result",
|
||||
"policyHash",
|
||||
"latticeVersion",
|
||||
"evaluatedAt",
|
||||
"manifestDigest"
|
||||
],
|
||||
"properties": {
|
||||
"manifestId": {
|
||||
"type": "string",
|
||||
"description": "Unique identifier in format: verd:{tenant}:{asset_short}:{vuln_id}:{timestamp}",
|
||||
"pattern": "^verd:[a-z0-9-]+:[a-f0-9]+:[A-Z0-9-]+:[0-9]+$"
|
||||
},
|
||||
"tenant": {
|
||||
"type": "string",
|
||||
"description": "Tenant identifier for multi-tenancy",
|
||||
"minLength": 1
|
||||
},
|
||||
"assetDigest": {
|
||||
"type": "string",
|
||||
"description": "SHA256 digest of the asset/SBOM",
|
||||
"pattern": "^sha256:[a-f0-9]{64}$"
|
||||
},
|
||||
"vulnerabilityId": {
|
||||
"type": "string",
|
||||
"description": "CVE, GHSA, or vendor vulnerability identifier",
|
||||
"minLength": 1
|
||||
},
|
||||
"inputs": {
|
||||
"$ref": "#/$defs/VerdictInputs"
|
||||
},
|
||||
"result": {
|
||||
"$ref": "#/$defs/VerdictResult"
|
||||
},
|
||||
"policyHash": {
|
||||
"type": "string",
|
||||
"description": "SHA256 hash of the policy configuration",
|
||||
"pattern": "^sha256:[a-f0-9]{64}$"
|
||||
},
|
||||
"latticeVersion": {
|
||||
"type": "string",
|
||||
"description": "Semantic version of the trust lattice algorithm",
|
||||
"pattern": "^[0-9]+\\.[0-9]+\\.[0-9]+$"
|
||||
},
|
||||
"evaluatedAt": {
|
||||
"type": "string",
|
||||
"description": "ISO 8601 UTC timestamp of evaluation",
|
||||
"format": "date-time"
|
||||
},
|
||||
"manifestDigest": {
|
||||
"type": "string",
|
||||
"description": "SHA256 digest of the canonical manifest (excluding this field)",
|
||||
"pattern": "^sha256:[a-f0-9]{64}$"
|
||||
}
|
||||
},
|
||||
"$defs": {
|
||||
"VerdictInputs": {
|
||||
"type": "object",
|
||||
"description": "All inputs pinned for deterministic replay",
|
||||
"required": ["sbomDigests", "vulnFeedSnapshotIds", "vexDocumentDigests", "clockCutoff"],
|
||||
"properties": {
|
||||
"sbomDigests": {
|
||||
"type": "array",
|
||||
"description": "SHA256 digests of SBOM documents used",
|
||||
"items": {
|
||||
"type": "string",
|
||||
"pattern": "^sha256:[a-f0-9]{64}$"
|
||||
}
|
||||
},
|
||||
"vulnFeedSnapshotIds": {
|
||||
"type": "array",
|
||||
"description": "Identifiers for vulnerability feed snapshots",
|
||||
"items": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"vexDocumentDigests": {
|
||||
"type": "array",
|
||||
"description": "SHA256 digests of VEX documents considered",
|
||||
"items": {
|
||||
"type": "string",
|
||||
"pattern": "^sha256:[a-f0-9]{64}$"
|
||||
}
|
||||
},
|
||||
"reachabilityGraphIds": {
|
||||
"type": "array",
|
||||
"description": "Identifiers for call graph snapshots",
|
||||
"items": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"clockCutoff": {
|
||||
"type": "string",
|
||||
"description": "Timestamp used for freshness calculations",
|
||||
"format": "date-time"
|
||||
}
|
||||
}
|
||||
},
|
||||
"VerdictResult": {
|
||||
"type": "object",
|
||||
"description": "The verdict and explanation",
|
||||
"required": ["status", "confidence", "explanations"],
|
||||
"properties": {
|
||||
"status": {
|
||||
"type": "string",
|
||||
"description": "Final verdict status",
|
||||
"enum": ["affected", "not_affected", "fixed", "under_investigation"]
|
||||
},
|
||||
"confidence": {
|
||||
"type": "number",
|
||||
"description": "Numeric confidence score",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"explanations": {
|
||||
"type": "array",
|
||||
"description": "Per-source breakdown of scoring",
|
||||
"items": {
|
||||
"$ref": "#/$defs/VerdictExplanation"
|
||||
}
|
||||
},
|
||||
"evidenceRefs": {
|
||||
"type": "array",
|
||||
"description": "Links to attestations and proof bundles",
|
||||
"items": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"VerdictExplanation": {
|
||||
"type": "object",
|
||||
"description": "Explanation of how a source contributed to the verdict",
|
||||
"required": ["sourceId", "reason", "claimScore"],
|
||||
"properties": {
|
||||
"sourceId": {
|
||||
"type": "string",
|
||||
"description": "Identifier of the VEX source"
|
||||
},
|
||||
"reason": {
|
||||
"type": "string",
|
||||
"description": "Human-readable explanation"
|
||||
},
|
||||
"provenanceScore": {
|
||||
"type": "number",
|
||||
"description": "Provenance component of trust vector",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"coverageScore": {
|
||||
"type": "number",
|
||||
"description": "Coverage component of trust vector",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"replayabilityScore": {
|
||||
"type": "number",
|
||||
"description": "Replayability component of trust vector",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"strengthMultiplier": {
|
||||
"type": "number",
|
||||
"description": "Claim strength multiplier (M)",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"freshnessMultiplier": {
|
||||
"type": "number",
|
||||
"description": "Freshness decay multiplier (F)",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
},
|
||||
"claimScore": {
|
||||
"type": "number",
|
||||
"description": "Final claim score = BaseTrust * M * F",
|
||||
"minimum": 0,
|
||||
"maximum": 1
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
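
A hedged sketch of how `manifestDigest` could be recomputed during verification; the exact canonicalization (key ordering, whitespace, trailing newline) is not specified by this schema, so `jq -S -c` and the newline strip below are assumptions:

```bash
# Sketch only: recompute the digest over the manifest minus its own digest field.
jq -S -c 'del(.manifestDigest)' verdict-manifest.json | tr -d '\n' | sha256sum
```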
|
||||
310
docs/modules/authority/guides
Normal file
@@ -0,0 +1,310 @@
|
||||
# Identity Constraints for Keyless Verification
|
||||
|
||||
## Overview
|
||||
|
||||
Keyless signing binds cryptographic signatures to OIDC identities. When verifying signatures, you must specify which identities are trusted. This document covers identity constraint patterns for all supported CI/CD platforms.
|
||||
|
||||
## Core Concepts
|
||||
|
||||
### Certificate Identity
|
||||
|
||||
The certificate identity is the subject claim from the OIDC token, embedded in the Fulcio certificate. It identifies:
|
||||
|
||||
- **Who** created the signature (repository, branch, workflow)
|
||||
- **When** the signature was created (within the certificate validity window)
|
||||
- **Where** the signing happened (CI platform, environment)
|
||||
|
||||
### OIDC Issuer
|
||||
|
||||
The OIDC issuer is the URL of the identity provider that issued the token. Each CI platform has its own issuer:
|
||||
|
||||
| Platform | Issuer URL |
|
||||
|----------|------------|
|
||||
| GitHub Actions | `https://token.actions.githubusercontent.com` |
|
||||
| GitLab CI (SaaS) | `https://gitlab.com` |
|
||||
| GitLab CI (Self-hosted) | `https://your-gitlab-instance.com` |
|
||||
| Gitea | `https://your-gitea-instance.com` |
|
||||
|
||||
### Verification Flow
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────────┐
|
||||
│ Verification Process │
|
||||
├─────────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ 1. Extract certificate from attestation │
|
||||
│ └─▶ Contains: subject, issuer, SAN, validity period │
|
||||
│ │
|
||||
│ 2. Validate certificate chain │
|
||||
│ └─▶ Chains to trusted Fulcio root │
|
||||
│ │
|
||||
│ 3. Check OIDC issuer │
|
||||
│ └─▶ Must match --certificate-oidc-issuer │
|
||||
│ │
|
||||
│ 4. Check certificate identity │
|
||||
│ └─▶ Subject must match --certificate-identity pattern │
|
||||
│ │
|
||||
│ 5. Verify Rekor inclusion (if required) │
|
||||
│ └─▶ Signature logged during certificate validity │
|
||||
│ │
|
||||
│ 6. Verify signature │
|
||||
│ └─▶ Signature valid for artifact digest │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Platform-Specific Patterns
|
||||
|
||||
### GitHub Actions
|
||||
|
||||
GitHub Actions OIDC tokens include rich context about the workflow execution.
|
||||
|
||||
#### Token Claims
|
||||
|
||||
| Claim | Description | Example |
|
||||
|-------|-------------|---------|
|
||||
| `sub` | Subject (identity) | `repo:org/repo:ref:refs/heads/main` |
|
||||
| `repository` | Full repository name | `org/repo` |
|
||||
| `repository_owner` | Organization/user | `org` |
|
||||
| `ref` | Git ref | `refs/heads/main` |
|
||||
| `ref_type` | Ref type | `branch` or `tag` |
|
||||
| `job_workflow_ref` | Workflow file | `.github/workflows/release.yml@refs/heads/main` |
|
||||
| `environment` | Deployment environment | `production` |
|
||||
|
||||
#### Identity Patterns
|
||||
|
||||
| Constraint | Pattern | Example |
|
||||
|------------|---------|---------|
|
||||
| Any ref | `repo:<owner>/<repo>:.*` | `repo:stellaops/scanner:.*` |
|
||||
| Main branch | `repo:<owner>/<repo>:ref:refs/heads/main` | `repo:stellaops/scanner:ref:refs/heads/main` |
|
||||
| Any branch | `repo:<owner>/<repo>:ref:refs/heads/.*` | `repo:stellaops/scanner:ref:refs/heads/.*` |
|
||||
| Version tags | `repo:<owner>/<repo>:ref:refs/tags/v.*` | `repo:stellaops/scanner:ref:refs/tags/v.*` |
|
||||
| Environment | `repo:<owner>/<repo>:environment:<env>` | `repo:stellaops/scanner:environment:production` |
|
||||
| Workflow | (use SAN) | N/A |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Accept only main branch
|
||||
stella attest verify \
|
||||
--artifact sha256:abc123... \
|
||||
--certificate-identity "repo:stellaops/scanner:ref:refs/heads/main" \
|
||||
--certificate-oidc-issuer "https://token.actions.githubusercontent.com"
|
||||
|
||||
# Accept main or release branches
|
||||
stella attest verify \
|
||||
--artifact sha256:abc123... \
|
||||
--certificate-identity "repo:stellaops/scanner:ref:refs/heads/(main|release/.*)" \
|
||||
--certificate-oidc-issuer "https://token.actions.githubusercontent.com"
|
||||
|
||||
# Accept any version tag
|
||||
stella attest verify \
|
||||
--artifact sha256:abc123... \
|
||||
--certificate-identity "repo:stellaops/scanner:ref:refs/tags/v[0-9]+\.[0-9]+\.[0-9]+.*" \
|
||||
--certificate-oidc-issuer "https://token.actions.githubusercontent.com"
|
||||
|
||||
# Accept production environment only
|
||||
stella attest verify \
|
||||
--artifact sha256:abc123... \
|
||||
--certificate-identity "repo:stellaops/scanner:environment:production" \
|
||||
--certificate-oidc-issuer "https://token.actions.githubusercontent.com"
|
||||
```
|
||||
|
||||
### GitLab CI
|
||||
|
||||
GitLab CI provides OIDC tokens with project and pipeline context.
|
||||
|
||||
#### Token Claims
|
||||
|
||||
| Claim | Description | Example |
|
||||
|-------|-------------|---------|
|
||||
| `sub` | Subject | `project_path:group/project:ref_type:branch:ref:main` |
|
||||
| `project_path` | Full project path | `stellaops/scanner` |
|
||||
| `namespace_path` | Namespace | `stellaops` |
|
||||
| `ref` | Git ref | `main` |
|
||||
| `ref_type` | Ref type | `branch` or `tag` |
|
||||
| `ref_protected` | Protected ref | `true` or `false` |
|
||||
| `environment` | Environment name | `production` |
|
||||
| `pipeline_source` | Trigger source | `push`, `web`, `schedule` |
|
||||
|
||||
#### Identity Patterns
|
||||
|
||||
| Constraint | Pattern | Example |
|
||||
|------------|---------|---------|
|
||||
| Any ref | `project_path:<group>/<project>:.*` | `project_path:stellaops/scanner:.*` |
|
||||
| Main branch | `project_path:<group>/<project>:ref_type:branch:ref:main` | Full pattern |
|
||||
| Protected refs | `project_path:<group>/<project>:ref_protected:true` | Full pattern |
|
||||
| Tags | `project_path:<group>/<project>:ref_type:tag:ref:.*` | Full pattern |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Accept main branch only
|
||||
stella attest verify \
|
||||
--artifact sha256:abc123... \
|
||||
--certificate-identity "project_path:stellaops/scanner:ref_type:branch:ref:main" \
|
||||
--certificate-oidc-issuer "https://gitlab.com"
|
||||
|
||||
# Accept any protected ref
|
||||
stella attest verify \
|
||||
--artifact sha256:abc123... \
|
||||
--certificate-identity "project_path:stellaops/scanner:ref_protected:true.*" \
|
||||
--certificate-oidc-issuer "https://gitlab.com"
|
||||
|
||||
# Self-hosted GitLab
|
||||
stella attest verify \
|
||||
--artifact sha256:abc123... \
|
||||
--certificate-identity "project_path:mygroup/myproject:.*" \
|
||||
--certificate-oidc-issuer "https://gitlab.internal.example.com"
|
||||
```
|
||||
|
||||
### Gitea
|
||||
|
||||
Gitea OIDC tokens follow a similar pattern to GitHub Actions.
|
||||
|
||||
#### Token Claims
|
||||
|
||||
| Claim | Description | Example |
|
||||
|-------|-------------|---------|
|
||||
| `sub` | Subject | `org/repo:ref:refs/heads/main` |
|
||||
| `repository` | Repository path | `org/repo` |
|
||||
| `ref` | Git ref | `refs/heads/main` |
|
||||
|
||||
#### Identity Patterns
|
||||
|
||||
| Constraint | Pattern | Example |
|
||||
|------------|---------|---------|
|
||||
| Any ref | `<org>/<repo>:.*` | `stellaops/scanner:.*` |
|
||||
| Main branch | `<org>/<repo>:ref:refs/heads/main` | `stellaops/scanner:ref:refs/heads/main` |
|
||||
| Tags | `<org>/<repo>:ref:refs/tags/.*` | `stellaops/scanner:ref:refs/tags/.*` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Accept main branch
|
||||
stella attest verify \
|
||||
--artifact sha256:abc123... \
|
||||
--certificate-identity "stella-ops.org/git.stella-ops.org:ref:refs/heads/main" \
|
||||
--certificate-oidc-issuer "https://git.stella-ops.org"
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Security Recommendations
|
||||
|
||||
1. **Always Constrain to Repository**
|
||||
|
||||
Never accept wildcards that could match any repository:
|
||||
|
||||
```bash
|
||||
# BAD - accepts any repository
|
||||
--certificate-identity "repo:.*"
|
||||
|
||||
# GOOD - specific repository
|
||||
--certificate-identity "repo:stellaops/scanner:.*"
|
||||
```
|
||||
|
||||
2. **Prefer Branch/Tag Constraints for Production**
|
||||
|
||||
```bash
|
||||
# Better - only main branch
|
||||
--certificate-identity "repo:stellaops/scanner:ref:refs/heads/main"
|
||||
|
||||
# Even better - only signed tags
|
||||
--certificate-identity "repo:stellaops/scanner:ref:refs/tags/v.*"
|
||||
```
|
||||
|
||||
3. **Use Environment Constraints When Available**
|
||||
|
||||
```bash
|
||||
# Most specific - production environment only
|
||||
--certificate-identity "repo:stellaops/scanner:environment:production"
|
||||
```
|
||||
|
||||
4. **Always Require Rekor Proofs**
|
||||
|
||||
```bash
|
||||
# Always include --require-rekor for production
|
||||
stella attest verify \
|
||||
--artifact sha256:... \
|
||||
--certificate-identity "..." \
|
||||
--certificate-oidc-issuer "..." \
|
||||
--require-rekor
|
||||
```
|
||||
|
||||
5. **Pin Trusted Issuers**
|
||||
|
||||
Only trust expected OIDC issuers. Never accept `.*` for issuer.
|
||||
|
||||
### Common Patterns
|
||||
|
||||
#### Multi-Environment Trust
|
||||
|
||||
```yaml
|
||||
# GitHub Actions - Different constraints per environment
|
||||
staging:
|
||||
identity: "repo:myorg/myrepo:ref:refs/heads/.*"
|
||||
|
||||
production:
|
||||
identity: "repo:myorg/myrepo:ref:refs/(heads/main|tags/v.*)"
|
||||
```
|
||||
|
||||
#### Cross-Repository Trust
|
||||
|
||||
```bash
|
||||
# Trust signatures from multiple repositories
|
||||
stella attest verify \
|
||||
--artifact sha256:... \
|
||||
--certificate-identity "repo:myorg/(repo1|repo2|repo3):ref:refs/heads/main" \
|
||||
--certificate-oidc-issuer "https://token.actions.githubusercontent.com"
|
||||
```
|
||||
|
||||
#### Organization-Wide Trust
|
||||
|
||||
```bash
|
||||
# Trust any repository in organization (use with caution)
|
||||
stella attest verify \
|
||||
--artifact sha256:... \
|
||||
--certificate-identity "repo:myorg/.*:ref:refs/heads/main" \
|
||||
--certificate-oidc-issuer "https://token.actions.githubusercontent.com"
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Errors
|
||||
|
||||
| Error | Cause | Solution |
|
||||
|-------|-------|----------|
|
||||
| `identity mismatch` | Pattern doesn't match certificate subject | Check ref format (refs/heads/ vs branch name) |
|
||||
| `issuer mismatch` | Wrong OIDC issuer URL | Use correct issuer for platform |
|
||||
| `certificate expired` | Signing cert expired, no Rekor proof | Ensure `--require-rekor` and Rekor was used at signing |
|
||||
| `no attestations found` | Attestation not attached to artifact | Verify attestation was pushed to registry |
|
||||
|
||||
### Debugging Identity Patterns
|
||||
|
||||
```bash
|
||||
# Inspect certificate to see actual identity
|
||||
stella attest inspect \
|
||||
--artifact sha256:... \
|
||||
--show-cert
|
||||
|
||||
# Expected output:
|
||||
# Certificate Subject: repo:stellaops/scanner:ref:refs/heads/main
|
||||
# Certificate Issuer: https://token.actions.githubusercontent.com
|
||||
# Certificate SAN: https://github.com/stellaops/scanner/.github/workflows/release.yml@refs/heads/main
|
||||
```
|
||||
|
||||
### Testing Patterns
|
||||
|
||||
```bash
|
||||
# Test pattern matching locally
|
||||
echo "repo:myorg/myrepo:ref:refs/heads/main" | \
|
||||
grep -E "repo:myorg/myrepo:ref:refs/heads/(main|develop)"
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Keyless Signing Guide](../modules/signer/guides/keyless-signing.md)
|
||||
- [GitHub Actions Templates](../../.github/workflows/examples/)
|
||||
- [GitLab CI Templates](../../deploy/gitlab/examples/)
|
||||
- [Sigstore Documentation](https://docs.sigstore.dev/)
|
||||
@@ -80,12 +80,12 @@
|
||||
docker compose up -d
|
||||
curl -fsS http://localhost:8080/health
|
||||
```
|
||||
6. **Validate JWKS and tokens:** call `/jwks` and issue a short-lived token via the CLI to confirm key material matches expectations. If the restored environment requires a fresh signing key, follow the rotation SOP in [`docs/11_AUTHORITY.md`](../../../11_AUTHORITY.md) using `ops/authority/key-rotation.sh` to invoke `/internal/signing/rotate`.
|
||||
6. **Validate JWKS and tokens:** call `/jwks` and issue a short-lived token via the CLI to confirm key material matches expectations. If the restored environment requires a fresh signing key, follow the rotation SOP in [`docs/AUTHORITY.md`](../../../AUTHORITY.md) using `ops/authority/key-rotation.sh` to invoke `/internal/signing/rotate`.
|
||||
|
||||
## Disaster Recovery Notes
|
||||
- **Air-gapped replication:** replicate archives via the Offline Update Kit transport channels; never attach USB devices without scanning.
|
||||
- **Retention:** maintain 30 daily snapshots + 12 monthly archival copies. Rotate encryption keys annually.
|
||||
- **Key compromise:** if signing keys are suspected compromised, restore from the latest clean backup, rotate via OPS3 (see `ops/authority/key-rotation.sh` and [`docs/11_AUTHORITY.md`](../../../11_AUTHORITY.md)), and publish a revocation notice.
|
||||
- **Key compromise:** if signing keys are suspected compromised, restore from the latest clean backup, rotate via OPS3 (see `ops/authority/key-rotation.sh` and [`docs/AUTHORITY.md`](../../../AUTHORITY.md)), and publish a revocation notice.
|
||||
- **PostgreSQL version:** keep dump/restore images pinned to the deployment version (compose uses `postgres:16`). Npgsql 8.x requires PostgreSQL **12+**—clusters still on older versions must be upgraded before restore.
|
||||
|
||||
## Verification Checklist
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
# Authority Signing Key Rotation Playbook
|
||||
|
||||
> **Status:** Authored 2025-10-12 as part of OPS3.KEY-ROTATION rollout.
|
||||
> Use together with `docs/11_AUTHORITY.md` (Authority service guide) and the automation shipped under `ops/authority/`.
|
||||
> Use together with `docs/AUTHORITY.md` (Authority service guide) and the automation shipped under `ops/authority/`.
|
||||
|
||||
## 1. Overview
|
||||
|
||||
@@ -78,7 +78,7 @@ Treat these as examples; real environments must maintain their own PEM material.
|
||||
|
||||
## 6. References
|
||||
|
||||
- `docs/11_AUTHORITY.md` – Architecture and rotation SOP (Section 5).
|
||||
- `docs/AUTHORITY.md` – Architecture and rotation SOP (Section 5).
|
||||
- `docs/modules/authority/operations/backup-restore.md` – Recovery flow referencing this playbook.
|
||||
- `ops/authority/README.md` – CLI usage and examples.
|
||||
- `scripts/rotate-policy-cli-secret.sh` – Helper to mint new `policy-cli` shared secrets when policy scope bundles change.
|
||||
|
||||
@@ -398,7 +398,7 @@ stella benchmark verify <CLAIM_ID>
|
||||
stella benchmark claims --output docs/claims-index.md
|
||||
|
||||
# Generate marketing battlecard
|
||||
stella benchmark battlecard --output docs/marketing/battlecard.md
|
||||
stella benchmark battlecard --output docs/product/battlecard.md
|
||||
|
||||
# Show comparison summary
|
||||
stella benchmark summary --format table|json|markdown
|
||||
|
||||
10000
docs/modules/binary-index/samples/products-10k.ndjson
Normal file
File diff suppressed because it is too large
@@ -0,0 +1 @@
|
||||
caa79c83b5a9affc3b9cc4e54a516281ddceff4804ce853fee3b62d7afb7ab69 products-10k.ndjson
|
||||
@@ -307,7 +307,7 @@ Policy Engine v2 pipelines now fail fast if policy documents are malformed. Afte
|
||||
dotnet run \
|
||||
--project src/Tools/PolicyDslValidator/PolicyDslValidator.csproj \
|
||||
-- \
|
||||
--strict docs/examples/policies/*.yaml
|
||||
--strict docs/samples/policy/*.yaml
|
||||
```
|
||||
|
||||
- `--strict` treats warnings as errors so missing metadata doesn’t slip through.
|
||||
|
||||
460
docs/modules/cli/guides/admin/admin-reference.md
Normal file
@@ -0,0 +1,460 @@
|
||||
# stella admin - Administrative Operations Reference
|
||||
|
||||
**Sprint:** SPRINT_4100_0006_0005 - Admin Utility Integration
|
||||
|
||||
## Overview
|
||||
|
||||
The `stella admin` command group provides administrative operations for platform management. These commands require elevated authentication and are used for policy management, user administration, feed configuration, and system maintenance.
|
||||
|
||||
## Authentication
|
||||
|
||||
Admin commands require one of the following authentication methods:
|
||||
|
||||
1. **OpTok with admin scopes** (recommended for production):
|
||||
```bash
|
||||
stella auth login
|
||||
# Obtain OpTok with admin.* scopes
|
||||
stella admin policy export
|
||||
```
|
||||
|
||||
2. **Bootstrap API key** (for initial setup, before Authority is configured):
|
||||
```bash
|
||||
export STELLAOPS_BOOTSTRAP_KEY="bootstrap-key-from-backend-config"
|
||||
stella admin users add admin@example.com --role admin
|
||||
```
|
||||
|
||||
### Required Scopes
|
||||
|
||||
| Command Group | Required Scope | Purpose |
|
||||
|---------------|----------------|---------|
|
||||
| `stella admin policy` | `admin.policy` | Policy management operations |
|
||||
| `stella admin users` | `admin.users` | User administration |
|
||||
| `stella admin feeds` | `admin.feeds` | Feed management |
|
||||
| `stella admin system` | `admin.platform` | System operations |
|
||||
|
||||
## Command Reference
|
||||
|
||||
### stella admin policy
|
||||
|
||||
Policy management commands for exporting, importing, and validating platform policies.
|
||||
|
||||
#### stella admin policy export
|
||||
|
||||
Export the active policy snapshot to a file or stdout.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin policy export [--output <path>] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `-o, --output <path>` - Output file path (stdout if omitted)
|
||||
- `-v, --verbose` - Enable verbose output
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Export to stdout
|
||||
stella admin policy export
|
||||
|
||||
# Export to file
|
||||
stella admin policy export --output policy-backup.yaml
|
||||
|
||||
# Export with timestamp
|
||||
stella admin policy export --output backup-$(date +%F).yaml
|
||||
```
|
||||
|
||||
#### stella admin policy import
|
||||
|
||||
Import policy from a YAML or JSON file.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin policy import --file <path> [--validate-only] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `-f, --file <path>` - Policy file to import (required)
|
||||
- `--validate-only` - Validate without importing
|
||||
- `-v, --verbose` - Enable verbose output
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Validate policy before importing
|
||||
stella admin policy import --file new-policy.yaml --validate-only
|
||||
|
||||
# Import policy
|
||||
stella admin policy import --file new-policy.yaml
|
||||
```
|
||||
|
||||
#### stella admin policy validate
|
||||
|
||||
Validate a policy file without importing.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin policy validate --file <path> [--verbose]
|
||||
```
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
stella admin policy validate --file policy.yaml
|
||||
```
|
||||
|
||||
#### stella admin policy list
|
||||
|
||||
List all policy revisions.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin policy list [--format <format>] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `--format <format>` - Output format: `table` (default), `json`
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# List as table
|
||||
stella admin policy list
|
||||
|
||||
# List as JSON
|
||||
stella admin policy list --format json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella admin users
|
||||
|
||||
User management commands for adding, removing, and updating users.
|
||||
|
||||
#### stella admin users list
|
||||
|
||||
List platform users.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin users list [--role <role>] [--format <format>] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `--role <role>` - Filter by role
|
||||
- `--format <format>` - Output format: `table` (default), `json`
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# List all users
|
||||
stella admin users list
|
||||
|
||||
# List all admins
|
||||
stella admin users list --role admin
|
||||
|
||||
# List as JSON
|
||||
stella admin users list --format json
|
||||
```
|
||||
|
||||
#### stella admin users add
|
||||
|
||||
Add a new user to the platform.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin users add <email> --role <role> [--tenant <id>] [--verbose]
|
||||
```
|
||||
|
||||
**Arguments:**
|
||||
- `<email>` - User email address
|
||||
|
||||
**Options:**
|
||||
- `-r, --role <role>` - User role (required)
|
||||
- `-t, --tenant <id>` - Tenant ID (default if omitted)
|
||||
|
||||
**Available Roles:**
|
||||
- `admin` - Full platform access
|
||||
- `security-engineer` - Security operations
|
||||
- `developer` - Development access
|
||||
- `viewer` - Read-only access
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Add admin user
|
||||
stella admin users add admin@example.com --role admin
|
||||
|
||||
# Add security engineer for specific tenant
|
||||
stella admin users add alice@example.com --role security-engineer --tenant acme-corp
|
||||
```
|
||||
|
||||
#### stella admin users revoke
|
||||
|
||||
Revoke user access.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin users revoke <email> [--confirm] [--verbose]
|
||||
```
|
||||
|
||||
**Arguments:**
|
||||
- `<email>` - User email address
|
||||
|
||||
**Options:**
|
||||
- `--confirm` - Confirm revocation (required for safety)
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Revoke user (requires --confirm)
|
||||
stella admin users revoke bob@example.com --confirm
|
||||
```
|
||||
|
||||
**Note:** The `--confirm` flag is required to prevent accidental user removal.
|
||||
|
||||
#### stella admin users update
|
||||
|
||||
Update user role.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin users update <email> --role <role> [--verbose]
|
||||
```
|
||||
|
||||
**Arguments:**
|
||||
- `<email>` - User email address
|
||||
|
||||
**Options:**
|
||||
- `-r, --role <role>` - New user role (required)
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Promote user to admin
|
||||
stella admin users update alice@example.com --role admin
|
||||
|
||||
# Change to viewer role
|
||||
stella admin users update bob@example.com --role viewer
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella admin feeds
|
||||
|
||||
Advisory feed management commands.
|
||||
|
||||
#### stella admin feeds list
|
||||
|
||||
List configured advisory feeds.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin feeds list [--format <format>] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `--format <format>` - Output format: `table` (default), `json`
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# List feeds as table
|
||||
stella admin feeds list
|
||||
|
||||
# List feeds as JSON
|
||||
stella admin feeds list --format json
|
||||
```
|
||||
|
||||
#### stella admin feeds status
|
||||
|
||||
Show feed synchronization status.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin feeds status [--source <id>] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `-s, --source <id>` - Filter by source ID (all if omitted)
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Show status for all feeds
|
||||
stella admin feeds status
|
||||
|
||||
# Show status for specific feed
|
||||
stella admin feeds status --source nvd
|
||||
```
|
||||
|
||||
#### stella admin feeds refresh
|
||||
|
||||
Trigger feed refresh.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin feeds refresh [--source <id>] [--force] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `-s, --source <id>` - Refresh specific source (all if omitted)
|
||||
- `--force` - Force refresh (ignore cache)
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Refresh all feeds
|
||||
stella admin feeds refresh
|
||||
|
||||
# Force refresh specific feed
|
||||
stella admin feeds refresh --source nvd --force
|
||||
|
||||
# Refresh OSV feed
|
||||
stella admin feeds refresh --source osv
|
||||
```
|
||||
|
||||
#### stella admin feeds history
|
||||
|
||||
Show feed synchronization history.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin feeds history --source <id> [--limit <n>] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `-s, --source <id>` - Source ID (required)
|
||||
- `-n, --limit <n>` - Limit number of results (default: 10)
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Show last 10 syncs for NVD
|
||||
stella admin feeds history --source nvd
|
||||
|
||||
# Show last 50 syncs for OSV
|
||||
stella admin feeds history --source osv --limit 50
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella admin system
|
||||
|
||||
System management and health commands.
|
||||
|
||||
#### stella admin system status
|
||||
|
||||
Show system health status.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin system status [--format <format>] [--verbose]
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `--format <format>` - Output format: `table` (default), `json`
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
# Show status as table
|
||||
stella admin system status
|
||||
|
||||
# Show status as JSON
|
||||
stella admin system status --format json
|
||||
```
|
||||
|
||||
#### stella admin system info
|
||||
|
||||
Show system version, build, and configuration information.
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
stella admin system info [--verbose]
|
||||
```
|
||||
|
||||
**Examples:**
|
||||
```bash
|
||||
stella admin system info
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
Admin commands can be configured via `appsettings.admin.yaml`:
|
||||
|
||||
```yaml
|
||||
StellaOps:
|
||||
Backend:
|
||||
BaseUrl: "https://api.stellaops.example.com"
|
||||
Auth:
|
||||
OpTok:
|
||||
Enabled: true
|
||||
|
||||
Admin:
|
||||
DefaultTenant: "default"
|
||||
RequireConfirmation: true
|
||||
AuditLog:
|
||||
Enabled: true
|
||||
OutputPath: "~/.stellaops/admin-audit.jsonl"
|
||||
```
|
||||
|
||||
See `etc/appsettings.admin.yaml.example` for full configuration options.
|
||||
|
||||
## Backend API Endpoints
|
||||
|
||||
Admin commands call the following backend APIs:
|
||||
|
||||
| Endpoint | Method | Command |
|
||||
|----------|--------|---------|
|
||||
| `/api/v1/admin/policy/export` | GET | `stella admin policy export` |
|
||||
| `/api/v1/admin/policy/import` | POST | `stella admin policy import` |
|
||||
| `/api/v1/admin/policy/validate` | POST | `stella admin policy validate` |
|
||||
| `/api/v1/admin/policy/revisions` | GET | `stella admin policy list` |
|
||||
| `/api/v1/admin/users` | GET | `stella admin users list` |
|
||||
| `/api/v1/admin/users` | POST | `stella admin users add` |
|
||||
| `/api/v1/admin/users/{email}` | DELETE | `stella admin users revoke` |
|
||||
| `/api/v1/admin/users/{email}` | PATCH | `stella admin users update` |
|
||||
| `/api/v1/admin/feeds` | GET | `stella admin feeds list` |
|
||||
| `/api/v1/admin/feeds/status` | GET | `stella admin feeds status` |
|
||||
| `/api/v1/admin/feeds/{id}/refresh` | POST | `stella admin feeds refresh` |
|
||||
| `/api/v1/admin/feeds/{id}/history` | GET | `stella admin feeds history` |
|
||||
| `/api/v1/admin/system/status` | GET | `stella admin system status` |
|
||||
| `/api/v1/admin/system/info` | GET | `stella admin system info` |
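
For debugging, the same endpoints can be exercised directly. The example below is hypothetical: the bearer-token header and the `STELLAOPS_OPTOK` variable are assumptions, not a documented contract.

```bash
# Hypothetical direct call equivalent to `stella admin system status`
curl -fsS \
  -H "Authorization: Bearer ${STELLAOPS_OPTOK}" \
  "https://api.stellaops.example.com/api/v1/admin/system/status" | jq .
```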
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. **Authentication Required**: All admin commands require valid OpTok or bootstrap key
|
||||
2. **Scope Validation**: Backend validates admin.* scopes for all operations
|
||||
3. **Audit Logging**: All admin operations are logged to audit trail
|
||||
4. **Confirmation for Destructive Ops**: Commands like `revoke` require `--confirm` flag
|
||||
5. **Bootstrap Mode**: Bootstrap key should only be used for initial setup
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Authentication Errors
|
||||
|
||||
```
|
||||
HTTP 401: Unauthorized
|
||||
```
|
||||
|
||||
**Solution**: Ensure you have a valid OpTok with admin scopes:
|
||||
```bash
|
||||
stella auth login
|
||||
stella admin policy export
|
||||
```
|
||||
|
||||
### Missing Scopes
|
||||
|
||||
```
|
||||
HTTP 403: Forbidden - insufficient scopes
|
||||
```
|
||||
|
||||
**Solution**: Request an OpTok with the required `admin.*` scopes from your platform administrator.
|
||||
|
||||
### Backend API Not Available
|
||||
|
||||
```
|
||||
HTTP Error: Connection refused
|
||||
```
|
||||
|
||||
**Solution**: Verify backend URL in configuration:
|
||||
```bash
|
||||
export STELLAOPS_BACKEND__BASEURL="https://api.stellaops.example.com"
|
||||
stella admin system status
|
||||
```
|
||||
|
||||
## See Also
|
||||
|
||||
- [CLI Reference](../API_CLI_REFERENCE.md)
|
||||
- [Authority Documentation](../AUTHORITY.md)
|
||||
- [Operational Procedures](../operations/administration.md)
|
||||
@@ -60,4 +60,4 @@ Offline/air-gapped usage patterns for the Stella CLI.
|
||||
## Tips
|
||||
- Keep bundles on read-only media to avoid hash drift.
|
||||
- Use `--dry-run` to validate without writing to registries.
|
||||
- Pair with `docs/airgap/overview.md` and `docs/airgap/sealing-and-egress.md` for policy context.
|
||||
- Pair with `docs/modules/airgap/guides/overview.md` and `docs/modules/airgap/guides/sealing-and-egress.md` for policy context.
|
||||
|
||||
215
docs/modules/cli/guides/commands/audit-pack.md
Normal file
@@ -0,0 +1,215 @@
|
||||
# Audit Pack CLI Commands
|
||||
|
||||
## Overview
|
||||
|
||||
The `stella audit-pack` command exports, imports, verifies, and replays audit packs for compliance and verification workflows.
|
||||
|
||||
## Commands
|
||||
|
||||
### Export
|
||||
|
||||
Export an audit pack from a scan result.
|
||||
|
||||
```bash
|
||||
stella audit-pack export --scan-id <id> --output audit-pack.tar.gz
|
||||
|
||||
# With signing
|
||||
stella audit-pack export --scan-id <id> --sign --key signing-key.pem --output audit-pack.tar.gz
|
||||
|
||||
# Minimize size
|
||||
stella audit-pack export --scan-id <id> --minimize --output audit-pack.tar.gz
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `--scan-id <id>` - Scan ID to export
|
||||
- `--output <path>` - Output file path (tar.gz)
|
||||
- `--sign` - Sign the audit pack
|
||||
- `--key <path>` - Signing key path (required if --sign)
|
||||
- `--minimize` - Minimize bundle size (only required feeds/policies)
|
||||
- `--name <name>` - Custom pack name
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
stella audit-pack export \
|
||||
--scan-id abc123 \
|
||||
--sign \
|
||||
--key ~/.stella/keys/signing-key.pem \
|
||||
--output compliance-pack-2025-12.tar.gz
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Verify
|
||||
|
||||
Verify audit pack integrity and signatures.
|
||||
|
||||
```bash
|
||||
stella audit-pack verify audit-pack.tar.gz
|
||||
|
||||
# Skip signature verification
|
||||
stella audit-pack verify --no-verify-signatures audit-pack.tar.gz
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `--no-verify-signatures` - Skip signature verification
|
||||
- `--json` - Output results as JSON
|
||||
|
||||
**Output:**
|
||||
```
|
||||
✅ Audit Pack Verification
|
||||
Pack ID: abc-123-def-456
|
||||
Created: 2025-12-22T00:00:00Z
|
||||
Files: 42 (all digests valid)
|
||||
Signature: Valid (verified with trust root 'prod-ca')
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Info
|
||||
|
||||
Display information about an audit pack.
|
||||
|
||||
```bash
|
||||
stella audit-pack info audit-pack.tar.gz
|
||||
|
||||
# JSON output
|
||||
stella audit-pack info --json audit-pack.tar.gz
|
||||
```
|
||||
|
||||
**Output:**
|
||||
```
|
||||
Audit Pack Information
|
||||
Pack ID: abc-123-def-456
|
||||
Name: compliance-pack-2025-12
|
||||
Created: 2025-12-22T00:00:00Z
|
||||
Schema: 1.0.0
|
||||
|
||||
Contents:
|
||||
Run Manifest: included
|
||||
Verdict: included
|
||||
Evidence: included
|
||||
SBOMs: 2 (CycloneDX, SPDX)
|
||||
Attestations: 3
|
||||
VEX Docs: 1
|
||||
Trust Roots: 2
|
||||
|
||||
Bundle:
|
||||
Feeds: 4 (NVD, GHSA, Debian, Alpine)
|
||||
Policies: 2 (default, strict)
|
||||
Size: 42.5 MB
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Replay
|
||||
|
||||
Replay scan from audit pack and compare results.
|
||||
|
||||
```bash
|
||||
stella audit-pack replay audit-pack.tar.gz --output replay-result.json
|
||||
|
||||
# Show differences
|
||||
stella audit-pack replay audit-pack.tar.gz --show-diff
|
||||
```
|
||||
|
||||
**Options:**
|
||||
- `--output <path>` - Write replay results to file
|
||||
- `--show-diff` - Display verdict differences
|
||||
- `--json` - JSON output format
|
||||
|
||||
**Output:**
|
||||
```
|
||||
✅ Replay Complete
|
||||
Original Verdict Digest: abc123...
|
||||
Replayed Verdict Digest: abc123...
|
||||
Match: Identical
|
||||
Duration: 1.2s
|
||||
|
||||
Verdict Comparison:
|
||||
✅ All findings match
|
||||
✅ All severities match
|
||||
✅ VEX statements identical
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Verify and Replay (Combined)
|
||||
|
||||
Verify integrity and replay in one command.
|
||||
|
||||
```bash
|
||||
stella audit-pack verify-and-replay audit-pack.tar.gz
|
||||
```
|
||||
|
||||
This combines `verify` and `replay` for a complete verification workflow.
|
||||
|
||||
**Output:**
|
||||
```
|
||||
Step 1/2: Verifying audit pack...
|
||||
✅ Integrity verified
|
||||
✅ Signatures valid
|
||||
|
||||
Step 2/2: Replaying scan...
|
||||
✅ Replay complete
|
||||
✅ Verdicts match
|
||||
|
||||
Overall Status: PASSED
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Exit Codes
|
||||
|
||||
| Code | Meaning |
|
||||
|------|---------|
|
||||
| 0 | Success |
|
||||
| 1 | Verification failed |
|
||||
| 2 | Replay failed |
|
||||
| 3 | Verdicts don't match |
|
||||
| 10 | Invalid arguments |
|
||||
|
||||
---
|
||||
|
||||
## Environment Variables
|
||||
|
||||
- `STELLAOPS_AUDIT_PACK_VERIFY_SIGS` - Default signature verification (true/false)
|
||||
- `STELLAOPS_AUDIT_PACK_TRUST_ROOTS` - Directory containing trust roots
|
||||
- `STELLAOPS_OFFLINE_BUNDLE` - Offline bundle path for replay
|
||||
|
||||
---
|
||||
|
||||
## Examples
|
||||
|
||||
### Full Compliance Workflow
|
||||
|
||||
```bash
|
||||
# 1. Export audit pack from scan
|
||||
stella audit-pack export \
|
||||
--scan-id prod-scan-2025-12-22 \
|
||||
--sign \
|
||||
--key production-signing-key.pem \
|
||||
--output compliance-pack.tar.gz
|
||||
|
||||
# 2. Transfer to auditor environment (air-gapped)
|
||||
scp compliance-pack.tar.gz auditor@secure-env:/audit/
|
||||
|
||||
# 3. Auditor verifies and replays
|
||||
ssh auditor@secure-env
|
||||
stella audit-pack verify-and-replay /audit/compliance-pack.tar.gz
|
||||
|
||||
# Output:
|
||||
# ✅ Verification PASSED
|
||||
# ✅ Replay PASSED - Verdicts identical
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Implementation Notes
|
||||
|
||||
CLI commands are implemented in:
|
||||
- `src/Cli/StellaOps.Cli/Commands/AuditPackCommands.cs`
|
||||
|
||||
Backend services:
|
||||
- `StellaOps.AuditPack.Services.AuditPackBuilder`
|
||||
- `StellaOps.AuditPack.Services.AuditPackImporter`
|
||||
- `StellaOps.AuditPack.Services.AuditPackReplayer`
|
||||
@@ -56,5 +56,5 @@ Authenticate:
|
||||
stella auth login
|
||||
```
|
||||
|
||||
See: `docs/10_CONCELIER_CLI_QUICKSTART.md` and `docs/modules/concelier/operations/authority-audit-runbook.md`.
|
||||
See: `docs/CONCELIER_CLI_QUICKSTART.md` and `docs/modules/concelier/operations/authority-audit-runbook.md`.
|
||||
|
||||
|
||||
263
docs/modules/cli/guides/commands/drift.md
Normal file
@@ -0,0 +1,263 @@
|
||||
# Drift CLI Reference
|
||||
|
||||
**Sprint:** SPRINT_3600_0004_0001
|
||||
**Task:** UI-024 - Update CLI documentation for drift commands
|
||||
|
||||
## Overview
|
||||
|
||||
The Drift CLI provides commands for detecting and analyzing reachability drift between scan results. Reachability drift occurs when the call paths to vulnerable code change between builds, potentially altering the risk profile of an application.
|
||||
|
||||
## Commands
|
||||
|
||||
### stellaops drift
|
||||
|
||||
Parent command for reachability drift operations.
|
||||
|
||||
```bash
|
||||
stellaops drift <SUBCOMMAND> [OPTIONS]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stellaops drift compare
|
||||
|
||||
Compare reachability between two scans or graph snapshots.
|
||||
|
||||
```bash
|
||||
stellaops drift compare [OPTIONS]
|
||||
```
|
||||
|
||||
#### Required Options
|
||||
|
||||
| Option | Alias | Description |
|
||||
|--------|-------|-------------|
|
||||
| `--base <ID>` | `-b` | Base scan/graph ID or commit SHA for comparison |
|
||||
|
||||
#### Optional Options
|
||||
|
||||
| Option | Alias | Description | Default |
|
||||
|--------|-------|-------------|---------|
|
||||
| `--head <ID>` | `-h` | Head scan/graph ID or commit SHA | latest |
|
||||
| `--image <REF>` | `-i` | Container image reference (digest or tag) | - |
|
||||
| `--repo <REPO>` | `-r` | Repository reference (owner/repo) | - |
|
||||
| `--output <FMT>` | `-o` | Output format: `table`, `json`, `sarif` | `table` |
|
||||
| `--min-severity <SEV>` | | Minimum severity: `critical`, `high`, `medium`, `low`, `info` | `medium` |
|
||||
| `--only-increases` | | Only show sinks with increased reachability | `false` |
|
||||
| `--verbose` | | Enable verbose output | `false` |
|
||||
|
||||
#### Examples
|
||||
|
||||
##### Compare by scan IDs
|
||||
|
||||
```bash
|
||||
stellaops drift compare --base abc123 --head def456
|
||||
```
|
||||
|
||||
##### Compare by commit SHAs
|
||||
|
||||
```bash
|
||||
stellaops drift compare --base HEAD~1 --head HEAD --repo myorg/myapp
|
||||
```
|
||||
|
||||
##### Filter to risk increases only
|
||||
|
||||
```bash
|
||||
stellaops drift compare --base abc123 --only-increases --min-severity high
|
||||
```
|
||||
|
||||
##### Output as JSON
|
||||
|
||||
```bash
|
||||
stellaops drift compare --base abc123 --output json > drift.json
|
||||
```
|
||||
|
||||
##### Output as SARIF for CI integration
|
||||
|
||||
```bash
|
||||
stellaops drift compare --base abc123 --output sarif > drift.sarif
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stellaops drift show
|
||||
|
||||
Display details of a previously computed drift result.
|
||||
|
||||
```bash
|
||||
stellaops drift show [OPTIONS]
|
||||
```
|
||||
|
||||
#### Required Options
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--id <ID>` | Drift result ID to display |
|
||||
|
||||
#### Optional Options
|
||||
|
||||
| Option | Alias | Description | Default |
|
||||
|--------|-------|-------------|---------|
|
||||
| `--output <FMT>` | `-o` | Output format: `table`, `json`, `sarif` | `table` |
|
||||
| `--expand-paths` | | Show full call paths instead of compressed view | `false` |
|
||||
| `--verbose` | | Enable verbose output | `false` |
|
||||
|
||||
#### Examples
|
||||
|
||||
##### Show drift result
|
||||
|
||||
```bash
|
||||
stellaops drift show --id drift-abc123
|
||||
```
|
||||
|
||||
##### Show with expanded paths
|
||||
|
||||
```bash
|
||||
stellaops drift show --id drift-abc123 --expand-paths
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Output Formats
|
||||
|
||||
### Table Format (Default)
|
||||
|
||||
Human-readable table output using Spectre.Console:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ Reachability Drift (abc123) │
|
||||
├───────────────────────────────┬─────────────────────────────┤
|
||||
│ Metric │ Value │
|
||||
├───────────────────────────────┼─────────────────────────────┤
|
||||
│ Trend │ ↑ Increasing │
|
||||
│ Net Risk Delta │ +3 │
|
||||
│ Increased │ 4 │
|
||||
│ Decreased │ 1 │
|
||||
│ New Sinks │ 2 │
|
||||
│ Removed Sinks │ 0 │
|
||||
└───────────────────────────────┴─────────────────────────────┘
|
||||
|
||||
┌──────────────┬──────────────────────┬───────────────┬─────────────────────────┬───────┐
|
||||
│ Severity │ Sink │ CVE │ Bucket Change │ Delta │
|
||||
├──────────────┼──────────────────────┼───────────────┼─────────────────────────┼───────┤
|
||||
│ CRITICAL │ SqlConnection.Open │ CVE-2024-1234 │ Runtime → Entrypoint │ +2 │
|
||||
│ HIGH │ XmlParser.Parse │ CVE-2024-5678 │ Unknown → Direct │ +1 │
|
||||
└──────────────┴──────────────────────┴───────────────┴─────────────────────────┴───────┘
|
||||
```
|
||||
|
||||
### JSON Format
|
||||
|
||||
Structured JSON for programmatic processing:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "abc123",
|
||||
"comparedAt": "2025-12-18T10:30:00Z",
|
||||
"baseGraphId": "base-graph-id",
|
||||
"headGraphId": "head-graph-id",
|
||||
"summary": {
|
||||
"totalSinks": 42,
|
||||
"increasedReachability": 4,
|
||||
"decreasedReachability": 1,
|
||||
"unchangedReachability": 35,
|
||||
"newSinks": 2,
|
||||
"removedSinks": 0,
|
||||
"riskTrend": "increasing",
|
||||
"netRiskDelta": 3
|
||||
},
|
||||
"driftedSinks": [
|
||||
{
|
||||
"sinkSymbol": "SqlConnection.Open",
|
||||
"cveId": "CVE-2024-1234",
|
||||
"severity": "critical",
|
||||
"previousBucket": "runtime",
|
||||
"currentBucket": "entrypoint",
|
||||
"isRiskIncrease": true,
|
||||
"riskDelta": 2
|
||||
}
|
||||
]
|
||||
}
|
||||
```
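
The JSON report is easy to post-process with standard tooling. The sketch below uses `jq` (assumed to be installed on the host) to list only the sinks whose reachability increased, based on the `driftedSinks[].isRiskIncrease` field shown above.

```bash
# List risk-increasing sinks from a saved drift report (requires jq).
jq -r '.driftedSinks[]
       | select(.isRiskIncrease)
       | [.severity, .cveId, .sinkSymbol, "\(.previousBucket) -> \(.currentBucket)"]
       | @tsv' drift.json
```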
|
||||
|
||||
### SARIF Format
|
||||
|
||||
SARIF 2.1.0 output for CI/CD integration:
|
||||
|
||||
```json
|
||||
{
|
||||
"version": "2.1.0",
|
||||
"$schema": "https://raw.githubusercontent.com/oasis-tcs/sarif-spec/master/Schemata/sarif-schema-2.1.0.json",
|
||||
"runs": [
|
||||
{
|
||||
"tool": {
|
||||
"driver": {
|
||||
"name": "StellaOps Drift",
|
||||
"version": "1.0.0",
|
||||
"informationUri": "https://stellaops.io/docs/drift"
|
||||
}
|
||||
},
|
||||
"results": [
|
||||
{
|
||||
"ruleId": "CVE-2024-1234",
|
||||
"level": "error",
|
||||
"message": {
|
||||
"text": "Reachability changed: runtime → entrypoint"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Exit Codes
|
||||
|
||||
| Code | Description |
|
||||
|------|-------------|
|
||||
| `0` | Success (no risk increases or within threshold) |
|
||||
| `1` | Error during execution |
|
||||
| `2` | Risk increases detected |
|
||||
| `3` | Critical risk increases detected |
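
Because risk changes surface through the exit code, a pipeline can gate directly on the return value instead of parsing output. A minimal sketch (shell only; `$BASE_SHA` and `$HEAD_SHA` are placeholders, and failing only on code `3` is an illustrative policy, not a requirement):

```bash
# Gate a build on drift exit codes. If your CI shell runs with `set -e`,
# disable it around this call so the non-zero codes can be inspected.
stellaops drift compare --base "$BASE_SHA" --head "$HEAD_SHA" --output json > drift.json
rc=$?
case "$rc" in
  0) echo "No reachability drift detected." ;;
  2) echo "Risk increases detected; review drift.json." ;;
  3) echo "Critical risk increases detected."; exit 1 ;;
  *) echo "Drift comparison failed (exit $rc)."; exit 1 ;;
esac
```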
|
||||
|
||||
---
|
||||
|
||||
## CI/CD Integration
|
||||
|
||||
### GitHub Actions
|
||||
|
||||
```yaml
|
||||
- name: Check Reachability Drift
|
||||
run: |
|
||||
stellaops drift compare \
|
||||
--base ${{ github.event.pull_request.base.sha }} \
|
||||
--head ${{ github.sha }} \
|
||||
--repo ${{ github.repository }} \
|
||||
--output sarif > drift.sarif
|
||||
continue-on-error: true
|
||||
|
||||
- name: Upload SARIF
|
||||
uses: github/codeql-action/upload-sarif@v2
|
||||
with:
|
||||
sarif_file: drift.sarif
|
||||
```
|
||||
|
||||
### GitLab CI
|
||||
|
||||
```yaml
|
||||
drift-check:
|
||||
script:
|
||||
- stellaops drift compare --base $CI_MERGE_REQUEST_DIFF_BASE_SHA --head $CI_COMMIT_SHA --output sarif > drift.sarif
|
||||
artifacts:
|
||||
reports:
|
||||
sast: drift.sarif
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Reachability Analysis](../reachability/README.md)
|
||||
- [Smart-Diff CLI](./smart-diff.md)
|
||||
- [VEX Decisioning](../vex/decisioning.md)
|
||||
558
docs/modules/cli/guides/commands/reachability-reference.md
Normal file
558
docs/modules/cli/guides/commands/reachability-reference.md
Normal file
@@ -0,0 +1,558 @@
|
||||
# Reachability CLI Reference
|
||||
|
||||
**Sprint:** SPRINT_3500_0004_0004
|
||||
**Version:** 1.0.0
|
||||
|
||||
## Overview
|
||||
|
||||
The Reachability CLI commands enable call graph management, reachability computation, and explain queries. All commands support air-gapped operation.
|
||||
|
||||
---
|
||||
|
||||
## Commands
|
||||
|
||||
### stella reachability
|
||||
|
||||
Manage reachability analysis.
|
||||
|
||||
```bash
|
||||
stella reachability <SUBCOMMAND> [OPTIONS]
|
||||
```
|
||||
|
||||
#### Subcommands
|
||||
|
||||
| Subcommand | Description |
|
||||
|------------|-------------|
|
||||
| `compute` | Trigger reachability computation |
|
||||
| `findings` | List reachability findings |
|
||||
| `explain` | Explain reachability verdict |
|
||||
| `explain-all` | Export all explanations |
|
||||
| `summary` | Show reachability summary |
|
||||
| `job-status` | Check computation job status |
|
||||
| `job-logs` | View job logs |
|
||||
| `job-cancel` | Cancel running job |
|
||||
|
||||
---
|
||||
|
||||
### stella reachability compute
|
||||
|
||||
Trigger reachability computation for a scan.
|
||||
|
||||
```bash
|
||||
stella reachability compute [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--max-depth <N>` | Maximum path length to explore | 10 |
|
||||
| `--indirect-resolution <MODE>` | Handle indirect calls: `conservative`, `aggressive`, `skip` | `conservative` |
|
||||
| `--timeout <DURATION>` | Maximum computation time | 300s |
|
||||
| `--parallel` | Enable parallel BFS | `true` |
|
||||
| `--include-runtime` | Merge runtime evidence | `true` |
|
||||
| `--offline` | Run in offline mode | `false` |
|
||||
| `--symbol-db <PATH>` | Symbol resolution database | System default |
|
||||
| `--deterministic` | Enable deterministic mode | `true` |
|
||||
| `--seed <BASE64>` | Random seed for determinism | Auto |
|
||||
| `--graph-digest <HASH>` | Use specific call graph version | Latest |
|
||||
| `--partition-by <KEY>` | Partition analysis: `artifact`, `entrypoint` | — |
|
||||
| `--force` | Force recomputation | `false` |
|
||||
| `--wait` | Wait for completion | `false` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Basic computation
|
||||
stella reachability compute --scan-id $SCAN_ID
|
||||
|
||||
# With custom options
|
||||
stella reachability compute --scan-id $SCAN_ID \
|
||||
--max-depth 20 \
|
||||
--timeout 600s \
|
||||
--indirect-resolution conservative
|
||||
|
||||
# Wait for completion
|
||||
stella reachability compute --scan-id $SCAN_ID --wait
|
||||
|
||||
# Offline computation
|
||||
stella reachability compute --scan-id $SCAN_ID --offline
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella reachability findings
|
||||
|
||||
List reachability findings for a scan.
|
||||
|
||||
```bash
|
||||
stella reachability findings [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--status <STATUS>` | Filter by status (comma-separated) | All |
|
||||
| `--cve <ID>` | Filter by CVE ID | — |
|
||||
| `--purl <PURL>` | Filter by package URL | — |
|
||||
| `--min-confidence <N>` | Minimum confidence (0-1) | 0 |
|
||||
| `--output <PATH>` | Output file path | stdout |
|
||||
| `--output-format <FMT>` | Format: `json`, `yaml`, `table`, `sarif` | `table` |
|
||||
|
||||
#### Status Values
|
||||
|
||||
| Status | Description |
|
||||
|--------|-------------|
|
||||
| `UNREACHABLE` | No path found |
|
||||
| `POSSIBLY_REACHABLE` | Path with heuristic edges |
|
||||
| `REACHABLE_STATIC` | Statically proven path |
|
||||
| `REACHABLE_PROVEN` | Runtime confirmed |
|
||||
| `UNKNOWN` | Insufficient data |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# List all findings
|
||||
stella reachability findings --scan-id $SCAN_ID
|
||||
|
||||
# Filter by status
|
||||
stella reachability findings --scan-id $SCAN_ID \
|
||||
--status REACHABLE_STATIC,REACHABLE_PROVEN
|
||||
|
||||
# Export as SARIF for CI
|
||||
stella reachability findings --scan-id $SCAN_ID \
|
||||
--status REACHABLE_STATIC,REACHABLE_PROVEN \
|
||||
--output-format sarif \
|
||||
--output findings.sarif
|
||||
|
||||
# JSON output
|
||||
stella reachability findings --scan-id $SCAN_ID --output-format json
|
||||
```
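
For quick reporting, the JSON output can be grouped by verdict with `jq`. The field name `status` and the assumption that the payload is a flat array of findings are both guesses about the output shape; adjust the filter to whatever the command actually emits.

```bash
# Count findings per reachability status (field names are assumptions).
stella reachability findings --scan-id $SCAN_ID --output-format json \
  | jq 'group_by(.status) | map({status: .[0].status, count: length})'
```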
|
||||
|
||||
---
|
||||
|
||||
### stella reachability explain
|
||||
|
||||
Explain a reachability verdict.
|
||||
|
||||
```bash
|
||||
stella reachability explain [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--cve <ID>` | CVE ID | Required |
|
||||
| `--purl <PURL>` | Package URL | Required |
|
||||
| `--all-paths` | Show all paths, not just shortest | `false` |
|
||||
| `--max-paths <N>` | Maximum paths to show | 5 |
|
||||
| `--verbose` | Show detailed explanation | `false` |
|
||||
| `--offline` | Run in offline mode | `false` |
|
||||
| `--output <PATH>` | Output file path | stdout |
|
||||
| `--output-format <FMT>` | Format: `json`, `yaml`, `text` | `text` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Explain single finding
|
||||
stella reachability explain --scan-id $SCAN_ID \
|
||||
--cve CVE-2024-1234 \
|
||||
--purl "pkg:npm/lodash@4.17.20"
|
||||
|
||||
# Show all paths
|
||||
stella reachability explain --scan-id $SCAN_ID \
|
||||
--cve CVE-2024-1234 \
|
||||
--purl "pkg:npm/lodash@4.17.20" \
|
||||
--all-paths
|
||||
|
||||
# JSON output
|
||||
stella reachability explain --scan-id $SCAN_ID \
|
||||
--cve CVE-2024-1234 \
|
||||
--purl "pkg:npm/lodash@4.17.20" \
|
||||
--output-format json
|
||||
```
|
||||
|
||||
#### Output Example
|
||||
|
||||
```
|
||||
Status: REACHABLE_STATIC
|
||||
Confidence: 0.70
|
||||
|
||||
Shortest Path (depth=3):
|
||||
[0] MyApp.Controllers.OrdersController::Get(Guid)
|
||||
Entrypoint: HTTP GET /api/orders/{id}
|
||||
[1] MyApp.Services.OrderService::Process(Order)
|
||||
Edge: static (direct_call)
|
||||
[2] Lodash.merge(Object, Object) [VULNERABLE]
|
||||
Edge: static (direct_call)
|
||||
|
||||
Why Reachable:
|
||||
- Static call path exists from HTTP entrypoint /api/orders/{id}
|
||||
- All edges are statically proven (no heuristics)
|
||||
- Vulnerable function Lodash.merge() is directly invoked
|
||||
|
||||
Confidence Factors:
|
||||
staticPathExists: +0.50
|
||||
noHeuristicEdges: +0.20
|
||||
runtimeConfirmed: +0.00
|
||||
|
||||
Alternative Paths: 2
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella reachability explain-all
|
||||
|
||||
Export all reachability explanations.
|
||||
|
||||
```bash
|
||||
stella reachability explain-all [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--status <STATUS>` | Filter by status | All |
|
||||
| `--output <PATH>` | Output file path | Required |
|
||||
| `--offline` | Run in offline mode | `false` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Export all explanations
|
||||
stella reachability explain-all --scan-id $SCAN_ID --output explanations.json
|
||||
|
||||
# Export only reachable findings
|
||||
stella reachability explain-all --scan-id $SCAN_ID \
|
||||
--status REACHABLE_STATIC,REACHABLE_PROVEN \
|
||||
--output reachable-explanations.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella reachability summary
|
||||
|
||||
Show reachability summary for a scan.
|
||||
|
||||
```bash
|
||||
stella reachability summary [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--output-format <FMT>` | Format: `json`, `yaml`, `table` | `table` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Show summary
|
||||
stella reachability summary --scan-id $SCAN_ID
|
||||
|
||||
# Output:
|
||||
# Total vulnerabilities: 45
|
||||
# Unreachable: 38 (84%)
|
||||
# Possibly reachable: 4 (9%)
|
||||
# Reachable (static): 2 (4%)
|
||||
# Reachable (proven): 1 (2%)
|
||||
# Unknown: 0 (0%)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella reachability job-status
|
||||
|
||||
Check computation job status.
|
||||
|
||||
```bash
|
||||
stella reachability job-status [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--job-id <ID>` | Job ID | Required |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
stella reachability job-status --job-id reachability-job-001
|
||||
|
||||
# Output:
|
||||
# Status: running
|
||||
# Progress: 67% (8,234 / 12,345 nodes visited)
|
||||
# Started: 2025-12-20T10:00:00Z
|
||||
# Estimated completion: 2025-12-20T10:02:30Z
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Call Graph Commands
|
||||
|
||||
### stella scan graph
|
||||
|
||||
Manage call graphs.
|
||||
|
||||
```bash
|
||||
stella scan graph <SUBCOMMAND> [OPTIONS]
|
||||
```
|
||||
|
||||
#### Subcommands
|
||||
|
||||
| Subcommand | Description |
|
||||
|------------|-------------|
|
||||
| `upload` | Upload call graph |
|
||||
| `summary` | Show call graph summary |
|
||||
| `entrypoints` | List entrypoints |
|
||||
| `export` | Export call graph |
|
||||
| `validate` | Validate call graph |
|
||||
| `visualize` | Generate visualization |
|
||||
| `convert` | Convert graph format |
|
||||
| `partition` | Partition large graph |
|
||||
| `merge` | Merge multiple graphs |
|
||||
|
||||
---
|
||||
|
||||
### stella scan graph upload
|
||||
|
||||
Upload a call graph to a scan.
|
||||
|
||||
```bash
|
||||
stella scan graph upload [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--file <PATH>` | Call graph file | Required |
|
||||
| `--format <FMT>` | Format: `json`, `ndjson` | Auto-detect |
|
||||
| `--streaming` | Use streaming upload | `false` |
|
||||
| `--framework <NAME>` | Framework hint | Auto-detect |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Basic upload
|
||||
stella scan graph upload --scan-id $SCAN_ID --file callgraph.json
|
||||
|
||||
# Streaming upload (large graphs)
|
||||
stella scan graph upload --scan-id $SCAN_ID \
|
||||
--file callgraph.ndjson \
|
||||
--format ndjson \
|
||||
--streaming
|
||||
|
||||
# With framework hint
|
||||
stella scan graph upload --scan-id $SCAN_ID \
|
||||
--file callgraph.json \
|
||||
--framework aspnetcore
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella scan graph summary
|
||||
|
||||
Show call graph summary.
|
||||
|
||||
```bash
|
||||
stella scan graph summary [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
stella scan graph summary --scan-id $SCAN_ID
|
||||
|
||||
# Output:
|
||||
# Nodes: 12,345
|
||||
# Edges: 56,789
|
||||
# Entrypoints: 42
|
||||
# Languages: [dotnet, java]
|
||||
# Size: 15.2 MB
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella scan graph entrypoints
|
||||
|
||||
List detected entrypoints.
|
||||
|
||||
```bash
|
||||
stella scan graph entrypoints [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--verbose` | Show detailed info | `false` |
|
||||
| `--output-format <FMT>` | Format: `json`, `yaml`, `table` | `table` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# List entrypoints
|
||||
stella scan graph entrypoints --scan-id $SCAN_ID
|
||||
|
||||
# Output:
|
||||
# Kind | Route | Framework | Node
|
||||
# ─────────┼─────────────────────┼─────────────┼────────────────
|
||||
# http | GET /api/orders | aspnetcore | OrdersController::Get
|
||||
# http | POST /api/orders | aspnetcore | OrdersController::Create
|
||||
# grpc | OrderService.Get | grpc-dotnet | OrderService::GetOrder
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella scan graph validate
|
||||
|
||||
Validate call graph structure.
|
||||
|
||||
```bash
|
||||
stella scan graph validate [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Validate uploaded graph | — |
|
||||
| `--file <PATH>` | Validate local file | — |
|
||||
| `--strict` | Enable strict validation | `false` |
|
||||
|
||||
#### Validation Checks
|
||||
|
||||
- All edge targets exist as nodes
|
||||
- Entrypoints reference valid nodes
|
||||
- No orphan nodes
|
||||
- No cycles in entrypoint definitions
|
||||
- Schema compliance
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Validate uploaded graph
|
||||
stella scan graph validate --scan-id $SCAN_ID
|
||||
|
||||
# Validate before upload
|
||||
stella scan graph validate --file callgraph.json --strict
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella scan graph visualize
|
||||
|
||||
Generate call graph visualization.
|
||||
|
||||
```bash
|
||||
stella scan graph visualize [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--node <ID>` | Center on specific node | — |
|
||||
| `--depth <N>` | Visualization depth | 3 |
|
||||
| `--output <PATH>` | Output file (SVG/PNG/DOT) | Required |
|
||||
| `--format <FMT>` | Format: `svg`, `png`, `dot` | `svg` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Visualize subgraph
|
||||
stella scan graph visualize --scan-id $SCAN_ID \
|
||||
--node sha256:node123... \
|
||||
--depth 3 \
|
||||
--output subgraph.svg
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Common Options
|
||||
|
||||
### Authentication
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--token <TOKEN>` | OAuth bearer token |
|
||||
| `--token-file <PATH>` | File containing token |
|
||||
| `--profile <NAME>` | Use named profile |
|
||||
|
||||
### Output
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--quiet` | Suppress non-error output |
|
||||
| `--verbose` | Enable verbose output |
|
||||
| `--debug` | Enable debug logging |
|
||||
| `--no-color` | Disable colored output |
|
||||
|
||||
### Connection
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--endpoint <URL>` | Scanner API endpoint |
|
||||
| `--timeout <DURATION>` | Request timeout |
|
||||
| `--insecure` | Skip TLS verification |
|
||||
|
||||
---
|
||||
|
||||
## Environment Variables
|
||||
|
||||
| Variable | Description |
|
||||
|----------|-------------|
|
||||
| `STELLA_TOKEN` | OAuth token |
|
||||
| `STELLA_ENDPOINT` | API endpoint |
|
||||
| `STELLA_PROFILE` | Profile name |
|
||||
| `STELLA_OFFLINE` | Offline mode |
|
||||
| `STELLA_SYMBOL_DB` | Symbol database path |
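
On air-gapped hosts the variables above can stand in for per-command flags. A minimal sketch (the symbol database path is a placeholder):

```bash
# Drive an offline reachability run purely through the environment.
export STELLA_OFFLINE=true
export STELLA_SYMBOL_DB=/opt/stella/symbols.db   # placeholder path
stella reachability compute --scan-id $SCAN_ID --wait
stella reachability summary --scan-id $SCAN_ID
```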
|
||||
|
||||
---
|
||||
|
||||
## Exit Codes
|
||||
|
||||
| Code | Meaning |
|
||||
|------|---------|
|
||||
| 0 | Success |
|
||||
| 1 | General error |
|
||||
| 2 | Invalid arguments |
|
||||
| 3 | Authentication failed |
|
||||
| 4 | Resource not found |
|
||||
| 5 | Computation failed |
|
||||
| 6 | Network error |
|
||||
| 10 | Timeout |
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Score Proofs CLI Reference](./score-proofs-reference.md)
|
||||
- [Unknowns CLI Reference](./unknowns-reference.md)
|
||||
- [Reachability API Reference](../api/score-proofs-reachability-api-reference.md)
|
||||
- [Reachability Runbook](../operations/reachability-runbook.md)
|
||||
|
||||
---
|
||||
|
||||
**Last Updated**: 2025-12-20
|
||||
**Version**: 1.0.0
|
||||
**Sprint**: 3500.0004.0004
|
||||
1061
docs/modules/cli/guides/commands/reference.md
Normal file
1061
docs/modules/cli/guides/commands/reference.md
Normal file
File diff suppressed because it is too large
Load Diff
40
docs/modules/cli/guides/commands/sbomer.md
Normal file
40
docs/modules/cli/guides/commands/sbomer.md
Normal file
@@ -0,0 +1,40 @@
|
||||
# stella sbomer (DOCS-CLI-DET-01)
|
||||
|
||||
Offline-first usage of `stella sbomer` verbs with deterministic outputs.
|
||||
|
||||
## Prerequisites
|
||||
- Install CLI from offline bundle; ensure `local-nugets/` is available.
|
||||
- Export images/charts locally; no network access required during commands.
|
||||
|
||||
## Commands
|
||||
- `stella sbomer layer <image>`
|
||||
- Emits deterministic SBOM per layer; options: `--format cyclonedx|spdx`, `--output <path>`, `--deterministic` (default true).
|
||||
- `stella sbomer compose <manifest>`
|
||||
- Merges layer SBOMs with stable ordering; rejects missing hashes.
|
||||
- `stella sbomer drift <baseline> <current>`
|
||||
- Computes drift; returns machine-readable diff with stable ordering.
|
||||
- `stella sbomer verify <sbom> --hash <sha256>`
|
||||
- Validates hash/signature if provided; offline only.
|
||||
|
||||
## Determinism rules
|
||||
- Use fixed sort keys (component name, version, purl) when composing.
|
||||
- All timestamps forced to `1970-01-01T00:00:00Z` unless `--timestamp` supplied.
|
||||
- GUID/UUID generation disabled; use content hashes as IDs.
|
||||
- Outputs written in UTF-8 with LF line endings; no BOM.
|
||||
|
||||
## Examples
|
||||
```bash
|
||||
# generate layer SBOM
|
||||
stella sbomer layer ghcr.io/acme/app:1.2.3 --format cyclonedx --output app.cdx.json
|
||||
|
||||
# compose
|
||||
stella sbomer compose app.cdx.json lib.cdx.json --output combined.cdx.json
|
||||
|
||||
# drift
|
||||
stella sbomer drift baseline.cdx.json combined.cdx.json --output drift.json
|
||||
```
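
A quick way to exercise the determinism rules above is to run the same composition twice and check that the outputs are byte-identical, then pin the hash with `verify`:

```bash
# determinism check: identical inputs must produce identical bytes
stella sbomer compose app.cdx.json lib.cdx.json --output run1.cdx.json
stella sbomer compose app.cdx.json lib.cdx.json --output run2.cdx.json
sha256sum run1.cdx.json run2.cdx.json   # hashes must match

# pin the hash and re-verify the artefact later, offline
stella sbomer verify run1.cdx.json --hash "$(sha256sum run1.cdx.json | cut -d' ' -f1)"
```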
|
||||
|
||||
## Offline tips
|
||||
- Preload registries; set `STELLA_SBOMER_OFFLINE=true` to prevent remote pulls.
|
||||
- Configure cache dir via `STELLA_CACHE_DIR` for reproducible paths.
|
||||
- For air-gapped logs, use `--log-format json` and capture to file for later analysis.
|
||||
450
docs/modules/cli/guides/commands/score-proofs-reference.md
Normal file
450
docs/modules/cli/guides/commands/score-proofs-reference.md
Normal file
@@ -0,0 +1,450 @@
|
||||
# Score Proofs CLI Reference
|
||||
|
||||
**Sprint:** SPRINT_3500_0004_0004
|
||||
**Version:** 1.0.0
|
||||
|
||||
## Overview
|
||||
|
||||
The Score Proofs CLI commands enable score computation, replay, proof verification, and proof bundle management. All commands support air-gapped operation.
|
||||
|
||||
---
|
||||
|
||||
## Commands
|
||||
|
||||
### stella score
|
||||
|
||||
Compute or replay vulnerability scores.
|
||||
|
||||
```bash
|
||||
stella score <SUBCOMMAND> [OPTIONS]
|
||||
```
|
||||
|
||||
#### Subcommands
|
||||
|
||||
| Subcommand | Description |
|
||||
|------------|-------------|
|
||||
| `compute` | Compute scores for a scan |
|
||||
| `replay` | Replay score computation with different inputs |
|
||||
| `show` | Display score details for a scan |
|
||||
| `diff` | Compare scores between runs |
|
||||
| `manifest` | View/export scan manifest |
|
||||
| `inputs` | List scoring inputs |
|
||||
|
||||
---
|
||||
|
||||
### stella score compute
|
||||
|
||||
Compute vulnerability scores for a scan.
|
||||
|
||||
```bash
|
||||
stella score compute [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID to compute scores for | Required |
|
||||
| `--deterministic` | Enable deterministic mode | `true` |
|
||||
| `--seed <BASE64>` | Random seed for determinism | Auto-generated |
|
||||
| `--output <PATH>` | Output file path | stdout |
|
||||
| `--output-format <FMT>` | Format: `json`, `yaml`, `table` | `table` |
|
||||
| `--include-proof` | Include proof ledger in output | `false` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Compute scores
|
||||
stella score compute --scan-id $SCAN_ID
|
||||
|
||||
# Compute with proof output
|
||||
stella score compute --scan-id $SCAN_ID --include-proof --output-format json
|
||||
|
||||
# Compute in deterministic mode with fixed seed
|
||||
stella score compute --scan-id $SCAN_ID --deterministic --seed "AQIDBA=="
|
||||
```
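
Determinism can be checked like any reproducible artefact: two runs with the same seed should produce identical output. A minimal sketch reusing the seed from the example above:

```bash
# Two deterministic runs with a fixed seed should be byte-identical.
stella score compute --scan-id $SCAN_ID --deterministic --seed "AQIDBA==" \
  --output-format json --output run-a.json
stella score compute --scan-id $SCAN_ID --deterministic --seed "AQIDBA==" \
  --output-format json --output run-b.json
diff run-a.json run-b.json && echo "scores reproduce"
```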
|
||||
|
||||
---
|
||||
|
||||
### stella score replay
|
||||
|
||||
Replay score computation with updated feeds or policies.
|
||||
|
||||
```bash
|
||||
stella score replay [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID to replay | Required |
|
||||
| `--feed-snapshot <HASH>` | Override feed snapshot hash | Current |
|
||||
| `--vex-snapshot <HASH>` | Override VEX snapshot hash | Current |
|
||||
| `--policy-snapshot <HASH>` | Override policy hash | Current |
|
||||
| `--use-original-snapshots` | Use exact original snapshots | `false` |
|
||||
| `--diff` | Show diff from original | `false` |
|
||||
| `--skip-unchanged` | Skip if no input changes | `false` |
|
||||
| `--offline` | Run in offline mode | `false` |
|
||||
| `--bundle <PATH>` | Use offline bundle for replay | — |
|
||||
| `--output <PATH>` | Output file path | stdout |
|
||||
| `--output-format <FMT>` | Format: `json`, `yaml`, `table` | `table` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Replay with current feeds
|
||||
stella score replay --scan-id $SCAN_ID
|
||||
|
||||
# Replay with specific feed snapshot
|
||||
stella score replay --scan-id $SCAN_ID --feed-snapshot sha256:newfeed...
|
||||
|
||||
# Replay and compare with original
|
||||
stella score replay --scan-id $SCAN_ID --diff
|
||||
|
||||
# Replay with original snapshots (exact reproduction)
|
||||
stella score replay --scan-id $SCAN_ID --use-original-snapshots
|
||||
|
||||
# Offline replay
|
||||
stella score replay --scan-id $SCAN_ID --offline --bundle /path/to/bundle.zip
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella score show
|
||||
|
||||
Display score details for a scan.
|
||||
|
||||
```bash
|
||||
stella score show [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--verbose` | Show detailed breakdown | `false` |
|
||||
| `--include-evidence` | Include evidence references | `false` |
|
||||
| `--output-format <FMT>` | Format: `json`, `yaml`, `table` | `table` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Show score summary
|
||||
stella score show --scan-id $SCAN_ID
|
||||
|
||||
# Show detailed breakdown
|
||||
stella score show --scan-id $SCAN_ID --verbose
|
||||
|
||||
# JSON output
|
||||
stella score show --scan-id $SCAN_ID --output-format json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella score diff
|
||||
|
||||
Compare scores between two runs.
|
||||
|
||||
```bash
|
||||
stella score diff [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID to compare | Required |
|
||||
| `--original` | Compare with original score | `false` |
|
||||
| `--replayed` | Compare with most recent replay | `false` |
|
||||
| `--base <RUN_ID>` | Base run ID for comparison | — |
|
||||
| `--target <RUN_ID>` | Target run ID for comparison | — |
|
||||
| `--output-format <FMT>` | Format: `json`, `yaml`, `table` | `table` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Compare original vs replayed
|
||||
stella score diff --scan-id $SCAN_ID --original --replayed
|
||||
|
||||
# Compare two specific runs
|
||||
stella score diff --scan-id $SCAN_ID --base run-001 --target run-002
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella score manifest
|
||||
|
||||
View or export scan manifest.
|
||||
|
||||
```bash
|
||||
stella score manifest [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--output <PATH>` | Output file path | stdout |
|
||||
| `--include-dsse` | Include DSSE envelope | `false` |
|
||||
| `--verify` | Verify DSSE signature | `false` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# View manifest
|
||||
stella score manifest --scan-id $SCAN_ID
|
||||
|
||||
# Export with DSSE
|
||||
stella score manifest --scan-id $SCAN_ID --include-dsse --output manifest.json
|
||||
|
||||
# Verify manifest signature
|
||||
stella score manifest --scan-id $SCAN_ID --verify
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Proof Commands
|
||||
|
||||
### stella proof
|
||||
|
||||
Manage proof bundles.
|
||||
|
||||
```bash
|
||||
stella proof <SUBCOMMAND> [OPTIONS]
|
||||
```
|
||||
|
||||
#### Subcommands
|
||||
|
||||
| Subcommand | Description |
|
||||
|------------|-------------|
|
||||
| `verify` | Verify a proof bundle |
|
||||
| `download` | Download proof bundle |
|
||||
| `export` | Export proof bundle |
|
||||
| `inspect` | Inspect proof bundle contents |
|
||||
| `status` | Check proof status |
|
||||
| `list` | List proofs for a scan |
|
||||
| `retrieve` | Retrieve from cold storage |
|
||||
|
||||
---
|
||||
|
||||
### stella proof verify
|
||||
|
||||
Verify a proof bundle.
|
||||
|
||||
```bash
|
||||
stella proof verify [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--bundle-id <HASH>` | Proof bundle ID (sha256:...) | — |
|
||||
| `--bundle <PATH>` | Local proof bundle file | — |
|
||||
| `--offline` | Skip Rekor verification | `false` |
|
||||
| `--skip-rekor` | Alias for --offline | `false` |
|
||||
| `--check-rekor` | Force Rekor verification | `false` |
|
||||
| `--trust-anchor <PATH>` | Trust anchor file | System default |
|
||||
| `--public-key <PATH>` | Public key file | — |
|
||||
| `--self-contained` | Use embedded trust anchors | `false` |
|
||||
| `--verbose` | Show detailed verification | `false` |
|
||||
| `--check <CHECK>` | Verify specific check only | All |
|
||||
|
||||
#### Verification Checks
|
||||
|
||||
| Check | Description |
|
||||
|-------|-------------|
|
||||
| `signatureValid` | DSSE signature verification |
|
||||
| `idRecomputed` | Content-addressed ID match |
|
||||
| `merklePathValid` | Merkle tree construction |
|
||||
| `rekorInclusion` | Transparency log entry |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Verify online
|
||||
stella proof verify --bundle-id sha256:proof123...
|
||||
|
||||
# Verify offline
|
||||
stella proof verify --bundle proof.zip --offline
|
||||
|
||||
# Verify with specific trust anchor
|
||||
stella proof verify --bundle proof.zip --offline --trust-anchor anchors.json
|
||||
|
||||
# Verify specific check
|
||||
stella proof verify --bundle-id sha256:proof123... --check signatureValid
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella proof download
|
||||
|
||||
Download proof bundle.
|
||||
|
||||
```bash
|
||||
stella proof download [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--root-hash <HASH>` | Specific proof root hash | Latest |
|
||||
| `--output <PATH>` | Output file path | `proof-{scanId}.zip` |
|
||||
| `--all` | Download all proofs for scan | `false` |
|
||||
| `--output-dir <PATH>` | Output directory (with --all) | `.` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Download latest proof
|
||||
stella proof download --scan-id $SCAN_ID --output proof.zip
|
||||
|
||||
# Download specific proof
|
||||
stella proof download --scan-id $SCAN_ID --root-hash sha256:proof123... --output proof.zip
|
||||
|
||||
# Download all proofs
|
||||
stella proof download --scan-id $SCAN_ID --all --output-dir ./proofs/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### stella proof export
|
||||
|
||||
Export proof bundle with additional data.
|
||||
|
||||
```bash
|
||||
stella proof export [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan ID | Required |
|
||||
| `--portable` | Create self-contained portable bundle | `false` |
|
||||
| `--include-manifest` | Include scan manifest | `true` |
|
||||
| `--include-chain` | Include full proof chain | `false` |
|
||||
| `--include-trust-anchors` | Include trust anchor keys | `false` |
|
||||
| `--output <PATH>` | Output file path | Required |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Export standard bundle
|
||||
stella proof export --scan-id $SCAN_ID --output proof-bundle.zip
|
||||
|
||||
# Export portable bundle (for offline verification)
|
||||
stella proof export --scan-id $SCAN_ID --portable --include-trust-anchors --output portable.zip
|
||||
|
||||
# Export with full chain
|
||||
stella proof export --scan-id $SCAN_ID --include-chain --output full-bundle.zip
|
||||
```
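
For air-gapped verification the portable export pairs with offline `stella proof verify`. A sketch of the round trip (file names are placeholders):

```bash
# Connected side: export a self-contained bundle with embedded trust anchors.
stella proof export --scan-id $SCAN_ID --portable --include-trust-anchors --output portable.zip

# Air-gapped side: verify using only the material inside the bundle.
stella proof verify --bundle portable.zip --offline --self-contained
```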
|
||||
|
||||
---
|
||||
|
||||
### stella proof inspect
|
||||
|
||||
Inspect proof bundle contents.
|
||||
|
||||
```bash
|
||||
stella proof inspect [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--bundle <PATH>` | Proof bundle file | Required |
|
||||
| `--output-dir <PATH>` | Extract to directory | — |
|
||||
| `--show-manifest` | Display manifest | `false` |
|
||||
| `--show-proof` | Display proof nodes | `false` |
|
||||
| `--show-meta` | Display metadata | `false` |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# List bundle contents
|
||||
stella proof inspect --bundle proof.zip
|
||||
|
||||
# Extract and inspect
|
||||
stella proof inspect --bundle proof.zip --output-dir ./inspection/
|
||||
|
||||
# Show manifest
|
||||
stella proof inspect --bundle proof.zip --show-manifest
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Common Options
|
||||
|
||||
### Authentication
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--token <TOKEN>` | OAuth bearer token |
|
||||
| `--token-file <PATH>` | File containing token |
|
||||
| `--profile <NAME>` | Use named profile |
|
||||
|
||||
### Output
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--quiet` | Suppress non-error output |
|
||||
| `--verbose` | Enable verbose output |
|
||||
| `--debug` | Enable debug logging |
|
||||
| `--no-color` | Disable colored output |
|
||||
|
||||
### Connection
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--endpoint <URL>` | Scanner API endpoint |
|
||||
| `--timeout <DURATION>` | Request timeout (e.g., 30s, 5m) |
|
||||
| `--insecure` | Skip TLS verification (dev only) |
|
||||
|
||||
---
|
||||
|
||||
## Environment Variables
|
||||
|
||||
| Variable | Description | Equivalent Option |
|
||||
|----------|-------------|-------------------|
|
||||
| `STELLA_TOKEN` | OAuth token | `--token` |
|
||||
| `STELLA_ENDPOINT` | API endpoint | `--endpoint` |
|
||||
| `STELLA_PROFILE` | Profile name | `--profile` |
|
||||
| `STELLA_OFFLINE` | Offline mode | `--offline` |
|
||||
| `STELLA_TRUST_ANCHOR` | Trust anchor path | `--trust-anchor` |
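
In scripted environments the variables above replace the matching flags, which keeps tokens out of command lines and job logs. A minimal sketch (the endpoint URL and secret path are placeholders):

```bash
# Configure once per job, then call the CLI without auth flags.
export STELLA_ENDPOINT="https://scanner.internal.example"   # placeholder endpoint
export STELLA_TOKEN="$(cat /run/secrets/stella-token)"      # placeholder secret path
stella score show --scan-id $SCAN_ID
stella proof verify --bundle-id sha256:proof123...
```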
|
||||
|
||||
---
|
||||
|
||||
## Exit Codes
|
||||
|
||||
| Code | Meaning |
|
||||
|------|---------|
|
||||
| 0 | Success |
|
||||
| 1 | General error |
|
||||
| 2 | Invalid arguments |
|
||||
| 3 | Authentication failed |
|
||||
| 4 | Resource not found |
|
||||
| 5 | Verification failed |
|
||||
| 6 | Network error |
|
||||
| 10 | Timeout |
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Reachability CLI Reference](./reachability-reference.md)
|
||||
- [Unknowns CLI Reference](./unknowns-reference.md)
|
||||
- [Score Proofs API Reference](../api/score-proofs-reachability-api-reference.md)
|
||||
- [Score Proofs Runbook](../operations/score-proofs-runbook.md)
|
||||
|
||||
---
|
||||
|
||||
**Last Updated**: 2025-12-20
|
||||
**Version**: 1.0.0
|
||||
**Sprint**: 3500.0004.0004
|
||||
284
docs/modules/cli/guides/commands/smart-diff.md
Normal file
284
docs/modules/cli/guides/commands/smart-diff.md
Normal file
@@ -0,0 +1,284 @@
|
||||
# Smart-Diff CLI Reference
|
||||
|
||||
**Sprint:** SPRINT_3500_0001_0001
|
||||
**Task:** SDIFF-MASTER-0008 - Update CLI documentation with smart-diff commands
|
||||
|
||||
## Overview
|
||||
|
||||
Smart-Diff analyzes changes between container image versions to identify material risk changes. It detects reachability shifts, VEX status changes, binary hardening regressions, and intelligence signal updates.
|
||||
|
||||
## Commands
|
||||
|
||||
### stellaops smart-diff
|
||||
|
||||
Compare two artifacts and report material risk changes.
|
||||
|
||||
```bash
|
||||
stellaops smart-diff [OPTIONS]
|
||||
```
|
||||
|
||||
#### Required Options
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--base <ARTIFACT>` | Base artifact (image digest, SBOM path, or purl) |
|
||||
| `--target <ARTIFACT>` | Target artifact to compare against base |
|
||||
|
||||
#### Output Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--output <PATH>` | Output file path | stdout |
|
||||
| `--output-format <FMT>` | Output format: `json`, `yaml`, `table`, `sarif` | `table` |
|
||||
| `--output-dir <DIR>` | Output directory for bundle format | - |
|
||||
| `--include-proofs` | Include proof ledger in output | `false` |
|
||||
| `--include-evidence` | Include raw evidence data | `false` |
|
||||
| `--pretty` | Pretty-print JSON/YAML output | `false` |
|
||||
|
||||
#### Analysis Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--rules <PATH>` | Custom detection rules file | built-in |
|
||||
| `--config <PATH>` | Scoring configuration file | default config |
|
||||
| `--tier <TIER>` | Filter by evidence tier: `imported`, `executed`, `tainted_sink` | all |
|
||||
| `--min-priority <N>` | Minimum priority score (0-1) | 0.0 |
|
||||
| `--include-unchanged` | Include unchanged findings | `false` |
|
||||
|
||||
#### Feed Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--feed-snapshot <HASH>` | Use specific feed snapshot | latest |
|
||||
| `--offline` | Run in offline mode | `false` |
|
||||
| `--feed-dir <PATH>` | Local feed directory | - |
|
||||
|
||||
### Examples
|
||||
|
||||
#### Basic Comparison
|
||||
|
||||
```bash
|
||||
# Compare two image versions
|
||||
stellaops smart-diff \
|
||||
--base registry.example.com/app:v1.0.0 \
|
||||
--target registry.example.com/app:v1.1.0
|
||||
|
||||
# Output:
|
||||
# Smart-Diff Report: app:v1.0.0 → app:v1.1.0
|
||||
# ═══════════════════════════════════════════
|
||||
#
|
||||
# Summary:
|
||||
# Total Changes: 5
|
||||
# Risk Increased: 2
|
||||
# Risk Decreased: 3
|
||||
# Hardening Regressions: 1
|
||||
#
|
||||
# Material Changes:
|
||||
# ┌─────────────────┬──────────────────┬──────────┬──────────┐
|
||||
# │ Vulnerability │ Component │ Change │ Priority │
|
||||
# ├─────────────────┼──────────────────┼──────────┼──────────┤
|
||||
# │ CVE-2024-1234 │ lodash@4.17.20 │ +reach │ 0.85 │
|
||||
# │ CVE-2024-5678 │ requests@2.28.0 │ +kev │ 0.95 │
|
||||
# │ CVE-2024-9999 │ urllib3@1.26.0 │ -reach │ 0.60 │
|
||||
# └─────────────────┴──────────────────┴──────────┴──────────┘
|
||||
```
|
||||
|
||||
#### SARIF Output for CI/CD
|
||||
|
||||
```bash
|
||||
# Generate SARIF for GitHub Actions
|
||||
stellaops smart-diff \
|
||||
--base app:v1.0.0 \
|
||||
--target app:v1.1.0 \
|
||||
--output-format sarif \
|
||||
--output results.sarif
|
||||
```
|
||||
|
||||
#### Filtered Analysis
|
||||
|
||||
```bash
|
||||
# Only show high-priority changes
|
||||
stellaops smart-diff \
|
||||
--base app:v1 \
|
||||
--target app:v2 \
|
||||
--min-priority 0.7 \
|
||||
--output-format json
|
||||
|
||||
# Only tainted_sink tier findings
|
||||
stellaops smart-diff \
|
||||
--base app:v1 \
|
||||
--target app:v2 \
|
||||
--tier tainted_sink
|
||||
```
|
||||
|
||||
#### Export with Proofs
|
||||
|
||||
```bash
|
||||
# Full export with proof bundle
|
||||
stellaops smart-diff \
|
||||
--base app:v1 \
|
||||
--target app:v2 \
|
||||
--output-dir ./smart-diff-export \
|
||||
--include-proofs \
|
||||
--include-evidence
|
||||
|
||||
# Creates:
|
||||
# ./smart-diff-export/
|
||||
# ├── manifest.json
|
||||
# ├── diff-results.json
|
||||
# ├── proofs/
|
||||
# └── evidence/
|
||||
```
|
||||
|
||||
#### Offline Mode
|
||||
|
||||
```bash
|
||||
# Use local feeds only
|
||||
STELLAOPS_OFFLINE=true stellaops smart-diff \
|
||||
--base sbom-v1.json \
|
||||
--target sbom-v2.json \
|
||||
--feed-dir /opt/stellaops/feeds
|
||||
```
|
||||
|
||||
### stellaops smart-diff show
|
||||
|
||||
Display results from a saved smart-diff report.
|
||||
|
||||
```bash
|
||||
stellaops smart-diff show [OPTIONS] <INPUT>
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--format <FMT>` | Output format: `table`, `json`, `yaml` | `table` |
|
||||
| `--filter <EXPR>` | Filter expression (e.g., `priority>=0.8`) | - |
|
||||
| `--sort <FIELD>` | Sort field: `priority`, `vuln`, `component` | `priority` |
|
||||
| `--limit <N>` | Maximum results to show | all |
|
||||
|
||||
#### Example
|
||||
|
||||
```bash
|
||||
# Show top 5 highest priority changes
|
||||
stellaops smart-diff show \
|
||||
--sort priority \
|
||||
--limit 5 \
|
||||
smart-diff-report.json
|
||||
```
|
||||
|
||||
### stellaops smart-diff verify
|
||||
|
||||
Verify a smart-diff report's proof bundle.
|
||||
|
||||
```bash
|
||||
stellaops smart-diff verify [OPTIONS] <INPUT>
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--proof-bundle <PATH>` | Proof bundle path | inferred |
|
||||
| `--public-key <PATH>` | Public key for signature verification | - |
|
||||
| `--strict` | Fail on any warning | `false` |
|
||||
|
||||
#### Example
|
||||
|
||||
```bash
|
||||
# Verify report integrity
|
||||
stellaops smart-diff verify \
|
||||
--proof-bundle ./proofs \
|
||||
--public-key /path/to/key.pub \
|
||||
smart-diff-report.json
|
||||
|
||||
# Output:
|
||||
# ✓ Manifest hash verified: sha256:abc123...
|
||||
# ✓ Proof ledger valid (45 nodes)
|
||||
# ✓ Root hash matches
|
||||
# ✓ Signature valid (key: CN=scanner.stellaops.io)
|
||||
```
|
||||
|
||||
### stellaops smart-diff replay
|
||||
|
||||
Re-run smart-diff with different feed or config.
|
||||
|
||||
```bash
|
||||
stellaops smart-diff replay [OPTIONS] <SCAN-ID>
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--feed-snapshot <HASH>` | Use specific feed snapshot | latest |
|
||||
| `--config <PATH>` | Different scoring config | original |
|
||||
| `--dry-run` | Preview without saving | `false` |
|
||||
|
||||
#### Example
|
||||
|
||||
```bash
|
||||
# Replay with new feed
|
||||
stellaops smart-diff replay \
|
||||
--feed-snapshot sha256:abc123... \
|
||||
scan-12345678
|
||||
|
||||
# Preview impact of config change
|
||||
stellaops smart-diff replay \
|
||||
--config strict-scoring.json \
|
||||
--dry-run \
|
||||
scan-12345678
|
||||
```
|
||||
|
||||
## Exit Codes
|
||||
|
||||
| Code | Meaning |
|
||||
|------|---------|
|
||||
| 0 | Success, no material changes |
|
||||
| 1 | Success, material changes found |
|
||||
| 2 | Success, hardening regressions found |
|
||||
| 3 | Success, KEV additions found |
|
||||
| 10 | Invalid arguments |
|
||||
| 11 | Artifact not found |
|
||||
| 12 | Feed not available |
|
||||
| 20 | Verification failed |
|
||||
| 99 | Internal error |
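
Codes 1 to 3 all indicate a successful comparison, so CI gates should branch on the specific value rather than treating any non-zero exit as a failure. A sketch (blocking only on codes 2 and 3 is an illustrative policy):

```bash
# Gate on hardening regressions (2) and KEV additions (3); treat >=10 as tool errors.
stellaops smart-diff --base app:v1 --target app:v2 --output-format json --output report.json
rc=$?
if [ "$rc" -ge 10 ]; then echo "smart-diff failed (exit $rc)"; exit "$rc"; fi
if [ "$rc" -eq 2 ] || [ "$rc" -eq 3 ]; then echo "Blocking change found (exit $rc)"; exit 1; fi
```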
|
||||
|
||||
## Environment Variables
|
||||
|
||||
| Variable | Description |
|
||||
|----------|-------------|
|
||||
| `STELLAOPS_OFFLINE` | Run in offline mode |
|
||||
| `STELLAOPS_FEED_DIR` | Local feed directory |
|
||||
| `STELLAOPS_CONFIG` | Default config file |
|
||||
| `STELLAOPS_OUTPUT_FORMAT` | Default output format |
|
||||
|
||||
## Configuration File
|
||||
|
||||
```yaml
|
||||
# ~/.stellaops/smart-diff.yaml
|
||||
defaults:
|
||||
output_format: json
|
||||
include_proofs: true
|
||||
min_priority: 0.3
|
||||
|
||||
scoring:
|
||||
reachability_flip_up_weight: 1.0
|
||||
kev_added_weight: 1.5
|
||||
hardening_regression_weight: 0.8
|
||||
|
||||
rules:
|
||||
custom_path: /path/to/custom-rules.json
|
||||
```
|
||||
|
||||
## Related Commands
|
||||
|
||||
- `stellaops scan` - Full vulnerability scan
|
||||
- `stellaops score replay` - Score replay
|
||||
- `stellaops verify-bundle` - Verify proof bundles
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Smart-Diff Air-Gap Workflows](../airgap/smart-diff-airgap-workflows.md)
|
||||
- [SARIF Integration](../ci/sarif-integration.md)
|
||||
- [Scoring Configuration](../ci/scoring-configuration.md)
|
||||
323
docs/modules/cli/guides/commands/triage.md
Normal file
323
docs/modules/cli/guides/commands/triage.md
Normal file
@@ -0,0 +1,323 @@
|
||||
# Triage CLI Reference
|
||||
|
||||
**Sprint:** SPRINT_3600_0001_0001
|
||||
**Task:** TRI-MASTER-0008 - Update CLI documentation with offline commands
|
||||
|
||||
## Overview
|
||||
|
||||
The Triage CLI provides commands for vulnerability triage, decision management, and offline workflows. It supports evidence-based decision making and audit-ready replay tokens.
|
||||
|
||||
## Commands
|
||||
|
||||
### stellaops triage list
|
||||
|
||||
List findings for triage.
|
||||
|
||||
```bash
|
||||
stellaops triage list [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Filter by scan ID | - |
|
||||
| `--status <STATUS>` | Filter: `untriaged`, `affected`, `not_affected`, `wont_fix`, `false_positive` | all |
|
||||
| `--priority-min <N>` | Minimum priority (0-1) | 0 |
|
||||
| `--priority-max <N>` | Maximum priority (0-1) | 1 |
|
||||
| `--sort <FIELD>` | Sort: `priority`, `vuln`, `component`, `created` | `priority` |
|
||||
| `--format <FMT>` | Output: `table`, `json`, `csv` | `table` |
|
||||
| `--limit <N>` | Max results | 50 |
|
||||
| `--workspace <PATH>` | Offline workspace | - |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# List untriaged high-priority findings
|
||||
stellaops triage list \
|
||||
--scan-id scan-12345678 \
|
||||
--status untriaged \
|
||||
--priority-min 0.7
|
||||
|
||||
# Export for review
|
||||
stellaops triage list \
|
||||
--scan-id scan-12345678 \
|
||||
--format json > findings.json
|
||||
```
|
||||
|
||||
### stellaops triage show
|
||||
|
||||
Show finding details with evidence.
|
||||
|
||||
```bash
|
||||
stellaops triage show <FINDING-ID> [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--show-evidence` | Include full evidence | `false` |
|
||||
| `--evidence-first` | Lead with evidence summary | `false` |
|
||||
| `--show-history` | Show decision history | `false` |
|
||||
| `--format <FMT>` | Output: `text`, `json`, `yaml` | `text` |
|
||||
| `--workspace <PATH>` | Offline workspace | - |
|
||||
|
||||
#### Example
|
||||
|
||||
```bash
|
||||
# Show with evidence
|
||||
stellaops triage show CVE-2024-1234 \
|
||||
--show-evidence \
|
||||
--evidence-first
|
||||
|
||||
# Output:
|
||||
# ═══════════════════════════════════════════
|
||||
# CVE-2024-1234 · pkg:npm/lodash@4.17.20
|
||||
# ═══════════════════════════════════════════
|
||||
#
|
||||
# EVIDENCE
|
||||
# ────────
|
||||
# Reachability: TAINTED_SINK (tier 3/3)
|
||||
# └─ api.js:42 → utils.js:15 → lodash/merge
|
||||
#
|
||||
# Call Stack:
|
||||
# 1. api.js:42 handleUserInput()
|
||||
# 2. utils.js:15 processData()
|
||||
# 3. lodash:merge <vulnerable sink>
|
||||
#
|
||||
# VEX: No statement
|
||||
# EPSS: 0.67 (High)
|
||||
# KEV: No
|
||||
#
|
||||
# VULNERABILITY
|
||||
# ─────────────
|
||||
# CVE-2024-1234: Prototype Pollution in lodash
|
||||
# CVSS: 7.5 (High)
|
||||
# CWE: CWE-1321
|
||||
#
|
||||
# STATUS: untriaged
|
||||
```
|
||||
|
||||
### stellaops triage decide
|
||||
|
||||
Record a triage decision.
|
||||
|
||||
```bash
|
||||
stellaops triage decide <FINDING-ID> [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--status <STATUS>` | Required: `affected`, `not_affected`, `wont_fix`, `false_positive` | - |
|
||||
| `--justification <TEXT>` | Decision justification | - |
|
||||
| `--reviewer <NAME>` | Reviewer identifier | current user |
|
||||
| `--vex-emit` | Emit VEX statement | `false` |
|
||||
| `--workspace <PATH>` | Offline workspace | - |
|
||||
|
||||
#### Examples
|
||||
|
||||
```bash
|
||||
# Mark as not affected
|
||||
stellaops triage decide CVE-2024-1234 \
|
||||
--status not_affected \
|
||||
--justification "Feature gated, unreachable in production"
|
||||
|
||||
# Mark affected and emit VEX
|
||||
stellaops triage decide CVE-2024-5678 \
|
||||
--status affected \
|
||||
--justification "In use, remediation planned" \
|
||||
--vex-emit
|
||||
```
|
||||
|
||||
### stellaops triage batch
|
||||
|
||||
Interactive batch triage mode.
|
||||
|
||||
```bash
|
||||
stellaops triage batch [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan to triage | - |
|
||||
| `--query <EXPR>` | Filter expression | - |
|
||||
| `--input <PATH>` | Offline bundle | - |
|
||||
| `--workspace <PATH>` | Offline workspace | - |
|
||||
|
||||
#### Keyboard Shortcuts
|
||||
|
||||
| Key | Action |
|
||||
|-----|--------|
|
||||
| `j` / `↓` | Next finding |
|
||||
| `k` / `↑` | Previous finding |
|
||||
| `a` | Mark affected |
|
||||
| `n` | Mark not affected |
|
||||
| `w` | Mark won't fix |
|
||||
| `f` | Mark false positive |
|
||||
| `e` | Show full evidence |
|
||||
| `g` | Show graph context |
|
||||
| `u` | Undo last decision |
|
||||
| `/` | Search findings |
|
||||
| `?` | Show help |
|
||||
| `q` | Save and quit |
|
||||
|
||||
#### Example
|
||||
|
||||
```bash
|
||||
# Interactive triage
|
||||
stellaops triage batch \
|
||||
--scan-id scan-12345678 \
|
||||
--query "priority>=0.5"
|
||||
```
|
||||
|
||||
### stellaops triage export
|
||||
|
||||
Export findings for offline triage.
|
||||
|
||||
```bash
|
||||
stellaops triage export [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--scan-id <ID>` | Scan to export | required |
|
||||
| `--findings <IDS>` | Specific finding IDs (comma-separated) | - |
|
||||
| `--all-findings` | Export all findings | `false` |
|
||||
| `--include-evidence` | Include evidence data | `true` |
|
||||
| `--include-graph` | Include dependency graph | `true` |
|
||||
| `--output <PATH>` | Output path (.stella.bundle.tgz) | required |
|
||||
| `--sign` | Sign the bundle | `true` |
|
||||
|
||||
#### Example
|
||||
|
||||
```bash
|
||||
# Export specific findings
|
||||
stellaops triage export \
|
||||
--scan-id scan-12345678 \
|
||||
--findings CVE-2024-1234,CVE-2024-5678 \
|
||||
--output triage-bundle.stella.bundle.tgz
|
||||
```
|
||||
|
||||
### stellaops triage import
|
||||
|
||||
Import offline bundle for triage.
|
||||
|
||||
```bash
|
||||
stellaops triage import [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--input <PATH>` | Bundle path | required |
|
||||
| `--workspace <PATH>` | Target workspace | `~/.stellaops/triage` |
|
||||
| `--verify` | Verify signature | `true` |
|
||||
| `--public-key <PATH>` | Public key for verification | - |
|
||||
|
||||
### stellaops triage export-decisions
|
||||
|
||||
Export decisions for sync.
|
||||
|
||||
```bash
|
||||
stellaops triage export-decisions [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--workspace <PATH>` | Workspace path | required |
|
||||
| `--output <PATH>` | Output path | required |
|
||||
| `--format <FMT>` | Format: `json`, `ndjson` | `json` |
|
||||
| `--sign` | Sign output | `true` |
|
||||
|
||||
### stellaops triage import-decisions
|
||||
|
||||
Import and apply decisions.
|
||||
|
||||
```bash
|
||||
stellaops triage import-decisions [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--input <PATH>` | Decisions file | required |
|
||||
| `--verify` | Verify signatures | `true` |
|
||||
| `--apply` | Apply to server | `false` |
|
||||
| `--dry-run` | Preview only | `false` |
|
||||
| `--conflict-mode <MODE>` | Conflict handling: `keep-local`, `keep-server`, `newest`, `review` | `review` |
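
The export/import commands above compose into an offline round trip: findings leave the connected side as a signed bundle, decisions come back as a signed file, and conflicts are resolved on import. A sketch (paths and the scan ID are placeholders):

```bash
# Connected side: export findings for offline review.
stellaops triage export --scan-id scan-12345678 --all-findings \
  --output triage.stella.bundle.tgz

# Air-gapped side: import, triage interactively, then export the decisions.
stellaops triage import --input triage.stella.bundle.tgz --workspace ~/offline-triage
stellaops triage batch --workspace ~/offline-triage
stellaops triage export-decisions --workspace ~/offline-triage --output decisions.json

# Back on the connected side: preview first, then apply.
stellaops triage import-decisions --input decisions.json --dry-run
stellaops triage import-decisions --input decisions.json --apply --conflict-mode review
```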
|
||||
|
||||
### stellaops triage verify-bundle
|
||||
|
||||
Verify bundle integrity.
|
||||
|
||||
```bash
|
||||
stellaops triage verify-bundle [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--input <PATH>` | Bundle path | required |
|
||||
| `--public-key <PATH>` | Public key | required |
|
||||
| `--strict` | Fail on warnings | `false` |
|
||||
|
||||
### stellaops triage show-token
|
||||
|
||||
Display replay token details.
|
||||
|
||||
```bash
|
||||
stellaops triage show-token <TOKEN>
|
||||
```
|
||||
|
||||
### stellaops triage verify-token
|
||||
|
||||
Verify replay token.
|
||||
|
||||
```bash
|
||||
stellaops triage verify-token <TOKEN> [OPTIONS]
|
||||
```
|
||||
|
||||
#### Options
|
||||
|
||||
| Option | Description | Default |
|
||||
|--------|-------------|---------|
|
||||
| `--public-key <PATH>` | Public key | required |
|
||||
|
||||
## Exit Codes
|
||||
|
||||
| Code | Meaning |
|
||||
|------|---------|
|
||||
| 0 | Success |
|
||||
| 1 | Findings require attention |
|
||||
| 10 | Invalid arguments |
|
||||
| 11 | Resource not found |
|
||||
| 20 | Verification failed |
|
||||
| 21 | Signature invalid |
|
||||
| 30 | Conflict detected |
|
||||
| 99 | Internal error |
|
||||
|
||||
## Environment Variables
|
||||
|
||||
| Variable | Description |
|
||||
|----------|-------------|
|
||||
| `STELLAOPS_OFFLINE` | Enable offline mode |
|
||||
| `STELLAOPS_TRIAGE_WORKSPACE` | Default workspace |
|
||||
| `STELLAOPS_REVIEWER` | Default reviewer name |
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Triage Air-Gap Workflows](../airgap/triage-airgap-workflows.md)
|
||||
- [Keyboard Shortcuts](./keyboard-shortcuts.md)
|
||||
- [Triage API Reference](../api/triage-api.md)
|
||||
532
docs/modules/cli/guides/commands/unknowns-reference.md
Normal file
532
docs/modules/cli/guides/commands/unknowns-reference.md
Normal file
@@ -0,0 +1,532 @@

# Unknowns CLI Reference

**Sprint:** SPRINT_3500_0004_0004
**Version:** 1.0.0

## Overview

The Unknowns CLI commands manage components that cannot be analyzed due to missing data, unrecognized formats, or resolution failures. These commands support triage workflows, escalation, and resolution tracking.

---

## Commands

### stella unknowns

Manage unknowns registry.

```bash
stella unknowns <SUBCOMMAND> [OPTIONS]
```

#### Subcommands

| Subcommand | Description |
|------------|-------------|
| `list` | List unknowns |
| `show` | Show unknown details |
| `summary` | Show unknowns summary |
| `escalate` | Escalate unknown |
| `resolve` | Mark unknown resolved |
| `suppress` | Suppress unknown |
| `bulk-triage` | Bulk triage unknowns |
| `export` | Export unknowns |
| `import` | Import unknown resolutions |

---

### stella unknowns list

List unknowns for a scan or workspace.

```bash
stella unknowns list [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--scan-id <ID>` | Filter by scan ID | — |
| `--workspace-id <ID>` | Filter by workspace ID | — |
| `--status <STATUS>` | Filter by status | All |
| `--category <CAT>` | Filter by category | All |
| `--priority <PRI>` | Filter by priority (1-10) | All |
| `--min-score <N>` | Minimum 2-factor score | 0 |
| `--max-age <DURATION>` | Maximum age | — |
| `--purl <PATTERN>` | Filter by PURL pattern | — |
| `--output <PATH>` | Output file path | stdout |
| `--output-format <FMT>` | Format: `json`, `yaml`, `table`, `csv` | `table` |
| `--limit <N>` | Maximum results | 100 |
| `--offset <N>` | Pagination offset | 0 |
| `--sort <FIELD>` | Sort field | `priority` |
| `--order <DIR>` | Sort direction: `asc`, `desc` | `desc` |

#### Status Values

| Status | Description |
|--------|-------------|
| `pending` | Awaiting triage |
| `escalated` | Escalated for manual review |
| `suppressed` | Suppressed (accepted risk) |
| `resolved` | Resolved |

#### Category Values

| Category | Description |
|----------|-------------|
| `unmapped_purl` | No CPE/OVAL mapping |
| `checksum_miss` | Binary checksum not in DB |
| `language_gap` | Unsupported language |
| `parsing_failure` | Manifest parsing failed |
| `network_timeout` | Feed unavailable |
| `unrecognized_format` | Unknown format |

#### Examples

```bash
# List all pending unknowns
stella unknowns list --status pending

# List high-priority unknowns
stella unknowns list --min-score 7

# List by category
stella unknowns list --category unmapped_purl

# Export to CSV
stella unknowns list --scan-id $SCAN_ID --output-format csv --output unknowns.csv

# Filter by PURL pattern
stella unknowns list --purl "pkg:npm/*"
```

---

### stella unknowns show

Show details of a specific unknown.

```bash
stella unknowns show [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--id <ID>` | Unknown ID | Required |
| `--verbose` | Show extended details | `false` |
| `--output-format <FMT>` | Format: `json`, `yaml`, `text` | `text` |

#### Examples

```bash
# Show unknown details
stella unknowns show --id unknown-001

# Output:
# ID: unknown-001
# PURL: pkg:npm/left-pad@1.3.0
# Category: unmapped_purl
# Status: pending
# Priority: 6
# Score: 7.2 (vuln: 3, impact: 4.2)
# Created: 2025-12-20T10:00:00Z
# Scans Affected: 5
# Reason: No CVE/advisory mapping exists for this package

# Verbose output
stella unknowns show --id unknown-001 --verbose

# JSON output
stella unknowns show --id unknown-001 --output-format json
```

---

### stella unknowns summary

Show unknowns summary statistics.

```bash
stella unknowns summary [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--scan-id <ID>` | Filter by scan ID | — |
| `--workspace-id <ID>` | Filter by workspace ID | — |
| `--output-format <FMT>` | Format: `json`, `yaml`, `table` | `table` |

#### Examples

```bash
# Summary for workspace
stella unknowns summary --workspace-id $WS_ID

# Output:
# Total unknowns: 127
#
# By Status:
#   pending: 89
#   escalated: 15
#   suppressed: 12
#   resolved: 11
#
# By Category:
#   unmapped_purl: 67
#   checksum_miss: 34
#   language_gap: 18
#   parsing_failure: 8
#
# Priority Distribution:
#   High (8-10): 12
#   Medium (5-7): 45
#   Low (1-4): 70
```

---

### stella unknowns escalate

Escalate an unknown for manual review.

```bash
stella unknowns escalate [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--id <ID>` | Unknown ID | Required |
| `--reason <TEXT>` | Escalation reason | — |
| `--assignee <USER>` | Assign to user/team | — |
| `--severity <LEVEL>` | Severity: `low`, `medium`, `high`, `critical` | `medium` |
| `--due-date <DATE>` | Due date (ISO 8601) | — |

#### Examples

```bash
# Basic escalation
stella unknowns escalate --id unknown-001 --reason "Potential supply chain risk"

# Escalate with assignment
stella unknowns escalate --id unknown-001 \
  --reason "Missing mapping for critical dependency" \
  --assignee security-team \
  --severity high \
  --due-date 2025-12-27
```

---

### stella unknowns resolve

Mark an unknown as resolved.

```bash
stella unknowns resolve [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--id <ID>` | Unknown ID | Required |
| `--resolution <TYPE>` | Resolution type | Required |
| `--comment <TEXT>` | Resolution comment | — |
| `--mapping <JSON>` | Custom mapping data | — |
| `--evidence <PATH>` | Evidence file | — |

#### Resolution Types

| Type | Description |
|------|-------------|
| `mapped` | Package/CVE mapping added |
| `not_applicable` | Not applicable to context |
| `false_positive` | Detection was incorrect |
| `accepted_risk` | Risk accepted |
| `replaced` | Component replaced |
| `removed` | Component removed |

#### Examples

```bash
# Resolve with mapping
stella unknowns resolve --id unknown-001 \
  --resolution mapped \
  --comment "Added CPE mapping to internal DB"

# Resolve as accepted risk
stella unknowns resolve --id unknown-001 \
  --resolution accepted_risk \
  --comment "Internal component, no external exposure"

# Resolve with evidence
stella unknowns resolve --id unknown-001 \
  --resolution not_applicable \
  --evidence ./analysis-report.pdf
```

---

### stella unknowns suppress

Suppress an unknown (accept risk).

```bash
stella unknowns suppress [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--id <ID>` | Unknown ID | Required |
| `--reason <TEXT>` | Suppression reason | Required |
| `--expires <DATE>` | Expiration date | — |
| `--scope <SCOPE>` | Scope: `scan`, `workspace`, `global` | `scan` |
| `--approver <USER>` | Approver name/email | — |

#### Examples

```bash
# Suppress with expiration
stella unknowns suppress --id unknown-001 \
  --reason "Internal tooling, no risk exposure" \
  --expires 2026-01-01

# Workspace-wide suppression
stella unknowns suppress --id unknown-001 \
  --reason "Deprecated component, scheduled for removal" \
  --scope workspace \
  --approver security@example.com
```

---

### stella unknowns bulk-triage

Bulk triage multiple unknowns.

```bash
stella unknowns bulk-triage [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--file <PATH>` | Triage decisions file (JSON/YAML) | Required |
| `--dry-run` | Preview changes | `false` |
| `--continue-on-error` | Continue on individual failures | `false` |

#### Input File Format

```json
{
  "decisions": [
    {
      "id": "unknown-001",
      "action": "resolve",
      "resolution": "mapped",
      "comment": "Added mapping"
    },
    {
      "id": "unknown-002",
      "action": "suppress",
      "reason": "Accepted risk",
      "expires": "2026-01-01"
    },
    {
      "id": "unknown-003",
      "action": "escalate",
      "reason": "Needs security review",
      "assignee": "security-team"
    }
  ]
}
```

#### Examples

```bash
# Bulk triage with preview
stella unknowns bulk-triage --file triage-decisions.json --dry-run

# Apply bulk triage
stella unknowns bulk-triage --file triage-decisions.json
```

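The decisions file can also be generated from a filtered export. The sketch below assumes the JSON `list` output is an array of objects carrying an `id` field; the exact list schema is not shown in this reference:

```bash
# Export pending unmapped-PURL unknowns, turn them into escalation decisions, and preview
stella unknowns list --status pending --category unmapped_purl \
  --output-format json --output pending-unmapped.json

jq '{decisions: [.[] | {id, action: "escalate", reason: "Unmapped PURL needs mapping review", assignee: "security-team"}]}' \
  pending-unmapped.json > triage-decisions.json

stella unknowns bulk-triage --file triage-decisions.json --dry-run
```
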
---

### stella unknowns export

Export unknowns data.

```bash
stella unknowns export [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--scan-id <ID>` | Filter by scan ID | — |
| `--workspace-id <ID>` | Filter by workspace ID | — |
| `--status <STATUS>` | Filter by status | All |
| `--output <PATH>` | Output file path | Required |
| `--format <FMT>` | Format: `json`, `yaml`, `csv`, `ndjson` | `json` |
| `--include-history` | Include resolution history | `false` |

#### Examples

```bash
# Export all unknowns
stella unknowns export --workspace-id $WS_ID --output unknowns.json

# Export pending as CSV
stella unknowns export --status pending --output pending.csv --format csv

# Export with history
stella unknowns export --scan-id $SCAN_ID \
  --output unknowns-history.json \
  --include-history
```

---

### stella unknowns import

Import unknown resolutions.

```bash
stella unknowns import [OPTIONS]
```

#### Options

| Option | Description | Default |
|--------|-------------|---------|
| `--file <PATH>` | Resolutions file | Required |
| `--format <FMT>` | Format: `json`, `yaml`, `csv` | Auto-detect |
| `--dry-run` | Preview import | `false` |
| `--conflict <MODE>` | Conflict handling: `skip`, `update`, `error` | `skip` |

#### Examples

```bash
# Import resolutions
stella unknowns import --file resolutions.json

# Preview import
stella unknowns import --file resolutions.json --dry-run

# Update existing
stella unknowns import --file resolutions.json --conflict update
```

---

## Common Options

### Authentication

| Option | Description |
|--------|-------------|
| `--token <TOKEN>` | OAuth bearer token |
| `--token-file <PATH>` | File containing token |
| `--profile <NAME>` | Use named profile |

### Output

| Option | Description |
|--------|-------------|
| `--quiet` | Suppress non-error output |
| `--verbose` | Enable verbose output |
| `--debug` | Enable debug logging |
| `--no-color` | Disable colored output |

### Connection

| Option | Description |
|--------|-------------|
| `--endpoint <URL>` | Scanner API endpoint |
| `--timeout <DURATION>` | Request timeout |
| `--insecure` | Skip TLS verification |

---

## Environment Variables

| Variable | Description |
|----------|-------------|
| `STELLA_TOKEN` | OAuth token |
| `STELLA_ENDPOINT` | API endpoint |
| `STELLA_PROFILE` | Profile name |
| `STELLA_WORKSPACE` | Default workspace ID |

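Typical shell setup (all values are placeholders):

```bash
export STELLA_ENDPOINT="https://scanner.example.internal"
export STELLA_TOKEN="$(cat ~/.stellaops/token)"
export STELLA_WORKSPACE="ws-prod-01"

# Subsequent commands pick up these defaults without repeating --endpoint/--token/--workspace-id
stella unknowns summary
```
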
---

## Exit Codes

| Code | Meaning |
|------|---------|
| 0 | Success |
| 1 | General error |
| 2 | Invalid arguments |
| 3 | Authentication failed |
| 4 | Resource not found |
| 5 | Operation failed |
| 6 | Network error |

---

## Workflows

### Daily Triage Workflow

```bash
# 1. Check summary
stella unknowns summary --workspace-id $WS_ID

# 2. List high-priority pending
stella unknowns list --status pending --min-score 7

# 3. Review and escalate critical items
stella unknowns escalate --id unknown-001 \
  --reason "Security review needed" \
  --severity high

# 4. Bulk resolve known patterns
stella unknowns bulk-triage --file daily-resolutions.json
```

### Weekly Report Export

```bash
# Export all unknowns with history
stella unknowns export \
  --workspace-id $WS_ID \
  --include-history \
  --output weekly-unknowns-$(date +%Y%m%d).json
```

---

## Related Documentation

- [Score Proofs CLI Reference](./score-proofs-cli-reference.md)
- [Reachability CLI Reference](./reachability-cli-reference.md)
- [Unknowns API Reference](../api/score-proofs-reachability-api-reference.md)
- [Unknowns Queue Runbook](../operations/unknowns-queue-runbook.md)

---

**Last Updated**: 2025-12-20
**Version**: 1.0.0
**Sprint**: 3500.0004.0004

656
docs/modules/cli/guides/compliance.md
Normal file
@@ -0,0 +1,656 @@

# stella CLI - Regional Cryptographic Compliance Guide

**Sprint:** SPRINT_4100_0006_0006 - CLI Documentation Overhaul

## Overview

StellaOps CLI supports regional cryptographic algorithms to comply with national and international cryptographic standards and regulations. This guide covers compliance requirements for:

- **GOST** (Russia and CIS states)
- **eIDAS** (European Union)
- **SM** (China)

**Important:** Use the distribution appropriate for your jurisdiction. Unauthorized export or use of regional cryptographic implementations may violate export control laws.

---

## GOST (Russia and CIS States)

### Overview

**GOST** (Государственный стандарт, State Standard) refers to the family of Russian cryptographic standards mandated for government and regulated sectors in Russia and CIS states.

**Applicable Jurisdictions:** Russia, Belarus, Kazakhstan, Armenia, Kyrgyzstan

**Legal Basis:**
- Federal Law No. 63-FZ "On Electronic Signature" (2011)
- FSTEC (Federal Service for Technical and Export Control) regulations
- GOST standards published by Rosstandart

---

### GOST Standards

| Standard | Name | Purpose |
|----------|------|---------|
| **GOST R 34.10-2012** | Digital Signature Algorithm | Elliptic curve digital signatures (256-bit and 512-bit) |
| **GOST R 34.11-2012** (Streebog) | Hash Function | Cryptographic hash (256-bit and 512-bit) |
| **GOST R 34.12-2015** (Kuznyechik) | Block Cipher | Symmetric encryption (256-bit key) |
| **GOST R 34.12-2015** (Magma) | Block Cipher | Legacy symmetric encryption (256-bit key, formerly GOST 28147-89) |
| **GOST R 34.13-2015** | Cipher Modes | Modes of operation for block ciphers |

---

### Crypto Providers

The `stella-russia` distribution includes three GOST providers:

#### 1. CryptoPro CSP (Recommended for Production)

**Provider:** Commercial CSP from CryptoPro
**Certification:** FSTEC-certified
**License:** Commercial (required for production use)

**Installation:**
```bash
# Install CryptoPro CSP (requires license)
sudo ./install.sh

# Verify installation
/opt/cprocsp/bin/amd64/csptestf -absorb -alg GR3411_2012_256
```

**Configuration:**
```yaml
StellaOps:
  Crypto:
    Providers:
      Gost:
        CryptoProCsp:
          Enabled: true
          ContainerName: "StellaOps-GOST-2024"
          ProviderType: 80  # PROV_GOST_2012_256
```

**Usage:**
```bash
stella crypto sign \
  --provider gost \
  --algorithm GOST12-256 \
  --key-id gost-prod-key \
  --file document.pdf \
  --output document.pdf.sig
```

#### 2. OpenSSL-GOST (Open Source, Non-certified)

**Provider:** OpenSSL with GOST engine
**Certification:** Not FSTEC-certified (development/testing only)
**License:** Open source

**Installation:**
```bash
# Install OpenSSL with GOST engine
sudo apt install openssl gost-engine

# Verify installation
openssl engine gost
```

**Configuration:**
```yaml
StellaOps:
  Crypto:
    Providers:
      Gost:
        OpenSslGost:
          Enabled: true
          EnginePath: "/usr/lib/x86_64-linux-gnu/engines-1.1/gost.so"
```

#### 3. PKCS#11 (HSM Support)

**Provider:** PKCS#11 interface to hardware security modules
**Certification:** Depends on HSM (e.g., Rutoken, JaCarta)
**License:** Depends on HSM vendor

**Configuration:**
```yaml
StellaOps:
  Crypto:
    Providers:
      Gost:
        Pkcs11:
          Enabled: true
          LibraryPath: "/usr/lib/librtpkcs11ecp.so"
          SlotId: 0
```

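Signing with a token-resident key goes through the same `stella crypto sign` interface as the CryptoPro example above; the key identifier below (`gost-hsm-key-2024`) is a hypothetical entry assumed to be registered under `Keys` and backed by the PKCS#11 slot:

```bash
# Sign with a GOST key held on the PKCS#11 token configured above
stella crypto sign \
  --provider gost \
  --algorithm GOST12-256 \
  --key-id gost-hsm-key-2024 \
  --file sbom.json \
  --output sbom.json.sig
```
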
---

### Algorithms

| Algorithm | Description | GOST Standard | Key Size | Recommended |
|-----------|-------------|---------------|----------|-------------|
| `GOST12-256` | GOST R 34.10-2012 (256-bit) | GOST R 34.10-2012 | 256-bit | ✅ Yes |
| `GOST12-512` | GOST R 34.10-2012 (512-bit) | GOST R 34.10-2012 | 512-bit | ✅ Yes |
| `GOST2001` | GOST R 34.10-2001 (legacy) | GOST R 34.10-2001 | 256-bit | ⚠️ Legacy |

**Recommendation:** Use `GOST12-256` or `GOST12-512` for new implementations. `GOST2001` is supported for backward compatibility only.

---

### Configuration Example

```yaml
# appsettings.gost.yaml

StellaOps:
  Backend:
    BaseUrl: "https://api.stellaops.ru"

  Crypto:
    DefaultProvider: "gost"

    Profiles:
      - name: "gost-prod-signing"
        provider: "gost"
        algorithm: "GOST12-256"
        keyId: "gost-prod-key-2024"

      - name: "gost-qualified-signature"
        provider: "gost"
        algorithm: "GOST12-512"
        keyId: "gost-qes-key"

    Providers:
      Gost:
        CryptoProCsp:
          Enabled: true
          ContainerName: "StellaOps-GOST"
          ProviderType: 80

    Keys:
      - KeyId: "gost-prod-key-2024"
        Algorithm: "GOST12-256"
        Source: "csp"
        FriendlyName: "Production GOST Signing Key 2024"

      - KeyId: "gost-qes-key"
        Algorithm: "GOST12-512"
        Source: "csp"
        FriendlyName: "Qualified Electronic Signature Key"
```

---

### Test Vectors (FSTEC Compliance)

Verify your GOST implementation with official test vectors:

```bash
# Test vector from GOST R 34.11-2012 (Streebog hash)
echo -n "012345678901234567890123456789012345678901234567890123456789012" | \
  openssl dgst -engine gost -streebog256

# Expected output:
# 9d151eefd8590b89daa6ba6cb74af9275dd051026bb149a452fd84e5e57b5500
```

**Official Test Vectors:**
- GOST R 34.10-2012: [TC26 GitHub](https://github.com/tc26/gost-crypto/blob/master/test_vectors/)
- GOST R 34.11-2012: [RFC 6986 Appendix A](https://datatracker.ietf.org/doc/html/rfc6986#appendix-A)

---

### Compliance Checklist

- [ ] Use an FSTEC-certified cryptographic provider (CryptoPro CSP or certified HSM)
- [ ] Use GOST R 34.10-2012 (not legacy GOST 2001) for new signatures
- [ ] Use GOST R 34.11-2012 (Streebog) for hashing
- [ ] Store private keys in a certified HSM for qualified signatures
- [ ] Maintain key management records per FSTEC requirements
- [ ] Obtain a certificate from an accredited Russian CA for qualified signatures
- [ ] Verify signatures against FSTEC test vectors

---

### Legal Considerations

**Export Control:**
- GOST implementations are subject to Russian export control laws
- Distribution outside Russia/CIS may require special permissions
- The StellaOps `stella-russia` distribution is authorized for Russia/CIS only

**Qualified Electronic Signatures:**
- Qualified signatures require an accredited CA certificate
- Accredited CAs: [Ministry of Digital Development list](https://digital.gov.ru/en/)
- Private keys must be stored in an FSTEC-certified HSM

---

## eIDAS (European Union)

### Overview

**eIDAS** (electronic IDentification, Authentication and trust Services) is the EU regulation (No 910/2014) governing electronic signatures, seals, and trust services across EU member states.

**Applicable Jurisdictions:** All 27 EU member states + EEA (Norway, Iceland, Liechtenstein)

**Legal Basis:**
- Regulation (EU) No 910/2014 (eIDAS Regulation)
- ETSI standards for implementation
- National laws implementing eIDAS

---

### Signature Levels

| Level | Name | Description | Recommended Use |
|-------|------|-------------|-----------------|
| **QES** | Qualified Electronic Signature | Equivalent to handwritten signature | Contracts, legal documents |
| **AES** | Advanced Electronic Signature | High assurance, not qualified | Internal approvals, workflows |
| **AdES** | Advanced Electronic Signature | Basic electronic signature | General document signing |

---

### Crypto Providers

The `stella-eu` distribution includes eIDAS-compliant providers:

#### 1. TSP Client (Remote Qualified Signature)

**Provider:** Trust Service Provider remote signing client
**Certification:** Depends on TSP (must be EU-qualified)
**License:** Subscription-based (per TSP)

**Configuration:**
```yaml
StellaOps:
  Crypto:
    Providers:
      Eidas:
        TspClient:
          Enabled: true
          TspUrl: "https://tsp.example.eu/api/v1/sign"
          ApiKey: "${EIDAS_TSP_API_KEY}"
          CertificateId: "qes-cert-2024"
```

**Usage:**
```bash
# Sign with QES (Qualified Electronic Signature)
stella crypto sign \
  --provider eidas \
  --algorithm ECDSA-P256-QES \
  --key-id qes-cert-2024 \
  --file contract.pdf \
  --output contract.pdf.sig
```

#### 2. Local Signer (Advanced Signature)

**Provider:** Local signing with software keys
**Certification:** Not qualified (AES/AdES only)
**License:** Open source

**Configuration:**
```yaml
StellaOps:
  Crypto:
    Providers:
      Eidas:
        LocalSigner:
          Enabled: true
          KeyStorePath: "/etc/stellaops/eidas-keys"
```

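**Usage** (mirrors the TSP example above, but produces an AES-level signature with the locally held `aes-cert-2024` key from the configuration example below):

```bash
# Sign with a locally held key (AES/AdES level, not qualified)
stella crypto sign \
  --provider eidas \
  --algorithm ECDSA-P256-AES \
  --key-id aes-cert-2024 \
  --file approval.pdf \
  --output approval.pdf.sig
```
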
---

### Standards

| Standard | Name | Purpose |
|----------|------|---------|
| **ETSI EN 319 412** | Certificate Profiles | Requirements for certificates (QES, AES) |
| **ETSI EN 319 102** | Signature Policies | Signature policy requirements |
| **ETSI EN 319 142** | PAdES (PDF Signatures) | PDF Advanced Electronic Signatures |
| **ETSI TS 119 432** | Remote Signing | Remote signature creation protocols |
| **ETSI EN 319 401** | Trust Service Providers | TSP requirements and policies |

---

### Algorithms

| Algorithm | Description | Signature Level | Recommended |
|-----------|-------------|-----------------|-------------|
| `ECDSA-P256-QES` | ECDSA with P-256 curve (QES) | QES | ✅ Yes |
| `ECDSA-P384-QES` | ECDSA with P-384 curve (QES) | QES | ✅ Yes |
| `RSA-2048-QES` | RSA 2048-bit (QES) | QES | ⚠️ Use ECDSA |
| `ECDSA-P256-AES` | ECDSA with P-256 curve (AES) | AES | ✅ Yes |

**Recommendation:** Use ECDSA P-256 or P-384 for new implementations. RSA is supported, but ECDSA is preferred.

---

### Configuration Example

```yaml
# appsettings.eidas.yaml

StellaOps:
  Backend:
    BaseUrl: "https://api.stellaops.eu"

  Crypto:
    DefaultProvider: "eidas"

    Profiles:
      - name: "eidas-qes"
        provider: "eidas"
        algorithm: "ECDSA-P256-QES"
        keyId: "qes-cert-2024"

      - name: "eidas-aes"
        provider: "eidas"
        algorithm: "ECDSA-P256-AES"
        keyId: "aes-cert-2024"

    Providers:
      Eidas:
        TspClient:
          Enabled: true
          TspUrl: "https://tsp.example.eu/api/v1/sign"
          ApiKey: "${EIDAS_TSP_API_KEY}"

          # Qualified Trust Service Provider
          TspProfile:
            Name: "Example Trust Services Provider"
            QualifiedStatus: true
            Country: "DE"
            TrustedListUrl: "https://tsp.example.eu/tsl.xml"

    Keys:
      - KeyId: "qes-cert-2024"
        Algorithm: "ECDSA-P256-QES"
        Source: "tsp"
        SignatureLevel: "QES"
        FriendlyName: "Qualified Electronic Signature 2024"

      - KeyId: "aes-cert-2024"
        Algorithm: "ECDSA-P256-AES"
        Source: "local"
        SignatureLevel: "AES"
        FriendlyName: "Advanced Electronic Signature 2024"
```

---

### EU Trusted List Validation

Verify that the TSP is on the EU Trusted List:

```bash
# Download EU Trusted List
wget https://ec.europa.eu/tools/lotl/eu-lotl.xml

# Validate TSP certificate against trusted list
stella crypto verify-tsp \
  --tsp-cert tsp-certificate.pem \
  --trusted-list eu-lotl.xml
```

**Official EU Trusted List:**
- https://ec.europa.eu/digital-building-blocks/wikis/display/DIGITAL/EU+Trusted+Lists

---

### Compliance Checklist

#### For QES (Qualified Electronic Signature):

- [ ] Use an EU-qualified Trust Service Provider (on the EU Trusted List)
- [ ] Verify the TSP certificate is qualified according to ETSI EN 319 412-2
- [ ] Use a signature policy compliant with ETSI EN 319 102-1
- [ ] Include the qualified certificate in the signature
- [ ] Use a qualified signature creation device (QSCD) for key storage
- [ ] Validate against the EU Trusted List before accepting signatures
- [ ] Maintain signature validation for 30+ years (long-term validation)

#### For AES (Advanced Electronic Signature):

- [ ] Uniquely linked to the signatory
- [ ] Capable of identifying the signatory
- [ ] Created using secure signature creation data
- [ ] Linked to the signed data to detect alterations

---

### Legal Considerations

**Cross-border Recognition:**
- QES has the same legal effect as a handwritten signature in all EU member states
- AES/AdES may have varying legal recognition across member states

**Long-term Validation:**
- QES must remain verifiable for decades
- Use AdES with long-term validation (LTV) attributes
- Timestamp signatures to prove the time of signing

**Data Protection (GDPR):**
- eIDAS signatures may contain personal data
- Comply with GDPR when processing signature certificates
- Obtain consent for processing qualified certificate data

---

## SM (China)

### Overview

**SM** (ShāngMì, 商密, Commercial Cipher) refers to China's national cryptographic algorithms mandated by OSCCA (Office of State Commercial Cryptography Administration).

**Applicable Jurisdiction:** People's Republic of China

**Legal Basis:**
- Cryptography Law of the PRC (2020)
- GM/T standards published by OSCCA
- MLPS 2.0 (Multi-Level Protection Scheme 2.0)

---

### SM Standards

| Standard | Name | Purpose |
|----------|------|---------|
| **GM/T 0003-2012** (SM2) | Public Key Cryptographic Algorithm | Elliptic curve signatures and encryption (256-bit) |
| **GM/T 0004-2012** (SM3) | Cryptographic Hash Algorithm | Hash function (256-bit output) |
| **GM/T 0002-2012** (SM4) | Block Cipher Algorithm | Symmetric encryption (128-bit key) |
| **GM/T 0009-2012** (SM9) | Identity-Based Cryptography | Identity-based encryption and signatures |

---

### Crypto Providers

The `stella-china` distribution includes SM providers:

#### 1. GmSSL (Open Source)

**Provider:** GmSSL library
**Certification:** Not OSCCA-certified (development/testing only)
**License:** Apache 2.0

**Installation:**
```bash
# Install GmSSL
sudo apt install gmssl

# Verify installation
gmssl version
```

**Configuration:**
```yaml
StellaOps:
  Crypto:
    Providers:
      Sm:
        GmSsl:
          Enabled: true
          LibraryPath: "/usr/lib/libgmssl.so"
```

#### 2. Commercial CSP (OSCCA-certified)

**Provider:** OSCCA-certified commercial CSP
**Certification:** OSCCA-certified (required for production)
**License:** Commercial (vendor-specific)

**Configuration:**
```yaml
StellaOps:
  Crypto:
    Providers:
      Sm:
        CommercialCsp:
          Enabled: true
          VendorId: "vendor-name"
          DeviceId: "device-serial"
```

---

### Algorithms

| Algorithm | Description | GM Standard | Key Size | Recommended |
|-----------|-------------|-------------|----------|-------------|
| `SM2` | Elliptic curve signature and encryption | GM/T 0003-2012 | 256-bit | ✅ Yes |
| `SM3` | Cryptographic hash | GM/T 0004-2012 | 256-bit output | ✅ Yes |
| `SM4` | Block cipher | GM/T 0002-2012 | 128-bit key | ✅ Yes |
| `SM9` | Identity-based crypto | GM/T 0009-2012 | 256-bit | ⚠️ Specialized |

---

### Configuration Example

```yaml
# appsettings.sm.yaml

StellaOps:
  Backend:
    BaseUrl: "https://api.stellaops.cn"

  Crypto:
    DefaultProvider: "sm"

    Profiles:
      - name: "sm-prod-signing"
        provider: "sm"
        algorithm: "SM2"
        keyId: "sm-prod-key-2024"

    Providers:
      Sm:
        GmSsl:
          Enabled: true
          LibraryPath: "/usr/lib/libgmssl.so"

    Keys:
      - KeyId: "sm-prod-key-2024"
        Algorithm: "SM2"
        Source: "file"
        FilePath: "/etc/stellaops/keys/sm-key.pem"
        FriendlyName: "Production SM2 Signing Key 2024"
```

---

### Usage Example

```bash
# Sign with SM2
stella crypto sign \
  --provider sm \
  --algorithm SM2 \
  --key-id sm-prod-key-2024 \
  --file document.pdf \
  --output document.pdf.sig

# Verify SM2 signature
stella crypto verify \
  --provider sm \
  --algorithm SM2 \
  --key-id sm-prod-key-2024 \
  --file document.pdf \
  --signature document.pdf.sig
```

---

### Test Vectors (OSCCA Compliance)

Verify your SM implementation with official test vectors:

```bash
# Test vector from GM/T 0004-2012 (SM3 hash)
echo -n "abc" | gmssl sm3

# Expected output:
# 66c7f0f462eeedd9d1f2d46bdc10e4e24167c4875cf2f7a2297da02b8f4ba8e0
```

**Official Test Vectors:**
- SM2: [GM/T 0003-2012 Appendix A](http://www.gmbz.org.cn/main/viewfile/20180108023812835219.html)
- SM3: [GM/T 0004-2012 Appendix A](http://www.gmbz.org.cn/main/viewfile/20180108023528214322.html)

---

### Compliance Checklist

- [ ] Use an OSCCA-certified cryptographic product for production
- [ ] Use SM2 for digital signatures (not RSA/ECDSA)
- [ ] Use SM3 for hashing (not SHA-256)
- [ ] Use SM4 for symmetric encryption (not AES)
- [ ] Obtain a commercial cipher product model certificate
- [ ] Register commercial cipher use with local authorities (MLPS 2.0)
- [ ] Store keys in OSCCA-certified hardware for sensitive applications

---

### Legal Considerations

**Export Control:**
- SM implementations are subject to Chinese export control laws
- Distribution outside China may require special permissions
- The StellaOps `stella-china` distribution is authorized for China only

**MLPS 2.0 Requirements:**
- Level 2+: SM algorithms recommended
- Level 3+: SM algorithms mandatory
- Level 4+: SM algorithms + OSCCA-certified hardware mandatory

**Commercial Cipher Regulations:**
- Commercial use requires OSCCA product certification
- Open-source implementations (GmSSL) are for development/testing only
- Production systems must use OSCCA-certified CSPs

---

## Distribution Selection

| Your Location | Required Compliance | Distribution |
|---------------|---------------------|--------------|
| Russia, CIS | GOST R 34.10-2012 (government/regulated) | `stella-russia` |
| EU Member State | eIDAS QES (legal documents) | `stella-eu` |
| China | SM2/SM3/SM4 (MLPS 2.0 Level 3+) | `stella-china` |
| Other | None (international standards) | `stella-international` |

---

## See Also

- [CLI Overview](README.md) - Installation and quick start
- [CLI Architecture](architecture.md) - Plugin architecture
- [Command Reference](command-reference.md) - Crypto command usage
- [Crypto Plugin Development](crypto-plugins.md) - Develop custom plugins
- [Distribution Matrix](distribution-matrix.md) - Build and distribution guide
- [Troubleshooting](troubleshooting.md) - Common compliance issues

304
docs/modules/cli/guides/crypto/crypto-commands.md
Normal file
@@ -0,0 +1,304 @@

# Crypto Commands

**Sprint**: SPRINT_4100_0006_0001
**Status**: Implemented
**Distribution Support**: International, Russia (GOST), EU (eIDAS), China (SM)

## Overview

The `stella crypto` command group provides cryptographic operations with regional compliance support. The available crypto providers depend on your distribution build.

## Distribution Matrix

| Distribution | Build Flag | Crypto Standards | Providers |
|--------------|------------|------------------|-----------|
| **International** | (default) | NIST/FIPS | BouncyCastle (ECDSA, RSA, EdDSA) |
| **Russia** | `StellaOpsEnableGOST=true` | GOST R 34.10-2012<br>GOST R 34.11-2012<br>GOST R 34.12-2015 | CryptoPro CSP<br>OpenSSL GOST<br>PKCS#11 GOST |
| **EU** | `StellaOpsEnableEIDAS=true` | eIDAS Regulation 910/2014<br>ETSI EN 319 412 | Remote TSP (QES)<br>Local PKCS#12 (AdES) |
| **China** | `StellaOpsEnableSM=true` | GM/T 0003-2012 (SM2)<br>GM/T 0004-2012 (SM3)<br>GM/T 0002-2012 (SM4) | Remote CSP<br>GmSSL |

## Commands

### `stella crypto sign`

Sign artifacts using the configured crypto provider.

**Usage:**
```bash
stella crypto sign --input <file> [options]
```

**Options:**
- `--input <path>` - Path to file to sign (required)
- `--output <path>` - Output path for signature (default: `<input>.sig`)
- `--provider <name>` - Override crypto provider (e.g., `gost-cryptopro`, `eidas-tsp`, `sm-remote`)
- `--key-id <id>` - Key identifier for signing
- `--format <format>` - Signature format: `dsse`, `jws`, `raw` (default: `dsse`)
- `--detached` - Create detached signature (default: true)
- `--verbose` - Show detailed output

**Examples:**

```bash
# Sign with default provider
stella crypto sign --input artifact.tar.gz

# Sign with specific GOST provider
stella crypto sign --input artifact.tar.gz --provider gost-cryptopro --key-id prod-signing-2025

# Sign with eIDAS QES
stella crypto sign --input contract.pdf --provider eidas-tsp --format jws
```

### `stella crypto verify`

Verify signatures using the configured crypto provider.

**Usage:**
```bash
stella crypto verify --input <file> [options]
```

**Options:**
- `--input <path>` - Path to file to verify (required)
- `--signature <path>` - Path to signature file (default: `<input>.sig`)
- `--provider <name>` - Override crypto provider
- `--trust-policy <path>` - Path to trust policy YAML file
- `--format <format>` - Signature format: `dsse`, `jws`, `raw` (auto-detect if omitted)
- `--verbose` - Show detailed output

**Examples:**

```bash
# Verify with auto-detected signature
stella crypto verify --input artifact.tar.gz

# Verify with trust policy
stella crypto verify --input artifact.tar.gz --trust-policy ./policies/production-trust.yaml

# Verify specific provider signature
stella crypto verify --input contract.pdf --provider eidas-tsp --signature contract.jws
```

### `stella crypto profiles`

List available crypto providers and their capabilities.

**Usage:**
```bash
stella crypto profiles [options]
```

**Options:**
- `--details` - Show detailed provider capabilities
- `--provider <name>` - Filter by provider name
- `--test` - Run provider diagnostics and connectivity tests
- `--verbose` - Show detailed output

**Examples:**

```bash
# List all providers
stella crypto profiles

# Show detailed capabilities
stella crypto profiles --details

# Test GOST provider connectivity
stella crypto profiles --provider gost --test
```

**Output Distribution Info:**

The `profiles` command shows which regional crypto plugins are enabled:

```
Distribution Information:
┌──────────────────┬─────────┐
│ Feature          │ Status  │
├──────────────────┼─────────┤
│ GOST (Russia)    │ Enabled │
│ eIDAS (EU)       │ Disabled│
│ SM (China)       │ Disabled│
│ BouncyCastle     │ Enabled │
└──────────────────┴─────────┘
```

## Configuration

### Quick Start

1. Copy example configuration:
```bash
cp src/Cli/StellaOps.Cli/appsettings.crypto.yaml.example appsettings.crypto.yaml
```

2. Set active profile:
```yaml
StellaOps:
  Crypto:
    Registry:
      ActiveProfile: "russia-prod"  # or "eu-prod", "china-prod", "international"
```

3. Configure provider credentials:
```bash
export STELLAOPS_CRYPTO_KEYSTORE_PASSWORD="your-password"
export STELLAOPS_GOST_CONTAINER_NAME="your-container"  # For GOST
export STELLAOPS_EIDAS_TSP_API_KEY="your-api-key"      # For eIDAS
export STELLAOPS_SM_CSP_API_KEY="your-api-key"         # For SM
```

### Profile Configuration

See `appsettings.crypto.yaml.example` for detailed configuration examples for each distribution.

**Key sections:**
- `Profiles.<profile>.PreferredProviders` - Provider precedence order
- `Profiles.<profile>.Providers.<name>.Configuration` - Provider-specific settings
- `Validation` - Startup validation rules
- `Attestation.Dsse` - DSSE envelope settings
- `Kms` - Key Management Service integration

## Build Instructions

### International Distribution (Default)

```bash
dotnet build src/Cli/StellaOps.Cli/StellaOps.Cli.csproj
```

### Russia Distribution (GOST)

```bash
dotnet build src/Cli/StellaOps.Cli/StellaOps.Cli.csproj \
  -p:StellaOpsEnableGOST=true
```

### EU Distribution (eIDAS)

```bash
dotnet build src/Cli/StellaOps.Cli/StellaOps.Cli.csproj \
  -p:StellaOpsEnableEIDAS=true
```

### China Distribution (SM)

```bash
dotnet build src/Cli/StellaOps.Cli/StellaOps.Cli.csproj \
  -p:StellaOpsEnableSM=true
```

### Multi-Region Distribution

```bash
dotnet build src/Cli/StellaOps.Cli/StellaOps.Cli.csproj \
  -p:StellaOpsEnableGOST=true \
  -p:StellaOpsEnableEIDAS=true \
  -p:StellaOpsEnableSM=true
```

**Note:** Multi-region builds include all crypto plugins but only activate those configured in the active profile.

## Compliance Notes

### GOST (Russia)

- **Algorithms**: GOST R 34.10-2012 (256/512-bit), GOST R 34.11-2012, GOST R 34.12-2015
- **CSP Support**: CryptoPro CSP, OpenSSL GOST engine, PKCS#11 tokens
- **Certification**: Certified by the FSB (Federal Security Service of Russia)
- **Use Cases**: Government contracts, regulated industries in Russia

### eIDAS (EU)

- **Regulation**: (EU) No 910/2014
- **Signature Levels**:
  - QES (Qualified Electronic Signature) - Legal equivalence to handwritten
  - AES (Advanced Electronic Signature)
  - AdES (Advanced Electronic Signature with validation data)
- **Trust Anchors**: EU Trusted List (EUTL)
- **Use Cases**: Legal contracts, public procurement, cross-border transactions

### SM/ShangMi (China)

- **Standards**: GM/T 0003-2012 (SM2), GM/T 0004-2012 (SM3), GM/T 0002-2012 (SM4)
- **Authority**: OSCCA (Office of State Commercial Cryptography Administration)
- **Algorithms**: SM2 (elliptic curve), SM3 (hash), SM4 (block cipher)
- **Use Cases**: Government systems, financial services, critical infrastructure in China

## Migration from `cryptoru` CLI

The standalone `cryptoru` CLI is deprecated. Its functionality has been integrated into `stella crypto`:

| Old Command | New Command |
|-------------|-------------|
| `cryptoru providers` | `stella crypto profiles` or `stella crypto providers` |
| `cryptoru sign` | `stella crypto sign` |

**Migration Steps:**

1. Update scripts to use `stella crypto` instead of `cryptoru`
2. Update configuration from `cryptoru.yaml` to `appsettings.crypto.yaml`
3. The `cryptoru` tool will be removed in StellaOps 2.0 (sunset date: 2025-07-01)

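A rough way to find and rewrite old invocations in scripts; the paths and globs are illustrative, and the `sed` rewrite only covers the two commands mapped above, so review the diff before committing:

```bash
# Locate remaining cryptoru invocations
grep -rn --include='*.sh' --include='*.yml' 'cryptoru ' .

# Mechanical rename for the mapped commands
sed -i 's/cryptoru providers/stella crypto profiles/g; s/cryptoru sign/stella crypto sign/g' ci/*.sh
```
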
## Troubleshooting

### "No crypto providers available"

**Cause**: CLI built without regional crypto flags, or providers not registered.

**Solution**:
1. Check build flags: `stella crypto profiles` shows distribution info
2. Rebuild with the appropriate flag (e.g., `-p:StellaOpsEnableGOST=true`)
3. Verify the `appsettings.crypto.yaml` configuration

### "Provider not found"

**Cause**: Active profile references an unavailable provider.

**Solution**:
1. List available providers: `stella crypto profiles`
2. Update the active profile in configuration
3. Or override with the `--provider` flag

### GOST Provider Initialization Failed

**Cause**: CryptoPro CSP not installed or configured.

**Solution**:
1. Install CryptoPro CSP 5.0+
2. Configure the container: `csptest -keyset -enum_cont -fqcn -verifyc`
3. Set the environment: `export STELLAOPS_GOST_CONTAINER_NAME="your-container"`

### eIDAS TSP Connection Error

**Cause**: TSP endpoint unreachable or invalid API key.

**Solution**:
1. Verify the TSP endpoint: `curl -I https://tsp.example.eu/api/v1`
2. Check the API key: `export STELLAOPS_EIDAS_TSP_API_KEY="valid-key"`
3. Review TSP logs for authentication errors

## Related Documentation

- [Cryptography Architecture](../architecture/cryptography.md)
- [Compliance Matrix](../compliance/crypto-standards.md)
- [Configuration Reference](../configuration/crypto.md)
- [Air-Gap Operation](../operations/airgap.md#crypto-bundles)

## Security Considerations

1. **Key Protection**: Never commit private keys or credentials to version control
2. **Environment Variables**: Use secure secret management (Vault, AWS Secrets Manager)
3. **Trust Policies**: Validate certificate chains and revocation status
4. **Audit Trail**: Enable crypto operation logging for compliance
5. **Key Rotation**: Implement periodic key rotation policies
6. **Disaster Recovery**: Back up key material securely

## Support

For regional crypto compliance questions:
- **GOST**: Contact your CryptoPro representative
- **eIDAS**: Consult a qualified Trust Service Provider (TSP)
- **SM**: Contact an OSCCA-certified crypto service provider
- **General**: StellaOps support team (support@stella-ops.org)

1017
docs/modules/cli/guides/crypto/crypto-plugins.md
Normal file
File diff suppressed because it is too large
694
docs/modules/cli/guides/distribution-matrix.md
Normal file
@@ -0,0 +1,694 @@

# stella CLI - Build and Distribution Matrix

**Sprint:** SPRINT_4100_0006_0006 - CLI Documentation Overhaul

## Overview

StellaOps CLI is distributed in **four regional variants** to comply with export control regulations and cryptographic standards. Each distribution includes different cryptographic plugins based on regional requirements.

**Key Principles:**
1. **Build-time Selection**: Crypto plugins are conditionally compiled based on build flags
2. **Export Compliance**: Each distribution complies with export control laws
3. **Deterministic Builds**: Same source + flags = same binary (reproducible builds)
4. **Validation**: Automated validation ensures correct plugin inclusion

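One way to spot-check principle 3, assuming deterministic build settings are in effect for the project; the output directories are illustrative:

```bash
# Build the same distribution twice and compare per-file hashes
dotnet publish src/Cli/StellaOps.Cli --configuration Release --runtime linux-x64 \
  --self-contained true --output dist/check-a
dotnet publish src/Cli/StellaOps.Cli --configuration Release --runtime linux-x64 \
  --self-contained true --output dist/check-b

diff <(cd dist/check-a && find . -type f -exec sha256sum {} + | sort) \
     <(cd dist/check-b && find . -type f -exec sha256sum {} + | sort) \
  && echo "Outputs match" || echo "Outputs differ - investigate non-determinism"
```
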
---

## Distribution Matrix

| Distribution | Crypto Plugins | Build Flag | Target Audience | Export Restrictions |
|--------------|----------------|------------|-----------------|---------------------|
| **stella-international** | Default (.NET, BouncyCastle) | None | Global (unrestricted) | ✅ No restrictions |
| **stella-russia** | Default + GOST | `StellaOpsEnableGOST=true` | Russia, CIS states | ⚠️ Russia/CIS only |
| **stella-eu** | Default + eIDAS | `StellaOpsEnableEIDAS=true` | European Union | ⚠️ EU/EEA only |
| **stella-china** | Default + SM | `StellaOpsEnableSM=true` | China | ⚠️ China only |

---

## Crypto Provider Matrix

| Provider | International | Russia | EU | China |
|----------|---------------|--------|-----|-------|
| **.NET Crypto** (RSA, ECDSA, EdDSA) | ✅ | ✅ | ✅ | ✅ |
| **BouncyCastle** (Extended algorithms) | ✅ | ✅ | ✅ | ✅ |
| **GOST** (R 34.10-2012, R 34.11-2012) | ❌ | ✅ | ❌ | ❌ |
| **eIDAS** (QES, AES, AdES) | ❌ | ❌ | ✅ | ❌ |
| **SM** (SM2, SM3, SM4) | ❌ | ❌ | ❌ | ✅ |

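A built binary can be cross-checked against this matrix at runtime with the `stella crypto profiles` command documented in the crypto commands guide; the exact output layout may differ:

```bash
dist/stella-russia-linux-x64/stella crypto profiles
# For a stella-russia build, expect GOST to be reported as Enabled and eIDAS/SM as Disabled
```
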
---

## Build Instructions

### Prerequisites

- .NET 10 SDK
- Git
- Docker (for Linux builds on Windows/macOS)

### Build Environment Setup

```bash
# Clone repository
git clone https://git.stella-ops.org/stella-ops.org/git.stella-ops.org
cd git.stella-ops.org

# Verify .NET SDK
dotnet --version
# Expected: 10.0.0 or later
```

---

## Building Regional Distributions

### 1. International Distribution (Default)

**Includes:** Default crypto providers only (no regional algorithms)

**Build Command:**
```bash
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime linux-x64 \
  --self-contained true \
  --output dist/stella-international-linux-x64
```

**Supported Platforms:**
- `linux-x64` - Linux x86_64
- `linux-arm64` - Linux ARM64
- `osx-x64` - macOS Intel
- `osx-arm64` - macOS Apple Silicon
- `win-x64` - Windows x64

**Example (all platforms):**
```bash
# Linux x64
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime linux-x64 \
  --self-contained true \
  --output dist/stella-international-linux-x64

# Linux ARM64
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime linux-arm64 \
  --self-contained true \
  --output dist/stella-international-linux-arm64

# macOS Intel
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime osx-x64 \
  --self-contained true \
  --output dist/stella-international-osx-x64

# macOS Apple Silicon
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime osx-arm64 \
  --self-contained true \
  --output dist/stella-international-osx-arm64

# Windows x64
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime win-x64 \
  --self-contained true \
  --output dist/stella-international-win-x64
```

---

### 2. Russia Distribution (GOST)

**Includes:** Default + GOST crypto providers

**Build Command:**
```bash
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime linux-x64 \
  --self-contained true \
  -p:StellaOpsEnableGOST=true \
  -p:DefineConstants="STELLAOPS_ENABLE_GOST" \
  --output dist/stella-russia-linux-x64
```

**Important:** The build flag `StellaOpsEnableGOST=true` conditionally includes GOST plugin projects, and `DefineConstants` enables `#if STELLAOPS_ENABLE_GOST` preprocessor directives.

**Multi-platform Example:**
```bash
#!/bin/bash
# build-russia.sh - Build all Russia distributions

set -e

RUNTIMES=("linux-x64" "linux-arm64" "osx-x64" "osx-arm64" "win-x64")

for runtime in "${RUNTIMES[@]}"; do
  echo "Building stella-russia for $runtime..."
  dotnet publish src/Cli/StellaOps.Cli \
    --configuration Release \
    --runtime "$runtime" \
    --self-contained true \
    -p:StellaOpsEnableGOST=true \
    -p:DefineConstants="STELLAOPS_ENABLE_GOST" \
    --output "dist/stella-russia-$runtime"
done

echo "All Russia distributions built successfully"
```

---

### 3. EU Distribution (eIDAS)

**Includes:** Default + eIDAS crypto providers

**Build Command:**
```bash
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime linux-x64 \
  --self-contained true \
  -p:StellaOpsEnableEIDAS=true \
  -p:DefineConstants="STELLAOPS_ENABLE_EIDAS" \
  --output dist/stella-eu-linux-x64
```

**Multi-platform Example:**
```bash
#!/bin/bash
# build-eu.sh - Build all EU distributions

set -e

RUNTIMES=("linux-x64" "linux-arm64" "osx-x64" "osx-arm64" "win-x64")

for runtime in "${RUNTIMES[@]}"; do
  echo "Building stella-eu for $runtime..."
  dotnet publish src/Cli/StellaOps.Cli \
    --configuration Release \
    --runtime "$runtime" \
    --self-contained true \
    -p:StellaOpsEnableEIDAS=true \
    -p:DefineConstants="STELLAOPS_ENABLE_EIDAS" \
    --output "dist/stella-eu-$runtime"
done

echo "All EU distributions built successfully"
```

---

### 4. China Distribution (SM)

**Includes:** Default + SM crypto providers

**Build Command:**
```bash
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime linux-x64 \
  --self-contained true \
  -p:StellaOpsEnableSM=true \
  -p:DefineConstants="STELLAOPS_ENABLE_SM" \
  --output dist/stella-china-linux-x64
```

**Multi-platform Example:**
```bash
#!/bin/bash
# build-china.sh - Build all China distributions

set -e

RUNTIMES=("linux-x64" "linux-arm64" "osx-x64" "osx-arm64" "win-x64")

for runtime in "${RUNTIMES[@]}"; do
  echo "Building stella-china for $runtime..."
  dotnet publish src/Cli/StellaOps.Cli \
    --configuration Release \
    --runtime "$runtime" \
    --self-contained true \
    -p:StellaOpsEnableSM=true \
    -p:DefineConstants="STELLAOPS_ENABLE_SM" \
    --output "dist/stella-china-$runtime"
done

echo "All China distributions built successfully"
```

---

## Build All Distributions

**Automated build script:**

```bash
#!/bin/bash
# build-all.sh - Build all distributions for all platforms

set -e

DISTRIBUTIONS=("international" "russia" "eu" "china")
RUNTIMES=("linux-x64" "linux-arm64" "osx-x64" "osx-arm64" "win-x64")

build_distribution() {
  local dist=$1
  local runtime=$2
  local flags=""

  case $dist in
    "russia")
      flags="-p:StellaOpsEnableGOST=true -p:DefineConstants=STELLAOPS_ENABLE_GOST"
      ;;
    "eu")
      flags="-p:StellaOpsEnableEIDAS=true -p:DefineConstants=STELLAOPS_ENABLE_EIDAS"
      ;;
    "china")
      flags="-p:StellaOpsEnableSM=true -p:DefineConstants=STELLAOPS_ENABLE_SM"
      ;;
  esac

  echo "Building stella-$dist for $runtime..."

  dotnet publish src/Cli/StellaOps.Cli \
    --configuration Release \
    --runtime "$runtime" \
    --self-contained true \
    $flags \
    --output "dist/stella-$dist-$runtime"

  # Create tarball (except Windows)
  if [[ ! $runtime =~ ^win ]]; then
    tar -czf "dist/stella-$dist-$runtime.tar.gz" -C "dist/stella-$dist-$runtime" .
    echo "✅ Created dist/stella-$dist-$runtime.tar.gz"
  else
    # Create zip for Windows
    (cd "dist/stella-$dist-$runtime" && zip -r "../stella-$dist-$runtime.zip" .)
    echo "✅ Created dist/stella-$dist-$runtime.zip"
  fi
}

for dist in "${DISTRIBUTIONS[@]}"; do
  for runtime in "${RUNTIMES[@]}"; do
    build_distribution "$dist" "$runtime"
  done
done

echo ""
echo "🎉 All distributions built successfully!"
echo "See dist/ directory for artifacts"
```

---

## Distribution Validation

### Automated Validation Script

```bash
#!/bin/bash
# validate-distribution.sh - Validate distribution has correct plugins

set -e

DISTRIBUTION=$1  # international, russia, eu, china
BINARY_PATH=$2

if [ -z "$DISTRIBUTION" ] || [ -z "$BINARY_PATH" ]; then
  echo "Usage: $0 <distribution> <binary-path>"
  echo "Example: $0 russia dist/stella-russia-linux-x64/stella"
  exit 1
fi

echo "Validating $DISTRIBUTION distribution: $BINARY_PATH"
echo ""

# Function to check for symbol in binary
has_symbol() {
  local symbol=$1
  if command -v objdump &> /dev/null; then
    objdump -p "$BINARY_PATH" 2>/dev/null | grep -q "$symbol"
  elif command -v nm &> /dev/null; then
    nm "$BINARY_PATH" 2>/dev/null | grep -q "$symbol"
  else
    # Fallback: check if binary contains string
    strings "$BINARY_PATH" 2>/dev/null | grep -q "$symbol"
  fi
}

# Validation rules
validate_international() {
  echo "Checking International distribution..."

  # Should NOT contain regional plugins
  if has_symbol "GostCryptoProvider" || \
     has_symbol "EidasCryptoProvider" || \
     has_symbol "SmCryptoProvider"; then
    echo "❌ FAIL: International distribution contains restricted plugins"
    return 1
  fi

  echo "✅ PASS: International distribution valid (no restricted plugins)"
  return 0
}

validate_russia() {
  echo "Checking Russia distribution..."

  # Should contain GOST
  if ! has_symbol "GostCryptoProvider"; then
    echo "❌ FAIL: Russia distribution missing GOST plugin"
    return 1
|
||||
fi
|
||||
|
||||
# Should NOT contain eIDAS or SM
|
||||
if has_symbol "EidasCryptoProvider" || has_symbol "SmCryptoProvider"; then
|
||||
echo "❌ FAIL: Russia distribution contains non-GOST regional plugins"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "✅ PASS: Russia distribution valid (GOST included, no other regional plugins)"
|
||||
return 0
|
||||
}
|
||||
|
||||
validate_eu() {
|
||||
echo "Checking EU distribution..."
|
||||
|
||||
# Should contain eIDAS
|
||||
if ! has_symbol "EidasCryptoProvider"; then
|
||||
echo "❌ FAIL: EU distribution missing eIDAS plugin"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Should NOT contain GOST or SM
|
||||
if has_symbol "GostCryptoProvider" || has_symbol "SmCryptoProvider"; then
|
||||
echo "❌ FAIL: EU distribution contains non-eIDAS regional plugins"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "✅ PASS: EU distribution valid (eIDAS included, no other regional plugins)"
|
||||
return 0
|
||||
}
|
||||
|
||||
validate_china() {
|
||||
echo "Checking China distribution..."
|
||||
|
||||
# Should contain SM
|
||||
if ! has_symbol "SmCryptoProvider"; then
|
||||
echo "❌ FAIL: China distribution missing SM plugin"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Should NOT contain GOST or eIDAS
|
||||
if has_symbol "GostCryptoProvider" || has_symbol "EidasCryptoProvider"; then
|
||||
echo "❌ FAIL: China distribution contains non-SM regional plugins"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo "✅ PASS: China distribution valid (SM included, no other regional plugins)"
|
||||
return 0
|
||||
}
|
||||
|
||||
# Run validation
|
||||
case $DISTRIBUTION in
|
||||
"international")
|
||||
validate_international
|
||||
;;
|
||||
"russia")
|
||||
validate_russia
|
||||
;;
|
||||
"eu")
|
||||
validate_eu
|
||||
;;
|
||||
"china")
|
||||
validate_china
|
||||
;;
|
||||
*)
|
||||
echo "❌ ERROR: Unknown distribution '$DISTRIBUTION'"
|
||||
echo "Valid distributions: international, russia, eu, china"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
exit $?
|
||||
```
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
# Validate Russia distribution
|
||||
./validate-distribution.sh russia dist/stella-russia-linux-x64/stella
|
||||
|
||||
# Output:
|
||||
# Validating russia distribution: dist/stella-russia-linux-x64/stella
|
||||
#
|
||||
# Checking Russia distribution...
|
||||
# ✅ PASS: Russia distribution valid (GOST included, no other regional plugins)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Runtime Validation
|
||||
|
||||
Verify correct plugins are available at runtime:
|
||||
|
||||
```bash
|
||||
# International distribution
|
||||
./stella crypto providers
|
||||
# Expected output:
|
||||
# Available Crypto Providers:
|
||||
# - default (.NET Crypto, BouncyCastle)
|
||||
|
||||
# Russia distribution
|
||||
./stella crypto providers
|
||||
# Expected output:
|
||||
# Available Crypto Providers:
|
||||
# - default (.NET Crypto, BouncyCastle)
|
||||
# - gost (GOST R 34.10-2012, GOST R 34.11-2012)
|
||||
|
||||
# EU distribution
|
||||
./stella crypto providers
|
||||
# Expected output:
|
||||
# Available Crypto Providers:
|
||||
# - default (.NET Crypto, BouncyCastle)
|
||||
# - eidas (QES, AES, AdES)
|
||||
|
||||
# China distribution
|
||||
./stella crypto providers
|
||||
# Expected output:
|
||||
# Available Crypto Providers:
|
||||
# - default (.NET Crypto, BouncyCastle)
|
||||
# - sm (SM2, SM3, SM4)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Packaging
|
||||
|
||||
### Tarball Creation
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# package.sh - Create distribution tarballs
|
||||
|
||||
DIST=$1 # stella-russia-linux-x64
|
||||
OUTPUT_DIR="dist"
|
||||
|
||||
cd "$OUTPUT_DIR/$DIST"
|
||||
|
||||
# Create tarball
|
||||
tar -czf "../$DIST.tar.gz" .
|
||||
|
||||
echo "✅ Created $OUTPUT_DIR/$DIST.tar.gz"
|
||||
```
|
||||
|
||||
### Checksums
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# checksums.sh - Generate checksums for all distributions
|
||||
|
||||
cd dist
|
||||
|
||||
for tarball in *.tar.gz *.zip; do
|
||||
if [ -f "$tarball" ]; then
|
||||
sha256sum "$tarball" >> checksums.txt
|
||||
fi
|
||||
done
|
||||
|
||||
echo "✅ Checksums written to dist/checksums.txt"
|
||||
cat checksums.txt
|
||||
```
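
Consumers can then verify a downloaded artefact against the generated file. A minimal verification step, assuming GNU coreutils' `sha256sum` and that `checksums.txt` sits next to the archives:

```bash
# Verify downloaded artefacts against the generated checksum file
# (--ignore-missing skips entries for files not present locally)
cd dist
sha256sum --check --ignore-missing checksums.txt
# Example output:
# stella-russia-linux-x64.tar.gz: OK
```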
|
||||
|
||||
---
|
||||
|
||||
## CI/CD Integration
|
||||
|
||||
### GitHub Actions / Gitea Actions
|
||||
|
||||
```yaml
|
||||
name: Build and Release CLI
|
||||
|
||||
on:
|
||||
push:
|
||||
tags:
|
||||
- 'v*'
|
||||
|
||||
jobs:
|
||||
build-matrix:
|
||||
strategy:
|
||||
matrix:
|
||||
distribution: [international, russia, eu, china]
|
||||
runtime: [linux-x64, linux-arm64, osx-x64, osx-arm64, win-x64]
|
||||
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Setup .NET
|
||||
uses: actions/setup-dotnet@v4
|
||||
with:
|
||||
dotnet-version: '10.0.x'
|
||||
|
||||
- name: Build Distribution
|
||||
run: |
|
||||
FLAGS=""
|
||||
case "${{ matrix.distribution }}" in
|
||||
"russia")
|
||||
FLAGS="-p:StellaOpsEnableGOST=true -p:DefineConstants=STELLAOPS_ENABLE_GOST"
|
||||
;;
|
||||
"eu")
|
||||
FLAGS="-p:StellaOpsEnableEIDAS=true -p:DefineConstants=STELLAOPS_ENABLE_EIDAS"
|
||||
;;
|
||||
"china")
|
||||
FLAGS="-p:StellaOpsEnableSM=true -p:DefineConstants=STELLAOPS_ENABLE_SM"
|
||||
;;
|
||||
esac
|
||||
|
||||
dotnet publish src/Cli/StellaOps.Cli \
|
||||
--configuration Release \
|
||||
--runtime ${{ matrix.runtime }} \
|
||||
--self-contained true \
|
||||
$FLAGS \
|
||||
--output dist/stella-${{ matrix.distribution }}-${{ matrix.runtime }}
|
||||
|
||||
- name: Validate Distribution
|
||||
run: |
|
||||
chmod +x scripts/validate-distribution.sh
|
||||
./scripts/validate-distribution.sh \
|
||||
${{ matrix.distribution }} \
|
||||
dist/stella-${{ matrix.distribution }}-${{ matrix.runtime }}/stella
|
||||
|
||||
- name: Create Tarball
|
||||
if: ${{ !contains(matrix.runtime, 'win') }}
|
||||
run: |
|
||||
cd dist/stella-${{ matrix.distribution }}-${{ matrix.runtime }}
|
||||
tar -czf ../stella-${{ matrix.distribution }}-${{ matrix.runtime }}.tar.gz .
|
||||
|
||||
- name: Upload Artifact
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: stella-${{ matrix.distribution }}-${{ matrix.runtime }}
|
||||
path: dist/stella-${{ matrix.distribution }}-${{ matrix.runtime }}.tar.gz
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Distribution Deployment
|
||||
|
||||
### Release Structure
|
||||
|
||||
```
|
||||
releases/
|
||||
├── v2.1.0/
|
||||
│ ├── stella-international-linux-x64.tar.gz
|
||||
│ ├── stella-international-linux-arm64.tar.gz
|
||||
│ ├── stella-international-osx-x64.tar.gz
|
||||
│ ├── stella-international-osx-arm64.tar.gz
|
||||
│ ├── stella-international-win-x64.zip
|
||||
│ ├── stella-russia-linux-x64.tar.gz
|
||||
│ ├── stella-russia-linux-arm64.tar.gz
|
||||
│ ├── stella-russia-osx-x64.tar.gz
|
||||
│ ├── stella-russia-osx-arm64.tar.gz
|
||||
│ ├── stella-russia-win-x64.zip
|
||||
│ ├── stella-eu-linux-x64.tar.gz
|
||||
│ ├── stella-eu-linux-arm64.tar.gz
|
||||
│ ├── stella-eu-osx-x64.tar.gz
|
||||
│ ├── stella-eu-osx-arm64.tar.gz
|
||||
│ ├── stella-eu-win-x64.zip
|
||||
│ ├── stella-china-linux-x64.tar.gz
|
||||
│ ├── stella-china-linux-arm64.tar.gz
|
||||
│ ├── stella-china-osx-x64.tar.gz
|
||||
│ ├── stella-china-osx-arm64.tar.gz
|
||||
│ ├── stella-china-win-x64.zip
|
||||
│ ├── checksums.txt
|
||||
│ └── RELEASE_NOTES.md
|
||||
└── latest -> v2.1.0
|
||||
```
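
The `latest` entry in the layout above is a symlink to the newest version directory; a promotion step can repoint it in place. A minimal sketch (paths illustrative):

```bash
# Repoint the `latest` symlink when promoting a release
cd releases
ln -sfn v2.1.0 latest
ls -ld latest
# latest -> v2.1.0
```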
|
||||
|
||||
---
|
||||
|
||||
## Download Links
|
||||
|
||||
**Public Release Server:**
|
||||
```
|
||||
https://releases.stella-ops.org/cli/
|
||||
├── latest/
|
||||
│ ├── stella-international-linux-x64.tar.gz
|
||||
│ ├── stella-russia-linux-x64.tar.gz
|
||||
│ ├── stella-eu-linux-x64.tar.gz
|
||||
│ └── stella-china-linux-x64.tar.gz
|
||||
├── v2.1.0/
|
||||
├── v2.0.0/
|
||||
└── checksums.txt
|
||||
```
|
||||
|
||||
**User Installation:**
|
||||
```bash
|
||||
# International (unrestricted)
|
||||
wget https://releases.stella-ops.org/cli/latest/stella-international-linux-x64.tar.gz
|
||||
|
||||
# Russia (GOST)
|
||||
wget https://releases.stella-ops.org/cli/russia/latest/stella-russia-linux-x64.tar.gz
|
||||
|
||||
# EU (eIDAS)
|
||||
wget https://releases.stella-ops.org/cli/eu/latest/stella-eu-linux-x64.tar.gz
|
||||
|
||||
# China (SM)
|
||||
wget https://releases.stella-ops.org/cli/china/latest/stella-china-linux-x64.tar.gz
|
||||
```
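
A typical install sequence combines the download with checksum verification and placement on `PATH`. The sketch below assumes the published `checksums.txt` covers the latest artefacts and that the archive unpacks a `stella` binary at its root:

```bash
# End-to-end install sketch (Russia distribution shown; swap the artefact name for other regions)
ARTEFACT="stella-russia-linux-x64.tar.gz"
wget "https://releases.stella-ops.org/cli/russia/latest/$ARTEFACT"
wget https://releases.stella-ops.org/cli/checksums.txt

# Verify before unpacking (GNU coreutils)
sha256sum --check --ignore-missing checksums.txt

# Unpack and put the binary on PATH
mkdir -p stella-cli && tar -xzf "$ARTEFACT" -C stella-cli
sudo install -m 0755 stella-cli/stella /usr/local/bin/stella
stella --version
```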
|
||||
|
||||
---
|
||||
|
||||
## Legal & Export Control
|
||||
|
||||
### Export Control Statement
|
||||
|
||||
> StellaOps CLI regional distributions contain cryptographic software subject to export control laws.
|
||||
>
|
||||
> - **stella-international**: No export restrictions (standard commercial crypto)
|
||||
> - **stella-russia**: Authorized for Russia and CIS states only
|
||||
> - **stella-eu**: Authorized for EU/EEA member states only
|
||||
> - **stella-china**: Authorized for China only
|
||||
>
|
||||
> Unauthorized export, re-export, or transfer may violate applicable laws. Users are responsible for compliance with export control regulations in their jurisdiction.
|
||||
|
||||
### License Compliance
|
||||
|
||||
All distributions are licensed under **AGPL-3.0-or-later**, with regional plugins subject to additional vendor licenses (e.g., CryptoPro CSP requires a commercial license).
|
||||
|
||||
---
|
||||
|
||||
## See Also
|
||||
|
||||
- [CLI Overview](README.md) - Installation and quick start
|
||||
- [CLI Architecture](architecture.md) - Plugin architecture
|
||||
- [Command Reference](command-reference.md) - Command usage
|
||||
- [Compliance Guide](compliance-guide.md) - Regional compliance requirements
|
||||
- [Crypto Plugins](crypto-plugins.md) - Plugin development
|
||||
- [Troubleshooting](troubleshooting.md) - Build and validation issues
|
||||
233
docs/modules/cli/guides/keyboard-shortcuts.md
Normal file
@@ -0,0 +1,233 @@
|
||||
# Keyboard Shortcuts Reference
|
||||
|
||||
**Sprint:** SPRINT_3600_0001_0001
|
||||
**Task:** TRI-MASTER-0010 - Document keyboard shortcuts in user guide
|
||||
|
||||
## Overview
|
||||
|
||||
StellaOps supports keyboard shortcuts for efficient triage and navigation. Shortcuts are available in the Web UI and CLI interactive modes.
|
||||
|
||||
## Triage View Shortcuts
|
||||
|
||||
### Navigation
|
||||
|
||||
| Key | Action | Context |
|
||||
|-----|--------|---------|
|
||||
| `j` / `↓` | Next finding | Finding list |
|
||||
| `k` / `↑` | Previous finding | Finding list |
|
||||
| `g g` | Go to first finding | Finding list |
|
||||
| `G` | Go to last finding | Finding list |
|
||||
| `Enter` | Open finding details | Finding list |
|
||||
| `Esc` | Close panel / Cancel | Any |
|
||||
|
||||
### Decision Actions
|
||||
|
||||
| Key | Action | Context |
|
||||
|-----|--------|---------|
|
||||
| `a` | Mark as Affected | Finding selected |
|
||||
| `n` | Mark as Not Affected | Finding selected |
|
||||
| `w` | Mark as Won't Fix | Finding selected |
|
||||
| `f` | Mark as False Positive | Finding selected |
|
||||
| `u` | Undo last decision | Any |
|
||||
| `Ctrl+z` | Undo | Any |
|
||||
|
||||
### Evidence & Context
|
||||
|
||||
| Key | Action | Context |
|
||||
|-----|--------|---------|
|
||||
| `e` | Toggle evidence panel | Finding selected |
|
||||
| `g` | Toggle graph view | Finding selected |
|
||||
| `c` | Show call stack | Finding selected |
|
||||
| `v` | Show VEX status | Finding selected |
|
||||
| `p` | Show provenance | Finding selected |
|
||||
| `d` | Show diff | Finding selected |
|
||||
|
||||
### Search & Filter
|
||||
|
||||
| Key | Action | Context |
|
||||
|-----|--------|---------|
|
||||
| `/` | Open search | Global |
|
||||
| `Ctrl+f` | Find in page | Global |
|
||||
| `Ctrl+k` | Quick filter | Global |
|
||||
| `x` | Clear filters | Filter active |
|
||||
|
||||
### View Controls
|
||||
|
||||
| Key | Action | Context |
|
||||
|-----|--------|---------|
|
||||
| `1` | Show all findings | View |
|
||||
| `2` | Show untriaged only | View |
|
||||
| `3` | Show affected only | View |
|
||||
| `4` | Show not affected | View |
|
||||
| `[` | Collapse all | List view |
|
||||
| `]` | Expand all | List view |
|
||||
| `Tab` | Next panel | Multi-panel |
|
||||
| `Shift+Tab` | Previous panel | Multi-panel |
|
||||
|
||||
### Bulk Actions
|
||||
|
||||
| Key | Action | Context |
|
||||
|-----|--------|---------|
|
||||
| `Space` | Toggle selection | Finding |
|
||||
| `Shift+j` | Select next | Selection mode |
|
||||
| `Shift+k` | Select previous | Selection mode |
|
||||
| `Ctrl+a` | Select all visible | Finding list |
|
||||
| `Shift+a` | Bulk: Affected | Selection |
|
||||
| `Shift+n` | Bulk: Not Affected | Selection |
|
||||
|
||||
## CLI Batch Mode Shortcuts
|
||||
|
||||
### Navigation
|
||||
|
||||
| Key | Action |
|
||||
|-----|--------|
|
||||
| `j` / `↓` | Next finding |
|
||||
| `k` / `↑` | Previous finding |
|
||||
| `Page Down` | Skip 10 forward |
|
||||
| `Page Up` | Skip 10 back |
|
||||
| `Home` | First finding |
|
||||
| `End` | Last finding |
|
||||
|
||||
### Decisions
|
||||
|
||||
| Key | Action |
|
||||
|-----|--------|
|
||||
| `a` | Affected |
|
||||
| `n` | Not affected |
|
||||
| `w` | Won't fix |
|
||||
| `f` | False positive |
|
||||
| `s` | Skip (no decision) |
|
||||
| `u` | Undo last |
|
||||
|
||||
### Information
|
||||
|
||||
| Key | Action |
|
||||
|-----|--------|
|
||||
| `e` | Show evidence |
|
||||
| `i` | Show full info |
|
||||
| `?` | Show help |
|
||||
|
||||
### Control
|
||||
|
||||
| Key | Action |
|
||||
|-----|--------|
|
||||
| `q` | Save and quit |
|
||||
| `Q` | Quit without saving |
|
||||
| `Ctrl+c` | Abort |
|
||||
|
||||
## Graph View Shortcuts
|
||||
|
||||
| Key | Action |
|
||||
|-----|--------|
|
||||
| `+` / `=` | Zoom in |
|
||||
| `-` | Zoom out |
|
||||
| `0` | Reset zoom |
|
||||
| `Arrow keys` | Pan view |
|
||||
| `f` | Fit to screen |
|
||||
| `h` | Highlight path to root |
|
||||
| `l` | Highlight dependents |
|
||||
| `Enter` | Select node |
|
||||
| `Esc` | Deselect |
|
||||
|
||||
## Dashboard Shortcuts
|
||||
|
||||
| Key | Action |
|
||||
|-----|--------|
|
||||
| `r` | Refresh data |
|
||||
| `t` | Toggle sidebar |
|
||||
| `m` | Open menu |
|
||||
| `s` | Open settings |
|
||||
| `?` | Show shortcuts |
|
||||
|
||||
## Scan View Shortcuts
|
||||
|
||||
| Key | Action |
|
||||
|-----|--------|
|
||||
| `j` / `k` | Navigate scans |
|
||||
| `Enter` | Open scan details |
|
||||
| `d` | Download report |
|
||||
| `c` | Compare scans |
|
||||
| `r` | Rescan |
|
||||
|
||||
## Configuration
|
||||
|
||||
### Enable/Disable Shortcuts
|
||||
|
||||
```yaml
|
||||
# ~/.stellaops/ui.yaml
|
||||
keyboard:
|
||||
enabled: true
|
||||
vim_mode: true # Use vim-style navigation
|
||||
|
||||
# Customize keys
|
||||
custom:
|
||||
next_finding: "j"
|
||||
prev_finding: "k"
|
||||
affected: "a"
|
||||
not_affected: "n"
|
||||
```
|
||||
|
||||
### CLI Configuration
|
||||
|
||||
```yaml
|
||||
# ~/.stellaops/cli.yaml
|
||||
interactive:
|
||||
keyboard_enabled: true
|
||||
confirm_quit: true
|
||||
auto_save: true
|
||||
```
|
||||
|
||||
### Web UI Settings
|
||||
|
||||
Access via **Settings → Keyboard Shortcuts**:
|
||||
|
||||
- Enable/disable shortcuts
|
||||
- Customize key bindings
|
||||
- Import/export configurations
|
||||
|
||||
## Accessibility
|
||||
|
||||
### Screen Reader Support
|
||||
|
||||
All keyboard shortcuts have equivalent menu actions:
|
||||
- Use `Alt` to access the menu bar
|
||||
- Tab navigation for all controls
|
||||
- ARIA labels for all actions
|
||||
|
||||
### Motion Preferences
|
||||
|
||||
When `prefers-reduced-motion` is set:
|
||||
- Instant transitions replace animations
|
||||
- Focus indicators remain visible longer
|
||||
|
||||
## Quick Reference Card
|
||||
|
||||
```
|
||||
┌────────────────────────────────────────────┐
|
||||
│ STELLAOPS KEYBOARD SHORTCUTS │
|
||||
├────────────────────────────────────────────┤
|
||||
│ NAVIGATION │ DECISIONS │
|
||||
│ j/k Next/Prev │ a Affected │
|
||||
│ g g First │ n Not Affected │
|
||||
│ G Last │ w Won't Fix │
|
||||
│ Enter Open │ f False Positive │
|
||||
│ Esc Close │ u Undo │
|
||||
├─────────────────────┼──────────────────────┤
|
||||
│ EVIDENCE │ VIEW │
|
||||
│ e Evidence panel │ 1 All findings │
|
||||
│ g Graph view │ 2 Untriaged │
|
||||
│ c Call stack │ 3 Affected │
|
||||
│ v VEX status │ / Search │
|
||||
├─────────────────────┼──────────────────────┤
|
||||
│ BULK │ CONTROL │
|
||||
│ Space Select │ q Save & quit │
|
||||
│ Ctrl+a Select all │ ? Help │
|
||||
│ Shift+a Bulk affect │ Ctrl+z Undo │
|
||||
└─────────────────────┴──────────────────────┘
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Triage CLI Reference](./triage-cli.md)
|
||||
- [Web UI Guide](../UI_GUIDE.md)
|
||||
- [Accessibility Guide](../accessibility.md)
|
||||
219
docs/modules/cli/guides/migration.md
Normal file
@@ -0,0 +1,219 @@
|
||||
# CLI Consolidation Migration Guide
|
||||
|
||||
**Sprint:** SPRINT_5100_0001_0001
|
||||
**Status:** In Progress
|
||||
**Effective Date:** 2025-01-01 (deprecation begins)
|
||||
**Sunset Date:** 2025-07-01 (old CLIs removed)
|
||||
|
||||
## Overview
|
||||
|
||||
StellaOps is consolidating multiple standalone CLI tools into a single unified `stella` command with plugin-based subcommands. This improves developer experience, simplifies distribution, and ensures consistent behavior across all CLI operations.
|
||||
|
||||
## Migration Summary
|
||||
|
||||
| Old CLI | New Command | Status |
|
||||
|---------|-------------|--------|
|
||||
| `stella-aoc verify` | `stella aoc verify` | Available |
|
||||
| `stella-symbols ingest` | `stella symbols ingest` | Available |
|
||||
| `stella-symbols upload` | `stella symbols upload` | Available |
|
||||
| `stella-symbols verify` | `stella symbols verify` | Available |
|
||||
| `stella-symbols health` | `stella symbols health` | Available |
|
||||
| `cryptoru` | `cryptoru` (unchanged) | Separate |
|
||||
|
||||
**Note:** `cryptoru` CLI remains separate due to regional compliance requirements.
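
Before the sunset date it is worth auditing repositories for remaining references to the old entry points. A minimal sweep (the paths are illustrative; adjust to your layout):

```bash
# Sweep a repository for remaining references to the deprecated entry points
grep -rn -E 'stella-(aoc|symbols)' scripts/ .github/ .gitlab-ci.yml 2>/dev/null \
  || echo "No deprecated CLI references found"
```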
|
||||
|
||||
## Migration Steps
|
||||
|
||||
### 1. AOC CLI Migration
|
||||
|
||||
**Before (deprecated):**
|
||||
```bash
|
||||
stella-aoc verify --since 2025-01-01 --postgres "Host=localhost;..."
|
||||
```
|
||||
|
||||
**After:**
|
||||
```bash
|
||||
stella aoc verify --since 2025-01-01 --postgres "Host=localhost;..."
|
||||
```
|
||||
|
||||
**Command Options (unchanged):**
|
||||
- `--since, -s` - Git commit SHA or ISO timestamp to verify from (required)
|
||||
- `--postgres, -p` - PostgreSQL connection string (required)
|
||||
- `--output, -o` - Path for JSON output report
|
||||
- `--ndjson, -n` - Path for NDJSON output (one violation per line)
|
||||
- `--tenant, -t` - Filter by tenant ID
|
||||
- `--dry-run` - Validate configuration without querying database
|
||||
- `--verbose, -v` - Enable verbose output
|
||||
|
||||
### 2. Symbols CLI Migration
|
||||
|
||||
#### Ingest Command
|
||||
|
||||
**Before (deprecated):**
|
||||
```bash
|
||||
stella-symbols ingest --binary ./myapp --debug ./myapp.pdb --server https://symbols.example.com
|
||||
```
|
||||
|
||||
**After:**
|
||||
```bash
|
||||
stella symbols ingest --binary ./myapp --debug ./myapp.pdb --server https://symbols.example.com
|
||||
```
|
||||
|
||||
#### Upload Command
|
||||
|
||||
**Before (deprecated):**
|
||||
```bash
|
||||
stella-symbols upload --manifest ./manifest.json --server https://symbols.example.com
|
||||
```
|
||||
|
||||
**After:**
|
||||
```bash
|
||||
stella symbols upload --manifest ./manifest.json --server https://symbols.example.com
|
||||
```
|
||||
|
||||
#### Verify Command
|
||||
|
||||
**Before (deprecated):**
|
||||
```bash
|
||||
stella-symbols verify --path ./manifest.json
|
||||
```
|
||||
|
||||
**After:**
|
||||
```bash
|
||||
stella symbols verify --path ./manifest.json
|
||||
```
|
||||
|
||||
#### Health Command
|
||||
|
||||
**Before (deprecated):**
|
||||
```bash
|
||||
stella-symbols health --server https://symbols.example.com
|
||||
```
|
||||
|
||||
**After:**
|
||||
```bash
|
||||
stella symbols health --server https://symbols.example.com
|
||||
```
|
||||
|
||||
## CI/CD Updates
|
||||
|
||||
### GitHub Actions
|
||||
|
||||
**Before:**
|
||||
```yaml
|
||||
- name: Verify AOC compliance
|
||||
run: stella-aoc verify --since ${{ github.event.before }} --postgres "$POSTGRES_CONN"
|
||||
```
|
||||
|
||||
**After:**
|
||||
```yaml
|
||||
- name: Verify AOC compliance
|
||||
run: stella aoc verify --since ${{ github.event.before }} --postgres "$POSTGRES_CONN"
|
||||
```
|
||||
|
||||
### GitLab CI
|
||||
|
||||
**Before:**
|
||||
```yaml
|
||||
aoc-verify:
|
||||
script:
|
||||
- stella-aoc verify --since $CI_COMMIT_BEFORE_SHA --postgres "$POSTGRES_CONN"
|
||||
```
|
||||
|
||||
**After:**
|
||||
```yaml
|
||||
aoc-verify:
|
||||
script:
|
||||
- stella aoc verify --since $CI_COMMIT_BEFORE_SHA --postgres "$POSTGRES_CONN"
|
||||
```
|
||||
|
||||
### Shell Scripts
|
||||
|
||||
Update any shell scripts that invoke the old CLIs:
|
||||
|
||||
```bash
|
||||
# Find and replace patterns
|
||||
sed -i 's/stella-aoc /stella aoc /g' scripts/*.sh
|
||||
sed -i 's/stella-symbols /stella symbols /g' scripts/*.sh
|
||||
```
|
||||
|
||||
## Deprecation Timeline
|
||||
|
||||
| Date | Action |
|
||||
|------|--------|
|
||||
| 2025-01-01 | Deprecation warnings added to old CLIs |
|
||||
| 2025-03-01 | Warning frequency increased (every invocation) |
|
||||
| 2025-05-01 | Old CLIs emit error + warning, still functional |
|
||||
| 2025-07-01 | Old CLIs removed from distribution |
|
||||
|
||||
## Deprecation Warnings
|
||||
|
||||
When using deprecated CLIs, you will see warnings like:
|
||||
|
||||
```
|
||||
[DEPRECATED] stella-aoc is deprecated and will be removed on 2025-07-01.
|
||||
Please migrate to: stella aoc verify ...
|
||||
See: https://docs.stellaops.io/cli/migration
|
||||
```
|
||||
|
||||
## Plugin Architecture
|
||||
|
||||
The new `stella` CLI uses a plugin architecture. Plugins are automatically discovered from:
|
||||
- `<stella-install-dir>/plugins/cli/`
|
||||
- Custom directories via `STELLAOPS_CLI_PLUGINS_DIR`
|
||||
|
||||
Each plugin provides:
|
||||
- A manifest file (`*.manifest.json`)
|
||||
- A .NET assembly implementing `ICliCommandModule`
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Plugin Not Found
|
||||
|
||||
If a subcommand is not available:
|
||||
|
||||
1. Check plugin directory exists:
|
||||
```bash
|
||||
ls $(dirname $(which stella))/plugins/cli/
|
||||
```
|
||||
|
||||
2. Verify manifest file:
|
||||
```bash
|
||||
cat $(dirname $(which stella))/plugins/cli/StellaOps.Cli.Plugins.Aoc/stellaops.cli.plugins.aoc.manifest.json
|
||||
```
|
||||
|
||||
3. Enable verbose logging:
|
||||
```bash
|
||||
stella --verbose aoc verify ...
|
||||
```
|
||||
|
||||
### Version Compatibility
|
||||
|
||||
Ensure all components are from the same release:
|
||||
```bash
|
||||
stella --version
|
||||
# StellaOps CLI v1.0.0
|
||||
```
|
||||
|
||||
## Environment Variables
|
||||
|
||||
The unified CLI respects all existing environment variables:
|
||||
|
||||
| Variable | Description |
|
||||
|----------|-------------|
|
||||
| `STELLAOPS_BACKEND_URL` | Backend API URL |
|
||||
| `STELLAOPS_CLI_PLUGINS_DIR` | Custom plugins directory |
|
||||
| `STELLAOPS_AUTHORITY_URL` | Authority service URL |
|
||||
| `STELLAOPS_LOG_LEVEL` | Logging verbosity |
|
||||
|
||||
## Getting Help
|
||||
|
||||
- Documentation: https://docs.stellaops.io/cli
|
||||
- Issues: https://github.com/stellaops/stellaops/issues
|
||||
- Migration support: support@stellaops.io
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [CLI Reference](../API_CLI_REFERENCE.md)
|
||||
- [Audit Pack Commands](./audit-pack-commands.md)
|
||||
- [Unknowns CLI Reference](./unknowns-cli-reference.md)
|
||||
508
docs/modules/cli/guides/quickstart.md
Normal file
@@ -0,0 +1,508 @@
|
||||
# stella CLI - Overview and Quick Start
|
||||
|
||||
**Sprint:** SPRINT_4100_0006_0006 - CLI Documentation Overhaul
|
||||
|
||||
## Overview
|
||||
|
||||
`stella` is the unified command-line interface for StellaOps, a self-hostable, sovereign container-security platform. It provides vulnerability scanning, SBOM generation, cryptographic signing, policy management, and platform administration capabilities.
|
||||
|
||||
**Key Features:**
|
||||
- **Vulnerability Scanning**: Container image scanning with VEX-first decisioning
|
||||
- **SBOM Generation**: SPDX 3.0.1 and CycloneDX 1.7 support
|
||||
- **Cryptographic Compliance**: Regional crypto support (GOST, eIDAS, SM algorithms)
|
||||
- **Platform Administration**: User, policy, and feed management
|
||||
- **Offline-first**: Air-gapped operation support
|
||||
- **Multi-tenant**: Tenant isolation and RBAC
|
||||
|
||||
---
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Installation
|
||||
|
||||
#### Option 1: .NET Tool (Recommended)
|
||||
|
||||
```bash
|
||||
# Install globally as .NET tool
|
||||
dotnet tool install --global StellaOps.Cli
|
||||
|
||||
# Verify installation
|
||||
stella --version
|
||||
```
|
||||
|
||||
#### Option 2: Binary Download
|
||||
|
||||
```bash
|
||||
# Download for your platform
|
||||
wget https://releases.stella-ops.org/cli/latest/stella-linux-x64.tar.gz
|
||||
tar -xzf stella-linux-x64.tar.gz
|
||||
sudo mv stella /usr/local/bin/
|
||||
|
||||
# Verify installation
|
||||
stella --version
|
||||
```
|
||||
|
||||
#### Option 3: Package Managers
|
||||
|
||||
```bash
|
||||
# Debian/Ubuntu
|
||||
sudo apt install stellaops-cli
|
||||
|
||||
# RHEL/CentOS
|
||||
sudo yum install stellaops-cli
|
||||
|
||||
# macOS (Homebrew)
|
||||
brew install stella-ops/tap/stella
|
||||
```
|
||||
|
||||
### First-time Setup
|
||||
|
||||
#### 1. Configure Backend URL
|
||||
|
||||
```bash
|
||||
# Set backend API URL
|
||||
export STELLAOPS_BACKEND_URL="https://api.stellaops.example.com"
|
||||
|
||||
# Or create config file
|
||||
mkdir -p ~/.stellaops
|
||||
cat > ~/.stellaops/config.yaml <<EOF
|
||||
StellaOps:
|
||||
Backend:
|
||||
BaseUrl: "https://api.stellaops.example.com"
|
||||
EOF
|
||||
```
|
||||
|
||||
#### 2. Authenticate
|
||||
|
||||
```bash
|
||||
# Interactive login (recommended)
|
||||
stella auth login
|
||||
|
||||
# Or use API key
|
||||
export STELLAOPS_API_KEY="your-api-key"
|
||||
stella auth whoami
|
||||
```
|
||||
|
||||
#### 3. Run Your First Scan
|
||||
|
||||
```bash
|
||||
# Scan a container image
|
||||
stella scan docker://nginx:latest --output scan-result.json
|
||||
|
||||
# View SBOM
|
||||
stella scan docker://nginx:latest --sbom-only --format spdx --output nginx.spdx.json
|
||||
|
||||
# Generate attestation
|
||||
stella scan docker://nginx:latest --attestation --output nginx.att.jsonl
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Command Categories
|
||||
|
||||
### Scanning & Analysis
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| `stella scan` | Scan container images for vulnerabilities |
|
||||
| `stella aoc` | Generate Attestation of Compliance |
|
||||
| `stella symbols` | Extract and index debug symbols |
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
# Comprehensive scan with attestation
|
||||
stella scan docker://myapp:v1.2.3 \
|
||||
--sbom-format spdx \
|
||||
--attestation \
|
||||
--vex-mode strict \
|
||||
--output scan-results/
|
||||
```
|
||||
|
||||
### Cryptography & Compliance
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| `stella crypto providers` | List available crypto providers |
|
||||
| `stella crypto sign` | Sign files with regional crypto algorithms |
|
||||
| `stella crypto verify` | Verify signatures |
|
||||
| `stella crypto profiles` | Manage crypto profiles |
|
||||
|
||||
**Example (GOST signing in Russia distribution):**
|
||||
```bash
|
||||
# Sign a document with GOST algorithm
|
||||
stella crypto sign \
|
||||
--provider gost \
|
||||
--key-id key-gost-2012 \
|
||||
--algorithm GOST12-256 \
|
||||
--file document.pdf \
|
||||
--output document.pdf.sig
|
||||
|
||||
# Verify signature
|
||||
stella crypto verify \
|
||||
--provider gost \
|
||||
--key-id key-gost-2012 \
|
||||
--algorithm GOST12-256 \
|
||||
--file document.pdf \
|
||||
--signature document.pdf.sig
|
||||
```
|
||||
|
||||
### Administration
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| `stella admin policy` | Manage platform policies |
|
||||
| `stella admin users` | User management |
|
||||
| `stella admin feeds` | Advisory feed management |
|
||||
| `stella admin system` | System operations |
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
# Add a security engineer
|
||||
stella admin users add alice@example.com --role security-engineer
|
||||
|
||||
# Export current policy
|
||||
stella admin policy export --output policy-backup.yaml
|
||||
|
||||
# Refresh vulnerability feeds
|
||||
stella admin feeds refresh --source nvd --force
|
||||
```
|
||||
|
||||
### Reporting & Export
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| `stella report` | Generate compliance reports |
|
||||
| `stella export` | Export scan results in various formats |
|
||||
| `stella query` | Query vulnerability database |
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
# Generate HTML report
|
||||
stella report --scan scan-result.json --format html --output report.html
|
||||
|
||||
# Export to CSV for spreadsheet analysis
|
||||
stella export --scan scan-result.json --format csv --output vulnerabilities.csv
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
### Configuration File Locations
|
||||
|
||||
Configuration is loaded from the following sources in order (later sources override earlier ones):
|
||||
|
||||
1. **System-wide**: `/etc/stellaops/config.yaml`
|
||||
2. **User-level**: `~/.stellaops/config.yaml`
|
||||
3. **Project-level**: `./stellaops.config.yaml`
|
||||
4. **Environment variables**: `STELLAOPS_*`
|
||||
|
||||
### Configuration Precedence
|
||||
|
||||
```
|
||||
Environment Variables > Project Config > User Config > System Config > Defaults
|
||||
```
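
A quick way to confirm the precedence on a workstation, assuming `stella config get` resolves the effective value and the double-underscore environment mapping described in the troubleshooting guide:

```bash
# User-level config sets one backend...
mkdir -p ~/.stellaops
cat > ~/.stellaops/config.yaml <<EOF
StellaOps:
  Backend:
    BaseUrl: "https://api.stellaops.example.com"
EOF
stella config get Backend.BaseUrl
# -> https://api.stellaops.example.com

# ...and an environment variable overrides it for the current shell
export STELLAOPS_BACKEND__BASEURL="https://staging.stellaops.example.com"
stella config get Backend.BaseUrl
# -> https://staging.stellaops.example.com
```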
|
||||
|
||||
### Sample Configuration
|
||||
|
||||
```yaml
|
||||
StellaOps:
|
||||
Backend:
|
||||
BaseUrl: "https://api.stellaops.example.com"
|
||||
Auth:
|
||||
OpTok:
|
||||
Enabled: true
|
||||
|
||||
Scan:
|
||||
DefaultFormat: "spdx"
|
||||
IncludeAttestations: true
|
||||
VexMode: "strict"
|
||||
|
||||
Crypto:
|
||||
DefaultProvider: "default"
|
||||
Profiles:
|
||||
- name: "prod-signing"
|
||||
provider: "default"
|
||||
algorithm: "ECDSA-P256"
|
||||
keyId: "prod-key-2024"
|
||||
|
||||
Admin:
|
||||
RequireConfirmation: true
|
||||
AuditLog:
|
||||
Enabled: true
|
||||
OutputPath: "~/.stellaops/admin-audit.jsonl"
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
| Variable | Description | Example |
|
||||
|----------|-------------|---------|
|
||||
| `STELLAOPS_BACKEND_URL` | Backend API URL | `https://api.stellaops.example.com` |
|
||||
| `STELLAOPS_API_KEY` | API key for authentication | `sk_live_...` |
|
||||
| `STELLAOPS_OFFLINE_MODE` | Enable offline mode | `true` |
|
||||
| `STELLAOPS_CRYPTO_PROVIDER` | Default crypto provider | `gost`, `eidas`, `sm` |
|
||||
| `STELLAOPS_LOG_LEVEL` | Log level | `Debug`, `Info`, `Warning`, `Error` |
|
||||
|
||||
---
|
||||
|
||||
## Distribution Variants
|
||||
|
||||
StellaOps CLI is available in **four regional distributions** to comply with export control and cryptographic regulations:
|
||||
|
||||
### 1. International (Default)
|
||||
|
||||
**Audience:** Global users (no export restrictions)
|
||||
|
||||
**Crypto Providers:**
|
||||
- .NET Crypto (RSA, ECDSA, EdDSA)
|
||||
- BouncyCastle (additional algorithms)
|
||||
|
||||
**Download:**
|
||||
```bash
|
||||
wget https://releases.stella-ops.org/cli/latest/stella-international-linux-x64.tar.gz
|
||||
```
|
||||
|
||||
### 2. Russia (GOST)
|
||||
|
||||
**Audience:** Russia, CIS states
|
||||
|
||||
**Crypto Providers:**
|
||||
- Default (.NET Crypto, BouncyCastle)
|
||||
- **GOST R 34.10-2012** (digital signature)
|
||||
- **GOST R 34.11-2012** (hash functions)
|
||||
- **GOST R 34.12-2015** (block cipher)
|
||||
|
||||
**Providers:** CryptoPro CSP, OpenSSL-GOST, PKCS#11
|
||||
|
||||
**Download:**
|
||||
```bash
|
||||
wget https://releases.stella-ops.org/cli/russia/latest/stella-russia-linux-x64.tar.gz
|
||||
```
|
||||
|
||||
**See:** [Compliance Guide - GOST](compliance-guide.md#gost-russia)
|
||||
|
||||
### 3. EU (eIDAS)
|
||||
|
||||
**Audience:** European Union
|
||||
|
||||
**Crypto Providers:**
|
||||
- Default (.NET Crypto, BouncyCastle)
|
||||
- **eIDAS Qualified Electronic Signatures (QES)**
|
||||
- **eIDAS Advanced Electronic Signatures (AES)**
|
||||
- **eIDAS AdES signatures**
|
||||
|
||||
**Standards:** ETSI EN 319 412 (certificates), ETSI EN 319 102 (policies)
|
||||
|
||||
**Download:**
|
||||
```bash
|
||||
wget https://releases.stella-ops.org/cli/eu/latest/stella-eu-linux-x64.tar.gz
|
||||
```
|
||||
|
||||
**See:** [Compliance Guide - eIDAS](compliance-guide.md#eidas-eu)
|
||||
|
||||
### 4. China (SM)
|
||||
|
||||
**Audience:** China
|
||||
|
||||
**Crypto Providers:**
|
||||
- Default (.NET Crypto, BouncyCastle)
|
||||
- **SM2** (elliptic curve signature, GM/T 0003-2012)
|
||||
- **SM3** (hash function, GM/T 0004-2012)
|
||||
- **SM4** (block cipher, GM/T 0002-2012)
|
||||
|
||||
**Providers:** GmSSL, Commercial CSPs (OSCCA-certified)
|
||||
|
||||
**Download:**
|
||||
```bash
|
||||
wget https://releases.stella-ops.org/cli/china/latest/stella-china-linux-x64.tar.gz
|
||||
```
|
||||
|
||||
**See:** [Compliance Guide - SM](compliance-guide.md#sm-china)
|
||||
|
||||
### Which Distribution Should I Use?
|
||||
|
||||
| Your Location | Distribution | Reason |
|
||||
|---------------|--------------|--------|
|
||||
| USA, Canada, Australia, etc. | **International** | No export restrictions |
|
||||
| Russia, Kazakhstan, Belarus | **Russia** | GOST compliance required for government/regulated sectors |
|
||||
| EU member states | **EU** | eIDAS compliance for qualified signatures |
|
||||
| China | **China** | SM algorithms required for government/regulated sectors |
|
||||
|
||||
---
|
||||
|
||||
## Profile Management
|
||||
|
||||
Profiles make it easy to switch between environments (dev, staging, production).
|
||||
|
||||
### Create a Profile
|
||||
|
||||
```bash
|
||||
# Create dev profile
|
||||
stella config profile create dev \
|
||||
--backend-url https://dev.stellaops.example.com \
|
||||
--crypto-provider default
|
||||
|
||||
# Create production profile with GOST
|
||||
stella config profile create prod \
|
||||
--backend-url https://api.stellaops.example.com \
|
||||
--crypto-provider gost
|
||||
```
|
||||
|
||||
### Switch Profiles
|
||||
|
||||
```bash
|
||||
# Switch to production profile
|
||||
stella config profile use prod
|
||||
|
||||
# List profiles
|
||||
stella config profile list
|
||||
|
||||
# Show active profile
|
||||
stella config profile current
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Getting Help
|
||||
|
||||
### Built-in Help
|
||||
|
||||
```bash
|
||||
# General help
|
||||
stella --help
|
||||
|
||||
# Command-specific help
|
||||
stella scan --help
|
||||
stella crypto sign --help
|
||||
stella admin users --help
|
||||
|
||||
# Show version and build info
|
||||
stella --version
|
||||
stella admin system info
|
||||
```
|
||||
|
||||
### Documentation
|
||||
|
||||
- **CLI Architecture**: [architecture.md](../architecture.md)
|
||||
- **Command Reference**: [commands/reference.md](commands/reference.md)
|
||||
- **Crypto Plugin Development**: [crypto/crypto-plugins.md](crypto/crypto-plugins.md)
|
||||
- **Compliance Guide**: [compliance.md](compliance.md)
|
||||
- **Distribution Matrix**: [distribution-matrix.md](distribution-matrix.md)
|
||||
- **Admin Guide**: [admin/admin-reference.md](admin/admin-reference.md)
|
||||
- **Troubleshooting**: [troubleshooting.md](troubleshooting.md)
|
||||
|
||||
### Community Resources
|
||||
|
||||
- **GitHub Discussions**: https://github.com/stellaops/stellaops/discussions
|
||||
- **Issue Tracker**: https://git.stella-ops.org/stella-ops.org/git.stella-ops.org/issues
|
||||
- **Documentation**: https://docs.stella-ops.org
|
||||
|
||||
---
|
||||
|
||||
## Common Workflows
|
||||
|
||||
### 1. Daily Vulnerability Scan
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# daily-scan.sh - Run daily vulnerability scan
|
||||
|
||||
IMAGE="myapp:latest"
|
||||
OUTPUT_DIR="scan-results/$(date +%Y-%m-%d)"
|
||||
|
||||
mkdir -p "$OUTPUT_DIR"
|
||||
|
||||
stella scan "docker://$IMAGE" \
|
||||
--sbom-format spdx \
|
||||
--attestation \
|
||||
--vex-mode strict \
|
||||
--output "$OUTPUT_DIR/scan-result.json"
|
||||
|
||||
# Generate HTML report
|
||||
stella report \
|
||||
--scan "$OUTPUT_DIR/scan-result.json" \
|
||||
--format html \
|
||||
--output "$OUTPUT_DIR/report.html"
|
||||
|
||||
echo "Scan complete: $OUTPUT_DIR"
|
||||
```
|
||||
|
||||
### 2. Compliance Attestation Workflow
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# compliance-workflow.sh - Generate compliance attestation
|
||||
|
||||
IMAGE="myapp:v1.2.3"
|
||||
|
||||
# 1. Scan image
|
||||
stella scan "docker://$IMAGE" --output scan.json
|
||||
|
||||
# 2. Generate SBOM
|
||||
stella scan "docker://$IMAGE" --sbom-only --format spdx --output sbom.spdx.json
|
||||
|
||||
# 3. Generate attestation
|
||||
stella aoc --scan scan.json --sbom sbom.spdx.json --output attestation.jsonl
|
||||
|
||||
# 4. Sign attestation (GOST example for Russia)
|
||||
stella crypto sign \
|
||||
--provider gost \
|
||||
--key-id compliance-key \
|
||||
--algorithm GOST12-256 \
|
||||
--file attestation.jsonl \
|
||||
--output attestation.jsonl.sig
|
||||
|
||||
# 5. Bundle everything
|
||||
tar -czf myapp-v1.2.3-compliance.tar.gz \
|
||||
scan.json \
|
||||
sbom.spdx.json \
|
||||
attestation.jsonl \
|
||||
attestation.jsonl.sig
|
||||
|
||||
echo "Compliance bundle: myapp-v1.2.3-compliance.tar.gz"
|
||||
```
|
||||
|
||||
### 3. Policy-based CI/CD Gate
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# ci-gate.sh - Fail CI build if policy violations found
|
||||
|
||||
IMAGE="$1"
|
||||
|
||||
stella scan "docker://$IMAGE" --output scan.json
|
||||
|
||||
# Check exit code
|
||||
if [ $? -ne 0 ]; then
|
||||
echo "❌ Scan failed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check for policy violations
|
||||
VIOLATIONS=$(jq '.policyViolations | length' scan.json)
|
||||
|
||||
if [ "$VIOLATIONS" -gt 0 ]; then
|
||||
echo "❌ Policy violations found: $VIOLATIONS"
|
||||
jq '.policyViolations' scan.json
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "✅ Image compliant with policy"
|
||||
exit 0
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. **Install the CLI** - Choose your distribution and install
|
||||
2. **Configure authentication** - `stella auth login`
|
||||
3. **Run your first scan** - `stella scan docker://your-image`
|
||||
4. **Explore commands** - `stella --help`
|
||||
5. **Read detailed docs** - See links above
|
||||
|
||||
For detailed architecture and plugin development, see [CLI Architecture](architecture.md).
|
||||
|
||||
For complete command reference, see [Command Reference](command-reference.md).
|
||||
|
||||
For troubleshooting, see [Troubleshooting Guide](troubleshooting.md).
|
||||
820
docs/modules/cli/guides/troubleshooting.md
Normal file
@@ -0,0 +1,820 @@
|
||||
# stella CLI - Troubleshooting Guide
|
||||
|
||||
**Sprint:** SPRINT_4100_0006_0006 - CLI Documentation Overhaul
|
||||
|
||||
## Overview
|
||||
|
||||
This guide covers common issues encountered when using the `stella` CLI and their solutions. Issues are categorized by functional area for easy navigation.
|
||||
|
||||
---
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Authentication Issues](#authentication-issues)
|
||||
2. [Crypto Plugin Issues](#crypto-plugin-issues)
|
||||
3. [Build Issues](#build-issues)
|
||||
4. [Scanning Issues](#scanning-issues)
|
||||
5. [Configuration Issues](#configuration-issues)
|
||||
6. [Network Issues](#network-issues)
|
||||
7. [Permission Issues](#permission-issues)
|
||||
8. [Distribution Validation Issues](#distribution-validation-issues)
|
||||
|
||||
---
|
||||
|
||||
## Authentication Issues
|
||||
|
||||
### Problem: `stella auth login` fails with "Authority unreachable"
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ stella auth login
|
||||
❌ Error: Failed to connect to Authority
|
||||
Authority URL: https://auth.stellaops.example.com
|
||||
Error: Connection refused
|
||||
```
|
||||
|
||||
**Possible Causes:**
|
||||
1. Authority service is down
|
||||
2. Network connectivity issues
|
||||
3. Incorrect Authority URL in configuration
|
||||
4. Firewall blocking connection
|
||||
|
||||
**Solutions:**
|
||||
|
||||
**Solution 1: Verify Authority URL**
|
||||
```bash
|
||||
# Check current Authority URL
|
||||
stella config get Backend.BaseUrl
|
||||
|
||||
# If incorrect, set correct URL
|
||||
stella config set Backend.BaseUrl https://api.stellaops.example.com
|
||||
|
||||
# Or set via environment variable
|
||||
export STELLAOPS_BACKEND_URL="https://api.stellaops.example.com"
|
||||
```
|
||||
|
||||
**Solution 2: Test network connectivity**
|
||||
```bash
|
||||
# Test if Authority is reachable
|
||||
curl -v https://auth.stellaops.example.com/health
|
||||
|
||||
# Check DNS resolution
|
||||
nslookup auth.stellaops.example.com
|
||||
```
|
||||
|
||||
**Solution 3: Enable offline cache fallback**
|
||||
```bash
|
||||
# Allow offline cache fallback (uses cached tokens)
|
||||
export STELLAOPS_AUTHORITY_ALLOW_OFFLINE_CACHE_FALLBACK=true
|
||||
export STELLAOPS_AUTHORITY_OFFLINE_CACHE_TOLERANCE=00:30:00
|
||||
|
||||
stella auth login
|
||||
```
|
||||
|
||||
**Solution 4: Use API key authentication (bypass Authority)**
|
||||
```bash
|
||||
# Use API key instead of interactive login
|
||||
export STELLAOPS_API_KEY="sk_live_your_api_key"
|
||||
|
||||
stella auth whoami
|
||||
# Output: Authenticated via API key
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Problem: `stella auth whoami` shows "Token expired"
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ stella auth whoami
|
||||
❌ Error: Token expired
|
||||
Expiration: 2025-12-22T10:00:00Z
|
||||
Please re-authenticate with 'stella auth login'
|
||||
```
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# Re-authenticate
|
||||
stella auth login
|
||||
|
||||
# Or refresh token (if supported by Authority)
|
||||
stella auth refresh
|
||||
|
||||
# Verify authentication
|
||||
stella auth whoami
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Problem: HTTP 403 "Insufficient scopes" when running admin commands
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ stella admin policy export
|
||||
❌ HTTP 403: Forbidden
|
||||
Error: Insufficient scopes. Required: admin.policy
|
||||
Your scopes: scan.read, scan.write
|
||||
```
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# Re-authenticate to obtain admin scopes
|
||||
stella auth logout
|
||||
stella auth login
|
||||
|
||||
# Verify you have admin scopes
|
||||
stella auth whoami
|
||||
# Output should include: admin.policy, admin.users, admin.feeds, admin.platform
|
||||
|
||||
# If still missing scopes, contact your platform administrator
|
||||
# to grant admin role to your account
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Crypto Plugin Issues
|
||||
|
||||
### Problem: `stella crypto sign --provider gost` fails with "Provider 'gost' not available"
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ stella crypto sign --provider gost --file document.pdf
|
||||
❌ Error: Crypto provider 'gost' not available
|
||||
Available providers: default
|
||||
```
|
||||
|
||||
**Cause:**
|
||||
You are using the **International distribution**, which does not include the GOST plugin.
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# Check which distribution you have
|
||||
stella --version
|
||||
# Output: stella CLI version 2.1.0
|
||||
# Distribution: stella-international <-- Problem!
|
||||
|
||||
# Download correct distribution for Russia/CIS
|
||||
wget https://releases.stella-ops.org/cli/russia/latest/stella-russia-linux-x64.tar.gz
|
||||
tar -xzf stella-russia-linux-x64.tar.gz
|
||||
sudo cp stella /usr/local/bin/
|
||||
|
||||
# Verify GOST provider is available
|
||||
stella crypto providers
|
||||
# Output:
|
||||
# - default (.NET Crypto, BouncyCastle)
|
||||
# - gost (GOST R 34.10-2012) <-- Now available
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Problem: GOST signing fails with "CryptoPro CSP not initialized"
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ stella crypto sign --provider gost --algorithm GOST12-256 --file document.pdf
|
||||
❌ Error: CryptoPro CSP not initialized
|
||||
Container: StellaOps-GOST-2024 not found
|
||||
```
|
||||
|
||||
**Causes:**
|
||||
1. CryptoPro CSP not installed
|
||||
2. Container not created
|
||||
3. Invalid provider configuration
|
||||
|
||||
**Solutions:**
|
||||
|
||||
**Solution 1: Verify CryptoPro CSP installation**
|
||||
```bash
|
||||
# Check if CryptoPro CSP is installed
|
||||
/opt/cprocsp/bin/amd64/csptestf -absorb -alg GR3411_2012_256
|
||||
|
||||
# If not installed, install CryptoPro CSP
|
||||
sudo ./install.sh # From CryptoPro CSP distribution
|
||||
```
|
||||
|
||||
**Solution 2: Create GOST container**
|
||||
```bash
|
||||
# Create new container
|
||||
/opt/cprocsp/bin/amd64/csptest -keyset -newkeyset -container "StellaOps-GOST-2024"
|
||||
|
||||
# List containers
|
||||
/opt/cprocsp/bin/amd64/csptest -keyset -enum_cont -verifycontext
|
||||
|
||||
# Update configuration to use correct container name
|
||||
stella config set Crypto.Providers.Gost.CryptoProCsp.ContainerName "StellaOps-GOST-2024"
|
||||
```
|
||||
|
||||
**Solution 3: Use OpenSSL-GOST instead (development only)**
|
||||
```yaml
|
||||
# appsettings.yaml
|
||||
StellaOps:
|
||||
Crypto:
|
||||
Providers:
|
||||
Gost:
|
||||
CryptoProCsp:
|
||||
Enabled: false # Disable CryptoPro CSP
|
||||
OpenSslGost:
|
||||
Enabled: true # Use OpenSSL-GOST
|
||||
```
|
||||
|
||||
**Warning:** OpenSSL-GOST is NOT FSTEC-certified and should only be used for development/testing.
|
||||
|
||||
---
|
||||
|
||||
### Problem: eIDAS signing fails with "TSP unreachable"
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ stella crypto sign --provider eidas --algorithm ECDSA-P256-QES --file contract.pdf
|
||||
❌ Error: Trust Service Provider unreachable
|
||||
TSP URL: https://tsp.example.eu/api/v1/sign
|
||||
HTTP Error: Connection refused
|
||||
```
|
||||
|
||||
**Solutions:**
|
||||
|
||||
**Solution 1: Verify TSP URL**
|
||||
```bash
|
||||
# Test TSP connectivity
|
||||
curl -v https://tsp.example.eu/api/v1/sign
|
||||
|
||||
# Update TSP URL if incorrect
|
||||
stella config set Crypto.Providers.Eidas.TspClient.TspUrl "https://correct-tsp.eu/api/v1/sign"
|
||||
```
|
||||
|
||||
**Solution 2: Check API key**
|
||||
```bash
|
||||
# Verify API key is set
|
||||
echo $EIDAS_TSP_API_KEY
|
||||
|
||||
# If not set, export it
|
||||
export EIDAS_TSP_API_KEY="your_api_key_here"
|
||||
|
||||
# Or set in configuration
|
||||
stella config set Crypto.Providers.Eidas.TspClient.ApiKey "your_api_key_here"
|
||||
```
|
||||
|
||||
**Solution 3: Use local signer for AES (not QES)**
|
||||
```yaml
|
||||
# For Advanced Electronic Signatures (not qualified)
|
||||
StellaOps:
|
||||
Crypto:
|
||||
Providers:
|
||||
Eidas:
|
||||
TspClient:
|
||||
Enabled: false
|
||||
LocalSigner:
|
||||
Enabled: true
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Build Issues
|
||||
|
||||
### Problem: Build fails with "DefineConstants 'STELLAOPS_ENABLE_GOST' not defined"
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ dotnet build -p:StellaOpsEnableGOST=true
|
||||
error CS0103: The name 'STELLAOPS_ENABLE_GOST' does not exist in the current context
|
||||
```
|
||||
|
||||
**Cause:**
|
||||
Missing `-p:DefineConstants` flag.
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# Correct build command (includes both flags)
|
||||
dotnet build \
|
||||
-p:StellaOpsEnableGOST=true \
|
||||
-p:DefineConstants="STELLAOPS_ENABLE_GOST"
|
||||
|
||||
# Or for publish:
|
||||
dotnet publish src/Cli/StellaOps.Cli \
|
||||
--configuration Release \
|
||||
--runtime linux-x64 \
|
||||
-p:StellaOpsEnableGOST=true \
|
||||
-p:DefineConstants="STELLAOPS_ENABLE_GOST"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Problem: Build succeeds but crypto plugin not available at runtime
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
# Build appears successful
|
||||
$ dotnet build -p:StellaOpsEnableGOST=true -p:DefineConstants="STELLAOPS_ENABLE_GOST"
|
||||
Build succeeded.
|
||||
|
||||
# But plugin not available
|
||||
$ ./stella crypto providers
|
||||
Available providers:
|
||||
- default
|
||||
|
||||
# GOST plugin missing!
|
||||
```
|
||||
|
||||
**Cause:**
|
||||
Plugin DLL not copied to output directory.
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# Use dotnet publish instead of dotnet build
|
||||
dotnet publish src/Cli/StellaOps.Cli \
|
||||
--configuration Release \
|
||||
--runtime linux-x64 \
|
||||
--self-contained true \
|
||||
-p:StellaOpsEnableGOST=true \
|
||||
-p:DefineConstants="STELLAOPS_ENABLE_GOST" \
|
||||
--output dist/stella-russia-linux-x64
|
||||
|
||||
# Verify GOST plugin DLL is present
|
||||
ls dist/stella-russia-linux-x64/*.dll | grep Gost
|
||||
# Expected: StellaOps.Cli.Crypto.Gost.dll
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Problem: "GLIBC version not found" when running CLI on older Linux
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ ./stella --version
|
||||
./stella: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./stella)
|
||||
```
|
||||
|
||||
**Cause:**
|
||||
CLI built with newer .NET runtime requiring newer GLIBC.
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# Check your GLIBC version
|
||||
ldd --version
|
||||
# If < 2.34, upgrade to a newer Linux distribution
|
||||
|
||||
# Or build with older .NET runtime (if possible)
|
||||
# Or use containerized version:
|
||||
docker run -it stellaops/cli:latest stella --version
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Scanning Issues
|
||||
|
||||
### Problem: `stella scan` fails with "Image not found"
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ stella scan docker://nginx:latest
|
||||
❌ Error: Image not found
|
||||
Image: docker://nginx:latest
|
||||
```
|
||||
|
||||
**Solutions:**
|
||||
|
||||
**Solution 1: Pull image first**
|
||||
```bash
|
||||
# Pull image from Docker registry
|
||||
docker pull nginx:latest
|
||||
|
||||
# Then scan
|
||||
stella scan docker://nginx:latest
|
||||
```
|
||||
|
||||
**Solution 2: Scan local tar archive**
|
||||
```bash
|
||||
# Export image to tar
|
||||
docker save nginx:latest -o nginx.tar
|
||||
|
||||
# Scan tar archive
|
||||
stella scan tar://nginx.tar
|
||||
```
|
||||
|
||||
**Solution 3: Specify registry explicitly**
|
||||
```bash
|
||||
# Use fully-qualified image reference
|
||||
stella scan docker://docker.io/library/nginx:latest
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Problem: Scan succeeds but no vulnerabilities found (expected some)
|
||||
|
||||
**Symptoms:**
|
||||
```
|
||||
$ stella scan docker://vulnerable-app:latest
|
||||
Scan complete: 0 vulnerabilities found
|
||||
```
|
||||
|
||||
**Possible Causes:**
|
||||
1. Advisory feeds not synchronized
|
||||
2. Offline mode with stale data
|
||||
3. VEX mode filtering vulnerabilities
|
||||
|
||||
**Solutions:**
|
||||
|
||||
**Solution 1: Refresh advisory feeds (admin)**
|
||||
```bash
|
||||
stella admin feeds refresh --source nvd --force
|
||||
stella admin feeds refresh --source osv --force
|
||||
```
|
||||
|
||||
**Solution 2: Check feed status**
|
||||
```bash
|
||||
stella admin feeds status
|
||||
# Output:
|
||||
# Feed Last Sync Status
|
||||
# ────────────────────────────────────────
|
||||
# NVD 2025-12-23 10:00 ✅ UP
|
||||
# OSV 2025-12-23 09:45 ⚠️ STALE (12 hours old)
|
||||
```
|
||||
|
||||
**Solution 3: Disable VEX filtering**
|
||||
```bash
|
||||
# Scan with VEX mode disabled
|
||||
stella scan docker://vulnerable-app:latest --vex-mode disabled
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Configuration Issues

### Problem: "Configuration file not found"

**Symptoms:**

```
$ stella config show
⚠️ Warning: No configuration file found
   Using default configuration
```

**Solution:**

```bash
# Create user configuration directory
mkdir -p ~/.stellaops

# Create configuration file
cat > ~/.stellaops/config.yaml <<EOF
StellaOps:
  Backend:
    BaseUrl: "https://api.stellaops.example.com"
EOF

# Verify configuration is loaded
stella config show
```

---

### Problem: Environment variables not overriding configuration

**Symptoms:**

```
$ export STELLAOPS_BACKEND_URL="https://test.example.com"
$ stella config get Backend.BaseUrl
https://api.stellaops.example.com   # Still shows old value!
```

**Cause:**

Incorrect environment variable format.

**Solution:**

```bash
# Correct environment variable format (double underscore for nested properties)
export STELLAOPS_BACKEND__BASEURL="https://test.example.com"
#                        ^^ Note: double underscore

# Verify
stella config get Backend.BaseUrl
# Output: https://test.example.com   # Now correct
```

**Environment Variable Format Rules:**

- Prefix: `STELLAOPS_`
- Nested properties: Double underscore `__`
- Array index: Double underscore + index `__0`, `__1`, etc.

**Examples:**

```bash
# Simple property
export STELLAOPS_BACKEND__BASEURL="https://api.example.com"

# Nested property
export STELLAOPS_CRYPTO__DEFAULTPROVIDER="gost"

# Array element
export STELLAOPS_CRYPTO__PROVIDERS__GOST__KEYS__0__KEYID="key1"
```

---

## Network Issues

### Problem: Timeouts when connecting to backend

**Symptoms:**

```
$ stella scan docker://nginx:latest
❌ Error: Request timeout
   Backend: https://api.stellaops.example.com/api/v1/scan
   Timeout: 30s
```

**Solutions:**

**Solution 1: Increase timeout**

```yaml
# appsettings.yaml
StellaOps:
  Backend:
    Http:
      TimeoutSeconds: 120  # Increase from 30 to 120
```

**Solution 2: Check network latency**

```bash
# Ping backend
ping api.stellaops.example.com

# Test HTTP latency
time curl -v https://api.stellaops.example.com/health
```

**Solution 3: Use proxy**

```bash
# Set HTTP proxy
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"

stella scan docker://nginx:latest
```

---

### Problem: SSL certificate verification fails

**Symptoms:**

```
$ stella scan docker://nginx:latest
❌ Error: SSL certificate verification failed
   Certificate: CN=api.stellaops.example.com
   Error: The SSL certificate is invalid
```

**Solutions:**

**Solution 1: Add CA certificate**

```bash
# Add custom CA certificate (Linux)
sudo cp custom-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

# Add custom CA certificate (macOS)
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain custom-ca.crt
```

**Solution 2: Disable SSL verification (INSECURE - development only)**

```bash
# WARNING: This disables SSL verification. Use only for testing!
export STELLAOPS_BACKEND__HTTP__DISABLESSLVERIFICATION=true

stella scan docker://nginx:latest
```

---

## Permission Issues

### Problem: "Permission denied" when running `stella`

**Symptoms:**

```
$ stella --version
bash: /usr/local/bin/stella: Permission denied
```

**Solution:**

```bash
# Make binary executable
chmod +x /usr/local/bin/stella

# Verify
stella --version
```

---

### Problem: "Access denied" when accessing keys

**Symptoms:**

```
$ stella crypto sign --provider gost --file doc.pdf
❌ Error: Access denied to key file
   File: /etc/stellaops/keys/gost-key.pem
```

**Solution:**

```bash
# Fix key file permissions
sudo chmod 600 /etc/stellaops/keys/gost-key.pem
sudo chown $(whoami):$(whoami) /etc/stellaops/keys/gost-key.pem

# Or run as root (not recommended)
sudo stella crypto sign --provider gost --file doc.pdf
```

---

## Distribution Validation Issues

### Problem: Validation script reports "wrong plugins included"

**Symptoms:**

```
$ ./validate-distribution.sh international dist/stella-international-linux-x64/stella
❌ FAIL: International distribution contains restricted plugins
   Found: GostCryptoProvider
```

**Cause:**

The distribution was built with the wrong regional build flags, or the flags were not applied during publish.

**Solution:**

```bash
# Clean and rebuild without regional flags
dotnet clean
dotnet publish src/Cli/StellaOps.Cli \
  --configuration Release \
  --runtime linux-x64 \
  --self-contained true \
  --output dist/stella-international-linux-x64

# Verify no build flags were set
echo "No StellaOpsEnableGOST, StellaOpsEnableEIDAS, or StellaOpsEnableSM flags"

# Re-validate
./validate-distribution.sh international dist/stella-international-linux-x64/stella
# Expected: ✅ PASS
```

---

## Diagnostic Commands

### Check CLI Version and Distribution

```bash
stella --version
# Output:
# stella CLI version 2.1.0
# Build: 2025-12-23T10:00:00Z
# Commit: dfaa207
# Distribution: stella-russia
# Platform: linux-x64
# .NET Runtime: 10.0.0
```

### System Diagnostics

```bash
stella system diagnostics
# Output:
# System Diagnostics:
#   ✅ CLI version: 2.1.0
#   ✅ .NET Runtime: 10.0.0
#   ✅ Backend reachable: https://api.stellaops.example.com
#   ✅ Authentication: Valid (expires 2025-12-24)
#   ✅ Crypto providers: default, gost
#   ⚠️ PostgreSQL: Not configured (offline mode)
```

### Check Available Crypto Providers

```bash
stella crypto providers --verbose
# Output:
# Available Crypto Providers:
#
# Provider: default
#   Description: .NET Crypto, BouncyCastle
#   Algorithms: ECDSA-P256, ECDSA-P384, EdDSA, RSA-2048, RSA-4096
#   Status: ✅ Healthy
#
# Provider: gost
#   Description: GOST R 34.10-2012, GOST R 34.11-2012
#   Algorithms: GOST12-256, GOST12-512, GOST2001
#   Status: ⚠️ CryptoPro CSP not initialized
```

### Verbose Mode

```bash
# Enable verbose logging for all commands
stella --verbose <command>

# Example:
stella --verbose auth login
stella --verbose scan docker://nginx:latest
stella --verbose crypto sign --provider gost --file doc.pdf
```

---

## Getting Help

If you're still experiencing issues after trying these solutions:

1. **Check Documentation:**
   - [CLI Overview](README.md)
   - [CLI Architecture](architecture.md)
   - [Command Reference](command-reference.md)
   - [Compliance Guide](compliance-guide.md)

2. **Enable Verbose Logging:**
   ```bash
   stella --verbose <command>
   ```

3. **Check GitHub Issues:**
   - https://git.stella-ops.org/stella-ops.org/git.stella-ops.org/issues

4. **Community Support:**
   - GitHub Discussions: https://github.com/stellaops/stellaops/discussions

5. **Commercial Support:**
   - Contact: support@stella-ops.org

---

## Common Error Codes

| Exit Code | Meaning | Typical Cause |
|-----------|---------|---------------|
| `0` | Success | - |
| `1` | General error | Check error message |
| `2` | Policy violations | Scan found policy violations (with `--fail-on-policy-violations`) |
| `3` | Authentication error | Token expired or invalid credentials |
| `4` | Configuration error | Invalid configuration or missing required fields |
| `5` | Network error | Backend unreachable or timeout |
| `10` | Invalid arguments | Incorrect command usage or missing required arguments |

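For CI automation, a minimal sketch like the following can branch on the exit codes above; the image reference and the messages are illustrative, and only the codes themselves come from the table.

```bash
# Sketch: branch on the documented exit codes in a CI step.
rc=0
stella scan docker://nginx:latest --fail-on-policy-violations || rc=$?

case "$rc" in
  0)  echo "Scan clean" ;;
  2)  echo "Policy violations detected - failing the pipeline"; exit 1 ;;
  3)  echo "Authentication error - run 'stella auth login' and retry"; exit 1 ;;
  5)  echo "Network error - backend unreachable or timed out"; exit 1 ;;
  *)  echo "Scan failed with exit code $rc"; exit 1 ;;
esac
```
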
---

## Frequently Asked Questions (FAQ)

### Q: How do I switch between crypto providers?

**A:** Use the `--provider` flag or create a crypto profile:

```bash
# Method 1: Use --provider flag
stella crypto sign --provider gost --file doc.pdf

# Method 2: Create and use profile
stella crypto profiles create my-gost --provider gost --algorithm GOST12-256
stella crypto profiles use my-gost
stella crypto sign --file doc.pdf  # Uses my-gost profile
```

### Q: Can I use multiple regional plugins in one distribution?

**A:** No. Each distribution includes only one regional plugin (GOST, eIDAS, or SM) to comply with export control laws.

### Q: How do I update the CLI?

**A:**

```bash
# If installed via .NET tool
dotnet tool update --global StellaOps.Cli

# If installed via binary download
wget https://releases.stella-ops.org/cli/latest/stella-<distribution>-<platform>.tar.gz
tar -xzf stella-<distribution>-<platform>.tar.gz
sudo cp stella /usr/local/bin/
```

### Q: How do I enable offline mode?

**A:**

```bash
# Set offline mode
export STELLAOPS_OFFLINE_MODE=true

# Create offline package (admin)
stella offline sync --output stellaops-offline-$(date +%F).tar.gz

# Load offline package in air-gapped environment
stella offline load --package stellaops-offline-2025-12-23.tar.gz
```

---

## See Also

- [CLI Overview](README.md) - Installation and quick start
- [CLI Architecture](architecture.md) - Plugin architecture
- [Command Reference](command-reference.md) - Command usage
- [Compliance Guide](compliance-guide.md) - Regional compliance
- [Distribution Matrix](distribution-matrix.md) - Build and distribution
- [Crypto Plugins](crypto-plugins.md) - Plugin development

181
docs/modules/concelier/guides/aggregation-only-contract.md
Normal file
@@ -0,0 +1,181 @@

# Aggregation-Only Contract Reference

> The Aggregation-Only Contract (AOC) is the governing rule set that keeps StellaOps ingestion services deterministic, policy-neutral, and auditable. It applies to Concelier, Excititor, and any future collectors that write raw advisory or VEX documents.

## 1. Purpose and Scope

- Defines the canonical behaviour for `advisory_raw` and `vex_raw` collections and the linkset hints they may emit.
- Applies to every ingestion runtime (`StellaOps.Concelier.*`, `StellaOps.Excititor.*`), the Authority scopes that guard them, and the DevOps/QA surfaces that verify compliance.
- Complements the high-level architecture in [Concelier](../modules/concelier/architecture.md) and Authority enforcement documented in [Authority Architecture](../modules/authority/architecture.md).
- Paired guidance: see the guard-rail checkpoints in [AOC Guardrails](../aoc/aoc-guardrails.md), the implementation reference in [AOC Guard Library](../aoc/guard-library.md), and CLI usage that will land in `/docs/modules/cli/guides/` as part of Sprint 19 follow-up.

## 2. Philosophy and Goals

- Preserve upstream truth: ingestion only captures immutable raw facts plus provenance, never derived severity or policy decisions.
- Defer interpretation: Policy Engine and downstream overlays remain the sole writers of materialised findings, severity, consensus, or risk scores.
- Make every write explainable: provenance, signatures, and content hashes are required so operators can prove where each fact originated.
- Keep outputs reproducible: identical inputs must yield identical documents, hashes, and linksets across replays and air-gapped installs.

## 3. Contract Invariants

| # | Invariant | What it forbids or requires | Enforcement surfaces |
|---|-----------|-----------------------------|----------------------|
| 1 | No derived severity at ingest | Reject top-level keys such as `severity`, `cvss`, `effective_status`, `consensus_provider`, `risk_score`. Raw upstream CVSS remains inside `content.raw`. | PostgreSQL schema validator, `AOCWriteGuard`, Roslyn analyzer, `stella aoc verify`. |
| 2 | No merges or opinionated dedupe | Each upstream document persists on its own; ingestion never collapses multiple vendors into one document. | Repository interceptors, unit/fixture suites. |
| 3 | Provenance is mandatory | `source.*`, `upstream.*`, and `signature` metadata must be present; missing provenance triggers `ERR_AOC_004`. | Schema validator, guard, CLI verifier. |
| 4 | Idempotent upserts | Writes keyed by `(vendor, upstream_id, content_hash)` either no-op or insert a new revision with `supersedes`. Duplicate hashes map to the same document. | Repository guard, storage unique index, CI smoke tests. |
| 5 | Append-only revisions | Updates create a new document with `supersedes` pointer; no in-place mutation of content. | PostgreSQL schema (`supersedes` format), guard, data migration scripts. |
| 6 | Linkset only | Ingestion may compute link hints (`purls`, `cpes`, IDs) to accelerate joins, but must not transform or infer severity or policy. Observations now persist both canonical linksets (for indexed queries) and raw linksets (preserving upstream order/duplicates) so downstream policy can decide how to normalise. When `concelier:features:noMergeEnabled=true`, all merge-derived canonicalisation paths must be disabled. | Linkset builders reviewed via fixtures/analyzers; raw-vs-canonical parity covered by observation fixtures; analyzer `CONCELIER0002` blocks merge API usage. |
| 7 | Policy-only effective findings | Only Policy Engine identities can write `effective_finding_*`; ingestion callers receive `ERR_AOC_006` if they attempt it. | Authority scopes, Policy Engine guard. |
| 8 | Schema safety | Unknown top-level keys reject with `ERR_AOC_007`; timestamps use ISO 8601 UTC strings; tenant is required. | PostgreSQL validator, JSON schema tests. |
| 9 | Clock discipline | Collectors stamp `fetched_at` and `received_at` monotonically per batch to support reproducibility windows. | Collector contracts, QA fixtures. |

## 4. Raw Schemas

### 4.1 `advisory_raw`

| Field | Type | Notes |
|-------|------|-------|
| `_id` | string | `advisory_raw:{source}:{upstream_id}:{revision}`; deterministic and tenant-scoped. |
| `tenant` | string | Required; injected by Authority middleware and asserted by schema validator. |
| `source.vendor` | string | Provider identifier (e.g., `redhat`, `osv`, `ghsa`). |
| `source.stream` | string | Connector stream name (`csaf`, `osv`, etc.). |
| `source.api` | string | Absolute URI of upstream document; stored for traceability. |
| `source.collector_version` | string | Semantic version of the collector. |
| `upstream.upstream_id` | string | Vendor- or ecosystem-provided identifier (CVE, GHSA, vendor ID). |
| `upstream.document_version` | string | Upstream issued timestamp or revision string. |
| `upstream.fetched_at` / `received_at` | string | ISO 8601 UTC timestamps recorded by the collector. |
| `upstream.content_hash` | string | `sha256:` digest of the raw payload used for idempotency. |
| `upstream.signature` | object | Required structure storing `present`, `format`, `key_id`, `sig`; even unsigned payloads set `present: false`. |
| `content.format` | string | Source format (`CSAF`, `OSV`, etc.). |
| `content.spec_version` | string | Upstream spec version when known. |
| `content.raw` | object | Full upstream payload, untouched except for transport normalisation. |
| `identifiers` | object | Upstream identifiers (`cve`, `ghsa`, `aliases`, etc.) captured as provided (trimmed, order preserved, duplicates allowed). |
| `linkset` | object | Join hints (see section 4.3). |
| `supersedes` | string or null | Points to previous revision of same upstream doc when content hash changes. |

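For orientation, the following sketch shows what a document with these fields might look like and how the idempotency digest is derived. Every value is a placeholder (only the field names follow the table above), and the exact nesting and storage encoding may differ in the live schema.

```bash
# Illustrative advisory_raw skeleton; all values are placeholders.
cat > advisory_raw.example.json <<'EOF'
{
  "_id": "advisory_raw:redhat:RHSA-2025:1234:1",
  "tenant": "default",
  "source": {
    "vendor": "redhat",
    "stream": "csaf",
    "api": "https://vendor.example/csaf/rhsa-2025-1234.json",
    "collector_version": "1.0.0"
  },
  "upstream": {
    "upstream_id": "RHSA-2025:1234",
    "document_version": "2025-10-27T00:00:00Z",
    "fetched_at": "2025-10-27T01:00:00Z",
    "received_at": "2025-10-27T01:00:01Z",
    "content_hash": "sha256:<digest of content.raw, computed below>",
    "signature": { "present": false }
  },
  "content": { "format": "CSAF", "spec_version": "2.0", "raw": {} },
  "identifiers": { "cve": ["CVE-2025-0001"], "aliases": ["RHSA-2025:1234"] },
  "linkset": { "purls": [], "cpes": [], "aliases": [], "references": [] },
  "supersedes": null
}
EOF

# Content hash over the untouched upstream payload (here an empty object placeholder).
printf '%s' '{}' | sha256sum
```
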
### 4.2 `vex_raw`

| Field | Type | Notes |
|-------|------|-------|
| `_id` | string | `vex_raw:{source}:{upstream_id}:{revision}`. |
| `tenant` | string | Required; matches advisory collection requirements. |
| `source.*` | object | Same shape and requirements as `advisory_raw`. |
| `upstream.*` | object | Includes `document_version`, timestamps, `content_hash`, and `signature`. |
| `content.format` | string | Typically `CycloneDX-VEX` or `CSAF-VEX`. |
| `content.raw` | object | Entire upstream VEX payload. |
| `identifiers.statements` | array | Normalised statement summaries (IDs, PURLs, status, justification) to accelerate policy joins. |
| `linkset` | object | CVEs, GHSA IDs, and PURLs referenced in the document. |
| `supersedes` | string or null | Same convention as advisory documents. |

### 4.3 Linkset Fields

- `purls`: fully qualified Package URLs extracted from raw ranges or product nodes.
- `cpes`: Common Platform Enumerations when upstream docs provide them.
- `aliases`: Any alternate advisory identifiers present in the payload.
- `references`: Array of `{ type, url }` pairs pointing back to vendor advisories, patches, or exploits.
- `reconciled_from`: Provenance of linkset entries (JSON Pointer or field origin) to make automated checks auditable.

Canonicalisation rules:

- Package URLs are rendered in canonical form without qualifiers/subpaths (`pkg:type/namespace/name@version`).
- CPE values are normalised to the 2.3 binding (`cpe:2.3:part:vendor:product:version:*:*:*:*:*:*:*`).
- Connector mapping stages are responsible for the canonical form; ingestion trims whitespace but otherwise preserves the original order and duplicate entries so downstream policy can reason about upstream intent.

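A small illustration of the rules above, using placeholder values; the production mapping stages perform this canonicalisation, not ingestion.

```bash
# Canonical PURL rendering: strip qualifiers and subpath (values are illustrative).
raw_purl='pkg:rpm/redhat/openssl@1.1.1w-12?arch=x86_64#lib'
canonical_purl="${raw_purl%%[?#]*}"
echo "$canonical_purl"   # pkg:rpm/redhat/openssl@1.1.1w-12

# CPE normalised to the 2.3 binding (illustrative product coordinates).
echo 'cpe:2.3:a:openssl:openssl:1.1.1w:*:*:*:*:*:*:*'
```
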
### 4.4 `advisory_observations`

`advisory_observations` is an immutable projection of the validated raw document used by Link-Not-Merge overlays. Fields mirror the JSON contract surfaced by `StellaOps.Concelier.Models.Observations.AdvisoryObservation`.

| Field | Type | Notes |
|-------|------|-------|
| `_id` | string | Deterministic observation id — `{tenant}:{source.vendor}:{upstreamId}:{revision}`. |
| `tenant` | string | Lower-case tenant identifier. |
| `source.vendor` / `source.stream` | string | Connector identity (e.g., `vendor/redhat`, `ecosystem/osv`). |
| `source.api` | string | Absolute URI the connector fetched from. |
| `source.collectorVersion` | string | Optional semantic version of the connector build. |
| `upstream.upstream_id` | string | Advisory identifier as issued by the provider (CVE, vendor ID, etc.). |
| `upstream.document_version` | string | Upstream revision/version string. |
| `upstream.fetchedAt` / `upstream.receivedAt` | datetime | UTC timestamps recorded by the connector. |
| `upstream.contentHash` | string | `sha256:` digest used for idempotency. |
| `upstream.signature` | object | `{present, format?, keyId?, signature?}` describing upstream signature material. |
| `content.format` / `content.specVersion` | string | Raw payload format metadata (CSAF, OSV, JSON, etc.). |
| `content.raw` | object | Full upstream document stored losslessly (Relaxed Extended JSON). |
| `content.metadata` | object | Optional connector-specific metadata (batch ids, hints). |
| `linkset.aliases` | array | Connector-supplied aliases (trimmed, order preserved, duplicates allowed). |
| `linkset.purls` | array | Connector-supplied PURLs (ingestion preserves order and duplicates). |
| `linkset.cpes` | array | Connector-supplied CPE URIs (trimmed, order preserved). |
| `linkset.references` | array | `{ type, url }` pairs (trimmed; ingestion preserves order). |
| `createdAt` | datetime | Timestamp when Concelier persisted the observation. |
| `attributes` | object | Optional provenance attributes keyed by connector. |

## 5. Error Model

| Code | Description | HTTP status | Surfaces |
|------|-------------|-------------|----------|
| `ERR_AOC_001` | Forbidden field detected (severity, cvss, effective data). | 400 | Ingestion APIs, CLI verifier, CI guard. |
| `ERR_AOC_002` | Merge attempt detected (multiple upstream sources fused into one document). | 400 | Ingestion APIs, CLI verifier. |
| `ERR_AOC_003` | Idempotency violation (duplicate without supersedes pointer). | 409 | Repository guard, PostgreSQL unique index, CLI verifier. |
| `ERR_AOC_004` | Missing provenance metadata (`source`, `upstream`, `signature`). | 422 | Schema validator, ingestion endpoints. |
| `ERR_AOC_005` | Signature or checksum mismatch. | 422 | Collector validation, CLI verifier. |
| `ERR_AOC_006` | Attempt to persist derived findings from ingestion context. | 403 | Policy engine guard, Authority scopes. |
| `ERR_AOC_007` | Unknown top-level fields (schema violation). | 400 | PostgreSQL validator, CLI verifier. |

Consumers should map these codes to CLI exit codes and structured log events so automation can fail fast and produce actionable guidance. The shared guard library (`StellaOps.Aoc.AocError`) emits consistent payloads (`code`, `message`, `violations[]`) for HTTP APIs, CLI tooling, and verifiers.

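As a sketch of that mapping, a shell helper such as the following (not shipped tooling; the wording is illustrative) can translate codes surfaced in logs or API payloads into operator guidance based solely on the table above.

```bash
# Sketch: map AOC violation codes to operator guidance.
aoc_hint() {
  case "$1" in
    ERR_AOC_001) echo "forbidden derived field (severity/cvss) - fix the connector mapping" ;;
    ERR_AOC_002) echo "merge attempt - keep upstream documents separate" ;;
    ERR_AOC_003) echo "idempotency violation - add a supersedes pointer or skip the duplicate" ;;
    ERR_AOC_004) echo "missing provenance (source/upstream/signature metadata)" ;;
    ERR_AOC_005) echo "signature or checksum mismatch - re-fetch and re-verify" ;;
    ERR_AOC_006) echo "ingestion identity tried to write effective findings" ;;
    ERR_AOC_007) echo "unknown top-level field - check schema version" ;;
    *)           echo "unknown code: $1" ;;
  esac
}

aoc_hint ERR_AOC_004
```
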
## 6. API and Tooling Interfaces

- **Concelier ingestion** (`StellaOps.Concelier.WebService`)
  - `POST /ingest/advisory`: accepts upstream payload metadata; the server-side guard constructs and persists the raw document.
  - `GET /advisories/raw/{id}` and filterable list endpoints expose raw documents for debugging and offline analysis.
  - `POST /aoc/verify`: runs guard checks over recent documents and returns summary totals plus first violations (see the example after this list).
- **Excititor ingestion** (`StellaOps.Excititor.WebService`) mirrors the same surface for VEX documents.
- **CLI workflows** (`stella aoc verify`, `stella sources ingest --dry-run`) surface pre-flight verification; documentation will live in `/docs/modules/cli/guides/` alongside Sprint 19 CLI updates.
- **Authority scopes**: new `advisory:ingest`, `advisory:read`, `vex:ingest`, and `vex:read` scopes enforce least privilege; see [Authority Architecture](../modules/authority/architecture.md) for scope grammar.

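A hedged example of calling the verification endpoint listed above: the endpoint path comes from the list, but the host name, request body, and response handling are assumptions for illustration, and token acquisition follows your Authority setup.

```bash
# Sketch only: request/response shapes are illustrative assumptions.
curl -sS -X POST "https://concelier.example.internal/aoc/verify" \
  -H "Authorization: Bearer ${STELLA_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"since": "2025-10-26T00:00:00Z"}' | jq '.'
```
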
## 7. Idempotency and Supersedes Rules

1. Compute `content_hash` before any transformation; use it with `(source.vendor, upstream.upstream_id)` to detect duplicates (see the sketch after this list).
2. If a document with the same hash already exists, skip the write and log a no-op.
3. When a new hash arrives for an existing upstream document, insert a new record and set `supersedes` to the previous `_id`.
4. Keep supersedes chains acyclic; collectors must resolve conflicts by rewinding before they insert.
5. Expose idempotency counters via metrics (`ingestion_write_total{result=ok|noop}`) to catch regressions early.

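A minimal sketch of rule 1, assuming the raw payload is available as a local file; the file name, vendor, and upstream id are illustrative.

```bash
# Derive the idempotency key from the untransformed payload.
payload_file="rhsa-2025-1234.json"   # raw upstream document (placeholder)
content_hash="sha256:$(sha256sum "$payload_file" | cut -d' ' -f1)"

# (source.vendor, upstream.upstream_id, content_hash)
dedupe_key="redhat|RHSA-2025:1234|${content_hash}"
echo "$dedupe_key"
```
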
## 8. Migration Playbook

1. Freeze ingestion writes except for raw pass-through paths while deploying schema validators.
2. Snapshot existing collections to `_backup_*` for rollback safety (see the sketch after this list).
3. Strip forbidden fields from historical documents into a temporary `advisory_view_legacy` used only during transition.
4. Enable PostgreSQL JSON schema validators for `advisory_raw` and `vex_raw`.
5. Run collectors in `--dry-run` to confirm only allowed keys appear; fix violations before lifting the freeze.
6. Point Policy Engine to consume exclusively from raw collections and compute derived outputs downstream.
7. Delete legacy normalisation paths from ingestion code and enable runtime guards plus CI linting.
8. Roll forward CLI, Console, and dashboards so operators can monitor AOC status end-to-end.

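A sketch of step 2, assuming the raw collections are plain PostgreSQL tables named `advisory_raw` and `vex_raw` and that `$CONCELIER_DB_URL` is a placeholder connection string; adjust schema and table names to your deployment.

```bash
# Snapshot raw collections before enabling validators (illustrative names).
snapshot_date=$(date +%Y%m%d)
psql "$CONCELIER_DB_URL" -c "CREATE TABLE advisory_raw_backup_${snapshot_date} AS TABLE advisory_raw;"
psql "$CONCELIER_DB_URL" -c "CREATE TABLE vex_raw_backup_${snapshot_date} AS TABLE vex_raw;"
```
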
## 9. Observability and Diagnostics

- **Metrics**: `ingestion_write_total{result=ok|reject}`, `aoc_violation_total{code}`, `ingestion_signature_verified_total{result}`, `ingestion_latency_seconds`, `advisory_revision_count`.
- **Traces**: spans `ingest.fetch`, `ingest.transform`, `ingest.write`, and `aoc.guard` with correlation IDs shared across workers.
- **Logs**: structured entries must include `tenant`, `source.vendor`, `upstream.upstream_id`, `content_hash`, and `violation_code` when applicable.
- **Dashboards**: DevOps should add panels for violation counts, signature failures, supersedes growth, and CLI verifier outcomes for each tenant.

## 10. Security and Tenancy Checklist

- Enforce Authority scopes (`advisory:ingest`, `vex:ingest`, `advisory:read`, `vex:read`) and require tenant claims on every request.
- Maintain pinned trust stores for signature verification; capture verification result in metrics and logs.
- Ensure collectors never log secrets or raw authentication headers; redact tokens before persistence.
- Validate that Policy Engine remains the only identity with permission to write `effective_finding_*` documents.
- Verify offline bundles include the raw collections, guard configuration, and verifier binaries so air-gapped installs can audit parity.
- Document operator steps for recovering from violations, including rollback to superseded revisions and re-running policy evaluation.

## 11. Compliance Checklist

- [ ] Deterministic guard enabled in Concelier and Excititor repositories.
- [ ] PostgreSQL validators deployed for `advisory_raw` and `vex_raw`.
- [ ] Authority scopes and tenant enforcement verified via integration tests.
- [ ] CLI and CI pipelines run `stella aoc verify` against seeded snapshots.
- [ ] Observability feeds (metrics, logs, traces) wired into dashboards with alerts.
- [ ] Offline kit instructions updated to bundle validators and verifier tooling.
- [ ] Security review recorded covering ingestion, tenancy, and rollback procedures.

---

*Last updated: 2025-10-27 (Sprint 19).*

218
docs/modules/concelier/guides/aggregation.md
Normal file
@@ -0,0 +1,218 @@

# Advisory Observations & Linksets

> Imposed rule: Work of this type or tasks of this type on this component must also
> be applied everywhere else it should be applied.

The Link-Not-Merge (LNM) initiative replaces the legacy "merge" pipeline with
immutable observations and correlation linksets. This guide explains how
Concelier ingests advisory statements, preserves upstream truth, and produces
linksets that downstream services (Policy Engine, Vuln Explorer, Console) can
use without collapsing sources together.

---

## 1. Model overview

### 1.1 Observation lifecycle

1. **Ingest** – Connectors fetch upstream payloads (CSAF, OSV, vendor feeds),
   validate signatures, and drop any derived fields prohibited by the
   Aggregation-Only Contract (AOC).
2. **Persist** – Concelier writes immutable `advisory_observations` scoped by
   `tenant`, `(source.vendor, upstreamId)`, and `contentHash`. Supersedes chains
   capture revisions without mutating history.
3. **Expose** – WebService surfaces paged/read APIs; Offline Kit snapshots
   include the same documents for air-gapped installs.

Observation schema highlights:

```text
observationId = {tenant}:{source.vendor}:{upstreamId}:{revision}
tenant, source{vendor, stream, api, collectorVersion}
upstream{upstreamId, documentVersion, fetchedAt, receivedAt,
         contentHash, signature{present, format, keyId, signature}}
content{format, specVersion, raw}
identifiers{cve?, ghsa?, aliases[], osvIds[]}
linkset{purls[], cpes[], aliases[], references[], conflicts[]?}
createdAt, attributes{batchId?, replayCursor?}
```

- **Immutable raw** (`content.raw`) mirrors upstream payloads exactly.
- **Provenance** (`source.*`, `upstream.*`) satisfies AOC guardrails and enables
  cryptographic attestations.
- **Identifiers** retain lossless extracts (CVE, GHSA, vendor aliases) that seed
  linksets.
- **Linkset** captures join hints but never merges or adds derived severity.

### 1.2 Linkset lifecycle

Linksets correlate observations that describe the same vulnerable product while
keeping each source intact.

1. **Seed** – Observations emit normalized identifiers (`purl`, `cpe`,
   `alias`) during ingestion.
2. **Correlate** – Linkset builder groups observations by tenant, product
   coordinates, and equivalence signals (PURL alias graph, CVE overlap, CVSS
   vector equality, fuzzy titles).
3. **Annotate** – Detected conflicts (severity disagreements, affected-range
   mismatch, incompatible references) are recorded with structured payloads and
   preserved for UI/API export.
4. **Persist** – Results land in `advisory_linksets` with deterministic IDs
   (`linksetId = {tenant}:{hash(aliases+purls+seedIds)}`) and append-only history
   for reproducibility (see the sketch after this list).

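A minimal sketch of the deterministic-ID rule above, with placeholder observation ids and tenant; the canonical form the builder actually hashes may differ.

```bash
# Hash sorted, de-duplicated correlation signals to get a stable linkset id.
aliases="CVE-2025-0001 RHSA-2025:1234"
purls="pkg:rpm/redhat/openssl@1.1.1w-12"
seed_ids="obs-redhat-1 obs-nvd-1"   # observation ids are placeholders

digest=$(printf '%s\n' $aliases $purls $seed_ids | sort -u | sha256sum | cut -d' ' -f1)
echo "linksetId = tenant-a:${digest}"
```
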
Linksets never suppress or prefer one source; they provide aligned evidence so
other services can apply policy.

---

## 2. Observation vs. linkset

- **Purpose**
  - Observation: Immutable record per vendor and revision.
  - Linkset: Correlates observations that share product identity.
- **Mutation**
  - Observation: Append-only via supersedes chain.
  - Linkset: Rebuilt deterministically from canonical signals.
- **Allowed fields**
  - Observation: Raw payload, provenance, identifiers, join hints.
  - Linkset: Observation references, normalized product metadata, conflicts.
- **Forbidden fields**
  - Observation: Derived severity, policy status, opinionated dedupe.
  - Linkset: Derived severity (conflicts recorded but unresolved).
- **Consumers**
  - Observation: Evidence API, Offline Kit, CLI exports.
  - Linkset: Policy Engine overlay, UI evidence panel, Vuln Explorer.

### 2.1 Example sequence

1. Red Hat PSIRT publishes RHSA-2025:1234 for OpenSSL; Concelier inserts an
   observation for vendor `redhat` with `pkg:rpm/redhat/openssl@1.1.1w-12`.
2. NVD issues CVE-2025-0001; a second observation is inserted for vendor `nvd`.
3. Linkset builder runs, groups the two observations, records alias and PURL
   overlap, and flags a CVSS disagreement (`7.5` vs `7.2`).
4. Policy Engine reads the linkset, recognises the severity variance, and relies
   on configured rules to decide the effective output.

---

## 3. Conflict handling

Conflicts record disagreements without altering source payloads. The builder
emits structured entries:

```json
{
  "type": "severity-mismatch",
  "field": "cvss.baseScore",
  "observations": [
    {
      "source": "redhat",
      "value": "7.5",
      "vector": "AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N"
    },
    {
      "source": "nvd",
      "value": "7.2",
      "vector": "AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N"
    }
  ],
  "confidence": "medium",
  "detectedAt": "2025-10-27T14:00:00Z"
}
```

Supported conflict classes:

- `severity-mismatch` – CVSS or qualitative severities differ.
- `affected-range-divergence` – Product ranges, fixed versions, or platforms
  disagree.
- `statement-disagreement` – One observation declares `not_affected` while
  another states `affected`.
- `reference-clash` – URL or classifier collisions (for example, exploit URL vs
  conflicting advisory).
- `alias-inconsistency` – Aliases map to different canonical IDs (GHSA vs CVE).
- `metadata-gap` – Required provenance missing on one source; logged as a
  warning.

Conflict surfaces:

- WebService endpoints (`GET /advisories/linksets/{id}` → `conflicts[]`).
- UI evidence panel chips and conflict badges.
- CLI exports (JSON/OSV) exposed through LNM commands.
- Observability metrics (`advisory_linkset_conflicts_total{type}`).

---

## 4. AOC alignment

Observations and linksets must satisfy Aggregation-Only Contract invariants:

- **No derived severity** – `content.raw` may include upstream severity, but the
  observation body never injects or edits severity.
- **No merges** – Each upstream document stays separate; linksets reference
  observations via deterministic IDs.
- **Provenance mandatory** – Missing `signature` or `source` metadata is an AOC
  violation (`ERR_AOC_004`).
- **Idempotent writes** – Duplicate `contentHash` yields a no-op; supersedes
  pointer captures new revisions.
- **Deterministic output** – Linkset builder sorts keys, normalizes timestamps
  (UTC ISO-8601), and uses canonical JSON hashing (see the sketch after this
  list).

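A sketch of canonical JSON hashing using `jq`; `linkset.json` is a placeholder file, and the production builder may canonicalise differently.

```bash
# Sort keys and emit compact JSON, then hash: identical content yields identical digests.
jq -S -c '.' linkset.json | sha256sum
```
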
Violations trigger guard errors (`ERR_AOC_00x`), emit `aoc_violation_total`
metrics, and block persistence until corrected.

---

## 5. Downstream consumption

- **Policy Engine** – Computes effective severity and risk overlays from linkset
  evidence and conflicts.
- **Console UI** – Renders per-source statements, signed hashes, and conflict
  banners inside the evidence panel.
- **CLI (`stella advisories linkset …`)** – Exports observations and linksets as
  JSON or OSV for offline triage.
- **Offline Kit** – Shipping snapshots include observation and linkset
  collections for air-gap parity.
- **Observability** – Dashboards track ingestion latency, conflict counts, and
  supersedes depth.

When adding new consumers, ensure they honour append-only semantics and do not
mutate observation or linkset collections.

---

## 6. Validation & testing

- **Unit tests** (`StellaOps.Concelier.Core.Tests`) validate schema guards,
  deterministic linkset hashing, conflict detection fixtures, and supersedes
  chains.
- **PostgreSQL integration tests** (`StellaOps.Concelier.Storage.Postgres.Tests`) verify
  indexes and idempotent writes under concurrency.
- **CLI smoke suites** confirm `stella advisories observations` and `stella
  advisories linksets` export stable JSON.
- **Determinism checks** replay identical upstream payloads and assert that the
  resulting observation and linkset documents match byte for byte (see the
  sketch after this list).
- **Offline kit verification** simulates air-gapped bootstrap to confirm that
  snapshots align with live data.

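A sketch of such a replay check; the export command follows the CLI features named above, but the exact flags and output paths are assumptions.

```bash
# Export the same linkset collection twice and compare byte for byte.
stella advisories linksets --format json > run1.json
stella advisories linksets --format json > run2.json

sha256sum run1.json run2.json
diff run1.json run2.json && echo "byte-for-byte identical"
```
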
Add fixtures whenever a new conflict type or correlation signal is introduced.
Ensure canonical JSON serialization remains stable across .NET runtime updates.

---

## 7. Reviewer checklist

- Observation schema segment matches the latest `StellaOps.Concelier.Models`
  contract.
- Linkset lifecycle covers correlation signals, conflict classes, and
  deterministic IDs.
- AOC invariants are explicitly called out with violation codes.
- Examples include multi-source correlation plus conflict annotation.
- Downstream consumer guidance reflects active APIs and CLI features.
- Testing section lists required suites (Core, Storage, CLI, Offline).
- Imposed rule reminder is present at the top of the document.

Confirmed against Concelier Link-Not-Merge tasks:
`CONCELIER-LNM-21-001..005`, `CONCELIER-LNM-21-101..103`,
`CONCELIER-LNM-21-201..203`.