
This advisory is a full vendor comparison sheet (15 vendors) focused on SBOM generation, SBOM ingestion, attestations, Rekor/Sigstore integration, offline/air-gapped readiness, and gaps relative to the Stella Ops moats (deterministic replay, sovereign crypto, lattice policy engine, provenance graph, explainability layer, trust economics, quantum-resilient signatures).

It is written in a format that can be handed directly to agents or dropped into the architecture docs. A table-only export, a JSON export, or a one-page executive sheet can be derived from it.


Comprehensive SBOM / VEX / Attestation Vendor Comparison (2025)

Legend

  • SBOM Gen = Native SBOM generation
  • SBOM Ingest = Can scan from SBOM (CycloneDX/SPDX)
  • Attest = Can produce in-toto/DSSE/Cosign attestations
  • Attest Verify = Can verify attestations
  • Rekor = Can write or query Rekor transparency logs
  • Offline = Has explicit offline/air-gap support
  • Gap Score = Severity of gap relative to Stella Ops moats (High/Medium/Low)

1. Trivy (Aqua Security)

SBOM Gen: Yes (CycloneDX/SPDX) | SBOM Ingest: Yes | Attest: Yes (Cosign-compatible) | Attest Verify: Yes | Rekor: Query yes, write via Cosign | Offline: Strong | Gap Score: Medium
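To make the first two columns concrete, a minimal sketch of Trivy's SBOM generation and SBOM ingestion paths, assuming a placeholder image reference (exact flags may vary across Trivy versions):

```bash
# Generate a CycloneDX SBOM directly from a container image (SBOM Gen).
trivy image --format cyclonedx --output sbom.cdx.json ghcr.io/example/app:latest

# Later, scan for vulnerabilities from the SBOM alone, without re-pulling the image (SBOM Ingest).
trivy sbom sbom.cdx.json
```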

Gaps vs Stella Ops Moats:

  • No deterministic replay bundles (no freezeable feed+rules snapshot).
  • No lattice VEX engine.
  • No crypto-sovereign mode (GOST/SM/EIDAS).
  • No trust economics or provenance graph.

2. Syft + Grype (Anchore)

SBOM Gen: Yes (Syft) | SBOM Ingest: Yes (Grype) | Attest: Limited (requires Cosign) | Attest Verify: Limited | Rekor: Via Cosign only | Offline: Partial but not enterprise-grade | Gap Score: Medium

Gaps:

  • No attestation-first workflow; SBOMs are unsigned unless user orchestrates signing.
  • No deterministic replay.
  • No lattice merge logic.
  • No sovereign crypto.
  • No provenance chain.

3. Snyk

SBOM Gen: Yes | SBOM Ingest: Limited | Attest: No | Attest Verify: No | Rekor: No | Offline: Weak (SaaS-first) | Gap Score: High

Gaps:

  • Attestations, signing, deterministic replay, and offline mode are missing entirely.
  • No VEX, lattice rules, or provenance graphs.

4. Prisma Cloud (Palo Alto)

SBOM Gen: Yes (CycloneDX) | SBOM Ingest: Limited | Attest: No | Attest Verify: No | Rekor: No | Offline: Yes, strong on Intel Stream updates | Gap Score: High

Gaps:

  • Attestations and SBOM provenance are not part of the system.
  • No deterministic audit replay.
  • No trust graph or crypto-sovereign mode.

5. AWS Inspector + AWS Signer + ECR Notary v2 (AWS)

SBOM Gen: Partial | SBOM Ingest: Partial (Inspector) | Attest: Yes (Notary v2) | Attest Verify: Yes | Rekor: No (private transparency solution) | Offline: Weak | Gap Score: Medium

Gaps:

  • No SBOM/VEX formal unification.
  • Closed ecosystem, no sovereign crypto.
  • Deterministic replay impossible outside AWS.
  • No lattice engine.

6. Google Artifact Registry + Cloud Build Attestations

SBOM Gen: Yes | SBOM Ingest: Yes | Attest: Yes (SLSA provenance) | Attest Verify: Yes | Rekor: Optional through Sigstore, not default | Offline: Weak | Gap Score: Medium

Gaps:

  • No offline bundles.
  • No trust economics.
  • No lattice VEX or regional crypto.

7. GitHub Advanced Security + Dependabot + Actions Attestation

SBOM Gen: Yes (GHAS) | SBOM Ingest: Partial | Attest: Yes (OIDC + Sigstore) | Attest Verify: Yes | Rekor: Yes (Sigstore integrated) | Offline: No | Gap Score: Medium-High

Gaps:

  • No deterministic replay of scans.
  • No custom cryptographic modes.
  • No provenance graph beyond simple attestation.

8. GitLab Ultimate + Dependency Scanning

SBOM Gen: Yes | SBOM Ingest: Limited | Attest: Partial | Attest Verify: Partial | Rekor: No native | Offline: Medium | Gap Score: Medium

Gaps:

  • Attestations are not first-class objects.
  • No lattice engine or trust economics.
  • No deterministic replay.

9. Microsoft Defender for DevOps

SBOM Gen: Partial | SBOM Ingest: Partial | Attest: No | Attest Verify: No | Rekor: No | Offline: Weak | Gap Score: High

Gaps:

  • Missing all advanced SBOM/VEX/Attestation constructs.

10. Anchore Enterprise

SBOM Gen: Yes | SBOM Ingest: Yes | Attest: Some (enterprise extensions) | Attest Verify: Partial | Rekor: No | Offline: Good | Gap Score: Medium

Gaps:

  • No sovereign crypto.
  • No deterministic replay.
  • No visual lattice editor.

11. JFrog Xray

SBOM Gen: Yes | SBOM Ingest: Yes | Attest: No | Attest Verify: No | Rekor: No | Offline: Medium | Gap Score: High

Gaps:

  • SBOM processing only; attestation and provenance missing.
  • No graph-based trust or VEX merge layers.

12. Tenable + Tenable Cloud Security

SBOM Gen: Partial | SBOM Ingest: Limited | Attest: No | Attest Verify: No | Rekor: No | Offline: Weak | Gap Score: High

Gaps:

  • Not an SBOM or attestation system.

13. Qualys

SBOM Gen: Limited | SBOM Ingest: Limited | Attest: No | Attest Verify: No | Rekor: No | Offline: Medium | Gap Score: High


14. Rezilion

SBOM Gen: Yes | SBOM Ingest: Yes | Attest: No | Attest Verify: No | Rekor: No | Offline: Medium | Gap Score: Medium-High

Differentiator: runtime-based reachability analysis, but no attestation stack.


15. Chainguard + Wolfi

SBOM Gen: Yes (SLSA-native) | SBOM Ingest: Yes | Attest: Yes | Attest Verify: Yes | Rekor: Yes | Offline: Medium | Gap Score: Low-Medium

Gaps:

  • Very strong attestation and provenance, but no deterministic replay bundles.
  • No lattice/VEX merge engine.
  • No crypto-sovereign modes.

Consolidated Comparison Table (Compact)

| Vendor | SBOM Gen | SBOM Ingest | Attest | Attest Verify | Rekor | Offline | Gap Score |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Trivy | Yes | Yes | Yes | Yes | Yes | Strong | Medium |
| Syft+Grype | Yes | Yes | Limited | Limited | Indirect | Medium | Medium |
| Snyk | Yes | Limited | No | No | No | Weak | High |
| Prisma | Yes | Limited | No | No | No | Strong | High |
| AWS | Partial | Partial | Yes | Yes | No | Weak | Medium |
| Google | Yes | Yes | Yes | Yes | Optional | Weak | Medium |
| GitHub | Yes | Partial | Yes | Yes | Yes | Weak | Medium-High |
| GitLab | Yes | Limited | Partial | Partial | No | Medium | Medium |
| Microsoft | Partial | Partial | No | No | No | Weak | High |
| Anchore Ent | Yes | Yes | Some | Partial | No | Good | Medium |
| JFrog Xray | Yes | Yes | No | No | No | Medium | High |
| Tenable | Partial | Limited | No | No | No | Weak | High |
| Qualys | Limited | Limited | No | No | No | Medium | High |
| Rezilion | Yes | Yes | No | No | No | Medium | Medium-High |
| Chainguard | Yes | Yes | Yes | Yes | Yes | Medium | Low-Medium |
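For teams that want this matrix in machine-readable form (as suggested later in this advisory), one row could be captured like this; the file name and field names are illustrative, not an existing Stella Ops schema:

```bash
# Illustrative only: encode one row of the comparison matrix as JSON for agent consumption.
cat > vendor-matrix.sample.json <<'EOF'
[
  {
    "vendor": "Trivy",
    "sbomGen": "yes",
    "sbomIngest": "yes",
    "attest": "yes",
    "attestVerify": "yes",
    "rekor": "yes",
    "offline": "strong",
    "gapScore": "medium"
  }
]
EOF
```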

High-Level Summary for Stella Ops Positioning

1. You beat all 15 vendors on deterministic replayability

None of them provide:

  • Feed+rules snapshot capture
  • Replayable SBOM & VEX evaluation identical across time
  • Hash-locked manifests for audits/regulators

This is a primary moat.
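A minimal sketch of what "hash-locked" means in practice, assuming a directory of frozen feed and policy files (all paths are placeholders):

```bash
# Record content digests of every input that went into a scan.
sha256sum feeds/nvd-snapshot-2025-10-21.json policies/lattice-config.yaml sbom.cdx.json \
  > replay-manifest.sha256

# At audit time, confirm the inputs are bit-for-bit identical to what was recorded.
sha256sum --check replay-manifest.sha256
```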

2. You are the only one with crypto-sovereign readiness

No vendor supports:

  • GOST R 34.10-2012
  • SM2/SM3/SM4
  • EIDAS/QES native pipeline
  • PQC (Dilithium/Falcon) as first-class switches

Another moat.

3. You are the only one with a lattice-based VEX engine

No vendor even attempts:

  • Policy-merging lattice logic
  • Visual trust algebra editor
  • VEX reduction semantics

Massive moat.

4. Proof-of-Integrity Graph is unmatched

Some vendors do provenance; only Chainguard does strong provenance. None build a user-facing graph with:

  • DSSE chain
  • Build ancestry
  • Material locality
  • Container runtime verification

5. No vendor has trust economics / market layer

Your “Proof-Market Ledger” is completely unique.

6. No vendor has AI explainability for SBOM/VEX

This is a differentiator that can become a moat if done with deterministic reasoning constraints.


Final Result

The sections above deliver a full vendor comparison across 15 competitors, covering SBOM, attestation, and offline capabilities, with a gap analysis against the Stella Ops moats.

Possible follow-on deliverables:

  1. An investor-facing presentation deck.
  2. A machine-readable JSON export for internal agent consumption.
  3. Per-vendor competitive battlecards for sales.
  4. Integration into the AGENTS.md or TASKS.md structure.

The remainder of this advisory turns the comparison into a working specification that can be handed directly to the documentation team.

The specification is structured as follows:

  1. Global principles and structure.
  2. Blueprint for client-facing docs (with outlines and text samples).
  3. Blueprint for developer-facing docs (with outlines and text samples).
  4. Standard per-module template (Scanner, Sbomer, Concelier, Vexer, etc.).
  5. Two concrete, copy-pasteable full-page samples.

You can paste this into docs/authoring-guide.md (or similar) in your repo.


1. Global Principles

1.1 Audiences

Your documentation writers must always write with a clear audience in mind:

  • Client / Business / Security leadership: CIO, CISO, Head of Infrastructure, Architects, Risk/Compliance.

  • Technical Integrators / DevOps / Platform Engineers: people who deploy Stella Ops and integrate it with CI/CD, registries, and proxies.

  • Developers / Product Engineers: people who consume APIs/SDKs, embed agents, or extend the system.

Every document must declare its audience in front-matter:

---
title: "Deterministic Scan Replay"
audience: ["client", "security-lead", "architect"]
level: "introductory"
version: "1.0"
module: "Platform"
---

1.2 Document Types

We standardize on four types:

  1. Concept / Overview: explain what and why in plain language.
  2. How-to / Guide: stepwise instructions to accomplish a task.
  3. Reference: APIs, CLI, config, schemas.
  4. Architecture / Deep Dive: internals, invariants, interaction diagrams.

Each page must state its type in front-matter:

type: "concept" # or "guide" | "reference" | "architecture"

2. Client-Facing Documentation Blueprint

The client side should answer: “What does Stella Ops do for my risk, compliance, and operations?”

2.1 Top-Level Structure (Clients)

Proposed tree:

  • docs/clients/01-solution-overview.md
  • docs/clients/02-platform-architecture-and-trust-model.md
  • docs/clients/03-sbom-and-vex-capabilities.md
  • docs/clients/04-attestations-and-transparency.md
  • docs/clients/05-deterministic-replay-and-audit-readiness.md
  • docs/clients/06-crypto-sovereign-and-regional-standards.md
  • docs/clients/07-deployment-models-and-operations.md
  • docs/clients/08-compliance-and-certifications.md
  • docs/clients/09-faq-and-glossary.md

Below are concrete directions and samples.


2.2 01 - Solution Overview

Goal: In 2-4 pages, explain the value proposition with minimal jargon.

Outline for documentation writers:

  1. Problem statement: SBOM, VEX, attestation chaos today.

  2. What Stella Ops is (one paragraph).

  3. Key capabilities:

    • Deterministic, replayable scans.
    • Crypto-sovereign readiness.
    • Lattice-based VEX / policy engine.
    • Proof-of-Integrity Graph.
  4. Key deployment scenarios:

    • On-prem, air-gapped.
    • Hybrid.
    • Cloud-only.
  5. Outcomes:

    • Faster audits.
    • Lower vulnerability noise.
    • Vendor-proof cryptography and attestations.

Sample paragraph (they can adapt language):

Stella Ops is a sovereign, deterministic SBOM and VEX platform that turns software supply chain metadata into auditable evidence. It ingests container images and build artifacts, produces cryptographically signed SBOMs and VEX statements, and evaluates risk using a configurable lattice engine. Every scan can be deterministically replayed against frozen feeds and policies, enabling regulators and auditors to independently verify historical decisions.


2.3 02 - Platform Architecture & Trust Model

Goal: Explain the core subsystems to architects, without drowning them in implementation details.

Outline:

  1. High-level diagram (required): modules and major data flows.

  2. Core components:

    • Scanner (binary/SBOM generation).
    • Sbomer (SBOM normalization and storage).
    • Concelier/Feedser (feed ingestion, normalization).
    • Excitior/Vexer (VEX evaluation, lattice engine).
    • Authority (signing, policy, key management).
    • Ledger (proof market, transparency-log integration).
  3. Trust boundaries:

    • What runs in customer infra vs external registries/logs.
    • Where signatures are created and verified.
  4. Threat model summary:

    • Supply chain tampering.
    • Compromised registry.
    • Compromised feed sources.

Sample section header and short text:

## Components at a Glance

Stella Ops is composed of six cooperating services:

- **Scanner**: inspects container images and binaries, produces SBOMs.
- **Sbomer**: normalizes and stores SBOMs in CycloneDX/SPDX formats.
- **Concelier/Feedser**: ingests vulnerability feeds, advisories, and vendor VEX.
- **Excitior/Vexer**: applies lattice-based policies to SBOM+VEX to compute effective risk.
- **Authority**: manages keys, signatures, and attestation policies.
- **Ledger**: integrates with transparency logs (e.g. Rekor) and maintains a local proof store.

2.4 03 - SBOM & VEX Capabilities

Goal: Show clients what standards you support and how SBOM/VEX are produced and consumed.

Outline:

  1. Standards support:

    • CycloneDX (version list).
    • SPDX (version list).
    • VEX formats supported/normalized.
  2. SBOM lifecycle:

    • Generation at build.
    • Ingestion from third parties.
    • Storage, indexing, retention.
  3. VEX lifecycle:

    • Ingestion from vendors.
    • Internal VEX issuance.
    • Propagation to downstream systems.
  4. Benefits:

    • Noise reduction.
    • Consistent view across multiple vendors.

Short sample:

Stella Ops consumes and produces SBOMs in CycloneDX and SPDX formats, normalizing them into an internal graph model. On top of that graph, the platform evaluates vulnerability information combined with VEX statements (both vendor-supplied and locally authored). This separation of “what is present” (SBOM) from “what is exploitable” (VEX + lattice rules) is central to reducing false positives while maintaining explainability.
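If a concrete illustration is useful at this point in the doc, a VEX ingestion call might look like the following; the `stella vex import` subcommand and its flags are hypothetical placeholders, not a confirmed CLI surface:

```bash
# Hypothetical CLI shape for ingesting a vendor-supplied VEX document.
stella vex import \
  --format cyclonedx-vex \
  --source vendor:acme \
  acme-openssl-advisory.vex.json
```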


2.5 05 - Deterministic Replay & Audit Readiness

This is one of your moats; it deserves its own client-facing concept doc.

Outline:

  1. What is a “deterministic scan”?

  2. What is captured in a replay bundle:

    • Feeds snapshot (hashes).
    • Policies, lattice configuration.
    • SBOM and VEX input.
    • Scan result + proof object.
  3. Use cases:

    • Regulatory audits.
    • Internal forensics.
    • Dispute resolution with vendors.
  4. Operational model:

    • Retention strategy (how long we keep bundles).
    • Export/import for external verification.

Sample explainer paragraph:

A deterministic scan in Stella Ops is a vulnerability and risk evaluation that can be reproduced bit-for-bit at a later time. For each scan, the platform records a manifest of all inputs: SBOMs, VEX statements, vulnerability feeds, and policy rules, each identified by content hash. These inputs, together with the evaluation engine version, form a replay bundle. Auditors can re-run the bundle in an offline environment and confirm that the platform's decision at that time was mathematically consistent with the evidence and policies in force.
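Writers may want to pair this explainer with a short command sketch; the `stella replay` subcommands below are hypothetical placeholders used only to make the export/re-run flow concrete:

```bash
# Hypothetical flow: export a replay bundle for a past scan, then re-run it offline.
stella replay export --scan <scan-id> --output replay-bundle.tar.gz
stella replay run --bundle replay-bundle.tar.gz --offline --report replay-report.json
```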


2.6 06 - Crypto-Sovereign & Regional Standards

Outline:

  1. Supported cryptographic algorithms:

    • Standard (RSA/ECDSA, etc.).
    • Regional (GOST, SM, eIDAS, etc.): list per region.
    • PQC options (Dilithium/Falcon) if planned/available.
  2. Deployment patterns:

    • How keys are hosted (HSM, external KMS, bring-your-own).
    • How to ensure regional legal compliance (data residency, crypto regulations).
  3. Interoperability:

    • How a GOST-signed attestation can still be verifiable by non-GOST consumers, and vice versa (if applicable).

Short sample:

Stella Ops supports multiple cryptographic profiles, including EU eIDAS-qualified signatures and regional standards such as GOST and SM-series algorithms. Each deployment can be configured with a crypto profile that aligns with regulatory and organizational requirements. The same SBOM and attestation data can be signed under multiple profiles, enabling cross-border verification without forcing customers onto a single cryptographic regime.
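As an illustration of "signed under multiple profiles", a CLI sketch; the `stella sign` command and its `--profile` flag are hypothetical placeholders, while the profile names match those used later in the replay bundle manifest spec:

```bash
# Hypothetical: sign the same SBOM under two cryptographic profiles.
stella sign --profile eidas-qes --input sbom.cdx.json --output sbom.eidas.sig.json
stella sign --profile gost-2012 --input sbom.cdx.json --output sbom.gost.sig.json
```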


3. Developer-Facing Documentation Blueprint

Developer documentation must be immediately actionable.

3.1 Top-Level Structure (Developers)

Proposed tree:

  • docs/dev/01-getting-started.md
  • docs/dev/02-core-concepts.md
  • docs/dev/03-apis/ (REST/GraphQL/CLI)
  • docs/dev/04-sdks/
  • docs/dev/05-examples-and-recipes/
  • docs/dev/06-architecture-deep-dive/
  • docs/dev/07-data-models-and-schemas/
  • docs/dev/08-operations-and-runbooks/
  • docs/dev/09-contributing-and-extension-points/

3.2 01 - Getting Started (Developers)

Outline:

  1. Minimal environment prerequisites (Docker, Kubernetes, DB).

  2. Quick install (one simple path: e.g., docker-compose, Helm).

  3. First scan:

    • Run scanner against sample image.
    • View SBOM and results via UI/API.
  4. Links to deeper sections.

Sample skeleton:

# Getting Started with Stella Ops

This guide helps you run your first scan in under 30 minutes.

## Prerequisites

- Docker 24+ or Kubernetes 1.27+
- Access to a Postgres-compatible database
- A container registry containing at least one test image

## Step 1: Deploy the Core Services

```bash
docker compose -f deploy/docker-compose.minimal.yml up -d
```

## Step 2: Run Your First Scan

```bash
stella scanner image scan \
  --image ghcr.io/example/app:latest \
  --output sbom.json
```

## Step 3: View Results

Use the API:

```bash
curl http://localhost:8080/api/scans/{scanId}
```

Or open the web UI at http://localhost:8080.


---

3.3 03 - API Reference (REST/GraphQL/CLI)

For each API, documentation writers must provide:

- Endpoint/path (or GraphQL query/mutation).  
- Description.  
- Request schema & example.  
- Response schema & example.  
- Error codes and typical causes.

**Example structure for a REST endpoint:**

## POST /api/scans

Create a new scan for a container image or SBOM.

### Request

```json
{
  "target": {
    "type": "container-image",
    "imageRef": "ghcr.io/example/app:1.2.3"
  },
  "policies": ["default"],
  "attest": true
}
```

### Response

```json
{
  "scanId": "scn_01J8YF4ZK7...",
  "status": "queued",
  "links": {
    "self": "/api/scans/scn_01J8YF4ZK7...",
    "results": "/api/scans/scn_01J8YF4ZK7.../results"
  }
}
```

### Error Codes

- `400 INVALID_TARGET`: the target specification is missing or invalid.
- `403 POLICY_NOT_ALLOWED`: caller not permitted to use the specified policies.
- `503 BACKEND_UNAVAILABLE`: core scanning service currently unavailable.
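For writers who want to pair the schema with a runnable call, a matching invocation could look like this; the host, port, and bearer-token header are placeholders for whatever the deployment actually exposes:

```bash
# Submit a scan request to a local Stella Ops API endpoint (placeholder host and auth).
curl -X POST http://localhost:8080/api/scans \
  -H "Authorization: Bearer $STELLA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "target": { "type": "container-image", "imageRef": "ghcr.io/example/app:1.2.3" },
        "policies": ["default"],
        "attest": true
      }'
```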

---

3.4 05 - Examples & Recipes

This section is extremely practical. You want short, targeted recipes such as:

- “Integrate Stella Ops in GitLab CI pipeline.”  
- “Use Stella Ops to sign SBOMs with GOST keys.”  
- “Replay a historical scan bundle for audit purposes.”  
- “Publish attestations to Rekor and verify them.”

Each recipe follows a standard pattern:

1. Problem / goal.  
2. Preconditions.  
3. Step-by-step instructions.  
4. Example code/config.  
5. Verification.
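As a concrete instance of this pattern, the "Publish attestations to Rekor and verify them" recipe could center on standard Cosign commands like the following; key paths and the image reference are placeholders, and when the default public Sigstore infrastructure is used the attestation is also recorded in Rekor:

```bash
# Attach a CycloneDX SBOM to an image as a signed in-toto/DSSE attestation.
cosign attest --key cosign.key --type cyclonedx --predicate sbom.cdx.json ghcr.io/example/app:1.2.3

# Verify the attestation signature (and, with public Sigstore, its transparency-log entry).
cosign verify-attestation --key cosign.pub --type cyclonedx ghcr.io/example/app:1.2.3
```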

---

3.5 06 - Architecture Deep Dive (Developers)

This is where your internal devs and advanced customers go.

Per deep dive:

- Explain internal data flows in detail.  
- Include sequence diagrams (build → scanner → sbomer → vexer → ledger).  
- Document invariants and assumptions.  
- Explain failure modes and how they surface (e.g. what happens if Rekor is down).

---

4. Per-Module Template

For every major module (Scanner, Sbomer, Concelier/Feedser, Excitior/Vexer, Authority, Ledger), documentation writers should create a module page using this template:

```md
---
title: "Scanner Module Overview"
audience: ["developer", "devops", "architect"]
type: "architecture"
module: "Scanner"
---

# Purpose

One paragraph: what this module does in the system.

# Responsibilities

- Bullet list of responsibilities.
- What input it accepts.
- What output it produces.

# Interfaces

## Incoming

- API endpoints, message queues, or CLI commands that hit this module.

## Outgoing

- Which services it calls.
- Which queues/topics it publishes to.

# Data Model

- Key entities and fields.
- Links to detailed schema documents.

# Configuration

- Config options (names, types, defaults).
- How configuration is loaded (env, file, config service).

# Operational Considerations

- Scaling characteristics.
- Resource usage patterns.
- Common failure modes and how they are reported.

# Security & Trust

- What security boundaries apply.
- Which keys or tokens are used.
- Logging and audit fields produced.

# Examples

- Example request/response.
- Example logs for a typical operation.
```

5. Two Concrete, Ready-to-Use Sample Pages

Here are two detailed sample pages you can almost drop in as-is.

5.1 Sample Client Doc: “Deterministic Scan Replay Overview”

---
title: "Deterministic Scan Replay"
audience: ["client", "security-lead", "architect"]
type: "concept"
version: "1.0"
module: "Platform"
---

# Deterministic Scan Replay

Stella Ops allows every vulnerability scan to be reproduced bit-for-bit at a later date. This property is called **deterministic scan replay** and is central to our audit and compliance story.

## Why Determinism Matters

Most security tools evaluate vulnerabilities against live feeds and mutable policies. Six months later, it is often impossible to reconstruct why a specific risk decision was made, because:

- Vulnerability feeds have changed.
- Vendor advisories have been updated or withdrawn.
- Internal policies have evolved.

Stella Ops addresses this by capturing all inputs into a scan in a **replay bundle**.

## What is a Replay Bundle?

For each scan, Stella Ops records:

- **SBOM inputs**: the exact SBOM documents used, with content hashes.
- **VEX inputs**: vendor and local VEX statements applied, with content hashes.
- **Vulnerability feeds**: references to feed snapshots with content hashes and timestamps.
- **Policy and lattice configuration**: the full set of rules and lattice parameters in effect.
- **Engine version**: the precise version of the evaluation engine used.

These elements are referenced in a signed manifest. The manifest itself can be stored locally or anchored to an external transparency log.

## How Auditors Use Replay

During an audit, an authorized party can:

1. Export a replay bundle for the time period or system under review.
2. Load the bundle into a replay-capable environment (online or offline).
3. Re-run the evaluation and confirm that:
   - The same inputs produce the same findings.
   - The risk decisions claimed at the time are consistent with the policies that were in force.

This removes guesswork and narrative from the conversation. Auditors work with verifiable evidence and deterministic behavior instead of reconstructed stories.

## Impact on Compliance

Deterministic scan replay supports:

- **Regulatory compliance**: Demonstrate that vulnerability decisions were aligned with documented policies and evidence at any historical point.
- **Vendor accountability**: Show precisely which vendor advisories and VEX statements were applied to which software versions.
- **Internal governance**: Trace how changes in policy or feeds affect risk assessments over time.

5.2 Sample Developer Doc: “Replay Bundle Manifest Technical Specification”

---
title: "Replay Bundle Manifest"
audience: ["developer", "devops"]
type: "reference"
module: "Authority"
version: "1.0"
---

# Replay Bundle Manifest

This document defines the JSON schema for the replay bundle manifest used by Stella Ops to support deterministic scan replay.

The manifest describes all inputs required to reproduce a specific scan result.

## Top-Level Structure

```json
{
  "schemaVersion": "1.0",
  "bundleId": "rbn_01J8YF4ZK7...",
  "scanId": "scn_01J8XE3FQ8...",
  "createdAt": "2025-10-21T14:32:45Z",
  "engine": {
    "name": "stella-vex-engine",
    "version": "2.3.1"
  },
  "inputs": {
    "sboms": [],
    "vexStatements": [],
    "feeds": [],
    "policies": []
  },
  "signatures": []
}
```

## SBOM Inputs

```json
"sboms": [
  {
    "id": "sbom_01J8XD7A9K...",
    "role": "primary",
    "format": "CycloneDX",
    "version": "1.6",
    "hash": {
      "alg": "sha256",
      "value": "4f3c9c..."
    }
  }
]
```

- `role`: `primary` or `dependency`.
- `format`: `CycloneDX` or `SPDX`.
- `hash`: content hash of the SBOM document.

## VEX Inputs

```json
"vexStatements": [
  {
    "id": "vex_01J8XD9VTR...",
    "source": "vendor",
    "format": "CycloneDX-VEX",
    "hash": {
      "alg": "sha256",
      "value": "b3d2a1..."
    }
  }
]
```

- `source`: `vendor` or `local`.
- `format`: VEX format identifier.
- `hash`: content hash of the VEX document.

## Feed Snapshots

```json
"feeds": [
  {
    "id": "feed_nvd_2025-10-21",
    "provider": "NVD",
    "snapshotDate": "2025-10-21",
    "hash": {
      "alg": "sha256",
      "value": "89a7c5..."
    }
  }
]
```

Each entry represents a frozen snapshot of a vulnerability feed used during the scan.

## Policies and Lattice Configuration

```json
"policies": [
  {
    "id": "policy_default_2025-09",
    "type": "lattice-config",
    "hash": {
      "alg": "sha256",
      "value": "cc44ee..."
    }
  }
]
```

The manifest does not inline the full policy, only its hash and ID. Policies are stored separately but are referentially immutable.

## Signatures

```json
"signatures": [
  {
    "profile": "eidas-qes",
    "alg": "ECDSA_P256",
    "sig": "MEQCIG...",
    "certChainRef": "cert_01J8Z0..."
  }
]
```

- `profile`: cryptographic profile, e.g. `eidas-qes`, `gost-2012`, `pqc-dilithium`.
- `alg`: concrete algorithm used.
- `sig`: signature over the canonical JSON form.
- `certChainRef`: reference to the certificate chain used for verification.

## Backwards Compatibility

- New fields must be additive.
- Existing fields must not change semantics.
- The `schemaVersion` field is used to interpret optional sections.

Replay tools MUST ignore unknown fields.
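For illustration, an auditor or CI job can recompute an input hash and compare it with the manifest entry; the file names below are placeholders, and the `jq` path follows the structure defined above:

```bash
# Recompute the SBOM's content hash...
sha256sum sbom.cdx.json

# ...and compare it with the value recorded in the replay bundle manifest.
jq -r '.inputs.sboms[0].hash.value' replay-bundle-manifest.json
```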


---

If you give this to your documentation writers, they will have:

- A clear structure for client vs developer docs.  
- Concrete outlines per key area (SBOM/VEX, replay, crypto-sovereign).  
- A per-module template.  
- Example pages that show the level of depth and style you expect.

Possible next steps:
- Turn this into a full `docs/authoring-guide.md` plus a directory tree.
- Add explicit naming conventions and linting rules (e.g., required headings, glossary tags).