Trust Architecture Diagrams
Sprint: SPRINT_20260125_003 - WORKFLOW-008
Last updated: 2026-01-25
This document provides architectural diagrams for the StellaOps TUF-based trust distribution system.
1. Trust Hierarchy
The TUF trust hierarchy showing roles and key relationships.
graph TB
subgraph "TUF Roles & Keys"
ROOT[("Root<br/>(threshold: 3/5)")]
TARGETS[("Targets<br/>(threshold: 1)")]
SNAPSHOT[("Snapshot<br/>(threshold: 1)")]
TIMESTAMP[("Timestamp<br/>(threshold: 1)")]
end
subgraph "Trust Targets"
REKOR_KEY["Rekor Public Key<br/>rekor-key-v1.pub"]
FULCIO_CHAIN["Fulcio Chain<br/>fulcio-chain.pem"]
SERVICE_MAP["Service Map<br/>sigstore-services-v1.json"]
ORG_KEY["Org Signing Key<br/>org-signing-key.pub"]
end
ROOT --> TARGETS
ROOT --> SNAPSHOT
ROOT --> TIMESTAMP
SNAPSHOT --> TARGETS
TIMESTAMP --> SNAPSHOT
TARGETS --> REKOR_KEY
TARGETS --> FULCIO_CHAIN
TARGETS --> SERVICE_MAP
TARGETS --> ORG_KEY
style ROOT fill:#ff6b6b,stroke:#333,stroke-width:2px
style TARGETS fill:#4ecdc4,stroke:#333
style SNAPSHOT fill:#45b7d1,stroke:#333
style TIMESTAMP fill:#96ceb4,stroke:#333
Role Descriptions
| Role | Purpose | Update Frequency |
|---|---|---|
| Root | Ultimate trust anchor, defines all other roles | Rarely (ceremony) |
| Targets | Lists trusted targets with hashes | When targets change |
| Snapshot | Point-in-time view of all metadata | With targets |
| Timestamp | Freshness guarantee | Every few hours |
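The 3-of-5 root threshold means new root metadata is accepted only when at least three distinct root keys have produced valid signatures over it. A minimal sketch of that counting rule, assuming Ed25519 keys and an already-canonicalized `signed` payload; key-ID derivation, canonicalization, expiry, and version checks are what a real TUF client adds on top:

```go
package trust

import "crypto/ed25519"

// signature pairs a TUF key ID with the signature bytes it contributed.
type signature struct {
	KeyID string
	Sig   []byte
}

// meetsThreshold counts distinct registered keys with a valid signature
// over the canonical "signed" payload and compares the count against the
// role's threshold (3 of 5 for root in the hierarchy above).
func meetsThreshold(signed []byte, sigs []signature, keys map[string]ed25519.PublicKey, threshold int) bool {
	valid := map[string]bool{}
	for _, s := range sigs {
		pub, ok := keys[s.KeyID]
		if !ok || valid[s.KeyID] {
			continue // unknown key or duplicate key ID does not count
		}
		if ed25519.Verify(pub, signed, s.Sig) {
			valid[s.KeyID] = true
		}
	}
	return len(valid) >= threshold
}
```

A real client also enforces each role's expiry timestamp and rejects metadata version rollbacks before counting signatures.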
2. Online Verification Flow
Client verification of attestations when the network is available.
sequenceDiagram
participant Client as StellaOps Client
participant TUF as TUF Repository
participant Rekor as Rekor Transparency Log
participant Cache as Local Cache
Note over Client: Start verification
Client->>Cache: Check TUF metadata freshness
alt Metadata stale
Client->>TUF: Fetch timestamp.json
TUF-->>Client: timestamp.json
Client->>TUF: Fetch snapshot.json (if needed)
TUF-->>Client: snapshot.json
Client->>TUF: Fetch targets.json (if needed)
TUF-->>Client: targets.json
Client->>Cache: Update cached metadata
end
Client->>Cache: Load Rekor public key
Client->>Cache: Load service map
Note over Client: Resolve Rekor URL from service map
Client->>Rekor: GET /api/v2/log/entries/{uuid}/proof
Rekor-->>Client: Inclusion proof + checkpoint
Note over Client: Verify:
Note over Client: 1. Checkpoint signature (Rekor key)
Note over Client: 2. Merkle inclusion proof
Note over Client: 3. Entry matches attestation
Client-->>Client: Verification Result
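The Merkle step (check 2 above) can be sketched with the RFC 6962/RFC 9162 hashing convention used by Rekor-style transparency logs: 0x00-prefixed leaf hashes, 0x01-prefixed interior hashes, and an audit path replayed up to the checkpoint's root hash. The function names are illustrative, not the StellaOps client API.

```go
package trust

import (
	"bytes"
	"crypto/sha256"
)

// hashLeaf and hashChildren follow RFC 6962: a 0x00 domain-separation
// prefix for leaves and a 0x01 prefix for interior nodes.
func hashLeaf(data []byte) []byte {
	h := sha256.New()
	h.Write([]byte{0x00})
	h.Write(data)
	return h.Sum(nil)
}

func hashChildren(left, right []byte) []byte {
	h := sha256.New()
	h.Write([]byte{0x01})
	h.Write(left)
	h.Write(right)
	return h.Sum(nil)
}

// VerifyInclusion replays the audit path from a leaf hash at the given
// index up to the root of a tree with treeSize leaves and compares the
// result with the root hash carried in the signed checkpoint.
func VerifyInclusion(leafHash []byte, index, treeSize uint64, proof [][]byte, root []byte) bool {
	if index >= treeSize {
		return false
	}
	fn, sn := index, treeSize-1
	r := leafHash
	for _, p := range proof {
		if sn == 0 {
			return false // proof is longer than the path to the root
		}
		if fn%2 == 1 || fn == sn {
			r = hashChildren(p, r)
			if fn%2 == 0 {
				// Skip levels where this subtree is the rightmost, incomplete one.
				for fn%2 == 0 && fn != 0 {
					fn >>= 1
					sn >>= 1
				}
			}
		} else {
			r = hashChildren(r, p)
		}
		fn >>= 1
		sn >>= 1
	}
	return sn == 0 && bytes.Equal(r, root)
}
```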
3. Offline Verification Flow
Client verification using a sealed trust bundle (air-gapped environments).
sequenceDiagram
participant Client as StellaOps Client
participant Bundle as Trust Bundle
participant Tiles as Cached Tiles
Note over Client: Start offline verification
Client->>Bundle: Load TUF metadata
Bundle-->>Client: root.json, targets.json, etc.
Client->>Bundle: Load Rekor public key
Bundle-->>Client: rekor-key-v1.pub
Client->>Bundle: Load checkpoint
Bundle-->>Client: Signed checkpoint
Note over Client: Verify checkpoint signature
Client->>Tiles: Load Merkle tiles for proof
Tiles-->>Client: tile/data/..., tile/...
Note over Client: Reconstruct inclusion proof
Client->>Client: Verify Merkle path
Note over Client: No network calls required!
Client-->>Client: Verification Result
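In the offline path, checkpoint verification reduces to a signature check with the Rekor key shipped in the bundle. A sketch, assuming the signed-note envelope has already been split into its body and a detached Ed25519 signature; envelope and key-hint parsing are elided:

```go
package trust

import (
	"crypto/ed25519"
	"encoding/base64"
	"errors"
	"strconv"
	"strings"
)

// checkpoint captures the fields the offline verifier needs from a
// checkpoint body: the log origin, the tree size, and the root hash.
type checkpoint struct {
	Origin   string
	TreeSize uint64
	RootHash []byte
}

// parseCheckpointBody reads the first three lines of a checkpoint body
// (origin, decimal tree size, base64 root hash). Extension lines and the
// signed-note envelope are intentionally ignored in this sketch.
func parseCheckpointBody(body string) (checkpoint, error) {
	lines := strings.Split(strings.TrimRight(body, "\n"), "\n")
	if len(lines) < 3 {
		return checkpoint{}, errors.New("checkpoint body too short")
	}
	size, err := strconv.ParseUint(lines[1], 10, 64)
	if err != nil {
		return checkpoint{}, err
	}
	root, err := base64.StdEncoding.DecodeString(lines[2])
	if err != nil {
		return checkpoint{}, err
	}
	return checkpoint{Origin: lines[0], TreeSize: size, RootHash: root}, nil
}

// verifyCheckpoint checks the signature over the raw checkpoint body
// using the Rekor public key loaded from the bundle (rekor-key-v1.pub).
func verifyCheckpoint(body, sig []byte, rekorKey ed25519.PublicKey) bool {
	return ed25519.Verify(rekorKey, body, sig)
}
```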
Trust Bundle Contents
trust-bundle.tar.zst/
├── manifest.json # Bundle metadata & checksums
├── tuf/
│ ├── root.json
│ ├── targets.json
│ ├── snapshot.json
│ └── timestamp.json
├── targets/
│ ├── rekor-key-v1.pub
│ ├── sigstore-services-v1.json
│ └── fulcio-chain.pem
└── tiles/ # Pre-fetched Merkle tiles
├── checkpoint
└── tile/
├── 0/...
├── 1/...
└── data/...
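Before any bundle contents are trusted, the client can re-hash every file against manifest.json. The manifest shape below (a flat path-to-SHA-256 map under `files`) is an assumption for illustration; the real manifest may carry more metadata.

```go
package trust

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// bundleManifest is an assumed shape for manifest.json: bundle-relative
// paths mapped to their expected hex-encoded SHA-256 digests.
type bundleManifest struct {
	Files map[string]string `json:"files"`
}

// verifyBundle re-hashes every file listed in the manifest of an
// unpacked trust bundle and fails on the first mismatch.
func verifyBundle(dir string) error {
	raw, err := os.ReadFile(filepath.Join(dir, "manifest.json"))
	if err != nil {
		return err
	}
	var m bundleManifest
	if err := json.Unmarshal(raw, &m); err != nil {
		return err
	}
	for rel, want := range m.Files {
		data, err := os.ReadFile(filepath.Join(dir, rel))
		if err != nil {
			return err
		}
		sum := sha256.Sum256(data)
		if hex.EncodeToString(sum[:]) != want {
			return fmt.Errorf("digest mismatch for %s", rel)
		}
	}
	return nil
}
```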
4. Key Rotation Flow
Dual-key rotation with a grace period.
stateDiagram-v2
[*] --> SingleKey: Initial State
SingleKey --> DualKey: Add new key
DualKey --> DualKey: Grace period<br/>(7-14 days)
DualKey --> SingleKey: Remove old key
SingleKey --> [*]
note right of SingleKey
Only one key trusted
All signatures use this key
end note
note right of DualKey
Both keys trusted
Old attestations verify (old key)
New attestations verify (new key)
Clients sync new key
end note
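During the grace period the targets role lists both keys, so verification succeeds if any currently trusted key matches. A minimal sketch, assuming Ed25519 keys already extracted from the targets metadata:

```go
package trust

import "crypto/ed25519"

// verifyWithTrustedKeys accepts a signature made by any currently
// trusted key, so old attestations keep verifying with the old key
// while new attestations pick up the rotated key.
func verifyWithTrustedKeys(msg, sig []byte, trusted []ed25519.PublicKey) bool {
	for _, key := range trusted {
		if ed25519.Verify(key, msg, sig) {
			return true
		}
	}
	return false
}
```

Once the old key is removed from targets.json and clients re-sync, signatures made with it stop verifying.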
Detailed Rotation Timeline
gantt
title Key Rotation Timeline
dateFormat YYYY-MM-DD
section TUF Admin
Generate new key :done, gen, 2026-01-01, 1d
Add to TUF repository :done, add, after gen, 1d
Sign & publish metadata :done, pub, after add, 1d
section Grace Period
Dual-key active :active, grace, after pub, 14d
Monitor client sync :monitor, after pub, 14d
section Completion
Remove old key :remove, after grace, 1d
Sign & publish final :final, after remove, 1d
5. Failover Flow
Circuit breaker and mirror failover during a primary outage.
stateDiagram-v2
[*] --> Closed: Normal operation
state "Circuit Breaker" as CB {
Closed --> Open: Failures > threshold
Open --> HalfOpen: After timeout
HalfOpen --> Closed: Success
HalfOpen --> Open: Failure
}
state "Request Routing" as Routing {
Primary: Primary Rekor
Mirror: Mirror Rekor
}
Closed --> Primary: Route to primary
Open --> Mirror: Failover to mirror
HalfOpen --> Primary: Probe primary
note right of Open
Primary unavailable
Use mirror if configured
Cache tiles locally
end note
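A minimal circuit breaker matching the state machine above: it opens after a run of consecutive failures, waits out a cooldown, and then lets probe requests through in the half-open state. The threshold and cooldown values are illustrative configuration, not the shipped defaults.

```go
package trust

import (
	"sync"
	"time"
)

type breakerState int

const (
	closed breakerState = iota
	open
	halfOpen
)

// breaker tracks consecutive failures against the primary Rekor and
// gates whether new requests may be routed to it.
type breaker struct {
	mu        sync.Mutex
	state     breakerState
	failures  int
	threshold int           // consecutive failures before opening
	cooldown  time.Duration // how long to stay open before probing
	openedAt  time.Time
}

// allow reports whether a request may be routed to the primary.
func (b *breaker) allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.state == open && time.Since(b.openedAt) >= b.cooldown {
		b.state = halfOpen // cooldown elapsed: allow a probe
	}
	return b.state != open
}

// record updates the breaker with the outcome of the last request.
func (b *breaker) record(success bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if success {
		b.state, b.failures = closed, 0
		return
	}
	b.failures++
	if b.state == halfOpen || b.failures >= b.threshold {
		b.state, b.openedAt = open, time.Now()
	}
}
```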
Failover Decision Tree
flowchart TD
START([Request]) --> CB{Circuit<br/>Breaker<br/>State?}
CB -->|Closed| PRIMARY[Try Primary]
CB -->|Open| MIRROR_CHECK{Mirror<br/>Enabled?}
CB -->|HalfOpen| PROBE[Probe Primary]
PRIMARY -->|Success| SUCCESS([Return Result])
PRIMARY -->|Failure| RECORD[Record Failure]
RECORD --> THRESHOLD{Threshold<br/>Exceeded?}
THRESHOLD -->|Yes| OPEN_CB[Open Circuit]
THRESHOLD -->|No| FAIL([Return Error])
OPEN_CB --> MIRROR_CHECK
MIRROR_CHECK -->|Yes| MIRROR[Try Mirror]
MIRROR_CHECK -->|No| CACHE{Cached<br/>Data?}
MIRROR -->|Success| SUCCESS
MIRROR -->|Failure| CACHE
CACHE -->|Yes| CACHED([Return Cached])
CACHE -->|No| FAIL
PROBE -->|Success| CLOSE_CB[Close Circuit]
PROBE -->|Failure| OPEN_CB
CLOSE_CB --> SUCCESS
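The decision tree collapses into an ordered fallback chain. The sketch below reuses the `breaker` from the previous section and hides the primary, mirror, and local tile cache behind a hypothetical `proofSource` interface; none of these names are the actual client API.

```go
package trust

import "errors"

// proofSource abstracts "fetch an inclusion proof for this entry" so the
// primary Rekor, the mirror, and the local cache can be tried in order.
type proofSource interface {
	FetchProof(entryUUID string) ([]byte, error)
}

var errNoProof = errors.New("no proof available from primary, mirror, or cache")

// fetchWithFailover walks the decision tree above: primary when the
// circuit breaker allows it, then the mirror (if configured), then any
// locally cached data, and finally an error.
func fetchWithFailover(b *breaker, primary, mirror, cache proofSource, uuid string) ([]byte, error) {
	if b.allow() {
		if proof, err := primary.FetchProof(uuid); err == nil {
			b.record(true)
			return proof, nil
		}
		b.record(false)
	}
	if mirror != nil {
		if proof, err := mirror.FetchProof(uuid); err == nil {
			return proof, nil
		}
	}
	if cache != nil {
		if proof, err := cache.FetchProof(uuid); err == nil {
			return proof, nil
		}
	}
	return nil, errNoProof
}
```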
6. Component Architecture
Full system component view.
graph TB
subgraph "Client Layer"
CLI[stella CLI]
SDK[StellaOps SDK]
end
subgraph "Trust Layer"
TUF_CLIENT[TUF Client]
CACHE[(Local Cache)]
CB[Circuit Breaker]
end
subgraph "Service Layer"
TUF_SERVER[TUF Server]
REKOR_PRIMARY[Rekor Primary]
REKOR_MIRROR[Rekor Mirror / Tile Proxy]
end
subgraph "Storage Layer"
TUF_STORE[(TUF Metadata)]
LOG_STORE[(Transparency Log)]
TILE_STORE[(Tile Storage)]
end
CLI --> TUF_CLIENT
SDK --> TUF_CLIENT
TUF_CLIENT --> CACHE
TUF_CLIENT --> CB
CB --> REKOR_PRIMARY
CB --> REKOR_MIRROR
TUF_CLIENT --> TUF_SERVER
TUF_SERVER --> TUF_STORE
REKOR_PRIMARY --> LOG_STORE
REKOR_MIRROR --> TILE_STORE
style CB fill:#ff9999
style CACHE fill:#99ff99
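The trust-layer components above are wired together from a handful of settings. The struct below is a hypothetical shape for that configuration, not the actual StellaOps schema; it only names the knobs the diagram implies (TUF server, primary and mirror Rekor endpoints, local cache, and freshness/breaker tuning).

```go
package trust

import "time"

// trustConfig gathers the settings the trust-layer components need.
type trustConfig struct {
	TUFRepositoryURL string        // TUF Server serving root/targets/snapshot/timestamp
	RekorPrimaryURL  string        // primary Rekor endpoint
	RekorMirrorURL   string        // optional mirror / tile proxy ("" disables failover)
	CacheDir         string        // local cache for metadata, targets, and tiles
	MetadataMaxAge   time.Duration // refresh TUF metadata when older than this
	BreakerThreshold int           // consecutive failures before opening the circuit
	BreakerCooldown  time.Duration // wait before probing the primary again
}
```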
7. Data Flow Summary
flowchart LR
subgraph "Bootstrap"
A[Initialize TUF] --> B[Fetch Root]
B --> C[Fetch Metadata Chain]
C --> D[Cache Targets]
end
subgraph "Attestation"
E[Create Attestation] --> F[Sign DSSE]
F --> G[Submit to Rekor]
G --> H[Store Proof]
end
subgraph "Verification"
I[Load Attestation] --> J[Check TUF Freshness]
J --> K[Fetch Inclusion Proof]
K --> L[Verify Merkle Path]
L --> M[Check Checkpoint Sig]
M --> N[Return Result]
end
D --> E
H --> I
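Tying the verification lane together, a hedged end-to-end sketch that reuses the earlier fragments (`breaker`, `proofSource`, and `fetchWithFailover` from section 5; `hashLeaf` and `VerifyInclusion` from section 2). The `proofResponse` wire shape is assumed for illustration and is not the real Rekor response schema.

```go
package trust

import (
	"encoding/json"
	"fmt"
)

// proofResponse is an assumed wire shape for an inclusion-proof reply:
// the leaf index, the tree size, and the audit path hashes.
type proofResponse struct {
	LogIndex uint64   `json:"logIndex"`
	TreeSize uint64   `json:"treeSize"`
	Hashes   [][]byte `json:"hashes"`
}

// verifyEntry fetches the proof through the failover chain, then replays
// the Merkle path against the root hash taken from an already-verified
// checkpoint.
func verifyEntry(b *breaker, primary, mirror, cache proofSource,
	uuid string, canonicalEntry, checkpointRoot []byte) error {

	raw, err := fetchWithFailover(b, primary, mirror, cache, uuid)
	if err != nil {
		return err
	}
	var pr proofResponse
	if err := json.Unmarshal(raw, &pr); err != nil {
		return err
	}
	if !VerifyInclusion(hashLeaf(canonicalEntry), pr.LogIndex, pr.TreeSize, pr.Hashes, checkpointRoot) {
		return fmt.Errorf("inclusion proof for %s does not match the checkpoint root", uuid)
	}
	return nil
}
```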