fix tests. new product advisories enhancements

docs/modules/attestor/diagrams/trust-architecture.md (new file)

# Trust Architecture Diagrams

> Sprint: SPRINT_20260125_003 - WORKFLOW-008
> Last updated: 2026-01-25

This document provides architectural diagrams for the StellaOps TUF-based trust distribution system.

---

## 1. Trust Hierarchy

The TUF trust hierarchy, showing roles and key relationships.

```mermaid
graph TB
    subgraph "TUF Roles & Keys"
        ROOT[("Root<br/>(threshold: 3/5)")]
        TARGETS[("Targets<br/>(threshold: 1)")]
        SNAPSHOT[("Snapshot<br/>(threshold: 1)")]
        TIMESTAMP[("Timestamp<br/>(threshold: 1)")]
    end

    subgraph "Trust Targets"
        REKOR_KEY["Rekor Public Key<br/>rekor-key-v1.pub"]
        FULCIO_CHAIN["Fulcio Chain<br/>fulcio-chain.pem"]
        SERVICE_MAP["Service Map<br/>sigstore-services-v1.json"]
        ORG_KEY["Org Signing Key<br/>org-signing-key.pub"]
    end

    ROOT --> TARGETS
    ROOT --> SNAPSHOT
    ROOT --> TIMESTAMP
    SNAPSHOT --> TARGETS
    TIMESTAMP --> SNAPSHOT
    TARGETS --> REKOR_KEY
    TARGETS --> FULCIO_CHAIN
    TARGETS --> SERVICE_MAP
    TARGETS --> ORG_KEY

    style ROOT fill:#ff6b6b,stroke:#333,stroke-width:2px
    style TARGETS fill:#4ecdc4,stroke:#333
    style SNAPSHOT fill:#45b7d1,stroke:#333
    style TIMESTAMP fill:#96ceb4,stroke:#333
```

### Role Descriptions

| Role | Purpose | Update Frequency |
|------|---------|------------------|
| Root | Ultimate trust anchor; defines all other roles | Rarely (key ceremony) |
| Targets | Lists trusted targets with hashes | When targets change |
| Snapshot | Point-in-time view of all metadata | With targets |
| Timestamp | Freshness guarantee | Every few hours |

---

## 2. Online Verification Flow

Client verification of attestations when the network is available.

```mermaid
sequenceDiagram
    participant Client as StellaOps Client
    participant TUF as TUF Repository
    participant Rekor as Rekor Transparency Log
    participant Cache as Local Cache

    Note over Client: Start verification

    Client->>Cache: Check TUF metadata freshness
    alt Metadata stale
        Client->>TUF: Fetch timestamp.json
        TUF-->>Client: timestamp.json
        Client->>TUF: Fetch snapshot.json (if needed)
        TUF-->>Client: snapshot.json
        Client->>TUF: Fetch targets.json (if needed)
        TUF-->>Client: targets.json
        Client->>Cache: Update cached metadata
    end

    Client->>Cache: Load Rekor public key
    Client->>Cache: Load service map

    Note over Client: Resolve Rekor URL from service map

    Client->>Rekor: GET /api/v2/log/entries/{uuid}/proof
    Rekor-->>Client: Inclusion proof + checkpoint

    Note over Client: Verify:
    Note over Client: 1. Checkpoint signature (Rekor key)
    Note over Client: 2. Merkle inclusion proof
    Note over Client: 3. Entry matches attestation

    Client-->>Client: Verification Result
```

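The Merkle inclusion check in step 2 can be sketched in Python. This follows the RFC 6962/9162 audit-path algorithm (the `0x00`/`0x01` hash prefixes come from that spec); the function names are illustrative, not StellaOps API:

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962 leaf hash: H(0x00 || data)
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # RFC 6962 interior node hash: H(0x01 || left || right)
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf: bytes, index: int, tree_size: int,
                     proof: list[bytes], root: bytes) -> bool:
    """Recompute the root from a leaf and its audit path (RFC 9162 s2.1.3.2)."""
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    r = leaf_hash(leaf)
    for p in proof:
        if sn == 0:
            return False          # Proof longer than the path to the root
        if fn % 2 == 1 or fn == sn:
            r = node_hash(p, r)   # Sibling is on the left
            if fn % 2 == 0:
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)   # Sibling is on the right
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

A successful verification means the entry is committed to by the checkpoint's root hash; the checkpoint signature check then ties that root to the log operator's key.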
---

## 3. Offline Verification Flow

Client verification using a sealed trust bundle (air-gapped).

```mermaid
sequenceDiagram
    participant Client as StellaOps Client
    participant Bundle as Trust Bundle
    participant Tiles as Cached Tiles

    Note over Client: Start offline verification

    Client->>Bundle: Load TUF metadata
    Bundle-->>Client: root.json, targets.json, etc.

    Client->>Bundle: Load Rekor public key
    Bundle-->>Client: rekor-key-v1.pub

    Client->>Bundle: Load checkpoint
    Bundle-->>Client: Signed checkpoint

    Note over Client: Verify checkpoint signature

    Client->>Tiles: Load Merkle tiles for proof
    Tiles-->>Client: tile/data/..., tile/...

    Note over Client: Reconstruct inclusion proof

    Client->>Client: Verify Merkle path

    Note over Client: No network calls required!

    Client-->>Client: Verification Result
```

### Trust Bundle Contents

```
trust-bundle.tar.zst/
├── manifest.json            # Bundle metadata & checksums
├── tuf/
│   ├── root.json
│   ├── targets.json
│   ├── snapshot.json
│   └── timestamp.json
├── targets/
│   ├── rekor-key-v1.pub
│   ├── sigstore-services-v1.json
│   └── fulcio-chain.pem
└── tiles/                   # Pre-fetched Merkle tiles
    ├── checkpoint
    └── tile/
        ├── 0/...
        ├── 1/...
        └── data/...
```

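The bundled `checkpoint` is a signed note. A minimal parser for the note body, assuming the three-line body layout of the C2SP signed-note/checkpoint format (field names in the returned dict are illustrative):

```python
import base64

def parse_checkpoint(note: str) -> dict:
    """Split a checkpoint note into body and signatures, then parse the body:
    line 1 = log origin, line 2 = decimal tree size, line 3 = base64 root hash.
    Signature lines follow the blank line and start with an em-dash marker."""
    body, _, sig_block = note.partition("\n\n")
    origin, size, root_b64 = body.splitlines()[:3]
    return {
        "origin": origin,
        "tree_size": int(size),
        "root_hash": base64.b64decode(root_b64),
        "signatures": [l for l in sig_block.splitlines() if l.startswith("\u2014 ")],
    }
```

Parsing alone is not trust: the offline flow still verifies one of the signature lines against the Rekor public key loaded from the bundle.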
---

## 4. Key Rotation Flow

Dual-key rotation with a grace period.

```mermaid
stateDiagram-v2
    [*] --> SingleKey: Initial State
    SingleKey --> DualKey: Add new key
    DualKey --> DualKey: Grace period<br/>(7-14 days)
    DualKey --> SingleKey: Remove old key
    SingleKey --> [*]

    note right of SingleKey
        Only one key trusted
        All signatures use this key
    end note

    note right of DualKey
        Both keys trusted
        Old attestations verify (old key)
        New attestations verify (new key)
        Clients sync new key
    end note
```

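During the grace period a verifier simply trusts a *set* of keys rather than a single key. A minimal sketch of that idea (HMAC stands in for the real asymmetric signature scheme purely to keep the example runnable; StellaOps' actual key types are not shown):

```python
import hashlib
import hmac

def verify_any(payload: bytes, signature: bytes, trusted_keys: list[bytes]) -> bool:
    """Accept the signature if ANY currently trusted key validates it.
    During rotation the set holds both the old and the new key, so
    attestations signed under either key keep verifying."""
    return any(
        hmac.compare_digest(hmac.new(k, payload, hashlib.sha256).digest(), signature)
        for k in trusted_keys
    )
```

Removing the old key from the trusted set at the end of the grace period is what actually revokes it.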
### Detailed Rotation Timeline

```mermaid
gantt
    title Key Rotation Timeline
    dateFormat YYYY-MM-DD

    section TUF Admin
    Generate new key        :done, gen, 2026-01-01, 1d
    Add to TUF repository   :done, add, after gen, 1d
    Sign & publish metadata :done, pub, after add, 1d

    section Grace Period
    Dual-key active         :active, grace, after pub, 14d
    Monitor client sync     :monitor, after pub, 14d

    section Completion
    Remove old key          :remove, after grace, 1d
    Sign & publish final    :final, after remove, 1d
```

---

## 5. Failover Flow

Circuit breaker and mirror failover during a primary outage.

```mermaid
stateDiagram-v2
    [*] --> Closed: Normal operation

    state "Circuit Breaker" as CB {
        Closed --> Open: Failures > threshold
        Open --> HalfOpen: After timeout
        HalfOpen --> Closed: Success
        HalfOpen --> Open: Failure
    }

    state "Request Routing" as Routing {
        Primary: Primary Rekor
        Mirror: Mirror Rekor
    }

    Closed --> Primary: Route to primary
    Open --> Mirror: Failover to mirror
    HalfOpen --> Primary: Probe primary

    note right of Open
        Primary unavailable
        Use mirror if configured
        Cache tiles locally
    end note
```

### Failover Decision Tree

```mermaid
flowchart TD
    START([Request]) --> CB{Circuit<br/>Breaker<br/>State?}

    CB -->|Closed| PRIMARY[Try Primary]
    CB -->|Open| MIRROR_CHECK{Mirror<br/>Enabled?}
    CB -->|HalfOpen| PROBE[Probe Primary]

    PRIMARY -->|Success| SUCCESS([Return Result])
    PRIMARY -->|Failure| RECORD[Record Failure]
    RECORD --> THRESHOLD{Threshold<br/>Exceeded?}
    THRESHOLD -->|Yes| OPEN_CB[Open Circuit]
    THRESHOLD -->|No| FAIL([Return Error])

    OPEN_CB --> MIRROR_CHECK

    MIRROR_CHECK -->|Yes| MIRROR[Try Mirror]
    MIRROR_CHECK -->|No| CACHE{Cached<br/>Data?}

    MIRROR -->|Success| SUCCESS
    MIRROR -->|Failure| CACHE

    CACHE -->|Yes| CACHED([Return Cached])
    CACHE -->|No| FAIL

    PROBE -->|Success| CLOSE_CB[Close Circuit]
    PROBE -->|Failure| OPEN_CB

    CLOSE_CB --> SUCCESS
```
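The states in the decision tree map onto a small state machine. A minimal sketch (the threshold, reset timeout, and injectable `clock` are hypothetical knobs, not StellaOps configuration names):

```python
import time

class CircuitBreaker:
    """Closed -> Open on repeated failures; Open -> HalfOpen after a timeout;
    HalfOpen -> Closed on a successful probe, back to Open on failure."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.state = "closed"
        self.failures = 0
        self.opened_at = None

    def allow_primary(self) -> bool:
        """True when the primary may be tried; Open routes to the mirror."""
        if self.state == "open" and self.clock() - self.opened_at >= self.reset_timeout:
            self.state = "half_open"   # After timeout: probe the primary
        return self.state in ("closed", "half_open")

    def record_success(self) -> None:
        self.state = "closed"
        self.failures = 0

    def record_failure(self) -> None:
        self.failures += 1
        if self.state == "half_open" or self.failures >= self.failure_threshold:
            self.state = "open"        # Failover to mirror
            self.opened_at = self.clock()
            self.failures = 0
```

Injecting `clock` keeps the timeout transition testable without real waiting.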
---

## 6. Component Architecture

Full system component view.

```mermaid
graph TB
    subgraph "Client Layer"
        CLI[stella CLI]
        SDK[StellaOps SDK]
    end

    subgraph "Trust Layer"
        TUF_CLIENT[TUF Client]
        CACHE[(Local Cache)]
        CB[Circuit Breaker]
    end

    subgraph "Service Layer"
        TUF_SERVER[TUF Server]
        REKOR_PRIMARY[Rekor Primary]
        REKOR_MIRROR[Rekor Mirror / Tile Proxy]
    end

    subgraph "Storage Layer"
        TUF_STORE[(TUF Metadata)]
        LOG_STORE[(Transparency Log)]
        TILE_STORE[(Tile Storage)]
    end

    CLI --> TUF_CLIENT
    SDK --> TUF_CLIENT

    TUF_CLIENT --> CACHE
    TUF_CLIENT --> CB
    CB --> REKOR_PRIMARY
    CB --> REKOR_MIRROR

    TUF_CLIENT --> TUF_SERVER
    TUF_SERVER --> TUF_STORE

    REKOR_PRIMARY --> LOG_STORE
    REKOR_MIRROR --> TILE_STORE

    style CB fill:#ff9999
    style CACHE fill:#99ff99
```

---

## 7. Data Flow Summary

```mermaid
flowchart LR
    subgraph "Bootstrap"
        A[Initialize TUF] --> B[Fetch Root]
        B --> C[Fetch Metadata Chain]
        C --> D[Cache Targets]
    end

    subgraph "Attestation"
        E[Create Attestation] --> F[Sign DSSE]
        F --> G[Submit to Rekor]
        G --> H[Store Proof]
    end

    subgraph "Verification"
        I[Load Attestation] --> J[Check TUF Freshness]
        J --> K[Fetch Inclusion Proof]
        K --> L[Verify Merkle Path]
        L --> M[Check Checkpoint Sig]
        M --> N[Return Result]
    end

    D --> E
    H --> I
```

---

## Related Documentation

- [TUF Integration Guide](../tuf-integration.md)
- [Rekor Verification Design](../rekor-verification-design.md)
- [Bootstrap Guide](../../../operations/bootstrap-guide.md)
- [Key Rotation Runbook](../../../operations/key-rotation-runbook.md)
- [Disaster Recovery](../../../operations/disaster-recovery.md)

docs/modules/attestor/tile-proxy-design.md (new file)

# Tile-Proxy Service Design

## Overview

The Tile-Proxy service acts as an intermediary between StellaOps clients and upstream Rekor transparency log APIs. It provides centralized tile caching, request coalescing, and offline support for air-gapped environments.

## Architecture

```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  CI/CD Agents   │────►│   Tile Proxy    │────►│   Rekor API     │
│  (StellaOps)    │     │  (StellaOps)    │     │  (Upstream)     │
└─────────────────┘     └────────┬────────┘     └─────────────────┘
                                 │
         ┌───────────────────────┼───────────────────────┐
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Tile Cache    │     │  TUF Metadata   │     │   Checkpoint    │
│   (CAS Store)   │     │  (TrustRepo)    │     │      Cache      │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```

## Core Responsibilities

1. **Tile Proxying**: Forward tile requests to upstream Rekor, caching responses locally
2. **Content-Addressed Storage**: Store tiles by hash for deduplication and immutability
3. **TUF Integration**: Optionally validate metadata using TUF trust anchors
4. **Request Coalescing**: Deduplicate concurrent requests for the same tile
5. **Checkpoint Caching**: Cache and serve recent checkpoints
6. **Offline Mode**: Serve from cache when upstream is unavailable

## API Surface

### Proxy Endpoints (Passthrough)

| Endpoint | Description |
|----------|-------------|
| `GET /tile/{level}/{index}` | Proxy tile request (cache-through) |
| `GET /tile/{level}/{index}.p/{partialWidth}` | Proxy partial tile |
| `GET /checkpoint` | Proxy checkpoint request |
| `GET /api/v1/log/entries/{uuid}` | Proxy entry lookup |

### Admin Endpoints

| Endpoint | Description |
|----------|-------------|
| `GET /_admin/cache/stats` | Cache statistics (hits, misses, size) |
| `POST /_admin/cache/sync` | Trigger manual sync job |
| `DELETE /_admin/cache/prune` | Prune old tiles |
| `GET /_admin/health` | Health check |
| `GET /_admin/ready` | Readiness check |

## Caching Strategy

### Content-Addressed Tile Storage

Tiles are stored under content-addressed paths derived from a SHA-256 hash:

```
{cache_root}/
├── tiles/
│   ├── {origin_hash}/
│   │   ├── {level}/
│   │   │   ├── {index}.tile
│   │   │   └── {index}.meta.json
│   │   └── checkpoints/
│   │       └── {tree_size}.checkpoint
│   └── ...
└── metadata/
    └── cache_stats.json
```

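The path derivation can be sketched as follows (the 16-hex-character truncation of the origin hash is an assumption made for readability; the real service may use the full digest):

```python
import hashlib
from pathlib import Path

def tile_cache_path(cache_root: str, origin: str, level: int, index: int) -> Path:
    """Map (origin, level, index) onto the on-disk layout shown above.
    Hashing the origin URL keeps tiles from distinct logs from colliding."""
    origin_hash = hashlib.sha256(origin.encode()).hexdigest()[:16]
    return Path(cache_root) / "tiles" / origin_hash / str(level) / f"{index}.tile"
```

Because the path is a pure function of its inputs, concurrent writers of the same tile always land on the same file, which pairs naturally with the immutability guarantee below.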
### Tile Metadata

Each tile has associated metadata:

```json
{
  "cachedAt": "2026-01-25T10:00:00Z",
  "treeSize": 1050000,
  "isPartial": false,
  "contentHash": "sha256:abc123...",
  "upstreamUrl": "https://rekor.sigstore.dev"
}
```

### Eviction Policy

1. **LRU by Access Time**: Least recently accessed tiles are evicted first
2. **Max Size Limit**: Configurable maximum cache size
3. **TTL Override**: Force a re-fetch after a configurable interval (for checkpoints)
4. **Immutability Preservation**: Full tiles (width = 256) are never evicted unless explicitly pruned

## Request Coalescing

Concurrent requests for the same tile are coalesced:

```csharp
// Pseudo-code for request coalescing
var key = $"{origin}/{level}/{index}";
if (_inflightRequests.TryGetValue(key, out var existing))
{
    return await existing; // Wait for the in-flight request
}

var tcs = new TaskCompletionSource<byte[]>();
_inflightRequests[key] = tcs.Task;
try
{
    var tile = await FetchFromUpstream(origin, level, index);
    tcs.SetResult(tile); // Wake coalesced waiters
    return tile;
}
catch (Exception ex)
{
    tcs.SetException(ex); // Propagate failure so waiters don't hang forever
    throw;
}
finally
{
    _inflightRequests.Remove(key);
}
```

## TUF Integration Point

When `TufValidationEnabled` is true:

1. Load the service map from TUF to discover the Rekor URL
2. Validate the Rekor public key from TUF targets
3. Verify checkpoint signatures using TUF-loaded keys
4. Reject tiles if the checkpoint signature is invalid

## Upstream Failover

Multiple upstream sources are supported, with failover:

```yaml
tile_proxy:
  upstreams:
    - url: https://rekor.sigstore.dev
      priority: 1
      timeout: 30s
    - url: https://rekor-mirror.internal
      priority: 2
      timeout: 10s
```

Failover behavior:

1. Try the primary upstream first
2. On timeout/error, try the next upstream
3. Cache the successful source for subsequent requests
4. Reset failover state on explicit refresh

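The priority ordering above can be sketched as (`fetch` is an injected callable; all names here are illustrative, not the service's real API):

```python
def fetch_with_failover(upstreams: list[dict], fetch):
    """Try upstreams in ascending priority order and return the first
    successful response; re-raise the last error if every upstream fails."""
    last_err = None
    for upstream in sorted(upstreams, key=lambda u: u["priority"]):
        try:
            return fetch(upstream["url"])
        except Exception as err:
            last_err = err
    raise last_err
```

Caching the index of the last successful upstream (step 3 above) would skip known-bad entries on subsequent requests; that bookkeeping is omitted here for brevity.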
## Deployment Model

### Standalone Service

Run as a dedicated service with a persistent volume:

```yaml
services:
  tile-proxy:
    image: stellaops/tile-proxy:latest
    ports:
      - "8090:8080"
    volumes:
      - tile-cache:/var/cache/stellaops/tiles
      - tuf-cache:/var/cache/stellaops/tuf
    environment:
      - TILE_PROXY__UPSTREAM_URL=https://rekor.sigstore.dev
      - TILE_PROXY__TUF_URL=https://trust.stella-ops.org/tuf/
```

### Sidecar Mode

Run alongside the attestor service:

```yaml
services:
  attestor:
    image: stellaops/attestor:latest
    environment:
      - ATTESTOR__REKOR_URL=http://localhost:8090 # Use sidecar

  tile-proxy:
    image: stellaops/tile-proxy:latest
    network_mode: "service:attestor"
```

## Metrics

Prometheus metrics are exposed at `/_admin/metrics`:

| Metric | Type | Description |
|--------|------|-------------|
| `tile_proxy_cache_hits_total` | Counter | Total cache hits |
| `tile_proxy_cache_misses_total` | Counter | Total cache misses |
| `tile_proxy_cache_size_bytes` | Gauge | Current cache size |
| `tile_proxy_upstream_requests_total` | Counter | Upstream requests by status |
| `tile_proxy_request_duration_seconds` | Histogram | Request latency |
| `tile_proxy_sync_last_success_timestamp` | Gauge | Last successful sync time |

## Configuration

```yaml
tile_proxy:
  # Upstream Rekor configuration
  upstream_url: https://rekor.sigstore.dev
  tile_base_url: https://rekor.sigstore.dev/tile/

  # TUF integration (optional)
  tuf:
    enabled: true
    url: https://trust.stella-ops.org/tuf/
    validate_checkpoint_signature: true

  # Cache configuration
  cache:
    base_path: /var/cache/stellaops/tiles
    max_size_gb: 10
    eviction_policy: lru
    checkpoint_ttl_minutes: 5

  # Sync job configuration
  sync:
    enabled: true
    schedule: "0 */6 * * *"
    depth: 10000

  # Request handling
  coalescing:
    enabled: true
    max_wait_ms: 5000

  # Failover
  failover:
    enabled: true
    retry_count: 2
    retry_delay_ms: 1000
```

## Security Considerations

1. **No Authentication by Default**: Designed for internal network use
2. **Optional mTLS**: Client certificate validation can be enabled
3. **Rate Limiting**: Optional rate limiting per client IP
4. **Audit Logging**: All cache operations are logged for compliance
5. **Immutable Tiles**: Full tiles are never modified after caching

## Error Handling

| Scenario | Behavior |
|----------|----------|
| Upstream unavailable | Serve from cache if available; 503 otherwise |
| Invalid tile data | Reject, don't cache, log error |
| Cache full | Evict LRU tiles, continue serving |
| TUF validation fails | Reject request, return 502 |
| Checkpoint stale | Refresh from upstream, warn in logs |

## Future Enhancements

1. **Tile Prefetching**: Prefetch tiles for known verification patterns
2. **Multi-Log Support**: Support multiple transparency logs
3. **Replication**: Sync the cache between proxy instances
4. **Compression**: Optional tile compression for storage

docs/modules/attestor/tuf-integration.md (new file)

# TUF Integration Guide

This guide explains how StellaOps uses The Update Framework (TUF) for secure trust distribution and how to configure TUF-based trust management.

## Overview

TUF provides a secure method for distributing and updating trust anchors (public keys, service endpoints) without requiring client reconfiguration. StellaOps uses TUF to:

- Distribute Rekor public keys for checkpoint verification
- Distribute Fulcio certificate chains for keyless signing
- Provide service endpoint discovery (Rekor, Fulcio URLs)
- Enable secure key rotation with grace periods
- Support offline verification with bundled trust state

## Architecture

```
┌───────────────────────────────────────────────────────────────────────────┐
│                            TUF Trust Hierarchy                            │
├───────────────────────────────────────────────────────────────────────────┤
│                                                                           │
│                       ┌─────────┐                                         │
│                       │  Root   │ ← Offline, rotates rarely (yearly)      │
│                       │   Key   │                                         │
│                       └────┬────┘                                         │
│                            │                                              │
│            ┌───────────────┼───────────────┐                              │
│            │               │               │                              │
│            ▼               ▼               ▼                              │
│       ┌──────────┐    ┌──────────┐    ┌──────────┐                        │
│       │ Snapshot │    │Timestamp │    │ Targets  │                        │
│       │   Key    │    │   Key    │    │   Key    │                        │
│       └────┬─────┘    └────┬─────┘    └────┬─────┘                        │
│            │               │               │                              │
│            ▼               ▼               ▼                              │
│      snapshot.json  timestamp.json   targets.json                         │
│                            │               │                              │
│                            │               ├── rekor-key-v1.pub           │
│                            │               ├── rekor-key-v2.pub           │
│                            │               ├── fulcio-chain.pem           │
│                            │               └── sigstore-services-v1.json  │
│                            │                                              │
│                            └── Refreshed frequently (daily)               │
│                                                                           │
└───────────────────────────────────────────────────────────────────────────┘
```

## TUF Roles

### Root

- Signs the root metadata containing all role keys
- Highest trust level; rotates rarely
- Should be kept offline in secure storage (HSM, air-gapped system)
- Used only for initial setup and key rotation ceremonies

### Timestamp

- Signs timestamp metadata indicating freshness
- Must be refreshed frequently (default: daily)
- Clients reject metadata older than its expiration
- Can be automated with short-lived credentials

### Snapshot

- Signs snapshot metadata listing current target versions
- Updated when targets change
- Prevents rollback attacks

### Targets

- Signs metadata for the actual target files
- Lists hashes and sizes for verification
- Supports delegations for large repositories

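The Targets role's hash-and-size pinning is what clients enforce on download. A minimal check, per the TUF spec's targets metadata (function name illustrative):

```python
import hashlib

def verify_target(data: bytes, expected_sha256: str, expected_length: int) -> bool:
    """Accept a downloaded target only if both its length and its sha256
    digest match the values pinned in targets.json."""
    return (len(data) == expected_length
            and hashlib.sha256(data).hexdigest() == expected_sha256)
```

Checking the length first is cheap and rejects truncated or padded downloads before hashing.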
## Configuration

### Attestor Configuration

```yaml
attestor:
  trust_repo:
    enabled: true
    tuf_url: https://trust.yourcompany.com/tuf/
    refresh_interval_minutes: 60
    freshness_threshold_days: 7
    offline_mode: false
    local_cache_path: /var/lib/stellaops/tuf-cache
    service_map_target: sigstore-services-v1
    rekor_key_targets:
      - rekor-key-v1
      - rekor-key-v2
```

### Configuration Options

| Option | Description | Default |
|--------|-------------|---------|
| `enabled` | Enable TUF-based trust distribution | `false` |
| `tuf_url` | URL of the TUF repository root | Required |
| `refresh_interval_minutes` | How often to check for updates | `60` |
| `freshness_threshold_days` | Max age before rejecting metadata | `7` |
| `offline_mode` | Use bundled metadata only | `false` |
| `local_cache_path` | Local metadata cache directory | OS-specific |
| `service_map_target` | TUF target name for the service map | `sigstore-services-v1` |
| `rekor_key_targets` | TUF target names for Rekor keys | `["rekor-key-v1"]` |

### Environment Variables

| Variable | Description |
|----------|-------------|
| `STELLA_TUF_ROOT_URL` | Override the TUF repository URL |
| `STELLA_SIGSTORE_SERVICE_MAP` | Path to a local service map override |
| `STELLA_TUF_OFFLINE_MODE` | Force offline mode (`true`/`false`) |

## CLI Usage

### Initialize Trust

```bash
# Initialize with a TUF repository
stella trust init \
  --tuf-url https://trust.yourcompany.com/tuf/ \
  --service-map sigstore-services-v1 \
  --pin rekor-key-v1 rekor-key-v2

# Initialize in offline mode with bundled metadata
stella trust init \
  --tuf-url file:///path/to/bundled-trust/ \
  --offline
```

### Sync Metadata

```bash
# Refresh TUF metadata
stella trust sync

# Force a refresh even if the metadata is fresh
stella trust sync --force
```

### Check Status

```bash
# Show the current trust state
stella trust status

# Show key and endpoint details
stella trust status --show-keys --show-endpoints
```

### Export for Offline Use

```bash
# Export the trust state
stella trust export --out ./trust-bundle/

# Create a sealed snapshot with tiles
stella trust snapshot export \
  --out ./snapshots/2026-01-25.tar.zst \
  --depth 10000
```

### Import Offline Bundle

```bash
# Import a trust bundle
stella trust import ./snapshots/2026-01-25.tar.zst \
  --verify-manifest \
  --reject-if-stale 7d
```

## Service Map

The service map (`sigstore-services-v1.json`) contains endpoint URLs for Sigstore services, enabling endpoint changes without client reconfiguration.

### Schema

```json
{
  "version": 1,
  "rekor": {
    "url": "https://rekor.sigstore.dev",
    "log_id": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d",
    "public_key_target": "rekor-key-v1"
  },
  "fulcio": {
    "url": "https://fulcio.sigstore.dev",
    "root_cert_target": "fulcio-chain.pem"
  },
  "overrides": {
    "staging": {
      "rekor_url": "https://rekor.sigstage.dev"
    }
  }
}
```

### Site-Local Overrides

Organizations can define environment-specific overrides:

```yaml
attestor:
  trust_repo:
    environment: staging  # Use staging overrides from the service map
```
## Key Rotation

TUF supports secure key rotation with grace periods:

1. **Add new key**: Publish the new key while keeping the old key active
2. **Grace period**: Clients sync and receive both keys
3. **Verify**: Ensure all clients have the new key
4. **Revoke old key**: Remove the old key from the active set

See the [Key Rotation Runbook](../../operations/key-rotation-runbook.md) for detailed procedures.

## Offline Mode

For air-gapped environments, StellaOps can operate with bundled TUF metadata:

1. Export the trust state on a connected system:
   ```bash
   stella trust snapshot export --out ./bundle.tar.zst
   ```

2. Transfer the bundle to the air-gapped system

3. Import on the air-gapped system:
   ```bash
   stella trust import ./bundle.tar.zst --offline
   ```

4. Verify attestations using the bundled trust:
   ```bash
   stella attest verify ./attestation.json --offline
   ```

## Troubleshooting

### "TUF metadata expired"

The timestamp hasn't been refreshed. On the TUF repository:

```bash
./scripts/update-timestamp.sh
```

### "Unknown target"

The requested target doesn't exist in the repository:

```bash
./scripts/add-target.sh /path/to/target target-name
```

### "Signature verification failed"

Keys may have rotated. Force a sync:

```bash
stella trust sync --force
```

### "Service map not found"

Ensure the service map target name matches the configuration:

```bash
stella trust status  # Check the service_map_target value
```

## Security Considerations

1. **Root Key Security**: Keep the root key offline; use it only for initial setup and rotations.
2. **Timestamp Automation**: Automate timestamp updates, but use short-lived credentials.
3. **Monitoring**: Monitor for failed TUF fetches; they may indicate MITM attacks or repository issues.
4. **Rollback Protection**: TUF prevents rollback attacks through version tracking.
5. **Freshness**: Configure freshness thresholds appropriate to your security requirements.

## References

- [TUF Specification](https://theupdateframework.github.io/specification/latest/)
- [Sigstore Trust Root](https://github.com/sigstore/root-signing)
- [StellaOps Trust Repository Template](../../../devops/trust-repo-template/)

@@ -0,0 +1,23 @@
{
  "eventId": "d4e5f6a7-89ab-cdef-0123-456789abcdef",
  "kind": "attestor.logged",
  "version": "1",
  "tenant": "tenant-01",
  "ts": "2025-12-24T13:00:00+00:00",
  "actor": "attestor-service",
  "payload": {
    "attestationId": "attest-001-20251224",
    "imageDigest": "sha256:abc123def456789012345678901234567890123456789012345678901234abcd",
    "imageName": "registry.example.com/app:v1.0.0",
    "predicateType": "https://slsa.dev/provenance/v1",
    "logIndex": 12345,
    "links": {
      "attestation": "https://stellaops.example.com/attestations/attest-001-20251224",
      "rekor": "https://rekor.sigstore.dev/api/v1/log/entries?logIndex=12345"
    }
  },
  "attributes": {
    "category": "attestor",
    "logProvider": "rekor"
  }
}

@@ -0,0 +1,24 @@
{
  "eventId": "b2c3d4e5-6789-abcd-ef01-23456789abcd",
  "kind": "scanner.report.ready",
  "version": "1",
  "tenant": "tenant-01",
  "ts": "2025-12-24T11:00:00+00:00",
  "actor": "scanner-worker",
  "payload": {
    "reportId": "report-001-20251224",
    "scanId": "scan-001-20251224",
    "imageDigest": "sha256:abc123def456789012345678901234567890123456789012345678901234abcd",
    "imageName": "registry.example.com/app:v1.0.0",
    "format": "cyclonedx",
    "size": 524288,
    "links": {
      "report": "https://stellaops.example.com/reports/report-001-20251224",
      "download": "https://stellaops.example.com/reports/report-001-20251224/download"
    }
  },
  "attributes": {
    "category": "scanner",
    "reportFormat": "cyclonedx-1.5"
  }
}

@@ -0,0 +1,30 @@
{
  "eventId": "a1b2c3d4-5678-9abc-def0-123456789abc",
  "kind": "scanner.scan.completed",
  "version": "1",
  "tenant": "tenant-01",
  "ts": "2025-12-24T10:30:00+00:00",
  "actor": "scanner-worker",
  "payload": {
    "scanId": "scan-001-20251224",
    "imageDigest": "sha256:abc123def456789012345678901234567890123456789012345678901234abcd",
    "imageName": "registry.example.com/app:v1.0.0",
    "verdict": "pass",
    "findingsCount": 7,
    "vulnerabilities": {
      "critical": 0,
      "high": 0,
      "medium": 2,
      "low": 5
    },
    "scanDurationMs": 15230,
    "links": {
      "findings": "https://stellaops.example.com/scans/scan-001-20251224/findings",
      "sbom": "https://stellaops.example.com/scans/scan-001-20251224/sbom"
    }
  },
  "attributes": {
    "category": "scanner",
    "environment": "production"
  }
}

@@ -0,0 +1,23 @@
{
  "eventId": "c3d4e5f6-789a-bcde-f012-3456789abcde",
  "kind": "scheduler.rescan.delta",
  "version": "1",
  "tenant": "tenant-01",
  "ts": "2025-12-24T12:00:00+00:00",
  "actor": "scheduler-service",
  "payload": {
    "scheduleId": "schedule-daily-rescan",
    "deltaId": "delta-20251224-1200",
    "imagesAffected": 15,
    "newVulnerabilities": 3,
    "resolvedVulnerabilities": 2,
    "links": {
      "schedule": "https://stellaops.example.com/schedules/schedule-daily-rescan",
      "delta": "https://stellaops.example.com/deltas/delta-20251224-1200"
    }
  },
  "attributes": {
    "category": "scheduler",
    "scheduleType": "daily"
  }
}