Some checks failed
LNM Migration CI / build-runner (push) Has been cancelled
Ledger OpenAPI CI / deprecation-check (push) Has been cancelled
Docs CI / lint-and-preview (push) Has been cancelled
Airgap Sealed CI Smoke / sealed-smoke (push) Has been cancelled
Ledger Packs CI / build-pack (push) Has been cancelled
Export Center CI / export-ci (push) Has been cancelled
Ledger OpenAPI CI / validate-oas (push) Has been cancelled
Ledger OpenAPI CI / check-wellknown (push) Has been cancelled
Ledger Packs CI / verify-pack (push) Has been cancelled
LNM Migration CI / validate-metrics (push) Has been cancelled
AOC Guard CI / aoc-guard (push) Has been cancelled
AOC Guard CI / aoc-verify (push) Has been cancelled
@@ -63,7 +63,29 @@ docker compose --env-file prod.env \
- Check queue directories under `advisory-ai-*` volumes remain writable
- Confirm the inference path logs when a GPU is detected (log key `advisory.ai.inference.gpu=true`).

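The writability check above can be scripted; a minimal sketch using a throwaway probe file (the queue mount path is deployment-specific, so a temp dir stands in here):

```shell
# Probe a directory for writability the way an ops check would, without
# leaving anything behind. Point it at the advisory-ai-* queue mount in a
# real deployment; mktemp stands in for it here.
check_writable() {
  local dir="$1"
  if touch "$dir/.write-probe" 2>/dev/null; then
    rm -f "$dir/.write-probe"
    echo "writable: $dir"
  else
    echo "NOT writable: $dir" >&2
    return 1
  fi
}

check_writable "$(mktemp -d)"
```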
## Advisory Feed Packaging (DEVOPS-AIAI-31-002)

Package advisory feeds (SBOM pointers + provenance) for the release/offline kit:

```bash
# Production (CI with the COSIGN_PRIVATE_KEY_B64 secret)
./ops/deployment/advisory-ai/package-advisory-feeds.sh

# Development (uses tools/cosign/cosign.dev.key)
COSIGN_ALLOW_DEV_KEY=1 COSIGN_PASSWORD=stellaops-dev \
  ./ops/deployment/advisory-ai/package-advisory-feeds.sh
```

Outputs:

- `out/advisory-ai/feeds/advisory-feeds.tar.gz` - Feed bundle
- `out/advisory-ai/feeds/advisory-feeds.manifest.json` - Manifest with SBOM pointers
- `out/advisory-ai/feeds/advisory-feeds.manifest.dsse.json` - DSSE-signed manifest
- `out/advisory-ai/feeds/provenance.json` - Build provenance

CI workflow: `.gitea/workflows/advisory-ai-release.yml`

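A consumer can cross-check the bundle against the manifest's `sha256` pointer without any signing tooling; a sketch that fabricates a tiny bundle to demonstrate the check (in practice the paths are the outputs listed above):

```shell
set -euo pipefail
work=$(mktemp -d)

# Stand-in bundle + manifest with the same shape as the packaging output.
echo "feed-data" > "$work/feed.json"
tar -czf "$work/advisory-feeds.tar.gz" -C "$work" feed.json
hash=$(sha256sum "$work/advisory-feeds.tar.gz" | awk '{print $1}')
printf '{"bundle":{"path":"advisory-feeds.tar.gz","sha256":"%s"}}\n' "$hash" \
  > "$work/advisory-feeds.manifest.json"

# The check: recompute the digest and compare against the manifest pointer.
expected=$(sed -n 's/.*"sha256":"\([0-9a-f]\{64\}\)".*/\1/p' "$work/advisory-feeds.manifest.json")
actual=$(sha256sum "$work/advisory-feeds.tar.gz" | awk '{print $1}')
[[ "$expected" == "$actual" ]] && echo "bundle hash OK"
```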
## Evidence to attach (sprint)

- Helm release output (rendered templates for advisory AI)
- `docker compose config` with/without the GPU overlay
- Offline kit metadata listing advisory AI images + SBOMs
- Advisory feed package manifest with SBOM pointers

165
ops/deployment/advisory-ai/package-advisory-feeds.sh
Normal file
@@ -0,0 +1,165 @@
#!/usr/bin/env bash
# Package advisory feeds (SBOM pointers + provenance) for release/offline kit
# Usage: ./package-advisory-feeds.sh
# Dev mode: COSIGN_ALLOW_DEV_KEY=1 COSIGN_PASSWORD=stellaops-dev ./package-advisory-feeds.sh

set -euo pipefail

ROOT=$(cd "$(dirname "$0")/../../.." && pwd)
OUT_DIR="${OUT_DIR:-$ROOT/out/advisory-ai/feeds}"
CREATED="${CREATED:-$(date -u +%Y-%m-%dT%H:%M:%SZ)}"

mkdir -p "$OUT_DIR"

# Key resolution (same pattern as tools/cosign/sign-signals.sh)
resolve_key() {
  if [[ -n "${COSIGN_KEY_FILE:-}" && -f "$COSIGN_KEY_FILE" ]]; then
    echo "$COSIGN_KEY_FILE"
  elif [[ -n "${COSIGN_PRIVATE_KEY_B64:-}" ]]; then
    local tmp_key="$OUT_DIR/.cosign.key"
    echo "$COSIGN_PRIVATE_KEY_B64" | base64 -d > "$tmp_key"
    chmod 600 "$tmp_key"
    echo "$tmp_key"
  elif [[ -f "$ROOT/tools/cosign/cosign.key" ]]; then
    echo "$ROOT/tools/cosign/cosign.key"
  elif [[ "${COSIGN_ALLOW_DEV_KEY:-0}" == "1" && -f "$ROOT/tools/cosign/cosign.dev.key" ]]; then
    echo "[info] Using development key (non-production)" >&2
    echo "$ROOT/tools/cosign/cosign.dev.key"
  else
    echo "[error] No signing key available. Set COSIGN_PRIVATE_KEY_B64 or COSIGN_ALLOW_DEV_KEY=1" >&2
    return 1
  fi
}

KEY_FILE=$(resolve_key)

# Collect advisory feed sources
FEED_SOURCES=(
  "$ROOT/docs/samples/advisory-feeds"
  "$ROOT/src/AdvisoryAI/feeds"
  "$ROOT/out/feeds"
)

echo "==> Collecting advisory feeds..."
STAGE_DIR="$OUT_DIR/stage"
mkdir -p "$STAGE_DIR"

for src in "${FEED_SOURCES[@]}"; do
  if [[ -d "$src" ]]; then
    echo "  Adding feeds from $src"
    cp -r "$src"/* "$STAGE_DIR/" 2>/dev/null || true
  fi
done

# Create placeholder if no feeds found (dev mode)
if [[ -z "$(ls -A "$STAGE_DIR" 2>/dev/null)" ]]; then
  echo "[info] No feed sources found; creating placeholder for dev mode"
  cat > "$STAGE_DIR/placeholder.json" <<EOF
{
  "type": "advisory-feed-placeholder",
  "created": "$CREATED",
  "note": "Placeholder for development; replace with real feeds in production"
}
EOF
fi

# Create feed bundle
echo "==> Creating feed bundle..."
BUNDLE_TAR="$OUT_DIR/advisory-feeds.tar.gz"
tar -czf "$BUNDLE_TAR" -C "$STAGE_DIR" .

# Compute hashes
sha256() {
  sha256sum "$1" | awk '{print $1}'
}

BUNDLE_HASH=$(sha256 "$BUNDLE_TAR")

# Generate manifest with SBOM pointers
echo "==> Generating manifest..."
MANIFEST="$OUT_DIR/advisory-feeds.manifest.json"
cat > "$MANIFEST" <<EOF
{
  "schemaVersion": "1.0.0",
  "created": "$CREATED",
  "bundle": {
    "path": "advisory-feeds.tar.gz",
    "sha256": "$BUNDLE_HASH",
    "size": $(stat -c%s "$BUNDLE_TAR" 2>/dev/null || stat -f%z "$BUNDLE_TAR")
  },
  "sbom": {
    "format": "spdx-json",
    "path": "advisory-feeds.sbom.json",
    "note": "SBOM generated during CI; pointer only in manifest"
  },
  "provenance": {
    "path": "provenance.json",
    "builder": "stellaops-advisory-ai-release"
  }
}
EOF

# Sign manifest with DSSE
echo "==> Signing manifest..."
DSSE_OUT="$OUT_DIR/advisory-feeds.manifest.dsse.json"

# Check for cosign: prefer the PATH binary, fall back to the vendored one
COSIGN="${COSIGN:-$ROOT/tools/cosign/cosign}"
if ! command -v cosign &>/dev/null && [[ ! -x "$COSIGN" ]]; then
  echo "[warn] cosign not found; skipping DSSE signing" >&2
else
  COSIGN_CMD="$COSIGN"
  if command -v cosign &>/dev/null; then
    COSIGN_CMD="cosign"
  fi

  COSIGN_PASSWORD="${COSIGN_PASSWORD:-}" "$COSIGN_CMD" sign-blob \
    --key "$KEY_FILE" \
    --bundle "$DSSE_OUT" \
    --tlog-upload=false \
    --yes \
    "$MANIFEST" 2>/dev/null || echo "[warn] DSSE signing skipped (cosign error)"
fi

# Generate provenance
echo "==> Generating provenance..."
PROVENANCE="$OUT_DIR/provenance.json"
cat > "$PROVENANCE" <<EOF
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "advisory-feeds.tar.gz",
      "digest": {"sha256": "$BUNDLE_HASH"}
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "buildType": "https://stella-ops.org/advisory-ai-release/v1",
      "externalParameters": {},
      "internalParameters": {
        "created": "$CREATED"
      }
    },
    "runDetails": {
      "builder": {
        "id": "https://stella-ops.org/advisory-ai-release"
      },
      "metadata": {
        "invocationId": "$(uuidgen 2>/dev/null || echo "dev-$(date +%s)")",
        "startedOn": "$CREATED"
      }
    }
  }
}
EOF

# Cleanup temp key
rm -f "$OUT_DIR/.cosign.key"

echo "==> Advisory feed packaging complete"
echo "   Bundle: $BUNDLE_TAR"
echo "   Manifest: $MANIFEST"
echo "   DSSE: $DSSE_OUT"
echo "   Provenance: $PROVENANCE"
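The `resolve_key` precedence (explicit `COSIGN_KEY_FILE`, then `COSIGN_PRIVATE_KEY_B64`, then the repo key, then the dev key) can be exercised on its own; a trimmed sketch covering two of the branches, with temp files standing in for real keys:

```shell
work=$(mktemp -d)
printf 'env-key'  > "$work/env.key"
printf 'repo-key' > "$work/cosign.key"

# Same priority order as resolve_key, reduced to two branches for the demo.
resolve_key_demo() {
  if [[ -n "${COSIGN_KEY_FILE:-}" && -f "$COSIGN_KEY_FILE" ]]; then
    echo "$COSIGN_KEY_FILE"      # explicit key file wins
  elif [[ -f "$work/cosign.key" ]]; then
    echo "$work/cosign.key"      # fallback: repo production key
  else
    echo "[error] no key" >&2
    return 1
  fi
}

COSIGN_KEY_FILE="$work/env.key" resolve_key_demo
resolve_key_demo
```

The first call resolves to the explicit key, the second falls through to the repo key, mirroring how CI (with the secret) and a local checkout pick different keys from the same function.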
130
ops/devops/airgap/import-bundle.sh
Normal file
@@ -0,0 +1,130 @@
#!/usr/bin/env bash
# Import air-gap bundle into isolated environment
# Usage: ./import-bundle.sh <bundle-dir> [registry]
# Example: ./import-bundle.sh /media/usb/stellaops-bundle localhost:5000

set -euo pipefail

BUNDLE_DIR="${1:?Bundle directory required}"
REGISTRY="${2:-localhost:5000}"

echo "==> Importing air-gap bundle from ${BUNDLE_DIR}"

# Verify bundle structure
if [[ ! -f "${BUNDLE_DIR}/manifest.json" ]]; then
  echo "ERROR: manifest.json not found in bundle" >&2
  exit 1
fi

# Verify checksums first
echo "==> Verifying checksums..."
cd "${BUNDLE_DIR}"
for sha_file in *.sha256; do
  if [[ -f "${sha_file}" ]]; then
    echo "  Checking ${sha_file}..."
    sha256sum -c "${sha_file}" || { echo "CHECKSUM FAILED: ${sha_file}" >&2; exit 1; }
  fi
done

# Load container images
# (unmatched globs stay literal and are skipped by the -f guard)
echo "==> Loading container images..."
for tarball in images/*.tar images/*.tar.gz; do
  if [[ -f "${tarball}" ]]; then
    echo "  Loading ${tarball}..."
    docker load -i "${tarball}"
  fi
done

# Re-tag and push to local registry
echo "==> Pushing images to ${REGISTRY}..."
IMAGES=$(jq -r '.images[]?.name // empty' manifest.json 2>/dev/null || true)
for IMAGE in ${IMAGES}; do
  LOCAL_TAG="${REGISTRY}/${IMAGE##*/}"
  echo "  ${IMAGE} -> ${LOCAL_TAG}"
  docker tag "${IMAGE}" "${LOCAL_TAG}" 2>/dev/null || true
  docker push "${LOCAL_TAG}" 2>/dev/null || echo "  (push skipped - registry may be unavailable)"
done

# Import Helm charts
echo "==> Importing Helm charts..."
if [[ -d "${BUNDLE_DIR}/charts" ]]; then
  for chart in "${BUNDLE_DIR}"/charts/*.tgz; do
    if [[ -f "${chart}" ]]; then
      echo "  Pushing ${chart}..."
      helm push "${chart}" "oci://${REGISTRY}/charts" 2>/dev/null || \
        echo "  (OCI push skipped - registry may be unavailable)"
    fi
  done
fi

# Import NuGet packages
echo "==> Importing NuGet packages..."
if [[ -d "${BUNDLE_DIR}/nugets" ]]; then
  NUGET_CACHE="${HOME}/.nuget/packages"
  mkdir -p "${NUGET_CACHE}"
  for nupkg in "${BUNDLE_DIR}"/nugets/*.nupkg; do
    if [[ -f "${nupkg}" ]]; then
      PKG_NAME=$(basename "${nupkg}" .nupkg)
      echo "  Caching ${PKG_NAME}..."
      # Extract to NuGet cache structure
      unzip -q -o "${nupkg}" -d "${NUGET_CACHE}/${PKG_NAME,,}" 2>/dev/null || true
    fi
  done
fi

# Import npm packages
echo "==> Importing npm packages..."
if [[ -d "${BUNDLE_DIR}/npm" ]]; then
  NPM_CACHE="${HOME}/.npm/_cacache"
  mkdir -p "${NPM_CACHE}"
  if [[ -f "${BUNDLE_DIR}/npm/cache.tar.gz" ]]; then
    tar -xzf "${BUNDLE_DIR}/npm/cache.tar.gz" -C "${HOME}/.npm" 2>/dev/null || true
  fi
fi

# Import advisory feeds
echo "==> Importing advisory feeds..."
if [[ -d "${BUNDLE_DIR}/feeds" ]]; then
  FEEDS_DIR="/var/lib/stellaops/feeds"
  sudo mkdir -p "${FEEDS_DIR}" 2>/dev/null || mkdir -p "${FEEDS_DIR}"
  for feed in "${BUNDLE_DIR}"/feeds/*.ndjson.gz; do
    if [[ -f "${feed}" ]]; then
      FEED_NAME=$(basename "${feed}")
      echo "  Installing ${FEED_NAME}..."
      cp "${feed}" "${FEEDS_DIR}/" 2>/dev/null || sudo cp "${feed}" "${FEEDS_DIR}/"
    fi
  done
fi

# Import symbol bundles
echo "==> Importing symbol bundles..."
if [[ -d "${BUNDLE_DIR}/symbols" ]]; then
  SYMBOLS_DIR="/var/lib/stellaops/symbols"
  sudo mkdir -p "${SYMBOLS_DIR}" 2>/dev/null || mkdir -p "${SYMBOLS_DIR}"
  for bundle in "${BUNDLE_DIR}"/symbols/*.zip; do
    if [[ -f "${bundle}" ]]; then
      echo "  Extracting ${bundle}..."
      unzip -q -o "${bundle}" -d "${SYMBOLS_DIR}" 2>/dev/null || true
    fi
  done
fi

# Generate import report
echo "==> Generating import report..."
cat > "${BUNDLE_DIR}/import-report.json" <<EOF
{
  "importedAt": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "registry": "${REGISTRY}",
  "bundleDir": "${BUNDLE_DIR}",
  "status": "success"
}
EOF

echo "==> Import complete"
echo "  Registry: ${REGISTRY}"
echo "  Report: ${BUNDLE_DIR}/import-report.json"
echo ""
echo "Next steps:"
echo "  1. Update Helm values with registry: ${REGISTRY}"
echo "  2. Deploy: helm install stellaops deploy/helm/stellaops -f values-airgap.yaml"
echo "  3. Verify: kubectl get pods -n stellaops"
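The checksum gate at the top of the import can be exercised in isolation; a sketch that fabricates an artifact plus its `.sha256` sidecar in a temp dir:

```shell
set -euo pipefail
work=$(mktemp -d)
cd "$work"

# Fabricate an artifact and its checksum file, then verify it the same
# way import-bundle.sh does before loading anything.
echo "bundle-payload" > images.tar
sha256sum images.tar > images.tar.sha256
sha256sum -c images.tar.sha256   # prints "images.tar: OK"
```

A tampered artifact makes `sha256sum -c` exit non-zero, which is what aborts the import before any image is loaded.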
73
ops/devops/aoc/backfill-release-plan.md
Normal file
@@ -0,0 +1,73 @@
# AOC Backfill Release Plan (DEVOPS-STORE-AOC-19-005-REL)

Scope: Release/offline-kit packaging for Concelier AOC backfill operations.

## Prerequisites
- Dataset hash from dev rehearsal (AOC-19-005 dev outputs)
- AOC guard tests passing (DEVOPS-AOC-19-001/002/003 - DONE)
- Supersedes rollout plan reviewed (ops/devops/aoc/supersedes-rollout.md)

## Artefacts
- Backfill runner bundle:
  - `aoc-backfill-runner.tar.gz` - CLI tool + scripts
  - `aoc-backfill-runner.sbom.json` - SPDX SBOM
  - `aoc-backfill-runner.dsse.json` - Cosign attestation
- Dataset bundle:
  - `aoc-dataset-{hash}.tar.gz` - Seeded dataset
  - `aoc-dataset-{hash}.manifest.json` - Manifest with checksums
  - `aoc-dataset-{hash}.provenance.json` - SLSA provenance
- Offline kit slice:
  - All of the above + SHA256SUMS + verification scripts

## Packaging Script

```bash
# Production (CI with secrets)
./ops/devops/aoc/package-backfill-release.sh

# Development (dev key)
COSIGN_ALLOW_DEV_KEY=1 COSIGN_PASSWORD=stellaops-dev \
  DATASET_HASH=dev-rehearsal-placeholder \
  ./ops/devops/aoc/package-backfill-release.sh
```

## Pipeline Outline
1) Build backfill runner from `src/Aoc/StellaOps.Aoc.Cli/`
2) Generate SBOM with syft
3) Sign with cosign (dev key fallback)
4) Package dataset (when hash available)
5) Create offline bundle with checksums
6) Verification:
   - `stella aoc verify --dry-run`
   - `cosign verify-blob` for all bundles
   - `sha256sum --check`
7) Publish to release bucket + offline kit

## Runbook
1) Validate AOC guard tests pass in CI
2) Run dev rehearsal with test dataset
3) Capture dataset hash from rehearsal
4) Execute packaging script with production key
5) Verify all signatures and checksums
6) Upload to release bucket
7) Include in offline kit manifest

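Runbook step 3 reduces to hashing the rehearsal output and exporting it for the packaging script; a sketch with a stand-in dataset file (the real artifact path comes out of the AOC-19-005 rehearsal):

```shell
# Stand-in for the rehearsal dataset; substitute the real artifact path.
dataset=$(mktemp)
echo "rehearsal-output" > "$dataset"

DATASET_HASH=$(sha256sum "$dataset" | awk '{print $1}')
echo "DATASET_HASH=$DATASET_HASH"
# Then feed it to the packager:
#   DATASET_HASH="$DATASET_HASH" ./ops/devops/aoc/package-backfill-release.sh
```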
## CI Workflow
`.gitea/workflows/aoc-backfill-release.yml`

## Verification
```bash
# Verify bundle signatures
cosign verify-blob \
  --key tools/cosign/cosign.dev.pub \
  --bundle out/aoc/aoc-backfill-runner.dsse.json \
  out/aoc/aoc-backfill-runner.tar.gz

# Verify checksums
cd out/aoc && sha256sum -c SHA256SUMS
```

## Owners
- DevOps Guild (pipeline + packaging)
- Concelier Storage Guild (dataset + backfill logic)
- Platform Security (signing policy)
175
ops/devops/aoc/package-backfill-release.sh
Normal file
@@ -0,0 +1,175 @@
#!/usr/bin/env bash
# Package AOC backfill release for offline kit
# Usage: ./package-backfill-release.sh
# Dev mode: COSIGN_ALLOW_DEV_KEY=1 COSIGN_PASSWORD=stellaops-dev DATASET_HASH=dev ./package-backfill-release.sh

set -euo pipefail

ROOT=$(cd "$(dirname "$0")/../../.." && pwd)
OUT_DIR="${OUT_DIR:-$ROOT/out/aoc}"
CREATED="${CREATED:-$(date -u +%Y-%m-%dT%H:%M:%SZ)}"
DATASET_HASH="${DATASET_HASH:-}"

mkdir -p "$OUT_DIR"

echo "==> AOC Backfill Release Packaging"
echo "   Output: $OUT_DIR"
echo "   Dataset hash: ${DATASET_HASH:-<pending>}"

# Key resolution (same pattern as advisory-ai packaging)
resolve_key() {
  if [[ -n "${COSIGN_KEY_FILE:-}" && -f "$COSIGN_KEY_FILE" ]]; then
    echo "$COSIGN_KEY_FILE"
  elif [[ -n "${COSIGN_PRIVATE_KEY_B64:-}" ]]; then
    local tmp_key="$OUT_DIR/.cosign.key"
    echo "$COSIGN_PRIVATE_KEY_B64" | base64 -d > "$tmp_key"
    chmod 600 "$tmp_key"
    echo "$tmp_key"
  elif [[ -f "$ROOT/tools/cosign/cosign.key" ]]; then
    echo "$ROOT/tools/cosign/cosign.key"
  elif [[ "${COSIGN_ALLOW_DEV_KEY:-0}" == "1" && -f "$ROOT/tools/cosign/cosign.dev.key" ]]; then
    echo "[info] Using development key (non-production)" >&2
    echo "$ROOT/tools/cosign/cosign.dev.key"
  else
    echo "[error] No signing key available. Set COSIGN_PRIVATE_KEY_B64 or COSIGN_ALLOW_DEV_KEY=1" >&2
    return 1
  fi
}

# Build AOC CLI if not already built
AOC_CLI_PROJECT="$ROOT/src/Aoc/StellaOps.Aoc.Cli/StellaOps.Aoc.Cli.csproj"
AOC_CLI_OUT="$OUT_DIR/cli"

if [[ -f "$AOC_CLI_PROJECT" ]]; then
  echo "==> Building AOC CLI..."
  dotnet publish "$AOC_CLI_PROJECT" \
    -c Release \
    -o "$AOC_CLI_OUT" \
    --no-restore 2>/dev/null || echo "[info] Build skipped (may need restore)"
else
  echo "[info] AOC CLI project not found; using placeholder"
  mkdir -p "$AOC_CLI_OUT"
  echo "AOC CLI placeholder - build from src/Aoc/StellaOps.Aoc.Cli/" > "$AOC_CLI_OUT/README.txt"
fi

# Create backfill runner bundle
echo "==> Creating backfill runner bundle..."
RUNNER_TAR="$OUT_DIR/aoc-backfill-runner.tar.gz"
tar -czf "$RUNNER_TAR" -C "$AOC_CLI_OUT" .

# Compute hash
sha256() {
  sha256sum "$1" | awk '{print $1}'
}
RUNNER_HASH=$(sha256 "$RUNNER_TAR")

# Generate manifest
echo "==> Generating manifest..."
MANIFEST="$OUT_DIR/aoc-backfill-runner.manifest.json"
cat > "$MANIFEST" <<EOF
{
  "schemaVersion": "1.0.0",
  "created": "$CREATED",
  "runner": {
    "path": "aoc-backfill-runner.tar.gz",
    "sha256": "$RUNNER_HASH",
    "size": $(stat -c%s "$RUNNER_TAR" 2>/dev/null || stat -f%z "$RUNNER_TAR")
  },
  "dataset": {
    "hash": "${DATASET_HASH:-pending}",
    "status": "$( [[ -n "$DATASET_HASH" ]] && echo "available" || echo "pending-dev-rehearsal" )"
  },
  "signing": {
    "mode": "$( [[ "${COSIGN_ALLOW_DEV_KEY:-0}" == "1" ]] && echo "development" || echo "production" )"
  }
}
EOF

# Sign with cosign if a key is available
KEY_FILE=$(resolve_key) || KEY_FILE=""
COSIGN="${COSIGN:-$ROOT/tools/cosign/cosign}"
DSSE_OUT="$OUT_DIR/aoc-backfill-runner.dsse.json"

if [[ -n "${KEY_FILE:-}" ]]; then
  COSIGN_CMD="$COSIGN"
  if command -v cosign &>/dev/null; then
    COSIGN_CMD="cosign"
  fi

  echo "==> Signing bundle..."
  COSIGN_PASSWORD="${COSIGN_PASSWORD:-}" "$COSIGN_CMD" sign-blob \
    --key "$KEY_FILE" \
    --bundle "$DSSE_OUT" \
    --tlog-upload=false \
    --yes \
    "$RUNNER_TAR" 2>/dev/null || echo "[info] DSSE signing skipped"
fi

# Generate SBOM placeholder
echo "==> Generating SBOM..."
SBOM="$OUT_DIR/aoc-backfill-runner.sbom.json"
cat > "$SBOM" <<EOF
{
  "spdxVersion": "SPDX-2.3",
  "dataLicense": "CC0-1.0",
  "SPDXID": "SPDXRef-DOCUMENT",
  "name": "aoc-backfill-runner",
  "documentNamespace": "https://stella-ops.org/sbom/aoc-backfill-runner/$CREATED",
  "creationInfo": {
    "created": "$CREATED",
    "creators": ["Tool: stellaops-aoc-packager"]
  },
  "packages": [
    {
      "name": "StellaOps.Aoc.Cli",
      "SPDXID": "SPDXRef-Package-aoc-cli",
      "downloadLocation": "NOASSERTION",
      "filesAnalyzed": false
    }
  ]
}
EOF

# Generate provenance
echo "==> Generating provenance..."
PROVENANCE="$OUT_DIR/aoc-backfill-runner.provenance.json"
cat > "$PROVENANCE" <<EOF
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "aoc-backfill-runner.tar.gz",
      "digest": {"sha256": "$RUNNER_HASH"}
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "buildType": "https://stella-ops.org/aoc-backfill-release/v1",
      "internalParameters": {
        "created": "$CREATED",
        "datasetHash": "${DATASET_HASH:-pending}"
      }
    },
    "runDetails": {
      "builder": {"id": "https://stella-ops.org/aoc-backfill-release"}
    }
  }
}
EOF

# Generate checksums
echo "==> Generating checksums..."
cd "$OUT_DIR"
sha256sum aoc-backfill-runner.tar.gz aoc-backfill-runner.manifest.json aoc-backfill-runner.sbom.json > SHA256SUMS

# Cleanup temp key
rm -f "$OUT_DIR/.cosign.key"

echo "==> AOC backfill packaging complete"
echo "   Runner: $RUNNER_TAR"
echo "   Manifest: $MANIFEST"
echo "   SBOM: $SBOM"
echo "   Provenance: $PROVENANCE"
echo "   Checksums: $OUT_DIR/SHA256SUMS"
# Guarded with if: a bare [[ -f ]] && echo as the final command would make
# the script exit non-zero whenever signing was skipped.
if [[ -f "$DSSE_OUT" ]]; then
  echo "   DSSE: $DSSE_OUT"
fi
128
ops/devops/ledger/build-pack.sh
Normal file
@@ -0,0 +1,128 @@
#!/usr/bin/env bash
# Build Findings Ledger export pack
# Usage: ./build-pack.sh [--snapshot-id <id>] [--sign] [--output <dir>]

set -euo pipefail

ROOT=$(cd "$(dirname "$0")/../../.." && pwd)
OUT_DIR="${OUT_DIR:-$ROOT/out/ledger/packs}"
SNAPSHOT_ID="${SNAPSHOT_ID:-$(date +%Y%m%d%H%M%S)}"
CREATED="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
SIGN=0

# Parse args
while [[ $# -gt 0 ]]; do
  case $1 in
    --snapshot-id) SNAPSHOT_ID="$2"; shift 2 ;;
    --output) OUT_DIR="$2"; shift 2 ;;
    --sign) SIGN=1; shift ;;
    *) shift ;;
  esac
done

mkdir -p "$OUT_DIR/staging"

echo "==> Building Ledger Pack"
echo "   Snapshot ID: $SNAPSHOT_ID"
echo "   Output: $OUT_DIR"

# Key resolution for signing
resolve_key() {
  if [[ -n "${COSIGN_PRIVATE_KEY_B64:-}" ]]; then
    local tmp_key="$OUT_DIR/.cosign.key"
    echo "$COSIGN_PRIVATE_KEY_B64" | base64 -d > "$tmp_key"
    chmod 600 "$tmp_key"
    echo "$tmp_key"
  elif [[ -f "$ROOT/tools/cosign/cosign.key" ]]; then
    echo "$ROOT/tools/cosign/cosign.key"
  elif [[ "${COSIGN_ALLOW_DEV_KEY:-0}" == "1" && -f "$ROOT/tools/cosign/cosign.dev.key" ]]; then
    echo "[info] Using development key" >&2
    echo "$ROOT/tools/cosign/cosign.dev.key"
  else
    echo ""
  fi
}

# Create pack structure
STAGE="$OUT_DIR/staging/$SNAPSHOT_ID"
mkdir -p "$STAGE/findings" "$STAGE/metadata" "$STAGE/signatures"

# Create placeholder data (replace with actual Ledger export)
cat > "$STAGE/findings/findings.ndjson" <<EOF
{"id": "placeholder-1", "type": "infrastructure-ready", "created": "$CREATED"}
EOF

cat > "$STAGE/metadata/snapshot.json" <<EOF
{
  "snapshotId": "$SNAPSHOT_ID",
  "created": "$CREATED",
  "format": "ledger-pack-v1",
  "status": "infrastructure-ready",
  "note": "Replace with actual Ledger snapshot export"
}
EOF

# Generate manifest
sha256() { sha256sum "$1" | awk '{print $1}'; }

cat > "$STAGE/manifest.json" <<EOF
{
  "schemaVersion": "1.0.0",
  "packId": "$SNAPSHOT_ID",
  "created": "$CREATED",
  "format": "ledger-pack-v1",
  "contents": {
    "findings": {"path": "findings/findings.ndjson", "format": "ndjson"},
    "metadata": {"path": "metadata/snapshot.json", "format": "json"}
  }
}
EOF

# Generate provenance (digest filled in once the tarball exists)
cat > "$STAGE/provenance.json" <<EOF
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [{"name": "snapshot-$SNAPSHOT_ID.pack.tar.gz", "digest": {"sha256": "pending"}}],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "buildType": "https://stella-ops.org/ledger-pack/v1",
      "internalParameters": {"snapshotId": "$SNAPSHOT_ID", "created": "$CREATED"}
    },
    "runDetails": {"builder": {"id": "https://stella-ops.org/ledger-pack-builder"}}
  }
}
EOF

# Create pack tarball
PACK_TAR="$OUT_DIR/snapshot-$SNAPSHOT_ID.pack.tar.gz"
tar -czf "$PACK_TAR" -C "$STAGE" .

# Update provenance with the actual hash (GNU sed first, BSD sed fallback)
PACK_HASH=$(sha256 "$PACK_TAR")
sed -i "s/\"sha256\": \"pending\"/\"sha256\": \"$PACK_HASH\"/" "$STAGE/provenance.json" 2>/dev/null || \
  sed -i '' "s/\"sha256\": \"pending\"/\"sha256\": \"$PACK_HASH\"/" "$STAGE/provenance.json"

# Keep the updated provenance next to the pack: staging is removed below,
# and the copy inside the tarball still carries the "pending" placeholder.
cp "$STAGE/provenance.json" "$OUT_DIR/snapshot-$SNAPSHOT_ID.provenance.json"

# Generate checksums
cd "$OUT_DIR"
sha256sum "snapshot-$SNAPSHOT_ID.pack.tar.gz" > "snapshot-$SNAPSHOT_ID.SHA256SUMS"

# Sign if requested
if [[ $SIGN -eq 1 ]]; then
  KEY_FILE=$(resolve_key)
  if [[ -n "$KEY_FILE" ]] && command -v cosign &>/dev/null; then
    echo "==> Signing pack..."
    COSIGN_PASSWORD="${COSIGN_PASSWORD:-}" cosign sign-blob \
      --key "$KEY_FILE" \
      --bundle "$OUT_DIR/snapshot-$SNAPSHOT_ID.dsse.json" \
      --tlog-upload=false --yes "$PACK_TAR" 2>/dev/null || echo "[info] Signing skipped"
  fi
fi

# Cleanup
rm -rf "$OUT_DIR/staging"
rm -f "$OUT_DIR/.cosign.key"

echo "==> Pack build complete"
echo "   Pack: $PACK_TAR"
echo "   Provenance: $OUT_DIR/snapshot-$SNAPSHOT_ID.provenance.json"
echo "   Checksums: $OUT_DIR/snapshot-$SNAPSHOT_ID.SHA256SUMS"
61
ops/devops/ledger/deprecation-policy.yaml
Normal file
@@ -0,0 +1,61 @@
# Findings Ledger API Deprecation Policy
# DEVOPS-LEDGER-OAS-63-001-REL

version: "1.0.0"
created: "2025-12-14"

policy:
  # Minimum deprecation notice period
  notice_period_days: 90

  # Supported API versions
  supported_versions:
    - version: "v1"
      status: "current"
      sunset_date: null
    # Future versions will be added here

# Deprecation workflow
workflow:
  - stage: "announce"
    description: "Add deprecation notice to API responses and docs"
    actions:
      - "Add Sunset header to deprecated endpoints"
      - "Update OpenAPI spec with deprecation annotations"
      - "Notify consumers via changelog"

  - stage: "warn"
    description: "Emit warnings in logs and metrics"
    duration_days: 30
    actions:
      - "Log deprecation warnings"
      - "Increment deprecation_usage_total metric"
      - "Send email to registered consumers"

  - stage: "sunset"
    description: "Remove deprecated endpoints"
    actions:
      - "Return 410 Gone for removed endpoints"
      - "Update SDK to remove deprecated methods"
      - "Archive endpoint documentation"

# HTTP headers for deprecation
headers:
  sunset: "Sunset"
  deprecation: "Deprecation"
  link: "Link"

# Metrics to track
metrics:
  - name: "ledger_api_deprecation_usage_total"
    type: "counter"
    labels: ["endpoint", "version", "consumer"]
    description: "Usage count of deprecated endpoints"

  - name: "ledger_api_version_requests_total"
    type: "counter"
    labels: ["version"]
    description: "Requests per API version"

# Current deprecations (none yet)
deprecations: []
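What the announce stage adds to responses can be sketched as plain header lines; the sunset date and successor link below are illustrative values, not real endpoints:

```shell
# Emit the three headers named in the policy's `headers` map.
# Date and Link target are hypothetical examples.
emit_deprecation_headers() {
  printf 'Deprecation: true\n'
  printf 'Sunset: Sat, 14 Mar 2026 00:00:00 GMT\n'
  printf 'Link: <https://ledger.example/api/v2>; rel="successor-version"\n'
}

emit_deprecation_headers
```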
56
ops/devops/ledger/oas-infrastructure.md
Normal file
@@ -0,0 +1,56 @@
# Findings Ledger OpenAPI Infrastructure

## Scope
Infrastructure for Ledger OAS lint, publish, SDK generation, and deprecation governance.

## Tasks Covered
- DEVOPS-LEDGER-OAS-61-001-REL: Lint/diff/publish gates
- DEVOPS-LEDGER-OAS-61-002-REL: `.well-known/openapi` validation
- DEVOPS-LEDGER-OAS-62-001-REL: SDK generation/signing
- DEVOPS-LEDGER-OAS-63-001-REL: Deprecation governance

## File Structure
```
ops/devops/ledger/
├── oas-infrastructure.md      (this file)
├── validate-oas.sh            # Lint + validate OAS spec
├── generate-sdk.sh            # Generate and sign SDK
├── publish-oas.sh             # Publish to .well-known
└── deprecation-policy.yaml    # Deprecation rules

.gitea/workflows/
├── ledger-oas-ci.yml          # OAS lint/validate/diff
├── ledger-sdk-release.yml     # SDK generation
└── ledger-oas-publish.yml     # Publish spec
```

## Prerequisites
- Findings Ledger OpenAPI spec at `api/ledger/openapi.yaml`
- Version info in spec metadata
- Examples for each endpoint

## Usage

### Validate OAS
```bash
./ops/devops/ledger/validate-oas.sh api/ledger/openapi.yaml
```

### Generate SDK
```bash
# Dev mode
COSIGN_ALLOW_DEV_KEY=1 ./ops/devops/ledger/generate-sdk.sh

# Production
./ops/devops/ledger/generate-sdk.sh
```

### Publish to .well-known
```bash
./ops/devops/ledger/publish-oas.sh --environment staging
```

## Outputs
- `out/ledger/sdk/` - Generated SDK packages
- `out/ledger/oas/` - Validated spec + diff reports
- `out/ledger/deprecation/` - Deprecation reports
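As a rough illustration of the lint gate, a sanity check that a spec carries the required top-level keys can be done with grep alone; the real `validate-oas.sh` would delegate to a proper OAS linter, and the spec below is a fabricated placeholder:

```shell
set -euo pipefail
spec=$(mktemp)
cat > "$spec" <<'YAML'
openapi: 3.1.0
info:
  title: Findings Ledger API
  version: 0.0.1-placeholder
YAML

# Fail fast if any required key is missing.
for key in '^openapi:' '^info:' '^  title:' '^  version:'; do
  grep -q "$key" "$spec" || { echo "missing: $key" >&2; exit 1; }
done
echo "spec sanity OK"
```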
58
ops/devops/ledger/packs-infrastructure.md
Normal file
@@ -0,0 +1,58 @@
|
||||
# Findings Ledger Packs Infrastructure
|
||||
|
||||
## Scope
|
||||
Infrastructure for snapshot/time-travel export packaging and signing.
|
||||
|
||||
## Tasks Covered
|
||||
- DEVOPS-LEDGER-PACKS-42-001-REL: Snapshot/time-travel export packaging
|
||||
- DEVOPS-LEDGER-PACKS-42-002-REL: Pack signing + integrity verification
|
||||
|
||||
## Components
|
||||
|
||||
### 1. Pack Builder
|
||||
Creates deterministic export packs from Ledger snapshots.
|
||||
|
||||
```bash
|
||||
# Build pack from snapshot
|
||||
./ops/devops/ledger/build-pack.sh --snapshot-id <id> --output out/ledger/packs/
|
||||
|
||||
# Dev mode with signing
|
||||
COSIGN_ALLOW_DEV_KEY=1 ./ops/devops/ledger/build-pack.sh --sign
|
||||
```
|
||||
|
||||
### 2. Pack Verifier
Verifies pack integrity and signatures.

```bash
# Verify pack
./ops/devops/ledger/verify-pack.sh out/ledger/packs/snapshot-*.pack.tar.gz
```
### 3. Time-Travel Export
Creates point-in-time exports for compliance/audit.

```bash
# Export at specific timestamp
./ops/devops/ledger/time-travel-export.sh --timestamp 2025-12-01T00:00:00Z
```
## Pack Format

```
snapshot-<id>.pack.tar.gz
├── manifest.json           # Pack metadata + checksums
├── findings/               # Finding records (NDJSON)
├── metadata/               # Scan metadata
├── provenance.json         # SLSA provenance
└── signatures/
    ├── manifest.dsse.json  # DSSE signature
    └── SHA256SUMS          # Checksums
```
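The checksum layer of this layout can be exercised with coreutils alone. A minimal sketch that assembles a miniature pack in the layout above and re-verifies it the way the verifier is described to (dummy contents, throwaway paths):

```shell
#!/usr/bin/env bash
# Sketch: build a miniature pack, record per-file checksums in
# signatures/SHA256SUMS, then extract and re-verify them.
set -euo pipefail

work=$(mktemp -d)
pack="$work/snapshot-demo"
mkdir -p "$pack/findings" "$pack/signatures"
echo '{"schemaVersion":"1.0.0"}' > "$pack/manifest.json"
echo '{"id":"F-1"}' > "$pack/findings/records.ndjson"

# Checksums cover each payload file, recorded relative to the pack root
(cd "$pack" && sha256sum manifest.json findings/records.ndjson > signatures/SHA256SUMS)

tar -czf "$work/snapshot-demo.pack.tar.gz" -C "$work" snapshot-demo

# Verification side: extract and re-check every recorded digest
verify=$(mktemp -d)
tar -xzf "$work/snapshot-demo.pack.tar.gz" -C "$verify"
(cd "$verify/snapshot-demo" && sha256sum -c signatures/SHA256SUMS)
```

`sha256sum -c` prints one `: OK` line per file and fails the script (via `set -e`) if any digest drifts; DSSE signature verification layers on top of this with cosign.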
## CI Workflows
- `ledger-packs-ci.yml` - Build and verify packs
- `ledger-packs-release.yml` - Sign and publish packs
## Prerequisites
- Ledger snapshot schema finalized
- Storage contract defined
- Pack format specification

ops/devops/ledger/validate-oas.sh (new file, 80 lines)
@@ -0,0 +1,80 @@
#!/usr/bin/env bash
# Validate Findings Ledger OpenAPI spec
# Usage: ./validate-oas.sh [spec-path]

set -euo pipefail

ROOT=$(cd "$(dirname "$0")/../../.." && pwd)
SPEC_PATH="${1:-$ROOT/api/ledger/openapi.yaml}"
OUT_DIR="${OUT_DIR:-$ROOT/out/ledger/oas}"

mkdir -p "$OUT_DIR"

echo "==> Validating Ledger OpenAPI Spec"
echo "    Spec: $SPEC_PATH"

# Check if spec exists
if [[ ! -f "$SPEC_PATH" ]]; then
  echo "[info] OpenAPI spec not found at $SPEC_PATH"
  echo "[info] Creating placeholder for infrastructure validation"

  mkdir -p "$(dirname "$SPEC_PATH")"
  cat > "$SPEC_PATH" <<'EOF'
openapi: 3.1.0
info:
  title: Findings Ledger API
  version: 0.0.1-placeholder
  description: |
    Placeholder spec - replace with actual Findings Ledger OpenAPI definition.
    Infrastructure is ready for validation once spec is provided.
paths:
  /health:
    get:
      summary: Health check
      responses:
        '200':
          description: OK
EOF
  echo "[info] Placeholder spec created"
fi

# Lint with spectral if available
if command -v spectral &>/dev/null; then
  echo "==> Running Spectral lint..."
  spectral lint "$SPEC_PATH" --output "$OUT_DIR/lint-report.json" --format json || true
  spectral lint "$SPEC_PATH" || true
else
  echo "[info] Spectral not installed; skipping lint"
fi

# Validate with openapi-generator if available
if command -v openapi-generator-cli &>/dev/null; then
  echo "==> Validating with openapi-generator..."
  openapi-generator-cli validate -i "$SPEC_PATH" > "$OUT_DIR/validation-report.txt" 2>&1 || true
else
  echo "[info] openapi-generator-cli not installed; skipping validation"
fi

# Extract version info
echo "==> Extracting spec metadata..."
if command -v yq &>/dev/null; then
  VERSION=$(yq '.info.version' "$SPEC_PATH")
  TITLE=$(yq '.info.title' "$SPEC_PATH")
else
  VERSION="unknown"
  TITLE="Findings Ledger API"
fi

# Generate summary
cat > "$OUT_DIR/spec-summary.json" <<EOF
{
  "specPath": "$SPEC_PATH",
  "title": "$TITLE",
  "version": "$VERSION",
  "validatedAt": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "status": "validated"
}
EOF

echo "==> Validation complete"
echo "    Summary: $OUT_DIR/spec-summary.json"

ops/devops/lnm/alerts/lnm-alerts.yaml (new file, 57 lines)
@@ -0,0 +1,57 @@
# LNM Migration Alert Rules
# Prometheus alerting rules for linkset/advisory migrations

groups:
  - name: lnm-migration
    rules:
      - alert: LnmMigrationErrorRate
        expr: rate(lnm_migration_errors_total[5m]) > 0.1
        for: 5m
        labels:
          severity: warning
          team: concelier
        annotations:
          summary: "LNM migration error rate elevated"
          description: "Migration errors: {{ $value | printf \"%.2f\" }}/s"

      - alert: LnmBackfillStalled
        expr: increase(lnm_backfill_processed_total[10m]) == 0 and lnm_backfill_running == 1
        for: 10m
        labels:
          severity: critical
          team: concelier
        annotations:
          summary: "LNM backfill stalled"
          description: "No progress in 10 minutes while backfill is running"

      - alert: LnmLinksetCountMismatch
        expr: abs(lnm_linksets_total - lnm_linksets_expected) > 100
        for: 15m
        labels:
          severity: warning
          team: concelier
        annotations:
          summary: "Linkset count mismatch"
          description: "Linkset count deviates from expected by {{ $value }}"

      - alert: LnmObservationsBacklogHigh
        expr: lnm_observations_backlog > 10000
        for: 5m
        labels:
          severity: warning
          team: excititor
        annotations:
          summary: "Advisory observations backlog high"
          description: "Backlog: {{ $value }} items"

  - name: lnm-sla
    rules:
      - alert: LnmIngestToApiLatencyHigh
        expr: histogram_quantile(0.95, rate(lnm_ingest_to_api_latency_seconds_bucket[5m])) > 30
        for: 10m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "Ingest to API latency exceeds SLA"
          description: "P95 latency: {{ $value | printf \"%.1f\" }}s (SLA: 30s)"

ops/devops/lnm/dashboards/lnm-migration.json (new file, 51 lines)
@@ -0,0 +1,51 @@
{
  "dashboard": {
    "title": "LNM Migration Dashboard",
    "uid": "lnm-migration",
    "tags": ["lnm", "migration", "concelier", "excititor"],
    "timezone": "utc",
    "refresh": "30s",
    "panels": [
      {
        "title": "Migration Progress",
        "type": "stat",
        "gridPos": {"x": 0, "y": 0, "w": 6, "h": 4},
        "targets": [
          {"expr": "lnm_backfill_processed_total", "legendFormat": "Processed"}
        ]
      },
      {
        "title": "Error Rate",
        "type": "graph",
        "gridPos": {"x": 6, "y": 0, "w": 12, "h": 4},
        "targets": [
          {"expr": "rate(lnm_migration_errors_total[5m])", "legendFormat": "Errors/s"}
        ]
      },
      {
        "title": "Linksets Total",
        "type": "stat",
        "gridPos": {"x": 18, "y": 0, "w": 6, "h": 4},
        "targets": [
          {"expr": "lnm_linksets_total", "legendFormat": "Total"}
        ]
      },
      {
        "title": "Observations Backlog",
        "type": "graph",
        "gridPos": {"x": 0, "y": 4, "w": 12, "h": 6},
        "targets": [
          {"expr": "lnm_observations_backlog", "legendFormat": "Backlog"}
        ]
      },
      {
        "title": "Ingest to API Latency (P95)",
        "type": "graph",
        "gridPos": {"x": 12, "y": 4, "w": 12, "h": 6},
        "targets": [
          {"expr": "histogram_quantile(0.95, rate(lnm_ingest_to_api_latency_seconds_bucket[5m]))", "legendFormat": "P95"}
        ]
      }
    ]
  }
}

ops/devops/lnm/package-runner.sh (new file, 92 lines)
@@ -0,0 +1,92 @@
#!/usr/bin/env bash
# Package LNM migration runner for release/offline kit
# Usage: ./package-runner.sh
# Dev mode: COSIGN_ALLOW_DEV_KEY=1 COSIGN_PASSWORD=stellaops-dev ./package-runner.sh

set -euo pipefail

ROOT=$(cd "$(dirname "$0")/../../.." && pwd)
OUT_DIR="${OUT_DIR:-$ROOT/out/lnm}"
CREATED="${CREATED:-$(date -u +%Y-%m-%dT%H:%M:%SZ)}"

mkdir -p "$OUT_DIR/runner"

echo "==> LNM Migration Runner Packaging"

# Key resolution: CI secret > production key > (opt-in) dev key
resolve_key() {
  if [[ -n "${COSIGN_PRIVATE_KEY_B64:-}" ]]; then
    local tmp_key="$OUT_DIR/.cosign.key"
    echo "$COSIGN_PRIVATE_KEY_B64" | base64 -d > "$tmp_key"
    chmod 600 "$tmp_key"
    echo "$tmp_key"
  elif [[ -f "$ROOT/tools/cosign/cosign.key" ]]; then
    echo "$ROOT/tools/cosign/cosign.key"
  elif [[ "${COSIGN_ALLOW_DEV_KEY:-0}" == "1" && -f "$ROOT/tools/cosign/cosign.dev.key" ]]; then
    echo "[info] Using development key" >&2
    echo "$ROOT/tools/cosign/cosign.dev.key"
  else
    echo ""
  fi
}

# Build migration runner if project exists
MIGRATION_PROJECT="$ROOT/src/Concelier/__Libraries/StellaOps.Concelier.Migrations/StellaOps.Concelier.Migrations.csproj"
if [[ -f "$MIGRATION_PROJECT" ]]; then
  echo "==> Building migration runner..."
  dotnet publish "$MIGRATION_PROJECT" -c Release -o "$OUT_DIR/runner" --no-restore 2>/dev/null || \
    echo "[info] Build skipped (may need restore or project doesn't exist yet)"
else
  echo "[info] Migration project not found; creating placeholder"
  cat > "$OUT_DIR/runner/README.txt" <<EOF
LNM Migration Runner Placeholder
Build from: src/Concelier/__Libraries/StellaOps.Concelier.Migrations/
Created: $CREATED
Status: Awaiting upstream migration project
EOF
fi

# Create runner bundle
echo "==> Creating runner bundle..."
RUNNER_TAR="$OUT_DIR/lnm-migration-runner.tar.gz"
tar -czf "$RUNNER_TAR" -C "$OUT_DIR/runner" .

# Compute hash
sha256() { sha256sum "$1" | awk '{print $1}'; }
RUNNER_HASH=$(sha256 "$RUNNER_TAR")

# Generate manifest
MANIFEST="$OUT_DIR/lnm-migration-runner.manifest.json"
cat > "$MANIFEST" <<EOF
{
  "schemaVersion": "1.0.0",
  "created": "$CREATED",
  "runner": {
    "path": "lnm-migration-runner.tar.gz",
    "sha256": "$RUNNER_HASH"
  },
  "migrations": {
    "22-001": {"status": "infrastructure-ready", "description": "Advisory observations/linksets staging"},
    "22-002": {"status": "infrastructure-ready", "description": "VEX observation/linkset backfill"},
    "22-003": {"status": "infrastructure-ready", "description": "Metrics monitoring"}
  }
}
EOF

# Sign if key available
KEY_FILE=$(resolve_key)
if [[ -n "$KEY_FILE" ]] && command -v cosign &>/dev/null; then
  echo "==> Signing bundle..."
  COSIGN_PASSWORD="${COSIGN_PASSWORD:-}" cosign sign-blob \
    --key "$KEY_FILE" \
    --bundle "$OUT_DIR/lnm-migration-runner.dsse.json" \
    --tlog-upload=false --yes "$RUNNER_TAR" 2>/dev/null || true
fi

# Generate checksums
cd "$OUT_DIR"
sha256sum lnm-migration-runner.tar.gz lnm-migration-runner.manifest.json > SHA256SUMS

echo "==> LNM runner packaging complete"
echo "    Bundle: $RUNNER_TAR"
echo "    Manifest: $MANIFEST"

ops/devops/lnm/tooling-infrastructure.md (new file, 53 lines)
@@ -0,0 +1,53 @@
# LNM (Link-Not-Merge) Tooling Infrastructure

## Scope (DEVOPS-LNM-TOOLING-22-000)
Package and tooling for linkset/advisory migrations across Concelier and Excititor.

## Components
### 1. Migration Runner
Location: `src/Concelier/__Libraries/StellaOps.Concelier.Migrations/`

```bash
# Build migration runner
dotnet publish src/Concelier/__Libraries/StellaOps.Concelier.Migrations \
  -c Release -o out/lnm/runner

# Package
./ops/devops/lnm/package-runner.sh
```
### 2. Backfill Tool
Location: `src/Concelier/StellaOps.Concelier.Backfill/` (when available)

```bash
# Dev mode backfill with sample data
COSIGN_ALLOW_DEV_KEY=1 ./ops/devops/lnm/run-backfill.sh --dry-run

# Production backfill
./ops/devops/lnm/run-backfill.sh --batch-size=500
```
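`run-backfill.sh` is not yet in tree; a minimal sketch of the batching and dry-run semantics the flags above imply (the flag parsing and the `process_batch` stand-in are illustrative, not the final CLI):

```shell
#!/usr/bin/env bash
# Illustrative sketch of backfill batching: consume record IDs, process
# them in fixed-size batches, and skip writes under --dry-run.
set -euo pipefail

BATCH_SIZE=500
DRY_RUN=0
for arg in "$@"; do
  case "$arg" in
    --dry-run)      DRY_RUN=1 ;;
    --batch-size=*) BATCH_SIZE="${arg#--batch-size=}" ;;
  esac
done

process_batch() {  # stand-in for the real linkset backfill write
  if [[ "$DRY_RUN" == "1" ]]; then
    echo "[dry-run] would backfill $1 records"
  else
    echo "[backfill] wrote $1 records"
  fi
}

# Demo input: 1200 synthetic record IDs -> two full batches plus a remainder
batch=0
while read -r _; do
  batch=$((batch + 1))
  if (( batch == BATCH_SIZE )); then
    process_batch "$batch"
    batch=0
  fi
done < <(seq 1200)
if (( batch > 0 )); then
  process_batch "$batch"
fi
```

Fixed-size batches keep each transaction bounded, so a failed batch can be retried without re-scanning already-backfilled records.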
### 3. Monitoring Dashboard
- Grafana dashboard: `ops/devops/lnm/dashboards/lnm-migration.json`
- Alert rules: `ops/devops/lnm/alerts/lnm-alerts.yaml`
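Before committing rule changes, the alert file can be sanity-checked locally; a small sketch using `promtool` (shipped with the Prometheus distribution, treated here as optional):

```shell
# Validate LNM alert rules locally; no-op when promtool or the file is absent.
RULES="ops/devops/lnm/alerts/lnm-alerts.yaml"
if command -v promtool &>/dev/null && [[ -f "$RULES" ]]; then
  promtool check rules "$RULES"
else
  echo "[info] promtool or rules file missing; skipping local rule check"
fi
```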
## CI Workflows

| Workflow | Purpose |
|----------|---------|
| `lnm-migration-ci.yml` | Build/test migration runner |
| `lnm-backfill-staging.yml` | Run backfill in staging |
| `lnm-metrics-ci.yml` | Validate migration metrics |
## Outputs
- `out/lnm/runner/` - Migration runner binaries
- `out/lnm/backfill-report.json` - Backfill results
- `out/lnm/SHA256SUMS` - Checksums
## Status
- [x] Infrastructure plan created
- [ ] Migration runner project (awaiting upstream)
- [ ] Backfill tool (awaiting upstream)
- [x] CI workflow templates ready
- [x] Monitoring templates ready