Implement MongoDB-based storage for Pack Run approval, artifact, log, and state management

- Added MongoPackRunApprovalStore for managing approval states with MongoDB.
- Introduced MongoPackRunArtifactUploader for uploading and storing artifacts.
- Created MongoPackRunLogStore to handle logging of pack run events.
- Developed MongoPackRunStateStore for persisting and retrieving pack run states (a sketch follows this list).
- Implemented unit tests for MongoDB stores to ensure correct functionality.
- Added MongoTaskRunnerTestContext for setting up MongoDB test environment.
- Enhanced PackRunStateFactory to correctly initialize state with gate reasons.
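As a rough illustration of the state-store shape named above, a minimal C# sketch using MongoDB.Driver; the document model, collection name, and method signatures here are assumptions, not the actual `MongoPackRunStateStore` implementation:

```csharp
using MongoDB.Driver;

// Illustrative document shape; field names are assumptions.
public sealed class PackRunStateDocument
{
    public string Id { get; set; } = default!;      // pack run id
    public string State { get; set; } = default!;   // serialized pack run state
    public DateTime UpdatedAt { get; set; }
}

// Sketch of the store named in the commit message; not the actual implementation.
public sealed class MongoPackRunStateStore
{
    private readonly IMongoCollection<PackRunStateDocument> _collection;

    public MongoPackRunStateStore(IMongoDatabase database) =>
        _collection = database.GetCollection<PackRunStateDocument>("pack_run_states");

    // Upsert keeps retries idempotent: the latest state for a run id wins.
    public Task SaveAsync(PackRunStateDocument doc, CancellationToken ct = default) =>
        _collection.ReplaceOneAsync(
            Builders<PackRunStateDocument>.Filter.Eq(d => d.Id, doc.Id),
            doc,
            new ReplaceOptions { IsUpsert = true },
            ct);

    public Task<PackRunStateDocument> GetAsync(string id, CancellationToken ct = default) =>
        _collection.Find(d => d.Id == id).FirstOrDefaultAsync(ct);
}
```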
Contained in: master
2025-11-07 10:01:35 +02:00
parent e5ffcd6535
commit a1ce3f74fa
122 changed files with 8730 additions and 914 deletions


@@ -110,6 +110,10 @@ Import script calls `PutManifest` for each manifest, verifying digests. This ena
Scanner.Worker serialises EntryTrace graphs into Surface.FS using `SurfaceCacheKey(namespace: "entrytrace.graph", tenant, sha256(options|env|entrypoint))`. At runtime the worker checks the cache before invoking analyzers; cache hits bypass parsing and feed the result store/attestor pipeline directly. The same namespace is consumed by WebService and CLI to retrieve cached graphs for reporting.
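The flow above amounts to "hash the inputs, look up, compute on miss, persist". A minimal C# sketch of that flow follows; only the `entrytrace.graph` namespace and the `sha256(options|env|entrypoint)` digest come from the text above, while `ISurfaceFileStore` and its methods are illustrative assumptions rather than the actual Surface.FS API:

```csharp
using System.Security.Cryptography;
using System.Text;

// SurfaceCacheKey mirrors the shape quoted above; the store interface is assumed.
public readonly record struct SurfaceCacheKey(string Namespace, string Tenant, string Digest);

public interface ISurfaceFileStore
{
    Task<byte[]?> TryGetAsync(SurfaceCacheKey key, CancellationToken ct);
    Task PutAsync(SurfaceCacheKey key, byte[] payload, CancellationToken ct);
}

public static class EntryTraceCache
{
    public static SurfaceCacheKey BuildKey(string tenant, string options, string env, string entrypoint)
    {
        // Digest of "options|env|entrypoint", matching the key layout described above.
        var digest = Convert.ToHexString(
            SHA256.HashData(Encoding.UTF8.GetBytes($"{options}|{env}|{entrypoint}"))).ToLowerInvariant();
        return new SurfaceCacheKey("entrytrace.graph", tenant, digest);
    }

    public static async Task<byte[]> GetOrComputeAsync(
        ISurfaceFileStore store, SurfaceCacheKey key,
        Func<CancellationToken, Task<byte[]>> runAnalyzers, CancellationToken ct)
    {
        // Cache hit: bypass the analyzers and reuse the serialised graph.
        var cached = await store.TryGetAsync(key, ct);
        if (cached is not null) return cached;

        // Cache miss: run the analyzers, then persist the graph for WebService/CLI reuse.
        var graph = await runAnalyzers(ct);
        await store.PutAsync(key, graph, ct);
        return graph;
    }
}
```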
### 6.2 BuildX generator path
`StellaOps.Scanner.Sbomer.BuildXPlugin` reuses the same CAS layout via the `--surface-*` descriptor flags (or `STELLAOPS_SURFACE_*` env vars). When layer fragment JSON, EntryTrace graph JSON, or NDJSON files are supplied, the plug-in writes them under `scanner/surface/**` within the configured CAS root and emits a manifest pointer so Scanner.WebService can pick up the artefacts without re-scanning. The Surface manifest JSON can also be copied to an arbitrary path via `--surface-manifest-output` for CI artefacts/offline kits.
## 7. Security & Tenancy
- Tenant ID is mandatory; Surface.Validation enforces a match with the Authority token.
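A minimal sketch of the tenant check described above; the `Surface.Validation` API itself is not shown in this document, so the type and method names below are illustrative assumptions:

```csharp
// Illustrative only: the real Surface.Validation API may differ.
public static class TenantGuard
{
    public static void EnsureTenantMatch(string manifestTenant, string tokenTenant)
    {
        if (string.IsNullOrWhiteSpace(manifestTenant))
            throw new InvalidOperationException("Surface manifest is missing the mandatory tenant id.");

        if (!string.Equals(manifestTenant, tokenTenant, StringComparison.Ordinal))
            throw new UnauthorizedAccessException(
                $"Tenant mismatch: manifest tenant '{manifestTenant}' does not match the Authority token tenant.");
    }
}
```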
@@ -119,8 +123,12 @@ Scanner.Worker serialises EntryTrace graphs into Surface.FS using `SurfaceCacheK
## 8. Observability
- Logs include manifest SHA, tenant, kind, and cache namespace; payload paths are truncated for brevity.
- Prometheus metrics (emitted by Scanner.Worker) now include the following; see the telemetry sketch after this list:
- `scanner_worker_surface_manifests_published_total`, `scanner_worker_surface_manifests_failed_total`, `scanner_worker_surface_manifests_skipped_total` with labels `{queue, job_kind, surface_result, reason?, surface_payload_count}`.
- `scanner_worker_surface_payload_persisted_total` with `{surface_kind}` to track cache churn (`entrytrace.graph`, `entrytrace.ndjson`, `layer.fragments`, …).
- `scanner_worker_surface_manifest_publish_duration_ms` histogram for end-to-end persistence latency.
- Grafana dashboard JSON: `docs/modules/scanner/operations/surface-worker-grafana-dashboard.json` (panels for publish outcomes, latency, per-kind cache rate, and failure reasons). Import alongside the analyzer dashboard and point it to the Scanner Prometheus datasource.
- Tracing spans: `surface.fs.put`, `surface.fs.get`, `surface.fs.cache`.
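Only the instrument and span names in the following C# sketch come from the list above; the `Meter`/`ActivitySource` names and tag plumbing are assumptions (the exporter is assumed to surface these instruments as the Prometheus series listed):

```csharp
using System.Diagnostics;
using System.Diagnostics.Metrics;

public static class SurfaceTelemetry
{
    private static readonly Meter Meter = new("StellaOps.Scanner.Worker.Surface");    // assumed meter name
    private static readonly ActivitySource Source = new("StellaOps.Scanner.Worker");  // assumed source name

    private static readonly Counter<long> ManifestsPublished =
        Meter.CreateCounter<long>("scanner_worker_surface_manifests_published_total");
    private static readonly Counter<long> PayloadsPersisted =
        Meter.CreateCounter<long>("scanner_worker_surface_payload_persisted_total");
    private static readonly Histogram<double> PublishDurationMs =
        Meter.CreateHistogram<double>("scanner_worker_surface_manifest_publish_duration_ms");

    public static void RecordManifestPublished(string queue, string jobKind, int payloadCount, double elapsedMs)
    {
        ManifestsPublished.Add(1,
            new KeyValuePair<string, object?>("queue", queue),
            new KeyValuePair<string, object?>("job_kind", jobKind),
            new KeyValuePair<string, object?>("surface_result", "published"),
            new KeyValuePair<string, object?>("surface_payload_count", payloadCount));
        PublishDurationMs.Record(elapsedMs);
    }

    public static void RecordPayloadPersisted(string surfaceKind) =>
        PayloadsPersisted.Add(1, new KeyValuePair<string, object?>("surface_kind", surfaceKind));

    // Wraps a Surface.FS write in the documented span name.
    public static Activity? StartPutSpan() => Source.StartActivity("surface.fs.put");
}
```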
## 9. Testing Strategy