feat: Implement Filesystem and MongoDB provenance writers for PackRun execution context

- Added `FilesystemPackRunProvenanceWriter` to write provenance manifests to the filesystem.
- Introduced `MongoPackRunArtifactReader` to read artifacts from MongoDB.
- Created `MongoPackRunProvenanceWriter` to store provenance manifests in MongoDB.
- Developed unit tests for filesystem and MongoDB provenance writers.
- Established `ITimelineEventStore` and `ITimelineIngestionService` interfaces for timeline event handling.
- Implemented `TimelineIngestionService` to validate and persist timeline events with hashing (see the sketch after this list).
- Created PostgreSQL schema and migration scripts for timeline indexing.
- Added dependency injection support for timeline indexer services.
- Developed tests for timeline ingestion and schema validation.
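
For orientation, here is a minimal sketch of the ingestion idea the list above describes, written in Python for consistency with the benchmark files in this commit rather than the .NET services it actually adds. The event fields, the SHA-256-over-canonical-JSON hash, and the in-memory store are illustrative assumptions, not the real `ITimelineEventStore` or `TimelineIngestionService` contracts.

import hashlib
import json


def canonical_hash(event: dict) -> str:
    # Assumption: hash a canonical (sorted-key, compact) JSON encoding of the event.
    payload = json.dumps(event, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


class InMemoryTimelineEventStore:
    """Hypothetical stand-in for the commit's ITimelineEventStore, keyed by hash for idempotency."""

    def __init__(self):
        self._events = {}

    def upsert(self, event_hash: str, event: dict) -> bool:
        # True if newly persisted, False if the same event was already stored.
        if event_hash in self._events:
            return False
        self._events[event_hash] = event
        return True


def ingest(store: InMemoryTimelineEventStore, event: dict) -> str:
    # Validate the minimal fields this sketch assumes, then persist the event under its hash.
    for field in ("timestamp", "kind", "payload"):
        if field not in event:
            raise ValueError(f"missing field: {field}")
    event_hash = canonical_hash(event)
    store.upsert(event_hash, event)
    return event_hash


# Example usage with made-up event data:
store = InMemoryTimelineEventStore()
print(ingest(store, {"timestamp": "2025-11-30T13:38:14Z", "kind": "pack-run", "payload": {"ok": True}}))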
This commit is contained in:
StellaOps Bot
2025-11-30 15:38:14 +02:00
parent 8f54ffa203
commit 17d45a6d30
276 changed files with 8618 additions and 688 deletions

View File

@@ -0,0 +1,38 @@
id: "py-unsafe-exec:101"
language: py
project: unsafe-exec
version: "1.0.0"
description: "Python handler with reachable eval sink"
entrypoints:
- "POST /api/exec"
sinks:
- id: "PyUnsafeExec::handle_request"
path: "src/app.py::handle_request"
kind: "process"
location:
file: src/app.py
line: 8
notes: "eval on user input"
environment:
os_image: "python:3.12-alpine"
runtime:
python: "3.12"
source_date_epoch: 1730000000
build:
command: "./build/build.sh"
source_date_epoch: 1730000000
outputs:
artifact_path: outputs/binary.tar.gz
sbom_path: outputs/sbom.cdx.json
coverage_path: outputs/coverage.json
traces_dir: outputs/traces
test:
command: "./tests/run-tests.sh"
expected_coverage:
- outputs/coverage.json
expected_traces:
- outputs/traces/traces.json
ground_truth:
summary: "Eval reachable via POST /api/exec"
evidence_files:
- "../benchmark/truth/py-unsafe-exec.json"

View File

@@ -0,0 +1,8 @@
case_id: "py-unsafe-exec:101"
entries:
  http:
    - id: "POST /api/exec"
      route: "/api/exec"
      method: "POST"
      handler: "app.handle_request"
      description: "Executes user code via eval"

View File

@@ -0,0 +1 @@
# Intentionally empty; uses stdlib only.

View File

@@ -0,0 +1,10 @@
"""Minimal Python handler with an unsafe eval sink."""
def handle_request(body):
code = body.get("code") if isinstance(body, dict) else None
if not isinstance(code, str):
return {"status": 400, "body": "bad request"}
# Sink: eval on user input (reachable)
result = eval(code)
return {"status": 200, "body": str(result)}

View File

@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail
cd "$(dirname "$0")"
export SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH:-1730000000}
export TZ=UTC
export LC_ALL=C
export PYTHONPATH="$(cd .. && pwd)/src"
python test_reach.py
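
As a usage note (an assumption, not part of the commit), a post-run check like the following could confirm that the script above produced the outputs the case manifest and test declare; the paths are taken from those files.

# Hypothetical post-run check; run from the case directory.
import pathlib
import sys

expected = [
    "outputs/coverage.json",
    "outputs/traces/traces.json",
    "outputs/SINK_REACHED",
]
missing = [path for path in expected if not pathlib.Path(path).is_file()]
if missing:
    sys.exit(f"missing outputs: {missing}")
print("all declared outputs present")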

View File

@@ -0,0 +1,54 @@
import json
import os
import pathlib

from app import handle_request

ROOT = pathlib.Path(__file__).resolve().parent.parent
OUT = ROOT / "outputs"
TRACE_DIR = OUT / "traces"
COVERAGE_FILE = OUT / "coverage.json"
TRACE_FILE = TRACE_DIR / "traces.json"


def ensure_dirs():
    OUT.mkdir(parents=True, exist_ok=True)
    TRACE_DIR.mkdir(parents=True, exist_ok=True)


def record_trace(entry, path_nodes):
    TRACE_FILE.write_text(
        json.dumps({
            "entry": entry,
            "path": path_nodes,
            "sink": "PyUnsafeExec::handle_request",
            "notes": "Eval reached"
        }, indent=2)
    )


def record_coverage(file_path, lines):
    COVERAGE_FILE.write_text(
        json.dumps({
            "files": {
                file_path: {
                    "lines_covered": lines,
                    "lines_total": 30
                }
            }
        }, indent=2)
    )


def test_reach():
    ensure_dirs()
    res = handle_request({"code": "3*7"})
    assert res["status"] == 200
    assert res["body"] == "21"
    record_trace("POST /api/exec", ["app.py::handle_request", "eval(code)"])
    record_coverage("src/app.py", [3, 4, 5, 8, 10])
    (OUT / "SINK_REACHED").write_text("true")


if __name__ == "__main__":
    test_reach()