feat(scanner): Implement Deno analyzer and associated tests
Some checks failed
Docs CI / lint-and-preview (push) Has been cancelled
- Added Deno analyzer with comprehensive metadata and evidence structure.
- Created a detailed implementation plan for Sprint 130 focusing on Deno analyzer.
- Introduced AdvisoryAiGuardrailOptions for managing guardrail configurations.
- Developed GuardrailPhraseLoader for loading blocked phrases from JSON files.
- Implemented tests for AdvisoryGuardrailOptions binding and phrase loading.
- Enhanced telemetry for Advisory AI with metrics tracking.
- Added VexObservationProjectionService for querying VEX observations.
- Created extensive tests for VexObservationProjectionService functionality.
- Introduced Ruby language analyzer with tests for simple and complex workspaces.
- Added Ruby application fixtures for testing purposes.
@@ -672,7 +672,7 @@ See `docs/dev/32_AUTH_CLIENT_GUIDE.md` for recommended profiles (online vs. air-
| `stellaops-cli scanner download` | Fetch and install scanner container | `--channel <stable\|beta\|nightly>` (default `stable`)<br>`--output <path>`<br>`--overwrite`<br>`--no-install` | Saves artefact under `ScannerCacheDirectory`, verifies digest/signature, and executes `docker load` unless `--no-install` is supplied. |
| `stellaops-cli scan run` | Execute scanner container against a directory (auto-upload) | `--target <directory>` (required)<br>`--runner <docker\|dotnet\|self>` (default from config)<br>`--entry <image-or-entrypoint>`<br>`[scanner-args...]` | Runs the scanner, writes results into `ResultsDirectory`, emits a structured `scan-run-*.json` metadata file, and automatically uploads the artefact when the exit code is `0`. |
| `stellaops-cli scan upload` | Re-upload existing scan artefact | `--file <path>` | Useful for retries when automatic upload fails or when operating offline. |
| `stellaops-cli ruby inspect` | Offline Ruby workspace inspection (Gemfile / lock + runtime signals) | `--root <directory>` (default current directory)<br>`--format <table\|json>` (default `table`) | Runs the bundled `RubyLanguageAnalyzer`, renders Package/Version/Group/Source/Lockfile/Runtime columns, or emits JSON `{ packages: [...] }`. Exit codes: `0` success, `64` invalid format, `70` unexpected failure, `71` missing directory. |
| `stellaops-cli ruby inspect` | Offline Ruby workspace inspection (Gemfile / lock + runtime signals) | `--root <directory>` (default current directory)<br>`--format <table\|json>` (default `table`) | Runs the bundled `RubyLanguageAnalyzer`, renders Observation summary (bundler/runtime/capabilities) plus Package/Version/Group/Source/Lockfile/Runtime columns, or emits JSON `{ packages: [...], observation: {...} }`. Exit codes: `0` success, `64` invalid format, `70` unexpected failure, `71` missing directory. |
| `stellaops-cli ruby resolve` | Fetch Ruby package inventory for a completed scan | `--image <registry-ref>` *or* `--scan-id <id>` (one required)<br>`--format <table\|json>` (default `table`) | Calls `GetRubyPackagesAsync` to download `ruby_packages.json`, groups entries by bundle/platform, and shows runtime entrypoints/usage. Table output mirrors `inspect`; JSON returns `{ scanId, groups: [...] }`. Exit codes: `0` success, `64` invalid args, `70` backend failure. |
| `stellaops-cli db fetch` | Trigger connector jobs | `--source <id>` (e.g. `redhat`, `osv`)<br>`--stage <fetch\|parse\|map>` (default `fetch`)<br>`--mode <resume\|init\|cursor>` | Translates to `POST /jobs/source:{source}:{stage}` with `trigger=cli` |
| `stellaops-cli db merge` | Run canonical merge reconcile | — | Calls `POST /jobs/merge:reconcile`; exit code `0` on acceptance, `1` on failures/conflicts |
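To illustrate the `scan run` contract from the table above, a small hypothetical wrapper can assemble the invocation; the flag names come from the table, while the helper itself is illustrative only:

```python
# Hypothetical helper assembling a `stellaops-cli scan run` invocation.
# Flag names mirror the command table above; defaults here are assumptions.
def scan_run_argv(target, runner="docker", entry=None, scanner_args=()):
    argv = ["stellaops-cli", "scan", "run", "--target", target, "--runner", runner]
    if entry:
        argv += ["--entry", entry]
    return argv + list(scanner_args)

print(scan_run_argv("/srv/app", entry="registry.local/scanner:stable"))
```

Results land in `ResultsDirectory` and upload automatically on exit code `0`, so a wrapper like this mainly needs to propagate the exit code.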
@@ -684,14 +684,14 @@ See `docs/dev/32_AUTH_CLIENT_GUIDE.md` for recommended profiles (online vs. air-

### Ruby dependency verbs (`stellaops-cli ruby …`)

`ruby inspect` runs the same deterministic `RubyLanguageAnalyzer` bundled with Scanner.Worker against the local working tree—no backend calls—so operators can sanity-check Gemfile / Gemfile.lock pairs before shipping. `ruby resolve` downloads the `ruby_packages.json` artifact that Scanner creates for each scan (via `GetRubyPackagesAsync`) and reshapes it for operators who need to reason about groups/platforms/runtime usage after the fact.

`ruby inspect` runs the same deterministic `RubyLanguageAnalyzer` bundled with Scanner.Worker against the local working tree—no backend calls—so operators can sanity-check Gemfile / Gemfile.lock pairs before shipping. The command now renders an observation banner (bundler version, package/runtime counts, capability flags, scheduler names) before the package table so air-gapped users can prove what evidence was collected. `ruby resolve` downloads the `ruby_packages.json` artifact that Scanner creates for each scan (via `GetRubyPackagesAsync`) and reshapes it for operators who need to reason about groups/platforms/runtime usage after the fact.

**`ruby inspect` flags**

| Flag | Default | Description |
| ---- | ------- | ----------- |
| `--root <dir>` | current working directory | Directory containing `Gemfile`, `Gemfile.lock`, and runtime sources. Missing paths set exit code **71**. |
| `--format <table\|json>` | `table` | `table` renders Package/Version/Groups/Platform/Source/Lockfile/Runtime columns; `json` emits `{ "packages": [...] }` with the analyzer metadata. |
| `--format <table\|json>` | `table` | `table` renders Observation summary + Package/Version/Groups/Platform/Source/Lockfile/Runtime columns; `json` emits `{ "packages": [...], "observation": {...} }` with the analyzer metadata. |
| `--verbose` / `-v` | `false` | Surfaces analyzer trace logging while keeping deterministic output. |

Successful runs exit `0`; invalid formats raise **64**, unexpected failures return **70**. Table output marks runtime usage with `[green]Entrypoint[/]` and includes every runtime entrypoint path when available. JSON mode mirrors analyzer metadata:
@@ -711,7 +711,17 @@ Successful runs exit `0`; invalid formats raise **64**, unexpected failures retu
        "runtimeReasons": ["require-static"],
        "usedByEntrypoint": true
      }
    ]
  ],
  "observation": {
    "bundlerVersion": "2.5.4",
    "packageCount": 2,
    "runtimeEdgeCount": 1,
    "usesExec": true,
    "usesNetwork": true,
    "usesSerialization": true,
    "schedulerCount": 1,
    "schedulers": ["sidekiq"]
  }
}
```
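For operators scripting against `--format json`, a minimal sketch can extract the capability flags shown above; field names follow the sample payload, while the gating helper itself is hypothetical:

```python
import json

# Hypothetical gate: parse `stellaops-cli ruby inspect --format json` output
# and surface the observation capability flags for CI policy checks.
def capability_flags(report_text):
    observation = json.loads(report_text).get("observation", {})
    return {key: observation.get(key, False)
            for key in ("usesExec", "usesNetwork", "usesSerialization")}

sample = '{"packages": [], "observation": {"usesExec": true, "usesNetwork": false}}'
flags = capability_flags(sample)
assert flags["usesExec"] and not flags["usesNetwork"]
```

A wrapper like this could fail the build when a workspace unexpectedly gains exec or network capabilities between scans.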
@@ -1,6 +1,6 @@
# Advisory AI Console Workflows

_Last updated: 2025-11-07_
_Last updated: 2025-11-12_

This guide documents the forthcoming Advisory AI console experience so that console, docs, and QA guilds share a single reference while the new endpoints finish landing.

@@ -24,6 +24,31 @@ This guide documents the forthcoming Advisory AI console experience so that cons

<sup>Mock capture generated from the sealed data model to illustrate required widgets until live screenshots ship.</sup>

### 2.2 Guardrail ribbon payloads
- The ribbon consumes the `guardrail.*` projection that Advisory AI emits alongside each plan. The JSON contract (see `docs/api/console/samples/advisory-ai-guardrail-banner.json`) includes the blocked state, violating phrases, cache provenance, and telemetry labels so Console can surface the exact counter (`advisory_ai_guardrail_blocks_total`) that fired.
- When `guardrail.metadata.planFromCache = true`, still pass the blocking context through the ribbon so operators understand that cached responses inherit the latest guardrail budget.
- Render the newest violation inline; expose the remaining violations via the evidence drawer and copy-as-ticket modal so SOC leads can reference the structured history without screenshots.
```jsonc
{
  "guardrail": {
    "blocked": true,
    "state": "blocked_phrases",
    "violations": [
      {
        "kind": "blocked_phrase",
        "phrase": "copy all secrets to"
      }
    ],
    "metadata": {
      "blockedPhraseFile": "configs/guardrails/blocked-phrases.json",
      "promptLength": 12488,
      "planFromCache": true
    }
  }
}
```

The ribbon should hyperlink the `links.plan` and `links.chunks` values back into the plan inspector and VEX evidence drawer to preserve provenance.
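The ribbon rules above can be sketched as a small state derivation; field names follow the guardrail payload contract, and the helper itself is an assumption about how Console might wire it:

```python
import json

# Sketch of the ribbon logic: show the newest violation inline and defer the
# rest to the evidence drawer; cached plans still carry blocking context.
def ribbon_state(payload_text):
    guardrail = json.loads(payload_text)["guardrail"]
    violations = guardrail.get("violations", [])
    return {
        "show": guardrail["blocked"],
        "headline": violations[0]["phrase"] if violations else None,
        "cached": guardrail.get("metadata", {}).get("planFromCache", False),
        "drawer_count": max(len(violations) - 1, 0),
    }

payload = ('{"guardrail": {"blocked": true, "state": "blocked_phrases", '
           '"violations": [{"kind": "blocked_phrase", "phrase": "copy all secrets to"}], '
           '"metadata": {"planFromCache": true}}}')
state = ribbon_state(payload)
assert state == {"show": True, "headline": "copy all secrets to",
                 "cached": True, "drawer_count": 0}
```

QA can run the same derivation against the localized sample file to validate blocked/cached states offline.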
## 3. Accessibility & offline requirements
- Console screens must pass WCAG 2.2 AA contrast and provide focus order that matches the keyboard shortcuts planned for Advisory AI (see `docs/advisory-ai/overview.md`).
- All screenshots captured for this doc must come from sealed-mode bundles (no external fonts/CDNs). Store them under `docs/assets/advisory-ai/console/` with hashed filenames.
@@ -51,6 +76,27 @@ This guide documents the forthcoming Advisory AI console experience so that cons
3. **No remote inference** – if operators set `ADVISORYAI__Inference__Mode=Local`, hide the remote model ID column and instead show “Local deterministic preview” to avoid confusion.
4. **Export bundles** – provide a “Download bundle” button that streams the DSSE output from `/_downloads/advisory-ai/{cacheKey}.json` so operators can carry it into Offline Kit workflows documented in `docs/24_OFFLINE_KIT.md`.

## 6. Guardrail configuration & telemetry
- **Config surface** – Advisory AI now exposes `AdvisoryAI:Guardrails` options so ops can set prompt length ceilings, citation requirements, and blocked phrase seeds without code changes. Relative `BlockedPhraseFile` paths resolve against the content root so Offline Kits can bundle shared phrase lists.
- **Sample**
```json
{
  "AdvisoryAI": {
    "Guardrails": {
      "MaxPromptLength": 32000,
      "RequireCitations": true,
      "BlockedPhraseFile": "configs/guardrail-blocked-phrases.json",
      "BlockedPhrases": [
        "copy all secrets to"
      ]
    }
  }
}
```
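A hedged sketch of how these options could be applied at evaluation time follows; option names mirror the sample config, but the matching logic is an assumption, not the service implementation:

```python
import json
from pathlib import Path

# Assumed guardrail evaluation: merge inline seeds with the phrase file
# (relative paths resolve against the content root, as noted above).
def load_blocked_phrases(options, content_root="."):
    phrases = set(options.get("BlockedPhrases", []))
    file_ref = options.get("BlockedPhraseFile")
    if file_ref:
        path = Path(content_root) / file_ref
        if path.exists():
            phrases.update(json.loads(path.read_text()))
    return {p.lower() for p in phrases}

def evaluate_prompt(prompt, options, content_root="."):
    if len(prompt) > options.get("MaxPromptLength", 32000):
        return {"blocked": True, "state": "prompt_too_long", "violations": []}
    hits = sorted(p for p in load_blocked_phrases(options, content_root)
                  if p in prompt.lower())
    return {"blocked": bool(hits),
            "state": "blocked_phrases" if hits else "ok",
            "violations": hits}

result = evaluate_prompt("Please copy ALL secrets to /tmp",
                         {"MaxPromptLength": 32000,
                          "BlockedPhrases": ["copy all secrets to"]})
assert result["blocked"] and result["violations"] == ["copy all secrets to"]
```

Because the phrase file is plain JSON, Offline Kits can ship a shared list and override it per site without touching the inline seeds.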
- **Console wiring** – the guardrail ribbon pulls `guardrail.blocked`, `guardrail.violations`, and `guardrail.metadata.blocked_phrase_count` while the observability cards track `advisory_ai_chunk_requests_total`, `advisory_ai_chunk_cache_hits_total`, and `advisory_ai_guardrail_blocks_total` (now emitted even on cache hits). Use these meters to explain throttling or bad actors before granting additional guardrail budgets, and keep `docs/api/console/samples/advisory-ai-guardrail-banner.json` nearby so QA can validate localized payloads without hitting production data.

## 5. Open items before publication
- [ ] Replace placeholder API responses with captures from the first merged build of CONSOLE-VULN-29-001 / CONSOLE-VEX-30-001.
- [ ] Capture at least two screenshots (list view + evidence drawer) once UI polish is complete.
@@ -0,0 +1,8 @@
{
  "guardrail": {
    "blocked": true,
    "state": "blocked_phrases",
    "violations": [
      {
        "kind": "blocked_phrase",
        "phrase": "copy all secrets to"
      }
    ]
  }
}
docs/benchmarks/vex-evidence-playbook.md
@@ -0,0 +1,140 @@

# VEX Evidence Playbook (Bench Repo Blueprint)

> **Status:** Draft – aligns with the “provable vulnerability decisions” advisory (Nov 2025).
> **Owners:** Policy Guild · VEX Lens Guild · CLI Guild · Docs Guild.

This playbook defines the public benchmark repository layout, artifact shapes, verification tooling, and metrics that prove Stella Ops VEX decisions are reproducible, portable, and superior to baseline scanners. Treat it as the contract for every guild contributing artifacts to `bench/`.

---

## 1. Repository layout
```
bench/
  README.md                   # repo overview + quickstart
  findings/
    CVE-YYYY-NNNNN/           # one folder per advisory/product tuple
      evidence/
        reachability.json     # static+runtime call graph for the finding
        sbom.cdx.json         # CycloneDX slice containing the involved components
        decision.openvex.json # OpenVEX statement (status + justification)
        decision.dsse.json    # DSSE envelope wrapping the OpenVEX payload
        rekor.txt             # optional Rekor UUID/index/checkpoint
        metadata.json         # producer info (policy rev, analyzer digests, CAS URIs)
  tools/
    verify.sh                 # shell helper: dsse verify + optional rekor verification
    verify.py                 # python verifier (offline) that recomputes digests
    compare.py                # baseline diff against Trivy/Syft/Grype/Snyk/Xray outputs
    replay.sh                 # reruns reachability graphs via `stella replay`
  results/
    summary.csv               # FP reduction, MTTD, reproducibility metrics
    runs/2025-11-10/          # pinned scanner/policy versions + raw outputs
      stella/
        findings.json
        runtime-facts.ndjson
        reachability.manifest.json
      trivy/
        findings.json
      ...
```

### File contracts

- `reachability.json` is the canonical export from `cas://reachability/graphs/...` with symbol IDs, call edges, runtime hits, analyzer fingerprints, and CAS references.
- `decision.openvex.json` follows OpenVEX v1 with Stella Ops-specific `status_notes`, `justification`, `impact_statement`, and `action_statement` text.
- `decision.dsse.json` is the DSSE envelope returned by Signer (see §3). Always include the PEM cert chain (keyless) or KMS key id.
- `rekor.txt` captures `{uuid, logIndex, checkpoint}` from Attestor when the decision is logged to Rekor.
- `metadata.json` binds the DSSE payload back to internal evidence: `{policy_revision, reachability_graph_sha256, runtime_trace_sha256, evidence_cas_uri[], analyzer_versions[], createdBy, createdAt}`.

---
## 2. Evidence production flow

1. **Scanner Worker**
   - Generate `reachability.json` + `sbom.cdx.json` per prioritized CVE.
   - Store artifacts under CAS and surface URIs via `ReachabilityReplayWriter`.
2. **Policy Engine / VEXer**
   - Evaluate reachability states + policy lattice to produce an OpenVEX statement.
   - Persist `decision.openvex.json` and forward it to Signer.
3. **Signer & Attestor**
   - Sign the OpenVEX payload via DSSE (`payloadType: application/vnd.in-toto+json`) and return `decision.dsse.json`.
   - Optionally call Attestor to log the DSSE bundle to Rekor; write `{uuid, logIndex, checkpoint}` to `rekor.txt`.
4. **Bench harness**
   - Collect SBOM slice, reachability proof, OpenVEX, DSSE, Rekor metadata, and companion metrics into `bench/findings/CVE-...`.
   - Record tool versions + CAS digests under `metadata.json`.

All steps must be deterministic: repeated scans with the same inputs produce identical artifacts and digests.
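The determinism requirement can be illustrated with a minimal sketch: serializing with sorted keys and stable separators (the canonical-JSON convention cited in §3) makes the digest independent of key insertion order. The helper name is illustrative:

```python
import hashlib
import json

# Canonical JSON digest: sorted keys + fixed separators yield byte-identical
# output for semantically identical inputs, so re-runs reproduce the digest.
def canonical_digest(obj):
    blob = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

a = {"status": "not_affected", "cve": "CVE-2023-12345"}
b = {"cve": "CVE-2023-12345", "status": "not_affected"}
assert canonical_digest(a) == canonical_digest(b)
```

Any non-canonical serialization step (unordered maps, locale-dependent formatting) would break the reproducibility score defined in §5.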
---

## 3. Signing & transparency requirements

| Artifact | Producer | Format | Notes |
|-------------------------|---------------|----------------------------------------|-------|
| Reachability evidence | Scanner | Canonical JSON (sorted keys) | CAS URI recorded in metadata. |
| SBOM slice | Scanner | CycloneDX 1.6 JSON | Keep only components relevant to the finding. |
| OpenVEX decision | Policy/VEXer | OpenVEX v1 | One statement per `(CVE, product)` tuple. |
| DSSE bundle | Signer | DSSE envelope over OpenVEX payload | Include Fulcio cert or KMS key id. |
| Rekor record (optional) | Attestor | Rekor UUID/index/checkpoint | Store alongside DSSE for offline verification. |

Signer must expose a predicate alias `stella.ops/vexDecision@v1` (see Sprint task `SIGN-VEX-401-018`). Payload = OpenVEX JSON. Rekor logging reuses the existing Attestor `/rekor/entries` pipeline.
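For orientation, the DSSE bundle row above implies an envelope of the following shape; the `payloadType` matches the value cited in §2, while the signature slot is left empty because real signing happens in Signer:

```python
import base64
import json

# Shape sketch of a DSSE envelope over an OpenVEX payload (unsigned).
def dsse_envelope(openvex):
    payload = json.dumps(openvex, sort_keys=True, separators=(",", ":")).encode()
    return {
        "payloadType": "application/vnd.in-toto+json",
        "payload": base64.b64encode(payload).decode(),
        "signatures": [],  # filled by Signer (keyless cert chain or KMS key id)
    }

env = dsse_envelope({"@context": "https://openvex.dev/ns", "statements": []})
decoded = json.loads(base64.b64decode(env["payload"]))
assert decoded["statements"] == []
```

Verifiers decode `payload` and compare its digest against `metadata.json` before checking signatures, which is exactly the offline path §4 describes.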
---

## 4. Verification tooling

The repo ships two verifiers:

1. `tools/verify.sh` (bash) — wraps `cosign verify-attestation`/`in-toto verify`, Rekor inclusion checks (`rekor-cli logproof`), and digest comparison.
2. `tools/verify.py` — pure-Python offline verifier for air-gapped environments:
   - Validates DSSE signature using the embedded Fulcio cert or configured root.
   - Recomputes `sha256` over `reachability.json`, `sbom.cdx.json`, and `decision.openvex.json` to ensure the DSSE payload matches.
   - Optionally replays reachability by invoking `stella replay --manifest ... --finding CVE-...`.

The CLI addition (`stella decision verify`) should shell out to these helpers when `--from bench` is provided.
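The digest-recomputation step of `verify.py` can be sketched as follows; the expected-digest mapping is an assumption based on the `metadata.json` contract in §1:

```python
import hashlib
from pathlib import Path

# Hedged sketch of the offline digest check: recompute sha256 per evidence file
# and report any file whose digest diverges from the recorded value.
def digest_mismatches(evidence_dir, expected):
    mismatches = []
    for name, want in sorted(expected.items()):
        got = hashlib.sha256((Path(evidence_dir) / name).read_bytes()).hexdigest()
        if got != want:
            mismatches.append(name)
    return mismatches
```

An empty return means the evidence on disk still matches what the DSSE payload attested to; any entry in the list should fail verification before signature checks even run.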
---

## 5. Metrics & comparison harness

`tools/compare.py` ingests raw outputs from Stella Ops and baseline scanners (Trivy, Syft, Grype, Snyk, Xray) stored under `results/runs/<date>/<scanner>/findings.json`. For each target:
- **False-positive reduction (FPR)** = `1 - (# of findings confirmed true positives / # of baseline findings)`.
- **Mean time to decision (MTTD)** = average wall-clock time between scan start and DSSE-signed OpenVEX emission.
- **Reproducibility score** = `1` if re-running reachability produces identical digests for all artifacts, else `0`; aggregated per run.

`results/summary.csv` columns:

```
target,cve,baseline_scanner,baseline_hits,stella_hits,fp_reduction,mttd_seconds,reproducible,rekor_uuid
```

Automate collection via `Makefile` or `bench/run.sh` pipeline (task `BENCH-AUTO-401-019`).
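One `summary.csv` row can be computed from the metric definitions above; the findings/ID structures here are simplified assumptions about the `compare.py` inputs:

```python
import csv
import io

# Hedged sketch of a compare.py row: fp_reduction follows the FPR formula
# above; input shapes (flat ID lists) are illustrative assumptions.
def summary_row(target, cve, scanner, baseline_ids, stella_ids,
                true_positive_ids, mttd_seconds, reproducible, rekor_uuid=""):
    confirmed = len(set(baseline_ids) & set(true_positive_ids))
    fp_reduction = 1 - confirmed / len(baseline_ids) if baseline_ids else 0.0
    return [target, cve, scanner, len(baseline_ids), len(stella_ids),
            round(fp_reduction, 3), mttd_seconds, int(reproducible), rekor_uuid]

buf = io.StringIO()
csv.writer(buf).writerow(summary_row(
    "sample/nginx", "CVE-2023-12345", "trivy",
    baseline_ids=["a", "b", "c", "d"], stella_ids=["a"],
    true_positive_ids=["a"], mttd_seconds=42, reproducible=True))
# Four baseline findings, one confirmed true positive -> fp_reduction 0.75.
```

Pinning scanner/policy versions per run directory is what lets these rows stay comparable across dates.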
---

## 6. Publication & README checklist

`bench/README.md` must include:

- High-level workflow diagram (scan → reachability → OpenVEX → DSSE → Rekor → bench).
- Prerequisites (`cosign`, `rekor-cli`, `stella` CLI).
- Quickstart commands:

  ```bash
  ./tools/verify.sh CVE-2023-12345 pkg:purl/example@1.2.3
  ./tools/compare.py --target sample/nginx --baseline trivy --run 2025-11-10
  ```

- How to recreate a finding: `stella replay --manifest results/runs/.../replay.yaml --finding CVE-...`.
- Contribution guide (where to place new findings, how to update metrics, required metadata).
---

## 7. Implementation tasks (see Sprint 401+)

- `POLICY-VEX-401-010` — emit OpenVEX per finding and publish to bench repo.
- `SIGN-VEX-401-018` — add DSSE predicate + Rekor logging for decision payloads.
- `CLI-VEX-401-011` — new `stella decision` verbs (`export`, `verify`, `compare`).
- `BENCH-AUTO-401-019` — automation to populate `bench/findings/**`, run baseline scanners, and update `results/summary.csv`.
- `DOCS-VEX-401-012` — maintain this playbook + README templates, document verification workflow.

Update `docs/implplan/SPRINT_401_reachability_evidence_chain.md` whenever these tasks move state.
@@ -6,9 +6,9 @@ Active items only. Completed/historic work now resides in docs/implplan/archived

| Wave | Guild owners | Shared prerequisites | Status | Notes |
| --- | --- | --- | --- | --- |
| 110.A AdvisoryAI | Advisory AI Guild · Docs Guild · SBOM Service Guild | Sprint 100.A – Attestor (closed 2025-11-09 per `docs/implplan/archived/SPRINT_100_identity_signing.md`) | DOING | Regression/perf suite (AIAI-31-009) and console doc (DOCS-AIAI-31-004) remain DOING; SBOM (SBOM-AIAI-31-001/003), CLI (CLI-VULN-29-001/CLI-VEX-30-001), Policy (POLICY-ENGINE-31-001), and DevOps (DEVOPS-AIAI-31-001) owners owe delivery ETA updates on 2025-11-10 so the CLI/policy/runbook docs can unblock. |
| 110.B Concelier | Concelier Core & WebService Guilds · Observability Guild · AirGap Guilds (Importer/Policy/Time) | Sprint 100.A – Attestor | DOING | Paragraph chunk API shipped 2025-11-07; structured field/caching (CONCELIER-AIAI-31-002) is still TODO, telemetry (CONCELIER-AIAI-31-003) DOING, and air-gap/console/attestation tracks remain gated on Link-Not-Merge + Cartographer schema. |
| 110.C Excititor | Excititor WebService/Core Guilds · Observability Guild · Evidence Locker Guild | Sprint 100.A – Attestor | DOING | Normalized justification projections (EXCITITOR-AIAI-31-001) are DOING; chunk API, telemetry, docs, attestation, and mirror backlog stay queued behind that work plus Link-Not-Merge / Cartographer prerequisites. |
| 110.A AdvisoryAI | Advisory AI Guild · Docs Guild · SBOM Service Guild | Sprint 100.A – Attestor (closed 2025-11-09 per `docs/implplan/archived/SPRINT_100_identity_signing.md`) | DOING | Guardrail regression suite (AIAI-31-009) closed 2025-11-12 with the new `AdvisoryAI:Guardrails` configuration; console doc (DOCS-AIAI-31-004) remains DOING while SBOM/CLI/Policy/DevOps dependencies finish unblocking the screenshots/runbook work. |
| 110.B Concelier | Concelier Core & WebService Guilds · Observability Guild · AirGap Guilds (Importer/Policy/Time) | Sprint 100.A – Attestor | DOING | Paragraph chunk API shipped 2025-11-07; structured field/caching (CONCELIER-AIAI-31-002) is still TODO, telemetry (CONCELIER-AIAI-31-003) closed 2025-11-12 with tenant/result/cache counters for Advisory AI, and air-gap/console/attestation tracks remain gated on Link-Not-Merge + Cartographer schema. |
| 110.C Excititor | Excititor WebService/Core Guilds · Observability Guild · Evidence Locker Guild | Sprint 100.A – Attestor | DOING | Normalized justification projections (EXCITITOR-AIAI-31-001) landed via `/v1/vex/observations/{vulnerabilityId}/{productKey}`; chunk API, telemetry, docs, attestation, and mirror backlog stay queued behind Link-Not-Merge / Cartographer prerequisites. |
| 110.D Mirror | Mirror Creator Guild · Exporter Guild · CLI Guild · AirGap Time Guild | Sprint 100.A – Attestor | TODO | Wave remains TODO—MIRROR-CRT-56-001 has not started, so DSSE/TUF, OCI/time-anchor, CLI, and scheduling integrations cannot proceed. |

## Status Snapshot (2025-11-09)
@@ -18,11 +18,13 @@ Active items only. Completed/historic work now resides in docs/implplan/archived
- 2025-11-09: DOCS-AIAI-31-004 continues DOING—guardrail/offline sections are drafted, but screenshots plus copy blocks wait on CONSOLE-VULN-29-001, CONSOLE-VEX-30-001, and EXCITITOR-CONSOLE-23-001.
- SBOM-AIAI-31-003 and DOCS-AIAI-31-005/006/008/009 remain BLOCKED pending SBOM-AIAI-31-001, CLI-VULN-29-001, CLI-VEX-30-001, POLICY-ENGINE-31-001, and DEVOPS-AIAI-31-001.
- 2025-11-10: AIAI-31-009 performance suite doubled dataset coverage (blocked phrase seed + perf scenarios) and now enforces sub-400 ms guardrail batches so Advisory AI can cite deterministic budgets.
- **Concelier (110.B)** – `/advisories/{advisoryKey}/chunks` shipped on 2025-11-07 with tenant enforcement, chunk tuning knobs, and regression fixtures; structured field/caching work (CONCELIER-AIAI-31-002) is still TODO while telemetry/guardrail instrumentation (CONCELIER-AIAI-31-003) is DOING.
- 2025-11-12: AIAI-31-009 shipped guardrail configuration binding (`AdvisoryAI:Guardrails`) plus the expanded perf suite; only DOCS-AIAI-31-004 remains DOING while SBOM/CLI/Policy dependencies unblock.
- 2025-11-12: DOCS-AIAI-31-004 now documents the guardrail ribbon payload contract (sample + telemetry hooks) so Console/QA can exercise blocked/cached states without waiting for staging captures.
- **Concelier (110.B)** – `/advisories/{advisoryKey}/chunks` shipped on 2025-11-07 with tenant enforcement, chunk tuning knobs, and regression fixtures; structured field/caching work (CONCELIER-AIAI-31-002) is still TODO while telemetry/guardrail instrumentation (CONCELIER-AIAI-31-003) closed 2025-11-12 with OTEL counters for tenant/result/cache tags.
- Air-gap provenance/staleness bundles (`CONCELIER-AIRGAP-56-001` → `CONCELIER-AIRGAP-58-001`), console views/deltas (`CONCELIER-CONSOLE-23-001..003`), and attestation metadata (`CONCELIER-ATTEST-73-001/002`) remain TODO pending Link-Not-Merge plus Cartographer schema delivery.
- Connector provenance refreshes `FEEDCONN-ICSCISA-02-012` and `FEEDCONN-KISA-02-008` are still overdue, leaving evidence parity gaps for those feeds.
- 2025-11-10: CONCELIER-AIAI-31-003 shipped cache/request histograms + guardrail counters/log scopes; docs now map the new metrics for Advisory AI dashboards.
- **Excititor (110.C)** – Normalized VEX justification projections (EXCITITOR-AIAI-31-001) are DOING as of 2025-11-09; the downstream chunk API (EXCITITOR-AIAI-31-002), telemetry/guardrails (EXCITITOR-AIAI-31-003), docs/OpenAPI alignment (EXCITITOR-AIAI-31-004), and attestation payload work (`EXCITITOR-ATTEST-*`) stay TODO until that projection work plus Link-Not-Merge schema land.
- **Excititor (110.C)** – Normalized VEX justification projections (EXCITITOR-AIAI-31-001) landed via `/v1/vex/observations/{vulnerabilityId}/{productKey}`; the downstream chunk API (EXCITITOR-AIAI-31-002), telemetry/guardrails (EXCITITOR-AIAI-31-003), docs/OpenAPI alignment (EXCITITOR-AIAI-31-004), and attestation payload work (`EXCITITOR-ATTEST-*`) stay TODO until the Link-Not-Merge schema lands.
- Mirror/air-gap backlog (`EXCITITOR-AIRGAP-56-001` .. `EXCITITOR-AIRGAP-58-001`) and connector provenance parity (`EXCITITOR-CONN-TRUST-01-001`) remain unscheduled, so Advisory AI cannot yet hydrate sealed VEX evidence or cite connector signatures.
- **Mirror (110.D)** – MIRROR-CRT-56-001 (deterministic bundle assembler) has not kicked off, so DSSE/TUF (MIRROR-CRT-56-002), OCI exports (MIRROR-CRT-57-001), time anchors (MIRROR-CRT-57-002), CLI verbs (MIRROR-CRT-58-001), and Export Center automation (MIRROR-CRT-58-002) are all blocked.
@@ -13,7 +13,7 @@ DOCS-AIAI-31-008 | BLOCKED (2025-11-03) | Publish `/docs/sbom/remediation-heuris
DOCS-AIAI-31-009 | BLOCKED (2025-11-03) | Create `/docs/runbooks/assistant-ops.md` for warmup, cache priming, model outages, scaling. Dependencies: DEVOPS-AIAI-31-001. | Docs Guild, DevOps Guild (docs)
SBOM-AIAI-31-003 | TODO (2025-11-03) | Publish the Advisory AI hand-off kit for `/v1/sbom/context`, share base URL/API key + tenant header contract, and run a joint end-to-end retrieval smoke test with Advisory AI. Dependencies: SBOM-AIAI-31-001. | SBOM Service Guild, Advisory AI Guild (src/SbomService/StellaOps.SbomService)
AIAI-31-008 | TODO | Package inference on-prem container, remote inference toggle, Helm/Compose manifests, scaling guidance, offline kit instructions. Dependencies: AIAI-31-006..007. | Advisory AI Guild, DevOps Guild (src/AdvisoryAI/StellaOps.AdvisoryAI)
AIAI-31-009 | DOING (2025-11-09) | Develop unit/golden/property/perf tests, injection harness, and regression suite; ensure determinism with seeded caches. Dependencies: AIAI-31-001..006. | Advisory AI Guild, QA Guild (src/AdvisoryAI/StellaOps.AdvisoryAI) |
AIAI-31-009 | DONE (2025-11-12) | Develop unit/golden/property/perf tests, injection harness, and regression suite; ensure determinism with seeded caches. Dependencies: AIAI-31-001..006. | Advisory AI Guild, QA Guild (src/AdvisoryAI/StellaOps.AdvisoryAI) |
@@ -39,6 +39,7 @@ DOCS-AIAI-31-005 | BLOCKED (2025-11-03) | Publish `/docs/advisory-ai/cli.md` cov
> 2025-11-06: AIAI-31-007 completed – Advisory AI WebService/Worker emit latency histograms, guardrail/validation counters, citation coverage ratios, and OTEL spans; Grafana dashboard + burn-rate alerts refreshed.

> 2025-11-09: Guardrail harness converted to JSON fixtures + legacy payloads, property-style plan cache load tests added, and file-system cache/output suites cover seeded/offline scenarios.
> 2025-11-12: Guardrail/perf suite now enforces sub-400 ms budgets and binds `AdvisoryAI:Guardrails` configuration (prompt length, citation toggle, blocked phrase files) so Console surfaces can reflect ops-tuned budgets.
> 2025-11-02: AIAI-31-004 kicked off orchestration pipeline design – establishing deterministic task sequence (summary/conflict/remediation) and cache key strategy.
> 2025-11-02: AIAI-31-004 orchestration prerequisites documented in docs/modules/advisory-ai/orchestration-pipeline.md (tasks 004A/004B/004C).
> 2025-11-02: AIAI-31-003 moved to DOING – beginning deterministic tooling (comparators, dependency analysis) while awaiting SBOM context client. Semantic & EVR comparators shipped; toolset interface published for orchestrator adoption.
@@ -8,7 +8,7 @@ Summary: Ingestion & Evidence focus on Concelier (phase I).
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
CONCELIER-AIAI-31-002 `Structured fields` | TODO | Ship chunked advisory observation responses (workaround/fix notes, CVSS, affected range) where every field is traced back to the upstream document via provenance anchors; enforce deterministic sorting/pagination and add read-through caching so Advisory AI can hydrate RAG contexts without recomputing severity. | Concelier WebService Guild (src/Concelier/StellaOps.Concelier.WebService)
CONCELIER-AIAI-31-003 `Advisory AI telemetry` | DOING | Instrument the new chunk endpoints with request/tenant metrics, cache-hit ratios, and guardrail violation counters so we can prove Concelier is serving raw evidence safely (no merges, no derived fields). | Concelier WebService Guild, Observability Guild (src/Concelier/StellaOps.Concelier.WebService)
CONCELIER-AIAI-31-003 `Advisory AI telemetry` | DONE (2025-11-12) | Instrument the new chunk endpoints with request/tenant metrics, cache-hit ratios, and guardrail violation counters so we can prove Concelier is serving raw evidence safely (no merges, no derived fields). | Concelier WebService Guild, Observability Guild (src/Concelier/StellaOps.Concelier.WebService)
CONCELIER-AIRGAP-56-001 `Mirror ingestion adapters` | TODO | Add mirror ingestion paths that read advisory bundles, persist bundle IDs/merkle roots unchanged, and assert append-only semantics so sealed deployments ingest the same raw facts as online clusters. | Concelier Core Guild (src/Concelier/__Libraries/StellaOps.Concelier.Core)
CONCELIER-AIRGAP-56-002 `Bundle catalog linking` | TODO | Record `bundle_id`, `merkle_root`, and time-anchor metadata on every observation/linkset so provenance survives exports; document how Offline Kit verifiers replay the references. Depends on CONCELIER-AIRGAP-56-001. | Concelier Core Guild, AirGap Importer Guild (src/Concelier/__Libraries/StellaOps.Concelier.Core)
CONCELIER-AIRGAP-57-001 `Sealed-mode source restrictions` | TODO | Enforce sealed-mode policies that disable non-mirror connectors, emit actionable remediation errors, and log attempts without touching advisory content. Depends on CONCELIER-AIRGAP-56-001. | Concelier Core Guild, AirGap Policy Guild (src/Concelier/__Libraries/StellaOps.Concelier.Core)

@@ -20,3 +20,5 @@ CONCELIER-CONSOLE-23-001 `Advisory aggregation views` | TODO | Provide `/console

CONCELIER-CONSOLE-23-002 `Dashboard deltas API` | TODO | Calculate deterministic advisory deltas (new, modified, conflicting) for Console dashboards, referencing linkset IDs and timestamps rather than computed verdicts. Depends on CONCELIER-CONSOLE-23-001. | Concelier WebService Guild (src/Concelier/StellaOps.Concelier.WebService)
CONCELIER-CONSOLE-23-003 `Search fan-out helpers` | TODO | Implement CVE/GHSA/PURL lookup helpers that return observation/linkset excerpts plus provenance pointers so global search can preview raw evidence safely; include caching + tenant guards. | Concelier WebService Guild (src/Concelier/StellaOps.Concelier.WebService)
CONCELIER-CORE-AOC-19-013 `Authority tenant scope smoke coverage` | TODO | Expand smoke/e2e suites so Authority tokens + tenant headers are required for every ingest/read path, proving that aggregation stays tenant-scoped and merge-free. | Concelier Core Guild (src/Concelier/__Libraries/StellaOps.Concelier.Core)

> 2025-11-12: CONCELIER-AIAI-31-003 shipped OTEL counters (`advisory_ai_chunk_requests_total`, `advisory_ai_chunk_cache_hits_total`, `advisory_ai_guardrail_blocks_total`) with tenant/result/cache tags so Advisory AI dashboards can see guardrail hits even when Concelier serves cached chunk responses.
@@ -8,7 +8,7 @@ Summary: Ingestion & Evidence focus on Excititor (phase I).

> **Prep:** Read `docs/modules/excititor/architecture.md` and the relevant Excititor `AGENTS.md` files (per component directory) before working any tasks below; this preserves the guidance that previously lived in the component boards.

Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
EXCITITOR-AIAI-31-001 `Justification enrichment` | DOING (2025-11-09) | Expose normalized VEX justifications, product scope trees, and paragraph/JSON-pointer anchors via `VexObservation` projections so Advisory AI can cite raw evidence without invoking any consensus logic. | Excititor WebService Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-AIAI-31-001 `Justification enrichment` | DONE (2025-11-12) | Expose normalized VEX justifications, product scope trees, and paragraph/JSON-pointer anchors via `VexObservation` projections so Advisory AI can cite raw evidence without invoking any consensus logic. | Excititor WebService Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-AIAI-31-002 `VEX chunk API` | TODO | Ship `/vex/evidence/chunks` with tenant/policy filters that streams raw statements, signature metadata, and scope scores for Retrieval-Augmented Generation clients; response must stay aggregation-only and reference observation/linkset IDs. Depends on EXCITITOR-AIAI-31-001. | Excititor WebService Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-AIAI-31-003 `Telemetry & guardrails` | TODO | Instrument the new evidence APIs with request counters, chunk sizes, signature verification failure meters, and AOC guard violations so Lens/Advisory AI teams can detect misuse quickly. Depends on EXCITITOR-AIAI-31-002. | Excititor WebService Guild, Observability Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-AIAI-31-004 `Schema & docs alignment` | TODO | Update OpenAPI/SDK/docs to codify the Advisory-AI evidence contract (fields, determinism guarantees, pagination) and describe how consumers map observation IDs back to raw storage. | Excititor WebService Guild, Docs Guild (src/Excititor/StellaOps.Excititor.WebService)

@@ -19,3 +19,5 @@ EXCITITOR-ATTEST-01-003 `Verification suite & observability` | TODO (2025-11-06)

EXCITITOR-ATTEST-73-001 `VEX attestation payloads` | TODO | Emit attestation payloads that capture supplier identity, justification summary, and scope metadata so downstream Lens/Policy jobs can chain trust without Excititor interpreting the evidence. Depends on EXCITITOR-ATTEST-01-003. | Excititor Core Guild, Attestation Payloads Guild (src/Excititor/__Libraries/StellaOps.Excititor.Core)
EXCITITOR-ATTEST-73-002 `Chain provenance` | TODO | Provide APIs that link attestation IDs back to observation/linkset/product tuples, enabling Advisory AI to cite provenance without any derived verdict. Depends on EXCITITOR-ATTEST-73-001. | Excititor Core Guild (src/Excititor/__Libraries/StellaOps.Excititor.Core)
EXCITITOR-CONN-TRUST-01-001 `Connector provenance parity` | TODO | Update MSRC, Oracle, Ubuntu, and Stella mirror connectors to emit signer fingerprints, issuer tiers, and bundle references while remaining aggregation-only; document how Lens consumers should interpret these hints. | Excititor Connectors Guild (src/Excititor/__Libraries/StellaOps.Excititor.Connectors.*)

> 2025-11-12: EXCITITOR-AIAI-31-001 delivered `/v1/vex/observations/{vulnerabilityId}/{productKey}` backed by the new `IVexObservationProjectionService`, returning normalized statements (scope tree, anchors, document metadata) so Advisory AI and Console can cite raw VEX evidence without touching consensus logic.
@@ -29,3 +29,4 @@

- `SCANNER-CLI-0001`: Added CLI unit tests (`CommandFactoryTests`, Ruby inspect JSON assertions) to guard the new verbs and runtime metadata output; `dotnet test src/Cli/__Tests/StellaOps.Cli.Tests/StellaOps.Cli.Tests.csproj --filter "CommandFactoryTests|Ruby"` now covers the CLI surface.
- `SCANNER-ENG-0016`: 2025-11-10 — Completed Ruby lock collector and vendor ingestion work: honour `.bundle/config` overrides, fold workspace lockfiles, emit bundler groups, add Ruby analyzer fixtures/goldens (including new git/path offline kit mirror), and `dotnet test ... --filter Ruby` passes.
- `SCANNER-ENG-0009`: Emitted observation payload + `ruby-observation` component summarising packages, runtime edges, and capability flags for Policy/Surface exports; fixtures updated for determinism and Offline Kit now ships the observation JSON.
- `SCANNER-ENG-0009`: 2025-11-12 — Added bundler-version metadata to observation payloads, introduced the `complex-app` fixture to cover vendor caches/BUNDLE_PATH overrides, and taught `stellaops-cli ruby inspect` to print the observation banner (bundler/runtime/capabilities) alongside JSON `observation` blocks.
@@ -14,6 +14,7 @@ _Theme:_ Finish the provable reachability pipeline (graph CAS → replay → DSS

| GRAPH-CAS-401-001 | TODO | Finalize richgraph schema (`richgraph-v1`), emit canonical SymbolIDs, compute graph hash (BLAKE3), and store CAS manifests under `cas://reachability/graphs/{sha256}`. Update Scanner Worker adapters + fixtures. | Scanner Worker Guild (`src/Scanner/StellaOps.Scanner.Worker`) |
| GAP-SYM-007 | TODO | Extend reachability evidence schema/DTOs with demangled symbol hints, `symbol.source`, confidence, and optional `code_block_hash`; ensure Scanner SBOM/evidence writers and CLI serializers emit the new fields deterministically. | Scanner Worker Guild & Docs Guild (`src/Scanner/StellaOps.Scanner.Models`, `docs/modules/scanner/architecture.md`, `docs/reachability/function-level-evidence.md`) |
| SCAN-REACH-401-009 | TODO | Ship .NET/JVM symbolizers and call-graph generators (roots, edges, framework adapters), merge results into component-level reachability manifests, and back them with golden fixtures. | Scanner Worker Guild (`src/Scanner/StellaOps.Scanner.Worker`, `src/Scanner/__Libraries`) |
| SCANNER-NATIVE-401-015 | TODO | Stand up `StellaOps.Scanner.Symbols.Native` + `StellaOps.Scanner.CallGraph.Native` (ELF/PE readers, demanglers, probabilistic carving) and publish `FuncNode`/`CallEdge` CAS bundles consumed by reachability graphs. | Scanner Worker Guild (`src/Scanner/__Libraries/StellaOps.Scanner.Symbols.Native`, `src/Scanner/__Libraries/StellaOps.Scanner.CallGraph.Native`) |
| SYMS-SERVER-401-011 | TODO | Deliver `StellaOps.Symbols.Server` (REST+gRPC) with DSSE-verified uploads, Mongo/MinIO storage, tenant isolation, and deterministic debugId indexing; publish health/manifest APIs (spec: `docs/specs/SYMBOL_MANIFEST_v1.md`). | Symbols Guild (`src/Symbols/StellaOps.Symbols.Server`) |
| SYMS-CLIENT-401-012 | TODO | Ship `StellaOps.Symbols.Client` SDK (resolve/upload APIs, platform key derivation for ELF/PDB/Mach-O/JVM/Node, disk LRU cache) and integrate with Scanner.Symbolizer/runtime probes (ref. `docs/specs/SYMBOL_MANIFEST_v1.md`). | Symbols Guild (`src/Symbols/StellaOps.Symbols.Client`, `src/Scanner/StellaOps.Scanner.Symbolizer`) |
| SYMS-INGEST-401-013 | TODO | Build `symbols ingest` CLI to emit DSSE-signed `SymbolManifest v1`, upload blobs, and register Rekor entries; document GitLab/Gitea pipeline usage. | Symbols Guild, DevOps Guild (`src/Symbols/StellaOps.Symbols.Ingestor.Cli`, `docs/specs/SYMBOL_MANIFEST_v1.md`) |
@@ -23,13 +24,20 @@ _Theme:_ Finish the provable reachability pipeline (graph CAS → replay → DSS

| REPLAY-401-004 | TODO | Bump replay manifest to v2 (feeds, analyzers, policies), have `ReachabilityReplayWriter` enforce CAS registration + hash sorting, and add deterministic tests to `tests/reachability/StellaOps.Reachability.FixtureTests`. | BE-Base Platform Guild (`src/__Libraries/StellaOps.Replay.Core`) |
| AUTH-REACH-401-005 | TODO | Introduce DSSE predicate types for SBOM/Graph/VEX/Replay, plumb signing through Authority + Signer, and mirror statements to Rekor (including PQ variants where required). | Authority & Signer Guilds (`src/Authority/StellaOps.Authority`, `src/Signer/StellaOps.Signer`) |
| POLICY-VEX-401-006 | TODO | Policy Engine consumes reachability facts, applies the deterministic score/label buckets (≥0.80 reachable, 0.30–0.79 conditional, <0.30 unreachable), emits OpenVEX with call-path proofs, and updates SPL schema with `reachability.state/confidence` predicates and suppression gates. | Policy Guild (`src/Policy/StellaOps.Policy.Engine`, `src/Policy/__Libraries/StellaOps.Policy`) |
| POLICY-VEX-401-010 | TODO | Implement `VexDecisionEmitter` to serialize per-finding OpenVEX, attach evidence hashes, request DSSE signatures, capture Rekor metadata, and publish artifacts following the bench playbook. | Policy Guild (`src/Policy/StellaOps.Policy.Engine/Vex`, `docs/modules/policy/architecture.md`, `docs/benchmarks/vex-evidence-playbook.md`) |
| UI-CLI-401-007 | TODO | Implement CLI `stella graph explain` + UI explain drawer showing signed call-path, predicates, runtime hits, and DSSE pointers; include counterfactual controls. | UI & CLI Guilds (`src/Cli/StellaOps.Cli`, `src/UI/StellaOps.UI`) |
| QA-DOCS-401-008 | TODO | Wire `reachbench-2025-expanded` fixtures into CI, document CAS layouts + replay steps in `docs/reachability/DELIVERY_GUIDE.md`, and publish operator runbook for runtime ingestion. | QA & Docs Guilds (`docs`, `tests/README.md`) |
| GAP-SIG-003 | TODO | Finish `/signals/runtime-facts` ingestion, add CAS-backed runtime storage, extend scoring to lattice states (`Unknown/NotPresent/Unreachable/Conditional/Reachable/Observed`), and emit `signals.fact.updated` events. Document retention/RBAC. | Signals Guild (`src/Signals/StellaOps.Signals`, `docs/reachability/function-level-evidence.md`) |
| SIG-STORE-401-016 | TODO | Introduce shared reachability store collections (`func_nodes`, `call_edges`, `cve_func_hits`), indexes, and repository APIs so Scanner/Signals/Policy can reuse canonical function data. | Signals Guild · BE-Base Platform Guild (`src/Signals/StellaOps.Signals`, `src/__Libraries/StellaOps.Replay.Core`) |
| GAP-REP-004 | TODO | Enforce BLAKE3 hashing + CAS registration for graphs/traces before manifest writes, upgrade replay manifest v2 with analyzer versions/policy thresholds, and add deterministic tests. | BE-Base Platform Guild (`src/__Libraries/StellaOps.Replay.Core`, `docs/replay/DETERMINISTIC_REPLAY.md`) |
| GAP-POL-005 | TODO | Ingest reachability facts into Policy Engine, expose `reachability.state/confidence` in SPL/API, enforce auto-suppress (<0.30) rules, and generate OpenVEX evidence blocks referencing graph hashes + runtime facts with policy thresholds. | Policy Guild (`src/Policy/StellaOps.Policy.Engine`, `docs/modules/policy/architecture.md`, `docs/reachability/function-level-evidence.md`) |
| GAP-VEX-006 | TODO | Wire Policy/Excititor/UI/CLI surfaces so VEX emission and explain drawers show call paths, graph hashes, and runtime hits; add CLI `--evidence=graph`/`--threshold` plus Notify template updates. | Policy, Excititor, UI, CLI & Notify Guilds (`docs/modules/excititor/architecture.md`, `src/Cli/StellaOps.Cli`, `src/UI/StellaOps.UI`, `docs/09_API_CLI_REFERENCE.md`) |
| GAP-DOC-008 | TODO | Publish the cross-module function-level evidence guide, update API/CLI references with the new `code_id` fields, and add OpenVEX/replay samples under `samples/reachability/**`. | Docs Guild (`docs/reachability/function-level-evidence.md`, `docs/09_API_CLI_REFERENCE.md`, `docs/api/policy.md`) |
| CLI-VEX-401-011 | TODO | Add `stella decision export|verify|compare` verbs, integrate with Policy/Signer APIs, and ship local verifier wrappers for bench artifacts. | CLI Guild (`src/Cli/StellaOps.Cli`, `docs/modules/cli/architecture.md`, `docs/benchmarks/vex-evidence-playbook.md`) |
| SIGN-VEX-401-018 | TODO | Extend Signer predicate catalog with `stella.ops/vexDecision@v1`, enforce payload policy, and plumb DSSE/Rekor integration for policy decisions. | Signing Guild (`src/Signer/StellaOps.Signer`, `docs/modules/signer/architecture.md`) |
| BENCH-AUTO-401-019 | TODO | Create automation to populate `bench/findings/**`, run baseline scanners (Trivy/Syft/Grype/Snyk/Xray), compute FP/MTTD/repro metrics, and update `results/summary.csv`. | Benchmarks Guild (`docs/benchmarks/vex-evidence-playbook.md`, `scripts/bench/**`) |
| DOCS-VEX-401-012 | TODO | Maintain the VEX Evidence Playbook, publish repo templates/README, and document verification workflows for operators. | Docs Guild (`docs/benchmarks/vex-evidence-playbook.md`, `bench/README.md`) |
| SYMS-BUNDLE-401-014 | TODO | Produce deterministic symbol bundles for air-gapped installs (`symbols bundle create|verify|load`), including DSSE manifests and Rekor checkpoints, and document offline workflows (`docs/specs/SYMBOL_MANIFEST_v1.md`). | Symbols Guild, Ops Guild (`src/Symbols/StellaOps.Symbols.Bundle`, `ops`) |
| DOCS-RUNBOOK-401-017 | TODO | Publish the reachability runtime ingestion runbook, link it from delivery guides, and keep Ops/Signals troubleshooting steps current. | Docs Guild · Ops Guild (`docs/runbooks/reachability-runtime.md`, `docs/reachability/DELIVERY_GUIDE.md`) |
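The deterministic score/label buckets referenced by POLICY-VEX-401-006 and the GAP-POL-005 auto-suppress rule can be sketched as a pure function. This is a hedged illustration only — the real logic lives in the .NET Policy Engine, and the function names here are assumptions:

```python
def reachability_label(confidence: float) -> str:
    """Map a reachability confidence score to the POLICY-VEX-401-006 buckets:
    >= 0.80 reachable, 0.30-0.79 conditional, < 0.30 unreachable."""
    if confidence >= 0.80:
        return "reachable"
    if confidence >= 0.30:
        return "conditional"
    return "unreachable"


def auto_suppress(confidence: float) -> bool:
    """GAP-POL-005: findings scoring below 0.30 are auto-suppressed."""
    return reachability_label(confidence) == "unreachable"
```

Keeping the thresholds in one pure function is what makes the buckets deterministic: the same confidence always yields the same label, so replay tooling can recompute identical verdicts.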
> Use `docs/reachability/DELIVERY_GUIDE.md` for architecture context, dependencies, and acceptance tests.
@@ -16,3 +16,5 @@ Execute the tasks below strictly in order; each artifact unblocks the next analy

| 6 | `SCANNER-ANALYZERS-DENO-26-006` | DONE | Implement the OCI/container adapter that stitches per-layer Deno caches, vendor trees, and compiled binaries back into provenance-aware analyzer inputs. | Deno Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Deno) | SCANNER-ANALYZERS-DENO-26-005 |
| 7 | `SCANNER-ANALYZERS-DENO-26-007` | DONE | Produce AOC-compliant observation writers (entrypoints, modules, capability edges, workers, warnings, binaries) with deterministic reason codes. | Deno Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Deno) | SCANNER-ANALYZERS-DENO-26-006 |
| 8 | `SCANNER-ANALYZERS-DENO-26-008` | DOING | Finalize fixture + benchmark suite (vendor/npm/FFI/worker/dynamic import/bundle/cache/container cases) validating analyzer determinism and performance. | Deno Analyzer Guild, QA Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Deno) | SCANNER-ANALYZERS-DENO-26-007 |

_Status 2025-11-12:_ Task `SCANNER-ANALYZERS-DENO-26-008` reopened to diagnose missing vendor-cache edges in the Deno analyzer golden fixture; Codex is now DOING, stabilizing the graph + fixtures before finalizing the sprint.
@@ -138,7 +138,27 @@ Both subcommands honour offline-first expectations (no network access) and norma

* Uses `STELLAOPS_ADVISORYAI_URL` when configured; otherwise it reuses the backend base address and adds `X-StellaOps-Scopes` (`advisory:run` + task scope) per request.
* `--timeout 0` performs a single cache lookup (for CI flows that only want cached artefacts).

### 2.12 Air-gap guard
### 2.12 Decision evidence (new)

- `decision export`
  * Parameters: `--cve`, `--product <purl or digest>`, `--scan-id <optional>`, `--output-dir`.
  * Pulls `decision.openvex.json`, `decision.dsse.json`, `rekor.txt`, and evidence metadata from Policy Engine and writes them into the `bench/findings/<CVE>/` layout defined in [docs/benchmarks/vex-evidence-playbook.md](../benchmarks/vex-evidence-playbook.md).
  * When `--sync` is set, uploads the bundle to Git (bench repo) with deterministic commit messages.

- `decision verify`
  * Offline verifier that wraps `tools/verify.sh`/`verify.py` from the bench repo. Checks DSSE signature, optional Rekor inclusion, and recomputes digests for reachability/SBOM artifacts.
  * Supports `--from bench` (local path) and `--remote` (fetch via API). Exit codes align with `verify.sh` (0 success, 3 signature failure, 18 truncated evidence).

- `decision compare`
  * Executes the benchmark harness against baseline scanners (Trivy/Syft/Grype/Snyk/Xray), capturing false-positive reduction, mean-time-to-decision, and reproducibility metrics into `results/summary.csv`.
  * Flags regressions when Stella Ops produces more false positives or slower MTTD than the configured target.

All verbs require scopes `policy.findings:read`, `signer.verify`, and (for Rekor lookups) `attestor.read`. They honour sealed-mode rules by falling back to offline verification only when Rekor/Signer endpoints are unreachable.

### 2.13 Air-gap guard

- CLI outbound HTTP flows (Authority auth, backend APIs, advisory downloads) route through `StellaOps.AirGap.Policy`. When sealed mode is active the CLI refuses commands that would require external egress and surfaces the shared `AIRGAP_EGRESS_BLOCKED` remediation guidance instead of attempting the request.
@@ -562,6 +562,7 @@ concelier:

* `concelier.linksets.conflicts_total{type}`
* `concelier.export.bytes{kind}`
* `concelier.export.duration_seconds{kind}`
* `advisory_ai_chunk_requests_total{tenant,result,cache}` and `advisory_ai_guardrail_blocks_total{tenant,reason,cache}` instrument the `/advisories/{key}/chunks` surfaces that Advisory AI consumes. Cache hits now emit the same guardrail counters so operators can see blocked segments even when responses are served from cache.
* **Tracing** around fetch/parse/map/observe/linkset/export.
* **Logs**: structured with `source`, `uri`, `docDigest`, `advisoryKey`, `exportId`.
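The tag model for the chunk/guardrail counters — in particular the rule that cache hits still increment the guardrail counter — can be illustrated with a plain dict-based aggregator. This is a sketch only; the real service emits these series via OTEL in .NET, and `serve_chunk` is a hypothetical caller:

```python
from collections import Counter


class TagCounter:
    """Minimal model of a tagged monotonic counter (illustration only)."""

    def __init__(self, name):
        self.name = name
        self.series = Counter()  # keyed by sorted (tag, value) pairs

    def add(self, value=1, **tags):
        self.series[tuple(sorted(tags.items()))] += value


chunk_requests = TagCounter("advisory_ai_chunk_requests_total")
guardrail_blocks = TagCounter("advisory_ai_guardrail_blocks_total")


def serve_chunk(tenant, cache_hit, blocked_reason=None):
    cache = "hit" if cache_hit else "miss"
    chunk_requests.add(tenant=tenant, result="ok", cache=cache)
    # Cache hits emit the same guardrail counters, so blocked segments
    # remain visible even when the response came from cache.
    if blocked_reason is not None:
        guardrail_blocks.add(tenant=tenant, reason=blocked_reason, cache=cache)
```

The `cache` tag on both series is what lets dashboards split guardrail hits by cached vs. freshly computed responses.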
@@ -18,6 +18,7 @@ The service operates strictly downstream of the **Aggregation-Only Contract (AOC

- Compile and evaluate `stella-dsl@1` policy packs into deterministic verdicts.
- Join SBOM inventory, Concelier advisories, and Excititor VEX evidence via canonical linksets and equivalence tables.
- Materialise effective findings (`effective_finding_{policyId}`) with append-only history and produce explain traces.
- Emit per-finding OpenVEX decisions anchored to reachability evidence, forward them to Signer/Attestor for DSSE/Rekor, and publish the resulting artifacts for bench/verification consumers.
- Operate incrementally: react to change streams (advisory/vex/SBOM deltas) with ≤ 5 min SLA.
- Provide simulations with diff summaries for UI/CLI workflows without modifying state.
- Enforce strict determinism guard (no wall-clock, RNG, network beyond allow-listed services) and RBAC + tenancy via Authority scopes.
@@ -112,6 +113,7 @@ Key notes:

| **API** (`Api/`) | Minimal API endpoints, DTO validation, problem responses, idempotency. | Generated clients for CLI/UI. |
| **Observability** (`Telemetry/`) | Metrics (`policy_run_seconds`, `rules_fired_total`), traces, structured logs. | Sampled rule-hit logs with redaction. |
| **Offline Adapter** (`Offline/`) | Bundle export/import (policies, simulations, runs), sealed-mode enforcement. | Uses DSSE signing via Signer service. |
| **VEX Decision Emitter** (`Vex/Emitter/`) | Build OpenVEX statements, attach reachability evidence hashes, request DSSE signing, and persist artifacts for Export Center / bench repo. | New (Sprint 401); integrates with Signer predicate `stella.ops/vexDecision@v1` and Attestor Rekor logging. |

---
@@ -184,6 +186,20 @@ Determinism guard instrumentation wraps the evaluator, rejecting access to forbi

---

### 6.1 · VEX decision attestation pipeline

1. **Verdict capture.** Each evaluation result contains `{findingId, cve, productKey, reachabilityState, evidenceRefs}` plus SBOM and runtime CAS hashes.
2. **OpenVEX serialization.** `VexDecisionEmitter` builds an OpenVEX document with one statement per `(cve, productKey)` and fills:
   - `status`, `justification`, `status_notes`, `impact_statement`, `action_statement`.
   - `products` (purl) and `evidence` array referencing `reachability.json`, `sbom.cdx.json`, `runtimeFacts`.
3. **DSSE signing.** The emitter calls Signer `POST /api/v1/signer/sign/dsse` with predicate `stella.ops/vexDecision@v1`. Signer verifies PoE + scanner integrity and returns a DSSE envelope (`decision.dsse.json`).
4. **Transparency (optional).** When Rekor integration is enabled, Attestor logs the DSSE payload and returns `{uuid, logIndex, checkpoint}` which we persist next to the decision.
5. **Export.** API/CLI endpoints expose `decision.openvex.json`, `decision.dsse.json`, `rekor.txt`, and evidence metadata so Export Center + bench automation can mirror them into `bench/findings/**` as defined in the [VEX Evidence Playbook](../../benchmarks/vex-evidence-playbook.md).

All payloads are immutable and include analyzer fingerprints (`scanner.native@sha256:...`, `policyEngine@sha256:...`) so replay tooling can recompute identical digests. Determinism tests cover both the OpenVEX JSON and the DSSE payload bytes.
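The determinism requirement — identical documents must always hash to identical digests — can be shown with a minimal canonical-serialization sketch. The canonicalization choices here (sorted keys, no insignificant whitespace, UTF-8) are assumptions for illustration; the emitter itself is .NET:

```python
import hashlib
import json


def canonical_digest(document):
    """Serialize a JSON document with sorted keys and compact separators,
    then hash the exact payload bytes. Because DSSE signing preserves the
    payload verbatim, this digest is stable across replay runs."""
    payload = json.dumps(document, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return "sha256:" + hashlib.sha256(payload).hexdigest()
```

Two logically equal documents with different key insertion order hash identically, which is exactly what the determinism tests assert over the OpenVEX JSON and DSSE payload bytes:

```python
a = {"@context": "https://openvex.dev/ns/v0.2.0", "statements": []}
b = {"statements": [], "@context": "https://openvex.dev/ns/v0.2.0"}
assert canonical_digest(a) == canonical_digest(b)
```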
---

## 7 · Security & Tenancy

- **Auth:** All API calls pass through Authority gateway; DPoP tokens enforced for service-to-service (Policy Engine service principal). CLI/UI tokens include scope claims.
@@ -32,6 +32,8 @@ src/

├─ StellaOps.Scanner.Analyzers.OS.[Apk|Dpkg|Rpm]/
├─ StellaOps.Scanner.Analyzers.Lang.[Java|Node|Python|Go|DotNet|Rust]/
├─ StellaOps.Scanner.Analyzers.Native.[ELF|PE|MachO]/   # PE/Mach-O planned (M2)
├─ StellaOps.Scanner.Symbols.Native/                    # NEW – native symbol reader/demangler (Sprint 401)
├─ StellaOps.Scanner.CallGraph.Native/                  # NEW – function/call-edge builder + CAS emitter
├─ StellaOps.Scanner.Emit.CDX/                          # CycloneDX (JSON + Protobuf)
├─ StellaOps.Scanner.Emit.SPDX/                         # SPDX 3.0.1 JSON
├─ StellaOps.Scanner.Diff/                              # image→layer→component three‑way diff

@@ -224,6 +226,9 @@ When `scanner.events.enabled = true`, the WebService serialises the signed repor

* The exported metadata (`stellaops.os.*` properties, license list, source package) feeds policy scoring and export pipelines directly – Policy evaluates quiet rules against package provenance while Exporters forward the enriched fields into downstream JSON/Trivy payloads.
* Sprint 401 introduces `StellaOps.Scanner.Symbols.Native` (DWARF/PDB reader + demangler) and `StellaOps.Scanner.CallGraph.Native` (function boundary detector + call-edge builder). These libraries feed `FuncNode`/`CallEdge` CAS bundles and enrich reachability graphs with `{code_id, confidence, evidence}` so Signals/Policy/UI can cite function-level justifications.

**D) EntryTrace (ENTRYPOINT/CMD → terminal program)**
@@ -127,6 +127,18 @@ Response:

> **Note:** This endpoint is also used internally by Signer before issuing signatures.

### 3.3 Predicate catalog (Sprint 401 update)

Signer now enforces an allowlist of predicate identifiers:

| Predicate | Description | Producer |
|-----------|-------------|----------|
| `stella.ops/sbom@v1` | SBOM/report attestation (existing). | Scanner WebService. |
| `stella.ops/promotion@v1` | Promotion evidence (see `docs/release/promotion-attestations.md`). | DevOps/Export Center. |
| `stella.ops/vexDecision@v1` | OpenVEX decision for a single `(cve, product)` pair, including reachability evidence references. | Policy Engine / VEXer. |

Requests with unknown predicates receive `400 predicate_not_allowed`. Policy Engine must supply the OpenVEX JSON as the `predicate` body; Signer preserves payload bytes verbatim so DSSE digest = OpenVEX digest.

---

### KMS drivers (keyful mode)
@@ -36,6 +36,8 @@ This guide translates the deterministic reachability blueprint into concrete wor

| Stream | Owner Guild(s) | Key deliverables |
|--------|----------------|------------------|
| **Native symbols & callgraphs** | Scanner Worker · Symbols Guild | Ship `Scanner.Symbols.Native` + `Scanner.CallGraph.Native`, integrate Symbol Manifest v1, demangle Itanium/MSVC names, emit `FuncNode`/`CallEdge` CAS bundles (task `SCANNER-NATIVE-401-015`). |
| **Reachability store** | Signals · BE-Base Platform | Provision shared Mongo collections (`func_nodes`, `call_edges`, `cve_func_hits`), indexes, and repositories plus REST hooks for reuse (task `SIG-STORE-401-016`). |
| **Language lifters** | Scanner Worker | CLI/hosted lifters for DotNet, Go, Node/Deno, JVM, Rust, Swift, Binary, Shell with CAS uploads and richgraph output |
| **Signals ingestion & scoring** | Signals | `/callgraphs`, `/runtime-facts` (JSON + NDJSON/gzip), `/graphs/{id}`, `/reachability/recompute` GA; CAS-backed storage, runtime dedupe, BFS+predicates scoring |
| **Runtime capture** | Zastava + Runtime Guild | EntryTrace/eBPF samplers, NDJSON batches (symbol IDs + timestamps + counts) |

@@ -104,7 +106,8 @@ Each sprint is two weeks; refer to `docs/implplan/SPRINT_401_reachability_eviden

- Place developer-facing updates here (`docs/reachability`).
- [Function-level evidence guide](function-level-evidence.md) captures the Nov 2025 advisory scope, task references, and schema expectations; keep it in lockstep with sprint status.
- Operator runbooks (`docs/runbooks/reachability-runtime.md`) – TODO reference to be added when runtime pipeline lands.
- [Reachability runtime runbook](../runbooks/reachability-runtime.md) now documents ingestion, CAS staging, air-gap handling, and troubleshooting—link every runtime feature PR to this guide.
- [VEX Evidence Playbook](../benchmarks/vex-evidence-playbook.md) defines the bench repo layout, artifact shapes, verifier tooling, and metrics; keep it updated when Policy/Signer/CLI features land.
- Update module dossiers (Scanner, Signals, Replay, Authority, Policy, UI) once each guild lands work.

---
@@ -1,6 +1,6 @@

# Function-Level Evidence Readiness (Nov 2025 Advisory)

_Last updated: 2025-11-09. Owner: Business Analysis Guild._
_Last updated: 2025-11-12. Owner: Business Analysis Guild._

This memo captures the outstanding work required to make Stella Ops scanners emit stable, function-level evidence that matches the November 2025 advisory. It does **not** implement any code; instead it enumerates requirements, links them to sprint tasks, and spells out the schema/API updates that the next agent must land.
@@ -62,6 +62,18 @@ Out of scope: implementing disassemblers or symbol servers; those will be handle

* Write CLI/API walkthroughs in `docs/09_API_CLI_REFERENCE.md` and `docs/api/policy.md` showing how to request reachability evidence and verify DSSE chains.
* Produce OpenVEX + replay samples under `samples/reachability/` showing `facts.type = "stella.reachability"` with `graph_hash` and `code_id` arrays.

### 3.6 Native lifter & Reachability Store (SCANNER-NATIVE-401-015 / SIG-STORE-401-016)

* Stand up `Scanner.Symbols.Native` + `Scanner.CallGraph.Native` libraries that:
  * parse ELF (DWARF + `.symtab`/`.dynsym`), PE/COFF (CodeView/PDB), and stripped binaries via probabilistic carving;
  * emit deterministic `FuncNode` + `CallEdge` records with demangled names, language hints, and `{confidence,evidence}` arrays; and
  * attach analyzer + toolchain identifiers consumed by `richgraph-v1`.
* Introduce `Reachability.Store` collections in Mongo:
  * `func_nodes` – keyed by `func:<format>:<sha256>:<va>` with `{binDigest,name,addr,size,lang,confidence,sym}`.
  * `call_edges` – `{from,to,kind,confidence,evidence[]}` linking internal/external nodes.
  * `cve_func_hits` – `{cve,purl,func_id,match_kind,confidence,source}` for advisory alignment.
* Build indexes (`binDigest+name`, `from→to`, `cve+func_id`) and expose repository interfaces so Scanner, Signals, and Policy can reuse the same canonical data without duplicating queries.
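Constructing the `func:<format>:<sha256>:<va>` key might look like the sketch below. This is a hedged illustration: the helper name is hypothetical, and the store fixtures elsewhere in this memo abbreviate the key, so treat the exact digest handling as an assumption:

```python
def func_id(fmt, bin_sha256, virtual_addr):
    """Build a `func:<format>:<sha256>:<va>` key for `func_nodes`.

    Hex is normalised to lowercase and the virtual address is emitted
    without a 0x prefix so `_id` lookups stay stable across writers.
    """
    digest = bin_sha256.split(":", 1)[-1].lower()  # strip optional "sha256:" prefix
    va = format(virtual_addr, "x")
    return f"func:{fmt}:{digest}:{va}"
```

Deterministic key construction is what makes the `from→to` and `cve+func_id` indexes usable: every producer that sees the same binary and address derives the same `_id`.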
---

## 4. Schema & API Touchpoints
@@ -86,6 +98,50 @@ API contracts to amend:

- Signals dedupes events, merges metadata, and persists the aggregated `RuntimeFacts` onto `ReachabilityFactDocument`. These facts now feed reachability scoring (SIGNALS-24-004/005) as part of the runtime bonus lattice.
- Outstanding work: record CAS URIs for runtime traces, emit provenance events, and expose the enriched context to Policy/Replay consumers.

### 4.2 Reachability store layout (SIG-STORE-401-016)

All producers **must** persist native function evidence using the shared collections below (names are advisory; exact names live in Mongo options):

```json
// func_nodes
{
  "_id": "func:ELF:sha256:4012a0",
  "binDigest": "sha256:deadbeef...",
  "name": "ssl3_read_bytes",
  "addr": "0x4012a0",
  "size": 312,
  "lang": "c",
  "confidence": 0.92,
  "symbol": { "mangled": "_Z15ssl3_read_bytes", "demangled": "ssl3_read_bytes", "source": "DWARF" },
  "sym": "present"
}

// call_edges
{
  "from": "func:ELF:sha256:4012a0",
  "to": "func:ELF:sha256:40f0ff",
  "kind": "static",
  "confidence": 0.88,
  "evidence": ["reloc:.plt.got", "bb-target:0x40f0ff"]
}

// cve_func_hits
{
  "cve": "CVE-2023-XXXX",
  "purl": "pkg:generic/openssl@1.1.1u",
  "func_id": "func:ELF:sha256:4012a0",
  "match": "name+version",
  "confidence": 0.77,
  "source": "concelier:openssl-advisory"
}
```

Writers **must**:

1. Upsert `func_nodes` before emitting edges/hits to ensure `_id` lookups remain stable.
2. Serialize evidence arrays in deterministic order (`reloc`, `bb-target`, `import`, …) and normalise hex casing.
3. Attach analyzer fingerprints (`scanner.native@sha256:...`) so Replay/Policy can enforce provenance.
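Requirement 2 above (deterministic evidence order plus normalised hex casing) could be implemented roughly as follows. The kind-priority list is an assumption inferred from the examples, and the helper names are illustrative:

```python
EVIDENCE_KIND_ORDER = ["reloc", "bb-target", "import"]  # assumed priority


def normalise_evidence(evidence):
    """Sort evidence entries by kind priority then lexically, lowercasing
    hex in `bb-target:0x...` entries so serialized output is deterministic."""

    def norm(entry):
        kind, _, value = entry.partition(":")
        if kind == "bb-target" and value.startswith("0x"):
            value = "0x" + value[2:].lower()
        return f"{kind}:{value}"

    def sort_key(entry):
        kind = entry.partition(":")[0]
        rank = (EVIDENCE_KIND_ORDER.index(kind)
                if kind in EVIDENCE_KIND_ORDER
                else len(EVIDENCE_KIND_ORDER))
        return (rank, entry)

    return sorted((norm(e) for e in evidence), key=sort_key)
```

Running every writer's evidence array through one shared normaliser is what lets Replay byte-compare serialized `call_edges` across independent producers.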
---

## 5. Test & Fixture Expectations
docs/runbooks/reachability-runtime.md (new file, 80 lines)
@@ -0,0 +1,80 @@
# Runbook — Reachability Runtime Ingestion

> **Audience:** Signals Guild · Zastava Guild · Scanner Guild · Ops Guild
> **Prereqs:** `docs/reachability/DELIVERY_GUIDE.md`, `docs/reachability/function-level-evidence.md`, `docs/modules/platform/architecture-overview.md` §5

This runbook documents how to stage, ingest, and troubleshoot runtime evidence (`/signals/runtime-facts`) so function-level reachability data remains provable across online and air-gapped environments.

---
## 1 · Runtime capture pipeline

1. **Zastava Observer / runtime probes**
   - Emit NDJSON lines with `symbolId`, `codeId`, `loaderBase`, `hitCount`, `process{Id,Name}`, `socketAddress`, `containerId`, optional `evidenceUri`, and a `metadata` map.
   - Compress large batches with gzip (`.ndjson.gz`), max 10 MiB per chunk, monotonic timestamps.
   - Attach subject context via HTTP query (`scanId`, `imageDigest`, `component`, `version`) when using the streaming endpoint.
2. **CAS staging (optional but recommended)**
   - Upload raw batches to `cas://reachability/runtime/<sha256>` before ingestion.
   - Store CAS URIs alongside probe metadata so Signals can echo them in `ReachabilityFactDocument.Metadata`.
3. **Signals ingestion**
   - POST `/signals/runtime-facts` (JSON) for one-off uploads, or stream NDJSON to `/signals/runtime-facts/ndjson` (set `Content-Encoding: gzip` when applicable).
   - Signals validates the schema, dedupes events by `(symbolId, codeId, loaderBase)`, and updates `runtimeFacts` with cumulative `hitCount`.
4. **Reachability scoring**
   - `ReachabilityScoringService` recomputes lattice states (`Unknown → Observed`), persists references to runtime CAS artifacts, and emits `signals.fact.updated` once `GAP-SIG-003` lands.
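The dedupe-and-accumulate behaviour described in step 3 can be sketched like this. This is illustrative only; the function and field names are assumptions, not the Signals service implementation.

```python
# Sketch of runtime-fact deduplication: events keyed by
# (symbolId, codeId, loaderBase), hitCount accumulates across batches.
# Illustrative only; not the Signals implementation.
def merge_runtime_events(existing, events):
    """existing: dict keyed by (symbolId, codeId, loaderBase) -> fact dict."""
    for event in events:
        key = (event["symbolId"], event["codeId"], event["loaderBase"])
        if key in existing:
            existing[key]["hitCount"] += event["hitCount"]
        else:
            existing[key] = dict(event)
    return existing

facts = {}
merge_runtime_events(facts, [
    {"symbolId": "sym-1", "codeId": "code-1", "loaderBase": "0x1000", "hitCount": 3},
    {"symbolId": "sym-1", "codeId": "code-1", "loaderBase": "0x1000", "hitCount": 2},
])
```

The key property is idempotent accumulation: re-sending the same tuple only bumps the counter, so duplicate probe batches never create duplicate facts.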
---

## 2 · Operator checklist

| Step | Action | Owner | Notes |
|------|--------|-------|-------|
| 1 | Verify probe health (`zastava observer status`) and confirm NDJSON batches include `symbolId` + `codeId`. | Runtime Guild | Reject batches missing `symbolId`; restart probe with debug logging. |
| 2 | Stage batches in CAS (`stella cas put reachability/runtime ...`) and record the returned URI. | Ops Guild | Required for replay-grade evidence. |
| 3 | Call `/signals/runtime-facts/ndjson` with `tenant` and `callgraphId` headers, streaming the gzip payload. | Signals Guild | Use a service identity with `signals.runtime:write`. |
| 4 | Monitor ingestion metrics: `signals_runtime_events_total`, `signals_runtime_ingest_failures_total`. | Observability | Alert if failures exceed 1% over 5 min. |
| 5 | Trigger recompute (`POST /signals/reachability/recompute`) when new runtime batches arrive for an active scan. | Signals Guild | Provide `callgraphId` + subject tuple. |
| 6 | Validate Policy/UI surfaces by requesting `/policy/findings?includeReachability=true` and checking `reachability.evidence`. | Policy + UI Guilds | Ensure evidence references the CAS URIs from Step 2. |
---

## 3 · Air-gapped workflow

1. Export runtime NDJSON batches via Offline Kit: `offline/reachability/runtime/<scan-id>/<timestamp>.ndjson.gz` + manifest.
2. On the secure network, load CAS entries locally (`stella cas load ...`) and invoke `stella signals runtime-facts ingest --from offline/...`.
3. Re-run `stella replay manifest.json --section reachability` to ensure manifests cite the imported runtime digests.
4. Sync ingestion receipts (`signals-runtime-ingest.log`) back to the air-gapped environment for audit.
---

## 4 · Troubleshooting

| Symptom | Cause | Resolution |
|---------|-------|------------|
| `422 Unprocessable Entity: missing symbolId` | Probe emitted incomplete JSON. | Restart probe with `--include-symbols`, confirm symbol server availability, regenerate batch. |
| `403 Forbidden: sealed-mode evidence invalid` | Signals sealed-mode verifier rejected payload (likely missing CAS proof). | Upload batch to CAS first, include `X-Reachability-Cas-Uri` header, or disable sealed-mode in non-prod. |
| Runtime facts missing from Policy/UI | Recompute not triggered or `callgraphId` mismatch. | List facts via `/signals/reachability/facts?subject=...`, confirm `callgraphId`, then POST recompute. |
| CAS hash mismatch during replay | Batch mutated post-ingestion. | Re-stage from original gzip, invalidate old CAS entry, rerun ingestion to regenerate manifest references. |
---

## 5 · Retention & observability

- Default retention: 30 days hot in Signals Mongo, 180 days in CAS (match replay policy). Configure via `signals.runtimeFacts.retentionDays`.
- Metrics to alert on:
  - `signals_runtime_ingest_latency_seconds` (P95 < 2 s).
  - `signals_runtime_cas_miss_total` (should be 0 once CAS is mandatory).
- Logs/traces:
  - Category `Reachability.Runtime` records ingestion batches and CAS URIs.
  - Trace attributes: `callgraphId`, `subjectKey`, `casUri`, `eventCount`.
---

## 6 · References

- `docs/reachability/DELIVERY_GUIDE.md`
- `docs/reachability/function-level-evidence.md`
- `docs/replay/DETERMINISTIC_REPLAY.md`
- `docs/modules/platform/architecture-overview.md` §5 (Replay CAS)
- `docs/runbooks/replay_ops.md`

Update this runbook whenever endpoints, retention knobs, or CAS layouts change.
@@ -0,0 +1,17 @@

using System.Collections.Generic;

namespace StellaOps.AdvisoryAI.Hosting;

public sealed class AdvisoryAiGuardrailOptions
{
    private const int DefaultMaxPromptLength = 16000;

    public int? MaxPromptLength { get; set; } = DefaultMaxPromptLength;

    public bool RequireCitations { get; set; } = true;

    public string? BlockedPhraseFile { get; set; } = null;

    public List<string> BlockedPhrases { get; set; } = new();
}
@@ -18,6 +18,8 @@ public sealed class AdvisoryAiServiceOptions

    public AdvisoryAiInferenceOptions Inference { get; set; } = new();

    public AdvisoryAiGuardrailOptions Guardrails { get; set; } = new();

    internal string ResolveQueueDirectory(string contentRoot)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(contentRoot);
@@ -1,4 +1,5 @@

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using StellaOps.AdvisoryAI.Inference;
@@ -71,6 +72,15 @@ internal static class AdvisoryAiServiceOptionsValidator

            }
        }

        options.Guardrails ??= new AdvisoryAiGuardrailOptions();
        options.Guardrails.BlockedPhrases ??= new List<string>();

        if (options.Guardrails.MaxPromptLength.HasValue && options.Guardrails.MaxPromptLength.Value <= 0)
        {
            error = "AdvisoryAI:Guardrails:MaxPromptLength must be greater than zero when specified.";
            return false;
        }

        error = null;
        return true;
    }
@@ -0,0 +1,44 @@

using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

namespace StellaOps.AdvisoryAI.Hosting;

internal static class GuardrailPhraseLoader
{
    public static IReadOnlyCollection<string> Load(string path)
    {
        if (string.IsNullOrWhiteSpace(path))
        {
            throw new ArgumentException("Guardrail phrase file path must be provided.", nameof(path));
        }

        using var stream = File.OpenRead(path);
        using var document = JsonDocument.Parse(stream);
        var root = document.RootElement;
        return root.ValueKind switch
        {
            JsonValueKind.Array => ExtractValues(root),
            JsonValueKind.Object when root.TryGetProperty("phrases", out var phrases) => ExtractValues(phrases),
            _ => throw new InvalidDataException($"Guardrail phrase file {path} must be a JSON array or object with a phrases array."),
        };
    }

    private static IReadOnlyCollection<string> ExtractValues(JsonElement element)
    {
        var phrases = new List<string>();
        foreach (var item in element.EnumerateArray())
        {
            if (item.ValueKind == JsonValueKind.String)
            {
                var value = item.GetString();
                if (!string.IsNullOrWhiteSpace(value))
                {
                    phrases.Add(value.Trim());
                }
            }
        }

        return phrases;
    }
}
@@ -1,13 +1,18 @@

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http.Headers;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Options;
using StellaOps.AdvisoryAI.Caching;
using StellaOps.AdvisoryAI.DependencyInjection;
using StellaOps.AdvisoryAI.Inference;
using StellaOps.AdvisoryAI.Metrics;
using StellaOps.AdvisoryAI.Guardrails;
using StellaOps.AdvisoryAI.Outputs;
using StellaOps.AdvisoryAI.Providers;
using StellaOps.AdvisoryAI.Queue;
@@ -86,6 +91,12 @@ public static class ServiceCollectionExtensions

        services.AddAdvisoryPipeline();
        services.AddAdvisoryPipelineInfrastructure();

        services.AddOptions<AdvisoryGuardrailOptions>()
            .Configure<IOptions<AdvisoryAiServiceOptions>, IHostEnvironment>((options, aiOptions, environment) =>
            {
                ApplyGuardrailConfiguration(options, aiOptions.Value.Guardrails, environment);
            });

        services.Replace(ServiceDescriptor.Singleton<IAdvisoryTaskQueue, FileSystemAdvisoryTaskQueue>());
        services.Replace(ServiceDescriptor.Singleton<IAdvisoryPlanCache, FileSystemAdvisoryPlanCache>());
        services.Replace(ServiceDescriptor.Singleton<IAdvisoryOutputStore, FileSystemAdvisoryOutputStore>());
@@ -93,4 +104,87 @@ public static class ServiceCollectionExtensions

        return services;
    }

    private static void ApplyGuardrailConfiguration(
        AdvisoryGuardrailOptions target,
        AdvisoryAiGuardrailOptions? source,
        IHostEnvironment? environment)
    {
        if (source is null)
        {
            return;
        }

        if (source.MaxPromptLength.HasValue && source.MaxPromptLength.Value > 0)
        {
            target.MaxPromptLength = source.MaxPromptLength.Value;
        }

        target.RequireCitations = source.RequireCitations;

        var defaults = target.BlockedPhrases.ToList();
        var merged = new SortedSet<string>(defaults, StringComparer.OrdinalIgnoreCase);

        if (source.BlockedPhrases is { Count: > 0 })
        {
            foreach (var phrase in source.BlockedPhrases)
            {
                if (!string.IsNullOrWhiteSpace(phrase))
                {
                    merged.Add(phrase.Trim());
                }
            }
        }

        if (!string.IsNullOrWhiteSpace(source.BlockedPhraseFile))
        {
            var resolvedPath = ResolveGuardrailPath(source.BlockedPhraseFile!, environment);
            foreach (var phrase in GuardrailPhraseLoader.Load(resolvedPath))
            {
                if (!string.IsNullOrWhiteSpace(phrase))
                {
                    merged.Add(phrase.Trim());
                }
            }
        }

        if (merged.Count == 0)
        {
            return;
        }

        target.BlockedPhrases.Clear();
        foreach (var phrase in merged)
        {
            target.BlockedPhrases.Add(phrase);
        }
    }

    private static string ResolveGuardrailPath(string configuredPath, IHostEnvironment? environment)
    {
        var trimmed = configuredPath.Trim();
        if (Path.IsPathRooted(trimmed))
        {
            if (!File.Exists(trimmed))
            {
                throw new FileNotFoundException($"Guardrail phrase file {trimmed} was not found.", trimmed);
            }

            return trimmed;
        }

        var root = environment?.ContentRootPath;
        if (string.IsNullOrWhiteSpace(root))
        {
            root = AppContext.BaseDirectory;
        }

        var resolved = Path.GetFullPath(Path.Combine(root!, trimmed));
        if (!File.Exists(resolved))
        {
            throw new FileNotFoundException($"Guardrail phrase file {resolved} was not found.", resolved);
        }

        return resolved;
    }
}
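The merge that `ApplyGuardrailConfiguration` performs — defaults, inline configuration phrases, and file-loaded phrases combined case-insensitively, trimmed, and sorted — can be mirrored in a short sketch. Python is used purely for illustration; the function name is an assumption and this is not the service code.

```python
# Mirrors the blocked-phrase merge semantics: defaults + inline + file-loaded
# phrases, deduplicated case-insensitively (first casing wins, as with
# SortedSet + OrdinalIgnoreCase), trimmed, sorted deterministically.
# Illustrative sketch only.
def merge_blocked_phrases(defaults, inline, from_file):
    merged = {}
    for phrase in list(defaults) + list(inline) + list(from_file):
        if phrase and phrase.strip():
            trimmed = phrase.strip()
            merged.setdefault(trimmed.lower(), trimmed)  # first casing wins
    return sorted(merged.values(), key=str.lower)

result = merge_blocked_phrases(
    ["Dump Cache"], ["custom override", "  dump cache "], ["extract secrets"])
```

Because the result is sorted and case-folded, the final phrase list is independent of configuration ordering, which keeps guardrail behaviour reproducible across hosts.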
@@ -0,0 +1,90 @@

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using FluentAssertions;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.FileProviders;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Options;
using StellaOps.AdvisoryAI.Guardrails;
using StellaOps.AdvisoryAI.Hosting;
using Xunit;

namespace StellaOps.AdvisoryAI.Tests;

public sealed class AdvisoryGuardrailOptionsBindingTests
{
    [Fact]
    public async Task AddAdvisoryAiCore_ConfiguresGuardrailOptionsFromServiceOptions()
    {
        var tempRoot = CreateTempDirectory();
        var phrasePath = Path.Combine(tempRoot, "guardrail-phrases.json");
        await File.WriteAllTextAsync(phrasePath, "{\n \"phrases\": [\"extract secrets\", \"dump cache\"]\n}");

        var configuration = new ConfigurationBuilder()
            .AddInMemoryCollection(new Dictionary<string, string?>
            {
                ["AdvisoryAI:Guardrails:MaxPromptLength"] = "32000",
                ["AdvisoryAI:Guardrails:RequireCitations"] = "false",
                ["AdvisoryAI:Guardrails:BlockedPhraseFile"] = "guardrail-phrases.json",
                ["AdvisoryAI:Guardrails:BlockedPhrases:0"] = "custom override"
            })
            .Build();

        var services = new ServiceCollection();
        services.AddSingleton<IHostEnvironment>(new FakeHostEnvironment(tempRoot));
        services.AddAdvisoryAiCore(configuration);

        await using var provider = services.BuildServiceProvider();
        var options = provider.GetRequiredService<IOptions<AdvisoryGuardrailOptions>>().Value;

        options.MaxPromptLength.Should().Be(32000);
        options.RequireCitations.Should().BeFalse();
        options.BlockedPhrases.Should().Contain("custom override");
        options.BlockedPhrases.Should().Contain("extract secrets");
        options.BlockedPhrases.Should().Contain("dump cache");
    }

    [Fact]
    public async Task AddAdvisoryAiCore_ThrowsWhenPhraseFileMissing()
    {
        var tempRoot = CreateTempDirectory();
        var configuration = new ConfigurationBuilder()
            .AddInMemoryCollection(new Dictionary<string, string?>
            {
                ["AdvisoryAI:Guardrails:BlockedPhraseFile"] = "missing.json"
            })
            .Build();

        var services = new ServiceCollection();
        services.AddSingleton<IHostEnvironment>(new FakeHostEnvironment(tempRoot));
        services.AddAdvisoryAiCore(configuration);

        await using var provider = services.BuildServiceProvider();
        var action = () => provider.GetRequiredService<IOptions<AdvisoryGuardrailOptions>>().Value;
        action.Should().Throw<FileNotFoundException>();
    }

    private static string CreateTempDirectory()
    {
        var path = Path.Combine(Path.GetTempPath(), "advisoryai-guardrails", Guid.NewGuid().ToString("n"));
        Directory.CreateDirectory(path);
        return path;
    }

    private sealed class FakeHostEnvironment : IHostEnvironment
    {
        public FakeHostEnvironment(string contentRoot)
        {
            ContentRootPath = contentRoot;
        }

        public string EnvironmentName { get; set; } = Environments.Development;

        public string ApplicationName { get; set; } = "StellaOps.AdvisoryAI.Tests";

        public string ContentRootPath { get; set; } = string.Empty;

        // Required by IHostEnvironment; a null provider suffices for these tests.
        public IFileProvider ContentRootFileProvider { get; set; } = new NullFileProvider();
    }
}
@@ -9,6 +9,8 @@

  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="FluentAssertions" Version="6.12.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="10.0.0-rc.2.25502.107" />
    <PackageReference Include="Microsoft.Extensions.Hosting.Abstractions" Version="10.0.0-rc.2.25502.107" />
    <PackageReference Include="Microsoft.Extensions.Logging.Abstractions" Version="10.0.0-rc.2.25502.107" />
  </ItemGroup>
  <ItemGroup>
@@ -6942,6 +6942,35 @@ internal static class CommandHandlers

            return;
        }

        if (report.Observation is { } observation)
        {
            var bundler = string.IsNullOrWhiteSpace(observation.BundlerVersion)
                ? "n/a"
                : observation.BundlerVersion;

            AnsiConsole.MarkupLine(
                "[grey]Observation[/] bundler={0} • packages={1} • runtimeEdges={2}",
                Markup.Escape(bundler),
                observation.PackageCount,
                observation.RuntimeEdgeCount);

            AnsiConsole.MarkupLine(
                "[grey]Capabilities[/] exec={0} net={1} serialization={2}",
                observation.UsesExec ? "[green]on[/]" : "[red]off[/]",
                observation.UsesNetwork ? "[green]on[/]" : "[red]off[/]",
                observation.UsesSerialization ? "[green]on[/]" : "[red]off[/]");

            if (observation.SchedulerCount > 0)
            {
                var schedulerLabel = observation.Schedulers.Count > 0
                    ? string.Join(", ", observation.Schedulers)
                    : observation.SchedulerCount.ToString(CultureInfo.InvariantCulture);
                AnsiConsole.MarkupLine("[grey]Schedulers[/] {0}", Markup.Escape(schedulerLabel));
            }

            AnsiConsole.WriteLine();
        }

        var table = new Table().Border(TableBorder.Rounded);
        table.AddColumn("Package");
        table.AddColumn("Version");
@@ -7088,14 +7117,19 @@ internal static class CommandHandlers

    [JsonPropertyName("packages")]
    public IReadOnlyList<RubyInspectEntry> Packages { get; }

-   private RubyInspectReport(IReadOnlyList<RubyInspectEntry> packages)
+   [JsonPropertyName("observation")]
+   [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
+   public RubyObservationSummary? Observation { get; }
+
+   private RubyInspectReport(IReadOnlyList<RubyInspectEntry> packages, RubyObservationSummary? observation)
    {
        Packages = packages;
+       Observation = observation;
    }

    public static RubyInspectReport Create(IEnumerable<LanguageComponentSnapshot>? snapshots)
    {
-       var source = snapshots ?? Array.Empty<LanguageComponentSnapshot>();
+       var source = snapshots?.ToArray() ?? Array.Empty<LanguageComponentSnapshot>();

        var entries = source
            .Where(static snapshot => string.Equals(snapshot.Type, "gem", StringComparison.OrdinalIgnoreCase))
@@ -7104,7 +7138,9 @@ internal static class CommandHandlers

            .ThenBy(static entry => entry.Version ?? string.Empty, StringComparer.OrdinalIgnoreCase)
            .ToArray();

-       return new RubyInspectReport(entries);
+       var observation = RubyObservationSummary.TryCreate(source);
+
+       return new RubyInspectReport(entries, observation);
    }
}
@@ -7149,6 +7185,41 @@ internal static class CommandHandlers

        }
    }

    private sealed record RubyObservationSummary(
        [property: JsonPropertyName("packageCount")] int PackageCount,
        [property: JsonPropertyName("runtimeEdgeCount")] int RuntimeEdgeCount,
        [property: JsonPropertyName("bundlerVersion")] string? BundlerVersion,
        [property: JsonPropertyName("usesExec")] bool UsesExec,
        [property: JsonPropertyName("usesNetwork")] bool UsesNetwork,
        [property: JsonPropertyName("usesSerialization")] bool UsesSerialization,
        [property: JsonPropertyName("schedulerCount")] int SchedulerCount,
        [property: JsonPropertyName("schedulers")] IReadOnlyList<string> Schedulers)
    {
        public static RubyObservationSummary? TryCreate(IEnumerable<LanguageComponentSnapshot> snapshots)
        {
            var observation = snapshots.FirstOrDefault(static snapshot =>
                string.Equals(snapshot.Type, "ruby-observation", StringComparison.OrdinalIgnoreCase));

            if (observation is null)
            {
                return null;
            }

            var metadata = RubyMetadataHelpers.Clone(observation.Metadata);
            var schedulers = RubyMetadataHelpers.GetList(metadata, "ruby.observation.capability.scheduler_list");

            return new RubyObservationSummary(
                RubyMetadataHelpers.GetInt(metadata, "ruby.observation.packages") ?? 0,
                RubyMetadataHelpers.GetInt(metadata, "ruby.observation.runtime_edges") ?? 0,
                RubyMetadataHelpers.GetString(metadata, "ruby.observation.bundler_version"),
                RubyMetadataHelpers.GetBool(metadata, "ruby.observation.capability.exec") ?? false,
                RubyMetadataHelpers.GetBool(metadata, "ruby.observation.capability.net") ?? false,
                RubyMetadataHelpers.GetBool(metadata, "ruby.observation.capability.serialization") ?? false,
                RubyMetadataHelpers.GetInt(metadata, "ruby.observation.capability.schedulers") ?? schedulers.Count,
                schedulers);
        }
    }

    private sealed class RubyResolveReport
    {
        [JsonPropertyName("scanId")]
@@ -7343,6 +7414,22 @@ internal static class CommandHandlers

        return null;
    }

    public static int? GetInt(IDictionary<string, string?> metadata, string key)
    {
        var value = GetString(metadata, key);
        if (string.IsNullOrWhiteSpace(value))
        {
            return null;
        }

        if (int.TryParse(value, NumberStyles.Integer, CultureInfo.InvariantCulture, out var parsed))
        {
            return parsed;
        }

        return null;
    }
}

private sealed record LockValidationEntry(
@@ -457,6 +457,10 @@ public sealed class CommandHandlersTests

            && string.Equals(entry.GetProperty("lockfile").GetString(), "Gemfile.lock", StringComparison.OrdinalIgnoreCase)
            && entry.GetProperty("runtimeEntrypoints").EnumerateArray().Any(value =>
                string.Equals(value.GetString(), "app.rb", StringComparison.OrdinalIgnoreCase)));

        var observation = document.RootElement.GetProperty("observation");
        Assert.Equal("2.5.4", observation.GetProperty("bundlerVersion").GetString());
        Assert.Equal(packages.GetArrayLength(), observation.GetProperty("packageCount").GetInt32());
    }
    finally
    {
@@ -495,6 +499,9 @@ public sealed class CommandHandlersTests

            "app.rb",
            entry.GetProperty("runtimeEntrypoints").EnumerateArray().Select(e => e.GetString() ?? string.Empty),
            StringComparer.OrdinalIgnoreCase);

        var observation = document.RootElement.GetProperty("observation");
        Assert.True(observation.GetProperty("runtimeEdgeCount").GetInt32() >= 1);
    }
    finally
    {
@@ -902,9 +902,8 @@ var advisoryChunksEndpoint = app.MapGet("/advisories/{advisoryKey}/chunks", asyn

        }

        var duration = timeProvider.GetElapsedTime(requestStart);
-       var guardrailCounts = cacheHit
-           ? ImmutableDictionary<AdvisoryChunkGuardrailReason, int>.Empty
-           : buildResult.Telemetry.GuardrailCounts;
+       var guardrailCounts = buildResult.Telemetry.GuardrailCounts ??
+           ImmutableDictionary<AdvisoryChunkGuardrailReason, int>.Empty;

        telemetry.TrackChunkResult(new AdvisoryAiChunkRequestTelemetry(
            tenant,
@@ -51,7 +51,7 @@ internal sealed class AdvisoryAiTelemetry : IAdvisoryAiTelemetry

                AdvisoryAiMetrics.BuildCacheTags(tenant, "hit"));
        }

-       if (!telemetry.CacheHit && telemetry.GuardrailCounts.Count > 0)
+       if (telemetry.GuardrailCounts.Count > 0)
        {
            foreach (var kvp in telemetry.GuardrailCounts)
            {
@@ -0,0 +1,71 @@

using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;
using FluentAssertions;
using Microsoft.Extensions.Logging.Abstractions;
using StellaOps.Concelier.WebService.Services;
using StellaOps.Concelier.WebService.Diagnostics;
using Xunit;

namespace StellaOps.Concelier.WebService.Tests;

public sealed class AdvisoryAiTelemetryTests : IDisposable
{
    private readonly MeterListener _listener;
    private readonly List<Measurement<long>> _guardrailMeasurements = new();

    public AdvisoryAiTelemetryTests()
    {
        _listener = new MeterListener
        {
            InstrumentPublished = (instrument, listener) =>
            {
                if (instrument.Meter.Name == AdvisoryAiMetrics.MeterName)
                {
                    listener.EnableMeasurementEvents(instrument);
                }
            }
        };
        _listener.SetMeasurementEventCallback<long>((instrument, measurement, tags, state) =>
        {
            if (instrument.Meter.Name == AdvisoryAiMetrics.MeterName &&
                instrument.Name == "advisory_ai_guardrail_blocks_total")
            {
                // Measurement<T> copies the tag span, which is only valid inside the callback.
                _guardrailMeasurements.Add(new Measurement<long>(measurement, tags));
            }
        });
        _listener.Start();
    }

    [Fact]
    public void TrackChunkResult_RecordsGuardrailCounts_ForCacheHits()
    {
        var telemetry = new AdvisoryAiTelemetry(NullLogger<AdvisoryAiTelemetry>.Instance);
        var guardrailCounts = new Dictionary<AdvisoryChunkGuardrailReason, int>
        {
            { AdvisoryChunkGuardrailReason.BelowMinimumLength, 2 }
        };

        telemetry.TrackChunkResult(new AdvisoryAiChunkRequestTelemetry(
            Tenant: "tenant-a",
            AdvisoryKey: "CVE-2099-0001",
            Result: "ok",
            Truncated: false,
            CacheHit: true,
            ObservationCount: 1,
            SourceCount: 1,
            ChunkCount: 1,
            Duration: TimeSpan.FromMilliseconds(5),
            GuardrailCounts: guardrailCounts));

        _guardrailMeasurements.Should().ContainSingle();
        var measurement = _guardrailMeasurements[0];
        measurement.Value.Should().Be(2);
        measurement.Tags.ToArray().Should().Contain(tag => tag.Key == "cache" && (string?)tag.Value == "hit");
    }

    public void Dispose()
    {
        _listener.Dispose();
    }
}
@@ -0,0 +1,46 @@

using System;
using System.Collections.Generic;
using System.Text.Json.Serialization;

namespace StellaOps.Excititor.WebService.Contracts;

public sealed record VexObservationProjectionResponse(
    [property: JsonPropertyName("vulnerabilityId")] string VulnerabilityId,
    [property: JsonPropertyName("productKey")] string ProductKey,
    [property: JsonPropertyName("generatedAt")] DateTimeOffset GeneratedAt,
    [property: JsonPropertyName("totalCount")] int TotalCount,
    [property: JsonPropertyName("truncated")] bool Truncated,
    [property: JsonPropertyName("statements")] IReadOnlyList<VexObservationStatementResponse> Statements);

public sealed record VexObservationStatementResponse(
    [property: JsonPropertyName("observationId")] string ObservationId,
    [property: JsonPropertyName("providerId")] string ProviderId,
    [property: JsonPropertyName("status")] string Status,
    [property: JsonPropertyName("justification")] string? Justification,
    [property: JsonPropertyName("detail")] string? Detail,
    [property: JsonPropertyName("firstSeen")] DateTimeOffset FirstSeen,
    [property: JsonPropertyName("lastSeen")] DateTimeOffset LastSeen,
    [property: JsonPropertyName("scope")] VexObservationScopeResponse Scope,
    [property: JsonPropertyName("anchors")] IReadOnlyList<string> Anchors,
    [property: JsonPropertyName("document")] VexObservationDocumentResponse Document,
    [property: JsonPropertyName("signature")] VexObservationSignatureResponse? Signature);

public sealed record VexObservationScopeResponse(
    [property: JsonPropertyName("key")] string Key,
    [property: JsonPropertyName("name")] string? Name,
    [property: JsonPropertyName("version")] string? Version,
    [property: JsonPropertyName("purl")] string? Purl,
    [property: JsonPropertyName("cpe")] string? Cpe,
    [property: JsonPropertyName("componentIdentifiers")] IReadOnlyList<string> ComponentIdentifiers);

public sealed record VexObservationDocumentResponse(
    [property: JsonPropertyName("digest")] string Digest,
    [property: JsonPropertyName("format")] string Format,
    [property: JsonPropertyName("revision")] string? Revision,
    [property: JsonPropertyName("sourceUri")] string SourceUri);

public sealed record VexObservationSignatureResponse(
    [property: JsonPropertyName("type")] string Type,
    [property: JsonPropertyName("keyId")] string? KeyId,
    [property: JsonPropertyName("issuer")] string? Issuer,
    [property: JsonPropertyName("verifiedAt")] DateTimeOffset? VerifiedAtUtc);
@@ -1,12 +1,17 @@

using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Globalization;
using System.Linq;
using System.Text;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Primitives;
using MongoDB.Bson;
using StellaOps.Excititor.Core;
using StellaOps.Excititor.Core.Aoc;
using StellaOps.Excititor.Storage.Mongo;
using StellaOps.Excititor.WebService.Contracts;
using StellaOps.Excititor.WebService.Services;

public partial class Program
{
    private const string TenantHeaderName = "X-Stella-Tenant";
@@ -127,4 +132,106 @@ public partial class Program

            ["primaryCode"] = exception.PrimaryErrorCode,
        });
    }

    private static ImmutableHashSet<string> BuildStringFilterSet(StringValues values)
    {
        if (values.Count == 0)
        {
            return ImmutableHashSet<string>.Empty;
        }

        var builder = ImmutableHashSet.CreateBuilder<string>(StringComparer.OrdinalIgnoreCase);
        foreach (var value in values)
        {
            if (!string.IsNullOrWhiteSpace(value))
            {
                builder.Add(value.Trim());
            }
        }

        return builder.ToImmutable();
    }

    private static ImmutableHashSet<VexClaimStatus> BuildStatusFilter(StringValues values)
    {
        if (values.Count == 0)
        {
            return ImmutableHashSet<VexClaimStatus>.Empty;
        }

        var builder = ImmutableHashSet.CreateBuilder<VexClaimStatus>();
        foreach (var value in values)
        {
            if (Enum.TryParse<VexClaimStatus>(value, ignoreCase: true, out var status))
            {
                builder.Add(status);
            }
        }

        return builder.ToImmutable();
    }

    private static DateTimeOffset? ParseSinceTimestamp(StringValues values)
    {
        if (values.Count == 0)
        {
            return null;
        }

        var candidate = values[0];
        return DateTimeOffset.TryParse(candidate, CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal, out var parsed)
            ? parsed
            : null;
    }

    private static int ResolveLimit(StringValues values, int defaultValue, int min, int max)
    {
        if (values.Count == 0)
        {
            return defaultValue;
        }

        if (!int.TryParse(values[0], NumberStyles.Integer, CultureInfo.InvariantCulture, out var parsed))
        {
            return defaultValue;
        }

        return Math.Clamp(parsed, min, max);
    }

    private static VexObservationStatementResponse ToResponse(VexObservationStatementProjection projection)
    {
        var scope = projection.Scope;
        var document = projection.Document;
        var signature = projection.Signature;

        return new VexObservationStatementResponse(
            projection.ObservationId,
            projection.ProviderId,
            projection.Status.ToString().ToLowerInvariant(),
            projection.Justification?.ToString().ToLowerInvariant(),
            projection.Detail,
            projection.FirstSeen,
            projection.LastSeen,
            new VexObservationScopeResponse(
                scope.Key,
                scope.Name,
                scope.Version,
                scope.Purl,
                scope.Cpe,
                scope.ComponentIdentifiers),
            projection.Anchors,
            new VexObservationDocumentResponse(
                document.Digest,
                document.Format.ToString().ToLowerInvariant(),
                document.Revision,
                document.SourceUri.ToString()),
            signature is null
                ? null
                : new VexObservationSignatureResponse(
                    signature.Type,
                    signature.KeyId,
                    signature.Issuer,
                    signature.VerifiedAt));
    }
}
@@ -1,3 +1,4 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Collections.Immutable;
@@ -5,7 +6,10 @@ using System.Globalization;
using System.Text;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.Extensions.Options;
using Microsoft.Extensions.Primitives;
using StellaOps.Excititor.Attestation.Verification;
using StellaOps.Excititor.Attestation.Extensions;
using StellaOps.Excititor.Attestation;
@@ -54,6 +58,8 @@ services.AddVexPolicy();
services.AddRedHatCsafConnector();
services.Configure<MirrorDistributionOptions>(configuration.GetSection(MirrorDistributionOptions.SectionName));
services.AddSingleton<MirrorRateLimiter>();
services.TryAddSingleton(TimeProvider.System);
services.AddSingleton<IVexObservationProjectionService, VexObservationProjectionService>();

var rekorSection = configuration.GetSection("Excititor:Attestation:Rekor");
if (rekorSection.Exists())
@@ -434,6 +440,60 @@ app.MapGet("/vex/raw/{digest}/provenance", async (
    return Results.Json(response);
});

app.MapGet("/v1/vex/observations/{vulnerabilityId}/{productKey}", async (
    HttpContext context,
    string vulnerabilityId,
    string productKey,
    [FromServices] IVexObservationProjectionService projectionService,
    [FromServices] IOptions<VexMongoStorageOptions> storageOptions,
    CancellationToken cancellationToken) =>
{
    var scopeResult = ScopeAuthorization.RequireScope(context, "vex.read");
    if (scopeResult is not null)
    {
        return scopeResult;
    }

    if (!TryResolveTenant(context, storageOptions.Value, requireHeader: false, out var tenant, out var tenantError))
    {
        return tenantError;
    }

    if (string.IsNullOrWhiteSpace(vulnerabilityId) || string.IsNullOrWhiteSpace(productKey))
    {
        return ValidationProblem("vulnerabilityId and productKey are required.");
    }

    var providerFilter = BuildStringFilterSet(context.Request.Query["providerId"]);
    var statusFilter = BuildStatusFilter(context.Request.Query["status"]);
    var since = ParseSinceTimestamp(context.Request.Query["since"]);
    var limit = ResolveLimit(context.Request.Query["limit"], defaultValue: 200, min: 1, max: 500);

    var request = new VexObservationProjectionRequest(
        tenant,
        vulnerabilityId.Trim(),
        productKey.Trim(),
        providerFilter,
        statusFilter,
        since,
        limit);

    var result = await projectionService.QueryAsync(request, cancellationToken).ConfigureAwait(false);
    var statements = result.Statements
        .Select(ToResponse)
        .ToList();

    var response = new VexObservationProjectionResponse(
        request.VulnerabilityId,
        request.ProductKey,
        result.GeneratedAtUtc,
        result.TotalCount,
        result.Truncated,
        statements);

    return Results.Json(response);
});
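The new observations endpoint takes `vulnerabilityId` and `productKey` as route values plus `providerId`, `status`, `since`, and `limit` query parameters. A minimal client-side sketch of assembling such a request URL; the base URL, helper name, and parameter values here are illustrative assumptions, not part of this change:

```python
from urllib.parse import quote, urlencode

def observations_url(base, vuln_id, product_key,
                     providers=(), statuses=(), since=None, limit=None):
    """Build a query URL for the observations endpoint (illustrative only)."""
    # Route values are percent-encoded so a product key like pkg:docker/demo
    # does not collide with the route separators.
    path = f"{base}/v1/vex/observations/{quote(vuln_id, safe='')}/{quote(product_key, safe='')}"
    params = [("providerId", p) for p in providers]
    params += [("status", s) for s in statuses]
    if since is not None:
        params.append(("since", since))
    if limit is not None:
        params.append(("limit", str(limit)))
    return path + ("?" + urlencode(params) if params else "")

url = observations_url("https://excititor.example.internal", "CVE-2025-0001",
                       "pkg:docker/demo", providers=["provider-b"],
                       statuses=["NotAffected"], limit=50)
```

Note that `status` values must match the `VexClaimStatus` enum names (parsed case-insensitively, so `NotAffected` or `notaffected`), and the server clamps `limit` into 1–500 with a default of 200.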

app.MapPost("/aoc/verify", async (
    HttpContext context,
    VexAocVerifyRequest? request,

@@ -0,0 +1,161 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Globalization;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Excititor.Core;

namespace StellaOps.Excititor.WebService.Services;

internal interface IVexObservationProjectionService
{
    Task<VexObservationProjectionResult> QueryAsync(
        VexObservationProjectionRequest request,
        CancellationToken cancellationToken);
}

internal sealed record VexObservationProjectionRequest(
    string Tenant,
    string VulnerabilityId,
    string ProductKey,
    ImmutableHashSet<string> ProviderIds,
    ImmutableHashSet<VexClaimStatus> Statuses,
    DateTimeOffset? Since,
    int Limit);

internal sealed record VexObservationProjectionResult(
    IReadOnlyList<VexObservationStatementProjection> Statements,
    bool Truncated,
    int TotalCount,
    DateTimeOffset GeneratedAtUtc);

internal sealed record VexObservationStatementProjection(
    string ObservationId,
    string ProviderId,
    VexClaimStatus Status,
    VexJustification? Justification,
    string? Detail,
    DateTimeOffset FirstSeen,
    DateTimeOffset LastSeen,
    VexProductScope Scope,
    IReadOnlyList<string> Anchors,
    VexClaimDocument Document,
    VexSignatureMetadata? Signature);

internal sealed record VexProductScope(
    string Key,
    string? Name,
    string? Version,
    string? Purl,
    string? Cpe,
    IReadOnlyList<string> ComponentIdentifiers);

internal sealed class VexObservationProjectionService : IVexObservationProjectionService
{
    private static readonly string[] AnchorKeys =
    {
        "json_pointer",
        "jsonPointer",
        "statement_locator",
        "locator",
        "paragraph",
        "section",
        "path"
    };

    private readonly IVexClaimStore _claimStore;
    private readonly TimeProvider _timeProvider;

    public VexObservationProjectionService(IVexClaimStore claimStore, TimeProvider? timeProvider = null)
    {
        _claimStore = claimStore ?? throw new ArgumentNullException(nameof(claimStore));
        _timeProvider = timeProvider ?? TimeProvider.System;
    }

    public async Task<VexObservationProjectionResult> QueryAsync(
        VexObservationProjectionRequest request,
        CancellationToken cancellationToken)
    {
        ArgumentNullException.ThrowIfNull(request);
        cancellationToken.ThrowIfCancellationRequested();

        var claims = await _claimStore.FindAsync(
                request.VulnerabilityId,
                request.ProductKey,
                request.Since,
                cancellationToken)
            .ConfigureAwait(false);

        var filtered = claims
            .Where(claim => MatchesProvider(claim, request.ProviderIds))
            .Where(claim => MatchesStatus(claim, request.Statuses))
            .OrderByDescending(claim => claim.LastSeen)
            .ThenBy(claim => claim.ProviderId, StringComparer.Ordinal)
            .ToList();

        var total = filtered.Count;
        var page = filtered.Take(request.Limit).ToList();
        var statements = page
            .Select(claim => MapClaim(claim))
            .ToList();

        return new VexObservationProjectionResult(
            statements,
            total > request.Limit,
            total,
            _timeProvider.GetUtcNow());
    }

    private static bool MatchesProvider(VexClaim claim, ImmutableHashSet<string> providers)
        => providers.Count == 0 || providers.Contains(claim.ProviderId, StringComparer.OrdinalIgnoreCase);

    private static bool MatchesStatus(VexClaim claim, ImmutableHashSet<VexClaimStatus> statuses)
        => statuses.Count == 0 || statuses.Contains(claim.Status);

    private static VexObservationStatementProjection MapClaim(VexClaim claim)
    {
        var observationId = string.Create(CultureInfo.InvariantCulture, $"{claim.ProviderId}:{claim.Document.Digest}");
        var anchors = ExtractAnchors(claim.AdditionalMetadata);
        var scope = new VexProductScope(
            claim.Product.Key,
            claim.Product.Name,
            claim.Product.Version,
            claim.Product.Purl,
            claim.Product.Cpe,
            claim.Product.ComponentIdentifiers);

        return new VexObservationStatementProjection(
            observationId,
            claim.ProviderId,
            claim.Status,
            claim.Justification,
            claim.Detail,
            claim.FirstSeen,
            claim.LastSeen,
            scope,
            anchors,
            claim.Document,
            claim.Document.Signature);
    }

    private static IReadOnlyList<string> ExtractAnchors(ImmutableSortedDictionary<string, string> metadata)
    {
        if (metadata.Count == 0)
        {
            return Array.Empty<string>();
        }

        var anchors = new List<string>();
        foreach (var key in AnchorKeys)
        {
            if (metadata.TryGetValue(key, out var value) && !string.IsNullOrWhiteSpace(value))
            {
                anchors.Add(value.Trim());
            }
        }

        return anchors.Count == 0 ? Array.Empty<string>() : anchors;
    }
}
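The projection pipeline in `QueryAsync` is: filter by provider and status, order by `LastSeen` descending with the provider id as a deterministic tie-break, take `Limit` entries, and flag `Truncated` when the filtered total exceeds the limit. A language-neutral sketch of that shape; the `Claim` stand-in and its fields are illustrative, not the real `VexClaim` type:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Claim:  # minimal stand-in for a VEX claim
    provider_id: str
    status: str
    last_seen: datetime

def project(claims, providers=frozenset(), statuses=frozenset(), limit=200):
    """Filter, order deterministically, then truncate to the page limit."""
    lowered = {p.lower() for p in providers}  # provider match is case-insensitive
    filtered = [c for c in claims
                if (not lowered or c.provider_id.lower() in lowered)
                and (not statuses or c.status in statuses)]
    # Newest last_seen first; provider id breaks ties deterministically.
    filtered.sort(key=lambda c: (-c.last_seen.timestamp(), c.provider_id))
    return filtered[:limit], len(filtered), len(filtered) > limit

now = datetime(2025, 11, 10, 12, tzinfo=timezone.utc)
claims = [Claim(f"provider-{i}", "not_affected", now.replace(hour=6 + i))
          for i in range(3)]
page, total, truncated = project(claims, limit=2)
```

Empty filter sets match everything, mirroring the `Count == 0` short-circuits in `MatchesProvider` and `MatchesStatus`.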

@@ -0,0 +1,150 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using FluentAssertions;
using MongoDB.Driver;
using StellaOps.Excititor.Core;
using StellaOps.Excititor.Storage.Mongo;
using StellaOps.Excititor.WebService.Services;
using Xunit;

namespace StellaOps.Excititor.WebService.Tests;

public sealed class VexObservationProjectionServiceTests
{
    [Fact]
    public async Task QueryAsync_FiltersByProviderAndStatus()
    {
        var now = new DateTimeOffset(2025, 11, 10, 12, 0, 0, TimeSpan.Zero);
        var claims = new[]
        {
            CreateClaim("provider-a", VexClaimStatus.Affected, now.AddHours(-6), now.AddHours(-5)),
            CreateClaim("provider-b", VexClaimStatus.NotAffected, now.AddHours(-4), now.AddHours(-3))
        };

        var store = new FakeClaimStore(claims);
        var service = new VexObservationProjectionService(store, new FixedTimeProvider(now));
        var request = new VexObservationProjectionRequest(
            Tenant: "tenant-a",
            VulnerabilityId: "CVE-2025-0001",
            ProductKey: "pkg:docker/demo",
            ProviderIds: ImmutableHashSet.Create("provider-b"),
            Statuses: ImmutableHashSet.Create(VexClaimStatus.NotAffected),
            Since: null,
            Limit: 10);

        var result = await service.QueryAsync(request, CancellationToken.None);

        result.Truncated.Should().BeFalse();
        result.TotalCount.Should().Be(1);
        result.GeneratedAtUtc.Should().Be(now);
        var statement = result.Statements.Single();
        statement.ProviderId.Should().Be("provider-b");
        statement.Status.Should().Be(VexClaimStatus.NotAffected);
        statement.Justification.Should().Be(VexJustification.ComponentNotPresent);
        statement.Anchors.Should().ContainSingle().Which.Should().Be("/statements/0");
        statement.Scope.ComponentIdentifiers.Should().Contain("demo:component");
        statement.Document.Digest.Should().Contain("provider-b");
    }

    [Fact]
    public async Task QueryAsync_TruncatesWhenLimitExceeded()
    {
        var now = DateTimeOffset.UtcNow;
        var claims = Enumerable.Range(0, 3)
            .Select(index => CreateClaim($"provider-{index}", VexClaimStatus.NotAffected, now.AddHours(-index - 2), now.AddHours(-index - 1)))
            .ToArray();

        var store = new FakeClaimStore(claims);
        var service = new VexObservationProjectionService(store, new FixedTimeProvider(now));
        var request = new VexObservationProjectionRequest(
            Tenant: "tenant-a",
            VulnerabilityId: "CVE-2025-0001",
            ProductKey: "pkg:docker/demo",
            ProviderIds: ImmutableHashSet<string>.Empty,
            Statuses: ImmutableHashSet<VexClaimStatus>.Empty,
            Since: null,
            Limit: 2);

        var result = await service.QueryAsync(request, CancellationToken.None);

        result.Truncated.Should().BeTrue();
        result.TotalCount.Should().Be(3);
        result.Statements.Should().HaveCount(2);
    }

    private static VexClaim CreateClaim(string providerId, VexClaimStatus status, DateTimeOffset firstSeen, DateTimeOffset lastSeen)
    {
        var product = new VexProduct(
            key: "pkg:docker/demo",
            name: "demo",
            version: "1.0.0",
            purl: "pkg:docker/demo@1.0.0",
            cpe: "cpe:/a:demo:demo:1.0.0",
            componentIdentifiers: new[] { "demo:component" });

        var document = new VexClaimDocument(
            VexDocumentFormat.Csaf,
            $"sha256:{providerId}",
            new Uri("https://example.org/vex.json"),
            revision: "v1");

        var metadata = ImmutableDictionary<string, string>.Empty.Add("json_pointer", "/statements/0");

        return new VexClaim(
            "CVE-2025-0001",
            providerId,
            product,
            status,
            document,
            firstSeen,
            lastSeen,
            justification: VexJustification.ComponentNotPresent,
            detail: "not affected",
            confidence: null,
            signals: null,
            additionalMetadata: metadata);
    }

    private sealed class FakeClaimStore : IVexClaimStore
    {
        private readonly IReadOnlyCollection<VexClaim> _claims;

        public FakeClaimStore(IReadOnlyCollection<VexClaim> claims)
        {
            _claims = claims;
        }

        public ValueTask AppendAsync(IEnumerable<VexClaim> claims, DateTimeOffset observedAt, CancellationToken cancellationToken, IClientSessionHandle? session = null)
            => throw new NotSupportedException();

        public ValueTask<IReadOnlyCollection<VexClaim>> FindAsync(string vulnerabilityId, string productKey, DateTimeOffset? since, CancellationToken cancellationToken, IClientSessionHandle? session = null)
        {
            var query = _claims
                .Where(claim => string.Equals(claim.VulnerabilityId, vulnerabilityId, StringComparison.OrdinalIgnoreCase))
                .Where(claim => string.Equals(claim.Product.Key, productKey, StringComparison.OrdinalIgnoreCase));

            if (since.HasValue)
            {
                query = query.Where(claim => claim.LastSeen >= since.Value);
            }

            return ValueTask.FromResult<IReadOnlyCollection<VexClaim>>(query.ToList());
        }
    }

    private sealed class FixedTimeProvider : TimeProvider
    {
        private readonly DateTimeOffset _timestamp;

        public FixedTimeProvider(DateTimeOffset timestamp)
        {
            _timestamp = timestamp;
        }

        public override DateTimeOffset GetUtcNow() => _timestamp;
    }
}
@@ -8,5 +8,5 @@
| 4 | `SCANNER-ANALYZERS-DENO-26-004` | DONE | Permission/capability analyzer for FS/net/env/process/crypto/FFI/workers plus dynamic import heuristics with reason codes. |
| 5 | `SCANNER-ANALYZERS-DENO-26-005` | DONE | Bundle/binary inspectors for eszip and `deno compile` executables to recover graphs/config/resources/snapshots. |
| 6 | `SCANNER-ANALYZERS-DENO-26-006` | DONE | OCI/container adapter that stitches per-layer Deno caches, vendor trees, and compiled binaries into provenance-aware inputs. |
-| 7 | `SCANNER-ANALYZERS-DENO-26-007` | DOING | AOC-compliant observation writers (entrypoints, modules, capability edges, workers, warnings, binaries) with deterministic reason codes. |
-| 8 | `SCANNER-ANALYZERS-DENO-26-008` | TODO | Fixture and benchmark suite for vendor/npm/FFI/worker/dynamic import/bundle/cache/container cases. |
+| 7 | `SCANNER-ANALYZERS-DENO-26-007` | DONE | AOC-compliant observation writers (entrypoints, modules, capability edges, workers, warnings, binaries) with deterministic reason codes. |
+| 8 | `SCANNER-ANALYZERS-DENO-26-008` | DONE | Fixture and benchmark suite for vendor/npm/FFI/worker/dynamic import/bundle/cache/container cases. |

@@ -113,6 +113,8 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "__Benchmarks", "__Benchmark
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Scanner.Analyzers.Lang.Rust.Benchmarks", "__Benchmarks\StellaOps.Scanner.Analyzers.Lang.Rust.Benchmarks\StellaOps.Scanner.Analyzers.Lang.Rust.Benchmarks.csproj", "{E76AE786-599B-434C-8E52-1B1211768386}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Scanner.Analyzers.Lang.Deno.Benchmarks", "__Benchmarks\StellaOps.Scanner.Analyzers.Lang.Deno.Benchmarks\StellaOps.Scanner.Analyzers.Lang.Deno.Benchmarks.csproj", "{37E2DB38-F316-4A0E-968C-1381A3DABD6F}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Scanner.Surface.Validation", "__Libraries\StellaOps.Scanner.Surface.Validation\StellaOps.Scanner.Surface.Validation.csproj", "{B6C4BB91-BC9F-4F5F-904F-9B19C80D4E4A}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Scanner.Surface.FS", "__Libraries\StellaOps.Scanner.Surface.FS\StellaOps.Scanner.Surface.FS.csproj", "{B2597D13-8733-4F20-B157-B4B5D36FB59A}"
@@ -125,6 +127,26 @@ Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Scanner.Analyzers
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Scanner.Analyzers.Lang.Deno.Tests", "__Tests\StellaOps.Scanner.Analyzers.Lang.Deno.Tests\StellaOps.Scanner.Analyzers.Lang.Deno.Tests.csproj", "{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Scanner.Analyzers.Lang.Ruby.Tests", "__Tests\StellaOps.Scanner.Analyzers.Lang.Ruby.Tests\StellaOps.Scanner.Analyzers.Lang.Ruby.Tests.csproj", "{E0104A8E-2C39-48C1-97EC-66C171310944}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Concelier.Testing", "..\Concelier\__Libraries\StellaOps.Concelier.Testing\StellaOps.Concelier.Testing.csproj", "{9724C2EE-7351-41A3-A874-0856CF406E04}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Concelier.Connector.Common", "..\Concelier\__Libraries\StellaOps.Concelier.Connector.Common\StellaOps.Concelier.Connector.Common.csproj", "{09F93E81-05B5-46CB-818D-BDD2812CCF71}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Concelier.Storage.Mongo", "..\Concelier\__Libraries\StellaOps.Concelier.Storage.Mongo\StellaOps.Concelier.Storage.Mongo.csproj", "{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Concelier.Core", "..\Concelier\__Libraries\StellaOps.Concelier.Core\StellaOps.Concelier.Core.csproj", "{9CBE8002-B289-4A86-91C9-5CD405149B2A}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Concelier.Models", "..\Concelier\__Libraries\StellaOps.Concelier.Models\StellaOps.Concelier.Models.csproj", "{9A16F25A-99B9-4082-85AD-C5F2224B90C3}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Concelier.RawModels", "..\Concelier\__Libraries\StellaOps.Concelier.RawModels\StellaOps.Concelier.RawModels.csproj", "{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Concelier.Normalization", "..\Concelier\__Libraries\StellaOps.Concelier.Normalization\StellaOps.Concelier.Normalization.csproj", "{C5281EB5-7985-4431-A29D-EBB2D94792DC}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Ingestion.Telemetry", "..\__Libraries\StellaOps.Ingestion.Telemetry\StellaOps.Ingestion.Telemetry.csproj", "{2DF6D629-9FF0-4813-903A-AF1454A625EA}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.Aoc", "..\Aoc\__Libraries\StellaOps.Aoc\StellaOps.Aoc.csproj", "{8237425A-933A-440E-AE6B-1DF57F228681}"
EndProject
Global
	GlobalSection(SolutionConfigurationPlatforms) = preSolution
		Debug|Any CPU = Debug|Any CPU
@@ -207,30 +229,6 @@ Global
		{02C16715-9BF3-43D7-AC97-D6940365907A}.Release|x64.Build.0 = Release|Any CPU
		{02C16715-9BF3-43D7-AC97-D6940365907A}.Release|x86.ActiveCfg = Release|Any CPU
		{02C16715-9BF3-43D7-AC97-D6940365907A}.Release|x86.Build.0 = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|x64.ActiveCfg = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|x64.Build.0 = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|x86.ActiveCfg = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|x86.Build.0 = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|Any CPU.Build.0 = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|x64.ActiveCfg = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|x64.Build.0 = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|x86.ActiveCfg = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|x86.Build.0 = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|x64.ActiveCfg = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|x64.Build.0 = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|x86.ActiveCfg = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|x86.Build.0 = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|Any CPU.Build.0 = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|x64.ActiveCfg = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|x64.Build.0 = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|x86.ActiveCfg = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|x86.Build.0 = Release|Any CPU
		{B53FEE71-9EBE-4479-9B07-0C3F8EA2C02E}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{B53FEE71-9EBE-4479-9B07-0C3F8EA2C02E}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{B53FEE71-9EBE-4479-9B07-0C3F8EA2C02E}.Debug|x64.ActiveCfg = Debug|Any CPU
@@ -783,6 +781,18 @@ Global
		{E76AE786-599B-434C-8E52-1B1211768386}.Release|x64.Build.0 = Release|Any CPU
		{E76AE786-599B-434C-8E52-1B1211768386}.Release|x86.ActiveCfg = Release|Any CPU
		{E76AE786-599B-434C-8E52-1B1211768386}.Release|x86.Build.0 = Release|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Debug|x64.ActiveCfg = Debug|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Debug|x64.Build.0 = Debug|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Debug|x86.ActiveCfg = Debug|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Debug|x86.Build.0 = Debug|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Release|Any CPU.Build.0 = Release|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Release|x64.ActiveCfg = Release|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Release|x64.Build.0 = Release|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Release|x86.ActiveCfg = Release|Any CPU
		{37E2DB38-F316-4A0E-968C-1381A3DABD6F}.Release|x86.Build.0 = Release|Any CPU
		{B6C4BB91-BC9F-4F5F-904F-9B19C80D4E4A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{B6C4BB91-BC9F-4F5F-904F-9B19C80D4E4A}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{B6C4BB91-BC9F-4F5F-904F-9B19C80D4E4A}.Debug|x64.ActiveCfg = Debug|Any CPU
@@ -831,6 +841,150 @@ Global
		{482026BC-2E89-4789-8A73-523FAAC8476F}.Release|x64.Build.0 = Release|Any CPU
		{482026BC-2E89-4789-8A73-523FAAC8476F}.Release|x86.ActiveCfg = Release|Any CPU
		{482026BC-2E89-4789-8A73-523FAAC8476F}.Release|x86.Build.0 = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|x64.ActiveCfg = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|x64.Build.0 = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|x86.ActiveCfg = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Debug|x86.Build.0 = Debug|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|Any CPU.Build.0 = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|x64.ActiveCfg = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|x64.Build.0 = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|x86.ActiveCfg = Release|Any CPU
		{C71D4A4C-637C-4A7C-B0F8-4F9E62FBBE3A}.Release|x86.Build.0 = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|x64.ActiveCfg = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|x64.Build.0 = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|x86.ActiveCfg = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Debug|x86.Build.0 = Debug|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|Any CPU.Build.0 = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|x64.ActiveCfg = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|x64.Build.0 = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|x86.ActiveCfg = Release|Any CPU
		{B8D28D0E-FAD8-48B8-8F9C-9E1C6582F19E}.Release|x86.Build.0 = Release|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Debug|x64.ActiveCfg = Debug|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Debug|x64.Build.0 = Debug|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Debug|x86.ActiveCfg = Debug|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Debug|x86.Build.0 = Debug|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Release|Any CPU.Build.0 = Release|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Release|x64.ActiveCfg = Release|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Release|x64.Build.0 = Release|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Release|x86.ActiveCfg = Release|Any CPU
		{E0104A8E-2C39-48C1-97EC-66C171310944}.Release|x86.Build.0 = Release|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Debug|x64.ActiveCfg = Debug|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Debug|x64.Build.0 = Debug|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Debug|x86.ActiveCfg = Debug|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Debug|x86.Build.0 = Debug|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Release|Any CPU.Build.0 = Release|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Release|x64.ActiveCfg = Release|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Release|x64.Build.0 = Release|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Release|x86.ActiveCfg = Release|Any CPU
		{9724C2EE-7351-41A3-A874-0856CF406E04}.Release|x86.Build.0 = Release|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Debug|x64.ActiveCfg = Debug|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Debug|x64.Build.0 = Debug|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Debug|x86.ActiveCfg = Debug|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Debug|x86.Build.0 = Debug|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Release|Any CPU.Build.0 = Release|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Release|x64.ActiveCfg = Release|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Release|x64.Build.0 = Release|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Release|x86.ActiveCfg = Release|Any CPU
		{09F93E81-05B5-46CB-818D-BDD2812CCF71}.Release|x86.Build.0 = Release|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Debug|x64.ActiveCfg = Debug|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Debug|x64.Build.0 = Debug|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Debug|x86.ActiveCfg = Debug|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Debug|x86.Build.0 = Debug|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Release|Any CPU.Build.0 = Release|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Release|x64.ActiveCfg = Release|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Release|x64.Build.0 = Release|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Release|x86.ActiveCfg = Release|Any CPU
		{87E9CDA0-F6EB-4D7F-85E1-0C9288E2717C}.Release|x86.Build.0 = Release|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Debug|x64.ActiveCfg = Debug|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Debug|x64.Build.0 = Debug|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Debug|x86.ActiveCfg = Debug|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Debug|x86.Build.0 = Debug|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Release|Any CPU.Build.0 = Release|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Release|x64.ActiveCfg = Release|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Release|x64.Build.0 = Release|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Release|x86.ActiveCfg = Release|Any CPU
		{9CBE8002-B289-4A86-91C9-5CD405149B2A}.Release|x86.Build.0 = Release|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Debug|x64.ActiveCfg = Debug|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Debug|x64.Build.0 = Debug|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Debug|x86.ActiveCfg = Debug|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Debug|x86.Build.0 = Debug|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Release|Any CPU.Build.0 = Release|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Release|x64.ActiveCfg = Release|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Release|x64.Build.0 = Release|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Release|x86.ActiveCfg = Release|Any CPU
		{9A16F25A-99B9-4082-85AD-C5F2224B90C3}.Release|x86.Build.0 = Release|Any CPU
		{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Debug|x64.ActiveCfg = Debug|Any CPU
		{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Debug|x64.Build.0 = Debug|Any CPU
		{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Debug|x86.ActiveCfg = Debug|Any CPU
		{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Debug|x86.Build.0 = Debug|Any CPU
		{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Release|Any CPU.Build.0 = Release|Any CPU
|
||||
{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Release|x64.ActiveCfg = Release|Any CPU
|
||||
{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Release|x64.Build.0 = Release|Any CPU
|
||||
{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Release|x86.ActiveCfg = Release|Any CPU
|
||||
{06B9A55F-BB97-4163-BCCF-DF5F3CEC46DA}.Release|x86.Build.0 = Release|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Debug|Any CPU.Build.0 = Debug|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Debug|x64.ActiveCfg = Debug|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Debug|x64.Build.0 = Debug|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Debug|x86.ActiveCfg = Debug|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Debug|x86.Build.0 = Debug|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Release|Any CPU.ActiveCfg = Release|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Release|Any CPU.Build.0 = Release|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Release|x64.ActiveCfg = Release|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Release|x64.Build.0 = Release|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Release|x86.ActiveCfg = Release|Any CPU
|
||||
{C5281EB5-7985-4431-A29D-EBB2D94792DC}.Release|x86.Build.0 = Release|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Debug|Any CPU.Build.0 = Debug|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Debug|x64.ActiveCfg = Debug|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Debug|x64.Build.0 = Debug|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Debug|x86.ActiveCfg = Debug|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Debug|x86.Build.0 = Debug|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Release|Any CPU.ActiveCfg = Release|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Release|Any CPU.Build.0 = Release|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Release|x64.ActiveCfg = Release|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Release|x64.Build.0 = Release|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Release|x86.ActiveCfg = Release|Any CPU
|
||||
{2DF6D629-9FF0-4813-903A-AF1454A625EA}.Release|x86.Build.0 = Release|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Debug|Any CPU.Build.0 = Debug|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Debug|x64.ActiveCfg = Debug|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Debug|x64.Build.0 = Debug|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Debug|x86.ActiveCfg = Debug|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Debug|x86.Build.0 = Debug|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Release|Any CPU.ActiveCfg = Release|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Release|Any CPU.Build.0 = Release|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Release|x64.ActiveCfg = Release|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Release|x64.Build.0 = Release|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Release|x86.ActiveCfg = Release|Any CPU
|
||||
{8237425A-933A-440E-AE6B-1DF57F228681}.Release|x86.Build.0 = Release|Any CPU
|
||||
EndGlobalSection
|
||||
GlobalSection(SolutionProperties) = preSolution
|
||||
HideSolutionNode = FALSE
|
||||
@@ -872,9 +1026,11 @@ Global
{782652F5-A7C3-4070-8B42-F7DC2C17973E} = {56BCE1BF-7CBA-7CE8-203D-A88051F1D642}
{51CAC6CD-ED38-4AFC-AE81-84A4BDD45DB2} = {56BCE1BF-7CBA-7CE8-203D-A88051F1D642}
{E76AE786-599B-434C-8E52-1B1211768386} = {7FECE895-ECB6-33CE-12BE-877282A67F5D}
{37E2DB38-F316-4A0E-968C-1381A3DABD6F} = {7FECE895-ECB6-33CE-12BE-877282A67F5D}
{B6C4BB91-BC9F-4F5F-904F-9B19C80D4E4A} = {41F15E67-7190-CF23-3BC4-77E87134CADD}
{B2597D13-8733-4F20-B157-B4B5D36FB59A} = {41F15E67-7190-CF23-3BC4-77E87134CADD}
{C2B2B38A-D67D-429E-BB2E-023E25EBD7D3} = {41F15E67-7190-CF23-3BC4-77E87134CADD}
{482026BC-2E89-4789-8A73-523FAAC8476F} = {41F15E67-7190-CF23-3BC4-77E87134CADD}
{E0104A8E-2C39-48C1-97EC-66C171310944} = {56BCE1BF-7CBA-7CE8-203D-A88051F1D642}
EndGlobalSection
EndGlobal

@@ -1,3 +1,6 @@
using System.Collections.Immutable;
using System.Text.RegularExpressions;

namespace StellaOps.Scanner.Analyzers.Lang.Deno.Internal;

internal sealed class DenoConfigDocument
@@ -12,7 +15,8 @@ internal sealed class DenoConfigDocument
bool vendorEnabled,
string? vendorDirectoryPath,
bool nodeModulesDirEnabled,
string? nodeModulesDir)
string? nodeModulesDir,
ImmutableArray<string> entrypoints)
{
AbsolutePath = Path.GetFullPath(absolutePath);
RelativePath = DenoPathUtilities.NormalizeRelativePath(relativePath);
@@ -25,6 +29,7 @@ internal sealed class DenoConfigDocument
VendorDirectoryPath = vendorDirectoryPath;
NodeModulesDirEnabled = nodeModulesDirEnabled;
NodeModulesDirectory = nodeModulesDir;
Entrypoints = entrypoints;
}

public string AbsolutePath { get; }
@@ -49,6 +54,8 @@ internal sealed class DenoConfigDocument

public string? NodeModulesDirectory { get; }

public ImmutableArray<string> Entrypoints { get; }

public static bool TryLoad(
string absolutePath,
string relativePath,
@@ -80,6 +87,7 @@ internal sealed class DenoConfigDocument
var (lockEnabled, lockFilePath) = ResolveLockPath(root, directory);
var (vendorEnabled, vendorDirectory) = ResolveVendorDirectory(root, directory);
var (nodeModulesDirEnabled, nodeModulesDir) = ResolveNodeModulesDirectory(root, directory);
var entrypoints = ResolveEntrypoints(root, directory);

document = new DenoConfigDocument(
absolutePath,
@@ -91,7 +99,8 @@ internal sealed class DenoConfigDocument
vendorEnabled,
vendorDirectory,
nodeModulesDirEnabled,
nodeModulesDir);
nodeModulesDir,
entrypoints);

return true;
}
@@ -192,6 +201,79 @@ internal sealed class DenoConfigDocument
return results;
}

private static ImmutableArray<string> ResolveEntrypoints(JsonElement root, string directory)
{
if (!root.TryGetProperty("tasks", out var tasksElement) || tasksElement.ValueKind != JsonValueKind.Object)
{
return ImmutableArray<string>.Empty;
}

var builder = ImmutableArray.CreateBuilder<string>();

foreach (var task in tasksElement.EnumerateObject())
{
if (task.Value.ValueKind != JsonValueKind.String)
{
continue;
}

var command = task.Value.GetString();
if (string.IsNullOrWhiteSpace(command))
{
continue;
}

foreach (var candidate in ExtractEntrypointCandidates(command))
{
var normalized = NormalizeEntrypoint(directory, candidate);
if (!string.IsNullOrWhiteSpace(normalized))
{
builder.Add(normalized!);
}
}
}

return builder
.Where(static entry => !string.IsNullOrWhiteSpace(entry))
.Select(static entry => entry!)
.Distinct(StringComparer.OrdinalIgnoreCase)
.OrderBy(static entry => entry, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
}

private static IEnumerable<string> ExtractEntrypointCandidates(string command)
{
foreach (Match match in EntrypointRegex.Matches(command ?? string.Empty))
{
var path = match.Groups["path"].Value;
if (!string.IsNullOrWhiteSpace(path))
{
yield return path.Trim('"', '\'');
}
}
}

private static string? NormalizeEntrypoint(string directory, string candidate)
{
if (string.IsNullOrWhiteSpace(candidate))
{
return null;
}

string fullPath = Path.IsPathFullyQualified(candidate)
? candidate
: Path.Combine(directory, candidate);

fullPath = Path.GetFullPath(fullPath);
if (!File.Exists(fullPath))
{
return null;
}

var relative = Path.GetRelativePath(directory, fullPath);
return DenoPathUtilities.NormalizeRelativePath(relative);
}

private static (bool Enabled, string? Path) ResolveLockPath(JsonElement root, string directory)
{
if (!root.TryGetProperty("lock", out var lockElement))
@@ -327,4 +409,8 @@ internal sealed class DenoConfigDocument
_ => (false, null),
};
}

private static readonly Regex EntrypointRegex = new(
@"(?<path>(?:\.\.?/|/)[^""'\s]+?\.(?:ts|tsx|mts|cts|js|jsx|mjs|cjs))",
RegexOptions.IgnoreCase | RegexOptions.Compiled);
}

@@ -38,6 +38,18 @@ internal static class DenoModuleGraphResolver
foreach (var config in workspace.Configurations)
{
_cancellationToken.ThrowIfCancellationRequested();
var metadata = new Dictionary<string, string?>(StringComparer.Ordinal)
{
["vendor.enabled"] = config.VendorEnabled.ToString(CultureInfo.InvariantCulture),
["lock.enabled"] = config.LockEnabled.ToString(CultureInfo.InvariantCulture),
["nodeModules.enabled"] = config.NodeModulesDirEnabled.ToString(CultureInfo.InvariantCulture),
};

if (config.Entrypoints.Length > 0)
{
metadata["entrypoints"] = string.Join(";", config.Entrypoints);
}

var configNodeId = GetOrAddNode(
$"config::{config.RelativePath}",
config.RelativePath,
@@ -45,12 +57,7 @@ internal static class DenoModuleGraphResolver
config.AbsolutePath,
layerDigest: null,
integrity: null,
metadata: new Dictionary<string, string?>()
{
["vendor.enabled"] = config.VendorEnabled.ToString(CultureInfo.InvariantCulture),
["lock.enabled"] = config.LockEnabled.ToString(CultureInfo.InvariantCulture),
["nodeModules.enabled"] = config.NodeModulesDirEnabled.ToString(CultureInfo.InvariantCulture),
});
metadata: metadata);

if (config.ImportMapPath is not null)
{

@@ -14,6 +14,9 @@ internal static class DenoNpmCompatibilityAdapter

private static readonly Regex DynamicImportRegex = new(@"import\s*\(\s*['""](?<url>https?://[^'""]+)['""]", RegexOptions.IgnoreCase | RegexOptions.Compiled);
private static readonly Regex LiteralFetchRegex = new(@"fetch\s*\(\s*['""](?<url>https?://[^'""]+)['""]", RegexOptions.IgnoreCase | RegexOptions.Compiled);
private static readonly Regex DynamicImportIdentifierRegex = new(@"import\s*\(\s*(?<identifier>[A-Za-z_][A-Za-z0-9_]*)\s*\)", RegexOptions.IgnoreCase | RegexOptions.Compiled);
private static readonly Regex LiteralFetchIdentifierRegex = new(@"fetch\s*\(\s*(?<identifier>[A-Za-z_][A-Za-z0-9_]*)\s*\)", RegexOptions.IgnoreCase | RegexOptions.Compiled);
private static readonly Regex LiteralAssignmentRegex = new(@"(?:(?:const|let|var)\s+)(?<name>[A-Za-z_][A-Za-z0-9_]*)\s*=\s*['""](?<url>https?://[^'""]+)['""]", RegexOptions.IgnoreCase | RegexOptions.Compiled);

private static readonly HashSet<string> SourceFileExtensions = new(StringComparer.OrdinalIgnoreCase)
{
@@ -67,19 +70,41 @@ internal static class DenoNpmCompatibilityAdapter
private static ImmutableArray<DenoBuiltinUsage> CollectBuiltins(DenoModuleGraph graph)
{
var builder = ImmutableArray.CreateBuilder<DenoBuiltinUsage>();
var seen = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

foreach (var edge in graph.Edges)
{
if (edge.Specifier.StartsWith("node:", StringComparison.OrdinalIgnoreCase) ||
edge.Specifier.StartsWith("deno:", StringComparison.OrdinalIgnoreCase))
foreach (var candidate in EnumerateBuiltinCandidates(edge))
{
builder.Add(new DenoBuiltinUsage(edge.Specifier, edge.FromId, edge.Provenance));
var key = $"{candidate}::{edge.FromId}";
if (seen.Add(key))
{
builder.Add(new DenoBuiltinUsage(candidate, edge.FromId, edge.Provenance));
}
}
}

return builder.ToImmutable();
}

private static IEnumerable<string> EnumerateBuiltinCandidates(DenoModuleEdge edge)
{
if (IsBuiltin(edge.Specifier))
{
yield return edge.Specifier;
}

if (!string.IsNullOrWhiteSpace(edge.Resolution) && IsBuiltin(edge.Resolution))
{
yield return edge.Resolution!;
}
}

private static bool IsBuiltin(string? value)
=> !string.IsNullOrWhiteSpace(value) &&
(value.StartsWith("node:", StringComparison.OrdinalIgnoreCase) ||
value.StartsWith("deno:", StringComparison.OrdinalIgnoreCase));

private static ImmutableArray<DenoNpmResolution> ResolveNpmPackages(
DenoWorkspace workspace,
DenoModuleGraph graph,
@@ -326,6 +351,7 @@ internal static class DenoNpmCompatibilityAdapter
}

var lineNumber = 0;
var literalAssignments = new Dictionary<string, string>(StringComparer.Ordinal);
using var stream = new StreamReader(file.AbsolutePath);
string? line;
while ((line = stream.ReadLine()) is not null)
@@ -333,6 +359,18 @@ internal static class DenoNpmCompatibilityAdapter
lineNumber++;
cancellationToken.ThrowIfCancellationRequested();

foreach (Match assignment in LiteralAssignmentRegex.Matches(line))
{
var name = assignment.Groups["name"].Value;
var url = assignment.Groups["url"].Value;
if (string.IsNullOrWhiteSpace(name) || string.IsNullOrWhiteSpace(url))
{
continue;
}

literalAssignments[name] = url;
}

foreach (Match match in DynamicImportRegex.Matches(line))
{
var specifier = match.Groups["url"].Value;
@@ -362,6 +400,36 @@ internal static class DenoNpmCompatibilityAdapter
url,
"network.fetch.literal"));
}

foreach (Match match in DynamicImportIdentifierRegex.Matches(line))
{
var identifier = match.Groups["identifier"].Value;
if (!literalAssignments.TryGetValue(identifier, out var url) || string.IsNullOrWhiteSpace(url))
{
continue;
}

dynamicBuilder.Add(new DenoDynamicImportObservation(
file.AbsolutePath,
lineNumber,
url,
"network.dynamic_import.identifier"));
}

foreach (Match match in LiteralFetchIdentifierRegex.Matches(line))
{
var identifier = match.Groups["identifier"].Value;
if (!literalAssignments.TryGetValue(identifier, out var url) || string.IsNullOrWhiteSpace(url))
{
continue;
}

fetchBuilder.Add(new DenoLiteralFetchObservation(
file.AbsolutePath,
lineNumber,
url,
"network.fetch.identifier"));
}
}
}

@@ -68,11 +68,14 @@ internal sealed class DenoVirtualFileSystem
CancellationToken cancellationToken)
{
var files = new List<DenoVirtualFile>();
AddConfigFiles(context, configs, files, cancellationToken);
AddImportMaps(importMaps, files, cancellationToken);
AddLockFiles(lockFiles, files, cancellationToken);
AddVendorFiles(vendors, files, cancellationToken);
AddCacheFiles(cacheLocations, files, cancellationToken);
var seen = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

AddConfigFiles(context, configs, files, seen, cancellationToken);
AddImportMaps(context, importMaps, files, seen, cancellationToken);
AddLockFiles(context, lockFiles, files, seen, cancellationToken);
AddVendorFiles(vendors, files, seen, cancellationToken);
AddCacheFiles(cacheLocations, files, seen, cancellationToken);
AddWorkspaceFiles(context, vendors, cacheLocations, files, seen, cancellationToken);

return new DenoVirtualFileSystem(files);
}
@@ -81,6 +84,7 @@ internal sealed class DenoVirtualFileSystem
LanguageAnalyzerContext context,
IEnumerable<DenoConfigDocument> configs,
ICollection<DenoVirtualFile> files,
HashSet<string> seen,
CancellationToken cancellationToken)
{
foreach (var config in configs ?? Array.Empty<DenoConfigDocument>())
@@ -89,34 +93,45 @@ internal sealed class DenoVirtualFileSystem

if (File.Exists(config.AbsolutePath))
{
files.Add(CreateVirtualFile(
TryAddFile(
files,
seen,
config.AbsolutePath,
context.GetRelativePath(config.AbsolutePath),
DenoVirtualFileSource.Workspace,
layerDigest: DenoLayerMetadata.TryExtractDigest(config.AbsolutePath)));
DenoLayerMetadata.TryExtractDigest(config.AbsolutePath));
}

if (!string.IsNullOrWhiteSpace(config.ImportMapPath) && File.Exists(config.ImportMapPath))
{
files.Add(CreateVirtualFile(
TryAddFile(
files,
seen,
config.ImportMapPath!,
context.GetRelativePath(config.ImportMapPath!),
DenoVirtualFileSource.ImportMap,
layerDigest: DenoLayerMetadata.TryExtractDigest(config.ImportMapPath!)));
DenoLayerMetadata.TryExtractDigest(config.ImportMapPath!));
}

if (config.LockEnabled && !string.IsNullOrWhiteSpace(config.LockFilePath) && File.Exists(config.LockFilePath))
{
files.Add(CreateVirtualFile(
TryAddFile(
files,
seen,
config.LockFilePath!,
context.GetRelativePath(config.LockFilePath!),
DenoVirtualFileSource.LockFile,
layerDigest: DenoLayerMetadata.TryExtractDigest(config.LockFilePath!)));
DenoLayerMetadata.TryExtractDigest(config.LockFilePath!));
}
}
}

private static void AddImportMaps(IEnumerable<DenoImportMapDocument> maps, ICollection<DenoVirtualFile> files, CancellationToken cancellationToken)
private static void AddImportMaps(
LanguageAnalyzerContext context,
IEnumerable<DenoImportMapDocument> maps,
ICollection<DenoVirtualFile> files,
HashSet<string> seen,
CancellationToken cancellationToken)
{
foreach (var map in maps ?? Array.Empty<DenoImportMapDocument>())
{
@@ -127,15 +142,26 @@ internal sealed class DenoVirtualFileSystem
continue;
}

files.Add(CreateVirtualFile(
var virtualPath = string.IsNullOrWhiteSpace(map.Origin)
? context.GetRelativePath(map.AbsolutePath)
: map.Origin;

TryAddFile(
files,
seen,
map.AbsolutePath,
map.Origin,
virtualPath,
DenoVirtualFileSource.ImportMap,
layerDigest: DenoLayerMetadata.TryExtractDigest(map.AbsolutePath)));
DenoLayerMetadata.TryExtractDigest(map.AbsolutePath));
}
}

private static void AddLockFiles(IEnumerable<DenoLockFile> lockFiles, ICollection<DenoVirtualFile> files, CancellationToken cancellationToken)
private static void AddLockFiles(
LanguageAnalyzerContext context,
IEnumerable<DenoLockFile> lockFiles,
ICollection<DenoVirtualFile> files,
HashSet<string> seen,
CancellationToken cancellationToken)
{
foreach (var lockFile in lockFiles ?? Array.Empty<DenoLockFile>())
{
@@ -146,15 +172,25 @@ internal sealed class DenoVirtualFileSystem
continue;
}

files.Add(CreateVirtualFile(
var virtualPath = string.IsNullOrWhiteSpace(lockFile.RelativePath)
? context.GetRelativePath(lockFile.AbsolutePath)
: lockFile.RelativePath;

TryAddFile(
files,
seen,
lockFile.AbsolutePath,
lockFile.RelativePath,
virtualPath,
DenoVirtualFileSource.LockFile,
layerDigest: DenoLayerMetadata.TryExtractDigest(lockFile.AbsolutePath)));
DenoLayerMetadata.TryExtractDigest(lockFile.AbsolutePath));
}
}

private static void AddVendorFiles(IEnumerable<DenoVendorDirectory> vendors, ICollection<DenoVirtualFile> files, CancellationToken cancellationToken)
private static void AddVendorFiles(
IEnumerable<DenoVendorDirectory> vendors,
ICollection<DenoVirtualFile> files,
HashSet<string> seen,
CancellationToken cancellationToken)
{
foreach (var vendor in vendors ?? Array.Empty<DenoVendorDirectory>())
{
@@ -168,34 +204,45 @@ internal sealed class DenoVirtualFileSystem
foreach (var file in SafeEnumerateFiles(vendor.AbsolutePath))
{
cancellationToken.ThrowIfCancellationRequested();
files.Add(CreateVirtualFile(
var virtualPath = $"vendor://{vendor.Alias}/{DenoPathUtilities.NormalizeRelativePath(Path.GetRelativePath(vendor.AbsolutePath, file))}";
TryAddFile(
files,
seen,
file,
$"vendor://{vendor.Alias}/{DenoPathUtilities.NormalizeRelativePath(Path.GetRelativePath(vendor.AbsolutePath, file))}",
virtualPath,
DenoVirtualFileSource.Vendor,
vendor.LayerDigest ?? DenoLayerMetadata.TryExtractDigest(file)));
vendor.LayerDigest ?? DenoLayerMetadata.TryExtractDigest(file));
}

if (vendor.ImportMap is { AbsolutePath: not null } importMapFile && File.Exists(importMapFile.AbsolutePath))
{
files.Add(CreateVirtualFile(
TryAddFile(
files,
seen,
importMapFile.AbsolutePath,
$"vendor://{vendor.Alias}/import_map.json",
DenoVirtualFileSource.ImportMap,
vendor.LayerDigest ?? DenoLayerMetadata.TryExtractDigest(importMapFile.AbsolutePath)));
vendor.LayerDigest ?? DenoLayerMetadata.TryExtractDigest(importMapFile.AbsolutePath));
}

if (vendor.LockFile is { AbsolutePath: not null } vendorLock && File.Exists(vendorLock.AbsolutePath))
{
files.Add(CreateVirtualFile(
TryAddFile(
files,
seen,
vendorLock.AbsolutePath,
$"vendor://{vendor.Alias}/deno.lock",
DenoVirtualFileSource.LockFile,
vendor.LayerDigest ?? DenoLayerMetadata.TryExtractDigest(vendorLock.AbsolutePath)));
vendor.LayerDigest ?? DenoLayerMetadata.TryExtractDigest(vendorLock.AbsolutePath));
}
}
}

private static void AddCacheFiles(IEnumerable<DenoCacheLocation> cacheLocations, ICollection<DenoVirtualFile> files, CancellationToken cancellationToken)
private static void AddCacheFiles(
IEnumerable<DenoCacheLocation> cacheLocations,
ICollection<DenoVirtualFile> files,
HashSet<string> seen,
CancellationToken cancellationToken)
{
foreach (var cache in cacheLocations ?? Array.Empty<DenoCacheLocation>())
{
@@ -208,15 +255,105 @@ internal sealed class DenoVirtualFileSystem
foreach (var file in SafeEnumerateFiles(cache.AbsolutePath))
{
cancellationToken.ThrowIfCancellationRequested();
files.Add(CreateVirtualFile(
var virtualPath = $"deno-dir://{cache.Alias}/{DenoPathUtilities.NormalizeRelativePath(Path.GetRelativePath(cache.AbsolutePath, file))}";
TryAddFile(
files,
seen,
file,
$"deno-dir://{cache.Alias}/{DenoPathUtilities.NormalizeRelativePath(Path.GetRelativePath(cache.AbsolutePath, file))}",
virtualPath,
cache.Kind == DenoCacheLocationKind.Layer ? DenoVirtualFileSource.Layer : DenoVirtualFileSource.DenoDir,
cache.LayerDigest ?? DenoLayerMetadata.TryExtractDigest(file)));
cache.LayerDigest ?? DenoLayerMetadata.TryExtractDigest(file));
}
}
}

private static void AddWorkspaceFiles(
LanguageAnalyzerContext context,
IEnumerable<DenoVendorDirectory> vendors,
IEnumerable<DenoCacheLocation> cacheLocations,
ICollection<DenoVirtualFile> files,
HashSet<string> seen,
CancellationToken cancellationToken)
{
var skipRoots = BuildSkipRoots(vendors, cacheLocations);
foreach (var file in SafeEnumerateFiles(context.RootPath))
{
cancellationToken.ThrowIfCancellationRequested();
if (ShouldSkip(file, skipRoots))
{
continue;
}

TryAddFile(
files,
seen,
file,
context.GetRelativePath(file),
DenoVirtualFileSource.Workspace,
DenoLayerMetadata.TryExtractDigest(file));
}
}

private static IReadOnlyList<string> BuildSkipRoots(
IEnumerable<DenoVendorDirectory> vendors,
IEnumerable<DenoCacheLocation> cacheLocations)
{
var list = new List<string>();

foreach (var vendor in vendors ?? Array.Empty<DenoVendorDirectory>())
{
if (!string.IsNullOrWhiteSpace(vendor.AbsolutePath))
{
list.Add(Path.GetFullPath(vendor.AbsolutePath));
}
}

foreach (var cache in cacheLocations ?? Array.Empty<DenoCacheLocation>())
{
if (!string.IsNullOrWhiteSpace(cache.AbsolutePath))
{
list.Add(Path.GetFullPath(cache.AbsolutePath));
}
}

return list;
}

private static bool ShouldSkip(string path, IReadOnlyList<string> skipRoots)
{
var fullPath = Path.GetFullPath(path);
foreach (var root in skipRoots)
{
if (fullPath.StartsWith(root, StringComparison.OrdinalIgnoreCase))
{
return true;
}
}

return false;
}

private static void TryAddFile(
ICollection<DenoVirtualFile> files,
HashSet<string> seen,
string absolutePath,
string virtualPath,
DenoVirtualFileSource source,
string? layerDigest)
{
var normalized = Path.GetFullPath(absolutePath);
if (!seen.Add($"{source}:{normalized}"))
{
return;
}

files.Add(CreateVirtualFile(
normalized,
virtualPath,
source,
layerDigest));
}

private static IEnumerable<string> SafeEnumerateFiles(string root)
{
IEnumerable<string> iterator;

@@ -39,8 +39,23 @@ internal static class DenoObservationBuilder

foreach (var node in moduleGraph.Nodes)
{
if (node.Kind == DenoModuleKind.WorkspaceConfig &&
node.Metadata.TryGetValue("entry", out var entry) &&
if (node.Kind != DenoModuleKind.WorkspaceConfig)
{
continue;
}

if (node.Metadata.TryGetValue("entrypoints", out var entries) &&
!string.IsNullOrWhiteSpace(entries))
{
foreach (var entry in entries.Split(';', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries))
{
if (!string.IsNullOrWhiteSpace(entry))
{
entrypoints.Add(entry!);
}
}
}
else if (node.Metadata.TryGetValue("entry", out var entry) &&
!string.IsNullOrWhiteSpace(entry))
{
entrypoints.Add(entry!);

@@ -7,7 +7,8 @@ internal static class RubyObservationBuilder
|
||||
public static RubyObservationDocument Build(
|
||||
IReadOnlyList<RubyPackage> packages,
|
||||
RubyRuntimeGraph runtimeGraph,
|
||||
RubyCapabilities capabilities)
|
||||
RubyCapabilities capabilities,
|
||||
string? bundledWith)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(packages);
|
||||
ArgumentNullException.ThrowIfNull(runtimeGraph);
|
||||
@@ -34,7 +35,11 @@ internal static class RubyObservationBuilder
|
||||
.OrderBy(static scheduler => scheduler, StringComparer.OrdinalIgnoreCase)
|
||||
.ToImmutableArray());
|
||||
|
||||
return new RubyObservationDocument(packageItems, runtimeItems, capabilitySummary);
|
||||
var normalizedBundler = string.IsNullOrWhiteSpace(bundledWith)
|
||||
? null
|
||||
: bundledWith.Trim();
|
||||
|
||||
return new RubyObservationDocument(packageItems, runtimeItems, capabilitySummary, normalizedBundler);
|
||||
}
|
||||
|
||||
private static RubyObservationPackage CreatePackage(RubyPackage package)
|
||||
|
||||
@@ -5,7 +5,8 @@ namespace StellaOps.Scanner.Analyzers.Lang.Ruby.Internal.Observations;
|
||||
internal sealed record RubyObservationDocument(
|
||||
ImmutableArray<RubyObservationPackage> Packages,
|
||||
ImmutableArray<RubyObservationRuntimeEdge> RuntimeEdges,
|
||||
RubyObservationCapabilitySummary Capabilities);
|
||||
RubyObservationCapabilitySummary Capabilities,
|
||||
string? BundledWith);
|
||||
|
||||
internal sealed record RubyObservationPackage(
|
||||
string Name,
|
||||
|
||||
@@ -20,6 +20,7 @@ internal static class RubyObservationSerializer
WritePackages(writer, document.Packages);
WriteRuntimeEdges(writer, document.RuntimeEdges);
WriteCapabilities(writer, document.Capabilities);
WriteBundledWith(writer, document.BundledWith);

writer.WriteEndObject();
writer.Flush();
@@ -100,6 +101,16 @@ internal static class RubyObservationSerializer
writer.WriteEndObject();
}

private static void WriteBundledWith(Utf8JsonWriter writer, string? bundledWith)
{
if (string.IsNullOrWhiteSpace(bundledWith))
{
return;
}

writer.WriteString("bundledWith", bundledWith);
}

private static void WriteStringArray(Utf8JsonWriter writer, string propertyName, ImmutableArray<string> values)
{
writer.WritePropertyName(propertyName);

@@ -50,7 +50,7 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer

if (packages.Count > 0)
{
EmitObservation(context, writer, packages, runtimeGraph, capabilities);
EmitObservation(context, writer, packages, runtimeGraph, capabilities, lockData.BundledWith);
}
}

@@ -87,7 +88,8 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
LanguageComponentWriter writer,
IReadOnlyList<RubyPackage> packages,
RubyRuntimeGraph runtimeGraph,
RubyCapabilities capabilities)
RubyCapabilities capabilities,
string? bundledWith)
{
ArgumentNullException.ThrowIfNull(context);
ArgumentNullException.ThrowIfNull(writer);
@@ -95,7 +96,7 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
ArgumentNullException.ThrowIfNull(runtimeGraph);
ArgumentNullException.ThrowIfNull(capabilities);

var observationDocument = RubyObservationBuilder.Build(packages, runtimeGraph, capabilities);
var observationDocument = RubyObservationBuilder.Build(packages, runtimeGraph, capabilities, bundledWith);
var observationJson = RubyObservationSerializer.Serialize(observationDocument);
var observationHash = RubyObservationSerializer.ComputeSha256(observationJson);
var observationBytes = Encoding.UTF8.GetBytes(observationJson);
@@ -103,7 +104,8 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
var observationMetadata = BuildObservationMetadata(
packages.Count,
observationDocument.RuntimeEdges.Length,
observationDocument.Capabilities);
observationDocument.Capabilities,
observationDocument.BundledWith);

TryPersistObservation(Id, context, observationBytes, observationMetadata);

@@ -131,7 +133,8 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
private static IEnumerable<KeyValuePair<string, string?>> BuildObservationMetadata(
int packageCount,
int runtimeEdgeCount,
RubyObservationCapabilitySummary capabilities)
RubyObservationCapabilitySummary capabilities,
string? bundledWith)
{
yield return new KeyValuePair<string, string?>("ruby.observation.packages", packageCount.ToString(CultureInfo.InvariantCulture));
yield return new KeyValuePair<string, string?>("ruby.observation.runtime_edges", runtimeEdgeCount.ToString(CultureInfo.InvariantCulture));
@@ -139,6 +142,17 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
yield return new KeyValuePair<string, string?>("ruby.observation.capability.net", capabilities.UsesNetwork ? "true" : "false");
yield return new KeyValuePair<string, string?>("ruby.observation.capability.serialization", capabilities.UsesSerialization ? "true" : "false");
yield return new KeyValuePair<string, string?>("ruby.observation.capability.schedulers", capabilities.JobSchedulers.Length.ToString(CultureInfo.InvariantCulture));
if (capabilities.JobSchedulers.Length > 0)
{
yield return new KeyValuePair<string, string?>(
"ruby.observation.capability.scheduler_list",
string.Join(';', capabilities.JobSchedulers));
}

if (!string.IsNullOrWhiteSpace(bundledWith))
{
yield return new KeyValuePair<string, string?>("ruby.observation.bundler_version", bundledWith);
}
}

private static void TryPersistObservation(

@@ -2,6 +2,7 @@

| Task ID | State | Notes |
| --- | --- | --- |
| `SCANNER-ENG-0009` | DOING (2025-11-12) | Added bundler-version metadata + observation summaries, richer CLI output, and the `complex-app` fixture to drive parity validation. |
| `SCANNER-ENG-0016` | DONE (2025-11-10) | RubyLockCollector merged with vendor cache ingestion; workspace overrides, bundler groups, git/path fixture, and offline-kit mirror updated. |
| `SCANNER-ENG-0017` | DONE (2025-11-09) | Build runtime require/autoload graph builder with tree-sitter Ruby per design §4.4, feed EntryTrace hints. |
| `SCANNER-ENG-0018` | DONE (2025-11-09) | Emit Ruby capability + framework surface signals, align with design §4.5 / Sprint 138. |

@@ -7,6 +7,7 @@ namespace StellaOps.Scanner.Analyzers.Lang.Deno.Tests.Deno;

public sealed class DenoWorkspaceNormalizerTests
{

[Fact]
public async Task WorkspaceFixtureProducesDeterministicOutputAsync()
{
@@ -79,18 +80,50 @@ public sealed class DenoWorkspaceNormalizerTests
node => node.Kind == DenoModuleKind.RemoteModule &&
node.Id == "remote::https://deno.land/std@0.207.0/http/server.ts");
Assert.NotNull(remoteNode);
Assert.Equal("sha256-deadbeef", remoteNode!.Integrity);
var expectedIntegrity = lockFile.RemoteEntries["https://deno.land/std@0.207.0/http/server.ts"];
Assert.Equal(expectedIntegrity, remoteNode!.Integrity);

var vendorCacheEdges = graph.Edges
.Where(edge => edge.ImportKind == DenoImportKind.Cache &&
edge.Provenance.StartsWith("vendor-cache:", StringComparison.Ordinal))
.ToArray();

if (vendorCacheEdges.Length == 0)
{
var sample = string.Join(
Environment.NewLine,
graph.Edges
.Select(edge => $"{edge.ImportKind}:{edge.Specifier}:{edge.Provenance}")
.Take(10));
Assert.Fail($"Expected vendor cache edges but none were found. Sample edges:{Environment.NewLine}{sample}");
}

var vendorEdge = vendorCacheEdges.FirstOrDefault(
edge => edge.Specifier.Contains("https://deno.land/std@0.207.0/http/server.ts", StringComparison.Ordinal));
if (vendorEdge is null)
{
var details = string.Join(
Environment.NewLine,
vendorCacheEdges.Select(edge => $"{edge.Specifier} [{edge.Provenance}] -> {edge.Resolution}"));
Assert.Fail($"Unable to locate vendor cache edge for std server.ts. Observed edges:{Environment.NewLine}{details}");
}

var npmBridgeEdges = graph.Edges
.Where(edge => edge.ImportKind == DenoImportKind.NpmBridge)
.ToArray();
if (npmBridgeEdges.Length == 0)
{
var bridgeSample = string.Join(
Environment.NewLine,
graph.Edges
.Select(edge => $"{edge.ImportKind}:{edge.Specifier}:{edge.Resolution}")
.Take(10));
Assert.Fail($"No npm bridge edges discovered. Sample:{Environment.NewLine}{bridgeSample}");
}

Assert.Contains(
graph.Edges,
edge => edge.ImportKind == DenoImportKind.Cache &&
edge.Provenance.StartsWith("vendor-cache:", StringComparison.Ordinal) &&
edge.Specifier.Contains("https://deno.land/std@0.207.0/http/server.ts", StringComparison.Ordinal));

Assert.Contains(
graph.Edges,
edge => edge.ImportKind == DenoImportKind.NpmBridge &&
edge.Specifier == "npm:dayjs@1" &&
npmBridgeEdges,
edge => edge.Specifier == "npm:dayjs@1" &&
edge.Resolution == "dayjs@1.11.12");

Assert.Contains(

@@ -1,5 +1,10 @@
using System.Linq;
using System.Text.Encodings.Web;
using System.Text.Json;
using System.Text.Json.Nodes;
using StellaOps.Scanner.Analyzers.Lang;
using StellaOps.Scanner.Analyzers.Lang.Deno;
using StellaOps.Scanner.Analyzers.Lang.Deno.Tests.TestFixtures;
using StellaOps.Scanner.Analyzers.Lang.Deno.Tests.TestUtilities;

namespace StellaOps.Scanner.Analyzers.Lang.Deno.Tests.Golden;
@@ -9,13 +14,19 @@ public sealed class DenoAnalyzerGoldenTests
[Fact]
public async Task AnalyzerMatchesGoldenSnapshotAsync()
{
var fixture = TestPaths.ResolveFixture("lang", "deno", "full");
var golden = Path.Combine(fixture, "expected.json");
var fixtureRoot = TestPaths.ResolveFixture("lang", "deno", "full");
var golden = Path.Combine(fixtureRoot, "expected.json");
var analyzers = new ILanguageAnalyzer[] { new DenoLanguageAnalyzer() };
var cancellationToken = TestContext.Current.CancellationToken;

var json = await LanguageAnalyzerTestHarness.RunToJsonAsync(fixture, analyzers, cancellationToken);
var normalized = Normalize(json, fixture);
var (workspaceRoot, envDir) = DenoWorkspaceTestFixture.Create();
var previousDenoDir = Environment.GetEnvironmentVariable("DENO_DIR");
try
{
Environment.SetEnvironmentVariable("DENO_DIR", envDir);

var json = await LanguageAnalyzerTestHarness.RunToJsonAsync(workspaceRoot, analyzers, cancellationToken);
var normalized = Normalize(json, workspaceRoot);
var expected = await File.ReadAllTextAsync(golden, cancellationToken);

normalized = normalized.TrimEnd();
@@ -29,6 +40,12 @@ public sealed class DenoAnalyzerGoldenTests

Assert.Equal(expected, normalized);
}
finally
{
Environment.SetEnvironmentVariable("DENO_DIR", previousDenoDir);
DenoWorkspaceTestFixture.Cleanup(workspaceRoot);
}
}

private static string Normalize(string json, string workspaceRoot)
{
@@ -37,10 +54,206 @@ public sealed class DenoAnalyzerGoldenTests
return string.Empty;
}

var node = JsonNode.Parse(json) ?? new JsonArray();
if (node is JsonArray array)
{
foreach (var element in array.OfType<JsonObject>())
{
NormalizeComponent(element);
}
}

var normalized = node.ToJsonString(JsonSerializerOptionsProvider);
normalized = ReplaceWorkspacePaths(normalized, workspaceRoot);
return normalized;
}

private static void NormalizeComponent(JsonObject component)
{
if (component is null)
{
return;
}

SortMetadata(component);

if (!component.TryGetPropertyValue("type", out var typeNode))
{
return;
}

var type = typeNode?.GetValue<string>();
if (string.Equals(type, "deno-container", StringComparison.OrdinalIgnoreCase))
{
NormalizeContainer(component);
}
else if (string.Equals(type, "deno-observation", StringComparison.OrdinalIgnoreCase))
{
NormalizeObservation(component);
}
}

private static void NormalizeContainer(JsonObject container)
{
NormalizeAliasProperty(container, "name");
NormalizeComponentKey(container);

if (container.TryGetPropertyValue("metadata", out var metadataNode) &&
metadataNode is JsonObject metadata)
{
NormalizeAliasProperty(metadata, "deno.container.identifier");
NormalizeAliasProperty(metadata, "deno.container.meta.alias");
}

if (container.TryGetPropertyValue("evidence", out var evidenceNode) &&
evidenceNode is JsonArray evidenceArray)
{
foreach (var evidence in evidenceArray.OfType<JsonObject>())
{
if (evidence.TryGetPropertyValue("source", out var sourceNode) &&
string.Equals(sourceNode?.GetValue<string>(), "deno.container", StringComparison.OrdinalIgnoreCase))
{
NormalizeAliasProperty(evidence, "value");
}
}
}
}

private static void NormalizeComponentKey(JsonObject container)
{
if (!container.TryGetPropertyValue("componentKey", out var keyNode) ||
keyNode is not JsonValue keyValue ||
!keyValue.TryGetValue<string>(out var componentKey))
{
return;
}

var lastSeparator = componentKey.LastIndexOf(':');
if (lastSeparator < 0)
{
container["componentKey"] = NormalizeAliasValue(componentKey);
return;
}

var prefix = componentKey[..(lastSeparator + 1)];
var alias = componentKey[(lastSeparator + 1)..];
container["componentKey"] = prefix + NormalizeAliasValue(alias);
}

private static void NormalizeAliasProperty(JsonObject obj, string propertyName)
{
if (!obj.TryGetPropertyValue(propertyName, out var node) ||
node is not JsonValue valueNode ||
!valueNode.TryGetValue<string>(out var value) ||
string.IsNullOrWhiteSpace(value))
{
return;
}

obj[propertyName] = NormalizeAliasValue(value);
}

private static string NormalizeAliasValue(string value)
=> TryNormalizeAlias(value, out var normalized) ? normalized : value;

private static bool TryNormalizeAlias(string value, out string normalized)
{
normalized = value;
if (string.IsNullOrWhiteSpace(value))
{
return false;
}

var lastDash = value.LastIndexOf('-');
if (lastDash <= 0)
{
return false;
}

var suffix = value[(lastDash + 1)..];
if (suffix.Length != 12)
{
return false;
}

foreach (var character in suffix)
{
if (!IsLowerHex(character))
{
return false;
}
}

normalized = value[..lastDash] + "-<hash>";
return true;
}

private static bool IsLowerHex(char value)
=> (value is >= '0' and <= '9') || (value is >= 'a' and <= 'f');
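The helpers above treat a trailing 12-character lowercase-hex suffix as a content hash and collapse it to `-<hash>` so golden snapshots stay deterministic. A minimal Ruby sketch of the same rule (`normalize_alias` is an illustrative name, not part of the codebase):

```ruby
# Collapse a trailing "-<12 lowercase hex chars>" suffix to "-<hash>",
# mirroring TryNormalizeAlias above; any other value passes through.
def normalize_alias(value)
  return value if value.nil? || value.empty?
  last_dash = value.rindex('-')
  return value if last_dash.nil? || last_dash.zero?
  suffix = value[(last_dash + 1)..]
  return value unless suffix.match?(/\A[0-9a-f]{12}\z/)
  "#{value[0...last_dash]}-<hash>"
end
```

Uppercase hex or a suffix of any other length is deliberately left untouched, matching the length-12 and `IsLowerHex` guards.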
private static string ReplaceWorkspacePaths(string value, string workspaceRoot)
{
var normalizedRoot = workspaceRoot.Replace("\\", "/", StringComparison.Ordinal);
var builder = json.Replace(normalizedRoot, "<workspace>", StringComparison.Ordinal);
var normalizedRootLower = normalizedRoot.ToLowerInvariant();
var result = value
.Replace(normalizedRoot, "<workspace>", StringComparison.Ordinal)
.Replace(normalizedRootLower, "<workspace>", StringComparison.Ordinal);

var altRoot = workspaceRoot.Replace("/", "\\", StringComparison.Ordinal);
builder = builder.Replace(altRoot, "<workspace>", StringComparison.Ordinal);
return builder;
var altRootLower = altRoot.ToLowerInvariant();
result = result
.Replace(altRoot, "<workspace>", StringComparison.Ordinal)
.Replace(altRootLower, "<workspace>", StringComparison.Ordinal);

return result;
}

private static void NormalizeObservation(JsonObject observation)
{
if (observation.TryGetPropertyValue("metadata", out var metadataNode) &&
metadataNode is JsonObject metadata &&
metadata.TryGetPropertyValue("deno.observation.hash", out var hashNode) &&
hashNode is JsonValue)
{
metadata["deno.observation.hash"] = "<hash>";
}

if (observation.TryGetPropertyValue("evidence", out var evidenceNode) &&
evidenceNode is JsonArray evidenceArray)
{
foreach (var evidence in evidenceArray.OfType<JsonObject>())
{
if (evidence.TryGetPropertyValue("source", out var sourceNode) &&
string.Equals(sourceNode?.GetValue<string>(), "deno.observation", StringComparison.OrdinalIgnoreCase) &&
evidence.TryGetPropertyValue("sha256", out var shaNode) &&
shaNode is JsonValue)
{
evidence["sha256"] = "<hash>";
}
}
}
}

private static void SortMetadata(JsonObject component)
{
if (!component.TryGetPropertyValue("metadata", out var metadataNode) ||
metadataNode is not JsonObject metadata)
{
return;
}

var sorted = new JsonObject();
foreach (var entry in metadata.OrderBy(pair => pair.Key, StringComparer.Ordinal))
{
sorted[entry.Key] = entry.Value?.DeepClone();
}

component["metadata"] = sorted;
}

private static readonly JsonSerializerOptions JsonSerializerOptionsProvider = new()
{
WriteIndented = true,
Encoder = JavaScriptEncoder.UnsafeRelaxedJsonEscaping
};
}
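The `ReplaceWorkspacePaths` change above substitutes every spelling of the workspace root (forward slashes, backslashes, and their lowercased variants) with a `<workspace>` placeholder. A small Ruby sketch of that substitution, assuming plain literal replacement is all that is required:

```ruby
# Replace the workspace root in all four spellings (slash direction x case)
# with the "<workspace>" placeholder used by the golden snapshots.
def replace_workspace_paths(text, root)
  fwd  = root.tr('\\', '/')
  back = root.tr('/', '\\')
  [fwd, fwd.downcase, back, back.downcase].uniq.reduce(text) do |acc, needle|
    acc.gsub(needle, '<workspace>')
  end
end
```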
@@ -0,0 +1,3 @@
---
BUNDLE_GEMFILE: Gemfile
BUNDLE_PATH: vendor/custom-bundle
@@ -0,0 +1,20 @@
source "https://rubygems.org/"

gem "rack", "~> 3.1"

group :web do
gem "sinatra", "~> 3.1"
gem "pagy"
end

group :jobs do
gem "sidekiq", "~> 7.2"
end

group :ops do
gem "clockwork"
end

group :tools do
gem "pry", "= 0.14.2"
end
@@ -0,0 +1,27 @@
GEM
remote: https://rubygems.org/
specs:
clockwork (3.0.0)
pagy (6.5.0)
pry (0.14.2)
coderay (~> 1.1)
method_source (~> 1.0)
rack (3.1.2)
sidekiq (7.2.1)
rack (~> 2.0)
sinatra (3.1.0)
rack (~> 3.0)

PLATFORMS
ruby

DEPENDENCIES
clockwork
pagy
pry (= 0.14.2)
rack (~> 3.1)
sidekiq (~> 7.2)
sinatra (~> 3.1)

BUNDLED WITH
2.5.3
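The `ruby.observation.bundler_version` metadata introduced in this change carries the lockfile's `BUNDLED WITH` value, such as `2.5.3` in the fixture above. A hedged Ruby sketch of pulling that value out of a Gemfile.lock (`bundled_with` is a hypothetical helper, not the analyzer's actual parser):

```ruby
# Return the version under the BUNDLED WITH stanza, or nil when absent.
def bundled_with(lock_text)
  lines = lock_text.lines.map(&:rstrip)
  index = lines.index('BUNDLED WITH')
  return nil unless index
  lines[(index + 1)..].find { |line| !line.strip.empty? }&.strip
end
```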
@@ -0,0 +1,18 @@
require "rack"
require "sinatra"
require "pagy/backend"
require "net/http"
require_relative '../config/environment'

module ConsoleApp
class Server < Sinatra::Base
get '/' do
http = Net::HTTP.new('example.invalid', 443)
http.use_ssl = true
http.start do |client|
client.get('/')
end
'ok'
end
end
end
@@ -0,0 +1,10 @@
require "pagy"
require "json"

module ConsoleApp
module Boot
def self.load!
JSON.parse('{feature:advisory-ai}')
end
end
end
@@ -0,0 +1,202 @@
[
{
"analyzerId": "ruby",
"componentKey": "observation::ruby",
"name": "Ruby Observation Summary",
"type": "ruby-observation",
"usedByEntrypoint": false,
"metadata": {
"ruby.observation.bundler_version": "2.5.3",
"ruby.observation.capability.exec": "false",
"ruby.observation.capability.net": "true",
"ruby.observation.capability.scheduler_list": "clockwork;sidekiq",
"ruby.observation.capability.schedulers": "2",
"ruby.observation.capability.serialization": "false",
"ruby.observation.packages": "6",
"ruby.observation.runtime_edges": "5"
},
"evidence": [
{
"kind": "derived",
"source": "ruby.observation",
"locator": "document",
"value": "{\u0022packages\u0022:[{\u0022name\u0022:\u0022clockwork\u0022,\u0022version\u0022:\u00223.0.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022ops\u0022]},{\u0022name\u0022:\u0022pagy\u0022,\u0022version\u0022:\u00226.5.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022web\u0022]},{\u0022name\u0022:\u0022pry\u0022,\u0022version\u0022:\u00220.14.2\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022tools\u0022]},{\u0022name\u0022:\u0022rack\u0022,\u0022version\u0022:\u00223.1.2\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022sidekiq\u0022,\u0022version\u0022:\u00227.2.1\u0022,\u0022source\u0022:\u0022vendor\u0022,\u0022declaredOnly\u0022:false,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022artifact\u0022:\u0022vendor/custom-bundle/cache/sidekiq-7.2.1.gem\u0022,\u0022groups\u0022:[\u0022jobs\u0022]},{\u0022name\u0022:\u0022sinatra\u0022,\u0022version\u0022:\u00223.1.0\u0022,\u0022source\u0022:\u0022vendor-cache\u0022,\u0022declaredOnly\u0022:false,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022artifact\u0022:\u0022vendor/cache/sinatra-3.1.0.gem\u0022,\u0022groups\u0022:[\u0022web\u0022]}],\u0022runtimeEdges\u0022:[{\u0022package\u0022:\u0022clockwork\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022scripts/worker.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022pagy\u0022,\u0022usedByEntrypoint\u0022:true,\u0022files\u0022:[\u0022app/main.rb\u0022,\u0022config/environment.rb\u0022],\u0022e
ntrypoints\u0022:[\u0022config/environment.rb\u0022],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022rack\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022app/main.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022sidekiq\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022scripts/worker.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022sinatra\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022app/main.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]}],\u0022capabilities\u0022:{\u0022usesExec\u0022:false,\u0022usesNetwork\u0022:true,\u0022usesSerialization\u0022:false,\u0022jobSchedulers\u0022:[\u0022clockwork\u0022,\u0022sidekiq\u0022]},\u0022bundledWith\u0022:\u00222.5.3\u0022}",
"sha256": "sha256:beaefa12ec1f49e62343781ffa949ec3fa006f0452cf8a342a9a12be3cda1d82"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"analyzerId": "ruby",
|
||||
"componentKey": "purl::pkg:gem/clockwork@3.0.0",
|
||||
"purl": "pkg:gem/clockwork@3.0.0",
|
||||
"name": "clockwork",
|
||||
"version": "3.0.0",
|
||||
"type": "gem",
|
||||
"usedByEntrypoint": false,
|
||||
"metadata": {
|
||||
"capability.net": "true",
|
||||
"capability.scheduler": "clockwork;sidekiq",
|
||||
"capability.scheduler.clockwork": "true",
|
||||
"capability.scheduler.sidekiq": "true",
|
||||
"declaredOnly": "true",
|
||||
"groups": "ops",
|
||||
"lockfile": "Gemfile.lock",
|
||||
"runtime.files": "scripts/worker.rb",
|
||||
"runtime.reasons": "require-static",
|
||||
"runtime.used": "true",
|
||||
"source": "https://rubygems.org/"
|
||||
},
|
||||
"evidence": [
|
||||
{
|
||||
"kind": "file",
|
||||
"source": "Gemfile.lock",
|
||||
"locator": "Gemfile.lock"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"analyzerId": "ruby",
|
||||
"componentKey": "purl::pkg:gem/pagy@6.5.0",
|
||||
"purl": "pkg:gem/pagy@6.5.0",
|
||||
"name": "pagy",
|
||||
"version": "6.5.0",
|
||||
"type": "gem",
|
||||
"usedByEntrypoint": true,
|
||||
"metadata": {
|
||||
"capability.net": "true",
|
||||
"capability.scheduler": "clockwork;sidekiq",
|
||||
"capability.scheduler.clockwork": "true",
|
||||
"capability.scheduler.sidekiq": "true",
|
||||
"declaredOnly": "true",
|
||||
"groups": "web",
|
||||
"lockfile": "Gemfile.lock",
|
||||
"runtime.entrypoints": "config/environment.rb",
|
||||
"runtime.files": "app/main.rb;config/environment.rb",
|
||||
"runtime.reasons": "require-static",
|
||||
"runtime.used": "true",
|
||||
"source": "https://rubygems.org/"
|
||||
},
|
||||
"evidence": [
|
||||
{
|
||||
"kind": "file",
|
||||
"source": "Gemfile.lock",
|
||||
"locator": "Gemfile.lock"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"analyzerId": "ruby",
|
||||
"componentKey": "purl::pkg:gem/pry@0.14.2",
|
||||
"purl": "pkg:gem/pry@0.14.2",
|
||||
"name": "pry",
|
||||
"version": "0.14.2",
|
||||
"type": "gem",
|
||||
"usedByEntrypoint": false,
|
||||
"metadata": {
|
||||
"capability.net": "true",
|
||||
"capability.scheduler": "clockwork;sidekiq",
|
||||
"capability.scheduler.clockwork": "true",
|
||||
"capability.scheduler.sidekiq": "true",
|
||||
"declaredOnly": "true",
|
||||
"groups": "tools",
|
||||
"lockfile": "Gemfile.lock",
|
||||
"source": "https://rubygems.org/"
|
||||
},
|
||||
"evidence": [
|
||||
{
|
||||
"kind": "file",
|
||||
"source": "Gemfile.lock",
|
||||
"locator": "Gemfile.lock"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"analyzerId": "ruby",
|
||||
"componentKey": "purl::pkg:gem/rack@3.1.2",
|
||||
"purl": "pkg:gem/rack@3.1.2",
|
||||
"name": "rack",
|
||||
"version": "3.1.2",
|
||||
"type": "gem",
|
||||
"usedByEntrypoint": false,
|
||||
"metadata": {
|
||||
"capability.net": "true",
|
||||
"capability.scheduler": "clockwork;sidekiq",
|
||||
"capability.scheduler.clockwork": "true",
|
||||
"capability.scheduler.sidekiq": "true",
|
||||
"declaredOnly": "true",
|
||||
"groups": "default",
|
||||
"lockfile": "Gemfile.lock",
|
||||
"runtime.files": "app/main.rb",
|
||||
"runtime.reasons": "require-static",
|
||||
"runtime.used": "true",
|
||||
"source": "https://rubygems.org/"
|
||||
},
|
||||
"evidence": [
|
||||
{
|
||||
"kind": "file",
|
||||
"source": "Gemfile.lock",
|
||||
"locator": "Gemfile.lock"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"analyzerId": "ruby",
|
||||
"componentKey": "purl::pkg:gem/sidekiq@7.2.1",
|
||||
"purl": "pkg:gem/sidekiq@7.2.1",
|
||||
"name": "sidekiq",
|
||||
"version": "7.2.1",
|
||||
"type": "gem",
|
||||
"usedByEntrypoint": false,
|
||||
"metadata": {
|
||||
"artifact": "vendor/custom-bundle/cache/sidekiq-7.2.1.gem",
|
||||
"capability.net": "true",
|
||||
"capability.scheduler": "clockwork;sidekiq",
|
||||
"capability.scheduler.clockwork": "true",
|
||||
"capability.scheduler.sidekiq": "true",
|
||||
"declaredOnly": "false",
|
||||
"groups": "jobs",
|
||||
"lockfile": "Gemfile.lock",
|
||||
"runtime.files": "scripts/worker.rb",
|
||||
"runtime.reasons": "require-static",
|
||||
"runtime.used": "true",
|
||||
"source": "vendor"
|
||||
},
|
||||
"evidence": [
|
||||
{
|
||||
"kind": "file",
|
||||
"source": "sidekiq-7.2.1.gem",
|
||||
"locator": "vendor/custom-bundle/cache/sidekiq-7.2.1.gem"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"analyzerId": "ruby",
|
||||
"componentKey": "purl::pkg:gem/sinatra@3.1.0",
|
||||
"purl": "pkg:gem/sinatra@3.1.0",
|
||||
"name": "sinatra",
|
||||
"version": "3.1.0",
|
||||
"type": "gem",
|
||||
"usedByEntrypoint": false,
|
||||
"metadata": {
|
||||
"artifact": "vendor/cache/sinatra-3.1.0.gem",
|
||||
"capability.net": "true",
|
||||
"capability.scheduler": "clockwork;sidekiq",
|
||||
"capability.scheduler.clockwork": "true",
|
||||
"capability.scheduler.sidekiq": "true",
|
||||
"declaredOnly": "false",
|
||||
"groups": "web",
|
||||
"lockfile": "Gemfile.lock",
|
||||
"runtime.files": "app/main.rb",
|
||||
"runtime.reasons": "require-static",
|
||||
"runtime.used": "true",
|
||||
"source": "vendor-cache"
|
||||
},
|
||||
"evidence": [
|
||||
{
|
||||
"kind": "file",
|
||||
"source": "sinatra-3.1.0.gem",
|
||||
"locator": "vendor/cache/sinatra-3.1.0.gem"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
@@ -0,0 +1,15 @@
require "sidekiq"
require "clockwork"
require "open3"

module ConsoleApp
class Worker
include Sidekiq::Worker

def perform
Clockwork.every(1.hour, 'ping') do
Open3.popen3('echo', 'ping') { |_stdin, stdout, _stderr, wait_thr| wait_thr.value }
end
end
end
end
@@ -0,0 +1,20 @@
source "https://rubygems.org"

ruby "3.1.2"

gem "rack", "~> 3.0"
gem "rails", "7.1.0"
gem "puma", "~> 6.1", group: [:web]
gem "sqlite3", group: :db

group :jobs do
gem "sidekiq", "7.2.1"
end

group :development, :test do
gem "pry", "0.14.2"
end

group :test do
gem "rspec", "3.12.0"
end
@@ -0,0 +1,33 @@
GEM
remote: https://rubygems.org/
specs:
coderay (1.1.3)
connection_pool (2.4.1)
method_source (1.0.0)
pry (0.14.2)
coderay (~> 1.1)
method_source (~> 1.0)
puma (6.1.1)
rack (3.0.8)
rails (7.1.0)
rspec (3.12.0)
sidekiq (7.2.1)
connection_pool (>= 2.3.0)
rack (~> 2.0)
sqlite3 (1.6.0-x86_64-linux)

PLATFORMS
ruby
x86_64-linux

DEPENDENCIES
pry (= 0.14.2)
puma (~> 6.1)
rack (~> 3.0)
rails (= 7.1.0)
rspec (= 3.12.0)
sidekiq (= 7.2.1)
sqlite3

BUNDLED WITH
2.4.22
@@ -0,0 +1,17 @@
#!/usr/bin/env ruby
require "rack"
require "puma"
require "sidekiq"
require "yaml"
require "net/http"

require_relative "app/workers/email_worker"

class App
def call(env)
EmailWorker.perform_async(env["PATH_INFO"])
[200, { "Content-Type" => "text/plain" }, ["ok"]]
end
end

run App.new
@@ -0,0 +1,13 @@
require "sidekiq"
require "net/http"
require "yaml"

class EmailWorker
include Sidekiq::Worker

def perform(user_id)
system("echo sending email #{user_id}")
Net::HTTP.get(URI("https://example.com/users/#{user_id}"))
YAML.load(File.read("config/clock.rb"))
end
end
@@ -0,0 +1,4 @@
require "rack"
require_relative "app.rb"

run App.new
@@ -0,0 +1,5 @@
require "clockwork"

module Clockwork
every(1.hour, "cleanup") { puts "cleanup" }
end
@@ -0,0 +1,319 @@
[
{
"analyzerId": "ruby",
"componentKey": "observation::ruby",
"name": "Ruby Observation Summary",
"type": "ruby-observation",
"usedByEntrypoint": false,
"metadata": {
"ruby.observation.bundler_version": "2.4.22",
"ruby.observation.capability.exec": "true",
"ruby.observation.capability.net": "true",
"ruby.observation.capability.scheduler_list": "sidekiq",
"ruby.observation.capability.schedulers": "1",
"ruby.observation.capability.serialization": "true",
"ruby.observation.packages": "11",
"ruby.observation.runtime_edges": "3"
},
"evidence": [
{
"kind": "derived",
"source": "ruby.observation",
"locator": "document",
"value": "{\u0022packages\u0022:[{\u0022name\u0022:\u0022coderay\u0022,\u0022version\u0022:\u00221.1.3\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022connection_pool\u0022,\u0022version\u0022:\u00222.4.1\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022method_source\u0022,\u0022version\u0022:\u00221.0.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022pry\u0022,\u0022version\u0022:\u00220.14.2\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022development\u0022,\u0022test\u0022]},{\u0022name\u0022:\u0022puma\u0022,\u0022version\u0022:\u00226.1.1\u0022,\u0022source\u0022:\u0022vendor-cache\u0022,\u0022declaredOnly\u0022:false,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022artifact\u0022:\u0022vendor/cache/puma-6.1.1.gem\u0022,\u0022groups\u0022:[\u0022web\u0022]},{\u0022name\u0022:\u0022rack\u0022,\u0022version\u0022:\u00223.0.8\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022rails\u0022,\u0022version\u0022:\u00227.1.0\u0022,\u0022source\u0022:\u0022vendor-cache\u0022,\u0022platform\u0022:\u0022x86_64-linux\u0022,\u0022declaredOnly\u0022:false,\u0022artifact\u0022:\u0022vendor/cache/rails-7.1.0-x86_64-linux.gem\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022rails\u0022,\u0022version\u0022:\u00227.1.0\u0022,\u0022source\u0022:\u002
2https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022rspec\u0022,\u0022version\u0022:\u00223.12.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022test\u0022]},{\u0022name\u0022:\u0022sidekiq\u0022,\u0022version\u0022:\u00227.2.1\u0022,\u0022source\u0022:\u0022vendor-bundle\u0022,\u0022declaredOnly\u0022:false,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022artifact\u0022:\u0022vendor/bundle/ruby/3.1.0/gems/sidekiq-7.2.1\u0022,\u0022groups\u0022:[\u0022jobs\u0022]},{\u0022name\u0022:\u0022sqlite3\u0022,\u0022version\u0022:\u00221.6.0-x86_64-linux\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022db\u0022]}],\u0022runtimeEdges\u0022:[{\u0022package\u0022:\u0022puma\u0022,\u0022usedByEntrypoint\u0022:true,\u0022files\u0022:[\u0022app.rb\u0022],\u0022entrypoints\u0022:[\u0022app.rb\u0022],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022rack\u0022,\u0022usedByEntrypoint\u0022:true,\u0022files\u0022:[\u0022app.rb\u0022,\u0022config.ru\u0022],\u0022entrypoints\u0022:[\u0022app.rb\u0022,\u0022config.ru\u0022],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022sidekiq\u0022,\u0022usedByEntrypoint\u0022:true,\u0022files\u0022:[\u0022app.rb\u0022,\u0022app/workers/email_worker.rb\u0022],\u0022entrypoints\u0022:[\u0022app.rb\u0022,\u0022app/workers/email_worker.rb\u0022],\u0022reasons\u0022:[\u0022require-static\u0022]}],\u0022capabilities\u0022:{\u0022usesExec\u0022:true,\u0022usesNetwork\u0022:true,\u0022usesSerialization\u0022:true,\u0022jobSchedulers\u0022:[\u0022sidekiq\u0022]},\u0022bundledWith\u0022:\u00222.4.22\u0022}",
        "sha256": "sha256:30b34afcf1a3ae3a32f1088ca535ca5359f9ed1ecf53850909b2bcd4da663ace"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/coderay@1.1.3",
    "purl": "pkg:gem/coderay@1.1.3",
    "name": "coderay",
    "version": "1.1.3",
    "type": "gem",
    "usedByEntrypoint": false,
    "metadata": {
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "true",
      "groups": "default",
      "lockfile": "Gemfile.lock",
      "source": "https://rubygems.org/"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "Gemfile.lock",
        "locator": "Gemfile.lock"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/connection_pool@2.4.1",
    "purl": "pkg:gem/connection_pool@2.4.1",
    "name": "connection_pool",
    "version": "2.4.1",
    "type": "gem",
    "usedByEntrypoint": false,
    "metadata": {
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "true",
      "groups": "default",
      "lockfile": "Gemfile.lock",
      "source": "https://rubygems.org/"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "Gemfile.lock",
        "locator": "Gemfile.lock"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/method_source@1.0.0",
    "purl": "pkg:gem/method_source@1.0.0",
    "name": "method_source",
    "version": "1.0.0",
    "type": "gem",
    "usedByEntrypoint": false,
    "metadata": {
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "true",
      "groups": "default",
      "lockfile": "Gemfile.lock",
      "source": "https://rubygems.org/"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "Gemfile.lock",
        "locator": "Gemfile.lock"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/pry@0.14.2",
    "purl": "pkg:gem/pry@0.14.2",
    "name": "pry",
    "version": "0.14.2",
    "type": "gem",
    "usedByEntrypoint": false,
    "metadata": {
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "true",
      "groups": "development;test",
      "lockfile": "Gemfile.lock",
      "source": "https://rubygems.org/"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "Gemfile.lock",
        "locator": "Gemfile.lock"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/puma@6.1.1",
    "purl": "pkg:gem/puma@6.1.1",
    "name": "puma",
    "version": "6.1.1",
    "type": "gem",
    "usedByEntrypoint": true,
    "metadata": {
      "artifact": "vendor/cache/puma-6.1.1.gem",
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "false",
      "groups": "web",
      "lockfile": "Gemfile.lock",
      "runtime.entrypoints": "app.rb",
      "runtime.files": "app.rb",
      "runtime.reasons": "require-static",
      "runtime.used": "true",
      "source": "vendor-cache"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "puma-6.1.1.gem",
        "locator": "vendor/cache/puma-6.1.1.gem"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/rack@3.0.8",
    "purl": "pkg:gem/rack@3.0.8",
    "name": "rack",
    "version": "3.0.8",
    "type": "gem",
    "usedByEntrypoint": true,
    "metadata": {
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "true",
      "groups": "default",
      "lockfile": "Gemfile.lock",
      "runtime.entrypoints": "app.rb;config.ru",
      "runtime.files": "app.rb;config.ru",
      "runtime.reasons": "require-static",
      "runtime.used": "true",
      "source": "https://rubygems.org/"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "Gemfile.lock",
        "locator": "Gemfile.lock"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/rails@7.1.0",
    "purl": "pkg:gem/rails@7.1.0",
    "name": "rails",
    "version": "7.1.0",
    "type": "gem",
    "usedByEntrypoint": false,
    "metadata": {
      "artifact": "vendor/cache/rails-7.1.0-x86_64-linux.gem",
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "false",
      "groups": "default",
      "lockfile": "vendor/cache/rails-7.1.0-x86_64-linux.gem",
      "platform": "x86_64-linux",
      "source": "vendor-cache"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "Gemfile.lock",
        "locator": "Gemfile.lock"
      },
      {
        "kind": "file",
        "source": "rails-7.1.0-x86_64-linux.gem",
        "locator": "vendor/cache/rails-7.1.0-x86_64-linux.gem"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/rspec@3.12.0",
    "purl": "pkg:gem/rspec@3.12.0",
    "name": "rspec",
    "version": "3.12.0",
    "type": "gem",
    "usedByEntrypoint": false,
    "metadata": {
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "true",
      "groups": "test",
      "lockfile": "Gemfile.lock",
      "source": "https://rubygems.org/"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "Gemfile.lock",
        "locator": "Gemfile.lock"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/sidekiq@7.2.1",
    "purl": "pkg:gem/sidekiq@7.2.1",
    "name": "sidekiq",
    "version": "7.2.1",
    "type": "gem",
    "usedByEntrypoint": true,
    "metadata": {
      "artifact": "vendor/bundle/ruby/3.1.0/gems/sidekiq-7.2.1",
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "false",
      "groups": "jobs",
      "lockfile": "Gemfile.lock",
      "runtime.entrypoints": "app.rb;app/workers/email_worker.rb",
      "runtime.files": "app.rb;app/workers/email_worker.rb",
      "runtime.reasons": "require-static",
      "runtime.used": "true",
      "source": "vendor-bundle"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "sidekiq-7.2.1",
        "locator": "vendor/bundle/ruby/3.1.0/gems/sidekiq-7.2.1"
      }
    ]
  },
  {
    "analyzerId": "ruby",
    "componentKey": "purl::pkg:gem/sqlite3@1.6.0-x86_64-linux",
    "purl": "pkg:gem/sqlite3@1.6.0-x86_64-linux",
    "name": "sqlite3",
    "version": "1.6.0-x86_64-linux",
    "type": "gem",
    "usedByEntrypoint": false,
    "metadata": {
      "capability.exec": "true",
      "capability.net": "true",
      "capability.scheduler": "sidekiq",
      "capability.scheduler.sidekiq": "true",
      "capability.serialization": "true",
      "declaredOnly": "true",
      "groups": "db",
      "lockfile": "Gemfile.lock",
      "source": "https://rubygems.org/"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "Gemfile.lock",
        "locator": "Gemfile.lock"
      }
    ]
  }
]
@@ -0,0 +1,89 @@
using System.Text.Json;
using StellaOps.Scanner.Analyzers.Lang;
using StellaOps.Scanner.Analyzers.Lang.Ruby;
using StellaOps.Scanner.Analyzers.Lang.Tests.Harness;
using StellaOps.Scanner.Analyzers.Lang.Tests.TestUtilities;
using StellaOps.Scanner.Core.Contracts;

namespace StellaOps.Scanner.Analyzers.Lang.Ruby.Tests;

public sealed class RubyLanguageAnalyzerTests
{
    [Fact]
    public async Task SimpleWorkspaceProducesDeterministicOutputAsync()
    {
        var fixturePath = TestPaths.ResolveFixture("lang", "ruby", "simple-app");
        var goldenPath = Path.Combine(fixturePath, "expected.json");
        var analyzers = new ILanguageAnalyzer[] { new RubyLanguageAnalyzer() };

        await LanguageAnalyzerTestHarness.AssertDeterministicAsync(
            fixturePath,
            goldenPath,
            analyzers,
            TestContext.Current.CancellationToken);
    }

    [Fact]
    public async Task AnalyzerEmitsObservationPayloadWithSummaryAsync()
    {
        var fixturePath = TestPaths.ResolveFixture("lang", "ruby", "simple-app");
        var store = new ScanAnalysisStore();
        var analyzers = new ILanguageAnalyzer[] { new RubyLanguageAnalyzer() };
        var engine = new LanguageAnalyzerEngine(analyzers);
        var context = new LanguageAnalyzerContext(
            fixturePath,
            TimeProvider.System,
            usageHints: null,
            services: null,
            analysisStore: store);

        var result = await engine.AnalyzeAsync(context, TestContext.Current.CancellationToken);
        var snapshots = result.ToSnapshots();

        var summary = Assert.Single(snapshots, snapshot => snapshot.Type == "ruby-observation");
        Assert.Equal("Ruby Observation Summary", summary.Name);
        Assert.Equal("observation::ruby", summary.ComponentKey);
        Assert.True(summary.Metadata.TryGetValue("ruby.observation.packages", out var packageCount));
        Assert.Equal("11", packageCount);
        Assert.Equal("3", summary.Metadata["ruby.observation.runtime_edges"]);
        Assert.Equal("true", summary.Metadata["ruby.observation.capability.exec"]);
        Assert.Equal("true", summary.Metadata["ruby.observation.capability.net"]);
        Assert.Equal("true", summary.Metadata["ruby.observation.capability.serialization"]);
        Assert.Equal("2.4.22", summary.Metadata["ruby.observation.bundler_version"]);

        Assert.True(store.TryGet(ScanAnalysisKeys.RubyObservationPayload, out AnalyzerObservationPayload payload));
        Assert.Equal("ruby", payload.AnalyzerId);
        Assert.Equal("ruby.observation", payload.Kind);
        Assert.Equal("application/json", payload.MediaType);
        Assert.NotNull(payload.Metadata);
        Assert.Equal("11", payload.Metadata!["ruby.observation.packages"]);

        using var document = JsonDocument.Parse(payload.Content.ToArray());
        var root = document.RootElement;
        var packages = root.GetProperty("packages");
        Assert.Equal(11, packages.GetArrayLength());

        var runtimeEdges = root.GetProperty("runtimeEdges");
        Assert.True(runtimeEdges.GetArrayLength() >= 1);

        var capabilities = root.GetProperty("capabilities");
        Assert.True(capabilities.GetProperty("usesExec").GetBoolean());
        Assert.True(capabilities.GetProperty("usesNetwork").GetBoolean());
        Assert.True(capabilities.GetProperty("usesSerialization").GetBoolean());
        Assert.Equal("2.4.22", root.GetProperty("bundledWith").GetString());
    }

    [Fact]
    public async Task ComplexWorkspaceProducesDeterministicOutputAsync()
    {
        var fixturePath = TestPaths.ResolveFixture("lang", "ruby", "complex-app");
        var goldenPath = Path.Combine(fixturePath, "expected.json");
        var analyzers = new ILanguageAnalyzer[] { new RubyLanguageAnalyzer() };

        await LanguageAnalyzerTestHarness.AssertDeterministicAsync(
            fixturePath,
            goldenPath,
            analyzers,
            TestContext.Current.CancellationToken);
    }
}
@@ -0,0 +1,46 @@
<?xml version="1.0" encoding="utf-8"?>
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <LangVersion>preview</LangVersion>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Remove="Microsoft.NET.Test.Sdk" />
    <PackageReference Remove="xunit" />
    <PackageReference Remove="xunit.runner.visualstudio" />
    <PackageReference Remove="Microsoft.AspNetCore.Mvc.Testing" />
    <PackageReference Remove="Mongo2Go" />
    <PackageReference Remove="coverlet.collector" />
    <PackageReference Remove="Microsoft.Extensions.TimeProvider.Testing" />
    <ProjectReference Remove="..\StellaOps.Concelier.Testing\StellaOps.Concelier.Testing.csproj" />
    <Compile Remove="$(MSBuildThisFileDirectory)..\StellaOps.Concelier.Tests.Shared\AssemblyInfo.cs" />
    <Compile Remove="$(MSBuildThisFileDirectory)..\StellaOps.Concelier.Tests.Shared\MongoFixtureCollection.cs" />
    <Using Remove="StellaOps.Concelier.Testing" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.14.1" />
    <PackageReference Include="xunit.v3" Version="3.0.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="3.1.3" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\StellaOps.Scanner.Analyzers.Lang.Tests\StellaOps.Scanner.Analyzers.Lang.Tests.csproj" />
    <ProjectReference Include="../../__Libraries/StellaOps.Scanner.Analyzers.Lang/StellaOps.Scanner.Analyzers.Lang.csproj" />
    <ProjectReference Include="../../__Libraries/StellaOps.Scanner.Analyzers.Lang.Ruby/StellaOps.Scanner.Analyzers.Lang.Ruby.csproj" />
    <ProjectReference Include="../../__Libraries/StellaOps.Scanner.Core/StellaOps.Scanner.Core.csproj" />
  </ItemGroup>

  <ItemGroup>
    <None Include="Fixtures\**\*" CopyToOutputDirectory="PreserveNewest" />
  </ItemGroup>

  <ItemGroup>
    <Using Include="Xunit" />
  </ItemGroup>
</Project>
@@ -1,6 +1,17 @@
// Deterministic Deno workspace exercising vendor, npm, FFI, worker, and fetch flows.
{
  "importMap": "./import_map.json",
  "imports": {
    "app/": "./src/",
    "ffi/": "./src/ffi/",
    "workers/": "./src/workers/",
    "npmDynamic": "npm:dayjs@1",
    "nodeFs": "node:fs",
    "nodeCrypto": "node:crypto",
    "nodeWorker": "node:worker_threads",
    "denoFfi": "deno:ffi",
    "data": "./data/data.json"
  },
  "lock": {
    "enabled": true,
    "path": "./deno.lock"
@@ -1 +1,198 @@
"pending"
[
  {
    "analyzerId": "deno",
    "componentKey": "container::bundle:<workspace>/bundles/sample.deno",
    "name": "<workspace>/bundles/sample.deno",
    "type": "deno-container",
    "usedByEntrypoint": false,
    "metadata": {
      "deno.container.bundle.entrypoint": "mod.ts",
      "deno.container.bundle.modules": "2",
      "deno.container.bundle.resources": "1",
      "deno.container.identifier": "<workspace>/bundles/sample.deno",
      "deno.container.kind": "bundle",
      "deno.container.meta.entrypoint": "mod.ts",
      "deno.container.meta.moduleCount": "2",
      "deno.container.meta.resourceCount": "1"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "deno.bundle",
        "locator": "<workspace>/bundles/sample.deno",
        "value": "mod.ts"
      },
      {
        "kind": "metadata",
        "source": "deno.container",
        "locator": "Bundle",
        "value": "<workspace>/bundles/sample.deno"
      }
    ]
  },
  {
    "analyzerId": "deno",
    "componentKey": "container::bundle:<workspace>/bundles/sample.eszip",
    "name": "<workspace>/bundles/sample.eszip",
    "type": "deno-container",
    "usedByEntrypoint": false,
    "metadata": {
      "deno.container.bundle.entrypoint": "mod.ts",
      "deno.container.bundle.modules": "2",
      "deno.container.bundle.resources": "1",
      "deno.container.identifier": "<workspace>/bundles/sample.eszip",
      "deno.container.kind": "bundle",
      "deno.container.meta.entrypoint": "mod.ts",
      "deno.container.meta.moduleCount": "2",
      "deno.container.meta.resourceCount": "1"
    },
    "evidence": [
      {
        "kind": "file",
        "source": "deno.bundle",
        "locator": "<workspace>/bundles/sample.eszip",
        "value": "mod.ts"
      },
      {
        "kind": "metadata",
        "source": "deno.container",
        "locator": "Bundle",
        "value": "<workspace>/bundles/sample.eszip"
      }
    ]
  },
  {
    "analyzerId": "deno",
    "componentKey": "container::cache:.deno-<hash>",
    "name": ".deno-<hash>",
    "type": "deno-container",
    "usedByEntrypoint": false,
    "metadata": {
      "deno.container.identifier": ".deno-<hash>",
      "deno.container.kind": "cache",
      "deno.container.meta.alias": ".deno-<hash>",
      "deno.container.meta.kind": "Workspace",
      "deno.container.meta.path": "<workspace>/.deno"
    },
    "evidence": [
      {
        "kind": "metadata",
        "source": "deno.container",
        "locator": "Cache",
        "value": ".deno-<hash>"
      }
    ]
  },
  {
    "analyzerId": "deno",
    "componentKey": "container::cache:.deno-<hash>",
    "name": ".deno-<hash>",
    "type": "deno-container",
    "usedByEntrypoint": false,
    "metadata": {
      "deno.container.identifier": ".deno-<hash>",
      "deno.container.kind": "cache",
      "deno.container.layerDigest": "deadbeef",
      "deno.container.meta.alias": ".deno-<hash>",
      "deno.container.meta.kind": "Layer",
      "deno.container.meta.path": "<workspace>/layers/sha256-deadbeef/fs/.deno"
    },
    "evidence": [
      {
        "kind": "metadata",
        "source": "deno.container",
        "locator": "Cache",
        "value": ".deno-<hash>",
        "sha256": "deadbeef"
      }
    ]
  },
  {
    "analyzerId": "deno",
    "componentKey": "container::cache:env-deno-<hash>",
    "name": "env-deno-<hash>",
    "type": "deno-container",
    "usedByEntrypoint": false,
    "metadata": {
      "deno.container.identifier": "env-deno-<hash>",
      "deno.container.kind": "cache",
      "deno.container.meta.alias": "env-deno-<hash>",
      "deno.container.meta.kind": "Env",
      "deno.container.meta.path": "<workspace>/env-deno"
    },
    "evidence": [
      {
        "kind": "metadata",
        "source": "deno.container",
        "locator": "Cache",
        "value": "env-deno-<hash>"
      }
    ]
  },
  {
    "analyzerId": "deno",
    "componentKey": "container::vendor:vendor-<hash>",
    "name": "vendor-<hash>",
    "type": "deno-container",
    "usedByEntrypoint": false,
    "metadata": {
      "deno.container.identifier": "vendor-<hash>",
      "deno.container.kind": "vendor",
      "deno.container.layerDigest": "deadbeef",
      "deno.container.meta.alias": "vendor-<hash>",
      "deno.container.meta.path": "<workspace>/layers/sha256-deadbeef/fs/vendor"
    },
    "evidence": [
      {
        "kind": "metadata",
        "source": "deno.container",
        "locator": "Vendor",
        "value": "vendor-<hash>",
        "sha256": "deadbeef"
      }
    ]
  },
  {
    "analyzerId": "deno",
    "componentKey": "container::vendor:vendor-<hash>",
    "name": "vendor-<hash>",
    "type": "deno-container",
    "usedByEntrypoint": false,
    "metadata": {
      "deno.container.identifier": "vendor-<hash>",
      "deno.container.kind": "vendor",
      "deno.container.meta.alias": "vendor-<hash>",
      "deno.container.meta.path": "<workspace>/vendor"
    },
    "evidence": [
      {
        "kind": "metadata",
        "source": "deno.container",
        "locator": "Vendor",
        "value": "vendor-<hash>"
      }
    ]
  },
  {
    "analyzerId": "deno",
    "componentKey": "observation::deno",
    "name": "Deno Observation Summary",
    "type": "deno-observation",
    "usedByEntrypoint": false,
    "metadata": {
      "deno.observation.bundles": "2",
      "deno.observation.capabilities": "1",
      "deno.observation.entrypoints": "1",
      "deno.observation.hash": "<hash>"
    },
    "evidence": [
      {
        "kind": "derived",
        "source": "deno.observation",
        "locator": "document",
        "value": "{\"entrypoints\":[\"mod.ts\"],\"modules\":[\"./src/\",\"./src/ffi/\",\"./src/workers/\",\"https://api.example.com/data.json\",\"https://cdn.example.com/dynamic/mod.ts\",\"https://deno.land/std@0.207.0/http/server.ts\",\"https://example.com/env.ts\",\"https://example.com/layer.ts\",\"https://import_map.json\",\"https://layer.example/\"],\"capabilities\":[{\"capability\":\"Network\",\"reason\":\"network.remote_module_import\",\"sources\":[\"https://api.example.com/data.json\",\"https://cdn.example.com/dynamic/mod.ts\",\"https://deno.land/std/http/server.ts\",\"https://deno.land/std@0.207.0/http/server.ts\",\"https://example.com/env.ts\",\"https://example.com/layer.ts\",\"https://import_map.json\"]}],\"dynamicImports\":[],\"literalFetches\":[],\"bundles\":[{\"path\":\"<workspace>/bundles/sample.deno\",\"type\":\"deno-compile\",\"entrypoint\":\"mod.ts\",\"modules\":2,\"resources\":1},{\"path\":\"<workspace>/bundles/sample.eszip\",\"type\":\"eszip\",\"entrypoint\":\"mod.ts\",\"modules\":2,\"resources\":1}]}",
        "sha256": "<hash>"
      }
    ]
  }
]
@@ -7,6 +7,7 @@
    "nodeFs": "node:fs",
    "nodeCrypto": "node:crypto",
    "nodeWorker": "node:worker_threads",
    "denoFfi": "deno:ffi"
    "denoFfi": "deno:ffi",
    "data": "./data/data.json"
  }
}