Here’s a quick, practical add-on for your pipelines: simple guardrails for AI‑assisted code that catch security, IP, and license problems before they ship, without slowing teams down.
Why this matters (super short)
AI coding tools boost output but can introduce:
- Security issues (hardcoded secrets, unsafe APIs, prompt‑planted vulns)
- IP leakage (copied proprietary snippets)
- License conflicts (incompatible OSS brought by the model)
Three fast checks (about 1–3 s per change)
- Secrets & Unsafe Patterns
- Detect credentials, tokens, and high‑risk calls (e.g., weak crypto, eval/exec, SQL concat).
- Flag “new secrets introduced” vs “pre‑existing” to reduce noise.
- Require fix or approved suppression with evidence.
- Attribution & Similarity
- Fuzzy match new/changed hunks against a vetted “allowlist” (your repos) and a “denylist” (company IP you can’t disclose).
- If similarity > threshold to denylist, block; if unknown origin, require justification note.
- License Hygiene (deps + snippets)
- On dependency diffs, compute SBOM, resolve licenses, evaluate policy matrix (e.g., OK: MIT/BSD/Apache‑2.0; Review: MPL/LGPL; Block: GPL‑3.0 for closed components).
- For pasted code blocks > N lines, enforce “license/attribution comment” presence or ticket link proving provenance.
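To make the first check concrete, here is a minimal sketch of a secrets/unsafe-pattern scan over newly added hunk lines. The rule names and regexes are illustrative only, not Stella Ops’ actual ruleset; a real scanner would use a vetted, much larger pattern set.

```python
import re

# Illustrative patterns -- a production scanner would use a vetted ruleset.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
}
UNSAFE_PATTERNS = {
    "eval_call": re.compile(r"\beval\s*\("),
    "sql_concat": re.compile(r"(?i)execute\([^)]*\+"),
}

def scan_hunk(added_lines):
    """Scan only newly added (lineno, text) pairs, so pre-existing issues don't re-fire."""
    findings = []
    for lineno, line in added_lines:
        for rules in (SECRET_PATTERNS, UNSAFE_PATTERNS):
            for rule_id, pattern in rules.items():
                if pattern.search(line):
                    findings.append({"rule": rule_id, "line": lineno})
    return findings
```

Scanning only the added side of the diff is what enables the “new secrets introduced” vs “pre‑existing” distinction above.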
Lightweight policy you can ship today
Policy goals
- Be explainable (every fail has a short reason + link to evidence)
- Be configurable per repo/env
- Support override with audit (who, why, for how long)
Example (YAML)
```yaml
stellaops:
  ai_code_guard:
    enabled: true
    thresholds:
      similarity_block: 0.92
      similarity_review: 0.80
      max_paste_lines_without_provenance: 12
    licenses:
      allow: [MIT, BSD-2-Clause, BSD-3-Clause, Apache-2.0]
      review: [MPL-2.0, LGPL-2.1, LGPL-3.0]
      block: [GPL-3.0-only, AGPL-3.0-only]
    checks:
      - id: secrets_scan
        required: true
      - id: unsafe_api_scan
        required: true
      - id: snippet_similarity
        required: true
      - id: dep_sbom_license
        required: true
    overrides:
      require_issue_link: true
      max_duration_days: 14
```
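How the similarity thresholds map to verdicts can be sketched as follows. The thresholds mirror the policy above; the function name and the inline policy dict are illustrative, not part of any actual Stella Ops API.

```python
# Thresholds mirror ai_code_guard.thresholds in the policy above.
POLICY = {"similarity_block": 0.92, "similarity_review": 0.80}

def similarity_verdict(score: float, matched_denylist: bool) -> str:
    """Map a snippet-similarity score to a gate verdict per the policy thresholds."""
    if matched_denylist and score >= POLICY["similarity_block"]:
        return "block"      # too close to company IP that can't be disclosed
    if score >= POLICY["similarity_review"]:
        return "review"     # unknown origin: require a justification note
    return "pass"
```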
Gate outcomes
- ✅ Pass: merge/release continues
- 🟡 Review: needs an approver with the SecurityReviewer role
- ⛔ Block: only SecurityOwner can override, with an issue link + time‑boxed waiver
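The time‑boxed waiver on a Block can be sketched like this, assuming the 14‑day `max_duration_days` from the policy; the function and its signature are hypothetical.

```python
from datetime import date, timedelta

MAX_WAIVER_DAYS = 14  # matches overrides.max_duration_days in the policy

def waiver_valid(issued: date, issue_link: str, today: date) -> bool:
    """A Block override is honored only with an issue link and inside the time box."""
    if not issue_link:
        return False
    return today <= issued + timedelta(days=MAX_WAIVER_DAYS)
```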
Minimal evidence you store (per change)
- Hashes of changed hunks + similarity scores
- Secret/unsafe findings with line refs
- SBOM delta + license verdicts
- Override metadata (who/why/expiry)
This feeds Stella Ops’ deterministic replay: same inputs → same verdicts → audit‑ready.
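One way to make that evidence replay‑stable is to hash a canonical JSON encoding of each record, so key order and whitespace never change the digest. A minimal sketch (the record shape is illustrative):

```python
import hashlib
import json

def evidence_digest(record: dict) -> str:
    """Hash a canonical JSON encoding: same inputs always yield the same digest."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Storing this digest alongside the rule versions lets the replay engine confirm it is re‑deciding on byte‑identical inputs.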
Drop‑in CI snippets
GitHub Actions
```yaml
jobs:
  ai-guard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: stella-ops/cli-action@v1
        with:
          args: guard run --policy .stellaops.yml --format sarif --out guard.sarif
      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        with: { sarif_file: guard.sarif }
```
GitLab CI
```yaml
ai_guard:
  image: stellaops/cli:latest
  script:
    - stella guard run --policy .stellaops.yml --format gitlab --out guard.json
    - test "$(jq -r .status guard.json)" = "pass"
```
Small UX that wins trust
- Inline PR annotations (secret types, API names, license rule hit)
- One‑click “request waiver” (requires ticket link + expiry)
- Policy badges in PR (“AI Code Guard: Pass / Review / Block”)
How this plugs into Stella Ops
- Scanner: run the 3 checks; emit evidence (JSON + DSSE).
- Policy/Lattice Engine: combine verdicts (e.g., Block if secrets OR block‑license; Review if similarity_review).
- Authority: sign the gate result; attach to release attestation.
- Replay: store inputs + rule versions to reproduce decisions exactly.
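The “combine verdicts” step above can be sketched as a lattice join: verdicts are ordered by severity, and the gate result is the most severe individual verdict. The ordering and function are illustrative, not the actual Lattice Engine API.

```python
# Severity order: pass < review < block (a simple three-element lattice).
SEVERITY = {"pass": 0, "review": 1, "block": 2}

def combine(verdicts):
    """Join per-check verdicts: the gate result is the worst individual verdict."""
    return max(verdicts, key=SEVERITY.__getitem__)
```

This matches the rules stated above: any secrets or block‑license finding yields Block, any similarity_review finding yields at least Review.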
If you want, I’ll turn this into:
- a ready‑to‑use .stellaops.yml,
- a CLI subcommand spec (stella guard run),
- and UI wireframes for the PR annotations + waiver flow.