Security gaps in AI-generated code (advisory, 14 Jan 2026)

Here's a quick, practical add-on for your pipelines: simple guardrails for AI-assisted code that keep security, IP, and license problems out of your releases without slowing teams down.


Why this matters (super short)

AI coding tools boost output but can introduce:

  • Security issues (hardcoded secrets, unsafe APIs, prompt-planted vulns)
  • IP leakage (copied proprietary snippets)
  • License conflicts (incompatible OSS brought by the model)

Three fast checks (under ~1-3 s per change)

  1. Secrets & Unsafe Patterns (sketch below)
  • Detect credentials, tokens, and high-risk calls (e.g., weak crypto, eval/exec, SQL concat).
  • Flag “new secrets introduced” vs “pre-existing” to reduce noise.
  • Require a fix or an approved suppression with evidence.
  2. Attribution & Similarity
  • Fuzzy-match new/changed hunks against a vetted “allowlist” (your repos) and a “denylist” (company IP you can’t disclose).
  • If similarity to the denylist exceeds the threshold, block; if the origin is unknown, require a justification note.
  3. License Hygiene (deps + snippets)
  • On dependency diffs, compute an SBOM, resolve licenses, and evaluate the policy matrix (e.g., OK: MIT/BSD/Apache-2.0; Review: MPL/LGPL; Block: GPL-3.0 for closed components).
  • For pasted code blocks > N lines, require a “license/attribution comment” or a ticket link proving provenance.
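
As a minimal sketch of check 1, the snippet below scans only the added lines of a unified diff for secret-like strings and unsafe calls, so pre-existing findings don't create noise. The patterns, rule ids, and function name are illustrative assumptions, not the shipped Stella Ops rule set.

```python
# Minimal sketch of check 1 (secrets & unsafe patterns): scan only the *added*
# lines of a unified diff so pre-existing findings don't create noise.
# Patterns and rule ids are illustrative, not the real rule set.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)(api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+=]{16,}['\"]"),
}
UNSAFE_PATTERNS = {
    "eval_exec": re.compile(r"\b(eval|exec)\s*\("),
    "sql_concat": re.compile(r"(?i)\b(select|insert|update|delete)\b.*['\"]\s*\+"),
    "weak_hash": re.compile(r"\b(md5|sha1)\s*\("),
}

def scan_added_lines(diff_text: str) -> list[dict]:
    """Return secret/unsafe findings for lines added by this change."""
    findings, path, new_line = [], None, 0
    for raw in diff_text.splitlines():
        if raw.startswith("+++ b/"):
            path = raw[6:]
        elif raw.startswith("@@"):
            # hunk header "@@ -a,b +c,d @@": added lines start at line c of the new file
            new_line = int(raw.split("+")[1].split(",")[0].split(" ")[0])
        elif raw.startswith("+"):
            for kind, rules in (("secret", SECRET_PATTERNS), ("unsafe_api", UNSAFE_PATTERNS)):
                for rule_id, pattern in rules.items():
                    if pattern.search(raw[1:]):
                        findings.append({"path": path, "line": new_line,
                                         "kind": kind, "rule": rule_id})
            new_line += 1
        elif raw.startswith(" "):
            new_line += 1  # unchanged context line still advances the new-file line counter
    return findings
```

Checks 2 and 3 follow the same shape: operate on the diff, emit structured findings, and let the policy decide the verdict.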

Lightweight policy you can ship today

Policy goals

  • Be explainable (every fail has a short reason + link to evidence)
  • Be configurable per repo/env
  • Support override with audit (who, why, for how long)

Example (YAML)

stellaops:
  ai_code_guard:
    enabled: true
    thresholds:
      similarity_block: 0.92
      similarity_review: 0.80
      max_paste_lines_without_provenance: 12
    licenses:
      allow: [MIT, BSD-2-Clause, BSD-3-Clause, Apache-2.0]
      review: [MPL-2.0, LGPL-2.1, LGPL-3.0]
      block: [GPL-3.0-only, AGPL-3.0-only]
    checks:
      - id: secrets_scan
        required: true
      - id: unsafe_api_scan
        required: true
      - id: snippet_similarity
        required: true
      - id: dep_sbom_license
        required: true
    overrides:
      require_issue_link: true
      max_duration_days: 14

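
To make the thresholds and license lists concrete, here is a small sketch of how a check might consume this policy (assuming PyYAML for parsing); the loader and function names are illustrative, not part of the stella CLI.

```python
# Sketch: classify a snippet similarity score and a dependency license against
# the .stellaops.yml policy above. Names are illustrative assumptions.
import yaml  # pip install pyyaml

def load_policy(path: str = ".stellaops.yml") -> dict:
    with open(path) as f:
        return yaml.safe_load(f)["stellaops"]["ai_code_guard"]

def classify_similarity(score: float, policy: dict) -> str:
    t = policy["thresholds"]
    if score >= t["similarity_block"]:
        return "block"
    if score >= t["similarity_review"]:
        return "review"
    return "pass"

def classify_license(spdx_id: str, policy: dict) -> str:
    lic = policy["licenses"]
    if spdx_id in lic["block"]:
        return "block"
    if spdx_id in lic["review"]:
        return "review"
    if spdx_id in lic["allow"]:
        return "pass"
    return "review"  # unknown licenses default to human review

policy = load_policy()
print(classify_similarity(0.87, policy))         # -> "review" (0.80 <= 0.87 < 0.92)
print(classify_license("GPL-3.0-only", policy))  # -> "block"
```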
Gate outcomes

  • ✅ Pass: merge/release continues
  • 🟡 Review: needs an approver with the SecurityReviewer role
  • 🔴 Block: only a SecurityOwner can override, with an issue link + time-boxed waiver (roll-up sketch below)
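
A minimal sketch of how per-check verdicts could roll up into the gate outcome: take the worst result across checks, with pass < review < block. The function and check ids are illustrative.

```python
# Sketch: roll up per-check verdicts into one gate outcome (worst-of ordering).
SEVERITY = {"pass": 0, "review": 1, "block": 2}

def gate_outcome(check_verdicts: dict[str, str]) -> str:
    """check_verdicts maps a check id (e.g. 'secrets_scan') to pass/review/block."""
    return max(check_verdicts.values(), key=SEVERITY.__getitem__)

verdicts = {"secrets_scan": "pass", "unsafe_api_scan": "pass",
            "snippet_similarity": "review", "dep_sbom_license": "pass"}
print(gate_outcome(verdicts))  # -> "review": needs a SecurityReviewer approval
```

One sensible design choice is to leave the computed outcome untouched and record any override next to it (who, why, expiry), which is exactly the evidence described in the next section.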

Minimal evidence you store (per change)

  • Hashes of changed hunks + similarity scores
  • Secret/unsafe findings with line refs
  • SBOM delta + license verdicts
  • Override metadata (who/why/expiry)

This feeds Stella Ops' deterministic replay: same inputs → same verdicts → audit-ready. An illustrative record shape is sketched below.
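
As an illustration, the evidence for one change can be as small as the record below; the field names and values are made up for the example, not the actual Stella Ops schema.

```python
# Illustrative per-change evidence record (field names/values are assumptions,
# not the actual Stella Ops schema).
evidence = {
    "change": {"repo": "payments-api", "commit": "abc123", "pr": 4217},
    "hunks": [{"sha256": "<hash-of-hunk>", "similarity": 0.87,
               "nearest_match": "denylist/billing-core"}],
    "findings": [{"check": "secrets_scan", "rule": "generic_api_key",
                  "path": "src/db.py", "line": 42}],
    "sbom_delta": [{"package": "left-pad", "version": "1.3.0",
                    "license": "MIT", "verdict": "pass"}],
    "override": {"actor": "jane.doe", "issue": "SEC-1234",
                 "reason": "vendored test fixture", "expires": "2026-01-28"},
}
```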


Drop-in CI snippets

GitHub Actions

jobs:
  ai-guard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: stella-ops/cli-action@v1
        with:
          args: guard run --policy .stellaops.yml --format sarif --out guard.sarif
      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        with: { sarif_file: guard.sarif }

GitLab CI

ai_guard:
  image: stellaops/cli:latest
  script:
    - stella guard run --policy .stellaops.yml --format gitlab --out guard.json
    - test "$(jq -r .status guard.json)" = "pass"

Small UX that wins trust

  • Inline PR annotations (secret types, API names, license rule hit)
  • One-click “request waiver” (requires ticket link + expiry)
  • Policy badges in PR (“AI Code Guard: Pass / Review / Block”)

How this plugs into Stella Ops

  • Scanner: run the three checks; emit evidence (JSON + DSSE).
  • Policy/Lattice Engine: combine verdicts (e.g., Block if a secret or a block-listed license shows up; Review if similarity ≥ similarity_review).
  • Authority: sign the gate result; attach it to the release attestation.
  • Replay: store inputs + rule versions to reproduce decisions exactly (see the sketch after this list).
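
A sketch of the replay idea: canonicalize the stored inputs plus the rule/policy versions and hash them into a decision key, so the same inputs provably reproduce the same verdict. Function and field names are assumptions.

```python
# Sketch: a stable decision key over canonicalized inputs + rule versions.
# Re-running the gate with an identical key must produce an identical verdict;
# any drift means an input or a rule version changed. Names are illustrative.
import hashlib
import json

def decision_key(inputs: dict, rule_versions: dict) -> str:
    canonical = json.dumps({"inputs": inputs, "rules": rule_versions},
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

key = decision_key(
    {"hunk_hashes": ["<hash-of-hunk>"], "sbom_delta": ["left-pad@1.3.0:MIT"]},
    {"secrets_scan": "1.4.2", "snippet_similarity": "0.9.1",
     "policy": "<hash-of-.stellaops.yml>"},
)
```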

If you want, I'll turn this into:

  • a ready-to-use .stellaops.yml,
  • a CLI subcommand spec (stella guard run),
  • and UI wireframes for the PR annotations + waiver flow.