docs consolidation

This commit is contained in:
master
2026-01-07 10:23:21 +02:00
parent 4789027317
commit 044cf0923c
515 changed files with 5460 additions and 5292 deletions

# Regulator-Grade Threat & Evidence Model
## Supply-Chain Risk Decisioning Platform (Reference: “Stella Ops”)
**Document version:** 1.0
**Date:** 2025-12-19
**Intended audience:** Regulators, third-party auditors, internal security/compliance, and engineering leadership
**Scope:** Threat model + evidence model for a platform that ingests SBOM/VEX and other supply-chain signals, produces risk decisions, and preserves an audit-grade evidence trail.
---
## 1. Purpose and Objectives
This document defines:
1. A **threat model** for a supply-chain risk decisioning platform (“the Platform”) and its critical workflows.
2. An **evidence model** describing what records must exist, how they must be protected, and how they must be presented to support regulator-grade auditability and non-repudiation.
The model is designed to support the supply-chain transparency goals behind SBOM/VEX and secure software development expectations (e.g., SSDF), and to be compatible with cybersecurity supply-chain risk management (C-SCRM) and control-based assessments (e.g., NIST control catalogs).
---
## 2. Scope, System Boundary, and Assumptions
### 2.1 In-scope system functions
The Platform performs the following high-level functions:
* **Ingest** software transparency artifacts (e.g., SBOMs, VEX documents), scan results, provenance attestations, and policy inputs.
* **Normalize** to a canonical internal representation (component identity graph + vulnerability/impact graph).
* **Evaluate** with a deterministic policy engine to produce decisions (e.g., allow/deny, risk tier, required remediation).
* **Record** an audit-grade evidence package supporting each decision.
* **Export** reports and attestations suitable for procurement, regulator review, and downstream consumption.
### 2.2 Deployment models supported by this model
This model is written to cover:
* **On-prem / air-gapped** deployments (offline evidence and curated vulnerability feeds).
* **Dedicated single-tenant hosted** deployments.
* **Multi-tenant SaaS** deployments (requires stronger tenant isolation controls and evidence).
### 2.3 Core assumptions
* SBOM is treated as a **formal inventory and relationship record** for components used to build software.
* VEX is treated as a **machine-readable assertion** of vulnerability status for a product, including “not affected / affected / fixed / under investigation.”
* The Platform must be able to demonstrate **traceability** from decision → inputs → transformations → outputs, and preserve “known unknowns” (explicitly tracked uncertainty).
* If the Platform is used in US federal acquisition contexts, it must anticipate evolving SBOM minimum element guidance; CISA's 2025 SBOM minimum elements draft guidance explicitly aims to update the 2021 NTIA baseline to reflect tooling and maturity improvements. ([Federal Register][1])
---
## 3. Normative and Informative References
This model is aligned to the concepts and terminology used by the following:
* **SBOM minimum elements baseline (2021 NTIA)** and the “data fields / automation support / practices and processes” structure.
* **CISA 2025 SBOM minimum elements draft guidance** (published for comment; successor guidance to NTIA baseline per the Federal Register notice). ([Federal Register][1])
* **VEX overview and statuses** (NTIA one-page summary).
* **NIST SSDF** (SP 800-218; includes the recent Rev. 1 initial public draft for SSDF v1.2). ([NIST Computer Security Resource Center][2])
* **NIST C-SCRM guidance** (SP 800-161 Rev. 1). ([NIST Computer Security Resource Center][3])
* **NIST security and privacy controls catalog** (SP 800-53 Rev. 5, including its supply chain control family). ([NIST Computer Security Resource Center][4])
* **SLSA supply-chain threat model and mitigations** (pipeline threat clustering A–I; verification threats). ([SLSA][5])
* **Attestation and transparency building blocks**:
* in-toto (supply-chain metadata standard). ([in-toto][6])
* DSSE (typed signing envelope to reduce confusion attacks). ([GitHub][7])
* Sigstore Rekor (signature transparency log). ([Sigstore][8])
* **SBOM and VEX formats**:
* CycloneDX (ECMA-424; SBOM/BOM standard). ([GitHub][9])
* SPDX (ISO/IEC 5962:2021; SBOM standard). ([ISO][10])
* CSAF v2.0 VEX profile (structured security advisories with VEX profile requirements). ([OASIS Documents][11])
* OpenVEX (minimal VEX implementation). ([GitHub][12])
* **Vulnerability intelligence format**:
* OSV schema maps vulnerabilities to package versions/commit ranges. ([OSV.dev][13])
---
## 4. System Overview
### 4.1 Logical architecture
**Core components:**
1. **Ingestion Gateway**
* Accepts SBOM, VEX, provenance attestations, scan outputs, and configuration inputs.
* Performs syntactic validation, content hashing, and initial authenticity checks.
2. **Normalization & Identity Resolution**
* Converts formats (SPDX, CycloneDX, proprietary) into a canonical internal model.
* Resolves component IDs (purl/CPE/name+version), dependency graph, and artifact digests.
3. **Evidence Store**
* Content-addressable object store for raw artifacts plus derived artifacts.
* Append-only metadata index (event log) referencing objects by hash.
4. **Policy & Decision Engine**
* Deterministic evaluation engine for risk policy.
* Produces a decision plus a structured explanation and “unknowns.”
5. **Attestation & Export Service**
* Packages decisions and evidence references as signed statements (DSSE/in-toto-compatible). ([GitHub][7])
* Optional transparency publication (e.g., Rekor or private transparency log). ([Sigstore][8])
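The content-addressing scheme the Evidence Store relies on can be sketched as follows. This is illustrative only — the helper names and the canonical-JSON choice are assumptions, not a normative part of this model:

```python
import hashlib
import json

def canonical_bytes(obj: dict) -> bytes:
    # One fixed serialization (sorted keys, no stray whitespace) so that
    # logically identical documents always hash to the same EvidenceID.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")

def evidence_id(raw: bytes) -> str:
    # EvidenceID = content digest: the hash *is* the identifier, so any
    # modification necessarily produces a different ID.
    return "sha256:" + hashlib.sha256(raw).hexdigest()

sbom = {"bomFormat": "CycloneDX", "specVersion": "1.5", "components": []}
eid = evidence_id(canonical_bytes(sbom))
```

Because the identifier is derived from the bytes themselves, the Evidence Store never needs to trust a caller-supplied ID: re-hashing on read is the integrity check.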
### 4.2 Trust boundaries
**Primary trust boundaries:**
* **TB1:** External submitter → Ingestion Gateway
* **TB2:** Customer environment → Platform environment (for hosted)
* **TB3:** Policy authoring plane → decision execution plane
* **TB4:** Evidence Store (write path) → Evidence Store (read/audit path)
* **TB5:** Platform signing keys / KMS / HSM boundary → application services
* **TB6:** External intelligence feeds (vulnerability databases, advisories) → internal curated dataset
---
## 5. Threat Model
### 5.1 Methodology
This model combines:
* **STRIDE** for platform/system threats (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege).
* **SLSA threat clustering (A–I)** for supply-chain pipeline threats relevant to artifacts being evaluated and to the Platform's own supply chain. ([SLSA][5])
Threats are evaluated as: **Impact × Likelihood**, with controls grouped into **Prevent / Detect / Respond**.
### 5.2 Assets (what must be protected)
**A1: Decision integrity assets**
* Final decision outputs (allow/deny, risk scores, exceptions).
* Decision explanations and traces.
* Policy rules and parameters (including weights/thresholds).
**A2: Evidence integrity assets**
* Original input artifacts (SBOM, VEX, provenance, scan outputs).
* Derived artifacts (normalized graphs, reachability proofs, diff outputs).
* Evidence index and chain-of-custody metadata.
**A3: Confidentiality assets**
* Customer source code and binaries (if ingested).
* Private SBOMs/VEX that reveal internal dependencies.
* Customer environment identifiers and incident details.
**A4: Trust anchor assets**
* Signing keys (decision attestations, evidence hashes, transparency submissions).
* Root of trust configuration (certificate chains, allowed issuers).
* Time source and timestamping configuration.
**A5: Availability assets**
* Evidence store accessibility.
* Policy engine uptime.
* Interface endpoints and batch processing capacity.
### 5.3 Threat actors
* **External attacker** seeking to:
* Push a malicious component into the supply chain,
* Falsify transparency artifacts,
* Or compromise the Platform to manipulate decisions/evidence.
* **Malicious insider** (customer or Platform operator) seeking to:
* Hide vulnerable components,
* Suppress detections,
* Or retroactively alter records.
* **Compromised CI/CD or registry** affecting provenance and artifact integrity (SLSA build/distribution threats). ([SLSA][5])
* **Curious but non-malicious parties** who should not gain access to sensitive SBOM details (confidentiality and least privilege).
### 5.4 Key threat scenarios and required mitigations
Below are regulator-relevant threats that materially affect auditability and trust.
---
### T1: Spoofing of submitter identity (STRIDE: S)
**Scenario:**
An attacker submits forged SBOM/VEX/provenance claiming to be a trusted supplier.
**Impact:**
Decisions are based on untrusted artifacts; audit trail is misleading.
**Controls (shall):**
* Enforce strong authentication for ingestion (mTLS/OIDC + scoped tokens).
* Require artifact signatures for “trusted supplier” classification; verify signature chain and allowed issuers.
* Bind submitter identity to evidence record at ingestion time (AU-style accountability expectations). ([NIST Computer Security Resource Center][4])
**Evidence required:**
* Auth event logs (who/when/what).
* Signature verification results (certificate chain, key ID).
* Hash of submitted artifact (content-addressable ID).
---
### T2: Tampering with stored evidence (STRIDE: T)
**Scenario:**
An attacker modifies an SBOM, a reachability artifact, or an evaluation trace after the decision, to change what regulators/auditors see.
**Impact:**
Non-repudiation and auditability collapse; regulator confidence lost.
**Controls (shall):**
* Evidence objects stored as **content-addressed blobs** (hash = identifier).
* **Append-only metadata log** referencing evidence hashes (no in-place edits).
* Cryptographically sign the “evidence package manifest” for each decision.
* Optional transparency log anchoring (public Rekor or private equivalent). ([Sigstore][8])
**Evidence required:**
* Object store digest list and integrity proofs.
* Signed manifest (DSSE envelope recommended to bind payload type). ([GitHub][7])
* Inclusion proof or anchor reference if using a transparency log. ([Sigstore][8])
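The append-only index control can be sketched as a hash-chained log, where each entry commits to its predecessor so any in-place edit breaks the chain. This is a minimal illustration, not the Platform's implementation:

```python
import hashlib
import json

class AppendOnlyIndex:
    """Hash-chained evidence index: editing any past entry invalidates
    every later link, making retroactive tampering detectable."""

    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis value

    def append(self, evidence_digest: str, decision_id: str) -> str:
        entry = {"prev": self.head,
                 "evidence": evidence_digest,
                 "decision": decision_id}
        raw = json.dumps(entry, sort_keys=True).encode("utf-8")
        self.head = hashlib.sha256(raw).hexdigest()
        self.entries.append(entry)
        return self.head

    def verify(self) -> bool:
        # Recompute the chain from genesis; any modified entry breaks it.
        head = "0" * 64
        for entry in self.entries:
            if entry["prev"] != head:
                return False
            raw = json.dumps(entry, sort_keys=True).encode("utf-8")
            head = hashlib.sha256(raw).hexdigest()
        return head == self.head
```

Anchoring the current `head` value in a transparency log (or a signed manifest) turns this internal tamper-evidence into externally verifiable tamper-evidence.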
---
### T3: Repudiation of decisions or approvals (STRIDE: R)
**Scenario:**
A policy author or approver claims they did not approve a policy change or a high-risk exception.
**Impact:**
Weak governance; cannot establish accountability.
**Controls (shall):**
* Two-person approval workflow for policy changes and exceptions.
* Immutable audit logs capturing: identity, time, action, object, outcome (aligned with audit record content expectations). ([NIST Computer Security Resource Center][4])
* Sign policy versions and exception artifacts.
**Evidence required:**
* Signed policy version artifacts.
* Approval records linked to identity provider logs.
* Change diff + rationale.
---
### T4: Information disclosure via SBOM/VEX outputs (STRIDE: I)
**Scenario:**
An auditor-facing export inadvertently reveals proprietary component lists, internal repo URLs, or sensitive dependency relationships.
**Impact:**
Confidentiality breach; contractual/regulatory exposure; risk of targeted exploitation.
**Controls (shall):**
* Role-based access control for evidence and exports.
* Redaction profiles (“regulator view,” “customer view,” “internal view”) with deterministic transformation rules.
* Separate encryption domains (tenant-specific keys).
* Secure export channels; optional offline export bundles for air-gapped review.
**Evidence required:**
* Access-control policy snapshots and enforcement logs.
* Export redaction policy version and redaction transformation log.
---
### T5: Denial of service against evaluation pipeline (STRIDE: D)
**Scenario:**
A malicious party floods ingestion endpoints or submits pathological SBOM graphs causing excessive compute and preventing timely decisions.
**Impact:**
Availability and timeliness failures; missed gates/releases.
**Controls (shall):**
* Input size limits, graph complexity limits, and bounded parsing.
* Quotas and rate limiting (per tenant or per submitter).
* Separate async pipeline for heavy analysis; protect the decision-critical path.
**Evidence required:**
* Rate limit logs and rejection metrics.
* Capacity monitoring evidence (for availability obligations).
---
### T6: Elevation of privilege to policy/admin plane (STRIDE: E)
**Scenario:**
An attacker compromises a service account and gains ability to modify policy, disable controls, or access evidence across tenants.
**Impact:**
Complete compromise of decision integrity and confidentiality.
**Controls (shall):**
* Strict separation of duties: policy authoring vs execution vs auditing.
* Least privilege IAM for services (scoped tokens; short-lived credentials).
* Strong hardening of signing key boundary (KMS/HSM boundary; key usage constrained by attestation policy).
**Evidence required:**
* IAM policy snapshots and access review logs.
* Key management logs (rotation, access, signing operations).
---
### T7: Supply-chain compromise of artifacts being evaluated (SLSA A–I)
**Scenario:**
The software under evaluation is compromised via source manipulation, build pipeline compromise, dependency compromise, or distribution channel compromise.
**Impact:**
Customer receives malicious/vulnerable software; Platform may miss it without sufficient provenance and identity proofs.
**Controls (should / shall depending on assurance target):**
* Require/provide provenance attestations and verify them against expectations (SLSA-style verification). ([SLSA][5])
* Verify artifact identity by digest and signed provenance.
* Enforce policy constraints for “minimum acceptable provenance” for high-criticality deployments.
**Evidence required:**
* Verified provenance statement(s) (in-toto-compatible) describing how artifacts were produced. ([in-toto][6])
* Build and publication step attestations, with cryptographic binding to artifact digests.
* Evidence of expectation configuration and verification outcomes (SLSA “verification threats” include tampering with expectations). ([SLSA][5])
---
### T8: Vulnerability intelligence poisoning / drift
**Scenario:**
The Platform's vulnerability feed is manipulated or changes over time such that a past decision cannot be reproduced.
**Impact:**
Regulator cannot validate basis of decision at time-of-decision; inconsistent results over time.
**Controls (shall):**
* Snapshot all external intelligence inputs used in an evaluation (source + version + timestamp + digest).
* In offline mode, use curated signed feed bundles and record their hashes.
* Maintain deterministic evaluation by tying each decision to the exact dataset snapshot.
**Evidence required:**
* Feed snapshot manifest (hashes, source identifiers, effective date range).
* Verification record of feed authenticity (signature or trust chain).
(OSV schema design, for example, emphasizes mapping to precise versions/commits; this supports deterministic matching when captured correctly.) ([OSV.dev][13])
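A minimal sketch of the snapshot control, assuming a JSON feed bundle; the manifest field names are illustrative, not a normative schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def feed_snapshot_manifest(source_id: str, feed_bytes: bytes,
                           effective_from: str, effective_to: str) -> dict:
    # Pin the exact dataset a decision was evaluated against:
    # source identifier, content digest, and effective date range.
    return {
        "source": source_id,
        "sha256": hashlib.sha256(feed_bytes).hexdigest(),
        "effectiveFrom": effective_from,
        "effectiveTo": effective_to,
        "capturedAt": datetime.now(timezone.utc).isoformat(),
    }

bundle = json.dumps({"vulns": [{"id": "OSV-2025-0001"}]}).encode("utf-8")
manifest = feed_snapshot_manifest("osv.dev", bundle, "2025-12-01", "2025-12-19")
```

A decision record then references the manifest's digest rather than "the current feed," which is what makes later replay against the identical dataset possible.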
---
## 6. Evidence Model
### 6.1 Evidence principles (regulator-grade properties)
All evidence objects in the Platform **shall** satisfy:
1. **Integrity:** Evidence cannot be modified without detection (hashing + immutability).
2. **Authenticity:** Evidence is attributable to its source (signatures, verified identity).
3. **Traceability:** Decisions link to specific input artifacts and transformation steps.
4. **Reproducibility:** A decision can be replayed deterministically given the same inputs and dataset snapshots.
5. **Non-repudiation:** Critical actions (policy updates, exceptions, decision signing) are attributable and auditable.
6. **Confidentiality:** Sensitive evidence is access-controlled and export-redactable.
7. **Completeness with “Known Unknowns”:** The Platform explicitly records unknown or unresolved data elements rather than silently dropping them.
### 6.2 Evidence object taxonomy
The Platform should model evidence as a graph of typed objects.
**E1: Input artifact evidence**
* SBOM documents (SPDX/CycloneDX), including dependency relationships and identifiers.
* VEX documents (CSAF VEX, OpenVEX, CycloneDX VEX) with vulnerability status assertions.
* Provenance/attestations (SLSA-style provenance, in-toto statements). ([SLSA][14])
* Scan outputs (SCA, container/image scans, static/dynamic analysis outputs).
**E2: Normalization and resolution evidence**
* Parsing/validation logs (schema validation results; warnings).
* Canonical “component graph” and “vulnerability mapping” artifacts.
* Identity resolution records: how name/version/IDs were mapped.
**E3: Analysis evidence**
* Vulnerability match outputs (CVE/OSV IDs, version ranges, scoring).
* Reachability artifacts (if supported): call graph results, dependency path proofs, or “not reachable” justification artifacts.
* Diff artifacts: changes between SBOM versions (component added/removed/upgraded; license changes; vulnerability deltas).
**E4: Policy and governance evidence**
* Policy definitions and versions (rules, thresholds).
* Exception records with approver identity and rationale.
* Approval workflow records and change control logs.
**E5: Decision evidence**
* Decision outcome (e.g., pass/fail/risk tier).
* Deterministic decision trace (which rules fired, which inputs were used).
* Unknowns/assumptions list.
* Signed decision statement + manifest of linked evidence objects.
**E6: Operational security evidence**
* Authentication/authorization logs.
* Key management and signing logs.
* Evidence store integrity monitoring logs.
* Incident response records (if applicable).
### 6.3 Common metadata schema (minimum required fields)
Every evidence object **shall** include at least:
* **EvidenceID:** content-addressable ID (e.g., SHA-256 digest of canonical bytes).
* **EvidenceType:** enumerated type (SBOM, VEX, Provenance, ScanResult, Policy, Decision, etc.).
* **Producer:** tool/system identity that generated the evidence (name, version).
* **Timestamp:** time created + time ingested (with time source information).
* **Subject:** the software artifact(s) the evidence applies to (artifact digest(s), package IDs).
* **Chain links:** parent EvidenceIDs (inputs/precedents).
* **Tenant / confidentiality labels:** access classification and redaction profile applicability.
This aligns with the SBOM minimum elements emphasis on baseline data, automation support, and practices/processes including known unknowns and access control.
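The minimum fields above can be expressed as an immutable record type; this sketch uses hypothetical attribute names mirroring the list, and `frozen=True` to reflect the immutability requirement:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceMeta:
    # Hypothetical minimal metadata record, not a normative schema.
    evidence_id: str        # content-addressable ID, e.g. "sha256:..."
    evidence_type: str      # SBOM | VEX | Provenance | ScanResult | Policy | Decision
    producer: str           # generating tool/system, name + version
    created_at: str         # RFC 3339 timestamp from the producer
    ingested_at: str        # RFC 3339 timestamp from the Platform clock
    subjects: tuple         # artifact digest(s) the evidence applies to
    parents: tuple = ()     # chain links: parent EvidenceIDs
    tenant_label: str = ""  # confidentiality / redaction-profile label

meta = EvidenceMeta(
    evidence_id="sha256:ab12...",
    evidence_type="SBOM",
    producer="example-sbom-tool/2.1",
    created_at="2025-12-19T10:00:00Z",
    ingested_at="2025-12-19T10:05:00Z",
    subjects=("sha256:cd34...",),
)
```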
### 6.4 Evidence integrity and signing
**6.4.1 Hashing and immutability**
* Raw evidence artifacts shall be stored as immutable blobs.
* Derived evidence shall be stored as separate immutable blobs.
* The evidence index shall be append-only and reference blobs by hash.
**6.4.2 Signed envelopes and type binding**
* For high-assurance use, the Platform shall sign:
* Decision statements,
* Per-decision evidence manifests,
* Policy versions and exception approvals.
* Use a signing format that binds the **payload type** to the signature to reduce confusion attacks; DSSE is explicitly designed to authenticate both message and type. ([GitHub][7])
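The type binding DSSE provides comes from its pre-authentication encoding (PAE): the bytes that are actually signed include both the declared payload type and the payload. A sketch of the encoding step only — signing itself is omitted, since keys live behind the KMS/HSM boundary (TB5):

```python
def dsse_pae(payload_type: str, payload: bytes) -> bytes:
    # DSSE v1 Pre-Authentication Encoding: the signature covers the payload
    # *and* its declared type, so a signed payload cannot be replayed as a
    # different kind of statement (the confusion attack DSSE targets).
    t = payload_type.encode("utf-8")
    return b"DSSEv1 %d %s %d %s" % (len(t), t, len(payload), payload)

signed_bytes = dsse_pae("application/vnd.in-toto+json", b'{"decision":"allow"}')
```

Because the length prefixes are part of the signed bytes, an attacker cannot shift content between the type and payload fields without invalidating the signature.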
**6.4.3 Attestation model**
* Use in-toto-compatible statements to standardize subjects (artifact digests) and predicates (decision, SBOM, provenance). ([in-toto][6])
* CycloneDX explicitly recognizes an official predicate type for BOM attestations, which can be leveraged for standardized evidence typing. ([CycloneDX][15])
**6.4.4 Transparency anchoring (optional but strong for regulators)**
* Publish signed decision manifests to a transparency log to provide additional tamper-evidence and public verifiability (or use a private transparency log for sensitive contexts). Rekor is Sigstore's signature transparency log service. ([Sigstore][8])
### 6.5 Evidence for VEX and “not affected” assertions
Because VEX is specifically intended to prevent wasted effort on non-exploitable upstream vulnerabilities and is machine-readable for automation, the Platform must treat VEX as first-class evidence.
Minimum required behaviors:
* Maintain the original VEX document and signature (if present).
* Track the VEX **status** (not affected / affected / fixed / under investigation) for each vulnerability–product association.
* If the Platform generates VEX-like conclusions (e.g., “not affected” based on reachability), it shall:
* Record the analytical basis as evidence (reachability proof, configuration assumptions),
* Mark the assertion as Platform-authored (not vendor-authored),
* Provide an explicit confidence level and unknowns.
For CSAF-based VEX documents, the Platform should validate conformance to the CSAF VEX profile requirements. ([OASIS Documents][11])
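The required behaviours for Platform-authored assertions can be sketched as a guard around the status store; the statuses follow the VEX status set, while the function and field names are assumptions for illustration:

```python
VEX_STATUSES = {"not_affected", "affected", "fixed", "under_investigation"}

def record_vex_assertion(store: dict, vuln_id: str, product_digest: str,
                         status: str, author: str, platform_authored: bool,
                         basis_evidence_id: str = None,
                         confidence: str = None) -> None:
    if status not in VEX_STATUSES:
        raise ValueError("unknown VEX status: " + status)
    if platform_authored and basis_evidence_id is None:
        # A Platform-authored conclusion (e.g. "not affected" from
        # reachability analysis) must carry its analytical basis as evidence.
        raise ValueError("platform-authored assertion requires an analytical basis")
    store[(vuln_id, product_digest)] = {
        "status": status,
        "author": author,
        "platformAuthored": platform_authored,  # never presented as vendor-authored
        "basis": basis_evidence_id,
        "confidence": confidence,
    }

store = {}
record_vex_assertion(store, "CVE-2025-0001", "sha256:cd34", "not_affected",
                     author="platform", platform_authored=True,
                     basis_evidence_id="sha256:reach01", confidence="high")
```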
### 6.6 Reproducibility and determinism controls
Each decision must be reproducible. Therefore each decision record **shall** include:
* **Algorithm version** (policy engine + scoring logic version).
* **Policy version** and policy hash.
* **All inputs by digest** (SBOM/VEX/provenance/scan outputs).
* **External dataset snapshot identifiers** (vulnerability DB snapshot digest(s), advisory feeds, scoring inputs).
* **Execution environment ID** (runtime build of the Platform component that performed the evaluation).
* **Determinism proof fields** (e.g., “random seed = fixed/none”, stable sort order used, canonicalization rules used).
This supports regulator expectations for traceability and for consistent evaluation in supply-chain risk management programs. ([NIST Computer Security Resource Center][3])
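The determinism fields above can be combined into a single replayable record. In this sketch the record digest is order-independent because inputs are sorted before hashing; the sorting and canonicalization choices are illustrative assumptions, not normative rules:

```python
import hashlib
import json

def decision_record(engine_version: str, policy_hash: str,
                    input_digests: list, snapshot_digests: list,
                    outcome: str) -> dict:
    body = {
        "algorithmVersion": engine_version,
        "policyHash": policy_hash,
        "inputs": sorted(input_digests),              # stable sort order
        "datasetSnapshots": sorted(snapshot_digests),  # pinned feed snapshots
        "outcome": outcome,
        "randomSeed": None,                            # determinism proof field
    }
    raw = json.dumps(body, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return {**body, "recordDigest": hashlib.sha256(raw).hexdigest()}

r1 = decision_record("engine-3.2", "sha256:pol1",
                     ["sha256:a", "sha256:b"], ["sha256:feed1"], "allow")
r2 = decision_record("engine-3.2", "sha256:pol1",
                     ["sha256:b", "sha256:a"], ["sha256:feed1"], "allow")
```

A reproducibility drill (§8.2) then amounts to re-running the evaluation from the listed digests and checking that the recomputed `recordDigest` matches.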
### 6.7 Retention, legal hold, and audit packaging
**Retention (shall):**
* Evidence packages supporting released decisions must be retained for a defined minimum period (set by sector/regulator/contract), with:
* Immutable storage and integrity monitoring,
* Controlled deletion only through approved retention workflows,
* Legal hold support.
**Audit package export (shall):**
For any decision, the Platform must be able to export an “Audit Package” containing:
1. **Decision statement** (signed)
2. **Evidence manifest** (signed) listing all evidence objects by hash
3. **Inputs** (SBOM/VEX/provenance/etc.) or references to controlled-access retrieval
4. **Transformation chain** (normalization and mapping records)
5. **Policy version and evaluation trace**
6. **External dataset snapshot manifests**
7. **Access-control and integrity verification records** (to prove custody)
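The seven elements can be assembled into one exportable document; this sketch is illustrative (the keys are hypothetical) and omits the signing and access-control steps a real export would apply:

```python
import json

def export_audit_package(decision_stmt: dict, evidence_manifest: list,
                         inputs_ref: str, transform_chain: list,
                         policy_eval: dict, snapshots: list,
                         custody_records: list) -> str:
    package = {
        "decision": decision_stmt,               # 1. signed decision statement
        "evidenceManifest": evidence_manifest,   # 2. evidence objects by hash
        "inputs": inputs_ref,                    # 3. inputs or controlled-access reference
        "transformationChain": transform_chain,  # 4. normalization/mapping records
        "policyEvaluation": policy_eval,         # 5. policy version + evaluation trace
        "datasetSnapshots": snapshots,           # 6. external dataset snapshot manifests
        "custodyRecords": custody_records,       # 7. access-control/integrity proofs
    }
    return json.dumps(package, sort_keys=True, indent=2)

bundle = export_audit_package({"outcome": "allow"}, ["sha256:a"],
                              "controlled-access", [],
                              {"policyHash": "sha256:p"}, [], [])
```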
---
## 7. Threat-to-Evidence Traceability (Minimal Regulator View)
This section provides a compact mapping from key threat classes to the evidence that must exist to satisfy audit and non-repudiation expectations.
| Threat Class | Primary Risk | “Must-have” Evidence Outputs |
| -------------------------------- | ------------------------------- | ------------------------------------------------------------------------------------------------- |
| Spoofing submitter | Untrusted artifacts used | Auth logs + signature verification + artifact digests |
| Tampering with evidence | Retroactive manipulation | Content-addressed evidence + append-only index + signed manifest (+ optional transparency anchor) |
| Repudiation | Denial of approval/changes | Signed policy + approval workflow logs + immutable audit trail |
| Information disclosure | Sensitive SBOM leakage | Access-control evidence + redaction policy version + export logs |
| DoS | Missed gates / delayed response | Rate limiting logs + capacity metrics + bounded parsing evidence |
| Privilege escalation | Policy/evidence compromise | IAM snapshots + key access logs + segregation-of-duty records |
| Supply-chain pipeline compromise | Malicious artifact | Provenance attestations + verification results + artifact digest binding |
| Vulnerability feed drift | Non-reproducible decisions | Feed snapshot manifests + digests + authenticity verification |
(Where the threat concerns the wider software supply chain, SLSA's threat taxonomy provides an established clustering for where pipeline threats occur and the role of verification. ([SLSA][5]))
---
## 8. Governance, Control Testing, and Continuous Compliance
To be regulator-grade, the Platform's security and evidence-integrity controls must be governed and tested.
### 8.1 Governance expectations
* Maintain a control mapping to a recognized catalog (e.g., NIST SP 800-53) for access control, auditing, integrity, and supply-chain risk management. ([NIST Computer Security Resource Center][4])
* Maintain a supply-chain risk posture aligned with C-SCRM guidance (e.g., NIST SP 800-161 Rev. 1). ([NIST Computer Security Resource Center][3])
* Align secure development practices to SSDF expectations and terminology, noting SSDF has an active Rev. 1 IPD (v1.2) publication process at NIST. ([NIST Computer Security Resource Center][2])
### 8.2 Control testing (shall)
At minimum, perform and retain evidence of:
* Periodic integrity tests of evidence store immutability and hash verification.
* Key management audits (signing operations, rotation, restricted usage).
* Access review audits (especially multi-tenant isolation).
* Reproducibility tests: re-run evaluation from historical evidence package and confirm identical results.
---
## Appendix A: Example Signed Decision Statement Structure (Conceptual)
This is a conceptual structure (not a normative schema) showing the minimum linkage needed:
* **Subject:** artifact digest(s) + identifiers
* **Predicate type:** `.../decision` (Platform-defined)
* **Predicate:** decision outcome + rationale + policy hash + dataset snapshot hashes
* **Envelope:** DSSE signature with payload type binding ([GitHub][7])
* **Optional transparency anchor:** Rekor entry UUID / inclusion proof ([Sigstore][8])
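Rendered as an in-toto Statement, the conceptual structure above might look like the following; the predicate type URI and predicate fields are Platform-defined placeholders, not a published schema:

```python
# Conceptual in-toto Statement for a decision attestation (placeholder values).
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {"name": "example-service", "digest": {"sha256": "ab12..."}}
    ],
    # Hypothetical Platform-defined predicate type:
    "predicateType": "https://example.invalid/attestations/decision/v1",
    "predicate": {
        "outcome": "allow",
        "rationale": "no reachable critical vulnerabilities",
        "policyHash": "sha256:pol1...",
        "datasetSnapshots": ["sha256:feed1..."],
        "unknowns": [],
    },
}
# The serialized statement would then be wrapped in a DSSE envelope, with an
# optional Rekor entry UUID / inclusion proof recorded alongside the envelope.
```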
---
## Appendix B: Practical Notes for SBOM/VEX Interoperability
* Support both SPDX and CycloneDX ingestion and preservation; both are referenced in SBOM minimum elements discussion and are widely used.
* Treat CSAF VEX and OpenVEX as acceptable VEX carriers; validate schema and preserve original artifacts. ([OASIS Documents][11])
* Capture “known unknowns” explicitly rather than forcing false precision; this is part of SBOM minimum elements practices/processes framing and is directly relevant to regulator-grade audit transparency.
---
## Appendix C: Derived Artifacts
The following artifacts can be derived directly from this model without changing its underlying assertions:
1. A **control-to-evidence crosswalk** (NIST 800-53 / SSDF / C-SCRM oriented).
2. A **test plan** (control testing, evidence integrity validation, reproducibility drills).
3. A **formal evidence schema** (JSON Schema for evidence objects + DSSE envelopes + manifest format).
4. A **regulator-ready “Audit Package” template** for hand-off to third parties (including redaction tiers).
[1]: https://www.federalregister.gov/documents/2025/08/22/2025-16147/request-for-comment-on-2025-minimum-elements-for-a-software-bill-of-materials "Federal Register: Request for Comment on 2025 Minimum Elements for a Software Bill of Materials"
[2]: https://csrc.nist.gov/pubs/sp/800/218/r1/ipd "SP 800-218 Rev. 1, Secure Software Development Framework (SSDF) Version 1.2: Recommendations for Mitigating the Risk of Software Vulnerabilities | CSRC"
[3]: https://csrc.nist.gov/pubs/sp/800/161/r1/final "SP 800-161 Rev. 1, Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations | CSRC"
[4]: https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final "SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations | CSRC"
[5]: https://slsa.dev/spec/v1.1/threats "SLSA • Threats & mitigations"
[6]: https://in-toto.io/ "in-toto"
[7]: https://github.com/secure-systems-lab/dsse "DSSE: Dead Simple Signing Envelope"
[8]: https://docs.sigstore.dev/logging/overview/ "Rekor"
[9]: https://github.com/CycloneDX/specification "CycloneDX/specification"
[10]: https://www.iso.org/standard/81870.html "ISO/IEC 5962:2021 - SPDX® Specification V2.2.1"
[11]: https://docs.oasis-open.org/csaf/csaf/v2.0/os/csaf-v2.0-os.html "Common Security Advisory Framework Version 2.0"
[12]: https://github.com/openvex/spec "OpenVEX Specification"
[13]: https://osv.dev/ "OSV - Open Source Vulnerabilities"
[14]: https://slsa.dev/spec/v1.0-rc1/provenance "Provenance"
[15]: https://cyclonedx.org/specification/overview/ "Specification Overview"

docs/legal/LEGAL_FAQ_QUOTA.md (Executable file)
# Legal FAQ — Free-Tier Quota & AGPL Compliance
> **Operational behaviour (limits, counters, delays) is documented in
> [`30_QUOTA_ENFORCEMENT_FLOW1.md`](30_QUOTA_ENFORCEMENT_FLOW1.md).**
> This page covers only the legal aspects of offering StellaOps as a
> service or embedding it into another product while the free-tier limits are
> in place.
---
## 1 · Does enforcing a quota violate the AGPL?
**No.**
AGPL-3.0 does not forbid implementing usage controls in the program itself.
Recipients retain the freedoms to run, study, modify and share the software.
The StellaOps quota:
* Is enforced **solely at the service layer** (Valkey counters, Redis-compatible) — the source
code implementing the quota is published under AGPL-3.0-or-later.
* Never disables functionality; it introduces *time delays* only after the
free allocation is exhausted.
* Can be bypassed entirely by rebuilding from source and removing the
enforcement middleware — the licence explicitly allows such modifications.
Therefore the quota complies with §§ 0 & 2 of the AGPL.
---
## 2 · Can I redistribute StellaOps with the quota removed?
Yes, provided you:
1. **Publish the full corresponding source code** of your modified version
(AGPL §13 & §5(c)), and
2. Clearly indicate the changes (AGPL §5(a)).
You may *retain* or *relax* the limits, or introduce your own tiering, as long
as the complete modified source is offered to every user of the service.
---
## 3 · Embedding in a proprietary appliance
You may ship StellaOps inside a hardware or virtual appliance **only if** the
entire combined work is distributed under **AGPL-3.0-or-later** and you supply
the full source code for both the scanner and your integration glue.
Shipping an AGPL component while keeping the rest closed-source violates
§13 (*“remote network interaction”*).
---
## 4 · SaaS redistribution
Operating a public SaaS that offers StellaOps scans to third parties triggers
the **network-use clause**. You must:
* Provide the complete, buildable source of **your running version**
including quota patches or UI branding.
* Present the offer **conspicuously** (e.g. a “Source Code” footer link).
Failure to do so breaches §13 and can terminate your licence under §8.
---
## 5 · Is email collection for the JWT legal?
* **Purpose limitation (GDPR Art. 5(1)(b)):** the address is used only to deliver the
JWT or optional release notes.
* **Data minimisation (Art. 5(1)(c)):** no name, IP or marketing preferences are
required; a blank email body suffices.
* **Storage limitation (Art. 5(1)(e)):** addresses are deleted or hashed after
7 days unless the sender opts into updates.
Hence the token workflow adheres to GDPR principles.
---
## 6 · Changelog
| Version | Date | Notes |
|---------|------|-------|
| **2.0** | 2025-07-16 | Removed runtime quota details; linked to new authoritative overview. |
| 1.0 | 2024-12-20 | Initial legal FAQ. |