# Product Advisory: AI Economics Moat

ID: ADVISORY-20260116-AI-ECON-MOAT
Status: ACTIVE
Owner intent: Product-wide directive
Scope: All modules, docs, sprints, and roadmap decisions

## 0) Thesis (why this advisory exists)

In AI economics, code is cheap, software is expensive.

Competitors (and future competitors) can produce large volumes of code quickly. Stella Ops must remain hard to catch by focusing on the parts that are still expensive:

- trust
- operability
- determinism
- evidence integrity
- low-touch onboarding
- low support burden at scale

This advisory defines the product-level objectives and non-negotiable standards that make Stella Ops defensible against "code producers".

## 1) Product positioning (the class we must win)

Stella Ops Suite must be "best in class" for:

Evidence-grade release orchestration for containerized applications outside Kubernetes.

Stella is NOT attempting to be:

- a generic CD platform (Octopus, GitLab, Jenkins replacements)
- a generic vulnerability scanner (Trivy, Grype replacements)
- a "platform of everything" with infinite integrations

The moat is the end-to-end chain:

digest identity -> evidence -> verdict -> gate -> promotion -> audit export -> deterministic replay

The product wins when customers can run verified releases with minimal human labor and produce auditor-ready evidence.

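As a rough illustration, one link of that chain could be captured as a single decision record. The sketch below is hypothetical: the field names and types are assumptions for this advisory, not the shipped Stella Ops schema.

```go
// Hypothetical sketch of a decision record linking digest identity, evidence,
// verdict, and gate outcome; field names are illustrative, not the real schema.
package evidence

import "time"

// DecisionRecord ties one promotion decision back to its immutable inputs so
// the verdict can be exported for audit and replayed later.
type DecisionRecord struct {
	ReleaseDigest  string    `json:"releaseDigest"`  // immutable image digest (sha256:...), never a tag
	EvidenceRefs   []string  `json:"evidenceRefs"`   // content-addressed SBOM / VEX / scan artifacts
	PolicyVersion  string    `json:"policyVersion"`  // exact policy revision that was evaluated
	Verdict        string    `json:"verdict"`        // e.g. "pass" or "blocked"
	ReasonCodes    []string  `json:"reasonCodes"`    // stable machine-readable reasons behind the verdict
	GateOutcome    string    `json:"gateOutcome"`    // promotion gate result derived from the verdict
	EvaluatedAt    time.Time `json:"evaluatedAt"`    // when the evaluation ran
	ReplayChecksum string    `json:"replayChecksum"` // hash of canonical inputs, used to verify replay
}
```
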
## 2) Target customer and adoption constraint

Constraint: founder operates solo until ~100 paying customers.

Therefore, the product must be self-serve by default:

- install must be predictable
- failures must be diagnosable without maintainer time
- docs must replace support
- "Doctor" must replace debugging sessions

Support must be an exception, not a workflow.

## 3) The five non-negotiable product invariants

Every meaningful product change MUST preserve and strengthen these invariants:

I1. Evidence-grade by design
- Every verified decision has an evidence trail.
- Evidence is exportable, replayable, and verifiable.

I2. Deterministic replay
- Same inputs -> same outputs.
- A verdict can be reproduced and verified later, not just explained (a sketch follows at the end of this section).

I3. Digest-first identity
- Releases are immutable digests, not mutable tags.
- "What is deployed where" is anchored to digests.

I4. Offline-first posture
- Air-gapped and low-egress environments must remain first-class.
- No hidden network dependencies in core flows.

I5. Low-touch operability
- Misconfigurations fail fast at startup with clear messages.
- Runtime failures have deterministic recovery playbooks.
- Doctor provides actionable diagnostics bundles and remediation steps.

If a proposed feature weakens any invariant, it must be rejected or redesigned.

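A minimal sketch of what an I2-style replay check can look like, assuming verdict inputs can be canonically serialized; the function names are hypothetical and real inputs are richer than a flat map.

```go
// Minimal sketch of an I2-style replay check; names are hypothetical.
package replay

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
)

// canonicalHash serializes the inputs in a stable form and hashes them.
// encoding/json emits map keys in sorted order, so the same inputs always
// produce the same digest.
func canonicalHash(inputs map[string]string) (string, error) {
	raw, err := json.Marshal(inputs)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:]), nil
}

// VerifyReplay recomputes the checksum from stored inputs and compares it to
// the one recorded with the verdict: same inputs -> same outputs.
func VerifyReplay(inputs map[string]string, recordedChecksum string) (bool, error) {
	got, err := canonicalHash(inputs)
	if err != nil {
		return false, err
	}
	return got == recordedChecksum, nil
}
```
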
## 4) Moats we build (how Stella stays hard to catch)

M1. Evidence chain continuity (no "glue work" required)
- Scan results, reachability proofs, policy evaluation, approvals, promotions, and exports are one continuous chain.
- Do not require customers to stitch multiple tools together to get audit-grade releases.

M2. Explainability with proof, not narrative
- "Why blocked?" must produce a deterministic trace + referenced evidence artifacts.
- The answer must be replayable, not a one-time explanation.

M3. Operability moat (Doctor + safe defaults)
- Diagnostics must identify root cause, not just symptoms.
- Provide deterministic checklists and fixes.
- Every integration must ship with health checks and failure-mode docs (a sketch follows at the end of this section).

M4. Controlled surface area (reduce permutations)
- Ship a small number of Tier-1 golden integrations and targets.
- Keep the plugin system as an escape valve, but do not expand the maintained matrix beyond what solo operations can support.

M5. Standards-grade outputs with stable schemas
- SBOM, VEX, attestations, exports, and decision records must be stable, versioned, and backwards compatible where promised.
- Stability is a moat: auditors and platform teams adopt what they can depend on.

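To make the M3 point concrete, here is a hypothetical sketch of a Doctor-style check contract; the interface and field names are assumptions for illustration, not the actual Stella Ops Doctor API.

```go
// Hypothetical sketch of a Doctor-style check contract; not the actual API.
package doctor

import "context"

// Check is one diagnostic: it detects a specific failure mode and points the
// operator at a deterministic fix instead of a debugging session.
type Check interface {
	ID() string                     // stable identifier, e.g. "registry.auth"
	Run(ctx context.Context) Result // must not panic; always returns a result
}

// Result is what the operator (and a diagnostics bundle) sees.
type Result struct {
	Healthy     bool
	ReasonCode  string // stable machine-readable code, usable in docs and runbooks
	Detail      string // what was actually observed (root cause, not just a symptom)
	Remediation string // the concrete, deterministic fix to apply
}
```
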
## 5) Explicit non-goals (what to reject quickly)

Reject or de-prioritize proposals that primarily:

- add a generic CD surface without evidence and determinism improvements
- expand integrations broadly without a "Tier-1" support model and diagnostics coverage
- compete on raw scanner breadth rather than evidence-grade gating outcomes
- add UI polish that does not reduce operator labor or support load
- add "AI features" that create nondeterminism or require external calls in core paths

If a feature does not strengthen at least one moat (M1-M5), it is likely not worth shipping now.

## 6) Agent review rubric (use this to evaluate any proposal, advisory, or sprint)

When reviewing any new idea, feature request, PRD, or sprint, score it against:

A) Moat impact (required)
- Which moat does it strengthen (M1-M5)?
- What measurable operator/auditor outcome improves?

B) Support burden risk (critical)
- Does this increase the probability of support tickets?
- Does Doctor cover the new failure modes?
- Are there clear runbooks and error messages?

C) Determinism and evidence risk (critical)
- Does this introduce nondeterminism?
- Are outputs stable, canonical, and replayable?
- Does it weaken evidence chain integrity?

D) Permutation risk (critical)
- Does this increase the matrix of supported combinations?
- Can it be constrained to a "golden path" configuration?

E) Time-to-value impact (important)
- Does this reduce time to first verified release?
- Does it reduce time to answer "why blocked"?

If a proposal scores poorly on B/C/D, it must be redesigned or rejected.

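One way to read this rubric is as a hard gate on the critical axes. The toy sketch below encodes that rule; the three-point scoring scale is an assumption for illustration only.

```go
// Toy encoding of the section 6 rubric; the three-point scale is an assumption.
package rubric

// Score is a coarse rating: 0 = poor, 1 = acceptable, 2 = strong.
type Score int

// Proposal holds one score per rubric axis (A-E).
type Proposal struct {
	MoatImpact      Score // A (required)
	SupportBurden   Score // B (critical)
	Determinism     Score // C (critical)
	PermutationRisk Score // D (critical)
	TimeToValue     Score // E (important)
}

// MustRedesignOrReject mirrors the rule above: a poor score on any critical
// axis (B, C, or D) means the proposal cannot ship as designed.
func MustRedesignOrReject(p Proposal) bool {
	return p.SupportBurden == 0 || p.Determinism == 0 || p.PermutationRisk == 0
}
```
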
## 7) Definition of Done (feature-level) - do not ship without the boring parts

Any shippable feature must include, at minimum:

DOD-1: Operator story
- Clear user story for operators and auditors, not just developers.

DOD-2: Failure modes and recovery
- Documented expected failures, error codes/messages, and remediation steps.
- Doctor checks added or extended to cover the common failure paths.

DOD-3: Determinism and evidence
- Deterministic outputs where applicable.
- Evidence artifacts linked to decisions.
- A replay or verify path exists if the feature affects verdicts or gates.

DOD-4: Tests
- Unit tests for logic (happy + edge cases).
- Integration tests for contracts (DB, queues, storage where used).
- Determinism tests when outputs are serialized, hashed, or signed (a sketch follows at the end of this section).

DOD-5: Documentation
- Docs updated where the feature changes behavior or contracts.
- Include copy/paste examples for golden path usage.

DOD-6: Observability
- Structured logs and metrics for success/failure paths.
- Explicit "reason codes" for gate decisions and failures.

If the feature cannot afford these, it cannot afford to exist in a solo-scaled product.

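For the DOD-4 determinism tests, a minimal sketch looks like the following; exportBundle is a local stand-in defined only so the example is self-contained, not the real export path.

```go
// Sketch of a DOD-4 style determinism test; exportBundle is a stand-in.
package export

import (
	"bytes"
	"encoding/json"
	"testing"
)

// exportBundle serializes findings into a canonical form: string-keyed maps
// are marshaled with sorted keys, so the output is stable across runs.
func exportBundle(findings map[string]string) ([]byte, error) {
	return json.Marshal(findings)
}

func TestExportBundleIsDeterministic(t *testing.T) {
	findings := map[string]string{
		"sha256:abc": "pass",
		"sha256:def": "blocked",
	}

	first, err := exportBundle(findings)
	if err != nil {
		t.Fatal(err)
	}
	second, err := exportBundle(findings)
	if err != nil {
		t.Fatal(err)
	}

	// Outputs must be byte-identical before they are hashed or signed.
	if !bytes.Equal(first, second) {
		t.Fatalf("export is not deterministic:\n%s\n%s", first, second)
	}
}
```
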
## 8) Product-level metrics (what we optimize)

These metrics are the scoreboard. Prioritize work that improves them.

P0 metrics (most important):
- Time-to-first-verified-release (fresh install -> verified promotion)
- Mean time to answer "why blocked?" (with proof)
- Support minutes per customer per month (must trend toward near-zero)
- Determinism regressions per release (must be near-zero)

P1 metrics:
- Noise reduction ratio (reachable, actionable findings vs raw findings)
- Audit export acceptance rate (auditors can consume without manual reconstruction)
- Upgrade success rate (low-friction updates, predictable migrations)

## 9) Immediate product focus areas implied by this advisory

When unsure what to build next, prefer investments in:

- Doctor: diagnostics coverage, fix suggestions, bundles, and environment validation
- Golden path onboarding: install -> connect -> scan -> gate -> promote -> export
- Determinism gates in CI and runtime checks for canonical outputs
- Evidence export bundles that map to common audit needs
- "Why blocked" trace quality, completeness, and replay verification

Avoid "breadth expansion" unless it includes full operability coverage.

## 10) How to apply this advisory in planning

When processing this advisory:

- Ensure docs reflect the invariants and moats at the product overview level.
- Ensure sprints and tasks reference which moat they strengthen (M1-M5).
- If a sprint increases complexity without decreasing operator labor or improving evidence integrity, treat it as suspect.

Archive this advisory only if it is superseded by a newer product-wide directive.