Save checkpoint: semi-implemented and implemented features

This commit is contained in: master
2026-02-08 18:00:49 +02:00
parent 04360dff63, commit 1bf6bbf395
20895 changed files with 716795 additions and 64 deletions

# AdvisoryAI Orchestrator (Chat + Workbench + Runs)
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
The AdvisoryAI module provides a chat orchestrator with session management, run tracking (with artifacts and events), and tool routing. The backend web service exposing chat and run endpoints is operational.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/`, `src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/`, `src/AdvisoryAi/StellaOps.AdvisoryAI.Worker/`
- **Key Classes**:
- `AdvisoryPipelineOrchestrator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Orchestration/AdvisoryPipelineOrchestrator.cs`) - main pipeline orchestrator coordinating task plans and execution
- `AdvisoryPipelineExecutor` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Execution/AdvisoryPipelineExecutor.cs`) - executes advisory pipeline stages
- `AdvisoryChatService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Services/AdvisoryChatService.cs`) - chat session orchestration service
- `ConversationService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/ConversationService.cs`) - manages conversation state and context
- `RunService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Runs/RunService.cs`) - tracks runs with artifacts and events
- `InMemoryRunStore` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Runs/InMemoryRunStore.cs`) - in-memory storage for run data
- `AdvisoryChatIntentRouter` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Routing/AdvisoryChatIntentRouter.cs`) - routes chat intents to appropriate handlers
- `ChatEndpoints` (`src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/Endpoints/ChatEndpoints.cs`) - REST endpoints for chat operations
- `RunEndpoints` (`src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/Endpoints/RunEndpoints.cs`) - REST endpoints for run tracking
- `AdvisoryTaskWorker` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Worker/Services/AdvisoryTaskWorker.cs`) - background worker processing advisory tasks
- **Interfaces**: `IAdvisoryPipelineOrchestrator`, `IRunService`, `IRunStore`
- **Source**: Feature matrix scan
## E2E Test Plan
- [ ] Submit a chat message via `ChatEndpoints` and verify `AdvisoryChatService` processes it with correct conversation context
- [ ] Create a run via `RunEndpoints` and verify `RunService` tracks artifacts and events in `InMemoryRunStore`
- [ ] Verify `AdvisoryChatIntentRouter` routes different intent types (explain, remediate, policy) to correct handlers
- [ ] Verify `AdvisoryPipelineOrchestrator` creates and executes task plans with `AdvisoryPipelineExecutor`
- [ ] Verify `AdvisoryTaskWorker` picks up queued tasks and processes them to completion
- [ ] Verify conversation context is maintained across multiple messages in the same session via `ConversationService`
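
The run-tracking shape implied by `RunService` and `InMemoryRunStore` (runs accumulating artifacts and events) can be sketched as follows. This is a minimal illustration in Python; the actual implementation is C#, and all names and behaviors here are simplified assumptions, not the module's API.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Run:
    """Illustrative run record: a run accumulates artifacts and events over its lifetime."""
    run_id: str
    status: str = "pending"
    artifacts: list = field(default_factory=list)
    events: list = field(default_factory=list)

class InMemoryRunStore:
    """Illustrative in-memory store, loosely modeled on the described InMemoryRunStore."""
    def __init__(self):
        self._runs = {}

    def create_run(self) -> Run:
        run = Run(run_id=str(uuid.uuid4()))
        self._runs[run.run_id] = run
        return run

    def add_event(self, run_id: str, event: str) -> None:
        self._runs[run_id].events.append(event)

    def add_artifact(self, run_id: str, artifact: dict) -> None:
        self._runs[run_id].artifacts.append(artifact)

    def get(self, run_id: str) -> Run:
        return self._runs[run_id]

store = InMemoryRunStore()
run = store.create_run()
store.add_event(run.run_id, "task.queued")
store.add_artifact(run.run_id, {"name": "plan.json"})
```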

# AdvisoryAI Pipeline with Guardrails
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Full advisory AI pipeline with guardrails, chat interface, action execution, and idempotency handling. Includes retrieval, structured/vector retrievers, and SBOM context retrieval.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/`, `src/AdvisoryAi/StellaOps.AdvisoryAI.Hosting/`
- **Key Classes**:
- `AdvisoryGuardrailPipeline` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Guardrails/AdvisoryGuardrailPipeline.cs`) - guardrail pipeline filtering AI inputs and outputs
- `AdvisoryPipelineOrchestrator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Orchestration/AdvisoryPipelineOrchestrator.cs`) - orchestrates pipeline stages with guardrail checks
- `AdvisoryPipelineExecutor` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Execution/AdvisoryPipelineExecutor.cs`) - executes pipeline with pre/post guardrails
- `AdvisoryStructuredRetriever` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Retrievers/AdvisoryStructuredRetriever.cs`) - retrieves structured advisory data
- `AdvisoryVectorRetriever` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Retrievers/AdvisoryVectorRetriever.cs`) - vector-based semantic retrieval
- `SbomContextRetriever` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Retrievers/SbomContextRetriever.cs`) - retrieves SBOM context for vulnerability analysis
- `ActionExecutor` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/ActionExecutor.cs`) - executes AI-proposed actions
- `IdempotencyHandler` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/IdempotencyHandler.cs`) - ensures idempotent action execution
- `GuardrailAllowlistLoader` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Hosting/GuardrailAllowlistLoader.cs`) - loads guardrail allowlists from configuration
- `GuardrailPhraseLoader` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Hosting/GuardrailPhraseLoader.cs`) - loads guardrail phrase filters
- `AdvisoryAiGuardrailOptions` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Hosting/AdvisoryAiGuardrailOptions.cs`) - guardrail configuration options
- **Interfaces**: `IAdvisoryStructuredRetriever`, `IAdvisoryVectorRetriever`, `ISbomContextRetriever`, `IActionExecutor`, `IIdempotencyHandler`
- **Source**: Feature matrix scan
## E2E Test Plan
- [ ] Submit a prompt through `AdvisoryGuardrailPipeline` and verify guardrails filter prohibited content before reaching LLM
- [ ] Verify `AdvisoryStructuredRetriever` returns relevant CVE/advisory data for a given vulnerability query
- [ ] Verify `AdvisoryVectorRetriever` performs semantic search and returns ranked results
- [ ] Verify `SbomContextRetriever` enriches prompts with SBOM component context
- [ ] Execute an action through `ActionExecutor` and verify `IdempotencyHandler` prevents duplicate execution
- [ ] Verify `GuardrailAllowlistLoader` and `GuardrailPhraseLoader` correctly load and enforce content filters
- [ ] Verify the full pipeline flow: retrieval -> guardrail check -> LLM inference -> output guardrail -> response
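
The pre/post guardrail flow in the last checklist item can be sketched in a few lines. A minimal Python illustration, assuming a simple phrase denylist; the real `AdvisoryGuardrailPipeline` is C# and its filtering rules are richer (allowlists, phrase loaders, configuration options).

```python
class GuardrailPipeline:
    """Illustrative guardrail: block prompts and outputs containing denied phrases."""
    def __init__(self, denied_phrases):
        self.denied = [p.lower() for p in denied_phrases]

    def check(self, text: str) -> bool:
        lowered = text.lower()
        return not any(p in lowered for p in self.denied)

    def run(self, prompt: str, infer) -> str:
        if not self.check(prompt):       # pre-guardrail on the input
            raise ValueError("input blocked by guardrail")
        output = infer(prompt)
        if not self.check(output):       # post-guardrail on the output
            raise ValueError("output blocked by guardrail")
        return output

pipeline = GuardrailPipeline(["rm -rf"])
safe = pipeline.run("explain CVE-2024-1234", lambda p: "it is a buffer overflow")
```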

# AI Action Policy Gate (K4 Lattice Governance for AI-Proposed Actions)
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Connects AI-proposed actions to the Policy Engine's K4 lattice for governance-aware automation. Moves beyond simple role checks to VEX-aware policy gates with approval workflows, idempotency tracking, and action audit ledger. Enables "AI that acts" with governance guardrails.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/`
- **Key Classes**:
- `ActionPolicyGate` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/ActionPolicyGate.cs`) - evaluates AI-proposed actions against K4 lattice policy rules
- `ActionRegistry` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/ActionRegistry.cs`) - registry of available AI actions with metadata and policy requirements
- `ActionExecutor` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/ActionExecutor.cs`) - executes approved actions with policy gate checks
- `ActionAuditLedger` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/ActionAuditLedger.cs`) - immutable audit trail of all action decisions and executions
- `ApprovalWorkflowAdapter` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/ApprovalWorkflowAdapter.cs`) - integrates with approval workflows for gated actions
- `IdempotencyHandler` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/IdempotencyHandler.cs`) - ensures actions are not duplicated
- `ActionDefinition` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Actions/ActionDefinition.cs`) - defines an action's capabilities, constraints, and policy metadata
- **Interfaces**: `IActionPolicyGate`, `IActionRegistry`, `IActionExecutor`, `IActionAuditLedger`, `IApprovalWorkflowAdapter`, `IIdempotencyHandler`, `IGuidGenerator`
- **Source**: SPRINT_20260109_011_004_BE_policy_action_integration.md
## E2E Test Plan
- [ ] Register an action in `ActionRegistry` and verify `ActionPolicyGate` evaluates it against K4 lattice policy rules
- [ ] Submit an action requiring approval and verify `ApprovalWorkflowAdapter` creates an approval request
- [ ] Execute a gated action after approval and verify `ActionAuditLedger` records the decision, approval, and execution
- [ ] Submit a duplicate action and verify `IdempotencyHandler` prevents re-execution
- [ ] Submit an action that violates policy and verify `ActionPolicyGate` rejects it with a policy violation reason
- [ ] Verify `ActionDefinition` metadata (risk level, required approvals, allowed scopes) is enforced during gate evaluation
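
The gate-plus-ledger interaction described above reduces to a small pattern: every evaluation is appended to an audit trail, and actions requiring approval are only allowed once approved. A hedged Python sketch (the real `ActionPolicyGate` evaluates K4 lattice rules in C#; this only shows the approval/audit shape):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionDefinition:
    """Illustrative action metadata: risk level and approval requirement."""
    name: str
    risk: str                # e.g. "low" or "high"
    requires_approval: bool

class ActionPolicyGate:
    """Illustrative gate: gated actions need approval; every decision is audited."""
    def __init__(self):
        self.ledger = []     # append-only audit trail of decisions

    def evaluate(self, action: ActionDefinition, approved: bool) -> bool:
        allowed = not action.requires_approval or approved
        self.ledger.append({"action": action.name, "allowed": allowed})
        return allowed

gate = ActionPolicyGate()
ok = gate.evaluate(ActionDefinition("open_pr", "high", True), approved=False)
```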

# AI Remedy Autopilot with Multi-SCM Pull Request Generation
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
AI-powered remediation service that generates fix plans (dependency bumps, base image upgrades, config changes, backport guidance), then creates PRs automatically across GitHub, GitLab, Azure DevOps, and Gitea via a unified SCM connector plugin architecture. Includes build verification, SBOM delta computation, signed delta verdicts, and fallback to "suggestion-only" when build/tests fail.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/`, `src/AdvisoryAi/StellaOps.AdvisoryAI.Scm.Plugin.Unified/`
- **Key Classes**:
- `AiRemediationPlanner` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/AiRemediationPlanner.cs`) - AI-driven remediation plan generation
- `RemediationDeltaService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/RemediationDeltaService.cs`) - computes SBOM delta for remediation impact
- `PrTemplateBuilder` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/PrTemplateBuilder.cs`) - builds PR descriptions with evidence and delta info
- `GitHubPullRequestGenerator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/GitHubPullRequestGenerator.cs`) - generates PRs on GitHub
- `GitLabMergeRequestGenerator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/GitLabMergeRequestGenerator.cs`) - generates MRs on GitLab
- `AzureDevOpsPullRequestGenerator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/AzureDevOpsPullRequestGenerator.cs`) - generates PRs on Azure DevOps
- `GiteaScmConnector` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/ScmConnector/GiteaScmConnector.cs`) - Gitea SCM integration
- `GitHubScmConnector` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/ScmConnector/GitHubScmConnector.cs`) - GitHub SCM integration
- `GitLabScmConnector` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/ScmConnector/GitLabScmConnector.cs`) - GitLab SCM integration
- `AzureDevOpsScmConnector` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/ScmConnector/AzureDevOpsScmConnector.cs`) - Azure DevOps SCM integration
- `ScmConnectorCatalog` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Remediation/ScmConnector/ScmConnectorCatalog.cs`) - catalog of available SCM connectors
- `ScmPluginAdapter` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Scm.Plugin.Unified/ScmPluginAdapter.cs`) - unified plugin adapter for SCM operations
- `ScmPluginAdapterFactory` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Scm.Plugin.Unified/ScmPluginAdapterFactory.cs`) - factory for creating SCM plugin adapters
- **Interfaces**: `IRemediationPlanner`, `IPullRequestGenerator`, `IScmConnector`, `IPackageVersionResolver`
- **Source**: SPRINT_20251226_016_AI_remedy_autopilot.md
## E2E Test Plan
- [ ] Generate a remediation plan via `AiRemediationPlanner` for a known CVE and verify it includes dependency bump steps
- [ ] Create a PR via `GitHubPullRequestGenerator` and verify `PrTemplateBuilder` populates the description with evidence
- [ ] Verify `RemediationDeltaService` computes SBOM delta showing before/after dependency changes
- [ ] Verify `ScmConnectorCatalog` resolves the correct connector (GitHub, GitLab, AzureDevOps, Gitea) based on repository URL
- [ ] Verify `ScmPluginAdapter` creates branches, commits changes, and opens PRs through the unified plugin interface
- [ ] Verify fallback to "suggestion-only" mode when build verification fails after applying the fix
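
Connector resolution by repository URL (the fourth checklist item) amounts to a host lookup with a default. A minimal Python sketch, with the host-to-connector mapping and the self-hosted-defaults-to-Gitea rule both assumptions for illustration, not the catalog's actual logic:

```python
from urllib.parse import urlparse

class ScmConnectorCatalog:
    """Illustrative catalog: pick an SCM connector from the repository host."""
    _hosts = {
        "github.com": "GitHubScmConnector",
        "gitlab.com": "GitLabScmConnector",
        "dev.azure.com": "AzureDevOpsScmConnector",
    }

    def resolve(self, repo_url: str) -> str:
        host = urlparse(repo_url).netloc
        # Unrecognized (self-hosted) hosts fall back to the Gitea connector in this sketch.
        return self._hosts.get(host, "GiteaScmConnector")

catalog = ScmConnectorCatalog()
name = catalog.resolve("https://github.com/acme/app")
```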

# Chat Gateway with Quotas and Scrubbing
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
The chat gateway is implemented with configurable options (quotas, budgets), prompt scrubbing, and service-layer chat orchestration.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/`, `src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/`
- **Key Classes**:
- `AdvisoryChatService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Services/AdvisoryChatService.cs`) - main chat service with quota enforcement
- `AdvisoryChatQuotaService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Services/AdvisoryChatQuotaService.cs`) - per-user/tenant quota tracking and enforcement
- `AdvisoryChatOptions` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Options/AdvisoryChatOptions.cs`) - configurable chat options (quotas, budgets, limits)
- `GroundingValidator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/GroundingValidator.cs`) - validates AI responses are grounded in evidence
- `ChatResponseStreamer` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/ChatResponseStreamer.cs`) - streams chat responses with progressive delivery
- `ChatPromptAssembler` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/ChatPromptAssembler.cs`) - assembles prompts with scrubbing and context injection
- `ConversationContextBuilder` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/ConversationContextBuilder.cs`) - builds conversation context with relevant data
- `ChatEndpoints` (`src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/Endpoints/ChatEndpoints.cs`) - REST API endpoints for chat gateway
- `RateLimitsService` (`src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/Services/RateLimitsService.cs`) - rate limiting for chat API calls
- `AuthorizationService` (`src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/Services/AuthorizationService.cs`) - authorization checks for chat access
- **Interfaces**: `IAdvisoryChatInferenceClient`, `IAiConsentStore`
- **Source**: Feature matrix scan
## E2E Test Plan
- [ ] Send chat messages and verify `AdvisoryChatQuotaService` enforces per-user quotas (reject after limit exceeded)
- [ ] Configure quota limits via `AdvisoryChatOptions` and verify they are applied at runtime
- [ ] Verify `ChatPromptAssembler` scrubs sensitive data (credentials, tokens) from prompts before sending to LLM
- [ ] Verify `GroundingValidator` flags responses that lack evidence grounding
- [ ] Verify `RateLimitsService` rate-limits excessive chat API calls
- [ ] Verify `ChatResponseStreamer` delivers streaming responses with proper chunking
- [ ] Verify `AuthorizationService` rejects chat requests from unauthorized users
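
Two of the behaviors above, scrubbing credential-like substrings before the LLM sees them and enforcing a per-user quota, can be sketched compactly. A Python illustration only; the secret patterns and quota shape below are assumptions, not what `ChatPromptAssembler` or `AdvisoryChatQuotaService` actually implement.

```python
import re

# Example secret shapes (GitHub PAT, AWS access key id); illustrative, not exhaustive.
TOKEN_PATTERN = re.compile(r"(?:ghp_[A-Za-z0-9]{36}|AKIA[0-9A-Z]{16})")

def scrub(prompt: str) -> str:
    """Mask credential-like substrings before the prompt reaches the LLM."""
    return TOKEN_PATTERN.sub("[REDACTED]", prompt)

class QuotaService:
    """Illustrative per-user message quota: reject once the limit is exceeded."""
    def __init__(self, limit: int):
        self.limit = limit
        self.used = {}

    def try_consume(self, user: str) -> bool:
        count = self.used.get(user, 0)
        if count >= self.limit:
            return False
        self.used[user] = count + 1
        return True

quota = QuotaService(limit=2)
clean = scrub("my key is AKIAABCDEFGHIJKLMNOP")
```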

# Deterministic AI Artifact Replay
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Deterministic replay infrastructure for AI artifacts including replay manifests, prompt template versioning, and input artifact hashing for reproducible AI outputs.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Replay/`, `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/`
- **Key Classes**:
- `AIArtifactReplayer` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Replay/AIArtifactReplayer.cs`) - replays AI artifacts with deterministic inputs for verification
- `ReplayInputArtifact` (`src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/ReplayInputArtifact.cs`) - input artifact model with content-addressed hashing
- `ReplayPromptTemplate` (`src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/ReplayPromptTemplate.cs`) - versioned prompt templates for replay
- `ReplayResult` (`src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/ReplayResult.cs`) - replay execution result with comparison data
- `ReplayVerificationResult` (`src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/ReplayVerificationResult.cs`) - verification of replay output against original
- `ReplayStatus` (`src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/Replay/ReplayStatus.cs`) - replay status tracking
- `DeterministicHashVectorEncoder` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Vectorization/DeterministicHashVectorEncoder.cs`) - deterministic hash-based vector encoding for reproducibility
- **Interfaces**: None (uses concrete replay pipeline)
- **Source**: Feature matrix scan
## E2E Test Plan
- [ ] Record an AI inference run and verify `AIArtifactReplayer` can replay it with identical inputs
- [ ] Verify `ReplayInputArtifact` computes content-addressed hashes that match across replay invocations
- [ ] Verify `ReplayPromptTemplate` versioning: replay with a v1 template produces the same output as the original v1 run
- [ ] Verify `ReplayVerificationResult` detects differences when the replay output diverges from the original
- [ ] Verify `DeterministicHashVectorEncoder` produces identical vectors for identical inputs across runs
- [ ] Verify replay with temperature=0 and fixed seed produces bit-identical outputs for supported providers
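
Content-addressed hashing is what makes the replay comparison meaningful: the same logical input must hash identically across invocations, which requires canonicalizing the serialization first. A small Python sketch of the idea (the real `ReplayInputArtifact` is C#; the canonical-JSON choice here is an assumption):

```python
import hashlib
import json

def artifact_hash(artifact: dict) -> str:
    """Content-addressed hash over canonical JSON (sorted keys, fixed separators),
    so key order and whitespace cannot change the digest."""
    canonical = json.dumps(artifact, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

original = {"prompt_template": "v1", "inputs": ["sbom.json"], "seed": 42}
# Same content, different key order: must produce the same digest on replay.
replayed = {"seed": 42, "inputs": ["sbom.json"], "prompt_template": "v1"}
```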

# Evidence-First AI Outputs (Citations, Evidence Packs)
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Evidence bundle assembly is implemented: bundles are schema-validated JSON, data providers supply citation sources, and evidence packs are integrated into chat responses.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/`
- **Key Classes**:
- `EvidenceBundleAssembler` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/EvidenceBundleAssembler.cs`) - assembles evidence bundles from multiple data providers
- `EvidencePackChatIntegration` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/EvidencePackChatIntegration.cs`) - integrates evidence packs into chat responses
- `AttestationIntegration` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/AttestationIntegration.cs`) - links evidence packs to attestation framework
- `SbomDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/SbomDataProvider.cs`) - provides SBOM data for evidence bundles
- `VexDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/VexDataProvider.cs`) - provides VEX data for evidence bundles
- `ReachabilityDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/ReachabilityDataProvider.cs`) - provides reachability scoring data
- `PolicyDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/PolicyDataProvider.cs`) - provides policy evaluation data
- `ProvenanceDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/ProvenanceDataProvider.cs`) - provides provenance/SLSA data
- `FixDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/FixDataProvider.cs`) - provides fix availability data
- `BinaryPatchDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/BinaryPatchDataProvider.cs`) - provides binary patch analysis data
- `ContextDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/ContextDataProvider.cs`) - provides contextual data
- `OpsMemoryDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/OpsMemoryDataProvider.cs`) - provides OpsMemory historical decision data
- `EvidencePackEndpoints` (`src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/Endpoints/EvidencePackEndpoints.cs`) - REST endpoints for evidence pack access
- **Interfaces**: `IEvidenceBundleAssembler`
- **Source**: Feature matrix scan
## E2E Test Plan
- [ ] Assemble an evidence bundle via `EvidenceBundleAssembler` and verify all data providers contribute relevant sections
- [ ] Verify `SbomDataProvider` includes component version and license data in the evidence bundle
- [ ] Verify `VexDataProvider` includes VEX status (affected/not_affected/fixed) for referenced CVEs
- [ ] Verify `ReachabilityDataProvider` includes reachability scores and call-path evidence
- [ ] Verify `EvidencePackChatIntegration` attaches evidence pack references to chat responses
- [ ] Verify `AttestationIntegration` signs evidence packs with attestation metadata
- [ ] Access evidence packs via `EvidencePackEndpoints` and verify schema-validated JSON output
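
The assembler-over-providers pattern above is essentially a fan-out: each registered provider contributes a named section to the bundle. A minimal Python sketch, assuming providers are simple callables (the real `IEvidenceBundleAssembler` contract in C# is richer):

```python
class EvidenceBundleAssembler:
    """Illustrative assembler: each provider contributes one named bundle section."""
    def __init__(self, providers):
        self.providers = providers   # mapping: section name -> provider callable

    def assemble(self, query: str) -> dict:
        return {name: provider(query) for name, provider in self.providers.items()}

assembler = EvidenceBundleAssembler({
    "sbom": lambda q: {"components": ["openssl@3.0.7"]},
    "vex": lambda q: {"status": "affected"},
})
bundle = assembler.assemble("CVE-2022-3602")
```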

# Evidence-First Citations in Chat Responses
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Evidence bundle assembly is implemented, with citations embedded in chat responses and evidence drilldown in the UI.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/`, `src/AdvisoryAi/StellaOps.AdvisoryAI/Explanation/`
- **Key Classes**:
- `EvidenceAnchoredExplanationGenerator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Explanation/EvidenceAnchoredExplanationGenerator.cs`) - generates explanations anchored to evidence citations
- `EvidencePackChatIntegration` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/EvidencePackChatIntegration.cs`) - embeds evidence citations into chat responses
- `GroundingValidator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/GroundingValidator.cs`) - validates that AI claims are grounded in cited evidence
- `ExplanationPromptTemplates` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Explanation/ExplanationPromptTemplates.cs`) - prompt templates for citation-rich explanations
- `DefaultExplanationPromptService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Explanation/DefaultExplanationPromptService.cs`) - assembles explanation prompts with citation instructions
- `InMemoryExplanationStore` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Explanation/InMemoryExplanationStore.cs`) - stores explanation requests and results
- `ActionProposalParser` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/ActionProposalParser.cs`) - parses action proposals from LLM responses with citation references
- **Interfaces**: `IExplanationGenerator`, `IExplanationRequestStore`, `IEvidenceRetrievalService`
- **Source**: Feature matrix scan
## E2E Test Plan
- [ ] Generate an explanation via `EvidenceAnchoredExplanationGenerator` and verify it contains citation references to evidence items
- [ ] Verify `GroundingValidator` rejects explanations that make claims without corresponding evidence citations
- [ ] Verify `EvidencePackChatIntegration` embeds clickable citation references in chat response markdown
- [ ] Verify `ExplanationPromptTemplates` instruct the LLM to cite evidence sources in its output
- [ ] Verify `InMemoryExplanationStore` persists explanation requests and results for later retrieval
- [ ] Verify `ActionProposalParser` extracts cited evidence IDs from LLM-generated action proposals
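
The grounding check described above reduces to: every citation in the response must refer to a known evidence item, and an ungrounded response (no citations at all) is rejected. A hedged Python sketch; the `[evidence:...]` citation syntax is an assumption for illustration, not the format `GroundingValidator` actually parses.

```python
import re

CITATION = re.compile(r"\[evidence:([a-z0-9-]+)\]")

def validate_grounding(response: str, evidence_ids: set) -> bool:
    """Require at least one citation, and every cited ID must exist in the evidence set."""
    cited = set(CITATION.findall(response))
    return bool(cited) and cited <= evidence_ids

grounded = validate_grounding("Not exploitable [evidence:vex-1].", {"vex-1", "sbom-2"})
ungrounded = validate_grounding("Trust me, it is fine.", {"vex-1"})
```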

# Immutable Audit Log for AI Interactions
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
A DSSE-signed audit envelope builder for chat interactions is implemented, capturing prompts, tool calls, and model fingerprints.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Audit/`, `src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/`
- **Key Classes**:
- `AdvisoryChatAuditEnvelopeBuilder` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Audit/AdvisoryChatAuditEnvelopeBuilder.cs`) - builds DSSE-signed audit envelopes for chat interactions
- `ChatAuditRecords` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Audit/ChatAuditRecords.cs`) - audit record models (prompts, responses, tool calls, model fingerprints)
- `PostgresAdvisoryChatAuditLogger` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Services/PostgresAdvisoryChatAuditLogger.cs`) - persists audit records to PostgreSQL
- `NullAdvisoryChatAuditLogger` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Services/NullAdvisoryChatAuditLogger.cs`) - no-op audit logger for testing
- `AttestationEndpoints` (`src/AdvisoryAi/StellaOps.AdvisoryAI.WebService/Endpoints/AttestationEndpoints.cs`) - REST endpoints for attestation/audit retrieval
- `NullEvidencePackSigner` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Evidence/NullEvidencePackSigner.cs`) - no-op evidence pack signer for development
- `AdvisoryPipelineMetrics` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Metrics/AdvisoryPipelineMetrics.cs`) - metrics collection for audit visibility
- **Interfaces**: None (uses concrete audit pipeline)
- **Source**: Feature matrix scan
## E2E Test Plan
- [ ] Send a chat message and verify `AdvisoryChatAuditEnvelopeBuilder` creates a DSSE-signed envelope containing the prompt, response, and model fingerprint
- [ ] Verify `ChatAuditRecords` captures tool call invocations with parameters and results
- [ ] Verify `PostgresAdvisoryChatAuditLogger` persists audit records and they are retrievable via `AttestationEndpoints`
- [ ] Verify audit envelopes are immutable: attempting to modify a persisted record fails
- [ ] Verify audit records include model identifier, temperature setting, and token counts
- [ ] Verify audit log entries are queryable by user, session, and time range
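
The envelope shape is the key idea: the audit record travels as an opaque base64 payload plus a signature bound to both the payload and its type, so any tampering invalidates the signature. A simplified Python sketch only; real DSSE uses PAE encoding and asymmetric keys, and HMAC stands in here to keep the example self-contained.

```python
import base64
import hashlib
import hmac
import json

def build_envelope(payload: dict, key: bytes) -> dict:
    """Illustrative DSSE-shaped envelope: base64 payload plus a signature
    over payloadType + payload (simplified vs. the real DSSE PAE scheme)."""
    body = base64.b64encode(json.dumps(payload, sort_keys=True).encode()).decode()
    payload_type = "application/vnd.example.chat-audit+json"   # hypothetical type
    sig = hmac.new(key, f"{payload_type}.{body}".encode(), hashlib.sha256).hexdigest()
    return {"payloadType": payload_type, "payload": body, "signatures": [{"sig": sig}]}

envelope = build_envelope(
    {"prompt": "explain CVE", "model": "gpt-4o", "temperature": 0}, b"demo-key"
)
```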

# LLM Inference Response Caching
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
In-memory LLM inference cache that deduplicates identical prompt+model combinations. Reduces API costs and latency by caching deterministic responses keyed by content hash.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/`
- **Key Classes**:
- `LlmInferenceCache` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/LlmInferenceCache.cs`) - in-memory cache keyed by content hash of prompt+model+parameters
- `LlmProviderFactory` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/LlmProviderFactory.cs`) - factory that wraps providers with caching layer
- `LlmProviderOptions` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/LlmProviderOptions.cs`) - provider options including cache TTL and size limits
- `ProviderBasedAdvisoryInferenceClient` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/ProviderBasedAdvisoryInferenceClient.cs`) - inference client that uses the caching layer
- **Interfaces**: `ILlmProvider`
- **Source**: SPRINT_20251226_019_AI_offline_inference.md
## E2E Test Plan
- [ ] Send identical prompts twice and verify `LlmInferenceCache` returns the cached response on the second call without hitting the LLM
- [ ] Verify cache keys include model ID and parameters: same prompt with different temperature results in cache miss
- [ ] Verify cache TTL: cached responses expire after configured duration
- [ ] Verify cache size limits: when max entries are reached, oldest entries are evicted
- [ ] Verify cache bypass: non-deterministic requests (temperature > 0) are not cached
- [ ] Verify `ProviderBasedAdvisoryInferenceClient` correctly integrates caching with the provider pipeline
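
The caching rule above, key on a content hash of prompt + model + parameters and bypass anything non-deterministic, can be sketched directly. A Python illustration of the keying scheme only; the C# `LlmInferenceCache` also handles TTL and size-based eviction, which this sketch omits.

```python
import hashlib
import json

class LlmInferenceCache:
    """Illustrative cache keyed by hash(prompt, model, params); temperature > 0 bypasses."""
    def __init__(self):
        self._entries = {}
        self.hits = 0

    def _key(self, prompt, model, params):
        raw = json.dumps({"prompt": prompt, "model": model, **params}, sort_keys=True)
        return hashlib.sha256(raw.encode()).hexdigest()

    def get_or_call(self, prompt, model, params, call):
        if params.get("temperature", 0) > 0:   # non-deterministic request: do not cache
            return call(prompt)
        key = self._key(prompt, model, params)
        if key in self._entries:
            self.hits += 1
            return self._entries[key]
        result = call(prompt)
        self._entries[key] = result
        return result

cache = LlmInferenceCache()
calls = []
def fake_llm(prompt):
    calls.append(prompt)
    return "answer"

cache.get_or_call("q", "m1", {"temperature": 0}, fake_llm)
cache.get_or_call("q", "m1", {"temperature": 0}, fake_llm)   # served from cache
```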

# LLM Provider Plugin Architecture (Multi-Provider Inference)
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Pluggable LLM provider architecture with ILlmProvider interface supporting OpenAI, Claude, Gemini, llama.cpp (LlamaServer), and Ollama backends. Includes LlmProviderFactory for runtime selection and configuration validation. Enables sovereign/offline inference by switching to local providers.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/`, `src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Inference/`, `src/AdvisoryAi/StellaOps.AdvisoryAI.Plugin.Unified/`
- **Key Classes**:
- `LlmProviderFactory` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/LlmProviderFactory.cs`) - factory for runtime LLM provider selection
- `OpenAiLlmProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/OpenAiLlmProvider.cs`) - OpenAI API provider
- `ClaudeLlmProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/ClaudeLlmProvider.cs`) - Anthropic Claude API provider
- `GeminiLlmProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/GeminiLlmProvider.cs`) - Google Gemini API provider
- `LlamaServerLlmProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/LlamaServerLlmProvider.cs`) - local llama.cpp server provider
- `OllamaLlmProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/OllamaLlmProvider.cs`) - Ollama local inference provider
- `LlmProviderOptions` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmProviders/LlmProviderOptions.cs`) - provider configuration and validation
- `ClaudeInferenceClient` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Inference/ClaudeInferenceClient.cs`) - Claude-specific chat inference client
- `OpenAIInferenceClient` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Inference/OpenAIInferenceClient.cs`) - OpenAI-specific chat inference client
- `OllamaInferenceClient` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Inference/OllamaInferenceClient.cs`) - Ollama-specific chat inference client
- `LocalInferenceClient` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Inference/LocalInferenceClient.cs`) - local model inference client
- `LlmPluginAdapter` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Plugin.Unified/LlmPluginAdapter.cs`) - unified plugin adapter for LLM providers
- `LlmPluginAdapterFactory` (`src/AdvisoryAi/StellaOps.AdvisoryAI.Plugin.Unified/LlmPluginAdapterFactory.cs`) - factory for creating LLM plugin adapters
- `SystemPromptLoader` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Inference/SystemPromptLoader.cs`) - loads system prompts for inference clients
- **Interfaces**: `ILlmProvider`, `ILlmProviderPlugin`, `IAdvisoryChatInferenceClient`
- **Source**: SPRINT_20251226_019_AI_offline_inference.md
## E2E Test Plan
- [ ] Configure `LlmProviderFactory` with multiple providers and verify runtime selection based on configuration
- [ ] Verify `OpenAiLlmProvider` sends requests to OpenAI API with correct authentication and model parameters
- [ ] Verify `ClaudeLlmProvider` sends requests to Claude API with correct authentication
- [ ] Verify `OllamaLlmProvider` connects to local Ollama instance and performs inference
- [ ] Verify `LlamaServerLlmProvider` connects to local llama.cpp server endpoint
- [ ] Verify `LlmProviderOptions` validation rejects invalid configurations (missing API keys, invalid endpoints)
- [ ] Verify `LlmPluginAdapter` provides health checks for configured LLM providers
- [ ] Verify provider failover: when primary provider is unavailable, factory falls back to secondary
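
The failover behavior in the last checklist item can be sketched as an ordered walk over providers with health checks. A minimal Python illustration, assuming providers are (name, health check, inference) triples; the real `LlmProviderFactory` resolves C# `ILlmProvider` instances from configuration.

```python
class ProviderFactory:
    """Illustrative failover: try providers in priority order, skipping unhealthy ones."""
    def __init__(self, providers):
        self.providers = providers   # list of (name, healthy_fn, infer_fn)

    def infer(self, prompt: str) -> tuple:
        for name, healthy, infer in self.providers:
            if healthy():
                return name, infer(prompt)
        raise RuntimeError("no healthy LLM provider available")

factory = ProviderFactory([
    ("openai", lambda: False, lambda p: "remote answer"),   # primary is down
    ("ollama", lambda: True, lambda p: "local answer"),     # local fallback
])
name, answer = factory.infer("hello")
```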

# Natural Language to Policy Rule Compiler (Policy Studio Copilot)
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
AI-powered natural language to lattice rule translation engine including PolicyIntentType parsing, LatticeRuleGenerator, property-based test synthesizer for generated rules, and PolicyBundleCompiler. Transforms plain-English policy descriptions into formal stella-dsl@1 rules with live preview and conflict visualization.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/PolicyStudio/`
- **Key Classes**:
- `AiPolicyIntentParser` (`src/AdvisoryAi/StellaOps.AdvisoryAI/PolicyStudio/AiPolicyIntentParser.cs`) - parses natural language into structured policy intents using LLM
- `LatticeRuleGenerator` (`src/AdvisoryAi/StellaOps.AdvisoryAI/PolicyStudio/LatticeRuleGenerator.cs`) - generates K4 lattice rules from parsed policy intents
- `PropertyBasedTestSynthesizer` (`src/AdvisoryAi/StellaOps.AdvisoryAI/PolicyStudio/PropertyBasedTestSynthesizer.cs`) - synthesizes property-based test cases for generated rules
- `PolicyBundleCompiler` (`src/AdvisoryAi/StellaOps.AdvisoryAI/PolicyStudio/PolicyBundleCompiler.cs`) - compiles generated rules into a deployable policy bundle
- `PolicyIntent` (`src/AdvisoryAi/StellaOps.AdvisoryAI/PolicyStudio/PolicyIntent.cs`) - policy intent model with type, constraints, and conditions
- `InMemoryPolicyIntentStore` (`src/AdvisoryAi/StellaOps.AdvisoryAI/PolicyStudio/InMemoryPolicyIntentStore.cs`) - stores policy intents for iterative refinement
- `NullPolicyIntentParser` (`src/AdvisoryAi/StellaOps.AdvisoryAI/PolicyStudio/NullPolicyIntentParser.cs`) - no-op parser for testing
- **Interfaces**: `IPolicyIntentParser`, `IPolicyRuleGenerator`, `ITestCaseSynthesizer`
- **Source**: SPRINT_20251226_017_AI_policy_copilot.md
## E2E Test Plan
- [ ] Submit a natural language policy description (e.g., "block critical CVEs without a fix") and verify `AiPolicyIntentParser` produces a structured `PolicyIntent`
- [ ] Verify `LatticeRuleGenerator` translates the intent into valid stella-dsl@1 lattice rules
- [ ] Verify `PropertyBasedTestSynthesizer` generates test cases that exercise the generated rule's accept/reject boundaries
- [ ] Verify `PolicyBundleCompiler` compiles rules into a deployable bundle with correct schema version
- [ ] Verify `InMemoryPolicyIntentStore` supports iterative refinement: modify an intent and regenerate rules
- [ ] Verify conflict detection: generate two conflicting rules and verify the compiler reports the conflict
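The conflict-detection check in the last step above can be illustrated with a toy compiler pass. This is hypothetical logic: the real `PolicyBundleCompiler`'s conflict model is richer, but the core idea — two rules matching the same condition with disagreeing actions — is the same.

```python
def find_conflicts(rules):
    """Flag rule pairs whose conditions match but whose actions disagree.

    Each rule is a (condition, action) tuple; a shared condition with
    differing actions (e.g. block vs allow) is reported as a conflict.
    """
    conflicts = []
    by_condition = {}
    for condition, action in rules:
        prev = by_condition.setdefault(condition, action)
        if prev != action:
            conflicts.append((condition, prev, action))
    return conflicts

rules = [
    ("severity == critical", "block"),
    ("severity == critical", "allow"),  # conflicts with the rule above
    ("severity == low", "allow"),
]
print(find_conflicts(rules))  # one conflict on the critical-severity condition
```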

# OpsMemory-Chat Integration (Decision Memory in AI Conversations)
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Connects OpsMemory institutional decision memory to AdvisoryAI Chat, enabling the AI to surface relevant past decisions during conversations and automatically record new decisions with outcomes for feedback loop learning.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/`
- **Key Classes**:
- `OpsMemoryIntegration` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/OpsMemoryIntegration.cs`) - integrates OpsMemory decision retrieval into chat pipeline
- `OpsMemoryLinkResolver` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/OpsMemoryLinkResolver.cs`) - resolves OpsMemory links referenced in chat context
- `OpsMemoryDataProvider` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Assembly/Providers/OpsMemoryDataProvider.cs`) - data provider that fetches relevant OpsMemory entries for evidence bundles
- `ConversationContextBuilder` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/ConversationContextBuilder.cs`) - includes OpsMemory context in conversation history
- **Interfaces**: None (uses concrete integration classes)
- **Source**: SPRINT_20260109_011_002_BE_opsmemory_chat_integration.md
## E2E Test Plan
- [ ] Ask about a CVE that has a prior decision in OpsMemory and verify `OpsMemoryIntegration` surfaces the past decision in the response
- [ ] Verify `OpsMemoryDataProvider` includes relevant past decisions in the evidence bundle for chat responses
- [ ] Verify `OpsMemoryLinkResolver` resolves OpsMemory entry links to their full decision details
- [ ] Verify `ConversationContextBuilder` enriches prompts with relevant OpsMemory context
- [ ] Verify new decisions made during chat are recorded back into OpsMemory for future retrieval
- [ ] Verify OpsMemory integration does not include stale decisions (respects TTL/validity windows)
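The retrieval-plus-staleness behavior exercised by the checks above can be sketched as follows. The record shape, field names, and 90-day default are illustrative assumptions; the actual provider consults the OpsMemory service and its configured validity windows.

```python
from datetime import datetime, timedelta, timezone

def relevant_decisions(store, cve_id, now=None, max_age_days=90):
    """Return past decisions for a CVE, dropping entries past their validity window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [d for d in store
            if d["cve_id"] == cve_id and d["decided_at"] >= cutoff]

now = datetime(2026, 2, 1, tzinfo=timezone.utc)
store = [
    {"cve_id": "CVE-2025-0001", "decision": "accepted-risk",
     "decided_at": datetime(2026, 1, 20, tzinfo=timezone.utc)},
    {"cve_id": "CVE-2025-0001", "decision": "blocked",
     "decided_at": datetime(2025, 6, 1, tzinfo=timezone.utc)},  # stale
]
hits = relevant_decisions(store, "CVE-2025-0001", now=now)
print([d["decision"] for d in hits])
```

Filtering at retrieval time (rather than at write time) is what keeps the feedback loop honest: a decision that was valid when recorded simply stops surfacing once its window lapses.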

# Sanctioned Tool Registry (Policy-Gated Tool Execution)
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Tool policy system built around a sanctioned tool registry that controls which AI tools can be invoked: tools default to read-only access, and action tools are gated behind explicit user confirmation.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Settings/`, `src/AdvisoryAi/StellaOps.AdvisoryAI/Tools/`
- **Key Classes**:
- `AdvisoryChatToolPolicy` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Settings/AdvisoryChatToolPolicy.cs`) - defines which tools are sanctioned, read-only, or require confirmation
- `DeterministicToolset` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Tools/DeterministicToolset.cs`) - deterministic tool implementations (version analysis, dependency analysis)
- `AdvisoryChatSettingsService` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Settings/AdvisoryChatSettingsService.cs`) - manages chat settings including tool policies
- `AdvisoryChatSettingsStore` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Settings/AdvisoryChatSettingsStore.cs`) - persists chat settings and tool policies
- `AdvisoryChatSettingsModels` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Settings/AdvisoryChatSettingsModels.cs`) - settings models for tool access levels
- `DependencyAnalysisResult` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Tools/DependencyAnalysisResult.cs`) - result model for dependency analysis tool
- `SemanticVersion` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Tools/SemanticVersion.cs`) - semantic version parsing for version analysis tool
- `SemanticVersionRange` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Tools/SemanticVersionRange.cs`) - version range matching for dependency tools
- **Interfaces**: `IDeterministicToolset`
- **Source**: Feature matrix scan
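The gating behavior described above — sanctioned list, read-only defaults, confirmation-gated actions — can be sketched like this. All names here are hypothetical; the actual policy model lives in `AdvisoryChatToolPolicy` and the settings store.

```python
from enum import Enum

class Access(Enum):
    READ_ONLY = "read-only"       # executes without prompting
    CONFIRM = "requires-confirm"  # executes only after user confirmation

def invoke(policy, tool, *, confirmed=False):
    """Gate a tool call against the sanctioned-tool policy."""
    if tool not in policy:
        return "denied: tool not sanctioned"
    if policy[tool] is Access.CONFIRM and not confirmed:
        return "pending: confirmation required"
    return f"executed: {tool}"

policy = {
    "version_analysis": Access.READ_ONLY,
    "create_ticket": Access.CONFIRM,
}
print(invoke(policy, "version_analysis"))            # read-only, runs immediately
print(invoke(policy, "create_ticket"))               # blocked until confirmed
print(invoke(policy, "create_ticket", confirmed=True))
print(invoke(policy, "delete_everything"))           # not in the registry
```

Note the default-deny stance: a tool absent from the registry is rejected outright, which is the property the second E2E check below exercises.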
## E2E Test Plan
- [ ] Configure `AdvisoryChatToolPolicy` with sanctioned tools and verify only those tools can be invoked during chat
- [ ] Attempt to invoke a non-sanctioned tool and verify it is rejected with an access denied response
- [ ] Verify read-only tools execute without confirmation prompts
- [ ] Verify action tools (write operations) require user confirmation before execution
- [ ] Verify `DeterministicToolset` provides consistent results for version analysis and dependency analysis
- [ ] Verify `AdvisoryChatSettingsService` persists tool policy changes via `AdvisoryChatSettingsStore`
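The deterministic version tooling (`SemanticVersion` / `SemanticVersionRange`) reduces to ordered tuple comparison. A minimal sketch, assuming plain `MAJOR.MINOR.PATCH` input and a half-open range convention (the real classes handle pre-release tags and richer range syntax):

```python
def parse_semver(s):
    """Parse 'MAJOR.MINOR.PATCH' into a comparable tuple (pre-release tags ignored)."""
    return tuple(int(part) for part in s.split("-")[0].split("."))

def in_range(version, lower, upper):
    """Half-open range check: lower <= version < upper."""
    return parse_semver(lower) <= parse_semver(version) < parse_semver(upper)

print(in_range("2.4.1", "2.0.0", "3.0.0"))  # True
print(in_range("3.0.0", "2.0.0", "3.0.0"))  # False (upper bound excluded)
```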

# Sovereign/Offline AI Inference with Signed Model Bundles
## Module
AdvisoryAI
## Status
IMPLEMENTED
## Description
Local LLM inference for air-gapped environments via a pluggable provider architecture supporting llama.cpp server, Ollama, OpenAI, Claude, and Gemini. DSSE-signed model bundle management with regional crypto support (eIDAS/FIPS/GOST/SM), digest verification at load time, deterministic output config (temperature=0, fixed seed), inference caching, benchmarking harness, and offline replay verification.
## Implementation Details
- **Modules**: `src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/`
- **Key Classes**:
- `SignedModelBundleManager` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/SignedModelBundleManager.cs`) - manages DSSE-signed model bundles with digest verification at load time
- `ModelBundle` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/ModelBundle.cs`) - model bundle metadata including hash, signature, and regional crypto info
- `LlamaCppRuntime` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlamaCppRuntime.cs`) - llama.cpp local inference runtime
- `OnnxRuntime` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/OnnxRuntime.cs`) - ONNX runtime for local model inference
- `AdvisoryInferenceClient` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/AdvisoryInferenceClient.cs`) - main inference client with provider routing
- `ProviderBasedAdvisoryInferenceClient` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/ProviderBasedAdvisoryInferenceClient.cs`) - provider-based inference with caching
- `LlmBenchmark` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LlmBenchmark.cs`) - benchmarking harness for inference performance
- `LocalInferenceOptions` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LocalInferenceOptions.cs`) - configuration for local inference (temperature, seed, context size)
- `LocalLlmConfig` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Inference/LocalLlmConfig.cs`) - local LLM configuration (model path, quantization, GPU layers)
- `LocalChatInferenceClient` (`src/AdvisoryAi/StellaOps.AdvisoryAI/Chat/Services/LocalChatInferenceClient.cs`) - chat-specific local inference client
- **Interfaces**: `ILocalLlmRuntime`
- **Source**: SPRINT_20251226_019_AI_offline_inference.md
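The load-time digest check at the heart of `SignedModelBundleManager` can be sketched as follows. DSSE envelope and signature verification are elided; only the content-digest comparison against a signed manifest value is shown, with a constant-time compare to avoid timing side channels.

```python
import hashlib
import hmac

def verify_digest(model_bytes, expected_sha256_hex):
    """Recompute the model file's SHA-256 and compare against the manifest value."""
    actual = hashlib.sha256(model_bytes).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)

model = b"model-weights"
good = hashlib.sha256(model).hexdigest()
print(verify_digest(model, good))          # True: bundle accepted
print(verify_digest(model + b"x", good))   # False: tampered bytes rejected
```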
## E2E Test Plan
- [ ] Load a signed model bundle via `SignedModelBundleManager` and verify DSSE signature and digest are validated
- [ ] Verify `SignedModelBundleManager` rejects a model bundle with a tampered digest
- [ ] Run inference through `LlamaCppRuntime` with temperature=0 and fixed seed and verify deterministic output
- [ ] Run `LlmBenchmark` and verify it measures tokens/second and latency metrics
- [ ] Verify `OnnxRuntime` loads and runs inference with an ONNX model
- [ ] Configure `LocalInferenceOptions` with air-gap settings and verify no external network calls are made
- [ ] Verify `ProviderBasedAdvisoryInferenceClient` caches deterministic responses and returns cached results on repeat queries
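The caching behavior in the last check is only sound because inference is deterministic (temperature=0, fixed seed), so a cache key derived from the prompt plus the generation config uniquely identifies the output. A sketch with hypothetical names (the real client wraps the provider runtimes listed above):

```python
import hashlib
import json

class CachedInferenceClient:
    """Cache deterministic completions keyed by prompt + generation config."""

    def __init__(self, runtime):
        self.runtime = runtime   # callable: prompt -> completion
        self.cache = {}
        self.misses = 0

    def complete(self, prompt, temperature=0.0, seed=42):
        # Sorted-key JSON makes the cache key stable across runs.
        key = hashlib.sha256(
            json.dumps({"p": prompt, "t": temperature, "s": seed},
                       sort_keys=True).encode()).hexdigest()
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.runtime(prompt)
        return self.cache[key]

client = CachedInferenceClient(lambda p: f"answer:{p}")
first = client.complete("summarize CVE-2025-0001")
second = client.complete("summarize CVE-2025-0001")  # served from cache
print(first == second, client.misses)
```

Including temperature and seed in the key means that changing either deliberately busts the cache, while repeat queries under the deterministic config hit it.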