Search Zero-Learning Primary Entry
Why this exists
- Search must be the default entry into Stella Ops for users who do not know Stella terminology, domain boundaries, or internal product structure.
- The current model asks the user to understand too many concepts before they can succeed:
`Find/Explain/Act`, explicit scope switching, recovery actions, and a separate AdvisoryAI destination.
- Live verification also shows a real reliability gap: suggestion chips can render even when the active corpus cannot answer them. On 2026-03-07, the running AdvisoryAI service at http://127.1.0.44 rebuilt with `documentCount=0`, `chunkCount=0`, and `doctorProjectionCount=0`, while unified overlays rebuilt with only `chunkCount=14`. Querying `database connectivity` then returned zero cards. The current UX still offered that suggestion.
What still fails after the first pass
- The product still exposes two entry mental models: global search and a route-jumping AdvisoryAI experience. Search-originated chat actions currently navigate into /security/triage, which breaks page continuity and makes AdvisoryAI feel like a separate destination.
- The assistant still exposes `Find / Explain / Act` framing in chat. That pushes the user to learn the system's internal reasoning model instead of simply asking for help.
- Empty-state search still carries too much product education: domain guides, quick actions, context exposition, and explanatory labels. That is useful for experts but too loud for a self-serve default.
- Current-scope behavior is correct as a ranking concept, but the UI still talks about the mechanism instead of simply showing the best in-scope answer first and only then out-of-scope overflow.
- Suggestion reliability is fixed only when the active corpus is actually ingested. A healthy process with an empty corpus is still a bad operator experience unless the UI suppresses or fails those paths explicitly.
Product rules
- Search is the primary entry point.
- AdvisoryAI is a secondary deep-dive opened from search, not a competing starting point or route jump.
- Search should infer relevance and intent; the user should not need to choose a mode.
- Search should bias to the current page context automatically; the user should not need to toggle scope.
- Search should never advertise a suggestion that cannot execute against the active corpus.
- Search history should contain only successful searches.
- Telemetry remains optional. Search behavior must not depend on analytics emission.
- Search UI must avoid teaching Stella terminology or search mechanics before the operator has even started.
Target interaction model
Entry
- Keep one primary search field in the top bar.
- Add a compact AdvisoryAI launcher icon beside the search input.
- Opening chat from that icon inherits the current page, current query, and top evidence when available.
- Opening chat must stay on the current page through a global drawer or equivalent shell-level host. Search should not navigate the operator into a different feature route just to ask follow-up questions.
Query understanding
- Infer likely user intent from the query and visible page context.
- Weight current page/domain first.
- If current-scope evidence is weak, show overflow results from outside the page scope below the in-scope section instead of asking the user to broaden scope manually.
Result presentation
- Place `Did you mean` directly below the input because it is an input correction, not a result refinement.
- Remove explicit `Find / Explain / Act` controls.
- Remove the explicit scope toggle chip.
- Remove manual result-domain refinements from the primary flow; the server should rank the current page first and render any cross-scope overflow as a secondary section instead.
- Remove the recovery panel.
- If top results are close in score, compose one short summary across them.
- If one result is clearly dominant, present that answer first and then cards.
- Prefer concise operator-facing labels over implementation-facing labels. The UI should show "best match here" and "also relevant elsewhere", not expose ranking jargon.
Suggestions
- Page suggestions become executable search intents, not static chips.
- A suggestion is only shown if:
- its preferred domain has live evidence, or
- the backend confirms non-zero candidate coverage for it.
- If the current corpus is empty for that page/domain, suppress those suggestion chips instead of showing dead leads.
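The viability rule above can be sketched as a small render-time filter. This is a minimal illustration, not the actual Stella Ops API: the `Suggestion`, `CorpusStats`, and `viableSuggestions` names and the per-domain `documentCount` shape are assumptions for the sketch.

```typescript
// Hypothetical sketch: gate suggestion chips on corpus viability.
// Types and names are illustrative, not the real Stella Ops API.

interface Suggestion {
  label: string;
  domain: string; // preferred domain the suggestion would query
}

interface CorpusStats {
  // assumed per-domain live evidence counts reported by the backend
  documentCount: Record<string, number>;
}

// Render a suggestion only when its preferred domain has live evidence;
// otherwise suppress it instead of showing a dead lead.
function viableSuggestions(chips: Suggestion[], stats: CorpusStats): Suggestion[] {
  return chips.filter((chip) => (stats.documentCount[chip.domain] ?? 0) > 0);
}
```

The same predicate can back a server-side viability endpoint so the chip list and the backend agree on what is executable.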
History
- Persist and render only searches that returned results.
- Keep history clear as a low-emphasis icon action, not a labeled call-to-action.
- Existing history entries with zero results should be ignored on load so prior failed experiments do not pollute the empty state.
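Both history rules reduce to the same predicate applied at write time and at load time. A minimal sketch, with assumed types (`HistoryEntry`, `resultCount`) that are not the real persistence schema:

```typescript
// Hypothetical sketch: history keeps only successful searches.

interface HistoryEntry {
  query: string;
  resultCount: number; // assumed count of results the search returned
}

// On search completion: record only searches that returned results.
function recordSearch(history: HistoryEntry[], entry: HistoryEntry): HistoryEntry[] {
  return entry.resultCount > 0 ? [...history, entry] : history;
}

// On load: ignore previously persisted zero-result entries so prior
// failed experiments do not pollute the empty state.
function loadHistory(persisted: HistoryEntry[]): HistoryEntry[] {
  return persisted.filter((e) => e.resultCount > 0);
}
```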
Runtime rules
Implicit scope weighting
- Current route/domain is a ranking boost, not a hard filter.
- Visible entity keys and last action increase local relevance.
- Out-of-scope results are still allowed, but only after in-scope candidates are exhausted or materially weaker.
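The boost-not-filter rule can be sketched as multiplicative score adjustments. The weights and field names here are illustrative assumptions, not the production ranking model:

```typescript
// Hypothetical sketch of implicit scope weighting: current route/domain
// and visible entities boost relevance; nothing is hard-filtered.

interface Card {
  id: string;
  domain: string;
  baseScore: number;            // backend relevance score
  matchesVisibleEntity: boolean;
}

const IN_SCOPE_BOOST = 1.5; // assumed boost for the current page's domain
const ENTITY_BOOST = 1.2;   // assumed boost for visible entity keys

function rank(cards: Card[], currentDomain: string): Card[] {
  const scored = cards.map((c) => ({
    card: c,
    score:
      c.baseScore *
      (c.domain === currentDomain ? IN_SCOPE_BOOST : 1) *
      (c.matchesVisibleEntity ? ENTITY_BOOST : 1),
  }));
  // Out-of-scope cards stay in the list; they sort lower unless they are
  // materially stronger than the in-scope candidates.
  return scored.sort((a, b) => b.score - a.score).map((s) => s.card);
}
```

Because the boost is multiplicative, a genuinely dominant out-of-scope card still outranks a weak in-scope one, which is exactly the overflow behavior the rules call for.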
Answer blending
- If top-N cards fall within a narrow score band, emit a blended summary with citations from multiple cards.
- If score separation is strong, emit a single-result grounded answer.
- The frontend should not ask the user to choose whether they want `find` or `explain`; the backend should infer the dominant answer shape.
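The answer-shape decision above can be sketched as a single branch on score separation. The band width and the `AnswerShape` type are assumed tuning details for illustration, not the real backend contract:

```typescript
// Hypothetical sketch: blend when the top cards sit in a narrow score
// band, otherwise emit a single grounded answer.

const NARROW_BAND = 0.1; // assumed relative score band for blending

type AnswerShape =
  | { kind: "blended"; citedCardIds: string[] }
  | { kind: "single"; cardId: string };

function chooseAnswerShape(
  ranked: { id: string; score: number }[],
  topN = 3,
): AnswerShape {
  const top = ranked.slice(0, topN);
  const leader = top[0];
  // Cards within the band of the leader count as a close cluster.
  const close = top.filter((c) => leader.score - c.score <= NARROW_BAND * leader.score);
  return close.length > 1
    ? { kind: "blended", citedCardIds: close.map((c) => c.id) }
    : { kind: "single", cardId: leader.id };
}
```

The point of the sketch is that the choice is inferred from evidence geometry, so the UI never has to surface a find-versus-explain control.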
Suggestion viability
- Suggestions must be validated against the current corpus before rendering.
- Knowledge/domain emptiness should be detectable so the UI can suppress invalid chips.
- Empty-state contextual chips and page-owned common-question chips should preflight through the backend viability endpoint before they render.
- Live Playwright coverage must assert that every surfaced suggestion returns visible results.
- A service health check alone is not enough. On 2026-03-07, http://127.1.0.44/health returned `200` while the live knowledge rebuild returned `documentCount=0`; the product still surfaced dead chips. Corpus readiness is the gate, not process liveness.
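The readiness-over-liveness rule can be stated as one predicate. The counter names mirror the rebuild fields quoted in this document (`documentCount`, `chunkCount`), but the report shape itself is an assumed sketch, not the real endpoint:

```typescript
// Hypothetical sketch: corpus readiness, not process liveness, gates the
// suggestion surface.

interface RebuildReport {
  healthy: boolean;      // result of the /health probe
  documentCount: number; // ingested documents in the active corpus
  chunkCount: number;    // indexed chunks
}

// A 200 from /health is necessary but not sufficient: an empty corpus
// must still fail readiness so dead chips are treated as setup failures.
function corpusReady(report: RebuildReport): boolean {
  return report.healthy && report.documentCount > 0 && report.chunkCount > 0;
}
```

The same gate can be the fail-fast condition in the live E2E matrix described under Phase D.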
Corrective execution phases
Phase A - Search-first and assistant-second shell behavior
- Replace route-jumping chat opens with a global assistant drawer hosted by the shell.
- Remove mode switching from the assistant UI and prompt generation.
- Make the top-bar search icon the canonical assistant entry from any page.
Phase B - Zero-learning search surface collapse
- Remove or minimize empty-state product education that teaches Stella structure instead of solving the current question.
- Keep `Did you mean` immediately under the input and keep overflow presentation subtle.
- Keep suggestions executable, short, and page-aware without extra explanation burden.
Phase C - Ranking, answer blending, and optional telemetry
- Refine current-page/entity/action weighting so in-scope answers dominate when sensible.
- Blend answers only when the top evidence cluster is genuinely close; otherwise show a decisive primary answer.
- Ensure analytics, feedback, and quality telemetry can be disabled without breaking search, suggestions, history, or chat handoff.
Phase D - Live ingestion-backed reliability matrix
- Keep mock-backed E2E for deterministic UI shape.
- Add live ingested E2E for the actual suggestions and answer experience on supported domains.
- Fail fast when corpus rebuild/readiness is missing so dead suggestions are treated as setup failures, not flaky UI tests.
Current state
- Implemented from the corrective phases: search now opens AdvisoryAI through a shell-level drawer with no route jump, restores focus back to search when the drawer closes, and removes visible assistant mode framing.
- Implemented from the corrective phases: the empty state no longer teaches Stella taxonomy through domain guides or quick links; it now keeps only current-page context, successful recent searches, and executable starter chips.
- Implemented before and during the corrective phases: explicit scope/mode/recovery controls were removed from the main search flow, implicit current-scope weighting and overflow contracts were added, and suggestion viability preflight now suppresses dead chips before render.
- Implemented before the corrective phases: the live Doctor suggestion suite now rebuilds the active corpus, fails on empty knowledge projections, iterates every surfaced suggestion, and verifies Ask-AdvisoryAI inherits the live search context.
- Still pending from the corrective phases: broader live-page matrices, stronger backend ranking/blending refinement across more domains, and explicit client-side telemetry opt-out.