Restructures the LLM config page so it is far clearer.
Also corrects bugs around adding LLMs, and around LLMs not being editable after they were added.
---------
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
Often it is helpful to have the summary box open while composing a reply to the topic. However, the summary box currently gets closed each time you click outside it. In this PR we add the `closeOnClickOutside: false` attribute to the `DMenu` options for the summary box to prevent that from occurring.
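As a minimal sketch, assuming `DMenu` accepts the option as a component argument; the identifier and the surrounding wrapper are illustrative, not the shipped code:

```gjs
import Component from "@glimmer/component";
import DMenu from "float-kit/components/d-menu";

// Hypothetical wrapper; the real summary box passes additional arguments.
export default class SummaryBox extends Component {
  <template>
    {{! closeOnClickOutside=false keeps the menu open while the user
        clicks into the reply composer below it }}
    <DMenu @identifier="ai-summary-box" @closeOnClickOutside={{false}}>
      {{yield}}
    </DMenu>
  </template>
}
```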
Previously we had some hardcoded markup, with SCSS producing a loading indicator wave. This code was duplicated across semantic search and summarization. We want to add the indicator wave to the AI helper diff modal as well, with the text flashing instead of the loading spinner. To avoid repeating ourselves, in this PR we turn the summary indicator wave into a reusable template-only component called `AiIndicatorWave`. We then use that component in semantic search, summarization, and the composer helper modal.
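A rough sketch of the shape of such a component; the class names and dot markup are illustrative, not necessarily the shipped ones:

```gjs
// Illustrative template-only component; callers render it as
// <AiIndicatorWave @loading={{this.loading}} />
const AiIndicatorWave = <template>
  {{#if @loading}}
    <span class="ai-indicator-wave">
      <span class="ai-indicator-wave__dot">.</span>
      <span class="ai-indicator-wave__dot">.</span>
      <span class="ai-indicator-wave__dot">.</span>
    </span>
  {{/if}}
</template>;

export default AiIndicatorWave;
```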
This commit fixes an issue where the composer AI helper was not visible on iPad in DiscourseHub. This was due to the z-index for `reply-control` being different when DiscourseHub inserts its `footer-nav`.
Caveats
- No streaming, by design
- No tool support (including no XML tools)
- No vision
OpenAI will revamp the model, and more of these features may
become available.
This solution is a bit hacky for now.
The `DiffModal` is triggered after selecting an option in the composer helper menu. After selecting an option, we should close the composer helper menu and only show the diff modal. On mobile, there was an edge case where `this.args.close()` caused both the `DiffModal` and the `AiComposerHelperMenu` to close. This PR resolves that by ensuring the menu is closed _first_, asynchronously, followed by opening the relevant modal.
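A sketch of the ordering fix, assuming `this.args.close()` returns a promise that resolves once the menu's teardown finishes; the method name and modal import path are illustrative:

```js
import Component from "@glimmer/component";
import { action } from "@ember/object";
import { service } from "@ember/service";
// Illustrative import path for the diff modal.
import DiffModal from "../components/modal/diff-modal";

export default class AiComposerHelperMenu extends Component {
  @service modal;

  @action
  async performOption(option) {
    // Close the helper menu first and wait for its teardown, so closing
    // the menu cannot also dismiss the modal we are about to open.
    await this.args.close();
    this.modal.show(DiffModal, { model: { option } });
  }
}
```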
Polymorphic RAG means we will be able to access RAG fragments from both AiPersona and AiCustomTool.
In turn, this gives us support for richer RAG implementations.
Previously we had moved the AI helper from the options menu to a selection menu that appears when selecting text in the composer. This had the benefit of making the AI helper a more discoverable feature. Now that some time has passed and the AI helper is more recognized, we will be moving it back to the composer toolbar.
This is better because:
- It is consistent with other behavior and ways of accessing tools in the composer
- It has an improved mobile experience
- It reduces unnecessary code and will make migration easier when we move to composer V2.
- It allows the AI helper to be triggered for all content by clicking the button, instead of having to select everything first.
Embedding search is rate limited due to the potentially expensive
HyDE operation (which requires LLM access).
Embedding itself is generally very cheap by comparison (usually 100x cheaper).
This raises the limit to 100 per minute for embedding searches,
while keeping the old 4 per minute for HyDE-powered search.
Previously we waited 1 minute before automatically titling PMs.
The new change adds a title immediately after the LLM replies.
The prompt was also modified to include the LLM reply in the title suggestion.
This helps situations like:
user: tell me a joke
llm: a very funny joke about horses
Then the title would be "A Funny Horse Joke".
Specs already covered some of the auto-title logic; they were amended to also
catch the new message bus message we have been sending.
* FIX: we were never reindexing old content
Embedding backfill contains logic for finding old content that has
changed and then backfilling it.
Unfortunately it was unconditionally excluding all topics that already had
embeddings, leading to no backfill ever happening.
This change adds a test and ensures we backfill.
* Over-select results; this ensures we are more likely to find
AI results once filtering is applied
This improves the site setting search so it performs a somewhat
fuzzy match.
Previously it did not handle separators such as spaces, so a
term such as "min_post_length" would not find "min_first_post_length".
A more liberal search algorithm makes it easier for the AI to
navigate settings.
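The real matching happens server-side, but the idea is roughly this (an illustrative sketch, not the shipped implementation):

```js
// Every separator-delimited fragment of the query must appear, in order,
// somewhere in the setting name.
function fuzzyMatch(query, settingName) {
  const pattern = query
    .split(/[\s_]+/)
    .filter(Boolean)
    .map((part) => part.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"))
    .join(".*");
  return new RegExp(pattern, "i").test(settingName);
}

fuzzyMatch("min_post_length", "min_first_post_length"); // true
fuzzyMatch("min post length", "min_first_post_length"); // true
```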
* Minor fix: `{{and parameter.enum parameter.enum.length}}` is non-obviously
broken.
If `parameter.enum` is a tracked array it will return the object, due to
how Ember's `and` helper is implemented.
This corrects an issue where the enum kept selecting itself by
mistake.
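For context, a minimal illustration of why truthiness checks on tracked arrays are brittle, using the `tracked-built-ins` package:

```js
import { TrackedArray } from "tracked-built-ins";

const enumValues = new TrackedArray([]);

// An empty TrackedArray is still a truthy object, so any check that
// relies on plain truthiness treats it as "present":
console.log(Boolean(enumValues)); // true
console.log(enumValues.length); // 0

// Checking the length directly, e.g. {{#if parameter.enum.length}},
// avoids the ambiguity.
```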
This allows callers of embedding-based search to bypass HyDE.
HyDE expands the search term using an LLM, but if an LLM is
performing the search we can skip this expansion.
It also introduces some tests for the controller, which we did not have before.
Previously proofreading text took too much work; the new implementation
provides a single shortcut and an easy way to proofread text.
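A hedged sketch of how such a shortcut could be wired up through the plugin API; the `ctrl+alt+p` binding and the handler body are illustrative, not the shipped ones:

```js
import { withPluginApi } from "discourse/lib/plugin-api";

export default {
  name: "ai-proofread-shortcut",
  initialize() {
    withPluginApi("1.1.0", (api) => {
      // Illustrative binding; the real shortcut and handler differ.
      api.addKeyboardShortcut("ctrl+alt+p", (event) => {
        event?.preventDefault();
        // Trigger the AI proofread flow here.
      });
    });
  },
};
```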
Co-authored-by: Martin Brennan <martin@discourse.org>
`withEventValue` is not needed here, because the `onChange` event comes from ace, not a normal DOM event. But even with that fix, it seems AceEditor doesn't yet work well with the DDAU pattern. On every keypress, the editor re-renders and puts the cursor back at the beginning.
For now, this commit removes the `@onChange` hook, so we go back to relying on the two-way binding of `@content`.
Follow-up to a5a39dd2ee
* FEATURE: LLM Triage support for systemless models.
This change adds support for OSS models that lack system message support. LlmTriage's system message field is no longer mandatory; we now send the post contents in a separate user message.
* Models using Ollama can also disable system prompts