Splits persona permissions so you can allow a persona on:
- chat DMs
- personal messages
- topic mentions
- chat channels
(any combination is allowed)
Previously we did not have this flexibility.
Additionally, adds the ability to "tether" a language model to a persona so it will always be used by the persona. This allows people to use a cheaper language model for one group of people and a more expensive one for others.
On very large sites, the rare cache misses for Related Topics can take around 200ms, which affects our p99 metric on the topic page. In order to mitigate this impact, we now have several tools at our disposal.
The first is to migrate the index embedding type from halfvec to bit and change the related topics query to leverage the new bit index, switching the search algorithm from inner product to Hamming distance. This will reduce our index sizes by 90%, drastically reducing the impact of embeddings on our storage. By making the related query a bit smarter, we can have zero impact on recall: use the index to over-capture N*2 results, then re-order those N*2 using the full halfvec vectors and take the top N. The expected impact is going from 200ms to <20ms for cache misses and from a 2.5GB index to a 250MB index on a large site.
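A minimal sketch of the over-capture-and-re-rank query, assuming a hypothetical `topic_embeddings` table with a halfvec `embedding` column and a derived bit `binary_embedding` column (in pgvector, `<~>` is Hamming distance and `<#>` is negative inner product); casts and dimensions are omitted for brevity:

```ruby
# Sketch only: table and column names are hypothetical.
# Over-capture 2 * n candidates via the cheap bit index (Hamming distance),
# then re-rank them with the full halfvec vectors and keep the top n.
def related_topic_ids(topic_id, query_halfvec, query_bits, n: 10)
  DB.query_single(<<~SQL, topic_id: topic_id, qv: query_halfvec, qb: query_bits, n: n, over: n * 2)
    WITH candidates AS (
      SELECT topic_id, embedding
      FROM topic_embeddings
      WHERE topic_id <> :topic_id
      ORDER BY binary_embedding <~> :qb
      LIMIT :over
    )
    SELECT topic_id
    FROM candidates
    ORDER BY embedding <#> :qv
    LIMIT :n
  SQL
end
```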
Another tool is migrating our index type from IVFFLAT to HNSW, which can improve cache-miss performance even further, eventually putting us in under-5ms territory.
Co-authored-by: Roman Rizzi <roman@discourse.org>
This introduces another configuration that allows operators to
limit the number of interactions with forced tool usage.
Forced tools are very handy in initial LLM interactions, but as
the conversation progresses they can get in the way, slowing
things down and adding confusion.
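Conceptually, the new limit works like the sketch below; the setting and method names are hypothetical, not the plugin's actual API:

```ruby
# Hypothetical illustration: only force the tool for the first N interactions,
# after that let the model decide whether to call tools at all.
def tool_choice_for(persona, interactions_so_far)
  if interactions_so_far < persona.forced_tool_count
    persona.forced_tool_name # force this tool unconditionally
  else
    :auto                    # the model is free to skip tools
  end
end
```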
The primary key is usually a bigint column, but the foreign key columns
usually are of integer type. This can lead to issues when joining these
columns due to mismatched types and different value ranges.
In a recent core change, all bigint sequences start at a very high
value in the test environment to surface this type of error. The same
change also added a temporary API that changes the column type to bigint
so that the tests can run.
The plugin API is only temporary, and it is important for these plugins
to migrate their columns to bigint to avoid issues in the future.
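For a plugin table, the eventual fix is an ordinary migration; a minimal sketch, assuming a hypothetical `ai_example_items` table with a `topic_id` foreign key:

```ruby
# Hypothetical table/column names; the point is switching the foreign key
# column to bigint so it matches the bigint primary key it references.
class ChangeAiExampleItemsTopicIdToBigint < ActiveRecord::Migration[7.1]
  def up
    change_column :ai_example_items, :topic_id, :bigint
  end

  def down
    change_column :ai_example_items, :topic_id, :integer
  end
end
```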
This adds chain halting (the ability to terminate the LLM chain from
within a tool) and the ability to create uploads in a tool.
Together this lets us integrate custom image generators into a
custom tool.
* FEATURE: allows forced LLM tool use
Sometimes we need to force LLMs to use tools, for example in RAG-like
use cases we may want to force an unconditional search.
The new framework allows the backend to force tool usage (see the
sketch after this list).
Front end commit to follow
* UI for forcing tools now works, but it does not react right
* fix bugs
* fix tests, this is now ready for review
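For context, at the provider level (the OpenAI chat completions API, for example), forcing a tool means pinning `tool_choice` to a specific function instead of leaving it on auto. The payload below only illustrates that idea, not the plugin's internal abstraction:

```ruby
# Illustrative OpenAI-style payload; the plugin's dialects wrap this per provider.
payload = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "What does the refund policy say?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "search",
        parameters: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"]
        }
      }
    }
  ],
  # "auto" lets the model decide; naming a function forces an unconditional call
  tool_choice: { type: "function", function: { name: "search" } }
}
```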
Previous to this change we could flag, but there was no way
to hide content and treat the flag as spam.
We had the option to hide topics, but this is not desirable for
a spam reply.
The new option allows triage to hide a post if it is a reply; if the
post happens to be the first post in the topic, the topic will
be hidden.
This PR updates the rate limits for the AI helper so that image captions follow a specific rate limit of 20 requests per minute. This should help when uploading multiple files that need to be captioned. This PR also updates the UI so that it shows a toast message with the extracted error message instead of a blocking `popupAjaxError` dialog.
---------
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
Relies on https://github.com/discourse/discourse/pull/28477,
uses AdminSectionLandingWrapper and AdminSectionLandingItem
for the section items on the LLM page which are used to create
a new LLM config from a template.
This allows our users to add the Ollama provider and use it to serve our AI bot (completion/dialect).
In this PR, we introduce:
- DiscourseAi::Completions::Dialects::Ollama, which helps us translate by utilizing Completions::Endpoint::Ollama
- Corrections to extract_completion_from and partials_from in Endpoints::Ollama

Also:
- Tests for Endpoints::Ollama
- An ollama_model fabricator
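For reference, the Ollama chat endpoint these classes target looks roughly like this; host, port, and model name are just examples:

```ruby
require "net/http"
require "json"

# Minimal request against a local Ollama server (non-streaming).
uri = URI("http://localhost:11434/api/chat")
body = {
  model: "llama3",
  stream: false, # set to true to receive partial responses
  messages: [{ role: "user", content: "Hello!" }]
}

response = Net::HTTP.post(uri, body.to_json, "Content-Type" => "application/json")
puts JSON.parse(response.body).dig("message", "content")
```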
This allows custom tools access to uploads and sophisticated searches using embeddings.
It introduces:
- A shared front end for listing and uploading files (shared with personas)
- Backend implementation of index.search function within a custom tool.
Custom tools may now search through uploaded files:

```javascript
function invoke(params) {
  return index.search(params.query);
}
```
This means that RAG implementers may now preload tools with knowledge and have fine-grained control over the search.
The search function supports:
- specifying max results
- specifying a subset of files to search (from uploads)
Also:
- Improved documentation for tools (when creating a tool, a preamble explains all the functionality)
- Fixed an edge case where the UI would not show uploads as updated (uploads were a bit finicky)
Restructures LLM config page so it is far clearer.
Also corrects bugs around adding LLMs and LLMs not being editable after they were added.
---------
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
Often it is helpful to have the summary box open while composing a reply to the topic. However, the summary box currently gets closed each time you click outside the box. In this PR we add `closeOnClickOutside: false` attribute to the `DMenu` options for summary box to prevent that from occurring.
Previously we had some hardcoded markup with SCSS making a loading indicator wave. This code was duplicated and used in both semantic search and summarization. We want to add the indicator wave to the AI helper diff modal as well, with the text flashing instead of the loading spinner. To ensure we do not repeat ourselves, in this PR we turn the summary indicator wave into a reusable template-only component called `AiIndicatorWave`. We then apply that component to semantic search, summarization, and the composer helper modal.
This commit fixes an issue where the composer AI helper was not visible on iPad in DiscourseHub. This was due to the z-index being different for `reply-control` when Discourse Hub inserts its `footer-nav`.
Caveats
- No streaming, by design
- No tool support (including no XML tools)
- No vision
OpenAI will revamp the model and more of these features may
become available.
This solution is a bit hacky for now
The `DiffModal` is triggered after selecting an option in the composer helper menu. After selecting an option, we should close the composer helper menu and only show the diff modal. On mobile, there was an edge case where `this.args.close()` was causing the closing of both the `DiffModal` and the `AiComposerHelperMenu`. This PR resolves that by ensuring the menu is closed _first_ asynchronously, followed by opening the relevant modal.
Polymorphic RAG means that we will be able to access RAG fragments both from AiPersona and AiCustomTool
In turn this gives us support for richer RAG implementations.
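In Rails terms this is a standard polymorphic association; a minimal sketch (the model and column names are illustrative, not necessarily the plugin's exact schema):

```ruby
# Illustrative only: one fragment table serving multiple owner types.
class RagDocumentFragment < ActiveRecord::Base
  belongs_to :target, polymorphic: true # an AiPersona or an AiCustomTool
end

class AiPersona < ActiveRecord::Base
  has_many :rag_document_fragments, as: :target, dependent: :destroy
end

class AiCustomTool < ActiveRecord::Base
  has_many :rag_document_fragments, as: :target, dependent: :destroy
end
```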
Previously we had moved the AI helper from the options menu to a selection menu that appears when selecting text in the composer. This had the benefit of making the AI helper a more discoverable feature. Now that some time has passed and the AI helper is more recognized, we will be moving it back to the composer toolbar.
This is better because:
- It is consistent with other behavior and ways of accessing tools in the composer
- It has an improved mobile experience
- It reduces unnecessary code and keeps things easier to migrate when we have composer V2.
- It allows for easily triggering the AI helper for all content by clicking the button, instead of having to select everything first.
Embedding search is rate limited due to the potentially expensive
HyDE operation (which requires LLM access).
Embedding on its own is generally very cheap by comparison (usually 100x cheaper).
This raises the limit to 100 per minute for embedding searches,
while keeping the old 4 per minute for HyDE powered search.
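With Discourse's `RateLimiter`, the split looks roughly like this (the limiter keys and method name are illustrative):

```ruby
# Illustrative keys; the idea is a generous limit for plain embedding search
# and a much tighter one when HyDE (an LLM call) is involved.
def check_semantic_search_rate_limit!(user, hyde:)
  if hyde
    RateLimiter.new(user, "semantic-search-hyde", 4, 1.minute).performed!
  else
    RateLimiter.new(user, "semantic-search", 100, 1.minute).performed!
  end
end
```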
Previously we waited 1 minute before automatically titling PMs.
The new change adds a title immediately after the
LLM replies.
Prompt was also modified to include the LLM reply in title suggestion.
This helps situations like:
user: tell me a joke
llm: a very funny joke about horses
Then the title would be "A Funny Horse Joke"
Specs already covered some of the auto-title logic; they were amended to also
catch the new message bus message we have been sending.