* Display gists in the hot topics list
* Adjust hot topics gist strategy and add a job to generate gists
* Replace setting with a configurable batch size
* Avoid loading summaries for other topic lists
* Tweak gist prompt to focus on latest posts in the context of the OP
* Remove serializer hack and rely on core change from discourse/discourse#29291
* Update lib/summarization/strategies/hot_topic_gists.rb
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
Splits persona permissions so you can allow a persona on:
- chat dms
- personal messages
- topic mentions
- chat channels
(any combination is allowed)
Previously we did not have this flexibility.
Additionally, adds the ability to "tether" a language model to a persona so it will always be used by the persona. This allows people to use a cheaper language model for one group of people and a more expensive one for others.
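For illustration, enabling a persona only for DMs and personal messages while tethering it to a cheaper model could look roughly like this from the Rails console (attribute names are approximate and may not match the actual schema):

    # Illustrative only: attribute names are approximate.
    cheap_llm_model = LlmModel.find_by(display_name: "Claude 3 Haiku")
    persona = AiPersona.find_by(name: "Helper")
    persona.update!(
      allow_chat_direct_messages: true,
      allow_personal_messages: true,
      allow_topic_mentions: false,
      allow_chat_channel_mentions: false,
      default_llm: "custom:#{cheap_llm_model.id}" # the tethered model
    )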
On very large sites, the rare cache misses for Related Topics can take around 200ms, which affects our p99 metric on the topic page. In order to mitigate this impact, we now have several tools at our disposal.
The first is to migrate the index embedding type from halfvec to bit and change the related-topics query to leverage the new bit index by switching the search metric from inner product to Hamming distance. This will reduce our index sizes by 90%, drastically reducing the impact of embeddings on our storage. By making the related query a bit smarter we can have zero impact on recall: we use the index to over-capture N*2 results, then re-order those N*2 using the full halfvec vectors and take the top N (sketched below). The expected impact is to go from 200ms to <20ms for cache misses and from a 2.5GB index to a 250MB index on a large site.
Another tool is migrating our index type from IVFFLAT to HNSW, which can improve cache-miss performance even further, eventually putting us in under-5ms territory.
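The over-capture and re-rank step can be sketched roughly as follows (table and column names are illustrative, not the plugin's actual schema; `<~>` is pgvector's Hamming distance operator and `<#>` its negative inner product):

    # Illustrative sketch only: grab N*2 candidates via the small bit index,
    # then re-rank them with the full-precision halfvec vectors.
    def related_topic_ids(target_bits, target_vector, limit)
      DB.query_single(<<~SQL, bits: target_bits, vector: target_vector, limit: limit)
        SELECT topic_id
        FROM (
          SELECT topic_id, embedding
          FROM topic_embeddings
          ORDER BY binary_embedding <~> :bits   -- Hamming distance, served by the bit index
          LIMIT :limit * 2                      -- over-capture N*2 candidates
        ) candidates
        ORDER BY embedding <#> :vector          -- re-rank with the full halfvec vectors
        LIMIT :limit
      SQL
    end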
Co-authored-by: Roman Rizzi <roman@discourse.org>
This introduces another configuration that allows operators to
limit the number of interactions with forced tool usage.
Forced tools are very handy in initial LLM interactions, but as
the conversation progresses they can get in the way, slowing things
down and adding confusion.
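A minimal sketch of the idea (the constant and method names are illustrative, not the actual setting or code):

    # Illustrative only: force the tool for the first N interactions,
    # then let the model decide for itself.
    MAX_FORCED_TOOL_INTERACTIONS = 1

    def tool_choice_for(interaction_count, forced_tool)
      interaction_count < MAX_FORCED_TOOL_INTERACTIONS ? forced_tool : nil
    end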
This adds chain halting (the ability to terminate the LLM chain from
within a tool) and the ability to create uploads in a tool.
Together this lets us integrate custom image generators into a
custom tool.
* FEATURE: allows forced LLM tool use
Sometimes we need to force LLMs to use tools; for example, in RAG-like
use cases we may want to force an unconditional search.
The new framework allows your backend to force tool usage (see the sketch after this list).
Front end commit to follow
* UI for forcing tools now works, but it does not react right
* fix bugs
* fix tests, this is now ready for review
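On providers with native tool calling, forcing a tool boils down to pinning an OpenAI-style tool_choice parameter; an illustrative request body (not the plugin's dialect code):

    # Illustrative only: an OpenAI-style request that forces the "search" tool.
    payload = {
      model: "gpt-4o",
      messages: messages,
      tools: [search_tool_definition],
      tool_choice: { type: "function", function: { name: "search" } }
    }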
Prior to this change we could flag, but there was no way
to hide content and treat the flag as spam.
We had the option to hide topics, but this is not desirable for
a spam reply.
The new option allows triage to hide a post if it is a reply; if the
post happens to be the first post of the topic, the whole topic is
hidden.
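The behaviour boils down to roughly the following (a sketch using core Discourse APIs; the actual triage code path differs):

    # Illustrative sketch of the new hide option.
    if post.is_first_post?
      # The flagged post is the OP, so the whole topic gets hidden.
      post.topic.update_status("visible", false, Discourse.system_user)
    else
      # Otherwise hide just the offending reply.
      post.hide!(PostActionType.types[:spam])
    end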
This allows our users to add the Ollama provider and use it to serve our AI bot (completion/dialect).
In this PR, we introduce:
- DiscourseAi::Completions::Dialects::Ollama, which helps us translate prompts for Completions::Endpoints::Ollama
- corrections to extract_completion_from and partials_from in Endpoints::Ollama
Also:
- tests for Endpoints::Ollama
- an ollama_model fabricator
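For reference, the Ollama chat endpoint the new dialect/endpoint pair talks to looks roughly like this (a hand-written example, not code from the PR):

    require "net/http"
    require "json"

    # Minimal non-streaming request against a locally running Ollama server.
    uri = URI("http://localhost:11434/api/chat")
    body = {
      model: "llama3",
      stream: false,
      messages: [
        { role: "system", content: "You are a helpful forum assistant." },
        { role: "user", content: "Summarize this topic for me." }
      ]
    }
    response = Net::HTTP.post(uri, body.to_json, "Content-Type" => "application/json")
    puts JSON.parse(response.body).dig("message", "content")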
This allows custom tools access to uploads and sophisticated searches using embeddings.
It introduces:
- A shared front end for listing and uploading files (shared with personas)
- Backend implementation of index.search function within a custom tool.
Custom tools may now search through uploaded files:

    function invoke(params) {
      return index.search(params.query);
    }

This means that RAG implementers may now preload tools with knowledge and have fine-grained control over the search.
The search function supports:
- specifying max results
- specifying a subset of files to search (from uploads)
Also
- Improved documentation for tools (when creating a tool a preamble explains all the functionality)
- uploads were a bit finicky; fixed an edge case where the UI would not show them as updated
Caveats
- No streaming, by design
- No tool support (including no XML tools)
- No vision
OpenAI will revamp the model, and more of these features may
become available.
This solution is a bit hacky for now.
Polymorphic RAG means that we will be able to access RAG fragments both from AiPersona and AiCustomTool
In turn this gives us support for richer RAG implementations.
Previously we waited 1 minute before automatically titling PMs.
The new change adds a title immediately after the LLM replies.
The prompt was also modified to include the LLM reply in the title suggestion.
This helps in situations like:
user: tell me a joke
llm: a very funny joke about horses
Then the title would be "A Funny Horse Joke"
Specs already covered some auto-title logic; they were amended to also
catch the new message bus message we now send.
* FIX: we were never reindexing old content
Embedding backfill contains logic for searching for old content
changes and then backfilling.
Unfortunately it was unconditionally excluding all topics that already
had embeddings, leading to no backfill ever happening.
This change adds a test and ensures we backfill.
* over-select results; this ensures we are more likely to find
AI results when filtered
This improves the site setting search so it performs a somewhat
fuzzy match.
Previously it did not handle separators such as spaces, so a
term such as "min_post_length" would not find "min_first_post_length".
A more liberal search algorithm makes it easier for the AI to
navigate settings.
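The general technique is simple; a hedged sketch of one way to do it (not necessarily the exact implementation):

    # Split the query on spaces/underscores and match the fragments in order,
    # allowing anything in between.
    def fuzzy_setting_regex(term)
      fragments = term.downcase.split(/[\s_]+/).map { |f| Regexp.escape(f) }
      /#{fragments.join(".*")}/
    end

    fuzzy_setting_regex("min post length") =~ "min_first_post_length" # => 0 (matches)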
* Minor fix: {{and parameter.enum parameter.enum.length}} is non-obviously broken.
If parameter.enum is a tracked array it will return the object,
due to how Ember's {{and}} helper is implemented.
This corrects an issue where the enum keeps selecting itself by
mistake.
This allows callers of embedding-based search to bypass HyDE.
HyDE expands the search term using an LLM, but if an LLM is
already performing the search we can skip this expansion.
It also introduces some tests for the controller, which we did not have before.
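Conceptually the flow is (parameter and helper names are illustrative, not the actual controller/API surface):

    # Illustrative only: skip the LLM-based query expansion when the caller
    # (for example, an AI persona's own search tool) asks us to.
    def embedding_search(term, hyde: true)
      search_term = hyde ? expand_with_llm(term) : term
      nearest_topics(embed(search_term))
    end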
* FEATURE: LLM Triage support for systemless models.
This change adds support for OSS models without support for system messages. LlmTriage's system message field is no longer mandatory. We now send the post contents in a separate user message.
* Models using Ollama can also disable system prompts
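A minimal sketch of what "no system message" means in practice (the shapes shown are illustrative; the actual triage code is more involved):

    # Illustrative only.
    messages =
      if model_supports_system_messages
        [{ role: :system, content: triage_instructions },
         { role: :user, content: post_contents }]
      else
        # Systemless models: the instructions become a user message and the
        # post contents are sent in a separate user message.
        [{ role: :user, content: triage_instructions },
         { role: :user, content: post_contents }]
      end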
New `ai_pm_summarization_allowed_groups` can be used to allow
visibility of the summarization feature on PMs.
This can be useful on forums where a lot of communication happens
inside PMs.
Creating a new model, either manually or from presets, doesn't initialize the `provider_params` object, meaning their custom params won't persist.
Additionally, this change adds some validations for Bedrock params, which are mandatory, and a clear message when a completion fails because we cannot build the URL.
* FIX: Add tool support to open ai compatible dialect and vllm
Automatic tools are in progress in vLLM; see: https://github.com/vllm-project/vllm/pull/5649
Even when they land, initial support will be uneven; only some models have native tool support,
notably Mistral, which has special tokens for tool calls.
After the above PR lands in vLLM we will still need to swap to
XML-based tools on models without native tool support.
* fix specs
* DEV: Remove old code now that features rely on LlmModels.
* Hide old settings and migrate persona llm overrides
* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
Using RAG fragments can lead to considerably large system messages, which becomes problematic when models have a smaller context window.
Before this change, we only looked at the rest of the conversation to make sure we didn't surpass the limit, which could lead to two unwanted scenarios with large system messages:
- All other messages are excluded due to size.
- The system message already exceeds the limit.
As a result, I'm putting a hard limit of 60% of available tokens. We don't want to aggressively truncate, because if RAG fragments are included the system message contains a lot of context to improve the model response, but we also want to make room for the recent messages in the conversation.
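A sketch of the budgeting idea (the constant and helper names are illustrative):

    # Illustrative only: cap the system message at 60% of the context window
    # and leave the remainder for the rest of the conversation.
    MAX_SYSTEM_SHARE = 0.6

    def system_message_token_budget(context_window_tokens)
      (context_window_tokens * MAX_SYSTEM_SHARE).floor
    end

    system_message_token_budget(8_192) # => 4915 tokens for the system message,
                                       #    leaving ~3277 for recent messages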
Using the assistant role for the system prompt produces an error because
these models expect alternating roles like user/assistant/user and so on.
Prompts cannot start with the assistant role.
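In other words (an illustrative example of the constraint, not code from the change):

    # Rejected: starts with the assistant role and repeats it back to back.
    [{ role: "assistant", content: system_text },
     { role: "assistant", content: "Hi!" },
     { role: "user", content: "Hello" }]

    # Accepted: system content folded into the first user message, then
    # strictly alternating user/assistant turns.
    [{ role: "user", content: "#{system_text}\n\nHello" },
     { role: "assistant", content: "Hi!" },
     { role: "user", content: "Tell me more" }]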