* FEATURE: Fast-track gist regeneration when a hot topic gets a new post
* DEV: Introduce an upsert-like summarize operation
* FIX: Only enqueue fast-track gist for hot topics
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
* Display gists in the hot topics list
* Adjust hot topics gist strategy and add a job to generate gists
* Replace setting with a configurable batch size
* Avoid loading summaries for other topic lists
* Tweak gist prompt to focus on latest posts in the context of the OP
* Remove serializer hack and rely on core change from discourse/discourse#29291
* Update lib/summarization/strategies/hot_topic_gists.rb
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
---------
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
This gives custom tools access to uploads and sophisticated searches using embeddings.
It introduces:
- A shared front end for listing and uploading files (shared with personas)
- Backend implementation of index.search function within a custom tool.
Custom tools may now search through uploaded files:
function invoke(params) {
  return index.search(params.query)
}
This means that RAG implementers may now preload tools with knowledge and retain fine-grained
control over the search.
The search function supports (see the sketch below):
- specifying max results
- specifying a subset of files to search (from uploads)
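A hypothetical sketch of how those options could be passed (the option names max_results and filenames are illustrative assumptions, not a documented signature; the tool creation preamble describes the exact API):
function invoke(params) {
  // Illustrative only: cap the number of fragments returned and
  // restrict the search to specific uploaded files
  return index.search(params.query, {
    max_results: 5,
    filenames: ["manual.txt"],
  })
}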
Also:
- Improved documentation for tools (when creating a tool, a preamble explains all the functionality)
- Fixed an edge case where the UI would not show uploads as updated (uploads were a bit finicky)
Polymorphic RAG means that we will be able to access RAG fragments from both AiPersona and AiCustomTool.
In turn, this gives us support for richer RAG implementations.
Previously, we waited 1 minute before automatically titling PMs.
The new change adds a title immediately after the
LLM replies.
The prompt was also modified to include the LLM reply in the title suggestion.
This helps situations like:
user: tell me a joke
llm: a very funny joke about horses
Then the title would be "A Funny Horse Joke"
Specs already covered some of the auto-title logic; they were amended to also
catch the new message bus message we have been sending.
When navigating between topics we were not correctly resetting
internal state for summarization. This led to situations where
incorrect summaries could be displayed to users.
Additionally, our controller for fetching summaries was always
streaming results via the message bus, which could be delayed when
Sidekiq is overloaded. We now return the cached summary
right away, directly from the REST endpoint, if it is available.
This allows summarization to use the new LLM models and migrates off API-key-based model selection.
Claude 3.5, etc. all work now.
---------
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Add support for chat with AI personas
- Allow enabling chat for AI personas that have an associated user
- Add new setting `allow_chat` to AI persona to enable/disable chat
- When a message is created in a DM channel with an allowed AI persona user, schedule a reply job
- AI replies to chat messages using the persona's `max_context_posts` setting to determine context
- Store tool calls and custom prompts used to generate a chat reply on the `ChatMessageCustomPrompt` table
- Add tests for AI chat replies with tools and context
At the moment, unlike posts, we do not carry tool calls in the context.
There is no @mention support yet for AI personas in channels; this is future work.
Updating the editing model's rag_uploads in the editor component broke multi-file uploading. Instead, we'll keep the uploads in the uploader and update the model when we finish.
This PR also fast-tracks the initial update so we can show feedback to the user quickly, and allows uploading MD files.
Bug reported on https://meta.discourse.org/t/discourse-ai-persona-upload-support/304049/11
* FEATURE: allow tuning of RAG generation
- change chunking to be token-based vs char-based (which is more accurate); see the sketch below
- allow control over overlap / tokens per chunk and conversation snippets inserted
- UI to control new settings
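For reference, a minimal sketch of token-based chunking with overlap (illustrative JavaScript; tokenize and detokenize are assumed helpers wrapping the model tokenizer, not the plugin's actual implementation):
function chunkByTokens(text, tokensPerChunk, overlap) {
  // Slide a window of tokensPerChunk tokens, stepping back by `overlap`
  // tokens each time so neighboring chunks share context
  const tokens = tokenize(text)
  const chunks = []
  const step = Math.max(1, tokensPerChunk - overlap)
  for (let start = 0; start < tokens.length; start += step) {
    chunks.push(detokenize(tokens.slice(start, start + tokensPerChunk)))
  }
  return chunks
}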
* improve ui a bit
* fix various reindex issues
* reduce concurrency
* try ultra low queue ... concurrency 1 is too slow.
This commit uses a new plugin modifier introduced in https://github.com/discourse/discourse/pull/26508
to mark all uploads as _not_ secure in shared PM AI conversations.
This is so images created by the AI bot (or uploaded by the user)
do not end up as broken URLs because of the security requirements
around them.
This relies on the UpdateTopicUploadSecurity job in core as well,
which is fired when an AI conversation is shared or deleted.
* FEATURE: Add metadata support for RAG
You may include non-indexed metadata in the RAG document by using
[[metadata ....]]
This information is attached to all the text below and provided to
the retriever.
This allows RAG to operate within a rich set of contexts
without getting lost
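For example, a document could look like this (the key=value form inside the marker is illustrative; the commit only specifies the [[metadata ....]] wrapper):
[[metadata source="billing-faq" audience="customers"]]
Refunds are issued to the original payment method within 14 days.
Subscriptions renew automatically unless cancelled.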
Also:
- re-implemented the chunking algorithm so it streams
- moved indexing to a background low-priority queue
* Baran gem no longer required.
* tokenizers is on 4.4 ... upgrade it ...
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
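A rough sketch of that retrieval flow (illustrative JavaScript pseudocode with assumed helper names; the real implementation is server-side):
async function relevantFragments(query, persona, maxFragments) {
  // Find the persona upload fragments closest to the query embedding
  const queryEmbedding = await embed(query)
  const candidates = await similaritySearch(queryEmbedding, persona.uploads)
  // Re-rank with a dedicated model when one is available
  const ranked = reranker ? await reranker.rank(query, candidates) : candidates
  // The top fragments are folded into the system prompt
  return ranked.slice(0, maxFragments)
}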
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings from. In a follow-up commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
This allows users to share a static page of an AI conversation with
the rest of the world.
By default this feature is disabled; it is enabled by turning on
ai_bot_allow_public_sharing via site settings.
Precautions are taken when sharing:
1. We make a carbonite copy
2. We minimize the work of generating the page
3. We limit to 100 interactions
4. Many security checks - including disallowing if there is a mix
of users in the PM.
* Bonus commit: large PRs like this one did not work with the GitHub tool;
large objects would destroy the context.
Co-authored-by: Martin Brennan <martin@discourse.org>
* DEV: improve internal design of ai persona and bug fix
- Fixes bug where OpenAI could not describe images
- Fixes bug where mentionable personas could not be mentioned unless overarching bot was enabled
- Improves internal design of playground and bot to better allow for non-"bot" users
- Allow PMs directly to persona users (previously bot user would also have to be in PM)
- Simplify internal code
Co-authored-by: Martin Brennan <martin@discourse.org>
1. Personas are now optionally mentionable, meaning that you can mention them either from public topics or PMs
- Mentioning from PMs helps "switch" persona mid-conversation, meaning if you want to look up site settings you can invoke the site settings bot, or if you want to generate an image you can invoke DALL-E
- Mentioning outside of PMs allows you to trivially inject a bot reply in a topic
- We also add support for max_context_posts; this allows you to limit the amount of context you feed in, which can help control costs
2. Add support for a "random picker" tool that can be used to pick random numbers
3. Clean up routing ai_personas -> ai-personas
4. Add Max Context Posts so users can control how much history a persona can consume (this is important for mentionable personas)
Co-authored-by: Martin Brennan <martin@discourse.org>
* DEV: AI bot migration to the Llm pattern.
We added tool and conversation context support to the Llm service in discourse-ai#366, meaning we met all the conditions to migrate this module.
This PR migrates to the new pattern, meaning adding a new bot now requires minimal effort as long as the service supports it. On top of this, we introduce the concept of a "Playground" to separate the PM-specific bits from the completion, allowing us to use the bot in other contexts like chat in the future. Commands are now called tools, and we simplified all the placeholder logic to perform updates in a single place, making the flow more one-wayish.
* Followup fixes based on testing
* Cleanup unused inference code
* FIX: text-based tools could be in the middle of a sentence
* GPT-4-turbo support
* Use new LLM API
Prior to this change, we relied on explicit loading of files in Discourse AI.
This had a few downsides:
- Busywork whenever you add a file (an extra require_relative)
- We were not keeping to conventions internally ... some places were OpenAI, others OpenAi
- The autoloader did not work, which led to lots of broken full-application reloads when developing.
This moves all of DiscourseAI into a Zeitwerk compatible structure.
It also leaves a minimal amount of manual loading (automation, which loads into an existing namespace that may or may not be there).
To avoid needing /lib/discourse_ai/... we mount a namespace; thus we are able to keep /lib pointed at ::DiscourseAi.
Various files were renamed to get around Zeitwerk rules and to minimize usage of custom inflections.
Though we could get custom inflections to work, it is not worth it; it would require a Discourse core patch, which means creating a hard dependency.