This optional feature allows search to be performed in the context
of the user who executed it.
By default we do not allow this behavior because it means the LLM gets
access to potentially secure data.
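As a rough illustration of the default, a search invoked on behalf of the bot would run under an anonymous view unless the persona explicitly opts in; the flag and method names below are assumptions, not the plugin's actual API:

```ruby
# Hypothetical sketch: run search as the invoking user only when the
# persona opts in; otherwise fall back to an anonymous view.
def search_actor(invoking_user, persona_allows_user_context:)
  persona_allows_user_context ? invoking_user : :anonymous
end

puts search_actor("sam", persona_allows_user_context: false) # => anonymous
puts search_actor("sam", persona_allows_user_context: true)  # => sam
```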
Add support for chat with AI personas
- Allow enabling chat for AI personas that have an associated user
- Add new setting `allow_chat` to AI persona to enable/disable chat
- When a message is created in a DM channel with an allowed AI persona user, schedule a reply job
- AI replies to chat messages using the persona's `max_context_posts` setting to determine context
- Store tool calls and custom prompts used to generate a chat reply in the `ChatMessageCustomPrompt` table
- Add tests for AI chat replies with tools and context
At the moment, unlike posts, we do not carry tool calls in the context.
There is no @mention support yet for AI personas in channels; this is future work.
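A minimal sketch of the gating decision described above, assuming hypothetical field and method names (the real job interface differs):

```ruby
# Hypothetical sketch, not the plugin's actual code: decide whether a
# newly created chat message should trigger an AI reply job.
PersonaConfig = Struct.new(:user_id, :allow_chat, :max_context_posts, keyword_init: true)

def schedule_ai_chat_reply?(direct_message_channel:, persona:)
  # Only reply in DM channels, and only when the persona opted in via allow_chat.
  direct_message_channel && !!persona&.allow_chat
end

persona = PersonaConfig.new(user_id: 42, allow_chat: true, max_context_posts: 10)
if schedule_ai_chat_reply?(direct_message_channel: true, persona: persona)
  # In the plugin this would enqueue a background job; here we just print.
  puts "enqueue reply for persona user #{persona.user_id}, " \
       "context capped at #{persona.max_context_posts} messages"
end
```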
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Prior to this change we used the last 10 interactions; this was not ideal because the RAG would "lock on" to an answer.
E.g.:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example, "trains" and "US" become very low signal given there are pages and pages talking about cars and Europe. This means retrieval is suboptimal.
Instead, we pass the history to the "question consolidator", which consolidates the question to "How many trains are there in the United States?", making it far easier for the vector db to find relevant content.
The LLM used for the question consolidator can often be less powerful than the model you are talking to; we recommend lightweight, fast models because the task is very simple. This is configurable from the persona UI.
This PR also removes support for the {uploads} placeholder; it is too complicated to get right, and we want freedom to shift the RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
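A toy sketch of the consolidation flow, under stated assumptions: whitespace splitting stands in for a real tokenizer, and the prompt wording is invented, not the plugin's actual prompt:

```ruby
# Illustrative only: walk the history backwards within a token budget,
# then build a prompt asking the consolidator LLM for a standalone question.
def consolidation_prompt(history, token_budget: 1000)
  kept = []
  used = 0
  history.reverse_each do |msg|
    tokens = msg[:content].split.length # crude stand-in for a real tokenizer
    break if used + tokens > token_budget
    kept.unshift(msg)
    used += tokens
  end

  transcript = kept.map { |m| "#{m[:role]}: #{m[:content]}" }.join("\n")
  <<~PROMPT
    Rewrite the user's last question below as a single self-contained
    question that needs no prior context.

    #{transcript}
  PROMPT
end

history = [
  { role: "user", content: "how many cars are there in europe" },
  { role: "model", content: "There are roughly 250 million cars in Europe ..." },
  { role: "user", content: "Nice, what about trains are there in the US" },
]
puts consolidation_prompt(history)
# The consolidator LLM would ideally answer:
# "How many trains are there in the United States?"
```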
- Updated AI Bot to only support Gemini 1.5 (it used to support 1.0); 1.0 was removed because it is not appropriate for bot usage
- Summaries and automation can now lean on Gemini 1.5 Pro
- Amazon added support for Claude 3 Opus; added internal support for it on Bedrock
* FIX: various RAG edge cases
- Nicer text to describe RAG, avoids the word RAG
- Do not attempt to save the persona when removing uploads if it has not been created yet
- Remove old code that avoided touching RAG params on create
* FIX: Missing pause button for persona users
* FEATURE: allow specific users to debug AI request/response chains
This can help users easily tune RAG and figure out what is going
on with requests.
* add a Discourse helper so it does not explode
* fix test
* simplify implementation
* FEATURE: allow tuning of RAG generation
- change chunking to be token-based instead of char-based (token-based is more accurate); see the sketch below
- allow control over overlap / tokens per chunk and conversation snippets inserted
- UI to control new settings
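As a sketch of token-based chunking with overlap (whitespace splitting stands in for the real tokenizer, and the parameter names are assumptions):

```ruby
# Illustrative chunker: fixed tokens per chunk, with overlapping windows
# so sentences straddling a boundary appear in two chunks.
def chunk_tokens(text, tokens_per_chunk: 100, overlap: 10)
  tokens = text.split # real code would use the model's tokenizer
  chunks = []
  step = tokens_per_chunk - overlap
  (0...tokens.length).step(step) do |start|
    chunks << tokens[start, tokens_per_chunk].join(" ")
    break if start + tokens_per_chunk >= tokens.length
  end
  chunks
end

puts chunk_tokens("one two three " * 50, tokens_per_chunk: 40, overlap: 8).length # => 5
```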
* improve ui a bit
* fix various reindex issues
* reduce concurrency
* try ultra low queue ... concurrency 1 is too slow.
- Added Cohere Command models (Command, Command Light, Command R, Command R Plus) to the available model list
- Added a new site setting `ai_cohere_api_key` for configuring the Cohere API key
- Implemented a new `DiscourseAi::Completions::Endpoints::Cohere` class to handle interactions with the Cohere API, including:
- Translating request parameters to the Cohere API format
- Parsing Cohere API responses
- Supporting streaming and non-streaming completions
- Supporting "tools" which allow the model to call back to discourse to lookup additional information
- Implemented a new `DiscourseAi::Completions::Dialects::Command` class to translate between the generic Discourse AI prompt format and the Cohere Command format
- Added specs covering the new Cohere endpoint and dialect classes
- Updated `DiscourseAi::AiBot::Bot.guess_model` to map the new Cohere model to the appropriate bot user
In summary, this PR adds support for using the Cohere Command family of models with the Discourse AI plugin. It handles configuring API keys, making requests to the Cohere API, and translating between Discourse's generic prompt format and Cohere's specific format. Thorough test coverage was added for the new functionality.
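To make the dialect translation concrete, here is a rough sketch of mapping a generic prompt to a Cohere chat payload; the key names follow my reading of Cohere's chat API and should be treated as assumptions, not the plugin's actual code:

```ruby
require "json"

# Illustrative translation from a generic role/content message list to a
# Cohere-style payload: prior turns go to chat_history, the last user
# message becomes `message`, and the system prompt becomes the preamble.
def to_command_payload(messages, model: "command-r-plus")
  *history, last = messages
  {
    model: model,
    preamble: history.find { |m| m[:role] == :system }&.fetch(:content, nil),
    chat_history:
      history
        .reject { |m| m[:role] == :system }
        .map { |m| { role: m[:role] == :user ? "USER" : "CHATBOT", message: m[:content] } },
    message: last[:content],
  }
end

puts JSON.pretty_generate(
  to_command_payload(
    [
      { role: :system, content: "You are a helpful Discourse assistant." },
      { role: :user, content: "What is a trust level?" },
    ],
  ),
)
```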
It is close in performance to GPT-4 at a fraction of the cost, so it is
nice to add it to the mix.
Also improves a test case to simulate streaming; I am hunting for
the "calls" word that is jumping into function calls and can't quite
find it.
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
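A toy sketch of the retrieval step (embedding lookup, cosine ranking, optional re-rank); all names and the two-dimensional embeddings are purely illustrative:

```ruby
# Illustrative only: rank stored fragments by cosine similarity to the
# query embedding, then let an optional re-ranker reorder the survivors.
def cosine(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

def top_fragments(query_embedding, fragments, limit: 10, reranker: nil)
  ranked = fragments.max_by(limit) { |f| cosine(query_embedding, f[:embedding]) }
  reranker ? reranker.call(ranked) : ranked
end

fragments = [
  { text: "Trains in the US ...", embedding: [0.9, 0.1] },
  { text: "Cars in Europe ...", embedding: [0.2, 0.8] },
]
puts top_fragments([1.0, 0.0], fragments, limit: 1).first[:text]
```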
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a later commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for RAG fragments
* Update annotations
This pull request makes several improvements and additions to the GitHub-related tools and personas in the `discourse-ai` repository:
1. It adds the `WebBrowser` tool to the `Researcher` persona, allowing the AI to visit web pages, retrieve HTML content, extract the main content, and convert it to plain text.
2. It updates the `GithubFileContent`, `GithubPullRequestDiff`, and `GithubSearchCode` tools to handle HTTP responses more robustly (introducing size limits).
3. It refactors the `send_http_request` method in the `Tool` class to follow redirects when specified and to read the response body in chunks to avoid memory issues with large responses (currently only used by `WebBrowser`; see the sketch after this list).
4. It updates the system prompt for the `Researcher` persona to provide more detailed guidance on when to use Google search vs web browsing, and how to optimize tool usage and reduce redundant requests.
5. It adds a new `web_browser_spec.rb` file with tests for the `WebBrowser` tool, covering various scenarios like handling different HTML structures and following redirects.
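Here is a simplified sketch of the chunked, size-capped read with redirect following mentioned in item 3; the real `send_http_request` differs in detail, so treat this as an assumption-laden illustration:

```ruby
require "net/http"
require "uri"

# Illustration only: stream the body in chunks, bail out once a byte cap
# is reached, and follow up to max_redirects redirects.
def fetch_capped(url, max_bytes: 1_000_000, max_redirects: 5)
  raise "too many redirects" if max_redirects.zero?
  uri = URI.parse(url)
  body = +""
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(Net::HTTP::Get.new(uri)) do |response|
      if response.is_a?(Net::HTTPRedirection)
        return fetch_capped(response["location"], max_bytes: max_bytes,
                            max_redirects: max_redirects - 1)
      end
      response.read_body do |chunk|
        body << chunk
        break if body.bytesize >= max_bytes # stop early on oversized pages
      end
    end
  end
  body
end

puts fetch_capped("https://example.com", max_bytes: 10_000).bytesize
```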
This commit adds the ability to enable vision for AI personas, allowing them to understand images that are posted in the conversation.
For personas with vision enabled, any images the user has posted will be resized to be within the configured max_pixels limit, base64 encoded and included in the prompt sent to the AI provider.
The persona editor allows enabling/disabling vision and has a dropdown to select the max supported image size (low, medium, high). Vision is disabled by default.
This initial vision support has been tested and implemented with Anthropic's claude-3 models which accept images in a special format as part of the prompt.
Other integrations will need to be updated to support images.
Several specs were added to test the new functionality at the persona, prompt building and API layers.
- Gemini is omitted, pending API support for Gemini 1.5. The current Gemini bot is not performing well; adding images is unlikely to make it perform any better.
- OpenAI is omitted; vision support on GPT-4 is limited in that the API has no tool support when images are enabled, so we would need to fall back to a different prompting technique, something that would add lots of complexity
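For illustration, an image attached to a Claude 3 prompt ends up roughly in the shape below (resizing to max_pixels is elided, the file path is hypothetical, and the message shape is from memory of Anthropic's docs, so verify it):

```ruby
require "base64"

# Sketch: images go into the content array as base64 blocks alongside text.
def image_block(path, media_type: "image/png")
  {
    type: "image",
    source: {
      type: "base64",
      media_type: media_type,
      data: Base64.strict_encode64(File.binread(path)),
    },
  }
end

message = {
  role: "user",
  content: [image_block("screenshot.png"), { type: "text", text: "Describe this image" }],
}
```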
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
* FEATURE: allow suppression of notifications from report generation
Previously we needed to do this by hand via the prompt; unfortunately that
uses up too many tokens and is very hard to discover.
New option means that we can trivially disable notifications without
needing any prompt engineering.
* URI.parse is safer, use it
This allows users to share a static page of an AI conversation with
the rest of the world.
By default this feature is disabled; it is enabled by turning on
ai_bot_allow_public_sharing via site settings.
Precautions are taken when sharing
1. We make a carbonite copy
2. We minimize work when generating the page
3. We limit to 100 interactions
4. Many security checks - including disallowing if there is a mix
of users in the PM.
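A sketch of the mixed-participants check (point 4); the method name and data shapes are invented for illustration:

```ruby
# Illustration only: a PM is shareable when the only human participant is
# the sharer; any other human in the PM blocks public sharing.
def shareable?(participant_ids:, bot_ids:, sharer_id:, setting_enabled:)
  return false unless setting_enabled
  humans = participant_ids - bot_ids
  humans == [sharer_id]
end

puts shareable?(participant_ids: [1, -101], bot_ids: [-101],
                sharer_id: 1, setting_enabled: true) # => true
puts shareable?(participant_ids: [1, 2, -101], bot_ids: [-101],
                sharer_id: 1, setting_enabled: true) # => false
```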
* Bonus commit: large PRs like this one did not work with the GitHub tool;
large objects would destroy the context
Co-authored-by: Martin Brennan <martin@discourse.org>
This PR adds AI semantic search to the search popup available on every page.
It depends on several new and optional settings, like per post embeddings and a reranker model, so this is an experimental endeavour.
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
Introduces a new AI Bot persona called 'GitHub Helper' which is specialized in assisting with GitHub-related tasks and questions. It includes the following key changes:
- Implements the GitHub Helper persona class with its system prompt and available tools
- Adds three new AI Bot tools for GitHub interactions:
- github_file_content: Retrieves content of files from a GitHub repository
- github_pull_request_diff: Retrieves the diff for a GitHub pull request
- github_search_code: Searches for code in a GitHub repository
- Updates the AI Bot dialects to support the new GitHub tools
- Implements multiple function calls for the standard tool dialect
This provides new support for the messages API from Claude.
It is required for access to the latest models.
Also corrects the implementation of function calls.
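For reference, a messages API payload has roughly this shape (field names from memory of Anthropic's documentation; verify against the current docs before relying on it):

```ruby
require "json"

payload = {
  model: "claude-3-opus-20240229",
  max_tokens: 1024,
  system: "You are a helpful Discourse assistant.",
  messages: [{ role: "user", content: "Summarize this topic for me." }],
}
puts JSON.pretty_generate(payload)
```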
* Fix message interleaving
* fix broken spec
* add new models to automation
This PR adds a new feature where you can generate captions for images in the composer using AI.
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
This persona searches Discourse Meta for help with Discourse and
points users at relevant posts.
It is somewhat similar to using "Forum Helper" on meta, with the
notable difference that we cannot lean on semantic search, so using
some prompt engineering we try to keep it simple.
1. Personas are now optionally mentionable, meaning that you can mention them either from public topics or PMs
- Mentioning from PMs helps "switch" persona mid-conversation, meaning if you want to look up site settings you can invoke the site settings bot, or if you want to generate an image you can invoke DALL-E
- Mentioning outside of PMs allows you to inject a bot reply in a topic trivially
- We also add support for max_context_posts; this allows you to limit the amount of context you feed in, which can help control costs
2. Add support for a "random picker" tool that can be used to pick random numbers
3. Clean up routing ai_personas -> ai-personas
4. Add Max Context Posts so users can control how much history a persona can consume (this is important for mentionable personas)
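A minimal sketch of the mention gate and the max_context_posts cap; the names here are assumptions, not the plugin's API:

```ruby
PersonaSettings = Struct.new(:name, :mentionable, :max_context_posts, keyword_init: true)

# Illustration: reply only when a mentionable persona is @-mentioned, and
# cap how much history it is allowed to consume.
def should_reply?(raw, persona)
  persona.mentionable && raw.include?("@#{persona.name}")
end

def reply_context(posts, persona)
  persona.max_context_posts ? posts.last(persona.max_context_posts) : posts
end

persona = PersonaSettings.new(name: "sql_helper", mentionable: true, max_context_posts: 5)
posts = (1..20).map { |i| "post #{i}" }
if should_reply?("hey @sql_helper, write me a query", persona)
  puts reply_context(posts, persona).first # => "post 16"
end
```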
Co-authored-by: Martin Brennan <martin@discourse.org>
* FEATURE: allow personas to supply top_p and temperature params
Code assistants are generally more focused at a lower temperature.
This amends it so SQL Helper runs at a temperature of 0.2 vs the more
common default across LLMs of 1.0.
The reduced temperature leads to more focused, concise and predictable
answers for the SQL Helper; see the sketch below.
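As a sketch of how a persona-level override might combine with defaults (the names are invented for illustration):

```ruby
# Illustration only: persona-supplied sampling params override defaults,
# and nil entries fall through to the default.
DEFAULT_PARAMS = { temperature: 1.0, top_p: 1.0 }.freeze

def completion_params(overrides)
  DEFAULT_PARAMS.merge(overrides.compact)
end

# SQL Helper pins a low temperature for focused, predictable answers.
puts completion_params({ temperature: 0.2, top_p: nil })
# => {:temperature=>0.2, :top_p=>1.0}
```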
* fix tests
* This is not perfect, but far better than what we do today
Instead of fishing for
1. Draft sequence
2. Draft body
We skip (2); this means the composer "only" needs one HTTP request to
open. We also want to eliminate (1), but it is a bit of a trickier
core change; we may figure out how to pull it off later (defer it to the first draft save).
Value of bot drafts < value of opening bot conversations really fast
- Allow users to supply top_p and temperature values, which means people can fine tune randomness
- Fix bad localization string
- Fix bad remapping of max tokens in Gemini
- Add support for top_p as a general param to llms
- Amend system prompt so the persona stops treating the user as an adversary