2023-08-30 16:15:03 +10:00
|
|
|
#frozen_string_literal: true
|
|
|
|
|
|
|
|
module DiscourseAi
|
|
|
|
module AiBot
|
|
|
|
module Personas
|
|
|
|
class Persona
|
2024-01-04 10:44:07 -03:00
|
|
|
class << self
|
2024-04-12 23:32:46 +10:00
|
|
|
def rag_conversation_chunks
|
|
|
|
10
|
|
|
|
end
|
|
|
|
|
FEATURE: Add vision support to AI personas (Claude 3) (#546)
This commit adds the ability to enable vision for AI personas, allowing them to understand images that are posted in the conversation.
For personas with vision enabled, any images the user has posted will be resized to be within the configured max_pixels limit, base64 encoded and included in the prompt sent to the AI provider.
The persona editor allows enabling/disabling vision and has a dropdown to select the max supported image size (low, medium, high). Vision is disabled by default.
This initial vision support has been tested and implemented with Anthropic's claude-3 models which accept images in a special format as part of the prompt.
Other integrations will need to be updated to support images.
Several specs were added to test the new functionality at the persona, prompt building and API layers.
- Gemini is omitted, pending API support for Gemini 1.5. Current Gemini bot is not performing well, adding images is unlikely to make it perform any better.
- Open AI is omitted, vision support on GPT-4 it limited in that the API has no tool support when images are enabled so we would need to full back to a different prompting technique, something that would add lots of complexity
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
2024-03-27 14:30:11 +11:00
|
|
|
def vision_enabled
|
|
|
|
false
|
|
|
|
end
|
|
|
|
|
|
|
|
def vision_max_pixels
|
|
|
|
1_048_576
|
|
|
|
end
|
|
|
|
|
FEATURE: PDF support for rag pipeline (#1118)
This PR introduces several enhancements and refactorings to the AI Persona and RAG (Retrieval-Augmented Generation) functionalities within the discourse-ai plugin. Here's a breakdown of the changes:
**1. LLM Model Association for RAG and Personas:**
- **New Database Columns:** Adds `rag_llm_model_id` to both `ai_personas` and `ai_tools` tables. This allows specifying a dedicated LLM for RAG indexing, separate from the persona's primary LLM. Adds `default_llm_id` and `question_consolidator_llm_id` to `ai_personas`.
- **Migration:** Includes a migration (`20250210032345_migrate_persona_to_llm_model_id.rb`) to populate the new `default_llm_id` and `question_consolidator_llm_id` columns in `ai_personas` based on the existing `default_llm` and `question_consolidator_llm` string columns, and a post migration to remove the latter.
- **Model Changes:** The `AiPersona` and `AiTool` models now `belong_to` an `LlmModel` via `rag_llm_model_id`. The `LlmModel.proxy` method now accepts an `LlmModel` instance instead of just an identifier. `AiPersona` now has `default_llm_id` and `question_consolidator_llm_id` attributes.
- **UI Updates:** The AI Persona and AI Tool editors in the admin panel now allow selecting an LLM for RAG indexing (if PDF/image support is enabled). The RAG options component displays an LLM selector.
- **Serialization:** The serializers (`AiCustomToolSerializer`, `AiCustomToolListSerializer`, `LocalizedAiPersonaSerializer`) have been updated to include the new `rag_llm_model_id`, `default_llm_id` and `question_consolidator_llm_id` attributes.
**2. PDF and Image Support for RAG:**
- **Site Setting:** Introduces a new hidden site setting, `ai_rag_pdf_images_enabled`, to control whether PDF and image files can be indexed for RAG. This defaults to `false`.
- **File Upload Validation:** The `RagDocumentFragmentsController` now checks the `ai_rag_pdf_images_enabled` setting and allows PDF, PNG, JPG, and JPEG files if enabled. Error handling is included for cases where PDF/image indexing is attempted with the setting disabled.
- **PDF Processing:** Adds a new utility class, `DiscourseAi::Utils::PdfToImages`, which uses ImageMagick (`magick`) to convert PDF pages into individual PNG images. A maximum PDF size and conversion timeout are enforced.
- **Image Processing:** A new utility class, `DiscourseAi::Utils::ImageToText`, is included to handle OCR for the images and PDFs.
- **RAG Digestion Job:** The `DigestRagUpload` job now handles PDF and image uploads. It uses `PdfToImages` and `ImageToText` to extract text and create document fragments.
- **UI Updates:** The RAG uploader component now accepts PDF and image file types if `ai_rag_pdf_images_enabled` is true. The UI text is adjusted to indicate supported file types.
**3. Refactoring and Improvements:**
- **LLM Enumeration:** The `DiscourseAi::Configuration::LlmEnumerator` now provides a `values_for_serialization` method, which returns a simplified array of LLM data (id, name, vision_enabled) suitable for use in serializers. This avoids exposing unnecessary details to the frontend.
- **AI Helper:** The `AiHelper::Assistant` now takes optional `helper_llm` and `image_caption_llm` parameters in its constructor, allowing for greater flexibility.
- **Bot and Persona Updates:** Several updates were made across the codebase, changing the string based association to a LLM to the new model based.
- **Audit Logs:** The `DiscourseAi::Completions::Endpoints::Base` now formats raw request payloads as pretty JSON for easier auditing.
- **Eval Script:** An evaluation script is included.
**4. Testing:**
- The PR introduces a new eval system for LLMs, this allows us to test how functionality works across various LLM providers. This lives in `/evals`
2025-02-14 12:15:07 +11:00
|
|
|
def question_consolidator_llm_id
|
FEATURE: Add Question Consolidator for robust Upload support in Personas (#596)
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Previous to this change we used the last 10 interactions, this is not ideal cause the RAG would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example "trains" and "US" becomes very low signal given there are pages and pages talking about cars and europe. This mean retrieval is sub optimal.
Instead, we pass the history to the "question consolidator", it would simply consolidate the question to "How many trains are there in the United States", which would make it fare easier for the vector db to find relevant content.
The llm used for question consolidator can often be less powerful than the model you are talking to, we recommend using lighter weight and fast models cause the task is very simple. This is configurable from the persona ui.
This PR also removes support for {uploads} placeholder, this is too complicated to get right and we want freedom to shift RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
2024-04-30 13:49:21 +10:00
|
|
|
nil
|
|
|
|
end
|
|
|
|
|
2024-10-16 07:20:31 +11:00
|
|
|
def force_default_llm
|
|
|
|
false
|
|
|
|
end
|
|
|
|
|
|
|
|
def allow_chat_channel_mentions
|
|
|
|
false
|
|
|
|
end
|
|
|
|
|
|
|
|
def allow_chat_direct_messages
|
2024-05-06 09:49:02 +10:00
|
|
|
false
|
|
|
|
end
|
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def system_personas
|
|
|
|
@system_personas ||= {
|
|
|
|
Personas::General => -1,
|
|
|
|
Personas::SqlHelper => -2,
|
|
|
|
Personas::Artist => -3,
|
|
|
|
Personas::SettingsExplorer => -4,
|
|
|
|
Personas::Researcher => -5,
|
|
|
|
Personas::Creative => -6,
|
|
|
|
Personas::DallE3 => -7,
|
2024-02-19 14:52:12 +11:00
|
|
|
Personas::DiscourseHelper => -8,
|
2024-03-08 06:37:23 +11:00
|
|
|
Personas::GithubHelper => -9,
|
2024-11-19 09:22:39 +11:00
|
|
|
Personas::WebArtifactCreator => -10,
|
2024-01-04 10:44:07 -03:00
|
|
|
}
|
|
|
|
end
|
2023-08-30 16:15:03 +10:00
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def system_personas_by_id
|
|
|
|
@system_personas_by_id ||= system_personas.invert
|
|
|
|
end
|
2023-09-14 16:46:56 +10:00
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def all(user:)
|
|
|
|
# listing tools has to be dynamic cause site settings may change
|
|
|
|
AiPersona.all_personas.filter do |persona|
|
|
|
|
next false if !user.in_any_groups?(persona.allowed_group_ids)
|
|
|
|
|
|
|
|
if persona.system
|
|
|
|
instance = persona.new
|
|
|
|
(
|
|
|
|
instance.required_tools == [] ||
|
|
|
|
(instance.required_tools - all_available_tools).empty?
|
|
|
|
)
|
|
|
|
else
|
|
|
|
true
|
|
|
|
end
|
|
|
|
end
|
|
|
|
end
|
2023-08-30 16:15:03 +10:00
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def find_by(id: nil, name: nil, user:)
|
|
|
|
all(user: user).find { |persona| persona.id == id || persona.name == name }
|
|
|
|
end
|
2023-12-08 08:42:56 +11:00
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def name
|
|
|
|
I18n.t("discourse_ai.ai_bot.personas.#{to_s.demodulize.underscore}.name")
|
|
|
|
end
|
2023-09-14 16:46:56 +10:00
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def description
|
|
|
|
I18n.t("discourse_ai.ai_bot.personas.#{to_s.demodulize.underscore}.description")
|
2023-08-30 16:15:03 +10:00
|
|
|
end
|
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def all_available_tools
|
|
|
|
tools = [
|
|
|
|
Tools::ListCategories,
|
|
|
|
Tools::Time,
|
|
|
|
Tools::Search,
|
|
|
|
Tools::Read,
|
|
|
|
Tools::DbSchema,
|
|
|
|
Tools::SearchSettings,
|
|
|
|
Tools::SettingContext,
|
2024-02-15 16:37:59 +11:00
|
|
|
Tools::RandomPicker,
|
2024-02-19 14:52:12 +11:00
|
|
|
Tools::DiscourseMetaSearch,
|
2024-03-08 06:37:23 +11:00
|
|
|
Tools::GithubFileContent,
|
|
|
|
Tools::GithubPullRequestDiff,
|
2024-05-30 06:33:50 +10:00
|
|
|
Tools::GithubSearchFiles,
|
2024-03-28 16:01:58 +11:00
|
|
|
Tools::WebBrowser,
|
2024-05-21 07:57:01 +10:00
|
|
|
Tools::JavascriptEvaluator,
|
2024-01-04 10:44:07 -03:00
|
|
|
]
|
|
|
|
|
2024-12-03 07:23:31 +11:00
|
|
|
if SiteSetting.ai_artifact_security.in?(%w[lax strict])
|
|
|
|
tools << Tools::CreateArtifact
|
|
|
|
tools << Tools::UpdateArtifact
|
DEV: artifact system update (#1096)
### Why
This pull request fundamentally restructures how AI bots create and update web artifacts to address critical limitations in the previous approach:
1. **Improved Artifact Context for LLMs**: Previously, artifact creation and update tools included the *entire* artifact source code directly in the tool arguments. This overloaded the Language Model (LLM) with raw code, making it difficult for the LLM to maintain a clear understanding of the artifact's current state when applying changes. The LLM would struggle to differentiate between the base artifact and the requested modifications, leading to confusion and less effective updates.
2. **Reduced Token Usage and History Bloat**: Including the full artifact source code in every tool interaction was extremely token-inefficient. As conversations progressed, this redundant code in the history consumed a significant number of tokens unnecessarily. This not only increased costs but also diluted the context for the LLM with less relevant historical information.
3. **Enabling Updates for Large Artifacts**: The lack of a practical diff or targeted update mechanism made it nearly impossible to efficiently update larger web artifacts. Sending the entire source code for every minor change was both computationally expensive and prone to errors, effectively blocking the use of AI bots for meaningful modifications of complex artifacts.
**This pull request addresses these core issues by**:
* Introducing methods for the AI bot to explicitly *read* and understand the current state of an artifact.
* Implementing efficient update strategies that send *targeted* changes rather than the entire artifact source code.
* Providing options to control the level of artifact context included in LLM prompts, optimizing token usage.
### What
The main changes implemented in this PR to resolve the above issues are:
1. **`Read Artifact` Tool for Contextual Awareness**:
- A new `read_artifact` tool is introduced, enabling AI bots to fetch and process the current content of a web artifact from a given URL (local or external).
- This provides the LLM with a clear and up-to-date representation of the artifact's HTML, CSS, and JavaScript, improving its understanding of the base to be modified.
- By cloning local artifacts, it allows the bot to work with a fresh copy, further enhancing context and control.
2. **Refactored `Update Artifact` Tool with Efficient Strategies**:
- The `update_artifact` tool is redesigned to employ more efficient update strategies, minimizing token usage and improving update precision:
- **`diff` strategy**: Utilizes a search-and-replace diff algorithm to apply only the necessary, targeted changes to the artifact's code. This significantly reduces the amount of code sent to the LLM and focuses its attention on the specific modifications.
- **`full` strategy**: Provides the option to replace the entire content sections (HTML, CSS, JavaScript) when a complete rewrite is required.
- Tool options enhance the control over the update process:
- `editor_llm`: Allows selection of a specific LLM for artifact updates, potentially optimizing for code editing tasks.
- `update_algorithm`: Enables choosing between `diff` and `full` update strategies based on the nature of the required changes.
- `do_not_echo_artifact`: Defaults to true, and by *not* echoing the artifact in prompts, it further reduces token consumption in scenarios where the LLM might not need the full artifact context for every update step (though effectiveness might be slightly reduced in certain update scenarios).
3. **System and General Persona Tool Option Visibility and Customization**:
- Tool options, including those for system personas, are made visible and editable in the admin UI. This allows administrators to fine-tune the behavior of all personas and their tools, including setting specific LLMs or update algorithms. This was previously limited or hidden for system personas.
4. **Centralized and Improved Content Security Policy (CSP) Management**:
- The CSP for AI artifacts is consolidated and made more maintainable through the `ALLOWED_CDN_SOURCES` constant. This improves code organization and future updates to the allowed CDN list, while maintaining the existing security posture.
5. **Codebase Improvements**:
- Refactoring of diff utilities, introduction of strategy classes, enhanced error handling, new locales, and comprehensive testing all contribute to a more robust, efficient, and maintainable artifact management system.
By addressing the issues of LLM context confusion, token inefficiency, and the limitations of updating large artifacts, this pull request significantly improves the practicality and effectiveness of AI bots in managing web artifacts within Discourse.
2025-02-04 16:27:27 +11:00
|
|
|
tools << Tools::ReadArtifact
|
2024-12-03 07:23:31 +11:00
|
|
|
end
|
|
|
|
|
2024-03-08 06:37:23 +11:00
|
|
|
tools << Tools::GithubSearchCode if SiteSetting.ai_bot_github_access_token.present?
|
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
tools << Tools::ListTags if SiteSetting.tagging_enabled
|
|
|
|
tools << Tools::Image if SiteSetting.ai_stability_api_key.present?
|
|
|
|
|
|
|
|
tools << Tools::DallE if SiteSetting.ai_openai_api_key.present?
|
|
|
|
if SiteSetting.ai_google_custom_search_api_key.present? &&
|
|
|
|
SiteSetting.ai_google_custom_search_cx.present?
|
|
|
|
tools << Tools::Google
|
FEATURE: UI to update ai personas on admin page (#290)
Introduces a UI to manage customizable personas (admin only feature)
Part of the change was some extensive internal refactoring:
- AIBot now has a persona set in the constructor, once set it never changes
- Command now takes in bot as a constructor param, so it has the correct persona and is not generating AIBot objects on the fly
- Added a .prettierignore file, due to the way ALE is configured in nvim it is a pre-req for prettier to work
- Adds a bunch of validations on the AIPersona model, system personas (artist/creative etc...) are all seeded. We now ensure
- name uniqueness, and only allow certain properties to be touched for system personas.
- (JS note) the client side design takes advantage of nested routes, the parent route for personas gets all the personas via this.store.findAll("ai-persona") then child routes simply reach into this model to find a particular persona.
- (JS note) data is sideloaded into the ai-persona model the meta property supplied from the controller, resultSetMeta
- This removes ai_bot_enabled_personas and ai_bot_enabled_chat_commands, both should be controlled from the UI on a per persona basis
- Fixes a long standing bug in token accounting ... we were doing to_json.length instead of to_json.to_s.length
- Amended it so {commands} are always inserted at the end unconditionally, no need to add it to the template of the system message as it just confuses things
- Adds a concept of required_commands to stock personas, these are commands that must be configured for this stock persona to show up.
- Refactored tests so we stop requiring inference_stubs, it was very confusing to need it, added to plugin.rb for now which at least is clearer
- Migrates the persona selector to gjs
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
2023-11-21 16:56:43 +11:00
|
|
|
end
|
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
tools
|
2023-08-30 16:15:03 +10:00
|
|
|
end
|
|
|
|
end
|
|
|
|
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
def id
|
|
|
|
@ai_persona&.id || self.class.system_personas[self.class]
|
|
|
|
end
|
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def tools
|
|
|
|
[]
|
2023-08-30 16:15:03 +10:00
|
|
|
end
|
|
|
|
|
2024-10-05 08:46:57 +09:00
|
|
|
def force_tool_use
|
|
|
|
[]
|
|
|
|
end
|
|
|
|
|
2024-10-11 07:23:42 +11:00
|
|
|
def forced_tool_count
|
|
|
|
-1
|
|
|
|
end
|
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def required_tools
|
|
|
|
[]
|
|
|
|
end
|
2023-08-30 16:15:03 +10:00
|
|
|
|
2024-02-03 07:09:34 +11:00
|
|
|
def temperature
|
|
|
|
nil
|
|
|
|
end
|
|
|
|
|
|
|
|
def top_p
|
|
|
|
nil
|
|
|
|
end
|
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def options
|
|
|
|
{}
|
|
|
|
end
|
2023-08-30 16:15:03 +10:00
|
|
|
|
2024-01-04 10:44:07 -03:00
|
|
|
def available_tools
|
2024-06-27 17:27:40 +10:00
|
|
|
self
|
|
|
|
.class
|
|
|
|
.all_available_tools
|
|
|
|
.filter { |tool| tools.include?(tool) }
|
|
|
|
.concat(tools.filter(&:custom?))
|
2023-08-30 16:15:03 +10:00
|
|
|
end
|
|
|
|
|
FEATURE: Add Question Consolidator for robust Upload support in Personas (#596)
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Previous to this change we used the last 10 interactions, this is not ideal cause the RAG would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example "trains" and "US" becomes very low signal given there are pages and pages talking about cars and europe. This mean retrieval is sub optimal.
Instead, we pass the history to the "question consolidator", it would simply consolidate the question to "How many trains are there in the United States", which would make it fare easier for the vector db to find relevant content.
The llm used for question consolidator can often be less powerful than the model you are talking to, we recommend using lighter weight and fast models cause the task is very simple. This is configurable from the persona ui.
This PR also removes support for {uploads} placeholder, this is too complicated to get right and we want freedom to shift RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
2024-04-30 13:49:21 +10:00
|
|
|
def craft_prompt(context, llm: nil)
|
2024-01-04 10:44:07 -03:00
|
|
|
system_insts =
|
|
|
|
system_prompt.gsub(/\{(\w+)\}/) do |match|
|
|
|
|
found = context[match[1..-2].to_sym]
|
|
|
|
found.nil? ? match : found.to_s
|
|
|
|
end
|
2023-08-30 16:15:03 +10:00
|
|
|
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
prompt_insts = <<~TEXT.strip
|
|
|
|
#{system_insts}
|
|
|
|
#{available_tools.map(&:custom_system_message).compact_blank.join("\n")}
|
|
|
|
TEXT
|
|
|
|
|
FEATURE: Add Question Consolidator for robust Upload support in Personas (#596)
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Previous to this change we used the last 10 interactions, this is not ideal cause the RAG would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example "trains" and "US" becomes very low signal given there are pages and pages talking about cars and europe. This mean retrieval is sub optimal.
Instead, we pass the history to the "question consolidator", it would simply consolidate the question to "How many trains are there in the United States", which would make it fare easier for the vector db to find relevant content.
The llm used for question consolidator can often be less powerful than the model you are talking to, we recommend using lighter weight and fast models cause the task is very simple. This is configurable from the persona ui.
This PR also removes support for {uploads} placeholder, this is too complicated to get right and we want freedom to shift RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
2024-04-30 13:49:21 +10:00
|
|
|
question_consolidator_llm = llm
|
FEATURE: PDF support for rag pipeline (#1118)
This PR introduces several enhancements and refactorings to the AI Persona and RAG (Retrieval-Augmented Generation) functionalities within the discourse-ai plugin. Here's a breakdown of the changes:
**1. LLM Model Association for RAG and Personas:**
- **New Database Columns:** Adds `rag_llm_model_id` to both `ai_personas` and `ai_tools` tables. This allows specifying a dedicated LLM for RAG indexing, separate from the persona's primary LLM. Adds `default_llm_id` and `question_consolidator_llm_id` to `ai_personas`.
- **Migration:** Includes a migration (`20250210032345_migrate_persona_to_llm_model_id.rb`) to populate the new `default_llm_id` and `question_consolidator_llm_id` columns in `ai_personas` based on the existing `default_llm` and `question_consolidator_llm` string columns, and a post migration to remove the latter.
- **Model Changes:** The `AiPersona` and `AiTool` models now `belong_to` an `LlmModel` via `rag_llm_model_id`. The `LlmModel.proxy` method now accepts an `LlmModel` instance instead of just an identifier. `AiPersona` now has `default_llm_id` and `question_consolidator_llm_id` attributes.
- **UI Updates:** The AI Persona and AI Tool editors in the admin panel now allow selecting an LLM for RAG indexing (if PDF/image support is enabled). The RAG options component displays an LLM selector.
- **Serialization:** The serializers (`AiCustomToolSerializer`, `AiCustomToolListSerializer`, `LocalizedAiPersonaSerializer`) have been updated to include the new `rag_llm_model_id`, `default_llm_id` and `question_consolidator_llm_id` attributes.
**2. PDF and Image Support for RAG:**
- **Site Setting:** Introduces a new hidden site setting, `ai_rag_pdf_images_enabled`, to control whether PDF and image files can be indexed for RAG. This defaults to `false`.
- **File Upload Validation:** The `RagDocumentFragmentsController` now checks the `ai_rag_pdf_images_enabled` setting and allows PDF, PNG, JPG, and JPEG files if enabled. Error handling is included for cases where PDF/image indexing is attempted with the setting disabled.
- **PDF Processing:** Adds a new utility class, `DiscourseAi::Utils::PdfToImages`, which uses ImageMagick (`magick`) to convert PDF pages into individual PNG images. A maximum PDF size and conversion timeout are enforced.
- **Image Processing:** A new utility class, `DiscourseAi::Utils::ImageToText`, is included to handle OCR for the images and PDFs.
- **RAG Digestion Job:** The `DigestRagUpload` job now handles PDF and image uploads. It uses `PdfToImages` and `ImageToText` to extract text and create document fragments.
- **UI Updates:** The RAG uploader component now accepts PDF and image file types if `ai_rag_pdf_images_enabled` is true. The UI text is adjusted to indicate supported file types.
**3. Refactoring and Improvements:**
- **LLM Enumeration:** The `DiscourseAi::Configuration::LlmEnumerator` now provides a `values_for_serialization` method, which returns a simplified array of LLM data (id, name, vision_enabled) suitable for use in serializers. This avoids exposing unnecessary details to the frontend.
- **AI Helper:** The `AiHelper::Assistant` now takes optional `helper_llm` and `image_caption_llm` parameters in its constructor, allowing for greater flexibility.
- **Bot and Persona Updates:** Several updates were made across the codebase, changing the string based association to a LLM to the new model based.
- **Audit Logs:** The `DiscourseAi::Completions::Endpoints::Base` now formats raw request payloads as pretty JSON for easier auditing.
- **Eval Script:** An evaluation script is included.
**4. Testing:**
- The PR introduces a new eval system for LLMs, this allows us to test how functionality works across various LLM providers. This lives in `/evals`
2025-02-14 12:15:07 +11:00
|
|
|
if self.class.question_consolidator_llm_id.present?
|
|
|
|
question_consolidator_llm ||=
|
|
|
|
DiscourseAi::Completions::Llm.proxy(
|
|
|
|
LlmModel.find_by(id: self.class.question_consolidator_llm_id),
|
|
|
|
)
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
end
|
|
|
|
|
2024-11-05 07:43:26 +11:00
|
|
|
if context[:custom_instructions].present?
|
|
|
|
prompt_insts << "\n"
|
|
|
|
prompt_insts << context[:custom_instructions]
|
|
|
|
end
|
|
|
|
|
FEATURE: Add Question Consolidator for robust Upload support in Personas (#596)
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Previous to this change we used the last 10 interactions, this is not ideal cause the RAG would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example "trains" and "US" becomes very low signal given there are pages and pages talking about cars and europe. This mean retrieval is sub optimal.
Instead, we pass the history to the "question consolidator", it would simply consolidate the question to "How many trains are there in the United States", which would make it fare easier for the vector db to find relevant content.
The llm used for question consolidator can often be less powerful than the model you are talking to, we recommend using lighter weight and fast models cause the task is very simple. This is configurable from the persona ui.
This PR also removes support for {uploads} placeholder, this is too complicated to get right and we want freedom to shift RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
2024-04-30 13:49:21 +10:00
|
|
|
fragments_guidance =
|
|
|
|
rag_fragments_prompt(
|
|
|
|
context[:conversation_context].to_a,
|
|
|
|
llm: question_consolidator_llm,
|
|
|
|
user: context[:user],
|
|
|
|
)&.strip
|
|
|
|
|
|
|
|
prompt_insts << fragments_guidance if fragments_guidance.present?
|
|
|
|
|
2024-01-12 14:36:44 -03:00
|
|
|
prompt =
|
|
|
|
DiscourseAi::Completions::Prompt.new(
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
prompt_insts,
|
2024-01-12 14:36:44 -03:00
|
|
|
messages: context[:conversation_context].to_a,
|
2024-03-02 07:53:21 +11:00
|
|
|
topic_id: context[:topic_id],
|
|
|
|
post_id: context[:post_id],
|
2024-01-12 14:36:44 -03:00
|
|
|
)
|
2023-08-30 16:15:03 +10:00
|
|
|
|
FEATURE: Add vision support to AI personas (Claude 3) (#546)
This commit adds the ability to enable vision for AI personas, allowing them to understand images that are posted in the conversation.
For personas with vision enabled, any images the user has posted will be resized to be within the configured max_pixels limit, base64 encoded and included in the prompt sent to the AI provider.
The persona editor allows enabling/disabling vision and has a dropdown to select the max supported image size (low, medium, high). Vision is disabled by default.
This initial vision support has been tested and implemented with Anthropic's claude-3 models which accept images in a special format as part of the prompt.
Other integrations will need to be updated to support images.
Several specs were added to test the new functionality at the persona, prompt building and API layers.
- Gemini is omitted, pending API support for Gemini 1.5. Current Gemini bot is not performing well, adding images is unlikely to make it perform any better.
- Open AI is omitted, vision support on GPT-4 it limited in that the API has no tool support when images are enabled so we would need to full back to a different prompting technique, something that would add lots of complexity
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
2024-03-27 14:30:11 +11:00
|
|
|
prompt.max_pixels = self.class.vision_max_pixels if self.class.vision_enabled
|
2024-01-12 14:36:44 -03:00
|
|
|
prompt.tools = available_tools.map(&:signature) if available_tools
|
DEV: artifact system update (#1096)
### Why
This pull request fundamentally restructures how AI bots create and update web artifacts to address critical limitations in the previous approach:
1. **Improved Artifact Context for LLMs**: Previously, artifact creation and update tools included the *entire* artifact source code directly in the tool arguments. This overloaded the Language Model (LLM) with raw code, making it difficult for the LLM to maintain a clear understanding of the artifact's current state when applying changes. The LLM would struggle to differentiate between the base artifact and the requested modifications, leading to confusion and less effective updates.
2. **Reduced Token Usage and History Bloat**: Including the full artifact source code in every tool interaction was extremely token-inefficient. As conversations progressed, this redundant code in the history consumed a significant number of tokens unnecessarily. This not only increased costs but also diluted the context for the LLM with less relevant historical information.
3. **Enabling Updates for Large Artifacts**: The lack of a practical diff or targeted update mechanism made it nearly impossible to efficiently update larger web artifacts. Sending the entire source code for every minor change was both computationally expensive and prone to errors, effectively blocking the use of AI bots for meaningful modifications of complex artifacts.
**This pull request addresses these core issues by**:
* Introducing methods for the AI bot to explicitly *read* and understand the current state of an artifact.
* Implementing efficient update strategies that send *targeted* changes rather than the entire artifact source code.
* Providing options to control the level of artifact context included in LLM prompts, optimizing token usage.
### What
The main changes implemented in this PR to resolve the above issues are:
1. **`Read Artifact` Tool for Contextual Awareness**:
- A new `read_artifact` tool is introduced, enabling AI bots to fetch and process the current content of a web artifact from a given URL (local or external).
- This provides the LLM with a clear and up-to-date representation of the artifact's HTML, CSS, and JavaScript, improving its understanding of the base to be modified.
- By cloning local artifacts, it allows the bot to work with a fresh copy, further enhancing context and control.
2. **Refactored `Update Artifact` Tool with Efficient Strategies**:
- The `update_artifact` tool is redesigned to employ more efficient update strategies, minimizing token usage and improving update precision:
- **`diff` strategy**: Utilizes a search-and-replace diff algorithm to apply only the necessary, targeted changes to the artifact's code. This significantly reduces the amount of code sent to the LLM and focuses its attention on the specific modifications.
- **`full` strategy**: Provides the option to replace the entire content sections (HTML, CSS, JavaScript) when a complete rewrite is required.
- Tool options enhance the control over the update process:
- `editor_llm`: Allows selection of a specific LLM for artifact updates, potentially optimizing for code editing tasks.
- `update_algorithm`: Enables choosing between `diff` and `full` update strategies based on the nature of the required changes.
- `do_not_echo_artifact`: Defaults to true, and by *not* echoing the artifact in prompts, it further reduces token consumption in scenarios where the LLM might not need the full artifact context for every update step (though effectiveness might be slightly reduced in certain update scenarios).
3. **System and General Persona Tool Option Visibility and Customization**:
- Tool options, including those for system personas, are made visible and editable in the admin UI. This allows administrators to fine-tune the behavior of all personas and their tools, including setting specific LLMs or update algorithms. This was previously limited or hidden for system personas.
4. **Centralized and Improved Content Security Policy (CSP) Management**:
- The CSP for AI artifacts is consolidated and made more maintainable through the `ALLOWED_CDN_SOURCES` constant. This improves code organization and future updates to the allowed CDN list, while maintaining the existing security posture.
5. **Codebase Improvements**:
- Refactoring of diff utilities, introduction of strategy classes, enhanced error handling, new locales, and comprehensive testing all contribute to a more robust, efficient, and maintainable artifact management system.
By addressing the issues of LLM context confusion, token inefficiency, and the limitations of updating large artifacts, this pull request significantly improves the practicality and effectiveness of AI bots in managing web artifacts within Discourse.
2025-02-04 16:27:27 +11:00
|
|
|
available_tools.each do |tool|
|
|
|
|
tool.inject_prompt(prompt: prompt, context: context, persona: self)
|
|
|
|
end
|
2024-01-12 14:36:44 -03:00
|
|
|
prompt
|
FEATURE: UI to update ai personas on admin page (#290)
Introduces a UI to manage customizable personas (admin only feature)
Part of the change was some extensive internal refactoring:
- AIBot now has a persona set in the constructor, once set it never changes
- Command now takes in bot as a constructor param, so it has the correct persona and is not generating AIBot objects on the fly
- Added a .prettierignore file, due to the way ALE is configured in nvim it is a pre-req for prettier to work
- Adds a bunch of validations on the AIPersona model, system personas (artist/creative etc...) are all seeded. We now ensure
- name uniqueness, and only allow certain properties to be touched for system personas.
- (JS note) the client side design takes advantage of nested routes, the parent route for personas gets all the personas via this.store.findAll("ai-persona") then child routes simply reach into this model to find a particular persona.
- (JS note) data is sideloaded into the ai-persona model the meta property supplied from the controller, resultSetMeta
- This removes ai_bot_enabled_personas and ai_bot_enabled_chat_commands, both should be controlled from the UI on a per persona basis
- Fixes a long standing bug in token accounting ... we were doing to_json.length instead of to_json.to_s.length
- Amended it so {commands} are always inserted at the end unconditionally, no need to add it to the template of the system message as it just confuses things
- Adds a concept of required_commands to stock personas, these are commands that must be configured for this stock persona to show up.
- Refactored tests so we stop requiring inference_stubs, it was very confusing to need it, added to plugin.rb for now which at least is clearer
- Migrates the persona selector to gjs
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
2023-11-21 16:56:43 +11:00
|
|
|
end
|
|
|
|
|
2024-11-19 09:22:39 +11:00
|
|
|
def find_tool(partial, bot_user:, llm:, context:, existing_tools: [])
|
2024-11-12 08:14:30 +11:00
|
|
|
return nil if !partial.is_a?(DiscourseAi::Completions::ToolCall)
|
2024-11-19 09:22:39 +11:00
|
|
|
tool_instance(
|
|
|
|
partial,
|
|
|
|
bot_user: bot_user,
|
|
|
|
llm: llm,
|
|
|
|
context: context,
|
|
|
|
existing_tools: existing_tools,
|
|
|
|
)
|
|
|
|
end
|
|
|
|
|
|
|
|
def allow_partial_tool_calls?
|
|
|
|
available_tools.any? { |tool| tool.allow_partial_tool_calls? }
|
2024-03-02 07:53:21 +11:00
|
|
|
end
|
|
|
|
|
|
|
|
protected
|
|
|
|
|
2024-11-19 09:22:39 +11:00
|
|
|
def tool_instance(tool_call, bot_user:, llm:, context:, existing_tools:)
|
2024-11-12 08:14:30 +11:00
|
|
|
function_id = tool_call.id
|
|
|
|
function_name = tool_call.name
|
2024-03-09 08:46:40 +11:00
|
|
|
return nil if function_name.nil?
|
2024-01-04 10:44:07 -03:00
|
|
|
|
|
|
|
tool_klass = available_tools.find { |c| c.signature.dig(:name) == function_name }
|
2024-03-09 08:46:40 +11:00
|
|
|
return nil if tool_klass.nil?
|
2024-01-04 10:44:07 -03:00
|
|
|
|
2024-01-05 14:39:32 +11:00
|
|
|
arguments = {}
|
|
|
|
tool_klass.signature[:parameters].to_a.each do |param|
|
|
|
|
name = param[:name]
|
2024-11-12 08:14:30 +11:00
|
|
|
value = tool_call.parameters[name.to_sym]
|
2024-01-05 14:39:32 +11:00
|
|
|
|
|
|
|
if param[:type] == "array" && value
|
|
|
|
value =
|
|
|
|
begin
|
|
|
|
JSON.parse(value)
|
|
|
|
rescue JSON::ParserError
|
2024-04-19 18:08:16 +10:00
|
|
|
[value.to_s]
|
2024-01-05 14:39:32 +11:00
|
|
|
end
|
2024-05-13 19:40:11 +10:00
|
|
|
elsif param[:type] == "string" && value
|
|
|
|
value = strip_quotes(value).to_s
|
|
|
|
elsif param[:type] == "integer" && value
|
|
|
|
value = strip_quotes(value).to_i
|
|
|
|
end
|
|
|
|
|
|
|
|
if param[:enum] && value && !param[:enum].include?(value)
|
|
|
|
# invalid enum value
|
|
|
|
value = nil
|
2024-01-05 14:39:32 +11:00
|
|
|
end
|
|
|
|
|
|
|
|
arguments[name.to_sym] = value if value
|
|
|
|
end
|
2024-01-04 10:44:07 -03:00
|
|
|
|
2024-11-19 09:22:39 +11:00
|
|
|
tool_instance =
|
|
|
|
existing_tools.find { |t| t.name == function_name && t.tool_call_id == function_id }
|
|
|
|
|
|
|
|
if tool_instance
|
|
|
|
tool_instance.parameters = arguments
|
|
|
|
tool_instance
|
|
|
|
else
|
|
|
|
tool_klass.new(
|
|
|
|
arguments,
|
|
|
|
tool_call_id: function_id || function_name,
|
|
|
|
persona_options: options[tool_klass].to_h,
|
|
|
|
bot_user: bot_user,
|
|
|
|
llm: llm,
|
|
|
|
context: context,
|
|
|
|
)
|
|
|
|
end
|
2023-08-30 16:15:03 +10:00
|
|
|
end
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
|
2024-05-13 19:40:11 +10:00
|
|
|
def strip_quotes(value)
|
|
|
|
if value.is_a?(String)
|
|
|
|
if value.start_with?('"') && value.end_with?('"')
|
|
|
|
value = value[1..-2]
|
|
|
|
elsif value.start_with?("'") && value.end_with?("'")
|
|
|
|
value = value[1..-2]
|
|
|
|
else
|
|
|
|
value
|
|
|
|
end
|
|
|
|
else
|
|
|
|
value
|
|
|
|
end
|
|
|
|
end
|
|
|
|
|
FEATURE: Add Question Consolidator for robust Upload support in Personas (#596)
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Previous to this change we used the last 10 interactions, this is not ideal cause the RAG would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example "trains" and "US" becomes very low signal given there are pages and pages talking about cars and europe. This mean retrieval is sub optimal.
Instead, we pass the history to the "question consolidator", it would simply consolidate the question to "How many trains are there in the United States", which would make it fare easier for the vector db to find relevant content.
The llm used for question consolidator can often be less powerful than the model you are talking to, we recommend using lighter weight and fast models cause the task is very simple. This is configurable from the persona ui.
This PR also removes support for {uploads} placeholder, this is too complicated to get right and we want freedom to shift RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
2024-04-30 13:49:21 +10:00
|
|
|
def rag_fragments_prompt(conversation_context, llm:, user:)
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
upload_refs =
|
|
|
|
UploadReference.where(target_id: id, target_type: "AiPersona").pluck(:upload_id)
|
|
|
|
|
2025-02-06 12:18:55 -03:00
|
|
|
return nil if !DiscourseAi::Embeddings.enabled?
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
return nil if conversation_context.blank? || upload_refs.blank?
|
|
|
|
|
|
|
|
latest_interactions =
|
FEATURE: Add Question Consolidator for robust Upload support in Personas (#596)
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Previous to this change we used the last 10 interactions, this is not ideal cause the RAG would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example "trains" and "US" becomes very low signal given there are pages and pages talking about cars and europe. This mean retrieval is sub optimal.
Instead, we pass the history to the "question consolidator", it would simply consolidate the question to "How many trains are there in the United States", which would make it fare easier for the vector db to find relevant content.
The llm used for question consolidator can often be less powerful than the model you are talking to, we recommend using lighter weight and fast models cause the task is very simple. This is configurable from the persona ui.
This PR also removes support for {uploads} placeholder, this is too complicated to get right and we want freedom to shift RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
2024-04-30 13:49:21 +10:00
|
|
|
conversation_context.select { |ctx| %i[model user].include?(ctx[:type]) }.last(10)
|
|
|
|
|
|
|
|
return nil if latest_interactions.empty?
|
|
|
|
|
|
|
|
# first response
|
|
|
|
if latest_interactions.length == 1
|
|
|
|
consolidated_question = latest_interactions[0][:content]
|
|
|
|
else
|
|
|
|
consolidated_question =
|
|
|
|
DiscourseAi::AiBot::QuestionConsolidator.consolidate_question(
|
|
|
|
llm,
|
|
|
|
latest_interactions,
|
|
|
|
user,
|
|
|
|
)
|
|
|
|
end
|
|
|
|
|
|
|
|
return nil if !consolidated_question
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
|
2024-12-16 09:55:39 -03:00
|
|
|
vector = DiscourseAi::Embeddings::Vector.instance
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
reranker = DiscourseAi::Inference::HuggingFaceTextEmbeddings
|
|
|
|
|
2024-12-16 09:55:39 -03:00
|
|
|
interactions_vector = vector.vector_from(consolidated_question)
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
|
2024-04-12 23:32:46 +10:00
|
|
|
rag_conversation_chunks = self.class.rag_conversation_chunks
|
2024-12-13 10:15:21 -03:00
|
|
|
search_limit =
|
|
|
|
if reranker.reranker_configured?
|
|
|
|
rag_conversation_chunks * 5
|
|
|
|
else
|
|
|
|
rag_conversation_chunks
|
|
|
|
end
|
|
|
|
|
2025-01-21 12:23:19 -03:00
|
|
|
schema = DiscourseAi::Embeddings::Schema.for(RagDocumentFragment)
|
2024-04-12 23:32:46 +10:00
|
|
|
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
candidate_fragment_ids =
|
2024-12-13 10:15:21 -03:00
|
|
|
schema
|
|
|
|
.asymmetric_similarity_search(
|
|
|
|
interactions_vector,
|
|
|
|
limit: search_limit,
|
|
|
|
offset: 0,
|
|
|
|
) { |builder| builder.join(<<~SQL, target_id: id, target_type: "AiPersona") }
|
|
|
|
rag_document_fragments ON
|
|
|
|
rag_document_fragments.id = rag_document_fragment_id AND
|
|
|
|
rag_document_fragments.target_id = :target_id AND
|
|
|
|
rag_document_fragments.target_type = :target_type
|
|
|
|
SQL
|
|
|
|
.map(&:rag_document_fragment_id)
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
|
2024-04-05 01:02:16 +11:00
|
|
|
fragments =
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
RagDocumentFragment.where(upload_id: upload_refs, id: candidate_fragment_ids).pluck(
|
|
|
|
:fragment,
|
2024-04-05 01:02:16 +11:00
|
|
|
:metadata,
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
)
|
|
|
|
|
|
|
|
if reranker.reranker_configured?
|
2024-04-05 01:02:16 +11:00
|
|
|
guidance = fragments.map { |fragment, _metadata| fragment }
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
ranks =
|
|
|
|
DiscourseAi::Inference::HuggingFaceTextEmbeddings
|
|
|
|
.rerank(conversation_context.last[:content], guidance)
|
|
|
|
.to_a
|
2024-04-12 23:32:46 +10:00
|
|
|
.take(rag_conversation_chunks)
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
.map { _1[:index] }
|
|
|
|
|
|
|
|
if ranks.empty?
|
2024-04-12 23:32:46 +10:00
|
|
|
fragments = fragments.take(rag_conversation_chunks)
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
else
|
2024-04-05 01:02:16 +11:00
|
|
|
fragments = ranks.map { |idx| fragments[idx] }
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
end
|
|
|
|
end
|
|
|
|
|
|
|
|
<<~TEXT
|
|
|
|
<guidance>
|
2024-04-05 01:02:16 +11:00
|
|
|
The following texts will give you additional guidance for your response.
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
We included them because we believe they are relevant to this conversation topic.
|
|
|
|
|
|
|
|
Texts:
|
|
|
|
|
2024-04-05 01:02:16 +11:00
|
|
|
#{
|
|
|
|
fragments
|
|
|
|
.map do |fragment, metadata|
|
|
|
|
if metadata.present?
|
|
|
|
["# #{metadata}", fragment].join("\n")
|
|
|
|
else
|
|
|
|
fragment
|
|
|
|
end
|
|
|
|
end
|
|
|
|
.join("\n")
|
|
|
|
}
|
FEATURE: AI Bot RAG support. (#537)
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available). This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a next commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
2024-04-01 13:43:34 -03:00
|
|
|
</guidance>
|
|
|
|
TEXT
|
|
|
|
end
|
2023-08-30 16:15:03 +10:00
|
|
|
end
|
|
|
|
end
|
|
|
|
end
|
|
|
|
end
|