discourse-ai/lib/ai_bot/personas/persona.rb

# frozen_string_literal: true

module DiscourseAi
  module AiBot
    module Personas
      class Persona
        class << self
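          # Class-level defaults for persona behaviour; concrete personas
          # (including ones configured in the admin UI) override these.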
          def rag_conversation_chunks
            10
          end

          def vision_enabled
            false
          end

          def vision_max_pixels
            1_048_576
          end
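
          # Optional LLM used to consolidate the conversation history into a
          # single self-contained question before querying the vector database
          # (see #rag_fragments_prompt). A lighter, faster model than the one
          # being chatted with is usually sufficient for this task.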
          def question_consolidator_llm
            nil
          end

          def force_default_llm
            false
          end

          def allow_chat_channel_mentions
            false
          end

          def allow_chat_direct_messages
            false
          end
          def system_personas
            @system_personas ||= {
              Personas::General => -1,
              Personas::SqlHelper => -2,
              Personas::Artist => -3,
              Personas::SettingsExplorer => -4,
              Personas::Researcher => -5,
              Personas::Creative => -6,
              Personas::DallE3 => -7,
              Personas::DiscourseHelper => -8,
              Personas::GithubHelper => -9,
              Personas::WebArtifactCreator => -10,
            }
          end

          def system_personas_by_id
            @system_personas_by_id ||= system_personas.invert
          end
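
          # Personas the given user may use: the user must belong to one of
          # the persona's allowed groups, and system personas are hidden when
          # a tool they require is not currently available.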
          def all(user:)
            # Tool listing has to be dynamic because site settings may change.
            AiPersona.all_personas.filter do |persona|
              next false if !user.in_any_groups?(persona.allowed_group_ids)

              if persona.system
                instance = persona.new
                (
                  instance.required_tools == [] ||
                    (instance.required_tools - all_available_tools).empty?
                )
              else
                true
              end
            end
          end
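
          # Look up a single persona by id or name, scoped to what the user
          # may see. Example (assuming the "Researcher" persona is enabled):
          #   Persona.find_by(name: "Researcher", user: current_user)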
          def find_by(id: nil, name: nil, user:)
            all(user: user).find { |persona| persona.id == id || persona.name == name }
          end

          def name
            I18n.t("discourse_ai.ai_bot.personas.#{to_s.demodulize.underscore}.name")
          end

          def description
            I18n.t("discourse_ai.ai_bot.personas.#{to_s.demodulize.underscore}.description")
          end
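
          # Every tool the site could offer. Several tools are gated behind
          # site settings (API keys, tagging, artifact security), so the list
          # is computed on each call rather than memoized.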
          def all_available_tools
            tools = [
              Tools::ListCategories,
              Tools::Time,
              Tools::Search,
              Tools::Read,
              Tools::DbSchema,
              Tools::SearchSettings,
              Tools::SettingContext,
              Tools::RandomPicker,
              Tools::DiscourseMetaSearch,
              Tools::GithubFileContent,
              Tools::GithubPullRequestDiff,
              Tools::GithubSearchFiles,
              Tools::WebBrowser,
              Tools::JavascriptEvaluator,
            ]

            if SiteSetting.ai_artifact_security.in?(%w[lax strict])
              tools << Tools::CreateArtifact
              tools << Tools::UpdateArtifact
            end

            tools << Tools::GithubSearchCode if SiteSetting.ai_bot_github_access_token.present?
            tools << Tools::ListTags if SiteSetting.tagging_enabled
            tools << Tools::Image if SiteSetting.ai_stability_api_key.present?
            tools << Tools::DallE if SiteSetting.ai_openai_api_key.present?
            if SiteSetting.ai_google_custom_search_api_key.present? &&
                 SiteSetting.ai_google_custom_search_cx.present?
              tools << Tools::Google
            end

            tools
          end
        end
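
        # Instance-level defaults. Personas configured in the admin UI
        # override these with their stored settings.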
        def id
          @ai_persona&.id || self.class.system_personas[self.class]
        end

        def tools
          []
        end

        def force_tool_use
          []
        end

        def forced_tool_count
          -1
        end

        def required_tools
          []
        end

        def temperature
          nil
        end

        def top_p
          nil
        end

        def options
          {}
        end
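
        # Tools this persona can actually use: the enabled built-ins plus any
        # custom (user-defined) tools attached to it.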
        def available_tools
          self
            .class
            .all_available_tools
            .filter { |tool| tools.include?(tool) }
            .concat(tools.filter(&:custom?))
        end
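
        # Builds the completion prompt for this persona: substitutes
        # {placeholder} tokens in the system prompt with matching context
        # values (unknown placeholders are left intact), appends custom tool
        # system messages and custom instructions, and mixes in RAG guidance
        # fragments when the persona has uploads attached.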
        def craft_prompt(context, llm: nil)
          system_insts =
            system_prompt.gsub(/\{(\w+)\}/) do |match|
              found = context[match[1..-2].to_sym]
              found.nil? ? match : found.to_s
            end

          prompt_insts = <<~TEXT.strip
          #{system_insts}
          #{available_tools.map(&:custom_system_message).compact_blank.join("\n")}
          TEXT

          question_consolidator_llm = llm
          if self.class.question_consolidator_llm.present?
            question_consolidator_llm =
              DiscourseAi::Completions::Llm.proxy(self.class.question_consolidator_llm)
          end

          if context[:custom_instructions].present?
            prompt_insts << "\n"
            prompt_insts << context[:custom_instructions]
          end

          fragments_guidance =
            rag_fragments_prompt(
              context[:conversation_context].to_a,
              llm: question_consolidator_llm,
              user: context[:user],
            )&.strip

          prompt_insts << fragments_guidance if fragments_guidance.present?

          prompt =
            DiscourseAi::Completions::Prompt.new(
              prompt_insts,
              messages: context[:conversation_context].to_a,
              topic_id: context[:topic_id],
              post_id: context[:post_id],
            )

          prompt.max_pixels = self.class.vision_max_pixels if self.class.vision_enabled
          prompt.tools = available_tools.map(&:signature) if available_tools

          prompt
        end
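
        # Resolves a (possibly partial) tool call streamed from the LLM into a
        # tool instance; returns nil for anything that is not a tool call.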
        def find_tool(partial, bot_user:, llm:, context:, existing_tools: [])
          return nil if !partial.is_a?(DiscourseAi::Completions::ToolCall)

          tool_instance(
            partial,
            bot_user: bot_user,
            llm: llm,
            context: context,
            existing_tools: existing_tools,
          )
        end

        def allow_partial_tool_calls?
          available_tools.any? { |tool| tool.allow_partial_tool_calls? }
        end

        protected
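
        # Coerces raw tool-call parameters to the types declared in the tool's
        # signature (parsing arrays, stripping stray quotes, casting integers,
        # dropping invalid enum values), then either updates a matching
        # existing tool instance (partial calls) or builds a new one.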
        def tool_instance(tool_call, bot_user:, llm:, context:, existing_tools:)
          function_id = tool_call.id
          function_name = tool_call.name
          return nil if function_name.nil?

          tool_klass = available_tools.find { |c| c.signature.dig(:name) == function_name }
          return nil if tool_klass.nil?

          arguments = {}
          tool_klass.signature[:parameters].to_a.each do |param|
            name = param[:name]
            value = tool_call.parameters[name.to_sym]

            if param[:type] == "array" && value
              value =
                begin
                  JSON.parse(value)
                rescue JSON::ParserError
                  [value.to_s]
                end
            elsif param[:type] == "string" && value
              value = strip_quotes(value).to_s
            elsif param[:type] == "integer" && value
              value = strip_quotes(value).to_i
            end

            if param[:enum] && value && !param[:enum].include?(value)
              # invalid enum value
              value = nil
            end

            arguments[name.to_sym] = value if value
          end

          tool_instance =
            existing_tools.find { |t| t.name == function_name && t.tool_call_id == function_id }

          if tool_instance
            tool_instance.parameters = arguments
            tool_instance
          else
            tool_klass.new(
              arguments,
              tool_call_id: function_id || function_name,
              persona_options: options[tool_klass].to_h,
              bot_user: bot_user,
              llm: llm,
              context: context,
            )
          end
        end
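
        # Some models wrap string arguments in an extra pair of quotes; remove
        # one matching pair of surrounding quotes, if present.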
        def strip_quotes(value)
          if value.is_a?(String)
            if value.start_with?('"') && value.end_with?('"')
              value = value[1..-2]
            elsif value.start_with?("'") && value.end_with?("'")
              value = value[1..-2]
            else
              value
            end
          else
            value
          end
        end
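
        # Builds the <guidance> block from the persona's uploads:
        # 1. consolidate the recent conversation into a single self-contained
        #    question (via the question consolidator LLM when configured),
        # 2. embed it and run an asymmetric similarity search over the
        #    persona's document fragments,
        # 3. optionally rerank the candidates, and
        # 4. render the winning fragments for inclusion in the system prompt.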
        def rag_fragments_prompt(conversation_context, llm:, user:)
          upload_refs =
            UploadReference.where(target_id: id, target_type: "AiPersona").pluck(:upload_id)

          return nil if !SiteSetting.ai_embeddings_enabled?
          return nil if conversation_context.blank? || upload_refs.blank?

          latest_interactions =
            conversation_context.select { |ctx| %i[model user].include?(ctx[:type]) }.last(10)

          return nil if latest_interactions.empty?

          # first interaction: use the question as-is, nothing to consolidate
          if latest_interactions.length == 1
            consolidated_question = latest_interactions[0][:content]
          else
            consolidated_question =
              DiscourseAi::AiBot::QuestionConsolidator.consolidate_question(
                llm,
                latest_interactions,
                user,
              )
          end

          return nil if !consolidated_question

          vector = DiscourseAi::Embeddings::Vector.instance
          reranker = DiscourseAi::Inference::HuggingFaceTextEmbeddings

          interactions_vector = vector.vector_from(consolidated_question)

          rag_conversation_chunks = self.class.rag_conversation_chunks
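
          # Over-fetch candidates when a reranker is configured so it has a
          # larger pool to choose the best chunks from.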
          search_limit =
            if reranker.reranker_configured?
              rag_conversation_chunks * 5
            else
              rag_conversation_chunks
            end

          schema = DiscourseAi::Embeddings::Schema.for(RagDocumentFragment, vector_def: vector.vdef)

          candidate_fragment_ids =
            schema
              .asymmetric_similarity_search(
                interactions_vector,
                limit: search_limit,
                offset: 0,
              ) { |builder| builder.join(<<~SQL, target_id: id, target_type: "AiPersona") }
                rag_document_fragments ON
                  rag_document_fragments.id = rag_document_fragment_id AND
                  rag_document_fragments.target_id = :target_id AND
                  rag_document_fragments.target_type = :target_type
              SQL
              .map(&:rag_document_fragment_id)

          fragments =
            RagDocumentFragment.where(upload_id: upload_refs, id: candidate_fragment_ids).pluck(
              :fragment,
              :metadata,
            )

          if reranker.reranker_configured?
            guidance = fragments.map { |fragment, _metadata| fragment }
            ranks =
              DiscourseAi::Inference::HuggingFaceTextEmbeddings
                .rerank(conversation_context.last[:content], guidance)
                .to_a
                .take(rag_conversation_chunks)
                .map { _1[:index] }

            if ranks.empty?
              fragments = fragments.take(rag_conversation_chunks)
            else
              fragments = ranks.map { |idx| fragments[idx] }
            end
          end

          <<~TEXT
            <guidance>
            The following texts will give you additional guidance for your response.
            We included them because we believe they are relevant to this conversation topic.

            Texts:
            #{
              fragments
                .map do |fragment, metadata|
                  if metadata.present?
                    ["# #{metadata}", fragment].join("\n")
                  else
                    fragment
                  end
                end
                .join("\n")
            }
            </guidance>
          TEXT
        end
      end
    end
  end
end