Creating a new model, either manually or from presets, doesn't initialize the `provider_params` object, meaning its custom params won't persist.
Additionally, this change adds some validations for Bedrock params, which are mandatory, and a clear message when a completion fails because we cannot build the URL.
- Validate fields to reduce the chance of breaking features by a misconfigured model.
- Fixed a bug where the URL might get deleted during an update.
- Display a warning when a model is currently in use.
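A minimal sketch of what the initialization and Bedrock validation could look like (field and param names are assumptions, not the shipped implementation):

```ruby
# Sketch only: names are assumptions, not the shipped implementation.
class LlmModel < ActiveRecord::Base
  # Initialize the JSON column so custom params persist for new records,
  # whether created manually or from a preset.
  after_initialize { self.provider_params ||= {} }

  # Bedrock cannot build a request URL without credentials and a region.
  validate :bedrock_params_present, if: -> { provider == "aws_bedrock" }

  private

  def bedrock_params_present
    %w[access_key_id region].each do |key|
      errors.add(:base, "#{key} is required for Bedrock") if provider_params[key].blank?
    end
  end
end
```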
* DEV: Remove old code now that features rely on LlmModels.
* Hide old settings and migrate persona llm overrides
* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
* Seeding the SRV-backed model should happen inside an initializer.
* Keep the model up to date when the hidden setting changes.
* Use the correct Mixtral model name and fix previous data migration.
* URL validation should trigger only when we attempt to update it.
1. Repairs the identity on the summary table; we migrated data without resetting it.
2. Adds an index to the ai_summary table to match the expected retrieval pattern.
This allows summarization to use the new LLM models and migrates away from API-key-based model selection.
Claude 3.5 etc. all work now.
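A rough sketch of the two fixes as a single migration (table, column, and index names are assumptions):

```ruby
# Sketch only: table/column/index names are assumptions.
class FixAiSummariesIdentityAndIndex < ActiveRecord::Migration[7.0]
  def up
    # 1. Reset the identity so new inserts don't collide with migrated rows.
    execute <<~SQL
      SELECT setval(
        pg_get_serial_sequence('ai_summaries', 'id'),
        (SELECT COALESCE(MAX(id), 1) FROM ai_summaries)
      )
    SQL

    # 2. Index matching the expected retrieval pattern (lookup by target).
    add_index :ai_summaries, %i[target_id target_type], if_not_exists: true
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```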
---------
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Follow up to b863ddc94b
Ruby:
* Validate `summary` (the column is `not null`)
* Fix `name` validation (the column has `max_length` 100)
* Fix table annotations
* Accept missing `parameter` attributes (`required`, `enum`, `enum_values`)
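A minimal sketch of the Ruby validations above (assuming this is the `AiTool` model from the PR described below):

```ruby
class AiTool < ActiveRecord::Base
  validates :name, presence: true, length: { maximum: 100 } # column max_length is 100
  validates :summary, presence: true # column is NOT NULL
end
```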
JS:
* Use native classes
* Don't use ember's array extensions
* Add explicit service injections
* Correct class names
* Use `||=` operator
* Use `store` service to create records
* Remove unused service injections
* Extract consts
* Group actions together
* Use `async`/`await`
* Use `withEventValue`
* Sort html attributes
* Use DButtons `@label` arg
* Use `input` elements instead of Ember's `Input` component (same w/ textarea)
* Remove `btn-default` class (automatically applied by DButton)
* Don't mix `I18n.t` and `i18n` in the same template
* Don't track props that aren't used in a template
* Correct invalid `target.value` code
* Remove unused/invalid `this.parameter`/`onChange` code
* Whitespace
* Use the new service import `inject as service` -> `service`
* Use `Object.entries()`
* Add missing i18n strings
* Fix an error in `addEnumValue` (calling `pushObject` on `undefined`)
* Use `TrackedArray`/`TrackedObject`
* Transform tool `parameters` keys (`enumValues` -> `enum_values`)
Introduces custom AI tools functionality.
1. Why it was added:
The PR adds the ability to create, manage, and use custom AI tools within the Discourse AI system. This feature allows for more flexibility and extensibility in the AI capabilities of the platform.
2. What it does:
- Introduces a new `AiTool` model for storing custom AI tools
- Adds CRUD (Create, Read, Update, Delete) operations for AI tools
- Implements a tool runner system for executing custom tool scripts
- Integrates custom tools with existing AI personas
- Provides a user interface for managing custom tools in the admin panel
3. Possible use cases:
- Creating custom tools for specific tasks or integrations (stock quotes, currency conversion etc...)
- Allowing administrators to add new functionalities to AI assistants without modifying core code
- Implementing domain-specific tools for particular communities or industries
4. Code structure:
The PR introduces several new files and modifies existing ones:
a. Models:
- `app/models/ai_tool.rb`: Defines the AiTool model
- `app/serializers/ai_custom_tool_serializer.rb`: Serializer for AI tools
b. Controllers:
- `app/controllers/discourse_ai/admin/ai_tools_controller.rb`: Handles CRUD operations for AI tools
c. Views and Components:
- New Ember.js components for tool management in the admin interface
- Updates to existing AI persona management components to support custom tools
d. Core functionality:
- `lib/ai_bot/tool_runner.rb`: Implements the custom tool execution system
- `lib/ai_bot/tools/custom.rb`: Defines the custom tool class
e. Routes and configurations:
- Updates to route configurations to include new AI tool management pages
f. Migrations:
- `db/migrate/20240618080148_create_ai_tools.rb`: Creates the ai_tools table
g. Tests:
- New test files for AI tool functionality and integration
The PR integrates the custom tools system with the existing AI persona framework, allowing personas to use both built-in and custom tools. It also includes safety measures such as timeouts and HTTP request limits to prevent misuse of custom tools.
Overall, this PR significantly enhances the flexibility and extensibility of the Discourse AI system by allowing administrators to create and manage custom AI tools tailored to their specific needs.
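As an illustration, creating a custom tool record could look something like this (attribute names and the script-side API are assumptions based on the description above):

```ruby
# Hypothetical example: attribute names and the script-side API are assumptions.
tool =
  AiTool.create!(
    name: "stock_quote",
    summary: "Fetch the latest quote for a ticker symbol",
    parameters: [{ name: "ticker", type: "string", required: true }],
    # The script runs inside the sandboxed tool runner, subject to timeouts
    # and HTTP request limits.
    script: <<~JS,
      function invoke(params) {
        return http.get(`https://example.com/quote/${params.ticker}`).body;
      }
    JS
  )
```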
Co-authored-by: Martin Brennan <martin@discourse.org>
Having this as a callback prevents deploys for sites with a vLLM SRV configured and pending migrations. Additionally, this fixes a bug where we didn't delete/deactivate the companion user after deleting an LLM.
Previously, we stored request parameters like the OpenAI organization and Bedrock's access key and region as site settings. This change stores them in the `llm_models` table instead, letting us drop more settings while also becoming more flexible.
* FEATURE: LLM presets for model creation
Prior to this, users needed to look up complicated settings when setting up models.
This introduces an extensible preset system with Google/OpenAI/Anthropic presets.
This covers the most common LLMs; we can always add more as we go.
Additionally:
- Proper support for Anthropic Claude Sonnet 3.5
- Stop blurring API keys when navigating away - this made it very complex to reuse keys
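A preset is essentially a data blob of the settings users previously had to look up manually; a sketch of the shape (keys are assumptions, not the shipped format):

```ruby
# Sketch: key names are assumptions, not the shipped preset format.
ANTHROPIC_PRESET = {
  id: "anthropic",
  provider: "anthropic",
  url: "https://api.anthropic.com/v1/messages",
  tokenizer: "AnthropicTokenizer",
  models: [
    { name: "claude-3-5-sonnet", display_name: "Claude 3.5 Sonnet", tokens: 200_000 },
  ],
}
```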
We no longer support the "provider:model" format in the "ai_helper_model" and
"ai_embeddings_semantic_search_hyde_model" settings. We'll migrate existing
values and work with our new data-driven LLM configs from now on.
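The migration is mostly string splitting plus a lookup; a sketch (helper name is hypothetical):

```ruby
# Hypothetical helper: maps an old "provider:model" setting value to the id
# of the matching LlmModel row, or nil when there is no match.
def migrate_setting_value(old_value)
  provider, model_name = old_value.to_s.split(":", 2)
  return if model_name.blank?

  LlmModel.find_by(provider: provider, name: model_name)&.id
end

migrate_setting_value("open_ai:gpt-4-turbo") # => matching LlmModel id, or nil
```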
Previously the read tool only had access to public topics; this allows access to all topics the user has access to, if the admin opts for the option.
Also:
- Fixes the vLLM migration
- Displays which LLMs have the bot enabled
* DRAFT: Create AI Bot users dynamically and support custom LlmModels
* Get user associated to llm_model
* Track enabled bots with attribute
* Don't store bot username. Minor touches to migrate default values in settings
* Handle scenario where vLLM uses a SRV record
* Made 3.5-turbo-16k the default version so we can remove a hack
This is a rather huge refactor with 1 new feature (tool details can
be suppressed)
Previously we used the name "Command" to describe "Tools"; this unifies all the internal language and simplifies the code.
We also amended the persona UI to use fewer DToggles, which aligns with our design guidelines.
Co-authored-by: Martin Brennan <martin@discourse.org>
The initial implementation allowed internet-wide sharing of AI conversations, even on sites that require login.
This can be an anti-feature for private sites because they cannot share conversations internally.
For now we are removing support for public sharing on login-required sites; if the community needs the feature, we can consider adding a setting.
Previously only GPT-4 Vision was supported; this change introduces support for Google/Anthropic and new OpenAI models.
Additionally, this makes vision work properly in dev environments, because we send the encoded payload via the prompt instead of sending URLs.
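Roughly, instead of a URL we build a base64 part for the prompt (the message shape here follows Anthropic's format; treat it as a sketch):

```ruby
require "base64"

# Sketch: embed the image bytes directly in the prompt payload instead of a
# URL, so vision also works in dev environments without public upload URLs.
def encoded_image_part(path, mime_type: "image/png")
  {
    type: "image",
    source: {
      type: "base64",
      media_type: mime_type,
      data: Base64.strict_encode64(File.binread(path)),
    },
  }
end
```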
This change allows us to delete custom models. It checks that no modules are using them.
It also fixes a bug where the after-create transition wasn't working. While this prevents a model from being saved multiple times, endpoint validations are still needed (they will be added in a separate PR).
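The in-use check could be sketched like this (the list of settings to check and the stored value format are assumptions):

```ruby
class LlmModel < ActiveRecord::Base
  before_destroy :ensure_not_in_use

  private

  # Assumption: settings referencing an LlmModel store a "custom:<id>" value.
  def ensure_not_in_use
    in_use =
      %i[ai_helper_model ai_embeddings_semantic_search_hyde_model].any? do |name|
        SiteSetting.get(name).to_s == "custom:#{id}"
      end

    throw :abort if in_use
  end
end
```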
* FEATURE: Set endpoint credentials directly from LlmModel.
Drop Llama2Tokenizer since we no longer use it.
* Allow http for custom LLMs
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
- Introduce new support for GPT4o (automation / bot / summary / helper)
- Properly account for token counts on OpenAI models
- Track feature that was used when generating AI completions
- Remove custom LLM support for summarization, as we need better interfaces to control registration and de-registration
There are still some limitations to which models we can support with the `LlmModel` class. This will enable support for Llama3 while we sort those out.
This PR introduces the concept of "LlmModel" as a new way to quickly add new LLM models without making any code changes. We are releasing this first version and will add incremental improvements, so expect changes.
The AI Bot can't fully take advantage of this feature as users are hard-coded. We'll fix this in a separate PR.
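Conceptually, an LlmModel is just a row describing an endpoint; something like this (column names are assumptions):

```ruby
# Illustration only: column names are assumptions.
LlmModel.create!(
  display_name: "Llama 3 70B",
  name: "llama3-70b",
  provider: "vllm",
  url: "https://vllm.internal.example/v1/chat/completions",
  tokenizer: "Llama3Tokenizer",
  max_prompt_tokens: 8192,
)
```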
Both endpoints provide OpenAI-compatible servers. The only difference is that vLLM doesn't support passing tools as a separate parameter. Even if the tool param is supported, it ultimately relies on the model's ability to handle native functions, which is not the case with the models we have today.
As part of this change, we are dropping support for StableBeluga/Llama2 models. They don't have a chat_template, meaning the new API can't translate them.
These changes let us remove some of our existing dialects and are a first step in our plan to support any LLM by defining them as data-driven concepts.
I rewrote the "translate" method to use a template method and extracted the tool support strategies into their own classes to simplify the code.
Finally, these changes bring support for Ollama when running in dev mode. It only works with Mistral for now, but that will change soon.
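A simplified sketch of the template-method shape described above (all class names are hypothetical):

```ruby
require "json"

class XmlTools
  # Models without native function support get tools described inline.
  def embed_tools(messages, tools)
    return messages if tools.empty?
    [{ role: "system", content: "<tools>#{tools.to_json}</tools>" }] + messages
  end
end

class Dialect
  # Template method: the shared translation algorithm lives here...
  def translate(prompt)
    messages = prompt[:messages].map { |m| translate_message(m) }
    tools_strategy.embed_tools(messages, prompt[:tools] || [])
  end

  private

  # ...and subclasses supply the provider-specific pieces.
  def translate_message(message) = raise(NotImplementedError)
  def tools_strategy = raise(NotImplementedError)
end

class OpenAiCompatibleDialect < Dialect
  private

  def translate_message(message) = message.slice(:role, :content)
  def tools_strategy = XmlTools.new
end
```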
Add support for chat with AI personas
- Allow enabling chat for AI personas that have an associated user
- Add new setting `allow_chat` to AI persona to enable/disable chat
- When a message is created in a DM channel with an allowed AI persona user, schedule a reply job
- AI replies to chat messages using the persona's `max_context_posts` setting to determine context
- Store tool calls and custom prompts used to generate a chat reply on the `ChatMessageCustomPrompt` table
- Add tests for AI chat replies with tools and context
At the moment, unlike posts, we do not carry tool calls in the context.
There is no @mention support yet for AI personas in channels; this is future work.
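The scheduling step might look roughly like this in plugin code (the event name and channel API used here are assumptions):

```ruby
# Sketch only: event name and channel methods are assumptions.
on(:chat_message_created) do |message, channel, user|
  next unless channel.direct_message_channel?

  persona_user_ids = AiPersona.where(allow_chat: true).pluck(:user_id)
  recipients = channel.allowed_user_ids - [user.id]

  if (persona_user_ids & recipients).any?
    Jobs.enqueue(:create_ai_chat_reply, channel_id: channel.id, message_id: message.id)
  end
end
```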
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Prior to this change we used the last 10 interactions; this is not ideal because the RAG lookup would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example, "trains" and "US" become very low signal given there are pages and pages talking about cars and Europe. This means retrieval is suboptimal.
Instead, we pass the history to the "question consolidator", which consolidates the question to "How many trains are there in the United States?", making it far easier for the vector db to find relevant content.
The LLM used for the question consolidator can often be less powerful than the model you are talking to; we recommend lightweight, fast models because the task is very simple. This is configurable from the persona UI.
This PR also removes support for the {uploads} placeholder; it is too complicated to get right, and we want the freedom to shift the RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
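A stripped-down sketch of the consolidation step (prompt wording and method names are illustrative, not the shipped implementation):

```ruby
class QuestionConsolidator
  def self.consolidate(llm, conversation_context)
    history =
      conversation_context
        .map { |message| "#{message[:type]}: #{message[:content]}" }
        .join("\n")

    prompt = <<~TEXT
      Rewrite the user's latest question as a single, self-contained question,
      resolving pronouns and references using the conversation below.

      #{history}
    TEXT

    # Truncation to fit the token limit is omitted here for brevity.
    llm.generate(prompt, user: Discourse.system_user)
  end
end
```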
Updating the editing model's rag_uploads in the editor component broke multi-file uploading. Instead, we'll keep the uploads in the uploader and update the model when we finish.
This PR also fast-tracks the initial update so we can show feedback to the user quickly, and allows uploading MD files.
Bug reported on https://meta.discourse.org/t/discourse-ai-persona-upload-support/304049/11
* FIX: various RAG edge cases
- Nicer text to describe RAG; avoids the word RAG
- Do not attempt to save the persona when removing uploads if it hasn't been created yet
- Remove old code that avoided touching RAG params on create
* FIX: Missing pause button for persona users
* Feature: allow specific users to debug ai request / response chains
This can help users easily tune RAG and figure out what is going
on with requests.
* discourse helper so it does not explode
* fix test
* simplify implementation
* FEATURE: allow tuning of RAG generation
- change chunking to be token-based vs char-based (which is more accurate; see the sketch after these notes)
- allow control over overlap / tokens per chunk and conversation snippets inserted
- UI to control new settings
* improve ui a bit
* fix various reindex issues
* reduce concurrency
* try ultra low queue ... concurrency 1 is too slow.
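Token-based chunking with overlap, from the first bullet above, is roughly this (the tokenizer's encode/decode interface is an assumption):

```ruby
# Sketch: split text into chunks of ~chunk_tokens tokens, overlapping by
# overlap_tokens so context isn't lost at chunk boundaries.
def chunk_by_tokens(text, tokenizer, chunk_tokens: 512, overlap_tokens: 64)
  ids = tokenizer.encode(text)
  step = chunk_tokens - overlap_tokens
  chunks = []

  (0...ids.length).step(step) do |start|
    chunks << tokenizer.decode(ids[start, chunk_tokens])
    break if start + chunk_tokens >= ids.length
  end

  chunks
end
```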
This commit uses a new plugin modifier introduced in https://github.com/discourse/discourse/pull/26508
to mark all uploads as _not_ secure in shared PM AI conversations.
This is so images created by the AI bot (or uploaded by the user)
do not end up as broken URLs because of the security requirements
around them.
This relies on the UpdateTopicUploadSecurity job in core as well,
which is fired when an AI conversation is shared or deleted.