Creating a new model, either manually or from presets, doesn't initialize the `provider_params` object, meaning its custom params won't persist.
Additionally, this change adds validations for the mandatory Bedrock params, and a clear message when a completion fails because we cannot build the URL.
- Validate fields to reduce the chance of breaking features by a misconfigured model.
- Fixed a bug where the URL might get deleted during an update.
- Display a warning when a model is currently in use.
* DEV: Remove old code now that features rely on LlmModels.
* Hide old settings and migrate persona llm overrides
* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
* Seeding the SRV-backed model should happen inside an initializer.
* Keep the model up to date when the hidden setting changes.
* Use the correct Mixtral model name and fix previous data migration.
* URL validation should trigger only when we attempt to update it.
1. Repairs the identity on the summary table; we migrated data without resetting it.
2. Adds an index to the ai_summary table to match the expected retrieval pattern.
This allows summarization to use the new LLM models and migrates off API-key-based model selection.
Claude 3.5, etc., all work now.
---------
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Follow up to b863ddc94b
Ruby:
* Validate `summary` (the column is `not null`)
* Fix `name` validation (the column has `max_length` 100)
* Fix table annotations
* Accept missing `parameter` attributes (`required`, `enum`, `enum_values`)
JS:
* Use native classes
* Don't use ember's array extensions
* Add explicit service injections
* Correct class names
* Use `||=` operator
* Use `store` service to create records
* Remove unused service injections
* Extract consts
* Group actions together
* Use `async`/`await`
* Use `withEventValue`
* Sort html attributes
* Use DButtons `@label` arg
* Use `input` elements instead of Ember's `Input` component (same w/ textarea)
* Remove `btn-default` class (automatically applied by DButton)
* Don't mix `I18n.t` and `i18n` in the same template
* Don't track props that aren't used in a template
* Correct invalid `target.value` code
* Remove unused/invalid `this.parameter`/`onChange` code
* Whitespace
* Use the new service import `inject as service` -> `service`
* Use `Object.entries()`
* Add missing i18n strings
* Fix an error in `addEnumValue` (calling `pushObject` on `undefined`)
* Use `TrackedArray`/`TrackedObject`
* Transform tool `parameters` keys (`enumValues` -> `enum_values`)
Introduces custom AI tools functionality.
1. Why it was added:
The PR adds the ability to create, manage, and use custom AI tools within the Discourse AI system. This feature allows for more flexibility and extensibility in the AI capabilities of the platform.
2. What it does:
- Introduces a new `AiTool` model for storing custom AI tools
- Adds CRUD (Create, Read, Update, Delete) operations for AI tools
- Implements a tool runner system for executing custom tool scripts
- Integrates custom tools with existing AI personas
- Provides a user interface for managing custom tools in the admin panel
3. Possible use cases:
- Creating custom tools for specific tasks or integrations (stock quotes, currency conversion etc...)
- Allowing administrators to add new functionalities to AI assistants without modifying core code
- Implementing domain-specific tools for particular communities or industries
4. Code structure:
The PR introduces several new files and modifies existing ones:
a. Models:
- `app/models/ai_tool.rb`: Defines the AiTool model
- `app/serializers/ai_custom_tool_serializer.rb`: Serializer for AI tools
b. Controllers:
- `app/controllers/discourse_ai/admin/ai_tools_controller.rb`: Handles CRUD operations for AI tools
c. Views and Components:
- New Ember.js components for tool management in the admin interface
- Updates to existing AI persona management components to support custom tools
d. Core functionality:
- `lib/ai_bot/tool_runner.rb`: Implements the custom tool execution system
- `lib/ai_bot/tools/custom.rb`: Defines the custom tool class
e. Routes and configurations:
- Updates to route configurations to include new AI tool management pages
f. Migrations:
- `db/migrate/20240618080148_create_ai_tools.rb`: Creates the ai_tools table
g. Tests:
- New test files for AI tool functionality and integration
The PR integrates the custom tools system with the existing AI persona framework, allowing personas to use both built-in and custom tools. It also includes safety measures such as timeouts and HTTP request limits to prevent misuse of custom tools.
Overall, this PR significantly enhances the flexibility and extensibility of the Discourse AI system by allowing administrators to create and manage custom AI tools tailored to their specific needs.
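For a concrete sense of the feature, here is a hypothetical sketch of an admin-defined tool. The attribute names, the `invoke` entry point, and the `http` helper are illustrative assumptions, not the plugin's confirmed API:

```ruby
# Hypothetical sketch: storing a custom tool. Field names are illustrative.
AiTool.create!(
  name: "stock_quote",
  description: "Looks up the latest quote for a stock ticker",
  parameters: [
    { name: "ticker", type: "string", required: true, description: "Ticker symbol" },
  ],
  script: <<~JS,
    // The tool runner is assumed to call invoke() with the tool's arguments
    // and to expose a sandboxed http helper with timeouts and request limits.
    function invoke(params) {
      return http.get(`https://example.com/quote?ticker=${params.ticker}`);
    }
  JS
)
```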
Co-authored-by: Martin Brennan <martin@discourse.org>
Having this as a callback prevents deploys on sites with a vLLM SRV configured and pending migrations. Additionally, this fixes a bug where we didn't delete/deactivate the companion user after deleting an LLM.
Previously, we stored request parameters like the OpenAI organization and Bedrock's access key and region as site settings. This change stores them in the `llm_models` table instead, letting us drop more settings while also becoming more flexible.
* FEATURE: LLM presets for model creation
Previously, users needed to look up complicated settings
when setting up models.
This introduces an extensible preset system with Google/OpenAI/Anthropic
presets.
This covers the most common LLMs; we can always add more as
we go.
Additionally:
- Proper support for Anthropic Claude Sonnet 3.5
- Stop blurring API keys when navigating away - this made it very hard to reuse keys
We no longer support the "provider:model" format in the "ai_helper_model" and
"ai_embeddings_semantic_search_hyde_model" settings. We'll migrate existing
values and work with our new data-driven LLM configs from now on.
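A rough sketch of the migration idea (the lookup and the target value format are assumptions):

```ruby
# Sketch: translate an old "provider:model" value into a reference to an
# LlmModel record. The "custom:<id>" target format is an assumption.
old_value = SiteSetting.ai_helper_model
if old_value.present? && old_value.include?(":")
  provider, model_name = old_value.split(":", 2)
  if (llm = LlmModel.find_by(provider: provider, name: model_name))
    SiteSetting.ai_helper_model = "custom:#{llm.id}"
  end
end
```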
Previously, the read tool only had access to public topics. This allows
access to all topics the user has access to, if the admin opts for the option.
Also:
- Fixes the vLLM migration
- Displays which LLMs have the bot enabled
* DRAFT: Create AI Bot users dynamically and support custom LlmModels
* Get user associated to llm_model
* Track enabled bots with attribute
* Don't store bot username. Minor touches to migrate default values in settings
* Handle scenario where vLLM uses a SRV record
* Made 3.5-turbo-16k the default version so we can remove the hack
This is a rather huge refactor with one new feature (tool details can
be suppressed).
Previously we used the name "Command" to describe "Tools"; this unifies
all the internal language and simplifies the code.
We also amended the persona UI to use fewer DToggles, which aligns
with our design guidelines.
Co-authored-by: Martin Brennan <martin@discourse.org>
The initial implementation allowed internet-wide sharing of
AI conversations on sites that require login.
This feature can be an anti-feature for private sites because they
cannot share conversations internally.
For now we are removing support for public sharing on login-required
sites; if the community needs the feature, we can consider adding a
setting.
Previously only GPT-4-vision was supported; this change introduces support
for Google/Anthropic and new OpenAI models.
Additionally, this makes vision work properly in dev environments
because we send the encoded payload via the prompt instead of sending URLs.
This change allows us to delete custom models. It checks that no module is using them.
It also fixes a bug where the after-create transition wasn't working. While this prevents a model from being saved multiple times, endpoint validations are still needed (they will be added in a separate PR).
* FEATURE: Set endpoint credentials directly from LlmModel.
Drop Llama2Tokenizer since we no longer use it.
* Allow http for custom LLMs
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
- Introduce new support for GPT4o (automation / bot / summary / helper)
- Properly account for token counts on OpenAI models
- Track feature that was used when generating AI completions
- Remove custom llm support for summarization as we need better interfaces to control registration and de-registration
There are still some limitations to which models we can support with the `LlmModel` class. This will enable support for Llama3 while we sort those out.
This PR introduces the concept of "LlmModel" as a new way to quickly add new LLM models without making any code changes. We are releasing this first version and will add incremental improvements, so expect changes.
The AI Bot can't fully take advantage of this feature as users are hard-coded. We'll fix this in a separate PR.
Both endpoints provide OpenAI-compatible servers. The only difference is that vLLM doesn't support passing tools as a separate parameter. Even if the tool param is supported, it ultimately relies on the model's ability to handle native functions, which is not the case with the models we have today.
As a part of this change, we are dropping support for StableBeluga/Llama2 models. They don't have a chat_template, meaning the new API can't translate them.
These changes let us remove some of our existing dialects and are a first step in our plan to support any LLM by defining them as data-driven concepts.
I rewrote the "translate" method to use a template method and extracted the tool support strategies into their own classes to simplify the code (see the sketch below).
Finally, these changes bring support for Ollama when running in dev mode. It only works with Mistral for now, but that will change soon.
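A minimal sketch of the template-method shape described above; class and method names are assumptions, not the plugin's exact API:

```ruby
# The base dialect owns the overall translate flow; each subclass fills in
# the model-specific steps.
class Dialect
  def translate(prompt)
    messages = prompt[:messages].map { |msg| translate_message(msg) }
    inject_tools(messages, prompt[:tools])
  end

  private

  def translate_message(msg)
    raise NotImplementedError
  end

  # Tool support strategy hook: models without native function calling can
  # override this to embed tool descriptions in the system prompt instead.
  def inject_tools(messages, _tools)
    messages
  end
end

class ChatGptDialect < Dialect
  private

  def translate_message(msg)
    { role: msg[:role].to_s, content: msg[:content] }
  end
end
```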
Add support for chat with AI personas
- Allow enabling chat for AI personas that have an associated user
- Add new setting `allow_chat` to AI persona to enable/disable chat
- When a message is created in a DM channel with an allowed AI persona user, schedule a reply job
- AI replies to chat messages using the persona's `max_context_posts` setting to determine context
- Store tool calls and custom prompts used to generate a chat reply on the `ChatMessageCustomPrompt` table
- Add tests for AI chat replies with tools and context
At the moment, unlike posts, we do not carry tool calls in the context.
There is no @mention support yet for AI personas in channels; this is future work.
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Prior to this change, we used the last 10 interactions; this is not ideal because the RAG would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example, "trains" and "US" become very low signal given there are pages and pages talking about cars and Europe. This means retrieval is suboptimal.
Instead, we pass the history to the "question consolidator", which would simply consolidate the question to "How many trains are there in the United States?", making it far easier for the vector db to find relevant content.
The LLM used for the question consolidator can often be less powerful than the model you are talking to; we recommend using lightweight, fast models because the task is very simple. This is configurable from the persona UI.
This PR also removes support for the {uploads} placeholder; it is too complicated to get right, and we want the freedom to shift the RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
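A minimal sketch of the consolidation step, assuming the generic `#generate` API described elsewhere in this log; the prompt wording and method names are illustrative:

```ruby
# Rewrites the user's latest question into a self-contained one before the
# vector-database lookup.
def consolidate_question(llm, conversation, latest_question)
  history = conversation.map { |role, text| "#{role}: #{text}" }.join("\n")

  prompt = <<~TEXT
    Given the conversation below, rewrite the latest question as a single
    self-contained question that needs no prior context.

    Conversation:
    #{history}

    Latest question: #{latest_question}
  TEXT

  llm.generate(prompt, temperature: 0, max_tokens: 200)
end
```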
Updating the editing model's rag_uploads in the editor component broke multi-file uploading. Instead, we'll keep the uploads in the uploader and update the model when we finish.
This PR also fast-tracks the initial update so we can show feedback to the user quickly, and allows uploading MD files.
Bug reported on https://meta.discourse.org/t/discourse-ai-persona-upload-support/304049/11
* FIX: various RAG edge cases
- Nicer text to describe RAG, avoids the word RAG
- Do not attempt to save the persona when removing uploads if it hasn't been created yet
- Remove old code that avoided touching rag params on create
* FIX: Missing pause button for persona users
* Feature: allow specific users to debug ai request / response chains
This can help users easily tune RAG and figure out what is going
on with requests.
* discourse helper so it does not explode
* fix test
* simplify implementation
* FEATURE: allow tuning of RAG generation
- change chunking to be token-based instead of char-based, which is more accurate (see the sketch after this list)
- allow control over overlap / tokens per chunk and conversation snippets inserted
- UI to control new settings
* improve ui a bit
* fix various reindex issues
* reduce concurrency
* try ultra low queue ... concurrency 1 is too slow.
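A minimal sketch of the token-based chunking (the tokenizer API and setting names are assumptions):

```ruby
# Splits text into overlapping token windows; chunk_tokens and overlap_tokens
# stand in for the new settings.
def chunk(text, tokenizer, chunk_tokens: 512, overlap_tokens: 64)
  tokens = tokenizer.encode(text)
  chunks = []
  start = 0
  while start < tokens.length
    chunks << tokenizer.decode(tokens[start, chunk_tokens])
    break if start + chunk_tokens >= tokens.length
    start += chunk_tokens - overlap_tokens # step back to create the overlap
  end
  chunks
end
```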
This commit uses a new plugin modifier introduced in https://github.com/discourse/discourse/pull/26508
to mark all uploads as _not_ secure in shared PM AI conversations.
This is so images created by the AI bot (or uploaded by the user)
do not end up as broken URLs because of the security requirements
around them.
This relies on the UpdateTopicUploadSecurity job in core as well,
which is fired when an AI conversation is shared or deleted.
- Added Cohere Command models (Command, Command Light, Command R, Command R Plus) to the available model list
- Added a new site setting `ai_cohere_api_key` for configuring the Cohere API key
- Implemented a new `DiscourseAi::Completions::Endpoints::Cohere` class to handle interactions with the Cohere API, including:
- Translating request parameters to the Cohere API format
- Parsing Cohere API responses
- Supporting streaming and non-streaming completions
- Supporting "tools" which allow the model to call back to discourse to lookup additional information
- Implemented a new `DiscourseAi::Completions::Dialects::Command` class to translate between the generic Discourse AI prompt format and the Cohere Command format
- Added specs covering the new Cohere endpoint and dialect classes
- Updated `DiscourseAi::AiBot::Bot.guess_model` to map the new Cohere model to the appropriate bot user
In summary, this PR adds support for using the Cohere Command family of models with the Discourse AI plugin. It handles configuring API keys, making requests to the Cohere API, and translating between Discourse's generic prompt format and Cohere's specific format. Thorough test coverage was added for the new functionality.
* FEATURE: Add metadata support for RAG
You may include non-indexed metadata in the RAG document by using
[[metadata ....]]
This information is attached to all the text below it and provided to
the retriever.
This allows RAG to operate within a rich set of contexts
without getting lost.
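For illustration, a document using the marker might look like this (the attribute syntax inside the brackets is hypothetical):

```
[[metadata source="billing-faq" updated="2024-05"]]
Refunds are processed within 5 business days.

[[metadata source="shipping-faq"]]
Orders ship within 24 hours of payment.
```

Each block of text inherits the metadata above it until the next marker.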
Also:
- re-implemented chunking algorithm so it streams
- moved indexing to background low priority queue
* Baran gem no longer required.
* tokenizers is on 4.4 ... upgrade it ...
This PR lets you associate uploads to an AI persona, which we'll split and generate embeddings from. When building the system prompt to get a bot reply, we'll do a similarity search followed by a re-ranking (if available); see the retrieval sketch after the commit list. This will let us find the most relevant fragments from the body of knowledge you associated with the persona, resulting in better, more informed responses.
For now, we'll only allow plain-text files, but this will change in the future.
Commits:
* FEATURE: RAG embeddings for the AI Bot
This first commit introduces a UI where admins can upload text files, which we'll store, split into fragments,
and generate embeddings of. In a later commit, we'll use those to give the bot additional information during
conversations.
* Basic asymmetric similarity search to provide guidance in system prompt
* Fix tests and lint
* Apply reranker to fragments
* Uploads filter, css adjustments and file validations
* Add placeholder for rag fragments
* Update annotations
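The retrieval step, sketched below; the table and column names are assumptions, and the query presumes a pgvector-style setup:

```ruby
# Asymmetric similarity search over a persona's uploaded fragments.
def candidate_fragments(persona, query_embedding, limit: 10)
  vector = "[#{query_embedding.join(",")}]"
  DB.query(<<~SQL, persona_id: persona.id, vector: vector, limit: limit)
    SELECT fragment, embeddings <=> CAST(:vector AS vector) AS distance
    FROM rag_document_fragments
    WHERE ai_persona_id = :persona_id
    ORDER BY distance
    LIMIT :limit
  SQL
end
```

The top candidates can then be passed through the re-ranker (when one is configured) before being inserted into the system prompt.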
This commit adds the ability to enable vision for AI personas, allowing them to understand images that are posted in the conversation.
For personas with vision enabled, any images the user has posted will be resized to fit within the configured max_pixels limit, base64 encoded, and included in the prompt sent to the AI provider (see the sketch after the notes below).
The persona editor allows enabling/disabling vision and has a dropdown to select the max supported image size (low, medium, high). Vision is disabled by default.
This initial vision support has been tested and implemented with Anthropic's claude-3 models which accept images in a special format as part of the prompt.
Other integrations will need to be updated to support images.
Several specs were added to test the new functionality at the persona, prompt building and API layers.
- Gemini is omitted, pending API support for Gemini 1.5. The current Gemini bot is not performing well, and adding images is unlikely to make it perform any better.
- OpenAI is omitted; vision support on GPT-4 is limited in that the API has no tool support when images are enabled, so we would need to fall back to a different prompting technique, which would add lots of complexity.
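A minimal sketch of the image preparation, loosely following Anthropic's claude-3 image format; the resizing step is omitted and the payload shape is an assumption:

```ruby
require "base64"

# Builds one image block for a vision-enabled prompt.
def image_block(path, media_type: "image/jpeg")
  {
    type: "image",
    source: {
      type: "base64",
      media_type: media_type,
      data: Base64.strict_encode64(File.binread(path)),
    },
  }
end
```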
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
- Adds a nonce to both script tags
- Removes the `onload=` inline script, and moves the tags to the end of the `<body>` instead. This provides the same UX (page will load and render, then hljs will be applied when ready)
This allows users to share a static page of an AI conversation with
the rest of the world.
By default this feature is disabled; it is enabled by turning on
ai_bot_allow_public_sharing via site settings.
Precautions are taken when sharing:
1. We make a carbonite copy
2. We minimize the work of generating the page
3. We limit sharing to 100 interactions
4. Many security checks, including disallowing sharing if there is a mix
of users in the PM.
* Bonus commit: large PRs like this one did not work with the GitHub tool;
large objects would destroy the context.
Co-authored-by: Martin Brennan <martin@discourse.org>
This PR adds AI semantic search to the search popup available on every page.
It depends on several new and optional settings, like per post embeddings and a reranker model, so this is an experimental endeavour.
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
* DEV: improve internal design of ai persona and bug fix
- Fixes bug where OpenAI could not describe images
- Fixes bug where mentionable personas could not be mentioned unless overarching bot was enabled
- Improves the internal design of the playground and bot to better allow for non-"bot" users
- Allow PMs directly to persona users (previously bot user would also have to be in PM)
- Simplify internal code
Co-authored-by: Martin Brennan <martin@discourse.org>
Utilizes the check for secure upload permissions from core PR
https://github.com/discourse/discourse/pull/25758 and cleans up
controller code and spec code to reuse existing code and better
reflect reality.
This PR adds a new feature where you can generate captions for images in the composer using AI.
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
1. Personas are now optionally mentionable, meaning that you can mention them either from public topics or PMs
- Mentioning from PMs helps "switch" persona mid-conversation, meaning if you want to look up site settings you can invoke the site settings bot, or if you want to generate an image you can invoke DALL-E
- Mentioning outside of PMs allows you to inject a bot reply in a topic trivially
- We also add support for max_context_posts; this allows you to limit the amount of context you feed in, which can help control costs
2. Add support for a "random picker" tool that can be used to pick random numbers
3. Clean up routing ai_personas -> ai-personas
4. Add Max Context Posts so users can control how much history a persona can consume (this is important for mentionable personas)
Co-authored-by: Martin Brennan <martin@discourse.org>
* FEATURE: allow personas to supply top_p and temperature params
Code assistants are generally more focused at a lower temperature.
This amends it so the SQL Helper runs at 0.2 temperature vs the more
common default across LLMs of 1.0.
Reduced temperature leads to more focused, concise and predictable
answers from the SQL Helper.
* fix tests
* This is not perfect, but far better than what we do today
Instead of fishing for
1. Draft sequence
2. Draft body
We skip (2); this means the composer "only" needs one HTTP request to
open. We also want to eliminate (1), but it is a bit of a trickier
core change; we may figure out how to pull it off later (defer it to the first draft save).
Value of bot drafts < value of opening bot conversations really fast.
The idea is to increase the frequency so we can run with smaller batch sizes.
Big batches cause problems when running backups, so it's better to have shorter but
more frequent jobs.
1. On failure, we were queuing a job to generate embeddings with the wrong params. This is both fixed and covered in a test.
2. Backfill embeddings in bumped_at order, so the newest content is embedded first; covered with a test.
3. Add a safeguard: a hidden site setting that only allows batches of 50k in an embedding job run.
Previously, old embeddings were updated in a random order; this changes it so we update in a consistent order (see the sketch below).
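A sketch of the new ordering (the embeddings table and its columns are assumptions):

```ruby
# Backfill the most recently bumped topics first, deterministically, and cap
# the run per the hidden batch-size safeguard in (3).
relation =
  Topic
    .joins("LEFT JOIN topic_embeddings te ON te.topic_id = topics.id")
    .where("te.topic_id IS NULL OR te.updated_at < topics.bumped_at")
    .order("topics.bumped_at DESC, topics.id DESC")
    .limit(50_000)
```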
* REFACTOR: Represent generic prompts with an Object.
* Adds a bit more validation for clarity
* Rewrite bot title prompt and fix quirk handling
---------
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
Follow-up to 2636efcd1b:
whenever Ruby code was changed locally, this would break
module loading, giving an "uninitialized constant
DiscourseAi::Embeddings::EntryPoint::SemanticRelated" error.
* DEV: AI bot migration to the Llm pattern.
We added tool and conversation context support to the Llm service in discourse-ai#366, meaning we met all the conditions to migrate this module.
This PR migrates to the new pattern, meaning adding a new bot now requires minimal effort as long as the service supports it. On top of this, we introduce the concept of a "Playground" to separate the PM-specific bits from the completion, allowing us to use the bot in other contexts like chat in the future. Commands are called tools, and we simplified all the placeholder logic to perform updates in a single place, making the flow more one-wayish.
* Followup fixes based on testing
* Cleanup unused inference code
* FIX: text-based tools could be in the middle of a sentence
* GPT-4-turbo support
* Use new LLM API
* FIX: AI helper not working correctly with Mixtral
This PR introduces a new function on the generic LLM called #generate.
This will replace the implementation of completion!
#generate introduces a new way to pass temperature, max_tokens and stop_sequences.
LLM implementers then need to implement #normalize_model_params to
ensure the generic names match the LLM-specific endpoint.
This also adds temperature and stop_sequences to completion_prompts,
which allows for much more robust completion prompts.
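A sketch of the contract (method and parameter renames are illustrative; `perform_completion!` stands in for the endpoint call):

```ruby
def generate(prompt, temperature: nil, max_tokens: nil, stop_sequences: nil)
  params = { temperature: temperature, max_tokens: max_tokens, stop_sequences: stop_sequences }.compact
  perform_completion!(prompt, normalize_model_params(params))
end

# Example of a model-specific override: an OpenAI-style endpoint that expects
# "stop" rather than "stop_sequences" (an assumed rename, for illustration).
def normalize_model_params(params)
  params = params.dup
  params[:stop] = params.delete(:stop_sequences) if params[:stop_sequences]
  params
end
```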
* port everything over to #generate
* Fix translation
- On Anthropic this no longer throws a random "This is your translation:"
- On Mixtral this actually works
* fix markdown table generation as well
Currently we're seeing 500s when related_topics are rendered. We should get the category from each topic rather than from the array.
```
ActionView::Template::Error (undefined method `category' for [#<Topic id ... ]
```
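The fix, roughly (variable names assumed):

```ruby
# Before: #category was called on the array itself.
# related_topics.category
# After: resolve the category per topic.
related_topics.map(&:category)
```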
Personas now support providing options for commands.
This PR introduces a single option, "base_query", for the SearchCommand. When supplied, every search the persona performs will also include the pre-supplied filter.
This can allow personas to search a subset of the forum (such as documentation).
This system is extensible; we can add options to any command trivially.
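A hypothetical configuration showing the idea (the option plumbing is illustrative):

```ruby
# Scope a persona's SearchCommand to the documentation category; every search
# the persona performs then carries this filter in addition to the user query.
persona.commands = [["SearchCommand", { "base_query" => "#documentation" }]]
```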
Prior to this change, we relied on explicit loading of files in Discourse AI.
This had a few downsides:
- Busywork whenever you add a file (an extra require_relative)
- We were not keeping to conventions internally ... some places were OpenAI, others OpenAi
- The autoloader did not work, which led to lots of broken full-application reloads when developing.
This moves all of DiscourseAi into a Zeitwerk-compatible structure.
It also leaves a minimal amount of manual loading (automation, which loads into an existing namespace that may or may not be there).
To avoid needing /lib/discourse_ai/... we mount a namespace, so we are able to keep /lib pointed at ::DiscourseAi.
Various files were renamed to get around Zeitwerk rules and minimize the usage of custom inflections.
Though we can get custom inflections to work, it is not worth it; it would require a Discourse core patch, which means we create a hard dependency.
We must ensure we can isolate titles; the models sometimes ignore the example we give them.
Additionally, anons can generate HyDE posts, so we need to check whether the user is nil when attempting to log requests.
Introduces a UI to manage customizable personas (admin only feature)
Part of the change was some extensive internal refactoring:
- AIBot now has a persona set in the constructor, once set it never changes
- Command now takes in bot as a constructor param, so it has the correct persona and is not generating AIBot objects on the fly
- Added a .prettierignore file, due to the way ALE is configured in nvim it is a pre-req for prettier to work
- Adds a bunch of validations on the AiPersona model; system personas (artist/creative etc...) are all seeded. We now ensure
name uniqueness, and only allow certain properties to be touched for system personas.
- (JS note) the client-side design takes advantage of nested routes; the parent route for personas gets all the personas via this.store.findAll("ai-persona"), then child routes simply reach into this model to find a particular persona.
- (JS note) data is sideloaded into the ai-persona model via the meta property supplied from the controller, resultSetMeta
- This removes ai_bot_enabled_personas and ai_bot_enabled_chat_commands, both should be controlled from the UI on a per persona basis
- Fixes a long-standing bug in token accounting ... we were doing to_json.length instead of to_json.to_s.length
- Amended it so {commands} are always inserted at the end unconditionally; there is no need to add it to the template of the system message, as it just confuses things
- Adds a concept of required_commands to stock personas; these are commands that must be configured for a stock persona to show up.
- Refactored tests so we stop requiring inference_stubs; it was very confusing to need it. Added to plugin.rb for now, which at least is clearer
- Migrates the persona selector to gjs
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
- New AiPersona model which can store custom personas
- Personas are restricted via group security
- They can contain custom system messages
- They can support a list of commands optionally
To avoid expensive DB calls in the serializer, a multisite-friendly Hash was introduced (which can be expired on transaction commit)
This PR adds new reports for displaying information about post sentiments grouped by date and emotions grouped by trust level.
Depends on discourse/discourse#24274
Adds an AI Helper function when selecting text while viewing a topic.
---------
Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Roman Rizzi <roman@discourse.org>