Restructures the LLM config page so it is far clearer.
Also corrects bugs around adding LLMs and around LLMs not being editable after they are added.
---------
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
Often it is helpful to have the summary box open while composing a reply to the topic. However, the summary box currently gets closed each time you click outside it. In this PR we add the `closeOnClickOutside: false` attribute to the `DMenu` options for the summary box to prevent that from occurring.
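A minimal sketch of where the option goes, assuming the summary box is opened through the float-kit menu service; the identifier and component names here are illustrative, not the actual summary-box code — only `closeOnClickOutside: false` is the real change:

```js
// Hypothetical sketch — identifier and component are placeholders.
this.menu.show(triggerElement, {
  identifier: "topic-summary-box",
  component: SummaryBox,
  // keep the summary open while the user clicks into the composer
  closeOnClickOutside: false,
});
```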
Previously we had some hardcoded markup with SCSS making a loading indicator wave. This code was being duplicated and used in both semantic search and summarization. We want to add the indicator wave to the AI helper diff modal as well, with the text flashing instead of the loading spinner. To ensure we do not repeat ourselves, in this PR we turn the summary indicator wave into a reusable template-only component called `AiIndicatorWave`. We then use that component in semantic search, summarization, and the composer helper modal.
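Usage ends up looking roughly like this; the argument name below is an assumption, so check the component for its actual API:

```gjs
import AiIndicatorWave from "discourse/plugins/discourse-ai/discourse/components/ai-indicator-wave";

<template>
  {{! assumed argument name — check the component for its actual API }}
  <AiIndicatorWave @loading={{@isLoading}} />
</template>
```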
This commit fixes an issue where the composer AI helper was not visible on iPad in DiscourseHub. This was due to the z-index being different for `reply-control` when Discourse Hub inserts its `footer-nav`
The `DiffModal` is triggered after selecting an option in the composer helper menu. After selecting an option, we should close the composer helper menu and only show the diff modal. On mobile, there was an edge case where `this.args.close()` was closing both the `DiffModal` and the `AiComposerHelperMenu`. This PR resolves that by ensuring the menu is closed _first_, asynchronously, followed by opening the relevant modal.
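Roughly, the ordering now looks like this; the action name and modal arguments are illustrative, not the exact code:

```js
// Sketch only — close the AI composer helper menu first, then open the modal.
async suggestChanges(option) {
  await this.args.close(); // close AiComposerHelperMenu before anything else
  this.modal.show(DiffModal, {
    model: { mode: option, selectedText: this.selectedText },
  });
}
```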
Previously we had moved the AI helper from the options menu to a selection menu that appears when selecting text in the composer. This had the benefit of making the AI helper a more discoverable feature. Now that some time has passed and the AI helper is more recognized, we will be moving it back to the composer toolbar.
This is better because:
- It is consistent with other behaviors and ways of accessing tools in the composer
- It has an improved mobile experience
- It reduces unnecessary code and keeps things easier to migrate when we have composer V2.
- It allows easily triggering the AI helper on all content by clicking the button instead of having to select everything.
This improves the site setting search so it performs a somewhat
fuzzy match.
Previously it did not handle separators such as spaces, and a
term such as "min_post_length" would not find "min_first_post_length".
A more liberal search algorithm makes it easier for the AI to
navigate settings.
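For illustration only (not the actual implementation, and the function name is made up): the idea is to split the query on separators and require the fragments to appear, in order, in the setting name.

```js
// Illustration of the looser matching idea: split the query on separators
// and require each fragment to appear, in order, in the setting name.
function fuzzySettingMatch(settingName, query) {
  const fragments = query.toLowerCase().split(/[\s_.-]+/).filter(Boolean);
  const haystack = settingName.toLowerCase();

  let position = 0;
  for (const fragment of fragments) {
    const found = haystack.indexOf(fragment, position);
    if (found === -1) {
      return false;
    }
    position = found + fragment.length;
  }
  return true;
}

fuzzySettingMatch("min_first_post_length", "min_post_length"); // => true
```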
* Minor fix: {{and parameter.enum parameter.enum.length}} is broken in a
non-obvious way.
If parameter.enum is a tracked array, it returns the object itself
because of Ember's `and` helper implementation.
This corrects an issue where the enum option keeps selecting itself by
mistake.
Previously proofreading text took too much work; the new implementation
provides a single shortcut and an easy way to proofread text.
Co-authored-by: Martin Brennan <martin@discourse.org>
`withEventValue` is not needed here, because the `onChange` event comes from ace, not a normal DOM event. But even with that fix, it seems AceEditor doesn't yet work well with the DDAU pattern. On every keypress, the editor re-renders and puts the cursor back at the beginning.
For now, this commit removes the `@onChange` hook, so we go back to relying on the two-way binding of `@content`.
Followup to a5a39dd2ee
* FEATURE: LLM Triage support for systemless models.
This change adds support for OSS models that don't support system messages. LlmTriage's system message field is no longer mandatory. We now send the post contents in a separate user message.
* Models using Ollama can also disable system prompts
When navigating between topics we were not correctly resetting
internal state for summarization. This led to situations where
the wrong summary could be displayed to users.
Additionally, our controller for fetching summaries was always
streaming results via the message bus, which could be delayed when
Sidekiq is overloaded. We now return the cached summary
right away, directly from the REST endpoint, if it is available.
Creating a new model, either manually or from presets, doesn't initialize the `provider_params` object, meaning custom params won't persist.
Additionally, this change adds some validations for Bedrock params, which are mandatory, and a clear message when a completion fails because we cannot build the URL.
- Validate fields to reduce the chance of breaking features by a misconfigured model.
- Fixed a bug where the URL might get deleted during an update.
- Display a warning when a model is currently in use.
* DEV: Remove old code now that features rely on LlmModels.
* Hide old settings and migrate persona llm overrides
* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
This allows summarization to use the new LLM models and migrates off API-key-based model selection
Claude 3.5 etc... all work now.
---------
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Follow up to b863ddc94b
Ruby:
* Validate `summary` (the column is `not null`)
* Fix `name` validation (the column has `max_length` 100)
* Fix table annotations
* Accept missing `parameter` attributes (`required`, `enum`, `enum_values`)
JS:
* Use native classes
* Don't use ember's array extensions
* Add explicit service injections
* Correct class names
* Use `||=` operator
* Use `store` service to create records
* Remove unused service injections
* Extract consts
* Group actions together
* Use `async`/`await`
* Use `withEventValue`
* Sort html attributes
* Use DButtons `@label` arg
* Use `input` elements instead of Ember's `Input` component (same w/ textarea)
* Remove `btn-default` class (automatically applied by DButton)
* Don't mix `I18n.t` and `i18n` in the same template
* Don't track props that aren't used in a template
* Correct invalid `target.value` code
* Remove unused/invalid `this.parameter`/`onChange` code
* Whitespace
* Use the new service import `inject as service` -> `service`
* Use `Object.entries()`
* Add missing i18n strings
* Fix an error in `addEnumValue` (calling `pushObject` on `undefined`)
* Use `TrackedArray`/`TrackedObject`
* Transform tool `parameters` keys (`enumValues` -> `enum_values`)
Introduces custom AI tools functionality.
1. Why it was added:
The PR adds the ability to create, manage, and use custom AI tools within the Discourse AI system. This feature allows for more flexibility and extensibility in the AI capabilities of the platform.
2. What it does:
- Introduces a new `AiTool` model for storing custom AI tools
- Adds CRUD (Create, Read, Update, Delete) operations for AI tools
- Implements a tool runner system for executing custom tool scripts
- Integrates custom tools with existing AI personas
- Provides a user interface for managing custom tools in the admin panel
3. Possible use cases:
- Creating custom tools for specific tasks or integrations (stock quotes, currency conversion etc...)
- Allowing administrators to add new functionalities to AI assistants without modifying core code
- Implementing domain-specific tools for particular communities or industries
4. Code structure:
The PR introduces several new files and modifies existing ones:
a. Models:
- `app/models/ai_tool.rb`: Defines the AiTool model
- `app/serializers/ai_custom_tool_serializer.rb`: Serializer for AI tools
b. Controllers:
- `app/controllers/discourse_ai/admin/ai_tools_controller.rb`: Handles CRUD operations for AI tools
c. Views and Components:
- New Ember.js components for tool management in the admin interface
- Updates to existing AI persona management components to support custom tools
d. Core functionality:
- `lib/ai_bot/tool_runner.rb`: Implements the custom tool execution system
- `lib/ai_bot/tools/custom.rb`: Defines the custom tool class
e. Routes and configurations:
- Updates to route configurations to include new AI tool management pages
f. Migrations:
- `db/migrate/20240618080148_create_ai_tools.rb`: Creates the ai_tools table
g. Tests:
- New test files for AI tool functionality and integration
The PR integrates the custom tools system with the existing AI persona framework, allowing personas to use both built-in and custom tools. It also includes safety measures such as timeouts and HTTP request limits to prevent misuse of custom tools.
Overall, this PR significantly enhances the flexibility and extensibility of the Discourse AI system by allowing administrators to create and manage custom AI tools tailored to their specific needs.
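As a rough illustration, a custom tool script could look like the following; the exact functions and helpers exposed to scripts are defined by `lib/ai_bot/tool_runner.rb`, so the names below are a hedged sketch rather than the definitive scripting API:

```js
// Hypothetical custom tool script (API names are assumptions).
// invoke() receives the parameters the LLM filled in, based on the tool's
// declared parameter schema, and returns a result the bot can use.
function invoke(params) {
  // e.g. a stock quote lookup; the actual HTTP helpers are provided by
  // the tool runner and are not shown here
  return { symbol: params.symbol, note: "lookup would happen here" };
}

// Optional: a short human-readable description of what the tool did.
function details() {
  return "Looked up a stock quote";
}
```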
Co-authored-by: Martin Brennan <martin@discourse.org>
Previously, we stored request parameters like the OpenAI organization and Bedrock's access key and region as site settings. This change stores them in the `llm_models` table instead, letting us drop more settings while also becoming more flexible.
* FEATURE: LLM presets for model creation
Prior to this, users needed to look up complicated settings
when setting up models.
This introduces an extensible preset system with Google/OpenAI/Anthropic
presets.
This will cover the most common LLMs; we can always add more as
we go.
Additionally:
- Proper support for Anthropic Claude Sonnet 3.5
- Stop blurring API keys when navigating away - this made it very complex to reuse keys
Previously the read tool only had access to public topics; this allows
access to all topics the user has access to, if the admin opts in.
Also
- Fixes VLLM migration
- Display which LLMs have the bot enabled
* DRAFT: Create AI Bot users dynamically and support custom LlmModels
* Get user associated to llm_model
* Track enabled bots with attribute
* Don't store bot username. Minor touches to migrate default values in settings
* Handle scenario where vLLM uses a SRV record
* Made 3.5-turbo-16k the default version so we can remove the hack
This is a rather huge refactor with one new feature (tool details can
be suppressed).
Previously we used the name "Command" to describe "Tools"; this unifies
all the internal language and simplifies the code.
We also amended the persona UI to use fewer DToggles, which aligns
with our design guidelines.
Co-authored-by: Martin Brennan <martin@discourse.org>
This change allows us to delete custom models. It checks that no module is using them.
It also fixes a bug where the after-create transition wasn't working. While this prevents a model from being saved multiple times, endpoint validations are still needed (they will be added in a separate PR).
When triggering a PM from the new-message route, we still had the UI
for "this is an official warning".
This removes that UI from bot messages, where it is just clutter.
* FEATURE: Set endpoint credentials directly from LlmModel.
Drop Llama2Tokenizer since we no longer use it.
* Allow http for custom LLMs
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
The LLM selector control had no memory and was awkward to click.
Instead we now:
- Clearly display which LLM you are talking to
- Allow you to change the LLM directly from the composer
This PR introduces the concept of "LlmModel" as a new way to quickly add new LLM models without making any code changes. We are releasing this first version and will add incremental improvements, so expect changes.
The AI Bot can't fully take advantage of this feature as users are hard-coded. We'll fix this in a separate PR.
The menu service doesn’t implement an `activeMenu` property anymore as it can now support concurrent menus. The solution to this is to use `getByIdentifier`.
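For example (the identifier string below is illustrative):

```js
// Instead of this.menu.activeMenu, look the menu up by its identifier.
const aiMenu = this.menu.getByIdentifier("ai-composer-helper-menu");
aiMenu?.close();
```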
This optional feature allows search to be performed in the context
of the user who executed it.
By default we do not allow this behavior because it means the LLM gets
access to potentially sensitive data.
Add support for chat with AI personas
- Allow enabling chat for AI personas that have an associated user
- Add new setting `allow_chat` to AI persona to enable/disable chat
- When a message is created in a DM channel with an allowed AI persona user, schedule a reply job
- AI replies to chat messages using the persona's `max_context_posts` setting to determine context
- Store tool calls and custom prompts used to generate a chat reply on the `ChatMessageCustomPrompt` table
- Add tests for AI chat replies with tools and context
At the moment, unlike posts, we do not carry tool calls in the context.
No @mention support yet for AI personas in channels; this is future work.
The initial setup done in fb0d56324f
clashed with other plugins; I found this when trying to do the same
for Gamification. This uses a better routing setup and removes the
need to define the config nav link for Settings -- that is always inserted.
Relies on https://github.com/discourse/discourse/pull/26707
This commit introduces a new feature for AI Personas called the "Question Consolidator LLM". The purpose of the Question Consolidator is to consolidate a user's latest question into a self-contained, context-rich question before querying the vector database for relevant fragments. This helps improve the quality and relevance of the retrieved fragments.
Prior to this change we used the last 10 interactions; this is not ideal because the RAG would "lock on" to an answer.
EG:
- User: how many cars are there in europe
- Model: detailed answer about cars in europe including the term car and vehicle many times
- User: Nice, what about trains are there in the US
In the above example, "trains" and "US" become very low signal given there are pages and pages talking about cars and Europe. This means retrieval is suboptimal.
Instead, we pass the history to the "question consolidator", which simply consolidates the question to "How many trains are there in the United States?", making it far easier for the vector DB to find relevant content.
The LLM used for the question consolidator can often be less powerful than the model you are talking to; we recommend using lightweight, fast models because the task is very simple. This is configurable from the persona UI.
This PR also removes support for the {uploads} placeholder; it is too complicated to get right and we want the freedom to change the RAG implementation.
Key changes:
1. Added a new `question_consolidator_llm` column to the `ai_personas` table to store the LLM model used for question consolidation.
2. Implemented the `QuestionConsolidator` module which handles the logic for consolidating the user's latest question. It extracts the relevant user and model messages from the conversation history, truncates them if needed to fit within the token limit, and generates a consolidated question prompt.
3. Updated the `Persona` class to use the Question Consolidator LLM (if configured) when crafting the RAG fragments prompt. It passes the conversation context to the consolidator to generate a self-contained question.
4. Added UI elements in the AI Persona editor to allow selecting the Question Consolidator LLM. Also made some UI tweaks to conditionally show/hide certain options based on persona configuration.
5. Wrote unit tests for the QuestionConsolidator module and updated existing persona tests to cover the new functionality.
This feature enables AI Personas to better understand the context and intent behind a user's question by consolidating the conversation history into a single, focused question. This can lead to more relevant and accurate responses from the AI assistant.
Updating the editing model's rag_uploads in the editor component broke multi-file uploading. Instead, we'll keep the uploads in the uploader and update the model when we finish.
This PR also fast-tracks the initial update so we can show feedback to the user quickly, and allows uploading MD files.
Bug reported on https://meta.discourse.org/t/discourse-ai-persona-upload-support/304049/11
* FIX: various RAG edge cases
- Nicer text to describe RAG, avoids the word RAG
- Do not attempt to save the persona when removing uploads if it has not been created yet
- Remove old code that avoided touching rag params on create
* FIX: Missing pause button for persona users
* Feature: allow specific users to debug ai request / response chains
This can help users easily tune RAG and figure out what is going
on with requests.
* discourse helper so it does not explode
* fix test
* simplify implementation
* FEATURE: allow tuning of RAG generation
- change chunking to be token based vs char based (which is more accurate); a sketch of the approach follows these notes
- allow control over overlap / tokens per chunk and conversation snippets inserted
- UI to control new settings
* improve ui a bit
* fix various reindex issues
* reduce concurrency
* try ultra low queue ... concurrency 1 is too slow.
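To make the token-based chunking concrete, here is an illustrative sketch of the approach; it is not the actual implementation, and the tokenizer that produces the token array is assumed:

```js
// Split a token array into chunks of `tokensPerChunk` with `overlapTokens`
// of overlap between consecutive chunks. Illustration only.
function chunkByTokens(tokens, tokensPerChunk, overlapTokens) {
  const chunks = [];
  const step = Math.max(1, tokensPerChunk - overlapTokens);

  for (let start = 0; start < tokens.length; start += step) {
    chunks.push(tokens.slice(start, start + tokensPerChunk));
    if (start + tokensPerChunk >= tokens.length) {
      break;
    }
  }
  return chunks;
}
```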