92 Commits

Author SHA1 Message Date
Sam
5e80f93e4c
FEATURE: PDF support for rag pipeline (#1118)
This PR introduces several enhancements and refactorings to the AI Persona and RAG (Retrieval-Augmented Generation) functionalities within the discourse-ai plugin. Here's a breakdown of the changes:

**1. LLM Model Association for RAG and Personas:**

-   **New Database Columns:** Adds `rag_llm_model_id` to both `ai_personas` and `ai_tools` tables. This allows specifying a dedicated LLM for RAG indexing, separate from the persona's primary LLM.  Adds `default_llm_id` and `question_consolidator_llm_id` to `ai_personas`.
-   **Migration:**  Includes a migration (`20250210032345_migrate_persona_to_llm_model_id.rb`) to populate the new `default_llm_id` and `question_consolidator_llm_id` columns in `ai_personas` based on the existing `default_llm` and `question_consolidator_llm` string columns, and a post migration to remove the latter.
-   **Model Changes:**  The `AiPersona` and `AiTool` models now `belong_to` an `LlmModel` via `rag_llm_model_id`. The `LlmModel.proxy` method now accepts an `LlmModel` instance instead of just an identifier.  `AiPersona` now has `default_llm_id` and `question_consolidator_llm_id` attributes.
-   **UI Updates:**  The AI Persona and AI Tool editors in the admin panel now allow selecting an LLM for RAG indexing (if PDF/image support is enabled).  The RAG options component displays an LLM selector.
-   **Serialization:** The serializers (`AiCustomToolSerializer`, `AiCustomToolListSerializer`, `LocalizedAiPersonaSerializer`) have been updated to include the new `rag_llm_model_id`, `default_llm_id` and `question_consolidator_llm_id` attributes.

**2. PDF and Image Support for RAG:**

-   **Site Setting:** Introduces a new hidden site setting, `ai_rag_pdf_images_enabled`, to control whether PDF and image files can be indexed for RAG. This defaults to `false`.
-   **File Upload Validation:** The `RagDocumentFragmentsController` now checks the `ai_rag_pdf_images_enabled` setting and allows PDF, PNG, JPG, and JPEG files if enabled.  Error handling is included for cases where PDF/image indexing is attempted with the setting disabled.
-   **PDF Processing:** Adds a new utility class, `DiscourseAi::Utils::PdfToImages`, which uses ImageMagick (`magick`) to convert PDF pages into individual PNG images. A maximum PDF size and conversion timeout are enforced (see the sketch after this list).
-   **Image Processing:** A new utility class, `DiscourseAi::Utils::ImageToText`, is included to handle OCR for the images and PDFs.
-   **RAG Digestion Job:** The `DigestRagUpload` job now handles PDF and image uploads. It uses `PdfToImages` and `ImageToText` to extract text and create document fragments.
-   **UI Updates:**  The RAG uploader component now accepts PDF and image file types if `ai_rag_pdf_images_enabled` is true. The UI text is adjusted to indicate supported file types.
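
Roughly, the conversion step works like the following sketch (illustrative only; the size cap, timeout, and method names here are assumptions, not the plugin's actual implementation):

    require "open3"
    require "timeout"

    MAX_PDF_SIZE = 100 * 1024 * 1024 # assumed cap
    CONVERT_TIMEOUT = 60 # seconds, assumed

    def pdf_to_pngs(pdf_path, output_dir)
      raise "PDF too large" if File.size(pdf_path) > MAX_PDF_SIZE

      # A production version would also kill the child process on timeout.
      Timeout.timeout(CONVERT_TIMEOUT) do
        # Rasterize each PDF page into a numbered PNG (page-0.png, page-1.png, ...).
        _out, err, status =
          Open3.capture3("magick", "-density", "150", pdf_path, "#{output_dir}/page-%d.png")
        raise "PDF conversion failed: #{err}" unless status.success?
      end

      Dir.glob("#{output_dir}/page-*.png").sort_by { |p| p[/\d+/].to_i }
    end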

**3. Refactoring and Improvements:**

-   **LLM Enumeration:** The `DiscourseAi::Configuration::LlmEnumerator` now provides a `values_for_serialization` method, which returns a simplified array of LLM data (id, name, vision_enabled) suitable for use in serializers. This avoids exposing unnecessary details to the frontend.
-   **AI Helper:** The `AiHelper::Assistant` now takes optional `helper_llm` and `image_caption_llm` parameters in its constructor, allowing for greater flexibility.
-   **Bot and Persona Updates:** Several updates across the codebase replace the string-based LLM association with the new model-based one.
-   **Audit Logs:** The `DiscourseAi::Completions::Endpoints::Base` now formats raw request payloads as pretty JSON for easier auditing (see the one-liner below).
-   **Eval Script:** An evaluation script is included.
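
The pretty-printing amounts to something like this (illustrative; `raw_request_payload` is a stand-in name):

    require "json"

    # Re-serialize the raw request body with indentation for easier auditing.
    formatted = JSON.pretty_generate(JSON.parse(raw_request_payload))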

**4. Testing:**

-    The PR introduces a new eval system for LLMs, which allows us to test how functionality works across various LLM providers. This lives in `/evals`.
2025-02-14 12:15:07 +11:00
Martin Brennan
7b1bdbde6d
FIX: Check post action creator result when flagging spam (#1119)
Currently, core does not support re-flagging something that is already flagged as spam. Long term we may want to support this, but in the meantime we should not be silencing/hiding posts when the PostActionCreator fails while flagging things as spam.

---------

Co-authored-by: Ted Johansson <drenmi@gmail.com>
2025-02-11 13:29:27 +10:00
Hoa Nguyen
b60926c6e6
FEATURE: Tool name validation (#842)
* FEATURE: Tool name validation

- Add unique index to the name column of the ai_tools table
- Correct our tests for AiToolController
- Add a tool_name field, which will be used to represent the tool to the LLM
- Add tool_name to the tools' presets
- Add duplicate tool validation for AiPersona
- Add unique constraint to the name column of the ai_tools table

* DEV: Validate duplicate tool_name between built-in tools and custom tools

* lint

* chore: fix linting

* fix conflict mistakes

* chore: correct icon class

* chore: fix failed specs

* Add max_length to tool_name

* chore: correct the option name

* lint fixes

* fix lint issues
2025-02-07 14:34:47 +11:00
Roman Rizzi
a53719ab8e
FIX: Open AI embeddings config migration & Seeded indexes cleanup (#1092)
This change fixes two different problems.

First, we add a data migration to migrate the configuration of sites using OpenAI's embedding model. There was a window between the embedding config changes and #1087 where sites could end up in a broken state due to an unconfigured selected model setting, as reported on https://meta.discourse.org/t/-/348964.

The second fix drops pre-seeded search indexes of the models we didn't migrate and corrects the ones where the dimensions don't match. Since the index uses the model ID, new embedding configs could use one of these indexes even when the dimensions no longer match.
2025-01-27 15:24:43 -03:00
Roman Rizzi
ad7bb9bd31
DEV: Promote historical post-deploy migrations (#1091) 2025-01-24 11:49:15 -03:00
Roman Rizzi
5a97752117
FIX: Always raise the single exception/Open AI models migration (#1087) 2025-01-23 15:30:06 -03:00
Sam
8bf350206e
FEATURE: track duration of AI calls (#1082)
* FEATURE: track duration of AI calls

* annotate
2025-01-23 11:32:12 +11:00
Roman Rizzi
e2e753d73c
FEATURE: Formalize support for matryoshka dimensions. (#1083)
We have a flag to signal we are shortening the embeddings of a model.
It's currently only used in OpenAI's text-embedding-3-*, but we plan to use it for other services.
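
Conceptually, the shortening is just truncate-and-renormalize (a minimal sketch, not the plugin's code):

    # Truncate an embedding to the configured number of dimensions
    # ("matryoshka" shortening), then re-normalize so inner-product and
    # cosine comparisons still behave correctly.
    def shorten_embedding(embedding, dimensions)
      slice = embedding.first(dimensions)
      norm = Math.sqrt(slice.sum { |x| x * x })
      slice.map { |x| x / norm }
    end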
2025-01-22 11:26:46 -03:00
我秦始皇
654f90f1cd
FIX: convert provider_params hash to json before db insert (#1081)
* FIX: convert provider_params hash to json before db insert

* FIX: lint issues in config migration

* FIX: simplify provider_params json conversion
2025-01-22 09:55:41 -03:00
Roman Rizzi
3b66fb3e87
FIX: Restore the accidentally deleted query prefix. (#1079)
Additionally, we add a prefix for embedding generation.
Both are stored in the definitions table.
2025-01-21 14:10:31 -03:00
Roman Rizzi
f5cf1019fb
FEATURE: configurable embeddings (#1049)
* Use AR model for embeddings features

* endpoints

* Embeddings CRUD UI

* Add presets. Hide a couple more settings

* system specs

* Seed embedding definition from old settings

* Generate search bit index on the fly. cleanup orphaned data

* support for seeded models

* Fix run test for new embedding

* fix selected model not set correctly
2025-01-21 12:23:19 -03:00
Roman Rizzi
4784e7fe43
FIX: Set default for existing records. (#1073)
We'll later copy the correct value from content_range. 1 is the minimum highest post number a topic can have.
2025-01-16 10:38:53 -03:00
Roman Rizzi
46fcdb6ba5
FIX: Make summaries backfill job more resilient. (#1071)
To quickly select backfill candidates without comparing SHAs, we compare the last summarized post to the topic's highest_post_number. However, hiding or deleting a post and adding a small action will update this column, causing the job to stall and re-generate the same summary repeatedly until someone posts a regular reply. On top of this, this is not always true for topics with `best_replies`, as this last reply isn't necessarily included.

Since this is not evident at first glance and each summarization strategy picks its targets differently, I'm opting to simplify the backfill logic and how we track potential candidates.

The first step is dropping `content_range`, which serves no purpose; it's there because summary caching was supposed to work differently at the beginning. So instead, I'm replacing it with a column called `highest_target_number`, which tracks `highest_post_number` for topics and could track other things, like a channel's `message_count`, in the future.

Now that we have this column, when selecting every potential backfill candidate we'll check if the summary is truly outdated by comparing the SHAs; if it's not, we just update the column and move on.
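
The selection logic boils down to something like this sketch (names such as `AiSummary`, `content_sha`, and `regenerate_summary!` are placeholders, not the exact API):

    # Candidates: topics with no summary, or whose tracked target number lags.
    candidates =
      Topic.joins("LEFT JOIN ai_summaries ON ai_summaries.target_id = topics.id")
           .where(
             "ai_summaries.id IS NULL OR " \
             "ai_summaries.highest_target_number < topics.highest_post_number"
           )

    candidates.find_each do |topic|
      summary = AiSummary.find_by(target_id: topic.id)

      if summary && summary.content_sha == compute_content_sha(topic)
        # Content is unchanged; just advance the tracking column and move on.
        summary.update!(highest_target_number: topic.highest_post_number)
      else
        regenerate_summary!(topic)
      end
    end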
2025-01-16 09:42:53 -03:00
Roman Rizzi
65456c8b30
DEV: Migration to remove old embeddings tables (#1067)
* DEV: Migration to remove old embeddings tables

* Check for table existence
2025-01-14 17:13:34 -03:00
Roman Rizzi
c4d2b7de1d
PERF: Optimize backfill query to prevent statement timeouts (#1066) 2025-01-14 15:39:19 -03:00
Roman Rizzi
6721c6751d
FIX: Do batches for backfilling huge embeddings tables (#1065) 2025-01-14 14:42:40 -03:00
Roman Rizzi
356ea77201
FIX: Split backfill into separate migrations to use independent transactions (#1063) 2025-01-14 13:30:52 -03:00
Roman Rizzi
09ca123757
FIX: Split statements to avoid timeout (#1062) 2025-01-14 12:54:18 -03:00
Roman Rizzi
65bbcd71fc
DEV: Embedding tables' model_id has to be a bigint (#1058)
* DEV: Embedding tables' model_id has to be a bigint

* Drop old search_bit indexes

* copy rag fragment embeddings created during deploy window
2025-01-14 10:53:06 -03:00
Sam
d07cf51653
FEATURE: llm quotas (#1047)
Adds a comprehensive quota management system for LLM models that allows:

- Setting per-group (applied per user in the group) token and usage limits with configurable durations
- Tracking and enforcing token/usage limits across user groups
- Quota reset periods (hourly, daily, weekly, or custom)
- Admin UI for managing quotas with real-time updates

This system provides granular control over LLM API usage by allowing admins
to define limits on both total tokens and number of requests per group.
It supports multiple concurrent quotas per model and automatically handles
quota resets.
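
The enforcement boils down to a check like this before each completion (a rough sketch under assumed names; `LlmQuota`, `usage_for`, `max_tokens`, and `max_usages` are illustrative):

    # Check every quota attached to the user's groups for this model
    # before allowing another completion request.
    def within_quota?(user, llm_model)
      quotas = LlmQuota.where(llm_model_id: llm_model.id, group_id: user.group_ids)

      quotas.all? do |quota|
        usage = quota.usage_for(user) # tokens + request count since the last reset
        usage.total_tokens < quota.max_tokens && usage.usages < quota.max_usages
      end
    end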


Co-authored-by: Keegan George <kgeorge13@gmail.com>
2025-01-14 15:54:09 +11:00
Roman Rizzi
eae527f99d
REFACTOR: A Simpler way of interacting with embeddings tables. (#1023)
* REFACTOR: A Simpler way of interacting with embeddings' tables.

This change adds a new abstraction called `Schema`, which acts as a repository that supports the same DB features `VectorRepresentation::Base` has, except that it removes the need for duplicated methods per embeddings table.

It is also a bit more flexible when performing a similarity search because you can pass it a block that gives you access to the builder, allowing you to add multiple joins/where conditions.
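
A hedged usage sketch (the method names and builder API shown here are approximations, not the exact interface):

    # A Schema repository scoped to one embeddings table; the block exposes
    # the query builder so callers can add joins and where conditions.
    schema = DiscourseAi::Embeddings::Schema.for(Topic)

    results =
      schema.similarity_search(query_embedding) do |builder|
        builder.join("topics t ON t.id = topic_embeddings.topic_id")
        builder.where("t.category_id = ?", category_id)
      end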
2024-12-13 10:15:21 -03:00
Sam
47f5da7e42
FEATURE: Add AI-powered spam detection for new user posts (#1004)
This introduces a comprehensive spam detection system that uses LLM models
to automatically identify and flag potential spam posts. The system is
designed to be both powerful and configurable while preventing false positives.

Key Features:
* Automatically scans first 3 posts from new users (TL0/TL1)
* Creates dedicated AI flagging user to distinguish from system flags
* Tracks false positives/negatives for quality monitoring
* Supports custom instructions to fine-tune detection
* Includes test interface for trying detection on any post

Technical Implementation:
* New database tables:
  - ai_spam_logs: Stores scan history and results
  - ai_moderation_settings: Stores LLM config and custom instructions
* Rate limiting and safeguards (sketched after this list):
  - Minimum 10-minute delay between rescans
  - Only scans significant edits (>10 char difference)
  - Maximum 3 scans per post
  - 24-hour maximum age for scannable posts
* Admin UI features:
  - Real-time testing capabilities
  - 7-day statistics dashboard
  - Configurable LLM model selection
  - Custom instruction support

Security and Performance:
* Respects trust levels - only scans TL0/TL1 users
* Skips private messages entirely
* Stops scanning users after 3 successful public posts
* Includes comprehensive test coverage
* Maintains audit log of all scan attempts
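
Putting the safeguards together, the pre-scan check looks roughly like this (names such as `AiSpamLog.post_id` and `public_post_count` are assumptions):

    # Mirrors the eligibility rules described above.
    def should_scan_for_spam?(post)
      user = post.user

      return false if user.trust_level > TrustLevel[1]              # only TL0/TL1
      return false if post.topic.private_message?                   # skip PMs entirely
      return false if post.created_at < 24.hours.ago                # too old to scan
      return false if AiSpamLog.where(post_id: post.id).count >= 3  # max 3 scans per post

      user.public_post_count <= 3 # assumed counter; stop after 3 successful public posts
    end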


---------

Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
2024-12-12 09:17:25 +11:00
Sam
117c06220e
FEATURE: allow artifacts to be updated (#980)
Add support for versioned artifacts with improved diff handling

* Add versioned artifacts support allowing artifacts to be updated and tracked
  - New `ai_artifact_versions` table to store version history
  - Support for updating artifacts through a new `UpdateArtifact` tool
  - Add version-aware artifact rendering in posts
  - Include change descriptions for version tracking

* Enhance artifact rendering and security
  - Add support for module-type scripts and external JS dependencies
  - Expand CSP to allow trusted CDN sources (unpkg, cdnjs, jsdelivr, googleapis)
  - Improve JavaScript handling in artifacts

* Implement robust diff handling system (this is dormant but ready to use once LLMs catch up)
  - Add new DiffUtils module for applying changes to artifacts
  - Support for unified diff format with multiple hunks
  - Intelligent handling of whitespace and line endings
  - Comprehensive error handling for diff operations

* Update routes and UI components
  - Add versioned artifact routes
  - Update markdown processing for versioned artifacts

Also

- Tweaks summary prompt
- Improves upload support in custom tool to also provide urls
2024-12-03 07:23:31 +11:00
Roman Rizzi
0abd4b1244
FIX: Sentiment classification results needs to be transformed before saving (#983) 2024-11-29 17:31:56 -03:00
Sam
bc0657f478
FEATURE: AI Usage page (#964)
- Added a new admin interface to track AI usage metrics, including tokens, features, and models.
- Introduced a new route `/admin/plugins/discourse-ai/ai-usage` and supporting API endpoint in `AiUsageController`.
- Implemented `AiUsageSerializer` for structuring AI usage data.
- Integrated CSS stylings for charts and tables under `stylesheets/modules/llms/common/usage.scss`.
- Enhanced backend with `AiApiAuditLog` model changes: added a `cached_tokens` column (implemented with OpenAI for now) with relevant DB migration and indexing.
- Created `Report` module for efficient aggregation and filtering of AI usage metrics.
- Updated AI Bot title generation logic to attribute logs correctly to the user vs. the bot
- Extended test coverage for the new tracking features, ensuring data consistency and access controls.
2024-11-29 06:26:48 +11:00
Rafael dos Santos Silva
23193ee6f2
FEATURE: Calculate gists from non hot topics too (#958)
Also renames some settings to remove 'hot' references.
2024-11-26 13:44:12 -03:00
Roman Rizzi
95762723de
PERF: Preload only gists when including summaries in topic list (#948)
* PERF: Preload only gists when including summaries in topic list

* Add unique index on summaries and dedup existing records

* Make hot topics batch size setting hidden
2024-11-25 12:24:02 -03:00
Sam
0d7f353284
FEATURE: AI artifacts (#898)
This is a significant PR that introduces AI Artifacts functionality to the discourse-ai plugin along with several other improvements. Here are the key changes:

1. AI Artifacts System:
   - Adds a new `AiArtifact` model and database migration
   - Allows creation of web artifacts with HTML, CSS, and JavaScript content
   - Introduces security settings (`strict`, `lax`, `disabled`) for controlling artifact execution
   - Implements artifact rendering in iframes with sandbox protection
   - New `CreateArtifact` tool for AI to generate interactive content

2. Tool System Improvements:
   - Adds support for partial tool calls, allowing incremental updates during generation
   - Better handling of tool call states and progress tracking
   - Improved XML tool processing with CDATA support
   - Fixes for tool parameter handling and duplicate invocations

3. LLM Provider Updates:
   - Updates for Anthropic Claude models with correct token limits
   - Adds support for native/XML tool modes in Gemini integration
   - Adds new model configurations including Llama 3.1 models
   - Improvements to streaming response handling

4. UI Enhancements:
   - New artifact viewer component with expand/collapse functionality
   - Security controls for artifact execution (click-to-run in strict mode)
   - Improved dialog and response handling
   - Better error management for tool execution

5. Security Improvements:
   - Sandbox controls for artifact execution
   - Public/private artifact sharing controls
   - Security settings to control artifact behavior
   - CSP and frame-options handling for artifacts

6. Technical Improvements:
   - Better post streaming implementation
   - Improved error handling in completions
   - Better memory management for partial tool calls
   - Enhanced testing coverage

7. Configuration:
   - New site settings for artifact security
   - Extended LLM model configurations
   - Additional tool configuration options

This PR significantly enhances the plugin's capabilities for generating and displaying interactive content while maintaining security and providing flexible configuration options for administrators.
2024-11-19 09:22:39 +11:00
Roman Rizzi
9505a8976c
FEATURE: Automatically backfill regular summaries. (#892)
This change introduces a job to summarize topics and cache the results automatically. We provide a setting to control how many topics we'll backfill per hour, plus a minimum word count a topic must have to qualify.

We'll prioritize topics without summary over outdated ones.
2024-11-04 17:48:11 -03:00
Sam
be0b78cacd
FEATURE: new endpoint for directly accessing a persona (#876)
A new endpoint, `/admin/plugins/discourse-ai/ai-personas/stream-reply.json`, was added.

This endpoint streams data directly from a persona and can be used
to access a persona from remote systems, leaving a paper trail in
PMs about the conversation that happened.

This endpoint is only accessible to admins.

---------

Co-authored-by: Gabriel Grubba <70247653+Grubba27@users.noreply.github.com>
Co-authored-by: Keegan George <kgeorge13@gmail.com>
2024-10-30 10:28:20 +11:00
Bianca Nenciu
294c364a75
DEV: Fix mismatched column types (#868)
The primary key is usually a bigint column, but the foreign key columns
are usually of integer type. This can lead to issues when joining these
columns due to mismatched types and different value ranges.

This was using a temporary plugin / test API to make tests pass, but it
is safe to alter the "ai_document_fragment_embeddings" and
"rag_document_fragments" tables because they usually have fewer than 1M
rows and the migration is going to be fast.

Depending on the size of the community, "classification_results" table
may have more than 1M rows and the migration will lock the table for a
longer time. However, classification runs in background jobs and they
will be automatically retried if they fail due to the lock, which makes
it acceptable.
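
The fix is essentially a type-widening migration like the following sketch (the column names here are assumptions):

    # Align integer foreign keys with the bigint primary keys they reference.
    class FixMismatchedColumnTypes < ActiveRecord::Migration[7.0]
      def up
        change_column :rag_document_fragments, :upload_id, :bigint
        change_column :classification_results, :target_id, :bigint
      end

      def down
        raise ActiveRecord::IrreversibleMigration
      end
    end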
2024-10-28 15:36:42 +02:00
Sam
059d3b6fd2
FEATURE: better logging for automation reports (#853)
A new feature_context json column was added to ai_api_audit_logs.

This allows us to store rich JSON context on any LLM request made.

This new field now stores automation id and name.

Additionally, this allows llm_triage to specify a maximum number of tokens.

This means that you can limit the cost of llm triage by scanning only
the first N tokens of a post.
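
Usage amounts to attaching a JSON blob to the audit row (a sketch; the exact call site differs):

    # Attach rich JSON context (automation id and name) to an audit log row.
    # Names other than feature_context are assumptions.
    log.update!(
      feature_context: { automation_id: automation.id, automation_name: automation.name }
    )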
2024-10-23 16:49:56 +11:00
Sam
bdf3b6268b
FEATURE: smarter persona tethering (#832)
Splits persona permissions so you can allow a persona on:

- chat dms
- personal messages
- topic mentions
- chat channels

(any combination is allowed)

Previously we did not have this flexibility.

Additionally, adds the ability to "tether" a language model to a persona so it will always be used by the persona. This allows people to use a cheaper language model for one group of people and a more expensive one for others.
2024-10-16 07:20:31 +11:00
Roman Rizzi
c7acb4a6a0
REFACTOR: Support of different summarization targets/prompts. (#835)
* DEV: Add summary types

* Refactor for different summary types

* Use enum for summary types

* Update lib/summarization/strategies/topic_summary.rb

Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>

* Update lib/summarization/strategies/topic_gist.rb

Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>

* Update lib/summarization/strategies/chat_messages.rb

Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>

* Fix chat_messages single prompt

* Small tweak to the chat summarization prompt

---------

Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
2024-10-15 13:53:26 -03:00
Rafael dos Santos Silva
791fad1e6a
FEATURE: Index embeddings using bit vectors (#824)
On very large sites, the rare cache misses for Related Topics can take around 200ms, which affects our p99 metric on the topic page. In order to mitigate this impact, we now have several tools at our disposal.

The first is to migrate the index embedding type from halfvec to bit and change the related topic query to leverage the new bit index by changing the search algorithm from inner product to Hamming distance. This will reduce our index sizes by 90%, severely reducing the impact of embeddings on our storage. By making the related query a bit smarter, we can have zero impact on recall by using the index to over-capture N*2 results, then re-ordering those N*2 using the full halfvec vectors and taking the top N. The expected impact is to go from 200ms to <20ms for cache misses and from a 2.5GB index to a 250MB index on a large site.
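
Schematically, the over-capture query looks like this (table and column names are placeholders; in pgvector, `<~>` is Hamming distance on bit vectors and `<#>` is negative inner product):

    # Grab 2N rough candidates cheaply via the small bit index, then
    # re-rank them with the full halfvec vectors and keep the top N.
    n = 10
    DB.query(<<~SQL, query_bits: query_bits, query_vec: query_vec, over: n * 2, limit: n)
      SELECT topic_id
      FROM (
        SELECT topic_id, embeddings
        FROM topic_embeddings
        ORDER BY embeddings_bit <~> :query_bits
        LIMIT :over
      ) candidates
      ORDER BY embeddings <#> :query_vec
      LIMIT :limit
    SQL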

Another tool is migrating our index type from IVFFLAT to HNSW, which can improve cache-miss performance even further, eventually putting us in the under-5ms territory.

Co-authored-by: Roman Rizzi <roman@discourse.org>
2024-10-14 13:26:03 -03:00
Sam
6c4c96e83c
FEATURE: allow persona to only force tool calls on limited replies (#827)
This introduces another configuration that allows operators to
limit the number of interactions with forced tool usage.

Forced tools are very handy in initial LLM interactions, but as the
conversation progresses they can hinder it by slowing things down
and adding confusion.
2024-10-11 07:23:42 +11:00
Sam
5cbc9190eb
FEATURE: RAG search within tools (#802)
This allows custom tools access to uploads and sophisticated searches using embeddings.

It introduces:

 - A shared front end for listing and uploading files (shared with personas)
 - Backend implementation of the index.search function within a custom tool.

Custom tools may now search through uploaded files:

    function invoke(params) {
      // Search the tool's uploaded files using embeddings.
      return index.search(params.query);
    }

This means that RAG implementers may now preload tools with knowledge and have high fidelity over
the search.

The search function supports:

    - specifying max results
    - specifying a subset of files to search (from uploads)

Also

 - Improved documentation for tools (when creating a tool, a preamble explains all the functionality)
 - Uploads were a bit finicky; fixed an edge case where the UI would not show them as updated
2024-09-30 17:27:50 +10:00
Sam
03eccbe392
FEATURE: Make tool support polymorphic (#798)
Polymorphic RAG means that we will be able to access RAG fragments from both AiPersona and AiCustomTool.

In turn this gives us support for richer RAG implementations.
2024-09-16 08:17:17 +10:00
Rafael dos Santos Silva
1686a8a683
DEV: Move to single table per embeddings type (#561)
Also moves us to halfvecs for speed and disk usage gains
2024-08-08 11:55:20 -03:00
Roman Rizzi
20efc9285e
FIX: Correctly save provider-specific params for new models. (#744)
Creating a new model, either manually or from presets, doesn't initialize the `provider_params` object, meaning its custom params won't persist.

Additionally, this change adds some validations for Bedrock params, which are mandatory, and a clear message when a completion fails because we cannot build the URL.
2024-08-07 16:08:56 -03:00
Natalie Tay
7cd7f71857
DEV: Promote historical post-deploy migrations (#728) 2024-07-30 01:44:57 +08:00
Rafael dos Santos Silva
665637fbad
FIX: Properly fix ai_summaries table sequence (#727)
* FIX: Properly fix ai_summaries table sequence

The previous attempt at 3815360 could fail due to a race introduced in 1b0ba91, where summaries were erroneously migrated to core in a post_migrate.
2024-07-26 14:45:01 -03:00
Roman Rizzi
5c196bca89
FEATURE: Track if a model can do vision in the llm_models table (#725)
* FEATURE: Track if a model can do vision in the llm_models table

* Data migration
2024-07-24 16:29:47 -03:00
Sam
38153608f8
FIX: repair id sequence identity on summary table (#701)
1. Repairs the identity on the summary table; we migrated data without resetting it.
2. Adds an index to the ai_summary table to match the expected retrieval pattern
2024-07-04 12:23:46 +10:00
Keegan George
1b0ba9197c
DEV: Add summarization logic from core (#658) 2024-07-02 08:51:59 -07:00
Sam
b863ddc94b
FEATURE: custom user defined tools (#677)
Introduces custom AI tools functionality. 

1. Why it was added:
   The PR adds the ability to create, manage, and use custom AI tools within the Discourse AI system. This feature allows for more flexibility and extensibility in the AI capabilities of the platform.

2. What it does:
   - Introduces a new `AiTool` model for storing custom AI tools
   - Adds CRUD (Create, Read, Update, Delete) operations for AI tools
   - Implements a tool runner system for executing custom tool scripts
   - Integrates custom tools with existing AI personas
   - Provides a user interface for managing custom tools in the admin panel

3. Possible use cases:
   - Creating custom tools for specific tasks or integrations (stock quotes, currency conversion etc...)
   - Allowing administrators to add new functionalities to AI assistants without modifying core code
   - Implementing domain-specific tools for particular communities or industries

4. Code structure:
   The PR introduces several new files and modifies existing ones:

   a. Models:
      - `app/models/ai_tool.rb`: Defines the AiTool model
      - `app/serializers/ai_custom_tool_serializer.rb`: Serializer for AI tools

   b. Controllers:
      - `app/controllers/discourse_ai/admin/ai_tools_controller.rb`: Handles CRUD operations for AI tools

   c. Views and Components:
      - New Ember.js components for tool management in the admin interface
      - Updates to existing AI persona management components to support custom tools 

   d. Core functionality:
      - `lib/ai_bot/tool_runner.rb`: Implements the custom tool execution system
      - `lib/ai_bot/tools/custom.rb`: Defines the custom tool class

   e. Routes and configurations:
      - Updates to route configurations to include new AI tool management pages

   f. Migrations:
      - `db/migrate/20240618080148_create_ai_tools.rb`: Creates the ai_tools table

   g. Tests:
      - New test files for AI tool functionality and integration

The PR integrates the custom tools system with the existing AI persona framework, allowing personas to use both built-in and custom tools. It also includes safety measures such as timeouts and HTTP request limits to prevent misuse of custom tools.

Overall, this PR significantly enhances the flexibility and extensibility of the Discourse AI system by allowing administrators to create and manage custom AI tools tailored to their specific needs.

Co-authored-by: Martin Brennan <martin@discourse.org>
2024-06-27 17:27:40 +10:00
Loïc Guitaut
6f5873b072 DEV: Use Rails 7.0 instead of 7.1 in migrations 2024-06-26 18:32:11 +02:00
Roman Rizzi
f622e2644f
FEATURE: Store provider-specific parameters. (#686)
Previously, we stored request parameters like the OpenAI organization and Bedrock's access key and region as site settings. This change stores them in the `llm_models` table instead, letting us drop more settings while also becoming more flexible.
2024-06-25 08:26:30 +10:00
Roman Rizzi
8d5f901a67
DEV: Rewire AI bot internals to use LlmModel (#638)
* DRAFT: Create AI Bot users dynamically and support custom LlmModels

* Get user associated to llm_model

* Track enabled bots with attribute

* Don't store bot username. Minor touches to migrate default values in settings

* Handle scenario where vLLM uses an SRV record

* Made 3.5-turbo-16k the default version so we can remove the hack
2024-06-18 14:32:14 -03:00
Sam
5abf80cb4e
FIX: do not mark column read only so certain deployments work (#663)
In some cases we may be deploying migrations, seeding, and then
running post migrations. We need this to work, so we give up
on this small window of protection.
2024-06-11 21:32:49 +10:00