180 Commits

Author SHA1 Message Date
Keegan George
a927fd2270
fix 2025-07-17 12:00:05 -07:00
Sam
ab5edae121
FIX: make AI helper more robust (#1484)
* FIX: make AI helper more robust

- If JSON is broken for structured output, lean on a more forgiving parser (see the sketch after this entry)
- Gemini 2.5 Flash does not support temperature; support opting out
- Evals for the assistant were broken; fix the interface
- Add some missing LLMs
- Translator was not mapped correctly to the feature; fix that
- Don't mix XML into the prompt for the translator

* lint

* correct logic

* simplify code

* implement best-effort JSON parsing directly in the structured output object
2025-07-04 14:47:11 +10:00
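
A minimal sketch of what a more forgiving parser could look like, assuming a plain Ruby helper (`best_effort_json_parse` is a hypothetical name, not the plugin's actual API):

```
require "json"

# Try strict parsing first, then fall back to simple repairs for
# truncated or slightly malformed model output.
def best_effort_json_parse(raw)
  JSON.parse(raw)
rescue JSON::ParserError
  repaired = raw.strip.sub(/,\s*\z/, "") # drop a dangling trailing comma
  # Close any braces left open by a truncated stream.
  open_braces = repaired.count("{") - repaired.count("}")
  repaired += "}" * open_braces if open_braces.positive?
  begin
    JSON.parse(repaired)
  rescue JSON::ParserError
    nil # give up gracefully instead of raising mid-stream
  end
end

best_effort_json_parse('{"summary": "ok",') # => {"summary"=>"ok"}
```
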
Rafael dos Santos Silva
d792919ddf
DEV: Move tokenizers to a gem (#1481)
Also renames the Mixtral tokenizer to Mistral.

See gem at github.com/discourse/discourse_ai-tokenizers


Co-authored-by: Roman Rizzi <roman@discourse.org>
2025-07-02 14:43:03 -03:00
Roman Rizzi
5ca7d5f256
FIX: Strip uploads from msg when searching for rag fragments (#1475) 2025-06-30 15:03:17 -03:00
Roman Rizzi
b35f9bcc7c
FEATURE: Use Personas when scanning posts for spam (#1465) 2025-06-27 10:35:47 -03:00
Sam
cc4e9e030f
FIX: normalize keys in structured output (#1468)
* FIX: normalize keys in structured output

Previously we did not validate the hash passed in to structured
outputs, which could be either string-based or symbol-based (see the
normalization sketch after this entry).

Specifically, this broke structured outputs for Gemini in some
cases.

* comment out flake
2025-06-27 15:42:48 +10:00
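
A sketch of the kind of key normalization this implies, assuming a recursive helper (names are illustrative, not the plugin's actual code):

```
# Recursively convert hash keys to strings so string- and symbol-keyed
# schemas behave identically before being sent to a provider.
def normalize_keys(value)
  case value
  when Hash
    value.each_with_object({}) { |(k, v), out| out[k.to_s] = normalize_keys(v) }
  when Array
    value.map { |v| normalize_keys(v) }
  else
    value
  end
end

normalize_keys({ type: "object", properties: { name: { type: "string" } } })
# => {"type"=>"object", "properties"=>{"name"=>{"type"=>"string"}}}
```
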
Sam
37dbd48513
FIX: implement max_output tokens (anthropic/openai/bedrock/gemini/open router) (#1447)
* FIX: implement max_output tokens (anthropic/openai/bedrock/gemini/open router)

Previously this feature existed but was not implemented.
Also updates a number of models in our presets to point to the latest versions.

* implementing in base is safer, simpler and easier to manage

* Anthropic 3.5 is getting older; let's use 4.0 here and fix the spec
2025-06-19 16:00:11 +10:00
Roman Rizzi
8c8fd969ef
FIX: Don't check for #blank? when manipulating chunks (#1428) 2025-06-11 20:38:58 -03:00
Sam
d97307e99b
FEATURE: optionally support OpenAI responses API (#1423)
OpenAI shipped a new API for completions called the "Responses API".

Certain models (o3-pro) require this API.
Additionally, certain features are only made available through the new API.

This allows enabling it per LLM (see the payload sketch after this entry).

see: https://platform.openai.com/docs/api-reference/responses
2025-06-11 17:12:25 +10:00
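
A rough sketch of the payload difference, with field names per OpenAI's public docs (the plugin's actual mapping may differ):

```
# Chat Completions API: POST https://api.openai.com/v1/chat/completions
chat_payload = {
  model: "o3-pro",
  messages: [{ role: "user", content: "Summarize this topic" }],
}

# Responses API: POST https://api.openai.com/v1/responses
responses_payload = {
  model: "o3-pro",
  input: [{ role: "user", content: "Summarize this topic" }],
}
```
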
Roman Rizzi
0338dbea23
FEATURE: Use different personas to power AI helper features.
You can now edit each AI helper prompt individually through personas, limit access to specific groups, set different LLMs, etc.
2025-06-04 14:23:00 -03:00
Rafael dos Santos Silva
478f31de47
FEATURE: add inferred concepts system (#1330)
* FEATURE: add inferred concepts system

This commit adds a new inferred concepts system that:
- Creates a model for storing concept labels that can be applied to topics
- Provides AI personas for finding new concepts and matching existing ones
- Adds jobs for generating concepts from popular topics
- Includes a scheduled job that automatically processes engaging topics

* FEATURE: Extend inferred concepts to include posts

* Adds support for concepts to be inferred from and applied to posts
* Replaces daily task with one that handles both topics and posts
* Adds database migration for posts_inferred_concepts join table
* Updates PersonaContext to include inferred concepts



Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Co-authored-by: Keegan George <kgeorge13@gmail.com>
2025-06-02 14:29:20 -03:00
Roman Rizzi
0ce17a122f
FIX: Correctly pass tool_choice when using Claude models. (#1364)
The `ClaudePrompt` object couldn't access the original prompt's tool_choice attribute, affecting both Anthropic and Bedrock.
2025-05-23 10:36:52 -03:00
Roman Rizzi
d72ad84f8f
FIX: Retry parsing escaped inner JSON to handle control chars. (#1357)
The structured output JSON comes embedded inside the API response, which is itself JSON. Since we have to parse the response to process it, any control characters inside the structured output are unescaped into literal characters, producing invalid inner JSON that breaks during parsing. This change adds a retry mechanism that escapes the string again if parsing fails, preventing the parser from breaking on malformed input (see the sketch after this entry).

For example:

```
  original = '{ "a": "{\\"key\\":\\"value with \\n newline\\"}" }'
  JSON.parse(original) => { "a" => "{\"key\":\"value with \n newline\"}" }
  # At this point, the inner JSON string contains an actual newline.
```
2025-05-21 11:25:59 -03:00
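
A sketch of that retry, assuming a hypothetical `parse_inner_json` helper that re-escapes common control characters before one more attempt:

```
require "json"

def parse_inner_json(inner)
  JSON.parse(inner)
rescue JSON::ParserError
  # Unescaped control characters are invalid inside JSON strings;
  # escape them again and retry once.
  escaped = inner.gsub(/[\n\r\t]/, "\n" => "\\n", "\r" => "\\r", "\t" => "\\t")
  JSON.parse(escaped)
end

outer = JSON.parse('{ "a": "{\\"key\\":\\"value with \\n newline\\"}" }')
parse_inner_json(outer["a"]) # => {"key"=>"value with \n newline"}
```
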
Roman Rizzi
e207eba1a4
FIX: Don't dig on nil when checking for the gemini schema (#1356) 2025-05-21 08:30:47 -03:00
Roman Rizzi
ff2e18f9ca
FIX: Structured output discrepancies. (#1340)
This change fixes two bugs and adds a safeguard.

The first issue is that the schema Gemini expected differed from the one sent, resulting in 400 errors when performing completions.

The second issue was that creating a new persona wouldn't define a method
for `response_format`. This has to be explicitly defined when we wrap it inside the Persona class. There was also a mismatch between the default value and what we stored in the DB: some parts of the code expected symbols as keys and others strings.

Finally, we add a safeguard for when, even if asked to, the model refuses to reply with valid JSON. In this case, we make a best effort to recover and stream the raw response.
2025-05-15 11:32:10 -03:00
Sam
2c6459429f
DEV: use a proper object for tool definition (#1337)
* DEV: use a proper object for tool definition

This moves away from using a loose hash to define tools, which
is error prone.

Given a proper object, we will also be able to coerce the
return values to match the tool definition correctly (see the sketch
after this entry).

* fix xml tools

* fix anthropic tools

* fix specs... a few more to go

* specs are passing

* FIX: coerce values for XML tool calls

* Update spec/lib/completions/tool_definition_spec.rb

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-15 17:32:39 +10:00
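
A sketch of the shape such a tool definition object could take, with hypothetical names (the point is typed parameters that can coerce values, instead of a loose hash):

```
ToolParam = Struct.new(:name, :type, keyword_init: true) do
  def coerce(value)
    case type
    when :integer then Integer(value, exception: false)
    when :boolean then [true, "true"].include?(value)
    else value.to_s
    end
  end
end

class ToolDefinition
  attr_reader :name, :parameters

  def initialize(name:, parameters:)
    @name = name
    @parameters = parameters
  end

  # Coerce raw arguments (often strings from XML tool calls) to the
  # declared parameter types.
  def coerce_arguments(args)
    parameters.to_h { |p| [p.name, p.coerce(args[p.name])] }
  end
end

search = ToolDefinition.new(
  name: "search",
  parameters: [ToolParam.new(name: "max_results", type: :integer)],
)
search.coerce_arguments("max_results" => "5") # => {"max_results"=>5}
```
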
Sam
c34fcc8a95
FEATURE: forum researcher persona for deep research (#1313)
This commit introduces a new Forum Researcher persona specialized in deep forum content analysis along with comprehensive improvements to our AI infrastructure.

Key additions:

    New Forum Researcher persona with advanced filtering and analysis capabilities
    Robust filtering system supporting tags, categories, dates, users, and keywords
    LLM formatter to efficiently process and chunk research results

Infrastructure improvements:

    Implemented CancelManager class to centrally manage AI completion cancellations
    Replaced callback-based cancellation with a more robust pattern
    Added systematic cancellation monitoring with callbacks

Other improvements:

    Added configurable default_enabled flag to control which personas are enabled by default
    Updated translation strings for the new researcher functionality
    Added comprehensive specs for the new components

    Renames Researcher -> Web Researcher

This change makes our AI platform more stable while adding powerful research capabilities that can analyze forum trends and surface relevant content.
2025-05-14 12:36:16 +10:00
Sam
2a62658248
FEATURE: support configurable thinking tokens for Gemini (#1322) 2025-05-08 07:39:50 +10:00
Roman Rizzi
5bc9fdc06b
FIX: Return structured output on non-streaming mode (#1318) 2025-05-06 15:34:30 -03:00
Roman Rizzi
c0a2d4c935
DEV: Use structured responses for summaries (#1252)
* DEV: Use structured responses for summaries

* Fix system specs

* Make response_format a first class citizen and update endpoints to support it

* Response format can be specified in the persona

* lint

* switch to jsonb and make column nullable

* Reify structured output chunks. Move JSON parsing to the depths of Completion

* Switch to JsonStreamingTracker for partial JSON parsing
2025-05-06 10:09:39 -03:00
Rafael dos Santos Silva
4eac377987
DEV: Zero delays on fake endpoint used in tests (#1311) 2025-05-05 17:47:32 -03:00
Sam
a6fa619c31
FEATURE: enforce jpg/png for all images (#1309)
This ensures webp/gif images are converted to png prior to sending
to LLMs, given webp and gif are not evenly supported.

png/jpg are universally supported and are now the only formats we send
(a conversion sketch follows this entry).

Longer term we need to add support for audio/video/pdf, which some models support.

* more specs
2025-05-05 17:46:37 +10:00
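
A sketch of the conversion step, assuming a shell-out to ImageMagick (the plugin more likely goes through Discourse's own image pipeline):

```
SUPPORTED_FORMATS = %w[png jpg jpeg].freeze

# Re-encode webp/gif (and anything else) to png before the image is
# sent to an LLM.
def ensure_supported_format!(path)
  ext = File.extname(path).delete_prefix(".").downcase
  return path if SUPPORTED_FORMATS.include?(ext)

  png_path = path.sub(/\.\w+\z/, ".png")
  system("magick", path, png_path, exception: true)
  png_path
end
```
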
Sam
17f04c76d8
FEATURE: add OpenAI image generation and editing capabilities (#1293)
This commit enhances the AI image generation functionality by adding support for:

1. OpenAI's GPT-based image generation model (gpt-image-1)
2. Image editing capabilities through the OpenAI API
3. A new "Designer" persona specialized in image generation and editing
4. Two new AI tools: CreateImage and EditImage

Technical changes include:
- Renaming `ai_openai_dall_e_3_url` to `ai_openai_image_generation_url` with a migration
- Adding `ai_openai_image_edit_url` setting for the image edit API endpoint
- Refactoring image generation code to handle both DALL-E and the newer GPT models
- Supporting multipart/form-data for image editing requests

* wild guess, but maybe quantization is breaking the test sometimes;
this increases the distance

* Update lib/personas/designer.rb

Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>

* simplify and de-flake code

* fix, in chat we need enough context so we know exactly what uploads a user uploaded.

* Update lib/personas/tools/edit_image.rb

Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>

* cleanup downloaded files right away

* fix implementation

---------

Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
2025-04-29 17:38:54 +10:00
Mark VanLandingham
b7b9179bc8
FEATURE: Allow for persona & llm selection in bot conversations page (#1276) 2025-04-24 11:17:24 -05:00
Rafael dos Santos Silva
4470e8af9b
FIX: Tables should group only per their key on usage page (#1277) 2025-04-23 15:47:34 -03:00
Keegan George
d26c7ac48d
FEATURE: Add spending metrics to AI usage (#1268)
This update adds metrics for estimated spending in AI usage. To make use of it, admins must add cost details to the LLM config page (input, output, and cached input costs per 1M tokens). After doing so, the metrics will appear in the AI usage dashboard as the AI plugin is used (see the worked example after this entry).
2025-04-17 15:09:48 -07:00
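
A worked example of the estimate this implies, using made-up prices (costs are configured per 1M tokens, so spend is a simple proration):

```
input_cost_per_1m        = 3.00  # USD per 1M input tokens
output_cost_per_1m       = 15.00 # USD per 1M output tokens
cached_input_cost_per_1m = 0.30  # USD per 1M cached input tokens

usage = { input: 120_000, output: 40_000, cached_input: 500_000 }

spend =
  usage[:input] / 1_000_000.0 * input_cost_per_1m +
  usage[:output] / 1_000_000.0 * output_cost_per_1m +
  usage[:cached_input] / 1_000_000.0 * cached_input_cost_per_1m
# => 0.36 + 0.60 + 0.15 = 1.11 USD
```
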
Sam
274a54a324
FEATURE: Update model names and specs (#1262)
* FEATURE: Update model names and specs

- not a bug, but made it explicit that tools and thinking are not a chat thing
- updated all models to latest in presets (Gemini and OpenAI)

* allow larger context windows
2025-04-15 16:33:44 +10:00
Sam
38b492529f
FEATURE: improve context management (#1260)
1. Add age of post to topic context (1 month ago, 1 year ago, etc)
2. Refactor code for simplicity
3. Fix handling of post context in DMs which was not using new handling of uploads
2025-04-14 21:45:48 +10:00
Sam
67e3a610cb
FIX: invalid context construction for responders (#1257)
Prior to this fix we assumed the name field contained usernames,
when in fact they were stored in the id field.

This fixes the context construction and also adds some basic user
information to the context to assist responders in understanding
the cast of characters.
2025-04-12 08:15:31 +10:00
Sam
8b573fe743
FIX: maintain newest uploads correctly when constructing context (#1242) 2025-04-02 10:09:38 -03:00
Sam
5b6d39a206
FEATURE: flexible image handling within messages (#1214)
* DEV: refactor bot internals

This introduces a proper object for bot context; it makes
it simpler to improve context management as we go because we
have a nice object to work with.

It also starts a refactor allowing a single message to have
multiple uploads throughout.

* transplant method to message builder

* chipping away at inline uploads

* image support is improved but not fully fixed yet

partially working in anthropic, still got quite a few dialects to go

* open ai and claude are now working

* Gemini is now working as well

* fix nova

* more dialects...

* fix ollama

* fix specs

* update artifact fixed

* more tests

* spam scanner

* pass more specs

* bunch of specs improved

* more bug fixes.

* all the rest of the tests are working

* improve tests coverage and ensure custom tools are aware of new context object

* tests are working, but we need more tests

* resolve merge conflict

* new preamble and expanded specs on ai tool

* remove concept of "standalone tools"

This is no longer needed; we can set custom raw, and tool details are injected into tool calls.
2025-03-31 12:39:07 -03:00
Sam
1dde82eb58
FEATURE: allow specifying tool use none in completion prompt
This PR adds support for disabling further tool calls by setting tool_choice to :none across all supported LLM providers:

- OpenAI: Uses "none" tool_choice parameter
- Anthropic: Uses {type: "none"} and adds a prefill message to prevent confusion
- Gemini: Sets function_calling_config mode to "NONE"
- AWS Bedrock: Doesn't natively support tool disabling, so adds a prefill message

We previously used to disable tool calls by simply removing tool definitions, but this would cause errors with some providers. This implementation uses the supported method appropriate for each provider while providing a fallback for Bedrock (see the mapping sketch after this entry).

Co-authored-by: Natalie Tay <natalie.tay@gmail.com>

* remove stray puts

* cleaner chain breaker for last tool call (works in thinking)

remove unused code

* improve test

---------

Co-authored-by: Natalie Tay <natalie.tay@gmail.com>
2025-03-25 08:06:43 +11:00
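
A sketch of the per-provider mapping described above (provider payload fields as stated in the commit; the method and symbols are illustrative):

```
def tool_choice_payload(provider, tool_choice)
  return {} unless tool_choice == :none

  case provider
  when :open_ai
    { tool_choice: "none" }
  when :anthropic
    { tool_choice: { type: "none" } } # plus a prefill message
  when :gemini
    { tool_config: { function_calling_config: { mode: "NONE" } } }
  when :bedrock
    {} # no native support; relies on the prefill message fallback
  end
end
```
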
Rafael dos Santos Silva
0e2dd7378f
DEV: Support for extra model params in LLM completions (#1208)
This will be useful to start experiment with non-standard features like
structured outputs.
2025-03-21 13:49:23 -03:00
Sam
9211b211f5
FEATURE: silent triage using ai persona (#1193)
This adds a new mode in persona triage where nothing is posted on topics,
letting people perform all triage actions using tools.

Additionally, it introduces new APIs to create chat messages from tools, which can be useful in certain moderation scenarios.

Co-authored-by: Natalie Tay <natalie.tay@gmail.com>

* remove TODO code

---------

Co-authored-by: Natalie Tay <natalie.tay@gmail.com>
2025-03-17 15:14:09 +11:00
Sam
8f4cd2fcbd
FEATURE: allow disabling of top_p and temp for thinking models (#1184)
Thinking models such as Claude 3.7 Thinking and o1 / o3 do not
support top_p or temp.

Previously you would have to carefully remove them from everywhere
via provider params; we now support blanket removal without forcing
people to update automation rules or personas.
2025-03-11 16:54:02 +11:00
Sam
01893bb6ed
FEATURE: Add persona-based replies and whisper support to LLM triage (#1170)
This PR enhances the LLM triage automation with several important improvements:

- Add ability to use AI personas for automated replies instead of canned replies
- Add support for whisper responses
- Refactor LLM persona reply functionality into a reusable method
- Add new settings to configure response behavior in automations
- Improve error handling and logging
- Fix handling of personal messages in the triage flow
- Add comprehensive test coverage for new features
- Make personas configurable with more flexible requirements

This allows for more dynamic and context-aware responses in automated workflows, with better control over visibility and attribution.
2025-03-06 17:18:15 +11:00
Sam
e255c7a8f0
FEATURE: automation triage using personas (#1126)
## LLM Persona Triage
- Allows automated responses to posts using AI personas
- Configurable to respond as regular posts or whispers
- Adds context-aware formatting for topics and private messages
- Provides special handling for topic metadata (title, category, tags)

## LLM Tool Triage
- Enables custom AI tools to process and respond to posts
- Tools can analyze post content and invoke personas when needed
- Zero-parameter tools can be used for automated workflows
- Not enabled in production yet

## Implementation Details
- Added new scriptable registration in discourse_automation/ directory
- Created core implementation in lib/automation/ modules
- Enhanced PromptMessagesBuilder with topic-style formatting
- Added helper methods for persona and tool selection in UI
- Extended AI Bot functionality to support whisper responses
- Added rate limiting to prevent abuse

## Other Changes
- Added comprehensive test coverage for both automation types
- Enhanced tool runner with LLM integration capabilities
- Improved error handling and logging

This feature allows forum admins to configure AI personas to automatically respond to posts based on custom criteria and leverage AI tools for more complex triage workflows.

Tool Triage has been disabled in production while we finalize details of new scripting capabilities.
2025-03-06 09:41:09 +11:00
Sam
f6eedf3e0b
FEATURE: implement thinking token support (#1155)
Adds support for "thinking tokens", a feature that exposes the model's reasoning process before providing the final response (a rendering sketch follows this entry). Key improvements include:

- Add a new Thinking class to handle thinking content from LLMs
- Modify endpoints (Claude, AWS Bedrock) to handle thinking output
- Update AI bot to display thinking in collapsible details section
- Fix SEARCH/REPLACE blocks to support empty replacement strings and general improvements to artifact editing
- Allow configurable temperature in triage and report automations
- Various bug fixes and improvements to diff parsing
2025-03-04 12:22:30 +11:00
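
A sketch of the collapsible rendering, assuming the bot wraps thinking output in a standard HTML details block (names hypothetical):

```
def render_with_thinking(thinking, answer)
  return answer if thinking.to_s.strip.empty?

  <<~MARKDOWN
    <details>
      <summary>Thinking</summary>

      #{thinking}
    </details>

    #{answer}
  MARKDOWN
end
```
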
Rafael dos Santos Silva
eccfbad046
UX: Update Sambanova LLM templates (#1157) 2025-02-28 15:00:15 -03:00
Sam
fe19133dd4
FEATURE: full support for Sonnet 3.7 (#1151)
* FEATURE: full support for Sonnet 3.7

- Adds support for Sonnet 3.7 with reasoning on bedrock and anthropic
- Fixes regression where provider params were not populated

Note: reasoning tokens are hardcoded to a minimum of 100 and a maximum of 65,536.

* FIX: OpenAI non-reasoning models need to use the deprecated max_tokens
2025-02-25 17:32:12 +11:00
Sam
84e791a941
FIX: legacy reasoning models not working, missing provider params (#1149)
* FIX: legacy reasoning models not working, missing provider params

1. Legacy reasoning models (o1-preview / o1-mini) do not support developer or system messages, so do not use them.
2. LLM editor form not showing all provider params due to missing remap

* add system test
2025-02-24 16:38:23 +11:00
Sam
12f00a62d2
FIX: use max_completion_tokens for open ai models (#1134)
max_tokens is now deprecated per the API
2025-02-19 15:49:15 +11:00
Sam
9a6aec2cf6
DEV: eval support for tool calls (#1128)
Also fixes anthropic with no params, streaming calls
2025-02-18 07:58:54 +11:00
Sam
5e80f93e4c
FEATURE: PDF support for rag pipeline (#1118)
This PR introduces several enhancements and refactorings to the AI Persona and RAG (Retrieval-Augmented Generation) functionalities within the discourse-ai plugin. Here's a breakdown of the changes:

**1. LLM Model Association for RAG and Personas:**

-   **New Database Columns:** Adds `rag_llm_model_id` to both `ai_personas` and `ai_tools` tables. This allows specifying a dedicated LLM for RAG indexing, separate from the persona's primary LLM.  Adds `default_llm_id` and `question_consolidator_llm_id` to `ai_personas`.
-   **Migration:**  Includes a migration (`20250210032345_migrate_persona_to_llm_model_id.rb`) to populate the new `default_llm_id` and `question_consolidator_llm_id` columns in `ai_personas` based on the existing `default_llm` and `question_consolidator_llm` string columns, and a post migration to remove the latter.
-   **Model Changes:**  The `AiPersona` and `AiTool` models now `belong_to` an `LlmModel` via `rag_llm_model_id`. The `LlmModel.proxy` method now accepts an `LlmModel` instance instead of just an identifier.  `AiPersona` now has `default_llm_id` and `question_consolidator_llm_id` attributes.
-   **UI Updates:**  The AI Persona and AI Tool editors in the admin panel now allow selecting an LLM for RAG indexing (if PDF/image support is enabled).  The RAG options component displays an LLM selector.
-   **Serialization:** The serializers (`AiCustomToolSerializer`, `AiCustomToolListSerializer`, `LocalizedAiPersonaSerializer`) have been updated to include the new `rag_llm_model_id`, `default_llm_id` and `question_consolidator_llm_id` attributes.

**2. PDF and Image Support for RAG:**

-   **Site Setting:** Introduces a new hidden site setting, `ai_rag_pdf_images_enabled`, to control whether PDF and image files can be indexed for RAG. This defaults to `false`.
-   **File Upload Validation:** The `RagDocumentFragmentsController` now checks the `ai_rag_pdf_images_enabled` setting and allows PDF, PNG, JPG, and JPEG files if enabled.  Error handling is included for cases where PDF/image indexing is attempted with the setting disabled.
-   **PDF Processing:** Adds a new utility class, `DiscourseAi::Utils::PdfToImages`, which uses ImageMagick (`magick`) to convert PDF pages into individual PNG images. A maximum PDF size and conversion timeout are enforced (a sketch of the conversion step follows this entry).
-   **Image Processing:** A new utility class, `DiscourseAi::Utils::ImageToText`, is included to handle OCR for the images and PDFs.
-   **RAG Digestion Job:** The `DigestRagUpload` job now handles PDF and image uploads. It uses `PdfToImages` and `ImageToText` to extract text and create document fragments.
-   **UI Updates:**  The RAG uploader component now accepts PDF and image file types if `ai_rag_pdf_images_enabled` is true. The UI text is adjusted to indicate supported file types.

**3. Refactoring and Improvements:**

-   **LLM Enumeration:** The `DiscourseAi::Configuration::LlmEnumerator` now provides a `values_for_serialization` method, which returns a simplified array of LLM data (id, name, vision_enabled) suitable for use in serializers. This avoids exposing unnecessary details to the frontend.
-   **AI Helper:** The `AiHelper::Assistant` now takes optional `helper_llm` and `image_caption_llm` parameters in its constructor, allowing for greater flexibility.
-   **Bot and Persona Updates:** Several updates were made across the codebase, changing the string-based association to an LLM to the new model-based one.
-   **Audit Logs:** The `DiscourseAi::Completions::Endpoints::Base` now formats raw request payloads as pretty JSON for easier auditing.
- **Eval Script:** An evaluation script is included.

**4. Testing:**

-    The PR introduces a new eval system for LLMs, this allows us to test how functionality works across various LLM providers. This lives in `/evals`
2025-02-14 12:15:07 +11:00
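
A sketch of the PDF-to-image conversion step, assuming a direct shell-out to ImageMagick (the real `PdfToImages` class also enforces the maximum PDF size and conversion timeout noted above):

```
require "tmpdir"

def pdf_to_page_images(pdf_path, density: 150)
  out_dir = Dir.mktmpdir("pdf_pages")
  # %d expands to the page number, producing page-0.png, page-1.png, ...
  system(
    "magick", "-density", density.to_s, pdf_path,
    File.join(out_dir, "page-%d.png"),
    exception: true,
  )
  Dir[File.join(out_dir, "page-*.png")].sort
end
```
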
Sam
a7d032fa28
DEV: artifact system update (#1096)
### Why

This pull request fundamentally restructures how AI bots create and update web artifacts to address critical limitations in the previous approach:

1.  **Improved Artifact Context for LLMs**: Previously, artifact creation and update tools included the *entire* artifact source code directly in the tool arguments. This overloaded the Language Model (LLM) with raw code, making it difficult for the LLM to maintain a clear understanding of the artifact's current state when applying changes. The LLM would struggle to differentiate between the base artifact and the requested modifications, leading to confusion and less effective updates.
2.  **Reduced Token Usage and History Bloat**: Including the full artifact source code in every tool interaction was extremely token-inefficient.  As conversations progressed, this redundant code in the history consumed a significant number of tokens unnecessarily. This not only increased costs but also diluted the context for the LLM with less relevant historical information.
3.  **Enabling Updates for Large Artifacts**: The lack of a practical diff or targeted update mechanism made it nearly impossible to efficiently update larger web artifacts.  Sending the entire source code for every minor change was both computationally expensive and prone to errors, effectively blocking the use of AI bots for meaningful modifications of complex artifacts.

**This pull request addresses these core issues by**:

*   Introducing methods for the AI bot to explicitly *read* and understand the current state of an artifact.
*   Implementing efficient update strategies that send *targeted* changes rather than the entire artifact source code.
*   Providing options to control the level of artifact context included in LLM prompts, optimizing token usage.

### What

The main changes implemented in this PR to resolve the above issues are:

1.  **`Read Artifact` Tool for Contextual Awareness**:
    - A new `read_artifact` tool is introduced, enabling AI bots to fetch and process the current content of a web artifact from a given URL (local or external).
    - This provides the LLM with a clear and up-to-date representation of the artifact's HTML, CSS, and JavaScript, improving its understanding of the base to be modified.
    - By cloning local artifacts, it allows the bot to work with a fresh copy, further enhancing context and control.

2.  **Refactored `Update Artifact` Tool with Efficient Strategies**:
    - The `update_artifact` tool is redesigned to employ more efficient update strategies, minimizing token usage and improving update precision:
        - **`diff` strategy**:  Utilizes a search-and-replace diff algorithm to apply only the necessary, targeted changes to the artifact's code (sketched after this entry). This significantly reduces the amount of code sent to the LLM and focuses its attention on the specific modifications.
        - **`full` strategy**:  Provides the option to replace the entire content sections (HTML, CSS, JavaScript) when a complete rewrite is required.
    - Tool options enhance the control over the update process:
        - `editor_llm`:  Allows selection of a specific LLM for artifact updates, potentially optimizing for code editing tasks.
        - `update_algorithm`: Enables choosing between `diff` and `full` update strategies based on the nature of the required changes.
        - `do_not_echo_artifact`:  Defaults to true, and by *not* echoing the artifact in prompts, it further reduces token consumption in scenarios where the LLM might not need the full artifact context for every update step (though effectiveness might be slightly reduced in certain update scenarios).

3.  **System and General Persona Tool Option Visibility and Customization**:
    - Tool options, including those for system personas, are made visible and editable in the admin UI. This allows administrators to fine-tune the behavior of all personas and their tools, including setting specific LLMs or update algorithms. This was previously limited or hidden for system personas.

4.  **Centralized and Improved Content Security Policy (CSP) Management**:
    - The CSP for AI artifacts is consolidated and made more maintainable through the `ALLOWED_CDN_SOURCES` constant. This improves code organization and future updates to the allowed CDN list, while maintaining the existing security posture.

5.  **Codebase Improvements**:
    - Refactoring of diff utilities, introduction of strategy classes, enhanced error handling, new locales, and comprehensive testing all contribute to a more robust, efficient, and maintainable artifact management system.

By addressing the issues of LLM context confusion, token inefficiency, and the limitations of updating large artifacts, this pull request significantly improves the practicality and effectiveness of AI bots in managing web artifacts within Discourse.
2025-02-04 16:27:27 +11:00
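
A minimal sketch of applying one search-and-replace block under the `diff` strategy (the real strategy is more involved, with the enhanced error handling noted above):

```
# Apply a single SEARCH/REPLACE block: only the targeted change travels
# through the LLM, not the whole artifact source.
def apply_search_replace(source, search, replace)
  raise ArgumentError, "search block not found" unless source.include?(search)
  source.sub(search, replace)
end

css = "body { color: red; }"
apply_search_replace(css, "color: red", "color: blue")
# => "body { color: blue; }"
```
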
Sam
cf86d274a0
FEATURE: improve o3-mini support (#1106)
* DEV: raise timeout for reasoning LLMs

* FIX: use id to identify llms, not model_name

model_name is not unique; in the case of reasoning models
you may configure the same LLM multiple times using different
reasoning levels.
2025-02-03 08:45:56 +11:00
Sam
381a2715c8
FEATURE: o3-mini supports (#1105)
1. Adds o3-mini presets
2. Adds support for reasoning effort
3. Properly uses "developer" messages for reasoning models
2025-02-01 14:08:34 +11:00
Rafael dos Santos Silva
67a1257b89
FEATURE: Gemini Tokenizer (#1088) 2025-01-23 18:20:35 -03:00
Sam
8bf350206e
FEATURE: track duration of AI calls (#1082)
* FEATURE: track duration of AI calls

* annotate
2025-01-23 11:32:12 +11:00
Rafael dos Santos Silva
f9aa2de413
FIX: AWS Bedrock non-streaming calls response log (#1072) 2025-01-15 18:51:25 -03:00