This commit
- normalizes locales like en_GB and variants to en. With this, the feature will not translate en_GB posts to en (or similarly pt_BR to pt_PT)
- consolidates whether the feature is enabled in `DiscourseAi::Translation.enabled?`
- similarly for backfill in `DiscourseAi::Translation.backfill_enabled?`
- turns off backfill if `ai_translation_backfill_max_age_days` is 0, so the setting behaves as its name implies. Set it to a high number to backfill everything
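For illustration, here is a minimal sketch of the consolidated checks and the locale normalization described above; apart from `ai_translation_backfill_max_age_days`, the method and setting names are assumptions and the real plugin code may differ:

```ruby
# Hedged sketch only; not the plugin's actual implementation.
module DiscourseAi
  module Translation
    # Assumed feature switch; the real check may consult additional settings.
    def self.enabled?
      SiteSetting.discourse_ai_enabled && SiteSetting.ai_translation_enabled
    end

    # Backfill is off when the feature is off or the max-age setting is 0.
    def self.backfill_enabled?
      enabled? && SiteSetting.ai_translation_backfill_max_age_days.to_i > 0
    end

    # Normalize regional variants (en_GB, pt_BR) to their base locale so the
    # feature does not "translate" en_GB posts into en.
    def self.normalize_locale(locale)
      locale.to_s.split(/[-_]/).first&.downcase
    end
  end
end
```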
* FIX: make AI helper more robust
- If the JSON for structured output is broken, fall back to a more forgiving parser
- Gemini 2.5 Flash does not support temperature; support opting out of it
- Evals for the assistant were broken; fix the interface
- Add some missing LLMs
- The translator was not mapped correctly to the feature - fix that
- Don't mix XML into the translator prompt
* lint
* correct logic
* simplify code
* implement best-effort JSON parsing directly in the structured output object
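As an illustration of the "more forgiving parser" idea (not the exact implementation), a best-effort fallback can first try strict parsing and then attempt a crude repair of truncated structured output:

```ruby
require "json"

# Hedged sketch: strict JSON first, then a crude repair for output that was
# cut off mid-object (e.g. a truncated streamed response).
def best_effort_parse_json(raw)
  JSON.parse(raw)
rescue JSON::ParserError
  candidate = raw.to_s.strip
  candidate += '"' if candidate.count('"').odd? # close an unterminated string
  candidate += "]" * [candidate.count("[") - candidate.count("]"), 0].max
  candidate += "}" * [candidate.count("{") - candidate.count("}"), 0].max
  begin
    JSON.parse(candidate)
  rescue JSON::ParserError
    nil
  end
end
```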
Introducing a typo isn't the right way to bypass the check that blocks the term "private messages".
Nice try, though ;)
I changed it to "personal messages".
- allows features to have multiple llms and multiple personas
- sorts module list
- adds Bot as a first class module
- fixes an issue where the search module was always configured
- some tests
Introduces an import/export feature for tools and personas.
Uploads are omitted for now and will be added in a future PR.
* **Backend:**
* Adds `import` and `export` actions to `Admin::AiPersonasController` and `Admin::AiToolsController`.
* Introduces `DiscourseAi::PersonaExporter` and `DiscourseAi::PersonaImporter` services to manage JSON serialization and deserialization.
* The export format for a persona embeds its associated custom tools. To ensure portability, `AiTool` references are serialized using their `tool_name` rather than their internal database `id`.
    * The import logic detects conflicts by name. A `force=true` parameter can be passed to overwrite existing records (see the sketch below).
* **Frontend:**
* `AiPersonaListEditor` and `AiToolListEditor` components now include an "Import" button that handles file selection and POSTs the JSON data to the respective `import` endpoint.
* `AiPersonaEditorForm` and `AiToolEditorForm` components feature an "Export" button that triggers a download of the serialized record.
* Handles import conflicts (HTTP `409` for tools, `422` for personas) by showing a `dialog.confirm` prompt to allow the user to force an overwrite.
* **Testing:**
* Adds comprehensive request specs for the new controller actions (`#import`, `#export`).
* Includes unit specs for the `PersonaExporter` and `PersonaImporter` services.
* Persona import and export implemented
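A rough sketch of the export and import shape described above (attribute names and error handling are illustrative; the real `DiscourseAi::PersonaExporter` / `PersonaImporter` services handle more fields and edge cases):

```ruby
require "json"

# Illustrative only.
def export_persona(persona, custom_tools)
  {
    persona: persona.attributes.slice("name", "description", "system_prompt"),
    # Embedded custom tools are referenced by tool_name, not database id,
    # so the export stays portable across sites.
    custom_tools: custom_tools.map { |t| t.attributes.slice("tool_name", "script") },
  }.to_json
end

def import_persona(json, force: false)
  data = JSON.parse(json)
  existing = AiPersona.find_by(name: data["persona"]["name"])
  # Conflicts by name are rejected unless force=true is passed, which the
  # frontend surfaces as a confirm-and-retry dialog.
  raise Discourse::InvalidParameters if existing && !force
  (existing || AiPersona.new).update!(data["persona"])
end
```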
* FIX: implement max_output_tokens (Anthropic/OpenAI/Bedrock/Gemini/OpenRouter)
Previously this option existed but was not actually implemented.
Also updates a number of models in our presets to point to the latest versions.
* implementing this in the base class is safer, simpler, and easier to manage (sketched below)
* Anthropic 3.5 is getting older; let's use 4.0 here and fix the spec
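Purely as an illustration of the base-class idea (this is not the plugin's actual code), the resolution can live in one shared helper instead of every provider endpoint:

```ruby
# Hypothetical helper: clamp the requested completion size to the model's
# configured max_output_tokens in one place rather than per provider.
def resolved_max_output_tokens(requested, configured)
  configured = configured.to_i
  return requested if configured <= 0
  requested ? [requested, configured].min : configured
end
```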
This update adds logging for changes made in the AI admin panel. When configuration changes are made to Embeddings, LLMs, Personas, Tools, or Spam that aren't site-setting related, the changes will now be logged in Admin > Logs & Screening. This will help admins debug AI-related issues. This update also introduces a helper lib called `AiStaffActionLogger`, which can easily be reused in the future to add logging support for any other AI admin config we need logged.
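As a rough illustration of how such a helper can sit on top of Discourse core's `StaffActionLogger` (the real `AiStaffActionLogger` interface may differ):

```ruby
# Hedged sketch; entries written via log_custom show up under
# Admin > Logs & Screening.
class AiStaffActionLogger
  def initialize(actor)
    @logger = StaffActionLogger.new(actor)
  end

  def log_config_change(subject, changes)
    @logger.log_custom("custom_staff_action", { subject: subject }.merge(changes))
  end
end

# Illustrative usage from an admin controller:
# AiStaffActionLogger.new(current_user).log_config_change("ai_llm", name: "gpt-4o")
```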
In hybrid mode, AI artifacts can optionally run automatically.
This is useful for cases where you may want to embed a survey and so on.
Additionally, artifacts now allow for better fidelity around display:
<div class="ai-artifact" data-ai-artifact-id="501" data-ai-artifact-height="300px" data-ai-artifact-autorun data-ai-artifact-seamless></div>
Users can supply a height and enable seamless mode, which renders the artifact seamlessly with no box shadow and no show-full-screen button.
## 🔍 Overview
When title suggestions return no results, there is no indication in the UI. The tag suggester already shows a toast when the request succeeds but there are no suggestions. This update adds a similar toast for both category and title suggestions. Additionally, all suggestion toasts get a "Try again" button so that suggestions can be generated again if the first attempt yields nothing.
OpenAI shipped a new completions API called the "Responses API".
Certain models (o3-pro) require this API.
Additionally, certain features are only available via the new API.
This change allows enabling it per LLM.
see: https://platform.openai.com/docs/api-reference/responses
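For reference, a minimal request against the Responses API looks roughly like this (a sketch of the HTTP shape from the docs linked above, not the plugin's dialect implementation):

```ruby
require "net/http"
require "json"

uri = URI("https://api.openai.com/v1/responses")
req = Net::HTTP::Post.new(uri)
req["Authorization"] = "Bearer #{ENV["OPENAI_API_KEY"]}"
req["Content-Type"] = "application/json"
req.body = { model: "o3-pro", input: "Summarize this topic in one sentence." }.to_json

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
# The payload shape differs from chat completions; the generated content lives
# in the "output" array.
puts JSON.parse(res.body)["output"]
```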
Introduces a persistent, user-scoped key-value storage system for
AI Artifacts, enabling them to be stateful and interactive. This
transforms artifacts from static content into mini-applications that can
save user input, preferences, and other data.
The core components of this feature are:
1. **Model and API**:
- A new `AiArtifactKeyValue` model and corresponding database table to
store data associated with a user and an artifact.
- A new `ArtifactKeyValuesController` provides a RESTful API for
CRUD operations (`index`, `set`, `destroy`) on the key-value data.
- Permissions are enforced: users can only modify their own data but
can view public data from other users.
2. **Secure JavaScript Bridge**:
- A `postMessage` communication bridge is established between the
sandboxed artifact `iframe` and the parent Discourse window.
- A JavaScript API is exposed to the artifact as `window.discourseArtifact`
with async methods: `get(key)`, `set(key, value, options)`,
`delete(key)`, and `index(filter)`.
- The parent window handles these requests, makes authenticated calls to the
new controller, and returns the results to the iframe. This ensures
security by keeping untrusted JS isolated.
3. **AI Tool Integration**:
- The `create_artifact` tool is updated with a `requires_storage`
boolean parameter.
- If an artifact requires storage, its metadata is flagged, and the
system prompt for the code-generating AI is augmented with detailed
documentation for the new storage API.
4. **Configuration**:
- Adds hidden site settings `ai_artifact_kv_value_max_length` and
`ai_artifact_max_keys_per_user_per_artifact` for throttling.
This also includes a minor fix to use `jsonb_set` when updating
artifact metadata, ensuring other metadata fields are preserved.
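The `jsonb_set` fix boils down to updating a single key in place rather than overwriting the whole column; roughly (model and key names are illustrative):

```ruby
# Illustrative only: set one metadata key while preserving the others.
AiArtifact.where(id: artifact_id).update_all(<<~SQL)
  metadata = jsonb_set(COALESCE(metadata, '{}'::jsonb), '{requires_storage}', 'true'::jsonb)
SQL
```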
* FEATURE: Display features that rely on multiple personas.
This change makes the previously hidden features page visible and displays features, like the AI helper, that rely on multiple personas.
* Fix system specs
## 🔍 Overview
This update re-introduces the validator used on the `ai_spam_detection_enabled` setting. It was initially added here: https://github.com/discourse/discourse-ai/pull/1374 to prevent spam detection from being enabled without creating an `AiModerationSetting` value in the database. However, due to issues with backups/migrations, we temporarily removed it here: https://github.com/discourse/discourse-ai/pull/1393. Now, with some internal fixes, we can re-introduce it. We also update the validator so that it only validates when the setting is being turned on, not when it is being turned off.
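A hedged sketch of what such a site setting validator can look like in Discourse (the class name and translation key are illustrative, not the plugin's actual code):

```ruby
# Boolean settings arrive as "t"/"f" strings in setting validators.
class AiSpamDetectionValidator
  def initialize(opts = {})
    @opts = opts
  end

  def valid_value?(val)
    # Only validate when the setting is being turned on; turning it off is
    # always allowed.
    return true if val == "f"
    AiModerationSetting.spam.present?
  end

  def error_message
    I18n.t("discourse_ai.spam_detection_configuration_missing") # illustrative key
  end
end
```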
Adds context length controls to the researcher (max tokens per post and per batch)
Allow picking the LLM for the researcher
Fix a bug where unicode usernames were not working
Fix documentation of the OR logic
* FEATURE: add inferred concepts system
This commit adds a new inferred concepts system that:
- Creates a model for storing concept labels that can be applied to topics
- Provides AI personas for finding new concepts and matching existing ones
- Adds jobs for generating concepts from popular topics
- Includes a scheduled job that automatically processes engaging topics
* FEATURE: Extend inferred concepts to include posts
* Adds support for concepts to be inferred from and applied to posts
* Replaces daily task with one that handles both topics and posts
* Adds database migration for posts_inferred_concepts join table
* Updates PersonaContext to include inferred concepts
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Co-authored-by: Keegan George <kgeorge13@gmail.com>
Related: https://github.com/discourse/discourse-translator/pull/310
This commit includes all the jobs and event hooks to localize posts, topics, and categories.
A few notes:
- `feature_name: "translation"` because the site setting is `ai-translation` and the module is `Translation`
- we will switch to a proper ai-feature in the near future, and can consider using the persona_user as `localization.localizer_user_id`
- keeping things flat within the module for now, as we will be moving to ai-feature soon and will have to rearrange things
- Settings renamed/introduced are:
- ai_translation_backfill_rate (0)
- ai_translation_backfill_limit_to_public_content (true)
- ai_translation_backfill_max_age_days (5)
- ai_translation_verbose_logs (false)
## 🔍 Overview
We want to unhide `ai_spam_detection_enabled` setting so that we can retain staff action log features. However, we also want to ensure users cannot enable spam detection without having `AiModerationSetting.spam` present in the database.
In this update we unhide the setting, but also add a validator to ensure the necessary configuration is in place before allowing the setting to be enabled.
* Small fix, reasoning is now available on Claude 4 models
* fix: invalid filters should raise; also fix the topic filter not working
* fix spec so we are consistent
* FEATURE: allow researcher to also research specific topics
Also improve UI around research with more accurate info
* this ensures that under no circumstances will PMs be included
This commit introduces a new Forum Researcher persona specialized in deep forum content analysis along with comprehensive improvements to our AI infrastructure.
Key additions:
- New Forum Researcher persona with advanced filtering and analysis capabilities
- Robust filtering system supporting tags, categories, dates, users, and keywords
- LLM formatter to efficiently process and chunk research results

Infrastructure improvements:
- Implemented a CancelManager class to centrally manage AI completion cancellations (see the sketch below)
- Replaced callback-based cancellation with a more robust pattern
- Added systematic cancellation monitoring with callbacks

Other improvements:
- Added a configurable default_enabled flag to control which personas are enabled by default
- Updated translation strings for the new researcher functionality
- Added comprehensive specs for the new components
Renames Researcher -> Web Researcher
This change makes our AI platform more stable while adding powerful research capabilities that can analyze forum trends and surface relevant content.
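A hedged sketch of the CancelManager idea referenced above (the plugin's real class has a richer interface, including the monitoring mentioned):

```ruby
# Central cancellation state, instead of threading ad-hoc cancel callbacks
# through every completion call site.
class CancelManager
  def initialize
    @cancelled = false
    @callbacks = []
  end

  def cancelled?
    @cancelled
  end

  def add_callback(&blk)
    @callbacks << blk
  end

  def cancel!
    @cancelled = true
    @callbacks.each(&:call)
  end
end

# A long-running completion can periodically check cancel_manager.cancelled?
# and stop streaming when it returns true.
```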
Examples simulate previous interactions with an LLM and come
right after the system prompt. This helps ground the model and
produce better responses.
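Conceptually, the examples become user/assistant message pairs placed right after the system prompt (a sketch of the shape, not the plugin's exact prompt API):

```ruby
# Illustrative message layout: few-shot examples precede the real user input.
user_input = "Summarize this topic for me."

messages = [
  { role: "system", content: "You are a helpful forum assistant." },
  # Example pair simulating a previous interaction:
  { role: "user", content: "Summarize: 'The CI build fails on Ruby 3.3.'" },
  { role: "assistant", content: "CI is broken on Ruby 3.3 and needs a fix." },
  # The actual request follows the examples:
  { role: "user", content: user_input },
]
```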