This commit
- normalizes locales like en_GB and variants to en. With this, the feature will not translate en_GB posts to en (or similarly pt_BR to pt_PT); see the sketch after this list
- consolidates whether the feature is enabled in `DiscourseAi::Translation.enabled?`
- similarly for backfill in `DiscourseAi::Translation.backfill_enabled?`
- turns off backfill if `ai_translation_backfill_max_age_days` is 0, so the setting does what it says. Set it to a high number to backfill everything
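A minimal sketch of the normalization idea; `normalize_locale` is a hypothetical helper name, not necessarily the plugin's actual method:

```ruby
# Strip the region suffix so only the base language is compared.
def normalize_locale(locale)
  locale.to_s.split("_").first.downcase
end

normalize_locale("en_GB") # => "en"
normalize_locale("pt_BR") # => "pt"
# With pt_BR and pt_PT both normalizing to "pt", a pt_BR post is treated
# as already being in the target language and is not re-translated.
```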
This update fixes a regression from https://github.com/discourse/discourse-ai/pull/1484, which caused AI helper title suggestions to begin suggesting numerous non-unique titles because it was looping through structured responses incorrectly.
* FIX: make AI helper more robust
- If JSON is broken for structured output, fall back to a more forgiving parser
- Gemini 2.5 Flash does not support temperature; support opting out of sending it
- Evals for the assistant were broken; fix the interface
- Add some missing LLMs
- Translator was not mapped correctly to the feature - fix that
- Don't mix XML into the translator prompt
* lint
* correct logic
* simplify code
* implement best effort JSON parsing directly in the structured output object
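As a rough illustration of the forgiving-parser fallback (the real logic lives in the structured output object and may differ):

```ruby
require "json"

# Best-effort parse: try strict JSON first, then trim trailing characters
# until a balanced document parses. Purely illustrative.
def best_effort_parse(raw)
  JSON.parse(raw)
rescue JSON::ParserError
  candidate = raw.to_s.dup
  while candidate.length > 1
    candidate.chop!
    begin
      return JSON.parse(candidate)
    rescue JSON::ParserError
      next
    end
  end
  nil
end

best_effort_parse('{"title": "Hello"} trailing junk') # => {"title"=>"Hello"}
```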
A more deterministic way of making sure the LLM detects the correct language (instead of relying on the prompt to tell the LLM to ignore unwanted content) is to take the cooked post and remove unwanted elements.
In this commit
- we remove quotes, image captions, etc. and only take the remaining text, falling back to the unadulterated cooked (see the sketch after this list)
- and update prompts related to detection and translation
- /152465/12
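A sketch of the cooked-scrubbing idea, assuming Nokogiri and illustrative selectors (the plugin's exact removal list may differ):

```ruby
require "nokogiri"

# Keep only the text likely to be the author's own words for detection.
def detection_text(cooked)
  fragment = Nokogiri::HTML.fragment(cooked)
  fragment.css("aside.quote, .lightbox-wrapper, pre, code").each(&:remove)
  text = fragment.text.strip
  text.empty? ? cooked : text # fall back to the unadulterated cooked
end
```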
Also renames the Mixtral tokenizer to Mistral.
See gem at github.com/discourse/discourse_ai-tokenizers
Co-authored-by: Roman Rizzi <roman@discourse.org>
Prior to this change we reused channels for proofreading progress and
AI helper progress.
The new changeset ensures each POST that streams progress gets a dedicated
message bus channel.
This fixes a class of issues where the wrong information could be displayed
to end users on subsequent proofreading or helper calls.
* fix tests
* fix implementation (we need to subscribe at id 0)
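A sketch of the per-request channel idea (channel names here are illustrative):

```ruby
require "securerandom"

# Each POST that streams progress gets its own channel, so concurrent
# proofread/helper requests can no longer read each other's messages.
progress_channel = "/discourse-ai/ai-helper/progress/#{SecureRandom.hex(8)}"

# The controller returns progress_channel to the client, which subscribes
# at id 0 (per the fix above); the streaming job then publishes to it.
MessageBus.publish(progress_channel, { done: false, result: "partial text" })
```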
When an invalid model is set for embeddings, topics do not load even if embeddings is disabled.
Error:
```
RuntimeError in TopicsController#show
Invalid embeddings selected model
```
This commit checks for valid settings before attempting to load related topics.
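A hedged sketch of the guard; the helper and model list below are made up for illustration:

```ruby
# Check the configured embeddings model is one we recognize before using
# it, instead of raising "Invalid embeddings selected model" mid-render.
KNOWN_EMBEDDING_MODELS = %w[text-embedding-3-small all-mpnet-base-v2]

def embeddings_usable?(enabled:, selected_model:)
  enabled && KNOWN_EMBEDDING_MODELS.include?(selected_model)
end

# Related topics are simply skipped rather than blowing up the page:
embeddings_usable?(enabled: false, selected_model: "bogus") # => false
```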
* FIX: normalize keys in structured output
Previously we did not validate the hash passed in to structured
outputs, which could be either string based or symbol based.
Specifically, this broke structured outputs for Gemini in some
specific cases (a sketch of the fix follows below).
* comment out flake
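The normalization amounts to something like this sketch:

```ruby
# Accept string- or symbol-keyed hashes by normalizing keys up front, so
# downstream lookups behave identically for both.
def normalize_keys(hash)
  hash.transform_keys(&:to_sym)
end

normalize_keys({ "title" => "Hi" }) # => {:title=>"Hi"}
normalize_keys({ title: "Hi" })     # => {:title=>"Hi"}
```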
- allows features to have multiple llms and multiple personas
- sorts module list
- adds Bot as a first class module
- fixes issue where search module was always configured
- some tests
- Add support for `chain.streamCustomRaw(text)` that can be used to stream text from a JS tool directly to the composer
- Add support for llm params in `llm.generate` which unlocks stuff like structured outputs
- Add discourse.createStagedUser, discourse.createTopic and discourse.createPost - for content creation
* FIX: A typo in bot filtration in ai-bot-header-icon
* FIX: Show header icon when there's only one persona with a default LLM set
---------
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
In discourse/discourse-translator#249 we introduced splitting content (`post.raw`) prior to sending it for translation, as we were using a sync API.
Now that we're streaming thanks to #1424, we'll chunk based on `LlmModel.max_output_tokens`.
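A hedged sketch of the chunking; the chars-per-token estimate and the paragraph-boundary strategy are assumptions, not the PR's exact logic:

```ruby
# Split raw content into pieces whose estimated token count fits under the
# model's max_output_tokens (translated output is roughly input-sized).
def chunk_for_translation(raw, max_output_tokens)
  budget_chars = max_output_tokens * 4 # rough chars-per-token estimate
  chunks = [+""]
  raw.split(/(\n\n+)/).each do |piece|
    chunks << +"" if chunks.last.length + piece.length > budget_chars
    chunks.last << piece
  end
  chunks.reject { |c| c.strip.empty? }
end
```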
The new implementation uses core's concurrent job queue; it is more
robust and predictable than the one shipped in Concurrent Ruby.
Additionally:
- Trickles through updates during bulk classification
- Reports errors if we fail during a bulk classification
* push concurrency down to 40. 100 feels quite high.
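For illustration, a bounded-concurrency version with plain Ruby threads (the actual change uses core's job queue rather than this hand-rolled pool):

```ruby
# Classify post_ids with at most `concurrency` workers, collecting errors
# instead of silently dropping them.
def classify_in_bulk(post_ids, concurrency: 40)
  queue = Queue.new
  post_ids.each { |id| queue << id }
  errors = []
  mutex = Mutex.new

  workers = Array.new(concurrency) do
    Thread.new do
      loop do
        id = begin
          queue.pop(true) # non-blocking pop; raises ThreadError when empty
        rescue ThreadError
          break
        end
        begin
          yield id # classify one post; progress updates can be emitted here
        rescue => e
          mutex.synchronize { errors << [id, e] } # report, don't swallow
        end
      end
    end
  end

  workers.each(&:join)
  errors
end
```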
## 🔍 Overview
This update fixes an issue where message bus streaming related specs
were not working correctly. To do so, we pass the `last_id` when
subscribing to `MessageBus`, which allows us to unskip those broken
tests.
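The underlying principle, sketched with the message_bus Ruby API (the actual fix passes the equivalent `lastId` from the subscribing client):

```ruby
# Capture the channel's current id *before* the action under test, then
# subscribe from that id so messages published in between are replayed.
last_id = MessageBus.last_id("/demo")
MessageBus.publish("/demo", "streamed before we subscribed")

MessageBus.subscribe("/demo", last_id) do |msg|
  puts msg.data # receives the earlier message instead of missing it
end
```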
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
The current menu was rendering inside the post text toolbar (on desktop). This is not ideal, as the post text toolbar only renders when there is a text selection; when you click a button on the toolbar, browsers by design clear your text selection, making all of this super tricky.
This commit makes desktop and mobile behave in the same way by rendering their own menu and capturing the quote state when we render the post text selection toolbar. This allows us to reason about the AI helper in a much simpler way.
This commit also removes what appears to be an unused file and corrects what seem to be copy/paste mistakes.
⚠️ Technical note: this commit corrects the message bus subscription, which amongst other things allows us to write specs that are not flaky. However, due to the current implementation we have a channel per post, which means we need to serialize the last message bus id per post.
We have two possible solutions here:
- subscribe at the topic level
- refactor the code to use `MessageBus.last_ids` to grab multiple posts at once, instead of calling `MessageBus.last_id` and doing one Redis call per post (see the sketch after this list)
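A sketch of the second option (channel names illustrative):

```ruby
post_ids = [101, 102, 103]
channels = post_ids.map { |id| "/discourse-ai/ai-helper/stream/#{id}" }

# today: one Redis round trip per post
per_post = channels.map { |channel| MessageBus.last_id(channel) }

# refactored: a single round trip for all posts
batched = MessageBus.last_ids(*channels)
```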
---------
Co-authored-by: Keegan George <kgeorge13@gmail.com>
* FIX: implement max_output tokens (anthropic/openai/bedrock/gemini/open router)
Previously this feature existed but was not implemented.
Also updates a bunch of models in our presets to point to the latest versions.
* implementing in base is safer, simpler and easier to manage
* anthropic 3.5 is getting older, let's use 4.0 here and fix the spec
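A sketch of the "implement in base" shape: one shared helper caps output tokens for every provider. The parameter key differs per API (`max_tokens`, `maxOutputTokens`, ...); this uses a generic one, and the struct is a stand-in for the real LlmModel record:

```ruby
LlmModel = Struct.new(:max_output_tokens) # illustrative stand-in

# Merge the cap once in shared code instead of per-endpoint copies.
def with_output_cap(payload, llm_model)
  cap = llm_model.max_output_tokens.to_i
  cap > 0 ? payload.merge(max_tokens: cap) : payload
end

with_output_cap({ model: "example-model" }, LlmModel.new(2048))
# => {:model=>"example-model", :max_tokens=>2048}
```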
We're seeing an excessive number of translations being enqueued for a single post and locale. Historically we triggered translation on `cooked`, not `raw`, but that changed a while back.
```
# from AiApiAuditLog, the same post is getting translated to the same locale within a few secs of each other
zh_CN - 2025-06-17 13:02:31 UTC
zh_CN - 2025-06-17 13:02:34 UTC
zh_CN - 2025-06-17 13:02:35 UTC
zh_CN - 2025-06-17 13:02:36 UTC
zh_CN - 2025-06-17 13:02:38 UTC
zh_CN - 2025-06-17 13:02:39 UTC
zh_CN - 2025-06-17 13:02:40 UTC
zh_CN - 2025-06-17 13:02:40 UTC
zh_CN - 2025-06-17 13:02:43 UTC
zh_CN - 2025-06-17 13:02:44 UTC
```
This PR prevents this from happening.
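One way to stop such a stampede, sketched with a short-lived dedupe key; the job name and the PR's actual guard are assumptions:

```ruby
# Only the first caller in the window enqueues; concurrent callers no-op.
def enqueue_translation(post, locale)
  dedupe_key = "ai-translate:#{post.id}:#{locale}"
  return unless Discourse.redis.set(dedupe_key, "1", nx: true, ex: 300)
  Jobs.enqueue(:translate_post, post_id: post.id, locale: locale) # hypothetical job name
end
```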
Previously staff and bots would get scanned if their TL was low.
Additionally, if the spam scanner user was somehow blocked
(deactivated, silenced, banned), the feature would stop working.
This adds an override that unconditionally ensures the user is set up correctly prior to scanning.
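A hedged sketch of what such an override can look like; the exact fields reset by the commit may differ:

```ruby
# Force the scanning user back into a usable state before each scan.
def ensure_spam_scanner_user_ready!(user)
  user.update!(
    active: true,
    suspended_till: nil,
    silenced_till: nil,
    trust_level: TrustLevel[4],
  )
end
```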
This update adds logging for changes made in the AI admin panel. When making configuration changes to Embeddings, LLMs, Personas, Tools, or Spam that aren't site setting related, changes will now be logged in Admin > Logs & Screening. This will help admins debug issues related to AI. In this update a helper lib called `AiStaffActionLogger` is created, which can easily be used in the future to add logging support for any other AI admin config we need logged.
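Usage would look roughly like this; the method name and signature are assumptions, not the lib's exact API:

```ruby
# Hypothetical call site in an admin controller action:
logger = AiStaffActionLogger.new(current_user)
logger.log_update("llm_model", llm_model, previous_attributes) # hypothetical signature
# The entry then appears under Admin > Logs & Screening.
```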
In hybrid mode, AI artifacts can optionally run automatically.
This is useful for cases where you may want to embed a survey and so on.
Additionally, artifacts now allow for better fidelity around display:
```html
<div class="ai-artifact" data-ai-artifact-id="501" data-ai-artifact-height="300px" data-ai-artifact-autorun data-ai-artifact-seamless></div>
```
Users can supply a height and a seamless mode to render the artifact seamlessly, with no box shadow and no show-full-screen button.
OpenAI shipped a new API for completions called the "Responses API".
Certain models (o3-pro) require this API.
Additionally, certain features are only made available via the new API.
This allows enabling it per LLM.
see: https://platform.openai.com/docs/api-reference/responses
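For context, the two request shapes differ roughly like this (model names are just examples):

```ruby
# The per-LLM toggle decides which endpoint and body shape to send.
def openai_request_body(prompt, use_responses_api:)
  if use_responses_api
    { model: "o3-pro", input: prompt } # POST /v1/responses
  else
    { model: "gpt-4.1", messages: [{ role: "user", content: prompt }] } # POST /v1/chat/completions
  end
end
```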
Introduces a persistent, user-scoped key-value storage system for
AI Artifacts, enabling them to be stateful and interactive. This
transforms artifacts from static content into mini-applications that can
save user input, preferences, and other data.
The core components of this feature are:
1. **Model and API**:
- A new `AiArtifactKeyValue` model and corresponding database table to
store data associated with a user and an artifact.
- A new `ArtifactKeyValuesController` provides a RESTful API for
CRUD operations (`index`, `set`, `destroy`) on the key-value data.
- Permissions are enforced: users can only modify their own data but
can view public data from other users.
2. **Secure JavaScript Bridge**:
- A `postMessage` communication bridge is established between the
sandboxed artifact `iframe` and the parent Discourse window.
- A JavaScript API is exposed to the artifact as `window.discourseArtifact`
with async methods: `get(key)`, `set(key, value, options)`,
`delete(key)`, and `index(filter)`.
- The parent window handles these requests, makes authenticated calls to the
new controller, and returns the results to the iframe. This ensures
security by keeping untrusted JS isolated.
3. **AI Tool Integration**:
- The `create_artifact` tool is updated with a `requires_storage`
boolean parameter.
- If an artifact requires storage, its metadata is flagged, and the
system prompt for the code-generating AI is augmented with detailed
documentation for the new storage API.
4. **Configuration**:
- Adds hidden site settings `ai_artifact_kv_value_max_length` and
`ai_artifact_max_keys_per_user_per_artifact` for throttling.
This also includes a minor fix to use `jsonb_set` when updating
artifact metadata, ensuring other metadata fields are preserved.
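The `jsonb_set` fix amounts to updating one key in place instead of overwriting the whole column; table and key names below are illustrative:

```ruby
DB.exec(<<~SQL, id: artifact_id)
  UPDATE ai_artifacts
  SET metadata = jsonb_set(
    COALESCE(metadata, '{}'::jsonb),
    '{requires_storage}',
    'true'::jsonb
  )
  WHERE id = :id
SQL
```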
Additional changes:
Adds a "#features" method in AiPersona to find which features are using that persona.
Serializes a basic version of a LlmModel in the persona's "#default_llm" serializer attribute.
* FEATURE: Display features that rely on multiple personas.
This change makes the previously hidden feature page visible, displaying features, like the AI helper, that rely on multiple personas.
* Fix system specs
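A hedged sketch of what `AiPersona#features` might return; the configuration internals here are illustrative, not the actual implementation:

```ruby
class AiPersona < ActiveRecord::Base
  # Return the feature entries currently configured to use this persona.
  def features
    DiscourseAi::Configuration::Feature.all.select do |feature|
      feature.persona_id == id
    end
  end
end
```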
## 🔍 Overview
This update re-introduces the validator used on the `ai_spam_detection_enabled` setting. It was initially added in https://github.com/discourse/discourse-ai/pull/1374 to prevent spam detection from being enabled without creating an `AiModerationSetting` value in the database. However, due to issues with backups/migrations we temporarily removed it in https://github.com/discourse/discourse-ai/pull/1393. Now, with some internal fixes, we can re-introduce it. We also update the validator so that it only validates when the setting is being turned on, not when it is being turned off.
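A sketch of the adjusted validator behavior, following Discourse's `valid_value?` validator convention (class internals and the I18n key are illustrative):

```ruby
class AiSpamDetectionValidator
  # Always allow turning the setting off; require the moderation record
  # only when enabling.
  def valid_value?(value)
    return true if value == "f" || value == false
    AiModerationSetting.spam.present?
  end

  def error_message
    I18n.t("discourse_ai.spam_detection.configuration_missing") # illustrative key
  end
end
```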
The AiApiAuditLog created per translation event doesn't easily trace back to a post or topic.
This commit adds support for that, and also switches the translators to named arguments rather than positional ones.
Previously I had omitted adding `locale` to categories, as category names tended to be just a single word and carrying locale information did not seem worthwhile.
Because certain LLMs do poorly at translation, category descriptions got pretty messy. We added locale support in https://github.com/discourse/discourse/pull/32962.
This PR adds automatic locale detection and skips translating to the category's own locale.