702 Commits

Author SHA1 Message Date
awesomerobot
294824cb17 test fix 2025-06-27 17:42:56 -04:00
awesomerobot
d9659867b9 update toggle spec 2025-06-27 17:13:15 -04:00
Roman Rizzi
8d943fa29d
FEATURE: Display spam module on features list. (#1469) 2025-06-27 14:18:01 -03:00
Roman Rizzi
b35f9bcc7c
FEATURE: Use Persona's when scanning posts for spam (#1465) 2025-06-27 10:35:47 -03:00
Sam
cc4e9e030f
FIX: normalize keys in structured output (#1468)
* FIX: normalize keys in structured output

Previously we did not validate the hash passed in to structured
outputs, which could be either string based or symbol based.

Specifically this broke structured outputs for Gemini in some
specific cases.

* comment out flake
2025-06-27 15:42:48 +10:00
Sam
73768ce920
FEATURE: Display bot in feature list (#1466)
- allows features to have multiple llms and multiple personas
- sorts module list
- adds Bot as a first class module
- fixes issue where search module was always configured
- some tests
2025-06-27 12:35:41 +10:00
Rafael dos Santos Silva
a40e2d3156
FEATURE: Update OpenAI tokenizer to GPT-4o and later (#1467) 2025-06-26 15:26:09 -03:00
Sam
3e74f09d06
FEATURE: improve custom tool infra (#1463)
- Add support for `chain.streamCustomRaw(test)`, which can be used to stream text from a JS tool directly to the composer
- Add support for llm params in `llm.generate`, which unlocks features like structured outputs
- Add `discourse.createStagedUser`, `discourse.createTopic` and `discourse.createPost` for content creation
2025-06-25 16:25:44 +10:00
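The bullets in the commit above describe new hooks available to custom JS tools. Here is a minimal sketch of a tool script using them, assuming the tool runtime exposes `chain`, `llm`, and `discourse` helpers to the script and that `invoke` is the entry point; the `llm.generate` option names and return shapes are illustrative assumptions, not verified API.

```javascript
// Sketch of a custom tool script using the new hooks (assumed runtime helpers).
function invoke(params) {
  // Stream raw text from the tool straight to the composer as it is produced.
  chain.streamCustomRaw("Drafting a weekly summary...\n");

  // Pass llm params to llm.generate, e.g. to request structured output.
  const summary = llm.generate("Summarize this week's forum activity", {
    response_format: { type: "json_object" }, // assumed parameter name
  });

  // Content-creation helpers added in this commit.
  const user = discourse.createStagedUser("reporter@example.com", "ai-reporter");
  const topic = discourse.createTopic({
    title: "Weekly AI summary",
    raw: summary,
    username: user.username,
  });
  discourse.createPost({ topic_id: topic.id, raw: "Generated automatically." });

  return { created_topic_id: topic.id };
}

function details() {
  return "Creates a weekly summary topic from recent activity";
}
```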
Sam
471f96f972
FEATURE: allow seeing configured LLM on feature page (#1460)
This is an interim fix so we can at least tell which LLM is being
used for which feature.

It also adds some test coverage to the feature page.
2025-06-24 17:42:47 +10:00
Sam
9f2a4094f5
FEATURE: persona/tool import and export (#1450)
Introduces import/export feature for tools and personas.

Uploads are omitted for now, and will be added in a future PR 

*   **Backend:**
    *   Adds `import` and `export` actions to `Admin::AiPersonasController` and `Admin::AiToolsController`.
    *   Introduces `DiscourseAi::PersonaExporter` and `DiscourseAi::PersonaImporter` services to manage JSON serialization and deserialization.
    *   The export format for a persona embeds its associated custom tools. To ensure portability, `AiTool` references are serialized using their `tool_name` rather than their internal database `id`.
    *   The import logic detects conflicts by name. A `force=true` parameter can be passed to overwrite existing records.

*   **Frontend:**
    *   `AiPersonaListEditor` and `AiToolListEditor` components now include an "Import" button that handles file selection and POSTs the JSON data to the respective `import` endpoint.
    *   `AiPersonaEditorForm` and `AiToolEditorForm` components feature an "Export" button that triggers a download of the serialized record.
    *   Handles import conflicts (HTTP `409` for tools, `422` for personas) by showing a `dialog.confirm` prompt to allow the user to force an overwrite.

*   **Testing:**
    *   Adds comprehensive request specs for the new controller actions (`#import`, `#export`).
    *   Includes unit specs for the `PersonaExporter` and `PersonaImporter` services.
* Persona import and export implemented
2025-06-24 12:41:10 +10:00
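A hedged sketch of the frontend import flow the commit above describes: POST the uploaded JSON to the import endpoint and, on a name conflict (HTTP 409 for tools, 422 for personas), confirm with the user before retrying with `force=true`. The endpoint path and dialog message are assumptions for illustration, not the actual component code.

```javascript
import { ajax } from "discourse/lib/ajax";

// Hypothetical path for illustration; the real admin route may differ.
const IMPORT_URL = "/admin/plugins/discourse-ai/ai-personas/import.json";

async function importPersona(dialog, personaJson) {
  try {
    return await ajax(IMPORT_URL, { type: "POST", data: { persona: personaJson } });
  } catch (error) {
    const status = error.jqXHR?.status;
    if (status === 409 || status === 422) {
      // Name conflict: ask the user whether to overwrite the existing record.
      dialog.confirm({
        message: "A record with this name already exists. Overwrite it?",
        didConfirm: () =>
          ajax(IMPORT_URL, {
            type: "POST",
            data: { persona: personaJson, force: true },
          }),
      });
      return;
    }
    throw error;
  }
}
```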
Natalie Tay
683bb5725b
DEV: Split content based on llmmodel's max_output_tokens (#1456)
In discourse/discourse-translator#249 we introduced splitting content (post.raw) prior to sending it for translation, as we were using a sync API.

Now that we're streaming thanks to #1424, we'll chunk based on the LlmModel.max_output_tokens.
2025-06-23 21:11:20 +08:00
Natalie Tay
740be26625
DEV: Also make sure locale detection skips PMs that are not group PMs when public content only (#1457)
In the earlier PR https://github.com/discourse/discourse-ai/pull/1432, when `SiteSetting.ai_translation_backfill_limit_to_public_content = false`, we **translate** PMs but **skip translating** PMs that do not involve groups.

This commit covers the missing case on **locale detection**.
2025-06-23 19:07:40 +08:00
Natalie Tay
e2d7ca0bb9
DEV: Indicate backfill rate for translations is hourly (#1451)
* DEV: Indicate backfill rate for translations is hourly

* add ai_translation_max_post_length

* default value update
2025-06-21 15:45:09 +08:00
Keegan George
a4194d3fb2
FIX: AI preferences tab button not appearing unless Helper enabled (#1452)
This update fixes an issue where the AI user preferences tab was not appearing unless `SiteSetting.ai_helper_enabled` was `true`. This is because we previously checked for its presence when user preferences only had a single Helper-related setting. However, since then we've also added the search discoveries setting there, so we no longer want it to depend on Helper. We also sneak into this update a modernization: converting the preferences template from `.hbs` to `.gjs`.
2025-06-20 10:12:08 -07:00
Keegan George
baaa3d199a
FIX: streaming related specs (#1448)
## 🔍 Overview
This update fixes an issue where message bus streaming-related specs
were not working correctly. To do so, we pass the `last_id` when
subscribing to `MessageBus`, which allows us to unskip those broken
tests.

---------

Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
2025-06-19 07:41:18 -07:00
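A small sketch of the pattern described above: subscribing with an explicit last id so messages published before the subscription are replayed rather than lost. This assumes the message bus client's `subscribe` accepts a last id as its third argument; the channel name and payload handling are assumptions for illustration.

```javascript
// `messageBus` is the Discourse message bus service/client.
function subscribeToStream(messageBus, postId, lastId, onUpdate) {
  // Assumed per-post channel name, mirroring the channel-per-post design noted below.
  const channel = `/discourse-ai/ai-helper/stream_suggestion/${postId}`;

  // Passing lastId replays any messages published since that id.
  messageBus.subscribe(channel, (data) => onUpdate(data), lastId);

  // Return an unsubscribe handle for cleanup.
  return () => messageBus.unsubscribe(channel);
}
```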
Joffrey JAFFEUX
6a33e5154d
DEV: makes ai menu helper a standalone menu (#1434)
The current menu was rendering inside the post text toolbar (on desktop). This is not ideal: the post text toolbar only renders when text is selected, and by design of web browsers, clicking a button on the toolbar makes you lose that selection, which made all of this super tricky.

This commit makes desktop and mobile behave in the same way by rendering their own menu and capturing the quote state when we render the post text selection toolbar. This allows us to reason about the AI helper in a much simpler way.

This commit also removes what appears to be an unused file and corrects what were seemingly copy/paste mistakes.

⚠️ Technical note: this commit corrects the message bus subscription, which amongst other things allows us to write specs that are not flaky. However, due to the current implementation we have a channel per post, which means we need to serialize on the last message bus id per post.

We have two possible solutions here:
- subscribe at the topic level
- refactor the code to use `MessageBus.last_ids` so we can grab the ids for multiple posts at once instead of calling `MessageBus.last_id` and doing one Redis call per post

---------

Co-authored-by: Keegan George <kgeorge13@gmail.com>
2025-06-19 11:56:00 +02:00
Sam
37dbd48513
FIX: implement max_output tokens (anthropic/openai/bedrock/gemini/open router) (#1447)
* FIX: implement max_output tokens (anthropic/openai/bedrock/gemini/open router)

Previously this feature existed but was not implemented.
Also updates a bunch of models in our presets to point to the latest versions.

* implementing in base is safer, simpler and easier to manage

* anthropic 3.5 is getting older, let's use 4.0 here and fix the spec
2025-06-19 16:00:11 +10:00
Natalie Tay
3e87e92631
DEV: Remove 'experimental' from translation features (#1439)
* DEV: Remove 'experimental' from translation features

* include compat

* include compat
2025-06-19 12:23:56 +08:00
Mark VanLandingham
cd14b0c0be
FIX: Bring back empty state message when appropriate (#1446)
The Today section was always added, but a side effect was that we hid the empty state component. This commit brings back the empty state.
2025-06-18 17:34:08 -05:00
Natalie Tay
d7a2af5505
DEV: Prevent multiple translation per post (#1443)
We're seeing an aggressive number of translations being enqueued for a single post and locale. Historically we triggered translation on `cooked` not `raw`, but that changed a while back.

```
# from AiApiAuditLog, the same post is getting translated to the same locale within a few secs of each other
zh_CN - 2025-06-17 13:02:31 UTC
zh_CN - 2025-06-17 13:02:34 UTC
zh_CN - 2025-06-17 13:02:35 UTC
zh_CN - 2025-06-17 13:02:36 UTC
zh_CN - 2025-06-17 13:02:38 UTC
zh_CN - 2025-06-17 13:02:39 UTC
zh_CN - 2025-06-17 13:02:40 UTC
zh_CN - 2025-06-17 13:02:40 UTC
zh_CN - 2025-06-17 13:02:43 UTC
zh_CN - 2025-06-17 13:02:44 UTC
```

This PR prevents this from happening.
2025-06-18 13:24:02 +08:00
Rafael dos Santos Silva
9dccc1eb93
FEATURE: Add Qwen3 tokenizer and update Gemma to version 3 (#1440) 2025-06-17 10:25:03 -03:00
Natalie Tay
df925f8304
DEV: Move examples out of prompt (#1438)
* DEV: Move examples out of prompt
2025-06-17 16:12:52 +08:00
Sam
32dc45ba4f
FIX: never block spam scanning user (#1437)
Previously, staff and bots would get scanned if their trust level was low.
Additionally, if the spam scanner user was somehow blocked
(deactivated, silenced, banned), it would stop the feature from working.

This adds an override that unconditionally ensures the user is set up correctly prior to scanning.
2025-06-17 14:51:27 +10:00
Rafael dos Santos Silva
bc8e57d7e8
DEV: Move title suggestion to an array (#1435) 2025-06-16 18:06:54 -03:00
Natalie Tay
b5e8277083
DEV: Move AI translation feature into an AI Feature (#1424)
This PR moves translations into an AI Feature

See https://github.com/discourse/discourse-ai/pull/1424 for screenshots
2025-06-13 10:17:27 +08:00
Keegan George
9be1049de6
DEV: Log AI related configuration to staff action log (#1416)
This update adds logging for changes made in the AI admin panel. When making configuration changes to Embeddings, LLMs, Personas, Tools, or Spam that aren't site-setting related, changes will now be logged in Admin > Logs & Screening. This will help admins debug issues related to AI. This update also introduces a helper lib called `AiStaffActionLogger`, which can easily be reused in the future to add logging support for any other AI admin config we need logged.
2025-06-12 12:39:58 -07:00
Natalie Tay
fc83bed7cd
FIX: When allowing private content translation, only translate group PMs and not personal PMs (#1432)
We want to avoid translating PMs that are not group PMs. This condition is applied when `SiteSetting.ai_translation_backfill_limit_to_public_content = false`
2025-06-13 00:55:52 +08:00
Roman Rizzi
9b7f1e6ee9
FIX: Helper wasn't working when the persona doesn't use structured output (#1433) 2025-06-12 12:33:12 -03:00
Sam
ed311de937
FIX: various bugs in AI interface (#1430)
* FIX: improve transition logic in forms

previously the back button would take you back to the /new route

* FIX: enum selection not working for persona tools

* seed information correctly in the DB

* fix broken spec

* Update assets/javascripts/discourse/components/ai-tool-editor-form.gjs

Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>

---------

Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
2025-06-12 13:50:52 +10:00
Roman Rizzi
8c8fd969ef
FIX: Don't check for #blank? when manipulating chunks (#1428) 2025-06-11 20:38:58 -03:00
Joffrey JAFFEUX
26217e51f9
DEV: a real selection change has a pointerup event (#1427)
This is needed for https://github.com/discourse/discourse/pull/33143 as we now rely on this pointerup event.
2025-06-12 00:59:21 +02:00
Sam
a907bc891a
FIX: improve admin api for artifact key values (#1425)
Previously we had a logic error and were showing admins keys
that were not theirs when querying for all keys.

This makes the API cleaner: to get all results you always need to be explicit.
2025-06-11 19:33:34 +10:00
Sam
d97307e99b
FEATURE: optionally support OpenAI responses API (#1423)
OpenAI shipped a new API for completions called the "Responses API".

Certain models (o3-pro) require this API.
Additionally, certain features are only made available via the new API.

This allows enabling it per LLM.

see: https://platform.openai.com/docs/api-reference/responses
2025-06-11 17:12:25 +10:00
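For context, here is a hedged sketch of the request shape the Responses API expects, based on OpenAI's public docs linked above; it is illustrative only and not the plugin's implementation.

```javascript
// Minimal call to the Responses API; the model and prompt are placeholders.
async function callResponsesApi(apiKey) {
  const res = await fetch("https://api.openai.com/v1/responses", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "o3-pro", // one of the models that requires the Responses API
      input: "Summarize this topic for a moderator.",
    }),
  });
  return res.json();
}
```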
Natalie Tay
35d62a659b
FIX: Skip edits if localization exists (#1422)
We will fine-tune updating outdated localizations in the future. For now, we are seeing quick edits happening and need to prevent the job from being too trigger-happy.
2025-06-11 11:00:22 +08:00
Sam
fdf0ff8a25
FEATURE: persistent key-value storage for AI Artifacts (#1417)
Introduces a persistent, user-scoped key-value storage system for
AI Artifacts, enabling them to be stateful and interactive. This
transforms artifacts from static content into mini-applications that can
save user input, preferences, and other data.

The core components of this feature are:

1.  **Model and API**:
    - A new `AiArtifactKeyValue` model and corresponding database table to
      store data associated with a user and an artifact.
    - A new `ArtifactKeyValuesController` provides a RESTful API for
      CRUD operations (`index`, `set`, `destroy`) on the key-value data.
    - Permissions are enforced: users can only modify their own data but
      can view public data from other users.

2.  **Secure JavaScript Bridge**:
    - A `postMessage` communication bridge is established between the
      sandboxed artifact `iframe` and the parent Discourse window.
    - A JavaScript API is exposed to the artifact as `window.discourseArtifact`
      with async methods: `get(key)`, `set(key, value, options)`,
      `delete(key)`, and `index(filter)`.
    - The parent window handles these requests, makes authenticated calls to the
      new controller, and returns the results to the iframe. This ensures
      security by keeping untrusted JS isolated.

3.  **AI Tool Integration**:
    - The `create_artifact` tool is updated with a `requires_storage`
      boolean parameter.
    - If an artifact requires storage, its metadata is flagged, and the
      system prompt for the code-generating AI is augmented with detailed
      documentation for the new storage API.

4.  **Configuration**:
    - Adds hidden site settings `ai_artifact_kv_value_max_length` and
      `ai_artifact_max_keys_per_user_per_artifact` for throttling.

This also includes a minor fix to use `jsonb_set` when updating
artifact metadata, ensuring other metadata fields are preserved.
2025-06-11 06:59:46 +10:00
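A minimal sketch of how an artifact's own JavaScript might use the storage bridge described above; the method names (`get`, `set`, `delete`, `index`) come from the commit message, while the option/filter shapes and return values are illustrative assumptions.

```javascript
const store = window.discourseArtifact;

async function saveHighScore(score) {
  // The options hash is assumed; e.g. making the value readable by other users.
  await store.set("high_score", String(score), { public: true });
}

async function restoreState() {
  // Return shapes of get/index are assumed for illustration.
  const best = await store.get("high_score");
  const allEntries = await store.index({ key: "high_score" });
  return { best, allEntries };
}

async function clearState() {
  await store.delete("high_score");
}
```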
Roman Rizzi
f7e0ea888d
DEV: Use a PORO to represent modules/features. (#1421)
Additional changes:

Adds a "#features" method in AiPersona to find which features are using that persona.
Serializes a basic version of a LlmModel in the persona's "#default_llm" serializer attribute.
2025-06-10 14:37:53 -03:00
Roman Rizzi
98afd7f8c3
FEATURE: Display features that rely on multiple personas. (#1411)
* FEATURE: Display features that rely on multiple personas.

This change makes the previously hidden feature page visible, displaying features like the AI helper, which rely on multiple personas.

* Fix system specs
2025-06-09 16:13:09 -03:00
Keegan George
33fd6801e5
DEV: Add back validator for Spam setting (#1415)
## 🔍 Overview
This update re-introduces the validator used on the `ai_spam_detection_enabled` setting. It was initially added here: https://github.com/discourse/discourse-ai/pull/1374 to prevent Spam from being enabled without creating an `AiModerationSetting` value in the database. However, due to issues with backups/migrations, we temporarily removed it here: https://github.com/discourse/discourse-ai/pull/1393. Now, with some internal fixes, we can re-introduce it. We also update the validator so that it only validates when the setting is being turned on, not when it is being turned off.
2025-06-06 10:56:36 -07:00
Natalie Tay
6827147362
DEV: Add topic and post id when using completions for traceability to AiApiAuditLog (#1414)
An AiApiAuditLog entry for a translation event doesn't trace back easily to a post or topic.

This commit adds support for that, and also switches the translators to named arguments rather than positional arguments.
2025-06-06 23:24:24 +08:00
Natalie Tay
8a3a247b11
DEV: Also detect locale of categories and do not translate if already in the locale (#1413)
Previously I had omitted adding `locale` to the category, as categories tended to be just a single word and I did not think it would be worth carrying locale information.

Because certain LLMs do poorly at translation, category descriptions got pretty messy. We added locale support here - https://github.com/discourse/discourse/pull/32962.

This PR adds automatic locale detection, and skips translating when the category is already in the target locale.
2025-06-06 22:41:48 +08:00
Sam
6817866de9
FEATURE: allow access to assigns from forum researcher (#1412)
* FEATURE: allow access to assigns from forum researcher

* FIX: should properly be checking for empty

* finish PR
2025-06-06 16:59:00 +10:00
Sam
b3d78a6a10
FIX: when tool options are added they should be available (#1406)
Fixes a regression where the tool option editor was not showing
all tools.
2025-06-05 12:05:55 +10:00
Roman Rizzi
c885e5697f review feedback 2025-06-04 14:23:00 -03:00
Roman Rizzi
0338dbea23 FEATURE: Use different personas to power AI helper features.
You can now edit each AI helper prompt individually through personas, limit access to specific groups, set different LLMs, etc.
2025-06-04 14:23:00 -03:00
David Taylor
cab39839fd
Revert "DEV: Patch Net::BufferedIO to help debug spec flakes (#1375)" (#1403)
This reverts commit ca78b1a1c588bd8708418bc42855837aafc6ab15.

Problem resolved by https://github.com/discourse/discourse-perspective-api/pull/110
2025-06-04 14:13:45 +01:00
Sam
3e74eea1e5
FEATURE: add context and llm controls to researcher, fix username filter (#1401)
- Adds context length controls to researcher (max tokens per post and batch)
- Allow picking LLM for researcher
- Fix bug where unicode usernames were not working
- Fix documentation of OR logic
2025-06-04 16:39:43 +10:00
Kris
fa51e9d948
REFACTOR: update AI conversation sidebar to use sidebar sections for date grouping (#1389) 2025-06-03 09:40:52 -05:00
Joffrey JAFFEUX
306fec2b24
FIX: edit-topic is not invisible on desktop (#1394)
Fix due to https://github.com/discourse/discourse/pull/32941
2025-06-03 16:30:19 +02:00
Sam
4dffd0b2c5
DEV: improve tool infra, improve forum researcher prompts, improve logging (#1391)
- add sleep function for tool polling with rate limits
- Support base64 encoding for HTTP requests and uploads
- Enhance forum researcher with cost warnings and comprehensive planning
- Add cancellation support for research operations
- Include feature_name parameter for bot analytics
- richer research support (OR queries)
2025-06-03 15:17:55 +10:00
Rafael dos Santos Silva
27de71fc4f
FIX: Proper default LLM detection for inferred concepts (#1392) 2025-06-02 17:56:47 -03:00