698 Commits

Jarek Radosz
b809d723a8
rubocop fix 2025-06-21 00:04:23 +02:00
Sam
eab6dd3f8e
DEV: re-implement bulk sentiment classifier (#1449)
The new implementation uses the core concurrent job queue, which is more
robust and predictable than the one shipped in Concurrent.

Additionally:

- Trickles through updates during bulk classification
- Reports errors if we fail during a bulk classification

* push concurrency down to 40. 100 feels quite high.
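For illustration, a minimal sketch of the bounded-concurrency pattern described above, using plain Ruby threads (the actual implementation relies on the core concurrent job queue; `SentimentClassifier.classify!` below is a hypothetical stand-in for the classifier call):

```ruby
CONCURRENCY = 40 # the concurrency chosen above

# Runs the given block for each post with at most `concurrency` threads,
# collecting errors instead of aborting the whole bulk run.
def classify_in_bulk(posts, concurrency: CONCURRENCY)
  queue = Queue.new
  posts.each { |post| queue << post }
  concurrency.times { queue << :done }

  errors = Queue.new
  workers =
    concurrency.times.map do
      Thread.new do
        while (post = queue.pop) != :done
          begin
            yield post
          rescue => e
            errors << [post, e.message] # report, don't silently drop
          end
        end
      end
    end
  workers.each(&:join)
  errors
end

# classify_in_bulk(posts) { |post| SentimentClassifier.classify!(post) }
```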
2025-06-20 16:06:03 +10:00
Keegan George
baaa3d199a
FIX: streaming related specs (#1448)
## 🔍 Overview
This update fixes an issue where message-bus streaming-related specs
were not working correctly. To do so, we pass the `last_id` when
subscribing to `MessageBus`, which allows us to unskip those broken
tests.

---------

Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
2025-06-19 07:41:18 -07:00
Joffrey JAFFEUX
6a33e5154d
DEV: makes ai menu helper a standalone menu (#1434)
The current menu was rendering inside the post text toolbar (on desktop). This is not ideal: the post text toolbar only renders when there is a text selection, and by design of web browsers, clicking a button on the toolbar makes you lose that selection, which made all of this very tricky.

This commit makes desktop and mobile behave the same way by rendering their own menu and capturing the quote state when we render the post text selection toolbar. This allows us to reason about the AI helper in a much simpler way.

This commit also removes what appears to be an unused file and corrects what were seemingly copy/paste mistakes.

⚠️ Technical note: this commit corrects the message bus subscription, which among other things allows us to write specs that are not flaky. However, with the current implementation we have a channel per post, which means we need to serialize the last message bus id per post.

We have two possible solutions here:
- subscribe at the topic level
- refactor the code to use `MessageBus.last_ids` to grab multiple posts at once instead of calling `MessageBus.last_id` and doing one Redis call per post
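For illustration, a rough sketch of the difference, assuming `post_ids` is the set of posts being rendered (the channel name is hypothetical, not the one actually used by the helper):

```ruby
# Today: one channel per post means one Redis round trip per post.
last_ids =
  post_ids.to_h do |id|
    channel = "/discourse-ai/ai-helper/stream/#{id}" # hypothetical channel name
    [channel, MessageBus.last_id(channel)]
  end

# Alternative: fetch all of the ids in a single call.
channels = post_ids.map { |id| "/discourse-ai/ai-helper/stream/#{id}" }
last_ids = MessageBus.last_ids(*channels) # => { "/discourse-ai/ai-helper/stream/1" => 42, ... }
```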

---------

Co-authored-by: Keegan George <kgeorge13@gmail.com>
2025-06-19 11:56:00 +02:00
Sam
37dbd48513
FIX: implement max_output tokens (anthropic/openai/bedrock/gemini/open router) (#1447)
* FIX: implement max_output tokens (anthropic/openai/bedrock/gemini/open router)

Previously this feature existed but was not implemented.
Also updates a bunch of models in our presets to point to the latest versions.

* implementing in base is safer, simpler and easier to manage

* Anthropic 3.5 is getting older, let's use 4.0 here and fix the spec
2025-06-19 16:00:11 +10:00
Natalie Tay
d7a2af5505
DEV: Prevent multiple translation per post (#1443)
We're seeing an excessive number of translations being enqueued for a single post and locale. Historically, we triggered translation on `cooked`, not `raw`, but that changed a while back.

```
# from AiApiAuditLog, the same post is getting translated to the same locale within a few secs of each other
zh_CN - 2025-06-17 13:02:31 UTC
zh_CN - 2025-06-17 13:02:34 UTC
zh_CN - 2025-06-17 13:02:35 UTC
zh_CN - 2025-06-17 13:02:36 UTC
zh_CN - 2025-06-17 13:02:38 UTC
zh_CN - 2025-06-17 13:02:39 UTC
zh_CN - 2025-06-17 13:02:40 UTC
zh_CN - 2025-06-17 13:02:40 UTC
zh_CN - 2025-06-17 13:02:43 UTC
zh_CN - 2025-06-17 13:02:44 UTC
```

This PR prevents this from happening.
2025-06-18 13:24:02 +08:00
Rafael dos Santos Silva
9dccc1eb93
FEATURE: Add Qwen3 tokenizer and update Gemma to version 3 (#1440) 2025-06-17 10:25:03 -03:00
Natalie Tay
df925f8304
DEV: Move examples out of prompt (#1438)
* DEV: Move examples out of prompt
2025-06-17 16:12:52 +08:00
Sam
32dc45ba4f
FIX: never block spam scanning user (#1437)
Previously, staff and bots would get scanned if their trust level was low.
Additionally, if the spam scanner user was somehow blocked
(deactivated, silenced, or banned), it would stop the feature from working.

This adds an override that unconditionally ensures the user is set up correctly prior to scanning.
2025-06-17 14:51:27 +10:00
Rafael dos Santos Silva
bc8e57d7e8
DEV: Move title suggestion to an array (#1435) 2025-06-16 18:06:54 -03:00
Natalie Tay
b5e8277083
DEV: Move AI translation feature into an AI Feature (#1424)
This PR moves translations into an AI Feature

See https://github.com/discourse/discourse-ai/pull/1424 for screenshots
2025-06-13 10:17:27 +08:00
Keegan George
9be1049de6
DEV: Log AI related configuration to staff action log (#1416)
This update adds logging for changes made in the AI admin panel. When making configuration changes to Embeddings, LLMs, Personas, Tools, or Spam that aren't site-setting related, changes will now be logged in Admin > Logs & Screening. This will help admins debug issues related to AI. This update also introduces a helper lib called `AiStaffActionLogger`, which can easily be used in the future to add logging support for any other AI admin config we need logged.
2025-06-12 12:39:58 -07:00
Roman Rizzi
9b7f1e6ee9
FIX: Helper wasn't working when the persona doesn't use structured output (#1433) 2025-06-12 12:33:12 -03:00
Sam
02bc9f645e
FEATURE: hybrid artifact security mode (#1431)
In hybrid mode, AI artifacts can optionally run automatically.

This is useful for cases where you may want to embed a survey and so on.

Additionally, artifacts now allow for better fidelity around display:

<div class="ai-artifact" data-ai-artifact-id="501" data-ai-artifact-height="300px" data-ai-artifact-autorun data-ai-artifact-seamless></div>

Users can supply a height and enable seamless mode to have the artifact rendered seamlessly, without the box shadow and show-full-screen button.
2025-06-12 20:04:48 +10:00
Roman Rizzi
8c8fd969ef
FIX: Don't check for #blank? when manipulating chunks (#1428) 2025-06-11 20:38:58 -03:00
Sam
d97307e99b
FEATURE: optionally support OpenAI responses API (#1423)
OpenAI shipped a new API for completions called the "Responses API".

Certain models (o3-pro) require this API.
Additionally, certain features are only made available via the new API.

This allows enabling it per LLM.

see: https://platform.openai.com/docs/api-reference/responses
2025-06-11 17:12:25 +10:00
Sam
fdf0ff8a25
FEATURE: persistent key-value storage for AI Artifacts (#1417)
Introduces a persistent, user-scoped key-value storage system for
AI Artifacts, enabling them to be stateful and interactive. This
transforms artifacts from static content into mini-applications that can
save user input, preferences, and other data.

The core components of this feature are:

1.  **Model and API**:
    - A new `AiArtifactKeyValue` model and corresponding database table to
      store data associated with a user and an artifact.
    - A new `ArtifactKeyValuesController` provides a RESTful API for
      CRUD operations (`index`, `set`, `destroy`) on the key-value data.
    - Permissions are enforced: users can only modify their own data but
      can view public data from other users.

2.  **Secure JavaScript Bridge**:
    - A `postMessage` communication bridge is established between the
      sandboxed artifact `iframe` and the parent Discourse window.
    - A JavaScript API is exposed to the artifact as `window.discourseArtifact`
      with async methods: `get(key)`, `set(key, value, options)`,
      `delete(key)`, and `index(filter)`.
    - The parent window handles these requests, makes authenticated calls to the
      new controller, and returns the results to the iframe. This ensures
      security by keeping untrusted JS isolated.

3.  **AI Tool Integration**:
    - The `create_artifact` tool is updated with a `requires_storage`
      boolean parameter.
    - If an artifact requires storage, its metadata is flagged, and the
      system prompt for the code-generating AI is augmented with detailed
      documentation for the new storage API.

4.  **Configuration**:
    - Adds hidden site settings `ai_artifact_kv_value_max_length` and
      `ai_artifact_max_keys_per_user_per_artifact` for throttling.

This also includes a minor fix to use `jsonb_set` when updating
artifact metadata, ensuring other metadata fields are preserved.
2025-06-11 06:59:46 +10:00
Roman Rizzi
f7e0ea888d
DEV: Use a PORO to represent modules/features. (#1421)
Additional changes:

Adds a "#features" method in AiPersona to find which features are using that persona.
Serializes a basic version of a LlmModel in the persona's "#default_llm" serializer attribute.
2025-06-10 14:37:53 -03:00
Rafael dos Santos Silva
b54db133cd
FIX: No need for XML in gists responses anymore (#1420) 2025-06-10 14:21:31 -03:00
Roman Rizzi
98afd7f8c3
FEATURE: Display features that rely on multiple personas. (#1411)
* FEATURE: Display features that rely on multiple personas.

This change makes the previously hidden feature page visible, displaying features like the AI helper that rely on multiple personas.

* Fix system specs
2025-06-09 16:13:09 -03:00
Keegan George
33fd6801e5
DEV: Add back validator for Spam setting (#1415)
## 🔍 Overview
This update re-introduces the validator used on the `ai_spam_detection_enabled` setting. It was initially added in https://github.com/discourse/discourse-ai/pull/1374 to prevent Spam from being enabled without creating an `AiModerationSetting` value in the database. However, due to issues with backups/migrations, we temporarily removed it in https://github.com/discourse/discourse-ai/pull/1393. Now, with some internal fixes, we can re-introduce it. We also update the validator so that it only validates when the setting is being turned on, not when it is being turned off.
2025-06-06 10:56:36 -07:00
Natalie Tay
6827147362
DEV: Add topic and post id when using completions for traceability to AiApiAuditLog (#1414)
The AiApiAuditLog for a translation event doesn't trace back easily to a post or topic.

This commit adds support for that, and also switches the translators to named arguments rather than positional arguments.
2025-06-06 23:24:24 +08:00
Natalie Tay
8a3a247b11
DEV: Also detect locale of categories and do not translate if already in the locale (#1413)
Previously I had omitted adding `locale` to the category, as categories tended to be just a single word, and I did not think it was worth carrying locale information.

Because certain LLMs do poorly at translation, category descriptions got pretty messy. We added locale support here: https://github.com/discourse/discourse/pull/32962.

This PR adds automatic locale detection and skips translating when the category is already in that locale.
2025-06-06 22:41:48 +08:00
Sam
6817866de9
FEATURE: allow access to assigns from forum researcher (#1412)
* FEATURE: allow access to assigns from forum researcher

* FIX: should properly be checking for empty

* finish PR
2025-06-06 16:59:00 +10:00
Roman Rizzi
a68ab76eb6
FIX: Update topic summarization prompt to work better when using full names (#1409) 2025-06-05 12:28:29 -03:00
Roman Rizzi
c885e5697f review feedback 2025-06-04 14:23:00 -03:00
Roman Rizzi
0338dbea23 FEATURE: Use different personas to power AI helper features.
You can now edit each AI helper prompt individually through personas, limit access to specific groups, set different LLMs, etc.
2025-06-04 14:23:00 -03:00
Sam
3e74eea1e5
FEATURE: add context and llm controls to researcher, fix username filter (#1401)
Adds context length controls to researcher (max tokens per post and batch)
Allow picking LLM for researcher
Fix bug where unicode usernames were not working
Fix documentation of OR logic
2025-06-04 16:39:43 +10:00
Sam
4dffd0b2c5
DEV: improve tool infra, improve forum researcher prompts, improve logging (#1391)
- add sleep function for tool polling with rate limits
- Support base64 encoding for HTTP requests and uploads
-  Enhance forum researcher with cost warnings and comprehensive planning
- Add cancellation support for research operations
- Include feature_name parameter for bot analytics
- richer research support (OR queries)
2025-06-03 15:17:55 +10:00
Rafael dos Santos Silva
27de71fc4f
FIX: Proper default LLM detection for inferred concepts (#1392) 2025-06-02 17:56:47 -03:00
Rafael dos Santos Silva
478f31de47
FEATURE: add inferred concepts system (#1330)
* FEATURE: add inferred concepts system

This commit adds a new inferred concepts system that:
- Creates a model for storing concept labels that can be applied to topics
- Provides AI personas for finding new concepts and matching existing ones
- Adds jobs for generating concepts from popular topics
- Includes a scheduled job that automatically processes engaging topics

* FEATURE: Extend inferred concepts to include posts

* Adds support for concepts to be inferred from and applied to posts
* Replaces daily task with one that handles both topics and posts
* Adds database migration for posts_inferred_concepts join table
* Updates PersonaContext to include inferred concepts



Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Co-authored-by: Keegan George <kgeorge13@gmail.com>
2025-06-02 14:29:20 -03:00
Keegan George
34c98de864
FIX: Exporting overall sentiment fails (#1388)
## 🔍 Overview

When exporting an Overall Sentiment report in the admin panel, the export fails with:

```ruby
Job exception: no implicit conversion of Symbol into Integer
```

This was happening because we were passing a single _Hash_ to `report.data`; however, exports expect `report.data` to be an _Array of Hashes_. This update fixes the issue by wrapping the data in an array.
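In other words (the field names below are made up for illustration):

```ruby
# Before: a single Hash, which the exporter cannot iterate as rows
report.data = { positive: 10, neutral: 5, negative: 2 }

# After: an Array of Hashes, the shape exports expect
report.data = [{ positive: 10, neutral: 5, negative: 2 }]
```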
2025-05-30 11:58:28 -07:00
Sam
77ae426d95
FEATURE: support upload.getUrl in custom tools (#1384)
* FEATURE: support upload.getUrl in custom tools

Some tools need to share images with an API. A common pattern
is for APIs to expect a URL.

This allows converting upload://123123 to a proper CDN friendly
URL from within a custom tool

* no support for secure uploads, so be explicit about it.
2025-05-30 15:47:07 +10:00
Natalie Tay
373e2305d6
FEATURE: Automatic translation and localization of posts, topics, categories (#1376)
Related: https://github.com/discourse/discourse-translator/pull/310

This commit includes all the jobs and event hooks to localize posts, topics, and categories.

A few notes:
- `feature_name: "translation"` because the site setting is `ai-translation` and module is `Translation`
- we will switch to proper ai-feature in the near future, and can consider using the persona_user as `localization.localizer_user_id`
- keeping things flat within the module for now as we will be moving to ai-feature soon and have to rearrange
- Settings renamed/introduced are:
  - ai_translation_backfill_rate (0)
  - ai_translation_backfill_limit_to_public_content (true)
  - ai_translation_backfill_max_age_days (5)
  - ai_translation_verbose_logs (false)
2025-05-29 17:28:06 +08:00
Roman Rizzi
01e29ca5d8
FIX: Bump persona's examples length (#1377) 2025-05-28 14:01:44 -03:00
Keegan George
297c64c3b8
DEV: Unhide spam detection setting (#1374)
## 🔍 Overview
We want to unhide the `ai_spam_detection_enabled` setting so that we can retain staff action log features. However, we also want to ensure users cannot enable spam detection without having `AiModerationSetting.spam` present in the database.

In this update we unhide the setting, but also add a validator to ensure the necessary configuration is in place before allowing the setting to be enabled.
2025-05-28 07:50:56 -07:00
Sam
70b0db2871
FIX: improve researcher tool - fix topic filters (#1368)
* Small fix, reasoning is now available on Claude 4 models

* fix invalid filters should raise, topic filter not working

* fix spec so we are consistent
2025-05-26 16:00:44 +10:00
Roman Rizzi
0ce17a122f
FIX: Correctly pass tool_choice when using Claude models. (#1364)
The `ClaudePrompt` object couldn't access the original prompt's tool_choice attribute, affecting both Anthropic and Bedrock.
2025-05-23 10:36:52 -03:00
Sam
cf220c530c
FIX: Improve MessageBus efficiency and correctly stop streaming (#1362)
* FIX: Improve MessageBus efficiency and correctly stop streaming

This commit enhances the message bus implementation for AI helper streaming by:

- Adding client_id targeting for message bus publications to ensure only the requesting client receives streaming updates
- Limiting MessageBus backlog size (2) and age (60 seconds) to prevent Redis bloat
- Replacing clearTimeout with Ember's cancel method for proper runloop management (we were leaking a stop)
- Adding tests for client-specific message delivery

These changes improve memory usage and make streaming more reliable by ensuring messages are properly directed to the requesting client.
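A minimal sketch of the publish call shape this describes (the channel name and payload are illustrative, not the exact code):

```ruby
MessageBus.publish(
  "/discourse-ai/ai-helper/stream_suggestion/#{post.id}", # illustrative channel
  { done: false, result: partial_text },
  client_ids: [client_id],  # only the requesting client receives streaming updates
  max_backlog_size: 2,      # cap backlog entries kept in Redis
  max_backlog_age: 60,      # seconds before backlog entries expire
)
```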

* composer suggestion needed a fix as well.

* backlog size of 2 is risky here because the same channel name is reused between clients
2025-05-23 16:23:06 +10:00
Roman Rizzi
d72ad84f8f
FIX: Retry parsing escaped inner JSON to handle control chars. (#1357)
The structured output JSON comes embedded inside the API response, which is also a JSON. Since we have to parse the response to process it, any control characters inside the structured output are unescaped into regular characters, leading to invalid JSON and breaking during parsing. This change adds a retry mechanism that escapes
the string again if parsing fails, preventing the parser from breaking on malformed input and working around this issue.

For example:

```
  original = '{ "a": "{\\"key\\":\\"value with \\n newline\\"}" }'
  JSON.parse(original) => { "a" => "{\"key\":\"value with \n newline\"}" }
  # At this point, the inner JSON string contains an actual newline.
```
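A minimal sketch of the retry idea (not the exact implementation): if parsing the inner string fails because control characters were unescaped by the outer parse, re-escape them and parse once more.

```ruby
require "json"

# Best-effort parse of the inner structured output: on failure,
# re-escape control characters and retry once.
def parse_inner_json(raw)
  JSON.parse(raw)
rescue JSON::ParserError
  JSON.parse(raw.gsub("\n", "\\n").gsub("\r", "\\r").gsub("\t", "\\t"))
end
```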
2025-05-21 11:25:59 -03:00
Roman Rizzi
e207eba1a4
FIX: Don't dig on nil when checking for the gemini schema (#1356) 2025-05-21 08:30:47 -03:00
Sam
7db2589cc4
DEV: prompt engineering to improve citations (#1351) 2025-05-20 13:01:35 +10:00
Roman Rizzi
2fb691cba8
FEATURE: Triage can hide posts after adding them to the review queue (#1348) 2025-05-20 08:19:00 +10:00
Sam
de0625571b
FIX: add safe navigation to serializer include conditions (#1349)
In some rare cases a post can exist without a topic
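For illustration, the kind of guard this refers to (the attribute and method names are made up):

```ruby
# In a post serializer, a post may rarely exist without a topic,
# so use safe navigation in the include condition.
def include_ai_summary?
  object.topic&.has_summary?
end
```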
2025-05-19 18:55:55 -03:00
Keegan George
9ee82fd8be
DEV: Temporarily suppress diff animation as we fix issues (#1341)
The diff animation introduced in https://github.com/discourse/discourse-ai/pull/1332 and with attempts to improve it in https://github.com/discourse/discourse-ai/pull/1338 still has various issues. As we work on a fix, we want to revert the animation to simply stream the diff without animation so users are not left with a janky unusable experience.
2025-05-15 14:55:30 -07:00
Keegan George
dfea784fc4
DEV: Improve diff streaming accuracy with safety checker (#1338)
This update adds a safety checker that scans the streamed updates. It ensures that incomplete segments of text are not yet sent over the message bus, as this would break the diff streamer. It also updates the diff streamer to handle a thinking state while waiting for message bus updates.
2025-05-15 11:38:46 -07:00
Roman Rizzi
ff2e18f9ca
FIX: Structured output discrepancies. (#1340)
This change fixes two bugs and adds a safeguard.

The first issue is that the schema Gemini expected differed from the one sent, resulting in 400 errors when performing completions.

The second issue was that creating a new persona wouldn't define a method
for `response_format`. This has to be explicitly defined when we wrap it inside the Persona class. Also, there was a mismatch between the default value and what we stored in the DB: some parts of the code expected symbols as keys and others expected strings.

Finally, we add a safeguard for when the model refuses to reply with valid JSON, even when asked to. In this case, we make a best effort to recover and stream the raw response.
2025-05-15 11:32:10 -03:00
Sam
1b3fdad5c7
FEATURE: allow researcher to also research specific topics (#1339)
* FEATURE: allow researcher to also research specific topics

Also improve UI around research with more accurate info

* this ensures that under no conditions PMs will be included
2025-05-15 17:48:21 +10:00
Sam
2c6459429f
DEV: use a proper object for tool definition (#1337)
* DEV: use a proper object for tool definition

This moves away from using a loose hash to define tools, which
is error prone.

Instead, given a proper object, we will also be able to coerce the
return values to match the tool definition correctly.

* fix xml tools

* fix anthropic tools

* fix specs... a few more to go

* specs are passing

* FIX: coerce values for XML tool calls

* Update spec/lib/completions/tool_definition_spec.rb

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-15 17:32:39 +10:00
Sam
c34fcc8a95
FEATURE: forum researcher persona for deep research (#1313)
This commit introduces a new Forum Researcher persona specialized in deep forum content analysis along with comprehensive improvements to our AI infrastructure.

Key additions:

- New Forum Researcher persona with advanced filtering and analysis capabilities
- Robust filtering system supporting tags, categories, dates, users, and keywords
- LLM formatter to efficiently process and chunk research results

Infrastructure improvements:

- Implemented CancelManager class to centrally manage AI completion cancellations
- Replaced callback-based cancellation with a more robust pattern
- Added systematic cancellation monitoring with callbacks

Other improvements:

- Added configurable default_enabled flag to control which personas are enabled by default
- Updated translation strings for the new researcher functionality
- Added comprehensive specs for the new components
- Renames Researcher -> Web Researcher

This change makes our AI platform more stable while adding powerful research capabilities that can analyze forum trends and surface relevant content.
2025-05-14 12:36:16 +10:00