1054 Commits

Author SHA1 Message Date
Natalie Tay
fe44d78156
DEV: Expose AI spam scanning metrics (#1077)
This should give us a better idea of how our scanner is faring across sites.

```
# HELP discourse_discourse_ai_spam_detection AI spam scanning statistics
# TYPE discourse_discourse_ai_spam_detection counter
discourse_discourse_ai_spam_detection{db="default",type="scanned"} 16
discourse_discourse_ai_spam_detection{db="default",type="is_spam"} 7
discourse_discourse_ai_spam_detection{db="default",type="false_positive"} 1
discourse_discourse_ai_spam_detection{db="default",type="false_negative"} 2
```
2025-01-27 11:57:01 +08:00
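As a quick illustration of how these counters can be read, here is a minimal Ruby sketch (not part of the plugin's reporting code) that derives simple rates from the sample values above:

```ruby
# Sketch only: derive simple quality rates from the counter values shown in the
# sample output above. Variable names are illustrative.
counters = {
  "scanned" => 16,
  "is_spam" => 7,
  "false_positive" => 1,
  "false_negative" => 2,
}

flag_rate = counters["is_spam"].to_f / counters["scanned"]
false_positive_rate = counters["false_positive"].to_f / counters["is_spam"]

puts format("flagged %.1f%% of scanned posts", flag_rate * 100)                 # => flagged 43.8% of scanned posts
puts format("%.1f%% of flags were false positives", false_positive_rate * 100)  # => 14.3% of flags were false positives
```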
Roman Rizzi
ad7bb9bd31
DEV: Promote historical post-deploy migrations (#1091) 2025-01-24 11:49:15 -03:00
Kris
99e73f09ff
UX: improve embeddings config styles (#1085)
* WIP: improve embeddings config styles

* switch to textarea, fix back button

* remove log, update button, fix tests

* stree

* fix spec

* spec fix

* remove comment
2025-01-24 16:24:59 +11:00
Martin Brennan
952e0a51d6
UX: Update usage "Learn more..." link (#1090)
There is a new Meta topic for this:

https://meta.discourse.org/t/discourse-ai-ai-usage/348677
2025-01-24 14:18:18 +10:00
Kris
956efba8cb
UX: set usage as first AI admin tab (#1089) 2025-01-24 10:14:52 +11:00
Rafael dos Santos Silva
67a1257b89
FEATURE: Gemini Tokenizer (#1088) 2025-01-23 18:20:35 -03:00
Roman Rizzi
5a97752117
FIX: Always raise the single exception/Open AI models migration (#1087) 2025-01-23 15:30:06 -03:00
Penar Musaraj
d5cf53e8e0
UX: Fix composer helper z-index (#1086)
Followup to https://github.com/discourse/discourse-ai/pull/1064

That commit added a higher z-index due to core changes; we no longer need
an iPad-specific z-index.
2025-01-23 10:07:27 -05:00
Sam
8bf350206e
FEATURE: track duration of AI calls (#1082)
* FEATURE: track duration of AI calls

* annotate
2025-01-23 11:32:12 +11:00
Roman Rizzi
faa8e6e873
FIX: Embeddings backfill rake task was using old code (#1084) 2025-01-22 14:00:26 -03:00
Roman Rizzi
e2e753d73c
FEATURE: Formalize support for matryoshka dimensions. (#1083)
We have a flag to signal we are shortening the embeddings of a model.
It is currently only used for Open AI's text-embedding-3-*, but we plan to use it for other services.
2025-01-22 11:26:46 -03:00
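Conceptually, matryoshka shortening truncates the embedding to its leading dimensions and re-normalizes it. A minimal Ruby sketch of that idea (illustrative only, not the plugin's implementation):

```ruby
# Sketch: shorten a matryoshka-trained embedding by keeping the leading dimensions
# and re-normalizing. Names and dimensions are illustrative.
def shorten_embedding(embedding, target_dimensions)
  truncated = embedding.first(target_dimensions)
  norm = Math.sqrt(truncated.sum { |v| v * v })
  norm.zero? ? truncated : truncated.map { |v| v / norm }
end

full = Array.new(3072) { rand } # e.g. text-embedding-3-large output size
short = shorten_embedding(full, 256)
puts short.length # => 256
```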
我秦始皇
654f90f1cd
FIX: convert provider_params hash to json before db insert (#1081)
* FIX: convert provider_params hash to json before db insert

* FIX: lint issues in config migration

* FIX: simplify provider_params json conversion
2025-01-22 09:55:41 -03:00
Roman Rizzi
a5e5ae72a8
FIX: Open AI embedding shortening is only available for some models (#1080) 2025-01-21 17:50:40 -03:00
Roman Rizzi
3b66fb3e87
FIX: Restore the accidentally deleted query prefix. (#1079)
Additionally, we add a prefix for embedding generation.
Both are stored in the definitions table.
2025-01-21 14:10:31 -03:00
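A rough sketch of how the two stored prefixes could be applied (attribute names are hypothetical, not the actual definitions table schema):

```ruby
# Sketch: an embedding definition carries one prefix for search queries and one
# for embedding generation; attribute names here are illustrative.
EmbeddingDefinition = Struct.new(:search_prefix, :embed_prefix, keyword_init: true)

definition = EmbeddingDefinition.new(search_prefix: "query: ", embed_prefix: "passage: ")

query_input = definition.search_prefix + "how do I reset my password?"
index_input = definition.embed_prefix + "The text of the post being embedded."

puts query_input # => query: how do I reset my password?
```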
Roman Rizzi
f5cf1019fb
FEATURE: configurable embeddings (#1049)
* Use AR model for embeddings features

* endpoints

* Embeddings CRUD UI

* Add presets. Hide a couple more settings

* system specs

* Seed embedding definition from old settings

* Generate search bit index on the fly. cleanup orphaned data

* support for seeded models

* Fix run test for new embedding

* fix selected model not set correctly
2025-01-21 12:23:19 -03:00
Discourse Translator Bot
fad4b65d4f
Update translations (#1078) 2025-01-21 15:55:40 +01:00
Sam
2c609e165b
FEATURE: Add user location info to spam scanner context (#1076)
This adds registration IP, last known IP, and email information to the scanning context.

This provides another hint for the spam scanner about possibly malicious users.

For example, a user registered in India but replying from Australia, or an email address that is clearly a throwaway.
2025-01-21 17:51:21 +11:00
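A rough sketch of the kind of extra hints this adds to the scanning context (field names, formatting, and example values are illustrative):

```ruby
# Sketch: extra user hints appended to the spam scanning context.
# Field names and the example values are illustrative only.
ScannedUser = Struct.new(:email, :registration_ip, :last_known_ip, keyword_init: true)

def location_context(user)
  [
    "email: #{user.email}",
    "registration IP: #{user.registration_ip}",
    "last known IP: #{user.last_known_ip}",
  ].join("\n")
end

user = ScannedUser.new(email: "foo@mailinator.com", registration_ip: "192.0.2.10", last_known_ip: "203.0.113.7")
puts location_context(user)
```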
Kelv
7957796e56
DEV: update all suffix alt icon names (#1075) 2025-01-20 17:33:25 +08:00
David Taylor
890b85bff3
DEV: Update icons for FA6 (#1074) 2025-01-17 10:12:57 +00:00
Roman Rizzi
4784e7fe43
FIX: Set default for existing records. (#1073)
We'll later copy the correct value from content_range. 1 should be the minimum highest post number a topic has.
2025-01-16 10:38:53 -03:00
Roman Rizzi
46fcdb6ba5
FIX: Make summaries backfill job more resilient. (#1071)
To quickly select backfill candidates without comparing SHAs, we compare the last summarized post to the topic's highest_post_number. However, hiding or deleting a post and then adding a small action will update this column, causing the job to stall and re-generate the same summary repeatedly until someone posts a regular reply. On top of that, this isn't always true for topics with `best_replies`, as the last reply isn't necessarily included.

Since this is not evident at first glance and each summarization strategy picks its targets differently, I'm opting to simplify the backfill logic and how we track potential candidates.

The first step is dropping `content_range`, which serves no purpose and is only there because summary caching was originally supposed to work differently. I'm replacing it with a column called `highest_target_number`, which tracks `highest_post_number` for topics and could track other things, like a channel's `message_count`, in the future.

Now that we have this column, when selecting every potential backfill candidate we check whether the summary is truly outdated by comparing the SHAs; if it isn't, we just update the column and move on.
2025-01-16 09:42:53 -03:00
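A minimal sketch of the simplified check described above, with illustrative model and column names:

```ruby
# Sketch of the simplified backfill check described above; names are illustrative.
CachedSummary = Struct.new(:highest_target_number, :content_sha)
CandidateTopic = Struct.new(:highest_post_number)

# Regenerate only when the content SHA actually changed; otherwise just advance
# the highest_target_number pointer so the topic stops showing up as a candidate.
def regenerate_summary?(summary, topic, current_sha)
  return true if summary.nil?
  return false if summary.highest_target_number >= topic.highest_post_number
  return true if summary.content_sha != current_sha

  summary.highest_target_number = topic.highest_post_number
  false
end

summary = CachedSummary.new(10, "abc123")
topic = CandidateTopic.new(12) # a small action bumped highest_post_number
puts regenerate_summary?(summary, topic, "abc123") # => false, pointer advanced instead
```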
Rafael dos Santos Silva
f9aa2de413
FIX: AWS Bedrock non-streaming calls response log (#1072) 2025-01-15 18:51:25 -03:00
Sam
81b952d56e
FIX: only hide posts detected explicitly as spam (#1070)
When enabling the spam scanner, there may be old unscanned posts. This can create a risky situation where the scanner operates on legitimate posts and produces false positives.

To keep this much safer, we no longer try to hide a spammer's older posts.
2025-01-15 16:50:41 +11:00
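A tiny sketch of the safer behaviour described (names illustrative, not the plugin's API):

```ruby
# Sketch: when a user is flagged as a spammer, hide only the posts the scanner
# explicitly classified as spam, never their older unscanned posts.
Post = Struct.new(:id, :raw)

def posts_to_hide(user_posts, verdicts)
  user_posts.select { |post| verdicts[post.id] == :spam }
end

posts = [Post.new(1, "an old legitimate post"), Post.new(2, "buy now!!!")]
puts posts_to_hide(posts, { 2 => :spam }).map(&:id).inspect # => [2]
```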
Natalie Tay
c881f8b361
DEV: Add rake task to send topics or posts to spam scanner (#1059) 2025-01-15 11:48:57 +08:00
Rafael dos Santos Silva
92f122c54d
SECURITY: Fix XSS on Shared AI Conversations local Onebox (#1069) 2025-01-14 18:05:37 -03:00
Roman Rizzi
cd03874b4d
FIX: Missing table check in post_migration (#1068) 2025-01-14 17:33:01 -03:00
Roman Rizzi
65456c8b30
DEV: Migration to remove old embeddings tables~ (#1067)
* DEV: Migration to remove old embeddings tables~

* Check for table existence
2025-01-14 17:13:34 -03:00
Roman Rizzi
c4d2b7de1d
PERF: Optimize backfill query to prevent statement timeouts (#1066) 2025-01-14 15:39:19 -03:00
Roman Rizzi
6721c6751d
FIX: Do batches for backfilling huge embeddings tables (#1065) 2025-01-14 14:42:40 -03:00
Keegan George
bbae790c2b
FIX: Composer helper not appearing on tablets (#1064)
This update fixes an issue where the composer helper menu was not being shown on tablets in desktop mode. Updating the `z-index` to match the modal-dialog case is more appropriate here.
2025-01-14 09:35:31 -08:00
Roman Rizzi
356ea77201
FIX: Split backfill into separate migrations to use independent transactions (#1063) 2025-01-14 13:30:52 -03:00
Roman Rizzi
09ca123757
FIX: Split statements to avoid timeout (#1062) 2025-01-14 12:54:18 -03:00
Discourse Translator Bot
c029bc8979
Update translations (#1060) 2025-01-14 16:20:00 +01:00
Roman Rizzi
65bbcd71fc
DEV: Embedding tables' model_id has to be a bigint (#1058)
* DEV: Embedding tables' model_id has to be a bigint

* Drop old search_bit indexes

* copy rag fragment embeddings created during deploy window
2025-01-14 10:53:06 -03:00
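A hedged sketch of what such a column-type change can look like in a Rails migration (table names are hypothetical, not the actual migration in this commit):

```ruby
# Sketch of a model_id bigint migration; table names are illustrative only.
class ChangeEmbeddingModelIdToBigint < ActiveRecord::Migration[7.1]
  def up
    change_column :ai_topic_embeddings, :model_id, :bigint
    change_column :ai_post_embeddings, :model_id, :bigint
  end

  def down
    change_column :ai_topic_embeddings, :model_id, :integer
    change_column :ai_post_embeddings, :model_id, :integer
  end
end
```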
Sam
d07cf51653
FEATURE: llm quotas (#1047)
Adds a comprehensive quota management system for LLM models that allows:

- Setting per-group (applied per user in the group) token and usage limits with configurable durations
- Tracking and enforcing token/usage limits across user groups
- Quota reset periods (hourly, daily, weekly, or custom)
- Admin UI for managing quotas with real-time updates

This system provides granular control over LLM API usage by allowing admins
to define limits on both total tokens and number of requests per group.
Supports multiple concurrent quotas per model and automatically handles
quota resets.


Co-authored-by: Keegan George <kgeorge13@gmail.com>
2025-01-14 15:54:09 +11:00
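A rough Ruby sketch of the quota shape described above (class and attribute names are hypothetical, not the shipped schema):

```ruby
# Sketch of the quota idea described above: per-group token and usage limits with
# a reset window. Class and attribute names are illustrative, not the shipped schema.
LlmQuota = Struct.new(:group_id, :llm_model_id, :max_tokens, :max_usages, :duration_seconds, keyword_init: true)

def quota_exceeded?(quota, tokens_used:, usages:, window_started_at:, now: Time.now)
  return false if now - window_started_at >= quota.duration_seconds # window has rolled over
  (quota.max_tokens && tokens_used >= quota.max_tokens) ||
    (quota.max_usages && usages >= quota.max_usages) || false
end

quota = LlmQuota.new(group_id: 1, llm_model_id: 2, max_tokens: 50_000, max_usages: 100, duration_seconds: 86_400)
puts quota_exceeded?(quota, tokens_used: 51_000, usages: 3, window_started_at: Time.now - 3_600) # => true
```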
Sam
20612fde52
FEATURE: add the ability to disable streaming on an Open AI LLM
Disabling streaming is required for models such as o1 that do not have streaming
enabled yet.

It is good to keep this feature around in case various APIs decide not to support streaming endpoints, so Discourse AI can continue to work just as it did before.

Also fixes an issue where sharing artifacts would miss the viewport, leading to tiny artifacts on mobile.
2025-01-13 17:01:01 +11:00
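Conceptually, the toggle controls whether the completion request asks for a streamed response. A minimal sketch (the `stream` key follows the OpenAI chat completions API; the option name here is illustrative):

```ruby
# Sketch: build an OpenAI-style chat completion payload; when streaming is disabled
# the `stream` key is simply omitted and the full response body is read at once.
def completion_payload(model:, messages:, disable_streaming: false)
  payload = { model: model, messages: messages }
  payload[:stream] = true unless disable_streaming
  payload
end

p completion_payload(model: "o1", messages: [{ role: "user", content: "hi" }], disable_streaming: true)
# => {:model=>"o1", :messages=>[{:role=>"user", :content=>"hi"}]}
```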
Jarek Radosz
7e9c0dc076
FIX: Invalid locale yaml (#1057) 2025-01-11 23:31:33 +01:00
Mark VanLandingham
2d1ce01320
DEV: Add no-results className to full page search toggle (#1055) 2025-01-10 10:59:18 -06:00
Keegan George
b24669c810
DEV: Add structure for errors in spam (#1054)
This update adds some structure for handling errors in the spam config, and specifically handles the error raised when the spam scanning user is not an admin account.
2025-01-09 09:17:06 -08:00
Keegan George
24b69bf840
FIX: Update spam controller action should consider seeded LLM properly (#1053)
The seeded LLM setting `SiteSetting.ai_spam_detection_model_allowed_seeded_models` returns a _string_ with IDs separated by pipes. Running `_map` on it will return an array of strings. We were previously checking for the ID with a custom prefix identifier, but instead we should be checking the stringified ID.
2025-01-08 13:41:25 -08:00
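A small sketch of the corrected membership check (the setting name comes from the description above; the values and split logic are illustrative):

```ruby
# Sketch: the allowed-seeded-models setting is a pipe-delimited string of IDs, so
# membership has to be checked against the stringified model id. Values illustrative.
allowed = "-1|-2" # e.g. SiteSetting.ai_spam_detection_model_allowed_seeded_models
llm_model_id = -1

allowed_ids = allowed.split("|").map(&:strip)
puts allowed_ids.include?(llm_model_id.to_s) # => true
```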
Guhyoun Nam
404092a68c
DEV: Add appEvents trigger when Ai search results toggled (#1052)
This PR adds appEvents triggers when Ai search results are toggled.
2025-01-08 12:17:25 -06:00
Mark VanLandingham
327adbde29
UX: Full page search -- always show tooltip & add msg (#1051) 2025-01-08 09:05:30 -06:00
Kris
749af40fad
UX: close summary modal on click outside (#1050) 2025-01-07 11:24:27 -05:00
Discourse Translator Bot
61758ff8a6
Update translations (#1040) 2025-01-03 14:01:41 +01:00
Mark VanLandingham
b6cefd10fa
DEV: Move semantic search from connector to component (#1048) 2025-01-02 12:32:49 -06:00
Sam
11d0f60f1e
FEATURE: smart date support for AI helper (#1044)
* FEATURE: smart date support for AI helper

This feature allows conversion of human-typed dates and times
into smart, timezone-friendly "Discourse" dates.

* fix specs and lint

* lint

* address feedback

* add specs
2024-12-31 08:04:25 +11:00
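Roughly, the helper maps natural-language phrases to Discourse's local-date markup. A sketch with a hard-coded mapping (the `[date=...]` output format is an assumption based on discourse-local-dates; the parsing is deliberately naive):

```ruby
require "date"

# Sketch: map a simple human phrase to Discourse local-date markup. The [date=...]
# format follows discourse-local-dates; the phrase parsing here is deliberately naive.
def smart_date(phrase, today: Date.today, timezone: "UTC")
  date =
    case phrase.downcase
    when "today" then today
    when "tomorrow" then today + 1
    when "next week" then today + 7
    else return phrase # unknown phrase: leave the text untouched
    end

  %([date=#{date.iso8601} timezone="#{timezone}"])
end

puts smart_date("tomorrow", today: Date.new(2024, 12, 30))
# => [date=2024-12-31 timezone="UTC"]
```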
Sam
f9f89adac5
FIX: keep track of silence reason when spam detection flags user (#1046)
Previously, the reason was blank when silencing a user.
2024-12-27 17:47:16 +11:00
Keegan George
b480f13a0f
FIX: Prevent LLM enumerator from erroring when spam enabled (#1045)
This PR fixes an issue where the LLM enumerator would error out when `SiteSetting.ai_spam_detection = true` but there was no `AiModerationSetting.spam` present.

Typically, we add an `LlmDependencyValidator` for the setting itself. However, since Spam is unique in that it has its model set in `AiModerationSetting` instead of a `SiteSetting`, we'll add a simple check here to prevent erroring out.
2024-12-27 09:12:29 +11:00
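The guard amounts to something along these lines (a standalone sketch; `AiModerationSetting.spam` is the record referenced in the description, while the method shape and struct are illustrative):

```ruby
# Sketch of the guard described above: when spam detection is enabled but no
# moderation setting row exists yet, return an empty list instead of raising.
SpamSetting = Struct.new(:llm_model_id)

def spam_detection_llm_ids(spam_setting)
  return [] if spam_setting.nil? || spam_setting.llm_model_id.nil?
  [spam_setting.llm_model_id]
end

puts spam_detection_llm_ids(nil).inspect                 # => []
puts spam_detection_llm_ids(SpamSetting.new(42)).inspect # => [42]
```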
Sam
47ecf86aa1
FIX: embedding validation (#1043) 2024-12-24 09:37:23 +11:00
Rafael dos Santos Silva
792df58fbc
FIX: AI Helper category / tag suggestion when user does not have categories muted (#1042) 2024-12-23 15:58:26 -03:00