This commit adds an "unavailable" state for the AI semantic search toggle. Currently, the AI toggle disappears when the sort order is anything but Relevance, which makes the UI confusing for users looking for AI results. This should help!
In a previous refactor, we moved the responsibility of querying and storing embeddings into the `Schema` class. Now, it's time for embedding generation.
The motivation behind these changes is to isolate vector characteristics in simple objects to later replace them with a DB-backed version, similar to what we did with LLM configs.
* FIX: Make sure gists are at least five minutes old before updating them
* Update app/jobs/regular/fast_track_topic_gist.rb
Co-authored-by: Keegan George <kgeorge13@gmail.com>
---------
Co-authored-by: Keegan George <kgeorge13@gmail.com>
* REFACTOR: A simpler way of interacting with embeddings tables.
This change adds a new abstraction called `Schema`, which acts as a repository that supports the same DB features `VectorRepresentation::Base` has, except that it removes the need to have duplicated methods per embeddings table.
It is also a bit more flexible when performing a similarity search because you can pass it a block that gives you access to the builder, allowing you to add multiple joins/where conditions.
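To make the idea concrete, here is a minimal, self-contained sketch of the repository pattern with a builder block; the class, method, table, and column names below are assumptions for illustration, not the plugin's actual API.

```ruby
# A minimal sketch of the repository idea: one class parameterized by the
# embeddings table instead of one subclass per table. Names are assumptions.
class EmbeddingsSchema
  def initialize(table:, target_column:)
    @table = table
    @target_column = target_column
  end

  def asymmetric_similarity_search(embedding, limit:)
    builder = DB.build(<<~SQL)
      SELECT #{@target_column}
      FROM #{@table}
      /*join*/
      /*where*/
      ORDER BY embeddings <=> '[:embedding]'
      LIMIT :limit
    SQL

    # The block exposes the builder, so callers can bolt on joins/where
    # conditions instead of the schema needing a dedicated method for each case.
    yield builder if block_given?

    builder.query(embedding: embedding, limit: limit)
  end
end

# Hypothetical usage: restrict a topic similarity search to a single category.
query_embedding = [0.12, -0.03, 0.87] # truncated example vector
schema = EmbeddingsSchema.new(table: "ai_topic_embeddings", target_column: "topic_id")

schema.asymmetric_similarity_search(query_embedding, limit: 50) do |builder|
  builder.join("topics t ON t.id = topic_id")
  builder.where("t.category_id = ?", 5)
end
```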
In this PR, we added functionality to hide the admin header for edit/new actions - https://github.com/discourse/discourse/pull/30175
To make it work properly, we have to rename `show` to `edit`, which is also a more accurate name.
This introduces a comprehensive spam detection system that uses LLM models
to automatically identify and flag potential spam posts. The system is
designed to be both powerful and configurable while preventing false positives.
Key Features:
* Automatically scans first 3 posts from new users (TL0/TL1)
* Creates dedicated AI flagging user to distinguish from system flags
* Tracks false positives/negatives for quality monitoring
* Supports custom instructions to fine-tune detection
* Includes test interface for trying detection on any post
Technical Implementation:
* New database tables:
- ai_spam_logs: Stores scan history and results
- ai_moderation_settings: Stores LLM config and custom instructions
* Rate limiting and safeguards (see the sketch after this list):
- Minimum 10-minute delay between rescans
- Only scans significant edits (>10 char difference)
- Maximum 3 scans per post
- 24-hour maximum age for scannable posts
* Admin UI features:
- Real-time testing capabilities
- 7-day statistics dashboard
- Configurable LLM model selection
- Custom instruction support
Security and Performance:
* Respects trust levels - only scans TL0/TL1 users
* Skips private messages entirely
* Stops scanning users after 3 successful public posts
* Includes comprehensive test coverage
* Maintains audit log of all scan attempts
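To make the rescan safeguards above concrete, here is a hedged sketch of how such a guard could look; the `AiSpamLog` usage, method name, and wiring are assumptions for illustration, not the actual implementation:

```ruby
# Hypothetical guard encoding the rescan safeguards listed above.
MIN_EDIT_DELTA = 10            # only rescan significant edits
MIN_RESCAN_DELAY = 10.minutes
MAX_SCANS_PER_POST = 3
MAX_POST_AGE = 24.hours

def should_rescan?(post, previous_raw:)
  return false if post.created_at < MAX_POST_AGE.ago          # too old to scan
  return false unless post.user.trust_level <= TrustLevel[1]  # TL0/TL1 only
  return false if post.topic.private_message?                 # skip PMs entirely

  logs = AiSpamLog.where(post_id: post.id)                    # assumed model name
  return false if logs.count >= MAX_SCANS_PER_POST            # scan budget used up
  return false if logs.maximum(:created_at)&.after?(MIN_RESCAN_DELAY.ago)

  # rough proxy for a "significant (>10 char) edit"
  (post.raw.length - previous_raw.length).abs > MIN_EDIT_DELTA
end
```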
---------
Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
* UX: Improve rough edges of AI usage page
* Ensure all text uses I18n
* Change from <button> usage to <DButton>
* Use <AdminConfigAreaCard> in place of custom card styles
* Format numbers nicely using our number format helper,
show full values on hover using title attr
* Ensure 0 is always shown for counters, instead of being blank
* FEATURE: Load usage data after page load
Use ConditionalLoadingSpinner while the usage data loads; this prevents
us from hanging on page load with a white screen.
* UX: Split users table, and add empty placeholders and page subheader
* DEV: Test fix
Instead of a stacked chart showing separate series for positive and negative, this PR simplifies the overall sentiment dashboard by collapsing the sentiment into a single series: the difference between `positive - negative`. This should make the data easier to scan for trends.
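For illustration only, collapsing the two series into the single net series could look roughly like this (the grouped counts are made-up example data):

```ruby
# Made-up daily counts standing in for the grouped sentiment classification data.
daily_counts = [
  { day: "2024-12-01", positive: 40, negative: 15 },
  { day: "2024-12-02", positive: 22, negative: 30 },
]

# One net value per day instead of two stacked series.
net_series = daily_counts.map { |row| { day: row[:day], net: row[:positive] - row[:negative] } }
# => [{ day: "2024-12-01", net: 25 }, { day: "2024-12-02", net: -8 }]
```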
* FEATURE: First-class support for OpenRouter
This new implementation supports picking quantization and provider preference
Also:
- Improve logging for summary generation
- Improve error message when contacting LLMs fails
* Better support for full screen artifacts on iPad
Support back button to close full screen
Refactor dialect selection and add Nova API support
Change dialect selection to use llm_model object instead of just provider name (a sketch follows at the end of this entry)
Add support for Amazon Bedrock's Nova API with native tools
Implement Nova-specific message processing and formatting
Update specs for Nova and AWS Bedrock endpoints
Enhance AWS Bedrock support to handle Nova models
Fix Gemini beta API detection logic
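As a rough sketch of the dialect-selection change (names, values, and model-id strings below are assumptions, not the plugin's actual dispatch code):

```ruby
# Hypothetical sketch: the llm_model record carries both provider and model
# name, so dialect selection can special-case Nova models on Bedrock instead
# of keying off a bare provider string.
ModelInfo = Struct.new(:provider, :name)

def dialect_for(llm_model)
  case llm_model.provider
  when "aws_bedrock"
    llm_model.name.include?("nova") ? :nova : :claude
  when "google"
    :gemini
  else
    :chat_gpt
  end
end

dialect_for(ModelInfo.new("aws_bedrock", "amazon.nova-pro-v1:0"))        # => :nova
dialect_for(ModelInfo.new("aws_bedrock", "anthropic.claude-3-5-sonnet")) # => :claude
```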
Previously, when clicking add footnote on an explain suggestion, it would replace the selected word by finding the first occurrence of that word. This causes issues when a word occurs more than once in a post. Finding the exact occurrence is not trivial to solve, so this PR instead prevents incorrect text replacements by only allowing the replacement when the selected text is unique within the post. We use the same logic here that we use to determine if something can be fast edited (a rough sketch follows below).
In this PR we also update tests for post helper explain suggestions. For a while we haven't had tests here: due to streaming/timing issues, we've been skipping our system specs. This PR adds acceptance tests instead, which give us better control over publishing message bus updates in the testing environment, so the flow can be tested reliably.
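As a rough illustration of the uniqueness guard described above (the real check lives alongside the fast-edit logic; the method below is made up):

```ruby
# Only perform the replacement when the selected text occurs exactly once in
# the post, mirroring the "can this be fast edited?" idea.
def safe_replace(raw, selected, replacement)
  return raw unless raw.scan(selected).length == 1
  raw.sub(selected, replacement)
end

safe_replace("The word bank near the river bank.", "bank", "institution")
# => unchanged, because "bank" appears twice
safe_replace("A tranquil stream.", "tranquil", "tranquil[^1]")
# => "A tranquil[^1] stream."
```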
* FEATURE: Backfill posts sentiment.
It adds a scheduled job to backfill posts' sentiment, similar to our existing rake task, but with two settings to control the batch size and posts' max-age.
* Make sure model_name order is consistent.
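A rough sketch of what the scheduled backfill job described above might look like; the setting names, association, and classification call are assumptions for illustration, not the actual implementation:

```ruby
# frozen_string_literal: true

# Hypothetical scheduled job illustrating the backfill with batch-size and
# max-age settings. Setting and association names are assumptions.
module Jobs
  class SentimentBackfill < ::Jobs::Scheduled
    every 1.hour

    def execute(_args)
      batch_size = SiteSetting.ai_sentiment_backfill_batch_size
      max_age = SiteSetting.ai_sentiment_backfill_max_age_days.days.ago

      posts =
        Post
          .where("posts.created_at > ?", max_age)
          .where.missing(:sentiment_classifications) # hypothetical association
          .order(created_at: :desc)
          .limit(batch_size)

      posts.each { |post| classify(post) }
    end

    private

    # Placeholder standing in for the plugin's actual sentiment classification
    # call; it would run the post through the configured model and store results.
    def classify(post)
    end
  end
end
```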
For a while now we have not been sending the examples to the AI
helper, which can lead to inconsistent results.
Note: this also means that in non-English locales we did not send
the English examples, so this may end up reducing performance.
That said, the first thing we need to do is fix the regression.
This PR fixes an issue where the tag suggester in the edit title topic area was suggesting tags that are already assigned to the topic. It also increases the number of suggested tags to 7 so that a decent number of suggestions remain when some tags are already assigned.
Add support for versioned artifacts with improved diff handling
* Add versioned artifacts support allowing artifacts to be updated and tracked
- New `ai_artifact_versions` table to store version history
- Support for updating artifacts through a new `UpdateArtifact` tool
- Add version-aware artifact rendering in posts
- Include change descriptions for version tracking
* Enhance artifact rendering and security
- Add support for module-type scripts and external JS dependencies
- Expand CSP to allow trusted CDN sources (unpkg, cdnjs, jsdelivr, googleapis)
- Improve JavaScript handling in artifacts
* Implement robust diff handling system (this is dormant but ready to use once LLMs catch up; see the sketch after this entry)
- Add new DiffUtils module for applying changes to artifacts
- Support for unified diff format with multiple hunks
- Intelligent handling of whitespace and line endings
- Comprehensive error handling for diff operations
* Update routes and UI components
- Add versioned artifact routes
- Update markdown processing for versioned artifacts
Also:
- Tweaks summary prompt
- Improves upload support in custom tool to also provide URLs
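To illustrate the idea behind the dormant diff handling (this is only a sketch, not the plugin's actual DiffUtils module), a minimal hunk applier with line-ending normalization and whitespace-tolerant context matching might look like this:

```ruby
# Minimal sketch of applying one unified-diff hunk to an artifact's source.
# Line endings are normalized and context lines are compared after stripping
# whitespace, so minor formatting drift doesn't reject the patch.
Hunk = Struct.new(:start_line, :lines) # lines keep their "+", "-", " " prefix

def apply_hunk(source, hunk)
  original = source.gsub("\r\n", "\n").split("\n", -1)
  output = original[0...(hunk.start_line - 1)] # untouched lines before the hunk
  cursor = hunk.start_line - 1

  hunk.lines.each do |line|
    tag, text = line[0], line[1..].to_s
    case tag
    when " " # context line: verify loosely, then copy through
      raise "context mismatch at line #{cursor + 1}" if original[cursor].to_s.strip != text.strip
      output << original[cursor]
      cursor += 1
    when "-" # removed line: skip it in the original
      cursor += 1
    when "+" # added line: emit it
      output << text
    end
  end

  (output + original[cursor..]).join("\n")
end

# Example: replace a heading in a tiny artifact.
hunk = Hunk.new(1, [" <h1>", "-  Old title", "+  New title", " </h1>"])
apply_hunk("<h1>\n  Old title\n</h1>", hunk)
# => "<h1>\n  New title\n</h1>"
```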
In the tag suggester menu we use `DButton` as a wrapper element and use the `discourseTag` helper to render the text inside it, so visually there is text content inside the button. However, since `DButton` assumes that an element with no `label`/`translatedLabel` should get the `.no-text` CSS class, some incorrect styling was being applied to this menu. This PR resolves that by programmatically passing the tag as a `translatedLabel` and then visually hiding it with CSS.
The description of the automatic captions dialog prompt included "Would you like to enable automatic auto captions on image uploads." I removed the second "auto" at the suggestion of translators.