This introduces a comprehensive spam detection system that uses LLMs
to automatically identify and flag potential spam posts. The system is
designed to be both powerful and configurable while minimizing false positives.
Key Features:
* Automatically scans first 3 posts from new users (TL0/TL1)
* Creates dedicated AI flagging user to distinguish from system flags
* Tracks false positives/negatives for quality monitoring
* Supports custom instructions to fine-tune detection
* Includes test interface for trying detection on any post
Technical Implementation:
* New database tables:
- ai_spam_logs: Stores scan history and results
- ai_moderation_settings: Stores LLM config and custom instructions
* Rate limiting and safeguards (see the sketch after this list):
- Minimum 10-minute delay between rescans
- Only scans significant edits (>10 char difference)
- Maximum 3 scans per post
- 24-hour maximum age for scannable posts
* Admin UI features:
- Real-time testing capabilities
- 7-day statistics dashboard
- Configurable LLM model selection
- Custom instruction support
Security and Performance:
* Respects trust levels - only scans TL0/TL1 users
* Skips private messages entirely
* Stops scanning users after 3 successful public posts
* Includes comprehensive test coverage
* Maintains audit log of all scan attempts
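For illustration, the safeguards above amount to a small rescan-eligibility check. A minimal sketch, assuming hypothetical names like `shouldRescan` and a `lastScan` record; the plugin's actual implementation is server-side and differs:

```javascript
// Sketch only: constants mirror the safeguards listed above; all names
// are assumptions, not the plugin's actual code.
const MIN_RESCAN_DELAY_MS = 10 * 60 * 1000; // minimum delay between rescans
const MAX_SCANS_PER_POST = 3;
const MAX_POST_AGE_MS = 24 * 60 * 60 * 1000; // posts older than 24h are skipped
const MIN_EDIT_DELTA = 10; // only significant edits trigger a rescan

function shouldRescan(post, lastScan, now = Date.now()) {
  if (now - post.createdAt > MAX_POST_AGE_MS) return false;
  if (lastScan.scanCount >= MAX_SCANS_PER_POST) return false;
  if (now - lastScan.scannedAt < MIN_RESCAN_DELAY_MS) return false;
  return Math.abs(post.raw.length - lastScan.scannedRaw.length) > MIN_EDIT_DELTA;
}
```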
---------
Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
* UX: Improve rough edges of AI usage page
* Ensure all text uses I18n
* Change from <button> usage to <DButton>
* Use <AdminConfigAreaCard> in place of custom card styles
* Format numbers nicely using our number format helper, showing full values on hover via the title attribute
* Ensure 0 is always shown for counters, instead of being blank
* FEATURE: Load usage data after page load
Use ConditionalLoadingSpinner while the usage data loads; this prevents
the page from hanging on a white screen during load (see the sketch
after this list).
* UX: Split users table, and add empty placeholders and page subheader
* DEV: Test fix
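The deferred-load pattern above is a standard use of core's `ConditionalLoadingSpinner`; here is a minimal sketch, where `loadingUsage` is an assumed tracked property set while the request is in flight:

```javascript
import ConditionalLoadingSpinner from "discourse/components/conditional-loading-spinner";

// Sketch only: the spinner shows while @condition is true; the usage
// charts and tables render once loading completes.
<template>
  <ConditionalLoadingSpinner @condition={{this.loadingUsage}}>
    {{! usage charts and tables go here }}
  </ConditionalLoadingSpinner>
</template>
```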
* FEATURE: first-class support for OpenRouter
This new implementation supports picking quantization and provider preference
Also:
- Improve logging for summary generation
- Improve error message when contacting LLMs fails
* Better support for full screen artifacts on iPad
Supports using the back button to close full screen
In the tag suggester menu we use `DButton` as a wrapper element and use the `discourseTag` helper to render the text inside the element, so visually there is text content inside the button. However, since `DButton` applies its `.no-text` CSS class whenever no `label`/`translatedLabel` is passed, incorrect styling was being applied to this menu. This PR resolves that by programmatically passing the tag as a `translatedLabel` and then visually hiding it with CSS.
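A minimal sketch of that approach (everything beyond `DButton` and `translatedLabel` is an assumed name):

```javascript
import DButton from "discourse/components/d-button";

// Sketch only: passing the tag text as @translatedLabel stops DButton's
// no-text heuristic from applying `.no-text`; a CSS rule then hides the
// label visually while the discourseTag markup remains the visible content.
<template>
  <DButton @translatedLabel={{@tag.name}} class="tag-suggester-button" />
</template>
```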
- Added a new admin interface to track AI usage metrics, including tokens, features, and models.
- Introduced a new route `/admin/plugins/discourse-ai/ai-usage` and a supporting API endpoint in `AiUsageController` (see the sketch after this list).
- Implemented `AiUsageSerializer` for structuring AI usage data.
- Integrated CSS stylings for charts and tables under `stylesheets/modules/llms/common/usage.scss`.
- Enhanced backend with `AiApiAuditLog` model changes: added `cached_tokens` column (implemented with OpenAI for now) with relevant DB migration and indexing.
- Created `Report` module for efficient aggregation and filtering of AI usage metrics.
- Updated AI Bot title generation logic so usage is logged to the correct actor (user vs. bot).
- Extended test coverage for the new tracking features, ensuring data consistency and access controls.
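For reference, the admin UI can hit the new endpoint with a plain request; this sketch assumes a JSON response and a hypothetical `period` filter param:

```javascript
import { ajax } from "discourse/lib/ajax";

// Sketch only: the route comes from the notes above; the `.json` suffix
// and `period` query param are assumptions for illustration.
async function loadAiUsage(period = "7d") {
  return ajax("/admin/plugins/discourse-ai/ai-usage.json", {
    data: { period },
  });
}
```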
This commit applies further admin UI guidelines, now that they have been more
fleshed out in core, to the AI admin UI:
* Tools
* LLMs
* Personas
The changes include but are not limited to:
* Applying the table CSS classes, for desktop and mobile
* Adding a description and learn more link for each tab
* Adding an empty list placeholder with CTA using `AdminConfigAreaEmptyList`
* Replacing custom headings with `AdminPageSubheader`
This is a significant PR that introduces AI Artifacts functionality to the discourse-ai plugin along with several other improvements. Here are the key changes:
1. AI Artifacts System:
- Adds a new `AiArtifact` model and database migration
- Allows creation of web artifacts with HTML, CSS, and JavaScript content
- Introduces security settings (`strict`, `lax`, `disabled`) for controlling artifact execution
- Implements artifact rendering in iframes with sandbox protection (see the sketch after this list)
- New `CreateArtifact` tool for AI to generate interactive content
2. Tool System Improvements:
- Adds support for partial tool calls, allowing incremental updates during generation
- Better handling of tool call states and progress tracking
- Improved XML tool processing with CDATA support
- Fixes for tool parameter handling and duplicate invocations
3. LLM Provider Updates:
- Updates for Anthropic Claude models with correct token limits
- Adds support for native/XML tool modes in Gemini integration
- Adds new model configurations including Llama 3.1 models
- Improvements to streaming response handling
4. UI Enhancements:
- New artifact viewer component with expand/collapse functionality
- Security controls for artifact execution (click-to-run in strict mode)
- Improved dialog and response handling
- Better error management for tool execution
5. Security Improvements:
- Sandbox controls for artifact execution
- Public/private artifact sharing controls
- Security settings to control artifact behavior
- CSP and frame-options handling for artifacts
6. Technical Improvements:
- Better post streaming implementation
- Improved error handling in completions
- Better memory management for partial tool calls
- Enhanced testing coverage
7. Configuration:
- New site settings for artifact security
- Extended LLM model configurations
- Additional tool configuration options
This PR significantly enhances the plugin's capabilities for generating and displaying interactive content while maintaining security and providing flexible configuration options for administrators.
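To make the sandboxing point concrete, here is a minimal sketch of rendering untrusted artifact content in a sandboxed iframe; the attribute values and function name are assumptions, not the plugin's exact configuration:

```javascript
// Sketch only: isolate artifact HTML/CSS/JS from the parent page.
function renderArtifact(container, artifactUrl) {
  const iframe = document.createElement("iframe");
  // allow-scripts without allow-same-origin keeps the artifact in an
  // opaque origin, so its JS cannot read the forum's cookies or DOM.
  iframe.setAttribute("sandbox", "allow-scripts allow-forms");
  iframe.src = artifactUrl;
  container.appendChild(iframe);
  return iframe;
}
```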
In preparation for applying the streaming animation elsewhere, we want to improve the organization of the folder structure and methods used in the `ai-streamer`.
This changeset:
1. Corrects some issues with "force_default_llm" not applying
2. Expands the LLM list page to show LLM usage
3. Clarifies what "enabling a bot" on an LLM means (the bot appears in the selector)
Previously, when we added smooth streaming animation to summarization (https://github.com/discourse/discourse-ai/pull/778) we used the same logic and lib we did for AI Bot. However, since `AiSummaryBox` is an Ember component, the direct DOM manipulation done in the streamer (`SummaryUpdater`) would often cause summarization updates to hang, especially on the last result. This is likely because the streamer's direct DOM manipulation is incongruent with Ember's way of rendering.
In this PR, we remove the direct DOM manipulation done in `SummaryUpdater` in favour of updating the properties in `AiSummaryBox` directly via the `componentContext`. Instead of mutating Ember's rendered DOM, passing the updates to the component and letting it render them should prevent further issues with summarization (a sketch of the pattern follows below).
The bug itself is quite difficult to repro and also difficult to test, so no tests have been added to this PR. But I will be manually testing and assessing for any potential issues.
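The underlying pattern is the standard Ember one: drive the template from tracked state and let the framework re-render. A minimal sketch with assumed property and method names:

```javascript
import Component from "@glimmer/component";
import { tracked } from "@glimmer/tracking";

// Sketch only: the real AiSummaryBox differs; the point is the streamer
// writes to tracked state instead of patching rendered DOM nodes.
export default class AiSummaryBox extends Component {
  @tracked summaryText = "";

  onStreamUpdate(partialText) {
    this.summaryText = partialText; // Ember re-renders from this property
  }
}
```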
* Display gists in the hot topics list
* Adjust hot topics gist strategy and add a job to generate gists
* Replace setting with a configurable batch size
* Avoid loading summaries for other topic lists
* Tweak gist prompt to focus on latest posts in the context of the OP
* Remove serializer hack and rely on core change from discourse/discourse#29291
* Update lib/summarization/strategies/hot_topic_gists.rb
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
---------
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
Splits persona permissions so you can allow a persona on:
- chat dms
- personal messages
- topic mentions
- chat channels
(any combination is allowed)
Previously we did not have this flexibility.
Additionally, adds the ability to "tether" a language model to a persona so it will always be used by the persona. This allows people to use a cheaper language model for one group of people and a more expensive one for others.
This gives custom tools access to uploads and sophisticated searches using embeddings.
It introduces:
- A shared front end for listing and uploading files (shared with personas)
- Backend implementation of the `index.search` function within a custom tool
Custom tools may now search through uploaded files:

```javascript
function invoke(params) {
  // Search the tool's uploaded files using embeddings.
  return index.search(params.query);
}
```
This means that RAG implementers may now preload tools with knowledge and have high-fidelity control over the search.
The search function supports:
- specifying max results
- specifying a subset of files to search (from uploads)
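For example, a capped search over a subset of uploads might look like this (the option names are assumptions, not a documented API):

```javascript
function invoke(params) {
  // Hypothetical options: cap result count and search only some uploads.
  return index.search(params.query, {
    limit: 10,
    filenames: ["handbook.pdf"],
  });
}
```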
Also:
- Improved documentation for tools (when creating a tool, a preamble explains all the functionality)
- Uploads were a bit finicky; fixed an edge case where the UI would not show them as updated
Restructures the LLM config page so it is far clearer.
Also corrects bugs around adding LLMs and around LLMs not being editable after addition
---------
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
Previously we had some hardcoded markup with SCSS making a loading indicator wave. This code was duplicated across semantic search and summarization. We want to add the indicator wave to the AI helper diff modal as well, with the text flashing instead of the loading spinner. To avoid repeating ourselves, this PR turns the summary indicator wave into a reusable template-only component called `AiIndicatorWave`, and applies it to semantic search, summarization, and the composer helper modal (see the sketch below).
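A template-only component along these lines is just exported markup; this sketch assumes the class names and dot markup:

```javascript
// Sketch only: a template-only component in the spirit of AiIndicatorWave.
const AiIndicatorWave = <template>
  <span class="ai-indicator-wave">
    <span class="ai-indicator-wave__dot">.</span>
    <span class="ai-indicator-wave__dot">.</span>
    <span class="ai-indicator-wave__dot">.</span>
  </span>
</template>;

export default AiIndicatorWave;
```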
This commit fixes an issue where the composer AI helper was not visible on iPad in DiscourseHub. This was due to the z-index of `reply-control` being different when Discourse Hub inserts its `footer-nav`.
Previously we had moved the AI helper from the options menu to a selection menu that appears when selecting text in the composer. This had the benefit of making the AI helper a more discoverable feature. Now that some time has passed and the AI helper is more recognized, we will be moving it back to the composer toolbar.
This is better because:
- It is consistent with other behavior and ways of accessing tools in the composer
- It has an improved mobile experience
- It reduces unnecessary code and keeps things easier to migrate when we have composer V2.
- It allows easily triggering the AI helper on all content by clicking the button, instead of having to select everything.
Previously, proofreading text took too much work; the new implementation
provides a single shortcut and an easy way of proofreading text.
Co-authored-by: Martin Brennan <martin@discourse.org>
* FEATURE: LLM Triage support for systemless models.
This change adds support for OSS models that lack support for system messages. LlmTriage's system message field is no longer mandatory; we now send the post contents in a separate user message.
* Models using Ollama can also disable system prompts
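A language-agnostic sketch of the behavior described (names are illustrative; the plugin's implementation is server-side Ruby):

```javascript
// Sketch only: when a model cannot take a system message, fold the triage
// instructions into a user turn instead.
function buildTriageMessages({ systemPrompt, postContents, supportsSystem }) {
  const messages = [];
  if (systemPrompt) {
    messages.push({
      role: supportsSystem ? "system" : "user",
      content: systemPrompt,
    });
  }
  // The post contents always go in a separate user message.
  messages.push({ role: "user", content: postContents });
  return messages;
}
```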