281 Commits

Author SHA1 Message Date
Sam
a7d032fa28
DEV: artifact system update (#1096)
### Why

This pull request fundamentally restructures how AI bots create and update web artifacts to address critical limitations in the previous approach:

1.  **Improved Artifact Context for LLMs**: Previously, artifact creation and update tools included the *entire* artifact source code directly in the tool arguments. This overloaded the Language Model (LLM) with raw code, making it difficult for the LLM to maintain a clear understanding of the artifact's current state when applying changes. The LLM would struggle to differentiate between the base artifact and the requested modifications, leading to confusion and less effective updates.
2.  **Reduced Token Usage and History Bloat**: Including the full artifact source code in every tool interaction was extremely token-inefficient.  As conversations progressed, this redundant code in the history consumed a significant number of tokens unnecessarily. This not only increased costs but also diluted the context for the LLM with less relevant historical information.
3.  **Enabling Updates for Large Artifacts**: The lack of a practical diff or targeted update mechanism made it nearly impossible to efficiently update larger web artifacts.  Sending the entire source code for every minor change was both computationally expensive and prone to errors, effectively blocking the use of AI bots for meaningful modifications of complex artifacts.

**This pull request addresses these core issues by**:

*   Introducing methods for the AI bot to explicitly *read* and understand the current state of an artifact.
*   Implementing efficient update strategies that send *targeted* changes rather than the entire artifact source code.
*   Providing options to control the level of artifact context included in LLM prompts, optimizing token usage.

### What

The main changes implemented in this PR to resolve the above issues are:

1.  **`Read Artifact` Tool for Contextual Awareness**:
    - A new `read_artifact` tool is introduced, enabling AI bots to fetch and process the current content of a web artifact from a given URL (local or external).
    - This provides the LLM with a clear and up-to-date representation of the artifact's HTML, CSS, and JavaScript, improving its understanding of the base to be modified.
    - By cloning local artifacts, it allows the bot to work with a fresh copy, further enhancing context and control.

2.  **Refactored `Update Artifact` Tool with Efficient Strategies**:
    - The `update_artifact` tool is redesigned to employ more efficient update strategies, minimizing token usage and improving update precision:
        - **`diff` strategy**: Utilizes a search-and-replace diff algorithm to apply only the necessary, targeted changes to the artifact's code (see the sketch at the end of this description). This significantly reduces the amount of code sent to the LLM and focuses its attention on the specific modifications.
        - **`full` strategy**:  Provides the option to replace the entire content sections (HTML, CSS, JavaScript) when a complete rewrite is required.
    - Tool options enhance the control over the update process:
        - `editor_llm`:  Allows selection of a specific LLM for artifact updates, potentially optimizing for code editing tasks.
        - `update_algorithm`: Enables choosing between `diff` and `full` update strategies based on the nature of the required changes.
        - `do_not_echo_artifact`: Defaults to true; by *not* echoing the artifact in prompts, it further reduces token consumption in scenarios where the LLM might not need the full artifact context for every update step (though effectiveness may be slightly reduced in certain update scenarios).

3.  **System and General Persona Tool Option Visibility and Customization**:
    - Tool options, including those for system personas, are made visible and editable in the admin UI. This allows administrators to fine-tune the behavior of all personas and their tools, including setting specific LLMs or update algorithms. This was previously limited or hidden for system personas.

4.  **Centralized and Improved Content Security Policy (CSP) Management**:
    - The CSP for AI artifacts is consolidated and made more maintainable through the `ALLOWED_CDN_SOURCES` constant. This improves code organization and future updates to the allowed CDN list, while maintaining the existing security posture.

5.  **Codebase Improvements**:
    - Refactoring of diff utilities, introduction of strategy classes, enhanced error handling, new locales, and comprehensive testing all contribute to a more robust, efficient, and maintainable artifact management system.

By addressing the issues of LLM context confusion, token inefficiency, and the limitations of updating large artifacts, this pull request significantly improves the practicality and effectiveness of AI bots in managing web artifacts within Discourse.
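As a rough illustration of the `diff` strategy above, the update flow can apply LLM-emitted search/replace blocks to the existing artifact source. The block markers and method name below are assumptions for the sketch, not the plugin's actual implementation:

```ruby
# Sketch only: applies search/replace style diff blocks to a source string.
# The marker format and names are assumptions, not the plugin's real code.
def apply_search_replace(source, diff)
  diff.scan(/<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE/m) do |search, replace|
    raise "search block not found in artifact" unless source.include?(search)
    source = source.sub(search, replace)
  end
  source
end

# e.g. updated_js = apply_search_replace(artifact.js, llm_diff_response)
```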
2025-02-04 16:27:27 +11:00
Roman Rizzi
e2e753d73c
FEATURE: Formalize support for matryoshka dimensions. (#1083)
We have a flag to signal that we are shortening a model's embeddings.
It's currently only used with OpenAI's text-embedding-3-* models, but we plan to use it for other services.
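For context, OpenAI's embeddings endpoint accepts an optional `dimensions` parameter on the text-embedding-3-* models, which is what the shortening flag relies on. A minimal illustrative request body (not the plugin's client code):

```ruby
require "json"

# Ask for a shortened (matryoshka-style) 256-dimension vector instead of the
# model's full-size embedding. Illustrative only.
body = {
  model: "text-embedding-3-large",
  input: "What is Discourse?",
  dimensions: 256
}.to_json
```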
2025-01-22 11:26:46 -03:00
Roman Rizzi
a5e5ae72a8
FIX: Open AI embedding shortening is only available for some models (#1080) 2025-01-21 17:50:40 -03:00
Roman Rizzi
3b66fb3e87
FIX: Restore the accidentally deleted query prefix. (#1079)
Additionally, we add a prefix for embedding generation.
Both are stored in the definitions table.
2025-01-21 14:10:31 -03:00
Roman Rizzi
f5cf1019fb
FEATURE: configurable embeddings (#1049)
* Use AR model for embeddings features

* endpoints

* Embeddings CRUD UI

* Add presets. Hide a couple more settings

* system specs

* Seed embedding definition from old settings

* Generate search bit index on the fly. cleanup orphaned data

* support for seeded models

* Fix run test for new embedding

* fix selected model not set correctly
2025-01-21 12:23:19 -03:00
Sam
2c609e165b
FEATURE: Add user location info to spam scanner context (#1076)
This adds registration IP, last known IP, and email address to the scanning context.

This gives the spam scanner another hint about possibly malicious users.

For example, a user registered in India but replying from Australia, or an
email address that is clearly a throwaway.
2025-01-21 17:51:21 +11:00
Sam
81b952d56e
FIX: only hide posts detected explicitly as spam (#1070)
When enabling the spam scanner there may be old, unscanned posts.
This creates a risky situation where the scanner could act
on legitimate posts if it produces false positives.

To keep this a lot safer, we no longer try to hide older posts by
users flagged as spammers.
2025-01-15 16:50:41 +11:00
Natalie Tay
c881f8b361
DEV: Add rake task to send topics or posts to spam scanner (#1059) 2025-01-15 11:48:57 +08:00
Sam
11d0f60f1e
FEATURE: smart date support for AI helper (#1044)
* FEATURE: smart date support for AI helper

This feature converts human-typed dates and times
into smart, timezone-friendly Discourse dates.

* fix specs and lint

* lint

* address feedback

* add specs
2024-12-31 08:04:25 +11:00
Sam
f9f89adac5
FIX: keep track of silence reason when spam detection flags user (#1046)
Previously, the reason was blank when silencing a user.
2024-12-27 17:47:16 +11:00
Roman Rizzi
534b0df391
REFACTOR: Separation of concerns for embedding generation. (#1027)
In a previous refactor, we moved the responsibility of querying and storing embeddings into the `Schema` class. Now, it's time for embedding generation.

The motivation behind these changes is to isolate vector characteristics in simple objects to later replace them with a DB-backed version, similar to what we did with LLM configs.
2024-12-16 09:55:39 -03:00
Roman Rizzi
94b85ece80
FIX: Make sure gists are at least five minutes old before updating them (#1029)
* FIX: Make sure gists are at least five minutes old before updating them

* Update app/jobs/regular/fast_track_topic_gist.rb

Co-authored-by: Keegan George <kgeorge13@gmail.com>

---------

Co-authored-by: Keegan George <kgeorge13@gmail.com>
2024-12-13 19:36:34 -03:00
Roman Rizzi
eae527f99d
REFACTOR: A Simpler way of interacting with embeddings tables. (#1023)
* REFACTOR: A Simpler way of interacting with embeddings' tables.

This change adds a new abstraction called `Schema`, which acts as a repository that supports the same DB features `VectorRepresentation::Base` has, except that it removes the need to have duplicated methods per embeddings table.

It is also a bit more flexible when performing a similarity search because you can pass it a block that gives you access to the builder, allowing you to add multiple joins/where conditions.
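A hypothetical usage sketch of that block-based search; the class path, method names, and builder calls are assumptions for illustration, not the actual API:

```ruby
# Hypothetical sketch: names and signatures are assumed, not the real interface.
schema = DiscourseAi::Embeddings::Schema.for(Topic)

candidates = schema.similarity_search(query_embedding) do |builder|
  builder.join("JOIN topics t ON t.id = topic_id")
  builder.where("t.category_id = ?", category_id)
end
```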
2024-12-13 10:15:21 -03:00
Sam
47f5da7e42
FEATURE: Add AI-powered spam detection for new user posts (#1004)
This introduces a comprehensive spam detection system that uses LLMs
to automatically identify and flag potential spam posts. The system is
designed to be both powerful and configurable while preventing false positives.

Key Features:
* Automatically scans first 3 posts from new users (TL0/TL1)
* Creates dedicated AI flagging user to distinguish from system flags
* Tracks false positives/negatives for quality monitoring
* Supports custom instructions to fine-tune detection
* Includes test interface for trying detection on any post

Technical Implementation:
* New database tables:
  - ai_spam_logs: Stores scan history and results
  - ai_moderation_settings: Stores LLM config and custom instructions
* Rate limiting and safeguards:
  - Minimum 10-minute delay between rescans
  - Only scans significant edits (>10 char difference)
  - Maximum 3 scans per post
  - 24-hour maximum age for scannable posts
* Admin UI features:
  - Real-time testing capabilities
  - 7-day statistics dashboard
  - Configurable LLM model selection
  - Custom instruction support

Security and Performance:
* Respects trust levels - only scans TL0/TL1 users
* Skips private messages entirely
* Stops scanning users after 3 successful public posts
* Includes comprehensive test coverage
* Maintains audit log of all scan attempts
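The rate limits and trust-level checks listed above roughly translate into an eligibility check like the following sketch (model and method names are assumptions, not the actual implementation):

```ruby
# Illustrative sketch of the safeguards above; names are assumptions.
MAX_POST_AGE = 24.hours
MAX_SCANS_PER_POST = 3
RESCAN_DELAY = 10.minutes

def eligible_for_scan?(post)
  return false if post.topic.private_message?
  return false if post.user.trust_level > TrustLevel[1]
  return false if post.created_at < MAX_POST_AGE.ago

  scans = AiSpamLog.where(post_id: post.id)
  return false if scans.count >= MAX_SCANS_PER_POST

  last_scan_at = scans.maximum(:created_at)
  last_scan_at.nil? || last_scan_at < RESCAN_DELAY.ago
end
```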


---------

Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
2024-12-12 09:17:25 +11:00
Keegan George
a4440c507b
UX: Make sentiment trends more readable (#1018)
Instead of a stacked chart showing a separate series for positive and negative, this PR introduces a simplification to the overall sentiment dashboard. It condenses the sentiment into a single series of the difference `positive - negative` instead. This should make the data easier to scan for trends.
2024-12-11 09:13:18 -08:00
Roman Rizzi
5fc7a730ef
FIX: Triage rule should append selected tags instead of replacing them (#1022) 2024-12-11 11:19:44 -03:00
Roman Rizzi
6da35d8e66
FIX: Gemini inference client was missing #instance (#1019) 2024-12-10 15:42:31 -03:00
Keegan George
700e9de073
Revert "UX: Make sentiment trends more readable in time series data (#1013)" (#1016)
This reverts commit 375dd702b29a29773184d8fda57e22d246504d24.
2024-12-10 08:15:27 -08:00
Keegan George
375dd702b2
UX: Make sentiment trends more readable in time series data (#1013)
Instead of a stacked chart showing a separate series for positive and negative, this PR introduces a simplification to the overall sentiment dashboard. It condenses the sentiment into a single series of the difference `positive - negative` instead. This should make the data easier to scan for trends.
2024-12-10 07:22:41 -08:00
Roman Rizzi
085dde7042
FEATURE: Select stop sequences from triage script (#1010) 2024-12-06 11:13:47 -03:00
Roman Rizzi
7ebbcd2de3
FIX: Make sure prompt uploads get included in the prompt when triaging (#1008) 2024-12-05 21:04:35 -03:00
Roman Rizzi
b32b1cf241
FIX: Add a digest check to avoid repeteadly generating embeddings (bulk) (#1001) 2024-12-04 17:47:28 -03:00
Roman Rizzi
ce6a2eca21
FEATURE: Backfill posts sentiment. (#982)
* FEATURE: Backfill posts sentiment.

It adds a scheduled job to backfill posts' sentiment, similar to our existing rake task, but with two settings to control the batch size and posts' max-age.

* Make sure model_name order is consistent.
2024-12-03 10:27:03 -03:00
Sam
117c06220e
FEATURE: allow artifacts to be updated (#980)
Add support for versioned artifacts with improved diff handling

* Add versioned artifacts support allowing artifacts to be updated and tracked
  - New `ai_artifact_versions` table to store version history
  - Support for updating artifacts through a new `UpdateArtifact` tool
  - Add version-aware artifact rendering in posts
  - Include change descriptions for version tracking

* Enhance artifact rendering and security
  - Add support for module-type scripts and external JS dependencies
  - Expand CSP to allow trusted CDN sources (unpkg, cdnjs, jsdelivr, googleapis)
  - Improve JavaScript handling in artifacts

* Implement robust diff handling system (this is dormant but ready to use once LLMs catch up)
  - Add new DiffUtils module for applying changes to artifacts
  - Support for unified diff format with multiple hunks
  - Intelligent handling of whitespace and line endings
  - Comprehensive error handling for diff operations

* Update routes and UI components
  - Add versioned artifact routes
  - Update markdown processing for versioned artifacts

Also

- Tweaks summary prompt
- Improves upload support in custom tool to also provide urls
2024-12-03 07:23:31 +11:00
Rafael dos Santos Silva
0ac18d157b
FEATURE: Adjustments to gist summaries (#988)
- makes gists visible to everyone by default
- backfills gists before full summaries
- adds configurable max age setting to backfill job
2024-12-02 15:22:35 -03:00
Rafael dos Santos Silva
3828370679
DEV: Cleanup deprecations (#952) 2024-12-02 14:18:03 -03:00
Roman Rizzi
0abd4b1244
FIX: Sentiment classification results needs to be transformed before saving (#983) 2024-11-29 17:31:56 -03:00
Sam
0cb2c413ba
FEATURE: exclude muted categories from category suggester (#979)
The logic here is that users do not particularly care about
topics in muted categories, so we can exclude them from tag
and category suggestions.
2024-11-29 12:17:28 +11:00
Sam
bc0657f478
FEATURE: AI Usage page (#964)
- Added a new admin interface to track AI usage metrics, including tokens, features, and models.
- Introduced a new route `/admin/plugins/discourse-ai/ai-usage` and supporting API endpoint in `AiUsageController`.
- Implemented `AiUsageSerializer` for structuring AI usage data.
- Integrated CSS stylings for charts and tables under `stylesheets/modules/llms/common/usage.scss`.
- Enhanced backend with `AiApiAuditLog` model changes: added `cached_tokens` column  (implemented with OpenAI for now) with relevant DB migration and indexing.
- Created `Report` module for efficient aggregation and filtering of AI usage metrics.
- Updated AI Bot title generation logic to log correctly to user vs bot
- Extended test coverage for the new tracking features, ensuring data consistency and access controls.
2024-11-29 06:26:48 +11:00
Roman Rizzi
c980c34d77
REFACTOR: Simplify sentiment classification (#977)
This change adds a simpler class for sentiment classification, replacing the soon-to-be removed `Classificator` hierarchy. Additionally, it adds a method for classifying concurrently, speeding up the backfill rake task.
2024-11-28 15:38:23 -03:00
Keegan George
f1c7ee8624
DEV: Better control what prompts can appear in post/composer (#969)
This PR updates the logic for the location map so it permits only the desired prompts through to the composer/post menu. Anything else won't be shown by default.

This PR also adds relevant tests to prevent regression.
2024-11-27 16:14:21 -08:00
Roman Rizzi
ef07fcb308
FIX: Skip records without content to classify (#960) 2024-11-26 15:54:20 -03:00
Roman Rizzi
ddf2bf7034
DEV: Backfill embeddings concurrently. (#941)
We are adding a new method for generating and storing embeddings in bulk, which relies on `Concurrent::Promises::Future`. Generating an embedding consists of three steps:

1. Prepare the text
2. Make an HTTP call to retrieve the vector
3. Save it to the DB

Each step is independently executed on whatever thread the pool gives us.

We use a custom thread pool instead of the global executor because we want control over how many threads we spawn to limit concurrency. This also avoids firing thousands of HTTP requests at once when working with large batches.
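A minimal sketch of this pattern using concurrent-ruby's promises on a bounded pool; the helpers (`prepare_text`, `fetch_vector`, `store_embedding`) are placeholders, not the plugin's actual methods:

```ruby
require "concurrent"

pool = Concurrent::FixedThreadPool.new(8) # cap concurrency and in-flight HTTP calls

futures = posts.map do |post|
  Concurrent::Promises.future_on(pool) do
    text = prepare_text(post)        # 1. prepare text
    vector = fetch_vector(text)      # 2. HTTP call to retrieve the vector
    store_embedding(post, vector)    # 3. save to DB
  end
end

Concurrent::Promises.zip(*futures).value! # wait for the whole batch, surface errors
pool.shutdown
```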
2024-11-26 14:12:32 -03:00
Rafael dos Santos Silva
23193ee6f2
FEATURE: Calculate gists from non hot topics too (#958)
Also renames some settings to remove 'hot' references.
2024-11-26 13:44:12 -03:00
Sam
54f2d34ccb
DEV: skip flakey spec (#955)
This spec fails inconsistently with:

-fragment-n14
       +You are a helpful Discourse assistant.
       +You _understand_ and **generate** Discourse Markdown.
       +You live in a Discourse Forum Message.
       +
       +You live in the forum with the URL: http://test.localhost
       +The title of your site: test site title
       +The description is: test site description
       +The participants in this conversation are: joe, jane
       +The date now is: 2024-11-25 20:23:02 UTC, much has changed since you were trained.
       +
       +You were trained on OLD data, lean on search to get up to date information about this forum
       +When searching try to SIMPLIFY search terms
       +Discourse search joins all terms with AND. Reduce and simplify terms to find more results.<guidance>
       +The following texts will give you additional guidance for your response.
       +We included them because we believe they are relevant to this conversation topic.
       +
       +Texts:
       +
       +fragment-n10
       +fragment-n9
       +fragment-n8
       +fragment-n7
       +fragment-n6
       +fragment-n5
       +fragment-n4
       +fragment-n3
       +fragment-n2
       +fragment-n1
       +</guidance>
2024-11-26 07:49:33 +11:00
Sam
616b990894
FEATURE: LLM mentions and auto silence (#949)
* FEATURE: allow mentioning an LLM mid conversation to switch

This is an edge-case feature that allows you to start a conversation
in a PM with LLM1 and then use LLM2 to evaluate or continue
the conversation.

* FEATURE: allow auto silencing of spam accounts

A new rule can also allow for silencing an account automatically.

This can prevent spammers from creating additional posts.
2024-11-26 07:19:56 +11:00
Natalie Tay
f8231d259b
FEATURE: Add locale detection prompt from translator (#946) 2024-11-25 08:33:54 +11:00
Roman Rizzi
e54f2da1a5
FIX: Unnecessary complex preloading accidentally filters some topics. (#945)
The `topic_query_create_list_topics` modifier we append was always meant to avoid an N+1 situation when serializing gists. However, I tried to be too smart and only preload these, which resulted in some topics with *only* regular summaries getting removed from the list. This issue became apparent now that we are adding gists to other lists besides hot.

Let's simplify the preloading, which still solves the N+1 issue, and let the serializer get the needed summary.
2024-11-22 12:07:27 -03:00
Joffrey JAFFEUX
2cc8115b48
FIX: disables temporarily ai_summaries filtering (#943) 2024-11-22 08:34:54 +01:00
Sam
d56ed53eb1
FIX: cancel functionality regressed (#938)
The cancel message was not flowing through correctly to the HTTP call, making it impossible to cancel completions.

This is now fully tested as well.
2024-11-21 17:51:45 +11:00
Sam
52c644798d
DEV: improve artifact presentation (#932)
1. Keep the source in a "details" block after rendering so it does
not overwhelm users

2. Ensure artifacts are never indexed by robots

3. Cache break our CSS that changed recently
2024-11-20 18:53:19 +11:00
Roman Rizzi
530a795d43
FIX: Instruct AR that we want to use ai_summaries for filtering. (#927)
We use `includes` instead of `joins` because we want to eager-load summaries, avoiding an extra query when summarizing. However, Rails will complain unless you explicitly inform it that you plan to use the association inside a `WHERE` clause.
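A simplified illustration of that Rails behavior (the association and condition are assumed for the example):

```ruby
# `includes` alone raises when a string WHERE clause references the eager-loaded
# table; adding `references` tells ActiveRecord to build the JOIN it needs.
Topic
  .includes(:ai_summaries)
  .references(:ai_summaries)
  .where("ai_summaries.created_at > ?", 7.days.ago)
```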
2024-11-19 17:32:13 -03:00
Roman Rizzi
fb80d776d8
FEATURE: Enable gists on all topic lists (#922) 2024-11-19 11:04:34 -03:00
Rafael dos Santos Silva
48d08dedd4
FEATURE: Emotion activity metrics table (#916) 2024-11-19 10:01:10 -03:00
Sam
0d7f353284
FEATURE: AI artifacts (#898)
This is a significant PR that introduces AI Artifacts functionality to the discourse-ai plugin along with several other improvements. Here are the key changes:

1. AI Artifacts System:
   - Adds a new `AiArtifact` model and database migration
   - Allows creation of web artifacts with HTML, CSS, and JavaScript content
   - Introduces security settings (`strict`, `lax`, `disabled`) for controlling artifact execution
   - Implements artifact rendering in iframes with sandbox protection
   - New `CreateArtifact` tool for AI to generate interactive content

2. Tool System Improvements:
   - Adds support for partial tool calls, allowing incremental updates during generation
   - Better handling of tool call states and progress tracking
   - Improved XML tool processing with CDATA support
   - Fixes for tool parameter handling and duplicate invocations

3. LLM Provider Updates:
   - Updates for Anthropic Claude models with correct token limits
   - Adds support for native/XML tool modes in Gemini integration
   - Adds new model configurations including Llama 3.1 models
   - Improvements to streaming response handling

4. UI Enhancements:
   - New artifact viewer component with expand/collapse functionality
   - Security controls for artifact execution (click-to-run in strict mode)
   - Improved dialog and response handling
   - Better error management for tool execution

5. Security Improvements:
   - Sandbox controls for artifact execution
   - Public/private artifact sharing controls
   - Security settings to control artifact behavior
   - CSP and frame-options handling for artifacts

6. Technical Improvements:
   - Better post streaming implementation
   - Improved error handling in completions
   - Better memory management for partial tool calls
   - Enhanced testing coverage

7. Configuration:
   - New site settings for artifact security
   - Extended LLM model configurations
   - Additional tool configuration options

This PR significantly enhances the plugin's capabilities for generating and displaying interactive content while maintaining security and providing flexible configuration options for administrators.
2024-11-19 09:22:39 +11:00
Rafael dos Santos Silva
4fb686a548
FIX: Move emotion /filter logic into a CTE to keep cardinality sane (#915) 2024-11-14 17:16:48 -03:00
Rafael dos Santos Silva
5026ab52d0
FEATURE: Order by emotion on /filter (#913) 2024-11-14 12:45:40 -03:00
Sam
823e8ef490
FEATURE: partial tool call support for OpenAI and Anthropic (#908)
Implement streaming tool calls for Anthropic and OpenAI.

When calling:

llm.generate(..., partial_tool_calls: true) do ...
Partials may contain ToolCall instances with `partial: true`. These tool calls are partially populated, with their JSON arguments parsed incrementally.

So for example when performing a search you may get:

ToolCall(..., {search: "hello" })
ToolCall(..., {search: "hello world" })

The library used to parse json is:

https://github.com/dgraham/json-stream

We use a fork because we need access to the internal buffer.

This prepares internals to perform partial tool calls, but does not implement it yet.
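A hedged sketch of consuming these partials; the accessors on ToolCall (`partial`, `parameters`) and the class path are assumptions based on the description above:

```ruby
# Sketch only: accessor and class names are assumptions, not the confirmed API.
llm.generate(prompt, user: user, partial_tool_calls: true) do |partial|
  next unless partial.is_a?(DiscourseAi::Completions::ToolCall)

  if partial.partial
    # arguments are only partially parsed here, e.g. { search: "hello wo" }
    show_progress(partial.parameters)
  else
    invoke_tool(partial) # fully populated tool call
  end
end
```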
2024-11-14 06:58:24 +11:00
Sam
e817b7dc11
FEATURE: improve tool support (#904)
This re-implements tool support in DiscourseAi::Completions::Llm#generate

Previously, tool support was always returned via XML, and it was the caller's responsibility to parse that XML

New implementation has the endpoints return ToolCall objects.

Additionally this simplifies the Llm endpoint interface and gives it more clarity. Llms must implement

decode, decode_chunk (for streaming)

It is the implementer's responsibility to figure out how to decode chunks; the base class no longer implements this. To make this easy, we ship a flexible JSON decoder that is easy to wire up.

Also (new)

    Better debugging for PMs: we now have next / previous buttons to see all the LLM messages associated with a PM
    Token accounting is fixed for vLLM (we were not correctly counting tokens)
2024-11-12 08:14:30 +11:00
Roman Rizzi
9505a8976c
FEATURE: Automatically backfill regular summaries. (#892)
This change introduces a job to summarize topics and cache the results automatically. We provide a setting to control how many topics we'll backfill per hour and what the topic's minimum word count is to qualify.

We'll prioritize topics without a summary over ones with outdated summaries.
2024-11-04 17:48:11 -03:00