Refactor dialect selection and add Nova API support
Change dialect selection to use llm_model object instead of just provider name
Add support for Amazon Bedrock's Nova API with native tools
Implement Nova-specific message processing and formatting
Update specs for Nova and AWS Bedrock endpoints
Enhance AWS Bedrock support to handle Nova models
Fix Gemini beta API detection logic
Previously, clicking "add footnote" on an explain suggestion replaced the selected word by finding its first occurrence. This caused issues when a word occurs more than once in a post. Solving that properly is not trivial, so this PR instead prevents incorrect text replacements by only allowing the replacement when the selected text is unique. We use the same logic here that we use to determine whether something can be fast edited.
This PR also updates tests for post helper explain suggestions. For a while we have had no coverage here: due to streaming/timing issues we have been skipping our system specs. In this PR we add acceptance tests instead, which give us better control over publishing message bus updates in the test environment so the feature can be tested reliably.
* FEATURE: Backfill posts sentiment.
It adds a scheduled job to backfill posts' sentiment, similar to our existing rake task, but with two settings to control the batch size and posts' max-age (a minimal sketch of such a job is included below).
* Make sure model_name order is consistent.
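For the backfill job itself, the shape is roughly the following. This is a sketch only: the class name, the two settings, and the classifier API are illustrative assumptions, not the actual implementation.

```ruby
# frozen_string_literal: true

# Hypothetical names throughout; this only sketches a scheduled backfill with
# the two controls (batch size and post max-age) described above.
module Jobs
  class BackfillPostSentiment < ::Jobs::Scheduled
    every 10.minutes

    def execute(_args)
      batch_size = SiteSetting.ai_sentiment_backfill_batch_size     # assumed setting
      max_age_days = SiteSetting.ai_sentiment_backfill_max_age_days # assumed setting

      # Newest posts inside the allowed age window; the real job would also skip
      # posts that already have a sentiment classification.
      posts =
        Post
          .where("posts.created_at > ?", max_age_days.days.ago)
          .order(created_at: :desc)
          .limit(batch_size)

      # Assumed classifier API.
      posts.each { |post| DiscourseAi::Sentiment::PostClassification.new.classify!(post) }
    end
  end
end
```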
For a while now we have not been sending the examples to the AI
helper, which can lead to inconsistent results.
Note: this also means that for non-English locales we were not sending
the English examples, so fixing this may end up reducing performance there.
That said, the first thing we need to do is fix the regression.
This PR fixes an issue where the tag suggester in the edit-title topic area was suggesting tags that are already assigned to the post. It also increases the number of suggested tags to 7 so that a decent number of suggestions remain when some tags are already assigned.
Add support for versioned artifacts with improved diff handling
* Add versioned artifacts support allowing artifacts to be updated and tracked
- New `ai_artifact_versions` table to store version history
- Support for updating artifacts through a new `UpdateArtifact` tool
- Add version-aware artifact rendering in posts
- Include change descriptions for version tracking
* Enhance artifact rendering and security
- Add support for module-type scripts and external JS dependencies
- Expand CSP to allow trusted CDN sources (unpkg, cdnjs, jsdelivr, googleapis)
- Improve JavaScript handling in artifacts
* Implement robust diff handling system (this is dormant but ready to use once LLMs catch up; a rough sketch of the approach follows the change list)
- Add new DiffUtils module for applying changes to artifacts
- Support for unified diff format with multiple hunks
- Intelligent handling of whitespace and line endings
- Comprehensive error handling for diff operations
* Update routes and UI components
- Add versioned artifact routes
- Update markdown processing for versioned artifacts
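To illustrate the diff handling mentioned above, here is a rough, hypothetical sketch of applying a unified diff with multiple hunks to an artifact's source. It is not the plugin's DiffUtils implementation, which additionally normalizes whitespace/line endings and reports errors.

```ruby
# Simplified, hypothetical sketch only; not the plugin's DiffUtils module.
module SimpleDiff
  HUNK_HEADER = /\A@@ -(\d+)(?:,\d+)? \+\d+(?:,\d+)? @@/

  def self.apply(source, diff)
    lines = source.split("\n", -1)
    output = []
    cursor = 0

    diff.split("\n").each do |line|
      next if line.start_with?("diff ", "index ", "--- ", "+++ ", "\\") # headers / markers

      if (m = HUNK_HEADER.match(line))
        hunk_start = m[1].to_i - 1                # hunk line numbers are 1-based
        output.concat(lines[cursor...hunk_start]) # copy untouched lines up to the hunk
        cursor = hunk_start
      elsif line.start_with?("-")
        cursor += 1                               # removed line: skip it in the source
      elsif line.start_with?("+")
        output << line[1..]                       # added line: emit the new content
      else
        output << lines[cursor]                   # context line: keep original and advance
        cursor += 1
      end
    end

    output.concat(lines[cursor..] || []) # copy whatever follows the last hunk
    output.join("\n")
  end
end
```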
Also
- Tweaks summary prompt
- Improves upload support in the custom tool to also provide URLs
- Added a new admin interface to track AI usage metrics, including tokens, features, and models.
- Introduced a new route `/admin/plugins/discourse-ai/ai-usage` and supporting API endpoint in `AiUsageController`.
- Implemented `AiUsageSerializer` for structuring AI usage data.
- Integrated CSS styling for charts and tables under `stylesheets/modules/llms/common/usage.scss`.
- Enhanced backend with `AiApiAuditLog` model changes: added `cached_tokens` column (implemented with OpenAI for now) with relevant DB migration and indexing (see the migration sketch after this list).
- Created `Report` module for efficient aggregation and filtering of AI usage metrics.
- Updated AI Bot title generation logic to correctly attribute the logged usage to the user vs the bot
- Extended test coverage for the new tracking features, ensuring data consistency and access controls.
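A rough sketch of the audit-log migration mentioned above; the exact column options and index choices are assumptions, not the real migration.

```ruby
# frozen_string_literal: true

# Hypothetical migration; it only illustrates the cached_tokens column and the
# kind of indexing a usage report benefits from.
class AddCachedTokensToAiApiAuditLogs < ActiveRecord::Migration[7.1]
  def change
    add_column :ai_api_audit_logs, :cached_tokens, :integer
    add_index :ai_api_audit_logs, :created_at
  end
end
```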
This change adds a simpler class for sentiment classification, replacing the soon-to-be removed `Classificator` hierarchy. Additionally, it adds a method for classifying concurrently, speeding up the backfill rake task.
This PR updates the logic for the location map so it permits only the desired prompts through to the composer/post menu. Anything else won't be shown by default.
This PR also adds relevant tests to prevent regression.
This commit applies further admin UI guidelines, now that they have been more
fleshed out in core, to the AI admin UI:
* Tools
* LLMs
* Personas
The changes include but are not limited to:
* Applying the table CSS classes, for desktop and mobile
* Adding a description and learn more link for each tab
* Adding an empty list placeholder with CTA using `AdminConfigAreaEmptyList`
* Replacing custom headings with `AdminPageSubheader`
We are adding a new method for generating and storing embeddings in bulk, which relies on `Concurrent::Promises::Future`. Generating an embedding consists of three steps:
1. Prepare the text.
2. Make an HTTP call to retrieve the vector.
3. Save it to the DB.
Each one is independently executed on whatever thread the pool gives us.
We bring in a custom thread pool instead of using the global executor because we want control over how many threads we spawn, to limit concurrency. This also avoids firing thousands of HTTP requests at once when working with large batches.
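In rough terms the pattern looks like this. It is only a sketch: the setting name and the `prepare_text` / `request_vector` / `save_embedding` helpers are placeholders for the three steps above, not the actual implementation.

```ruby
require "concurrent"

# Sketch only: helper names and the concurrency setting are placeholders.
def generate_embeddings_in_bulk(targets)
  # Our own pool, so we control how many threads run instead of relying on the
  # global executor.
  pool = Concurrent::FixedThreadPool.new(SiteSetting.ai_embeddings_concurrency)

  futures =
    targets.map do |target|
      Concurrent::Promises
        .future_on(pool) { prepare_text(target) }                   # 1. prepare text
        .then_on(pool) { |text| request_vector(text) }              # 2. HTTP call for the vector
        .then_on(pool) { |vector| save_embedding(target, vector) }  # 3. save to DB
    end

  Concurrent::Promises.zip(*futures).value! # wait for the whole batch

  pool.shutdown
  pool.wait_for_termination
end
```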
This spec fails inconsistently with:
-fragment-n14
+You are a helpful Discourse assistant.
+You _understand_ and **generate** Discourse Markdown.
+You live in a Discourse Forum Message.
+
+You live in the forum with the URL: http://test.localhost
+The title of your site: test site title
+The description is: test site description
+The participants in this conversation are: joe, jane
+The date now is: 2024-11-25 20:23:02 UTC, much has changed since you were trained.
+
+You were trained on OLD data, lean on search to get up to date information about this forum
+When searching try to SIMPLIFY search terms
+Discourse search joins all terms with AND. Reduce and simplify terms to find more results.<guidance>
+The following texts will give you additional guidance for your response.
+We included them because we believe they are relevant to this conversation topic.
+
+Texts:
+
+fragment-n10
+fragment-n9
+fragment-n8
+fragment-n7
+fragment-n6
+fragment-n5
+fragment-n4
+fragment-n3
+fragment-n2
+fragment-n1
+</guidance>
* FEATURE: allow mentioning an LLM mid conversation to switch
This is an edge-case feature that allows you to start a conversation
in a PM with LLM1 and then use LLM2 to evaluate or continue
the conversation.
* FEATURE: allow auto silencing of spam accounts
The new rule can also silence an account automatically.
This prevents spammers from creating additional posts.
Two changes worth mentioning:
1. `#instance` returns a fully configured embedding endpoint, ready to use.
2. All endpoints respond to the same method and have the same signature: `perform!(text)`.
This makes it easier to reuse them when generating embeddings in bulk.
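In practice, usage looks roughly like this; the class name is an assumption, the point is the shared interface.

```ruby
# Class name is an assumption; the shared interface is what matters.
endpoint = DiscourseAi::Inference::OpenAiEmbeddings.instance # fully configured from site settings

vector = endpoint.perform!("text to embed") # => array of floats

# Because every endpoint responds to perform!(text), bulk code stays generic:
texts.map { |text| endpoint.perform!(text) }
```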
The `topic_query_create_list_topics` modifier we append was always meant to avoid an N+1 situation when serializing gists. However, I tried to be too smart and preloaded only gists, which resulted in some topics with *only* regular summaries getting removed from the list. This issue became apparent now that we are adding gists to other lists besides hot.
Let's simplify the preloading, which still solves the N+1 issue, and let the serializer get the needed summary.
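Conceptually, the simplified modifier is something like this; the association and block argument names are assumptions.

```ruby
# Sketch: eager-load every summary for the listed topics instead of trying to
# preload just gists; the serializer then picks whichever summary it needs.
register_modifier(:topic_query_create_list_topics) do |topics, _options|
  topics.includes(:ai_summaries) # assumed association name
end
```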
* FIX: automatically bust cache for share ai assets
CDNs can be configured to strip query params in Discourse
hosting. This is generally safe, but in this case we had
no way of busting the cache using the path.
The new design properly caches assets and properly busts the
cache if an asset changes, so we don't need to worry about versions.
* one day I will set up conditional lint on save :)
1. Keep the source in a "details" block after rendering so it does
not overwhelm users
2. Ensure artifacts are never indexed by robots
3. Cache break our CSS that changed recently
We use `includes` instead of `joins` because we want to eager-load summaries, avoiding an extra query when summarizing. However, Rails will complain unless you explicitly inform it that you plan to reference that table inside a `WHERE` clause.
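For reference, this is the kind of query that enables; the association and condition below are assumptions.

```ruby
# `includes` eager-loads the summaries; `references` tells Rails the eager-loaded
# table is also used in the string condition, so it builds the join instead of
# raising.
Topic
  .includes(:ai_summaries)
  .references(:ai_summaries)
  .where("ai_summaries.id IS NULL OR ai_summaries.created_at < ?", 1.day.ago)
```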
It's important that artifacts are never given 'same origin' access to the forum domain, so that they cannot access cookies, or make authenticated HTTP requests. So even when visiting the URL directly, we need to wrap them in a sandboxed iframe.
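A minimal sketch of the idea follows; the controller, filters, and attribute names are assumptions, not the plugin's actual code.

```ruby
# Sketch only. The host page is a plain document whose iframe carries the
# sandbox attribute (and no allow-same-origin), so the artifact's JS never sees
# forum cookies and cannot make authenticated requests.
class AiArtifactsController < ApplicationController
  skip_before_action :preload_json, :check_xhr

  def show
    artifact = AiArtifact.find(params[:id])

    render html: <<~HTML.html_safe, layout: false
      <iframe sandbox="allow-scripts allow-forms"
              width="100%" height="100%" frameborder="0"
              srcdoc="#{ERB::Util.html_escape(artifact.html)}"></iframe>
    HTML
  end
end
```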
This is a significant PR that introduces AI Artifacts functionality to the discourse-ai plugin along with several other improvements. Here are the key changes:
1. AI Artifacts System:
- Adds a new `AiArtifact` model and database migration
- Allows creation of web artifacts with HTML, CSS, and JavaScript content
- Introduces security settings (`strict`, `lax`, `disabled`) for controlling artifact execution
- Implements artifact rendering in iframes with sandbox protection
- New `CreateArtifact` tool for AI to generate interactive content
2. Tool System Improvements:
- Adds support for partial tool calls, allowing incremental updates during generation
- Better handling of tool call states and progress tracking
- Improved XML tool processing with CDATA support
- Fixes for tool parameter handling and duplicate invocations
3. LLM Provider Updates:
- Updates for Anthropic Claude models with correct token limits
- Adds support for native/XML tool modes in Gemini integration
- Adds new model configurations including Llama 3.1 models
- Improvements to streaming response handling
4. UI Enhancements:
- New artifact viewer component with expand/collapse functionality
- Security controls for artifact execution (click-to-run in strict mode)
- Improved dialog and response handling
- Better error management for tool execution
5. Security Improvements:
- Sandbox controls for artifact execution
- Public/private artifact sharing controls
- Security settings to control artifact behavior
- CSP and frame-options handling for artifacts
6. Technical Improvements:
- Better post streaming implementation
- Improved error handling in completions
- Better memory management for partial tool calls
- Enhanced testing coverage
7. Configuration:
- New site settings for artifact security
- Extended LLM model configurations
- Additional tool configuration options
This PR significantly enhances the plugin's capabilities for generating and displaying interactive content while maintaining security and providing flexible configuration options for administrators.
Implement streaming tool calls for Anthropic and OpenAI.
When calling:
llm.generate(..., partial_tool_calls: true) do ...
partials yielded to the block may contain ToolCall instances with `partial: true`. These tool calls are populated incrementally from partially parsed JSON.
So for example when performing a search you may get:
ToolCall(..., {search: "hello" })
ToolCall(..., {search: "hello world" })
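A sketch of what consuming these partials could look like; the `update_search_placeholder`, `run_tool`, and `append_text` helpers are illustrative, not part of the API, and the ToolCall namespace/accessors are assumptions based on the description above.

```ruby
# Sketch: stream partial tool calls so the UI can show search terms as they are
# generated.
llm.generate(prompt, user: Discourse.system_user, partial_tool_calls: true) do |chunk|
  if chunk.is_a?(DiscourseAi::Completions::ToolCall)
    if chunk.partial
      # Parameters grow as more JSON arrives, e.g. {search: "hel"} then {search: "hello"}.
      update_search_placeholder(chunk.parameters[:search])
    else
      run_tool(chunk) # the final, fully parsed tool call
    end
  else
    append_text(chunk) # ordinary streamed text
  end
end
```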
The library used to parse json is:
https://github.com/dgraham/json-stream
We use a fork because we need access to the internal buffer.
This prepares internals to perform partial tool calls, but does not implement it yet.
This re-implements tool support in DiscourseAi::Completions::Llm#generate.
Previously, tool calls were always returned via XML and it was the caller's responsibility to parse that XML.
The new implementation has the endpoints return ToolCall objects.
Additionally, this simplifies the Llm endpoint interface and gives it more clarity. LLMs must implement
`decode` and `decode_chunk` (for streaming).
It is the implementer's responsibility to figure out how to decode chunks; the base class no longer implements this. To make it easy, we ship a flexible JSON decoder which is easy to wire up.
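As a sketch of that contract only: the `Base` superclass, `ExampleProvider`, `extract_completions`, and the decoder class name below are assumptions, not the actual endpoint code.

```ruby
# Sketch of the decode / decode_chunk contract. decode handles a full response
# body; decode_chunk handles one streamed chunk and may yield zero or more
# completed objects.
module DiscourseAi
  module Completions
    module Endpoints
      class ExampleProvider < Base
        def decode(response_body)
          parsed = JSON.parse(response_body, symbolize_names: true)
          extract_completions(parsed) # plain text and/or ToolCall objects (placeholder helper)
        end

        def decode_chunk(chunk)
          @decoder ||= JsonStreamDecoder.new # assumed name for the flexible JSON decoder
          (@decoder << chunk).flat_map { |json| extract_completions(json) }.compact
        end
      end
    end
  end
end
```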
Also (new):
Better debugging for PMs: we now have next / previous buttons to see all the LLM messages associated with a PM.
Token accounting is fixed for vLLM (we were not correctly counting tokens).