* FIX: implement max_output tokens (anthropic/openai/bedrock/gemini/open router)
Previously this feature existed but was not implemented
Also updates a bunch of models in our presets to point to the latest versions
* implementing in base is safer, simpler and easier to manage
* anthropic 3.5 is getting older, let's use 4.0 here and fix the spec
OpenAI shipped a new API for completions called the "Responses API".
Certain models (o3-pro) require this API.
Additionally, certain features are only available via the new API.
This allows enabling it per LLM.
see: https://platform.openai.com/docs/api-reference/responses
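For context, a minimal sketch (based on the linked docs) of how the two request shapes differ; the per-LLM toggle in Discourse AI itself is not shown here:
```
# Chat Completions vs Responses API payloads (illustrative only)
chat_payload = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }]
} # => POST /v1/chat/completions

responses_payload = {
  model: "o3-pro",
  input: [{ role: "user", content: "Hello" }]
} # => POST /v1/responses
```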
* FEATURE: add inferred concepts system
This commit adds a new inferred concepts system that:
- Creates a model for storing concept labels that can be applied to topics
- Provides AI personas for finding new concepts and matching existing ones
- Adds jobs for generating concepts from popular topics
- Includes a scheduled job that automatically processes engaging topics
* FEATURE: Extend inferred concepts to include posts
* Adds support for concepts to be inferred from and applied to posts
* Replaces daily task with one that handles both topics and posts
* Adds database migration for posts_inferred_concepts join table
* Updates PersonaContext to include inferred concepts
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Co-authored-by: Keegan George <kgeorge13@gmail.com>
The structured output JSON comes embedded inside the API response, which is also a JSON. Since we have to parse the response to process it, any control characters inside the structured output are unescaped into regular characters, leading to invalid JSON and breaking during parsing. This change adds a retry mechanism that escapes
the string again if parsing fails, preventing the parser from breaking on malformed input and working around this issue.
For example:
```
original = '{ "a": "{\\"key\\":\\"value with \\n newline\\"}" }'
JSON.parse(original) => { "a" => "{\"key\":\"value with \n newline\"}" }
# At this point, the inner JSON string contains an actual newline.
```
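A minimal sketch of the retry idea, with assumed method names rather than the exact plugin code:
```
require "json"

CONTROL_ESCAPES = { "\n" => "\\n", "\r" => "\\r", "\t" => "\\t" }

def parse_structured_output(raw)
  JSON.parse(raw)
rescue JSON::ParserError
  # Re-escape the control characters that the outer parse turned back into
  # literal bytes, then try once more.
  JSON.parse(raw.gsub(/[\n\r\t]/, CONTROL_ESCAPES))
end
```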
This change fixes two bugs and adds a safeguard.
The first issue is that the schema Gemini expected differed from the one sent, resulting in 400 errors when performing completions.
The second issue was that creating a new persona didn't define a method
for `response_format`; this has to be explicitly defined when we wrap it inside the Persona class. Also, there was a mismatch between the default value and what we stored in the DB: some parts of the code expected symbols as keys and others strings.
Finally, we add a safeguard for when the model, even if asked to, refuses to reply with valid JSON. In this case, we make a best effort to recover and stream the raw response.
* DEV: use a proper object for tool definition
This moves away from using a loose hash to define tools, which
is error prone.
With a proper object we will also be able to coerce
return values to match the tool definition correctly
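A rough sketch of the kind of object this refers to; the class and attribute names are illustrative, not the plugin's exact API:
```
class ToolDefinition
  Param = Struct.new(:name, :type, keyword_init: true)

  attr_reader :name, :description, :parameters

  def initialize(name:, description:, parameters: [])
    @name = name
    @description = description
    @parameters = parameters
  end

  # Coerce raw (often string) values from XML tool calls to the declared types.
  def coerce_arguments(raw)
    parameters.each_with_object({}) do |param, out|
      value = raw[param.name]
      next if value.nil?
      out[param.name] =
        case param.type
        when :integer then value.to_i
        when :float   then value.to_f
        when :boolean then value.to_s == "true"
        else value.to_s
        end
    end
  end
end
```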
* fix xml tools
* fix anthropic tools
* fix specs... a few more to go
* specs are passing
* FIX: coerce values for XML tool calls
* Update spec/lib/completions/tool_definition_spec.rb
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
This commit introduces a new Forum Researcher persona specialized in deep forum content analysis along with comprehensive improvements to our AI infrastructure.
Key additions:
- New Forum Researcher persona with advanced filtering and analysis capabilities
- Robust filtering system supporting tags, categories, dates, users, and keywords
- LLM formatter to efficiently process and chunk research results
Infrastructure improvements:
- Implemented CancelManager class to centrally manage AI completion cancellations (see the sketch after this section)
- Replaced callback-based cancellation with a more robust pattern
- Added systematic cancellation monitoring with callbacks
Other improvements:
- Added configurable default_enabled flag to control which personas are enabled by default
- Updated translation strings for the new researcher functionality
- Added comprehensive specs for the new components
Renames Researcher -> Web Researcher
This change makes our AI platform more stable while adding powerful research capabilities that can analyze forum trends and surface relevant content.
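A rough sketch of the centralized cancellation pattern mentioned above; class and method names are assumptions, not the exact implementation:
```
class CancelManager
  def initialize
    @cancelled = false
    @callbacks = []
  end

  def cancelled?
    @cancelled
  end

  # Register cleanup to run on cancel (e.g. closing a streaming HTTP connection).
  def add_callback(&blk)
    @callbacks << blk
  end

  def cancel!
    @cancelled = true
    @callbacks.each(&:call)
  end
end
```
Endpoints can then check `cancel_manager.cancelled?` between streamed chunks instead of relying on a bare callback.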
* DEV: Use structured responses for summaries
* Fix system specs
* Make response_format a first class citizen and update endpoints to support it
* Response format can be specified in the persona
* lint
* switch to jsonb and make column nullable
* Reify structured output chunks. Move JSON parsing to the depths of Completion
* Switch to JsonStreamingTracker for partial JSON parsing
* DEV: refactor bot internals
This introduces a proper object for bot context, which makes
it simpler to improve context management as we go because we
have a nice object to work with
Starts refactoring allowing for a single message to have
multiple uploads throughout
* transplant method to message builder
* chipping away at inline uploads
* image support is improved but not fully fixed yet
partially working in anthropic, still got quite a few dialects to go
* open ai and claude are now working
* Gemini is now working as well
* fix nova
* more dialects...
* fix ollama
* fix specs
* update artifact fixed
* more tests
* spam scanner
* pass more specs
* bunch of specs improved
* more bug fixes.
* all the rest of the tests are working
* improve tests coverage and ensure custom tools are aware of new context object
* tests are working, but we need more tests
* resolve merge conflict
* new preamble and expanded specs on ai tool
* remove concept of "standalone tools"
This is no longer needed; we can set custom raw, and tool details are injected into tool calls
This PR adds support for disabling further tool calls by setting tool_choice to :none across all supported LLM providers:
- OpenAI: Uses "none" tool_choice parameter
- Anthropic: Uses {type: "none"} and adds a prefill message to prevent confusion
- Gemini: Sets function_calling_config mode to "NONE"
- AWS Bedrock: Doesn't natively support tool disabling, so adds a prefill message
Previously we disabled tool calls by simply removing tool definitions, but this would cause errors with some providers. This implementation uses the supported method appropriate for each provider while providing a fallback for Bedrock.
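As a hedged illustration only (the real dialects wire these params differently), the per-provider mapping looks roughly like:
```
def tool_choice_none_params(provider)
  case provider
  when :open_ai
    { tool_choice: "none" }
  when :anthropic
    # also paired with a prefill assistant message to avoid stray tool calls
    { tool_choice: { type: "none" } }
  when :gemini
    { tool_config: { function_calling_config: { mode: "NONE" } } }
  when :bedrock
    {} # no native support; relies on the prefill message alone
  end
end
```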
Co-authored-by: Natalie Tay <natalie.tay@gmail.com>
* remove stray puts
* cleaner chain breaker for last tool call (works in thinking)
remove unused code
* improve test
---------
Co-authored-by: Natalie Tay <natalie.tay@gmail.com>
Thinking models such as Claude 3.7 Thinking and o1 / o3 do not
support top_p or temperature.
Previously you would have to carefully remove them from everywhere;
by making this a provider param we now support blanket removal
without forcing people to update automation rules or personas
Adds support for "thinking tokens" - a feature that exposes the model's reasoning process before providing the final response. Key improvements include:
- Add a new Thinking class to handle thinking content from LLMs (see the sketch after this list)
- Modify endpoints (Claude, AWS Bedrock) to handle thinking output
- Update AI bot to display thinking in collapsible details section
- Fix SEARCH/REPLACE blocks to support empty replacement strings and general improvements to artifact editing
- Allow configurable temperature in triage and report automations
- Various bug fixes and improvements to diff parsing
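A rough sketch of consuming thinking output; the Thinking class name comes from the list above, while the block argument and message attribute are assumptions:
```
reply = +""
llm.generate(prompt, user: Discourse.system_user) do |partial|
  case partial
  when DiscourseAi::Completions::Thinking
    # rendered as a collapsible section in the bot reply
    reply << "\n[details=\"Thinking\"]\n#{partial.message}\n[/details]\n"
  when String
    reply << partial
  end
end
```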
* FEATURE: full support for Sonnet 3.7
- Adds support for Sonnet 3.7 with reasoning on bedrock and anthropic
- Fixes regression where provider params were not populated
Note: reasoning tokens are hardcoded to a minimum of 100 and a maximum of 65,536
* FIX: open ai non-reasoning models need to use the deprecated max_tokens
Disabling streaming is required for models such as o1 that do not have streaming
enabled yet
It is good to keep this feature around in case various APIs decide not to support streaming endpoints, so Discourse AI can continue to work just as it did before.
Also: fixes an issue where sharing artifacts would miss the viewport, leading to tiny artifacts on mobile
* FEATURE: first class support for OpenRouter
This new implementation supports picking quantization and provider preference (see the sketch after this list)
Also:
- Improve logging for summary generation
- Improve error message when contacting LLMs fails
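A hedged illustration of the OpenRouter provider routing options referenced above (see https://openrouter.ai/docs); the exact plugin wiring may differ:
```
payload = {
  model: "anthropic/claude-3.5-sonnet",
  messages: messages,
  provider: {
    order: ["Anthropic"],    # provider preference
    quantizations: ["fp8"]   # restrict to specific quantizations
  }
}
```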
* Better support for full screen artifacts on iPad
Support back button to close full screen
Refactor dialect selection and add Nova API support
- Change dialect selection to use llm_model object instead of just provider name
- Add support for Amazon Bedrock's Nova API with native tools
- Implement Nova-specific message processing and formatting
- Update specs for Nova and AWS Bedrock endpoints
- Enhance AWS Bedrock support to handle Nova models
- Fix Gemini beta API detection logic
- Added a new admin interface to track AI usage metrics, including tokens, features, and models.
- Introduced a new route `/admin/plugins/discourse-ai/ai-usage` and supporting API endpoint in `AiUsageController`.
- Implemented `AiUsageSerializer` for structuring AI usage data.
- Integrated CSS stylings for charts and tables under `stylesheets/modules/llms/common/usage.scss`.
- Enhanced backend with `AiApiAuditLog` model changes: added `cached_tokens` column (implemented with OpenAI for now) with relevant DB migration and indexing.
- Created `Report` module for efficient aggregation and filtering of AI usage metrics.
- Updated AI Bot title generation logic to log correctly to user vs bot
- Extended test coverage for the new tracking features, ensuring data consistency and access controls.
This is a significant PR that introduces AI Artifacts functionality to the discourse-ai plugin along with several other improvements. Here are the key changes:
1. AI Artifacts System:
- Adds a new `AiArtifact` model and database migration
- Allows creation of web artifacts with HTML, CSS, and JavaScript content
- Introduces security settings (`strict`, `lax`, `disabled`) for controlling artifact execution
- Implements artifact rendering in iframes with sandbox protection
- New `CreateArtifact` tool for AI to generate interactive content
2. Tool System Improvements:
- Adds support for partial tool calls, allowing incremental updates during generation
- Better handling of tool call states and progress tracking
- Improved XML tool processing with CDATA support
- Fixes for tool parameter handling and duplicate invocations
3. LLM Provider Updates:
- Updates for Anthropic Claude models with correct token limits
- Adds support for native/XML tool modes in Gemini integration
- Adds new model configurations including Llama 3.1 models
- Improvements to streaming response handling
4. UI Enhancements:
- New artifact viewer component with expand/collapse functionality
- Security controls for artifact execution (click-to-run in strict mode)
- Improved dialog and response handling
- Better error management for tool execution
5. Security Improvements:
- Sandbox controls for artifact execution
- Public/private artifact sharing controls
- Security settings to control artifact behavior
- CSP and frame-options handling for artifacts
6. Technical Improvements:
- Better post streaming implementation
- Improved error handling in completions
- Better memory management for partial tool calls
- Enhanced testing coverage
7. Configuration:
- New site settings for artifact security
- Extended LLM model configurations
- Additional tool configuration options
This PR significantly enhances the plugin's capabilities for generating and displaying interactive content while maintaining security and providing flexible configuration options for administrators.
Implement streaming tool calls for Anthropic and Open AI.
When calling:
llm.generate(..., partial_tool_calls: true) do ...
Partials may contain ToolCall instances with partial: true. These tool calls are partially populated, with their JSON partially parsed.
So for example when performing a search you may get:
ToolCall(..., {search: "hello" })
ToolCall(..., {search: "hello world" })
The library used to parse json is:
https://github.com/dgraham/json-stream
We use a fork because we need access to the internal buffer.
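A hedged usage sketch based on the description above; the attribute names on ToolCall (partial, parameters) follow the wording here and may differ from the actual API:
```
llm.generate(prompt, user: Discourse.system_user, partial_tool_calls: true) do |partial|
  if partial.is_a?(DiscourseAi::Completions::ToolCall) && partial.partial
    # parameters grow as more JSON streams in, e.g.
    # { search: "hello" } then { search: "hello world" }
    show_progress(partial.parameters)
  end
end
```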
This prepares internals to perform partial tool calls, but does not implement it yet.
This re-implements tool support in DiscourseAi::Completions::Llm#generate
Previously tool support was always returned via XML and it would be the responsibility of the caller to parse XML
New implementation has the endpoints return ToolCall objects.
Additionally this simplifies the Llm endpoint interface and gives it more clarity. Llms must implement
decode, decode_chunk (for streaming)
It is the implementer's responsibility to figure out how to decode chunks; the base class no longer implements this. To make this easy we ship a flexible JSON decoder which is easy to wire up.
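A rough sketch of that contract; the class and parsing details below are assumptions, not the shipped endpoint implementations:
```
class MyEndpoint < DiscourseAi::Completions::Endpoints::Base
  # Non-streaming: turn the full HTTP body into text and/or ToolCall objects.
  def decode(response_body)
    parsed = JSON.parse(response_body, symbolize_names: true)
    [parsed.dig(:choices, 0, :message, :content)]
  end

  # Streaming: each implementer decides how to split raw chunks; the shipped
  # JSON decoder can be wired up here for providers that stream SSE/JSON.
  def decode_chunk(chunk)
    chunk.split("\n").filter_map do |line|
      next unless line.start_with?("data: ")
      data = line.delete_prefix("data: ").strip
      next if data == "[DONE]"
      JSON.parse(data, symbolize_names: true).dig(:choices, 0, :delta, :content)
    end
  end
end
```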
Also (new)
Better debugging for PMs, we now have a next / previous button to see all the Llm messages associated with a PM
Token accounting is fixed for vllm (we were not correctly counting tokens)
Fixes encoding of params on LLM function calls.
Previously we would improperly return results if a function parameter returned an HTML tag.
Additionally adds some missing HTTP verbs to tool calls.
* FIX: Llm selector / forced tools / search tool
This fixes a few issues:
1. When search was not finding any semantic results we would break the tool
2. Gemini / Anthropic models did not implement forced tools previously despite it being an API option
3. Mechanics around displaying the LLM selector were not right: if you disabled the LLM selector server side, persona PMs did not work correctly.
4. Disabling native tools for Anthropic models moved out of a site setting. This deliberately does not migrate because this feature is rarely needed now; people who had it set probably did not need it.
5. Updates anthropic model names to latest release
* linting
* fix a couple of tests I missed
* clean up conditional
* FEATURE: allows forced LLM tool use
Sometimes we need to force LLMs to use tools, for example in RAG-like
use cases we may want to force an unconditional search.
The new framework allows your backend to force tool usage (see the sketch below).
Front end commit to follow
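A hypothetical persona forcing an unconditional search; the module paths and the forced-tool hook name are assumptions based on the description above:
```
class RagPersona < DiscourseAi::Personas::Persona
  def tools
    [DiscourseAi::Personas::Tools::Search]
  end

  # force the search tool to run before the model answers
  def force_tool_use
    [DiscourseAi::Personas::Tools::Search]
  end
end
```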
* UI for forcing tools now works, but it does not react right
* fix bugs
* fix tests, this is now ready for review
This allows our users to add the Ollama provider and use it to serve our AI bot (completion/dialect).
In this PR, we introduce:
DiscourseAi::Completions::Dialects::Ollama which would help us translate by utilizing Completions::Endpoint::Ollama
Correct extract_completion_from and partials_from in Endpoints::Ollama
Also
Add tests for Endpoints::Ollama
Introduce ollama_model fabricator
* FIX: Add tool support to open ai compatible dialect and vllm
Automatic tools are in progress in vllm see: https://github.com/vllm-project/vllm/pull/5649
Even when they are supported, initial support will be uneven, only some models have native tool support
notably mistral which has some special tokens for tool support.
After the above PR lands in vllm we will still need to swap to
XML based tools on models without native tool support.
* fix specs
* DEV: Remove old code now that features rely on LlmModels.
* Hide old settings and migrate persona llm overrides
* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
Using the assistant role for system messages produces an error because
the API expects alternating roles like user/assistant/user and so on.
Prompts cannot start with the assistant role.
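A minimal illustration of the constraint:
```
# accepted: alternates and starts with the user role
valid = [
  { role: "user", content: "Summarize this topic" },
  { role: "assistant", content: "Sure, here is a summary..." }
]

# rejected: starts with the assistant role and does not alternate
invalid = [
  { role: "assistant", content: "Acting as system..." },
  { role: "assistant", content: "..." }
]
```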
Native tools do not work well on Opus.
Chain of Thought prompting means it consumes enormous amounts of
tokens and has poor latency.
This commit introduces an XML stripper to remove various chain-of-thought
XML islands from Anthropic prompts when tools are involved.
This means Opus native tools now function (albeit slowly)
From local testing XML just works better now.
Also fixes enum support in Anthropic native tools
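A sketch of the stripping idea only; the tag names are illustrative assumptions, not the exact list the plugin uses:
```
COT_TAGS = %w[thinking search_quality_reflection search_quality_score]

def strip_cot_xml(text)
  COT_TAGS.reduce(text) do |acc, tag|
    acc.gsub(%r{<#{tag}>.*?</#{tag}>}m, "")
  end
end
```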
Add native Cohere tool support
- Introduce CohereTools class for tool translation and result processing
- Update Command dialect to integrate with CohereTools
- Modify Cohere endpoint to support passing tools and processing tool calls
- Add spec for testing tool triggering with Cohere endpoint
- Introduce new support for GPT4o (automation / bot / summary / helper)
- Properly account for token counts on OpenAI models
- Track feature that was used when generating AI completions
- Remove custom llm support for summarization as we need better interfaces to control registration and de-registration
This PR introduces the concept of "LlmModel" as a new way to quickly add new LLM models without making any code changes. We are releasing this first version and will add incremental improvements, so expect changes.
The AI Bot can't fully take advantage of this feature as users are hard-coded. We'll fix this in a separate PR.
Both endpoints provide OpenAI-compatible servers. The only difference is that Vllm doesn't support passing tools as a separate parameter. Even if the tool param is supported, it ultimately relies on the model's ability to handle native functions, which is not the case with the models we have today.
As a part of this change, we are dropping support for StableBeluga/Llama2 models. They don't have a chat_template, meaning the new API can't translate them.
These changes let us remove some of our existing dialects and are a first step in our plan to support any LLM by defining them as data-driven concepts.
I rewrote the "translate" method to use a template method and extracted the tool support strategies into their own classes to simplify the code.
Finally, these changes bring support for Ollama when running in dev mode. It only works with Mistral for now, but that will change soon.
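Illustrative only: the point of LlmModel is that a model becomes a row of data rather than code. The attribute names below are assumptions:
```
LlmModel.create!(
  display_name: "Local Mistral via Ollama",
  name: "mistral",
  provider: "ollama",
  url: "http://localhost:11434/v1/chat/completions",
  tokenizer: "DiscourseAi::Tokenizer::MixtralTokenizer",
  max_prompt_tokens: 8_000
)
```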
A recent change meant that the llm instance got cached internally; repeat calls
to inference would cache data in the Endpoint object, leading to failures.
Both Gemini and Open AI expect a clean endpoint object because they
set data on it.
This amends internals to make sure llm.generate will always operate
on clean objects