This re-implements tool support in DiscourseAi::Completions::Llm#generate.
Previously, tool calls were always returned via XML and it was the caller's responsibility to parse that XML.
The new implementation has the endpoints return ToolCall objects.
Additionally, this simplifies the Llm endpoint interface and gives it more clarity. Llms must implement
`decode` and `decode_chunk` (for streaming).
It is the implementer's responsibility to figure out how to decode chunks; the base class no longer implements this. To make this easy we ship a flexible JSON decoder which is easy to wire up.
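To make the new contract concrete, here is a minimal sketch, assuming hypothetical names (`HypotheticalEndpoint`, `FlexibleJsonDecoder`) for everything the description above does not name explicitly:

```ruby
# Minimal sketch of the new endpoint contract, not the actual plugin code.
# Only decode, decode_chunk and ToolCall come from the description above.
class HypotheticalEndpoint
  def decode(response_body)
    parsed = JSON.parse(response_body, symbolize_names: true)
    tool = parsed.dig(:choices, 0, :message, :tool_calls)&.first

    if tool
      # Callers now receive a structured ToolCall instead of raw XML to parse.
      ToolCall.new(
        id: tool[:id],
        name: tool.dig(:function, :name),
        parameters: JSON.parse(tool.dig(:function, :arguments) || "{}"),
      )
    else
      parsed.dig(:choices, 0, :message, :content)
    end
  end

  def decode_chunk(chunk)
    # Each endpoint owns its streaming decode; a flexible JSON decoder can
    # buffer partial chunks and emit objects once they are parseable.
    @decoder ||= FlexibleJsonDecoder.new
    @decoder << chunk
    @decoder.completed_objects
  end
end
```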
Also (new):
Better debugging for PMs: we now have next/previous buttons to see all the LLM messages associated with a PM.
Token accounting is fixed for vLLM (we were not correctly counting tokens).
Fixes encoding of params on LLM function calls.
Previously we would improperly return results if a function parameter contained an HTML tag.
Additionally adds some missing HTTP verbs to tool calls.
A new endpoint, `/admin/plugins/discourse-ai/ai-personas/stream-reply.json`, was added.
This endpoint streams data directly from a persona and can be used
to access a persona from remote systems, leaving a paper trail in
PMs about the conversation that happened.
This endpoint is only accessible to admins.
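A hedged example of driving it from a remote system, assuming a POST request and illustrative parameter names (`persona_name`, `query`); check the controller for the real contract:

```ruby
require "net/http"
require "uri"

uri = URI("https://forum.example.com/admin/plugins/discourse-ai/ai-personas/stream-reply.json")

Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  request = Net::HTTP::Post.new(uri)
  # Standard Discourse API auth headers; the key must belong to an admin.
  request["Api-Key"] = ENV["DISCOURSE_ADMIN_API_KEY"]
  request["Api-Username"] = "system"
  # Parameter names here are assumptions for illustration.
  request.set_form_data(persona_name: "forum_helper", query: "Summarise this week's releases")

  # The reply is streamed, so read the body chunk by chunk as it arrives.
  http.request(request) do |response|
    response.read_body { |chunk| print chunk }
  end
end
```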
---------
Co-authored-by: Gabriel Grubba <70247653+Grubba27@users.noreply.github.com>
Co-authored-by: Keegan George <kgeorge13@gmail.com>
* FIX: Llm selector / forced tools / search tool
This fixes a few issues:
1. When search found no semantic results we would break the tool
2. Gemini / Anthropic models did not previously implement forced tools despite it being an API option
3. Mechanics around displaying the LLM selector were not right; if you disabled the LLM selector server side, persona PMs did not work correctly
4. Disabling native tools for Anthropic models moved out of a site setting. This deliberately does not migrate because the feature is rarely needed now; people who had it set probably did not need it
5. Updates Anthropic model names to the latest release
* linting
* fix a couple of tests I missed
* clean up conditional
A new `feature_context` JSON column was added to `ai_api_audit_logs`.
This allows us to store rich JSON context on any LLM request made.
This new field now stores the automation id and name.
Additionally, llm_triage can now specify a maximum number of tokens.
This means that you can limit the cost of llm_triage by scanning only the
first N tokens of a post.
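A minimal sketch of how these two additions might be used together; `triage_prompt`, the keyword names and the tokenizer call are assumptions based on the description above, not the triage code itself:

```ruby
llm = DiscourseAi::Completions::Llm.proxy("custom:#{model_id}")

# Limit triage cost by scanning only the first N tokens of the post (assumed helper).
content = llm.tokenizer.truncate(post.raw, max_post_tokens)

llm.generate(
  triage_prompt(content), # hypothetical prompt builder
  user: Discourse.system_user,
  feature_name: "llm_triage",
  # Persisted to ai_api_audit_logs.feature_context for later inspection (assumed kwarg).
  feature_context: { automation_id: automation.id, automation_name: automation.name },
)
```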
* FEATURE: allows forced LLM tool use
Sometimes we need to force LLMs to use tools, for example in RAG-like
use cases we may want to force an unconditional search.
The new framework allows the backend to force tool usage.
Front end commit to follow.
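A rough sketch of what a persona forcing a tool could look like; the `force_tool_use` hook name is an assumption, not necessarily the final API:

```ruby
# Illustrative persona: force an unconditional search before any free-form reply.
class RagSearchPersona < DiscourseAi::AiBot::Personas::Persona
  def tools
    [DiscourseAi::AiBot::Tools::Search]
  end

  def force_tool_use
    # Returning the tool here tells the framework it must be invoked (assumed hook).
    [DiscourseAi::AiBot::Tools::Search]
  end

  def system_prompt
    "Always ground answers in forum search results."
  end
end
```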
* UI for forcing tools now works, but it does not react right
* fix bugs
* fix tests, this is now ready for review
This allows our users to add the Ollama provider and use it to serve our AI bot (completion/dialect).
In this PR, we introduce:
- DiscourseAi::Completions::Dialects::Ollama, which helps us translate prompts when using Completions::Endpoint::Ollama
- Corrected extract_completion_from and partials_from in Endpoints::Ollama
Also:
- Add tests for Endpoints::Ollama
- Introduce an ollama_model fabricator
Caveats:
- No streaming, by design
- No tool support (including no XML tools)
- No vision
Open AI will revamp the model and more of these features may
become available.
This solution is a bit hacky for now.
* FEATURE: LLM Triage support for systemless models.
This change adds support for OSS models that don't support system messages. LlmTriage's system message field is no longer mandatory. We now send the post contents in a separate user message.
* Models using Ollama can also disable system prompts
Creating a new model, either manually or from presets, doesn't initialize the `provider_params` object, meaning its custom params won't persist.
Additionally, this change adds some validations for Bedrock params, which are mandatory, and a clear message when a completion fails because we cannot build the URL.
* FIX: Add tool support to open ai compatible dialect and vllm
Automatic tool support is in progress in vLLM, see: https://github.com/vllm-project/vllm/pull/5649
Even when it lands, initial support will be uneven; only some models have native tool support,
notably Mistral, which has some special tokens for tool support.
After the above PR lands in vLLM we will still need to swap to
XML-based tools on models without native tool support.
* fix specs
* DEV: Remove old code now that features rely on LlmModels.
* Hide old settings and migrate persona llm overrides
* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
Using RAG fragments can lead to considerably large system messages, which becomes problematic when models have a smaller context window.
Before this change, we only looked at the rest of the conversation to make sure we didn't surpass the limit, which could lead to two unwanted scenarios with large system messages:
- All other messages are excluded due to size.
- The system message already exceeds the limit.
As a result, I'm putting a hard limit of 60% of available tokens on the system message. We don't want to aggressively truncate because if RAG fragments are included, the system message contains a lot of context to improve the model response, but we also want to make room for the recent messages in the conversation.
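In rough Ruby, the budgeting described above amounts to something like this (a sketch with assumed helper names):

```ruby
# Cap the system message (which may embed RAG fragments) at 60% of the
# context window so recent conversation messages still fit. Names are illustrative.
def capped_system_message(system_message, max_prompt_tokens, tokenizer)
  budget = (max_prompt_tokens * 0.6).to_i
  tokenizer.truncate(system_message, budget)
end
```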
Using the assistant role for the system message produces an error because
the API expects alternating roles like user/assistant/user and so on.
Prompts cannot start with the assistant role.
This allows summarization to use the new LLM models and migrates away from API-key-based model selection.
Claude 3.5 etc. all work now.
---------
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Previously, we stored request parameters like the OpenAI organization and Bedrock's access key and region as site settings. This change stores them in the `llm_models` table instead, letting us drop more settings while also becoming more flexible.
* FEATURE: LLM presets for model creation
Previously, users needed to look up complicated settings
when setting up models.
This introduces an extensible preset system with Google/OpenAI/Anthropic
presets.
This will cover the most common LLMs; we can always add more as
we go.
Additionally:
- Proper support for Anthropic Claude 3.5 Sonnet
- Stop blurring API keys when navigating away; this made it very complex to reuse keys
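As an illustration, a preset can be little more than a hash of sensible defaults; the keys below are assumptions, not the shipped preset data:

```ruby
# Sketch of an Anthropic preset entry; keys and values are illustrative.
ANTHROPIC_PRESET = {
  id: "anthropic",
  provider: "anthropic",
  url: "https://api.anthropic.com/v1/messages",
  tokenizer: "DiscourseAi::Tokenizer::AnthropicTokenizer",
  models: [
    { name: "claude-3-5-sonnet", max_prompt_tokens: 200_000 },
    { name: "claude-3-haiku", max_prompt_tokens: 200_000 },
  ],
}
```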
We no longer support the "provider:model" format in the "ai_helper_model" and
"ai_embeddings_semantic_search_hyde_model" settings. We'll migrate existing
values and work with our new data-driven LLM configs from now on.
* DRAFT: Create AI Bot users dynamically and support custom LlmModels
* Get user associated to llm_model
* Track enabled bots with attribute
* Don't store bot username. Minor touches to migrate default values in settings
* Handle scenario where vLLM uses a SRV record
* Made 3.5-turbo-16k the default version so we can remove the hack
- Display filtered search correctly, so it is not confusing
- When XML stripping, if a chunk was `<` it would crash
- SQL Helper improved to be more aware of Data Explorer
Native tools do not work well on Opus.
Chain-of-thought prompting means it consumes enormous amounts of
tokens and has poor latency.
This commit introduces an XML stripper to remove various
chain-of-thought XML islands from Anthropic prompts when tools are involved.
This means Opus native tools now function (albeit slowly).
From local testing, XML just works better now.
Also fixes enum support in Anthropic native tools.
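A minimal sketch of the stripping idea, assuming the chain-of-thought islands sit in known tags such as `<thinking>`; the tag list and class name are assumptions:

```ruby
# Remove chain-of-thought XML islands from model output before further processing.
# Tag names and class name are illustrative assumptions.
class ChainOfThoughtStripper
  TAGS = %w[thinking search_quality_reflection search_quality_score]

  def strip(text)
    TAGS.reduce(text) { |out, tag| out.gsub(%r{<#{tag}>.*?</#{tag}>}m, "") }
  end
end
```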
Add native Cohere tool support
- Introduce CohereTools class for tool translation and result processing (a sketch follows this list)
- Update Command dialect to integrate with CohereTools
- Modify Cohere endpoint to support passing tools and processing tool calls
- Add spec for testing tool triggering with Cohere endpoint
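For flavor, translating one of our tool definitions into Cohere's `parameter_definitions` shape might look roughly like this; the method and input shape are assumptions:

```ruby
# Sketch: map an internal tool definition onto Cohere's tool schema.
# The input hash shape is an assumption for illustration.
def translate_tool(tool)
  {
    name: tool[:name],
    description: tool[:description],
    parameter_definitions:
      tool[:parameters].to_h do |param|
        [
          param[:name],
          { description: param[:description], type: param[:type], required: !!param[:required] },
        ]
      end,
  }
end
```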
Previously only GPT-4-vision was supported; this change introduces support
for Google/Anthropic and new OpenAI models.
Additionally, this makes vision work properly in dev environments
because we send the encoded payload via the prompt instead of sending URLs.
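A hedged sketch of the "encoded payload" idea, using Anthropic's base64 image shape as the example; the helper name is an assumption:

```ruby
require "base64"

# Embed the image bytes directly in the prompt payload instead of a public URL,
# which is what makes vision usable in dev environments. Shape shown is Anthropic's.
def image_content(path, mime_type: "image/png")
  {
    type: "image",
    source: {
      type: "base64",
      media_type: mime_type,
      data: Base64.strict_encode64(File.binread(path)),
    },
  }
end
```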
* FEATURE: Set endpoint credentials directly from LlmModel.
Drop Llama2Tokenizer since we no longer use it.
* Allow http for custom LLMs
---------
Co-authored-by: Rafael Silva <xfalcox@gmail.com>
- Introduce new support for GPT4o (automation / bot / summary / helper)
- Properly account for token counts on OpenAI models
- Track feature that was used when generating AI completions
- Remove custom llm support for summarization as we need better interfaces to control registration and de-registration
There are still some limitations to which models we can support with the `LlmModel` class. This will enable support for Llama3 while we sort those out.
This PR introduces the concept of "LlmModel" as a new way to quickly add new LLM models without making any code changes. We are releasing this first version and will add incremental improvements, so expect changes.
The AI Bot can't fully take advantage of this feature as users are hard-coded. We'll fix this in a separate PR.
When the bot is @mentioned, we need to be a lot more careful
about constructing context, otherwise the bot gets ultra confused.
This changes multiple things:
1. We were omitting every thread's first message (fixed)
2. Include the thread title (if available) in the context
3. Construct the context in a clearer way, separating the user request from the data