* DEV: Remove old code now that features rely on LlmModels.
* Hide old settings and migrate persona LLM overrides
* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
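Roughly, the prefix-based lookup works like this (a sketch; the helper name and URL handling are illustrative):

```ruby
require "resolv"

# Illustrative sketch: a "srv://" prefix marks a model whose endpoint is
# discovered via a DNS SRV lookup instead of a fixed URL.
def resolve_llm_endpoint(url)
  return url unless url.start_with?("srv://")

  record = Resolv::DNS.new.getresource(
    url.delete_prefix("srv://"),
    Resolv::DNS::Resource::IN::SRV,
  )
  "https://#{record.target}:#{record.port}"
end
```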
* FIX: Properly fix ai_summaries table sequence
The previous attempt at 3815360 could fail due to a race introduced in 1b0ba91, where summaries were erroneously migrated to core in a post_migrate hook.
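A minimal sketch of this kind of repair, assuming the standard Rails/Postgres setup (the actual migration may differ):

```ruby
# frozen_string_literal: true

class ResetAiSummariesSequence < ActiveRecord::Migration[7.0]
  def up
    # Move the identity sequence past the largest migrated id so new
    # inserts can't collide with rows copied over during the migration.
    execute <<~SQL
      SELECT setval(
        pg_get_serial_sequence('ai_summaries', 'id'),
        COALESCE((SELECT MAX(id) FROM ai_summaries), 0) + 1,
        false
      )
    SQL
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```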
Using RAG fragments can lead to considerably large system messages, which becomes problematic when models have a smaller context window.
Before this change, we only looked at the rest of the conversation to make sure we didn't exceed the limit, which could lead to two unwanted scenarios with large system messages:
- All other messages are excluded due to size.
- The system message already exceeds the limit.
As a result, I'm putting a hard limit of 60% of the available tokens on the system message. We don't want to truncate aggressively, because when RAG fragments are included the system message carries a lot of context that improves the model's response, but we also want to leave room for the recent messages in the conversation.
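A rough sketch of the budgeting rule (the tokenizer interface and names are illustrative):

```ruby
SYSTEM_BUDGET_RATIO = 0.6 # hard limit: 60% of the available tokens

def fit_system_message(content, max_prompt_tokens, tokenizer)
  budget = (max_prompt_tokens * SYSTEM_BUDGET_RATIO).to_i
  return content if tokenizer.size(content) <= budget

  # Truncate the tail: RAG fragments are appended after the persona
  # instructions, so the most essential instructions survive while 40%
  # of the window stays free for recent conversation messages.
  tokenizer.truncate(content, budget)
end
```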
Followup 8d4a67fbe2
The prompt worked better, but it took the instructions
about never using uppercase a little too literally;
it wasn't using uppercase even for things like LLM or Discourse.
It was also almost always framing the title as a question,
so now I ask for a mix of questions and statements,
because that's less ambiguous.
Changes the title generator prompt to avoid clickbait-y
titles and to try to avoid AI's favourite title format,
which is "Some Thing: Other Thing".
Leaving the chat thread title generator for now; that's
not as important, and the bizarre titles add to the experience there.
* Seeding the SRV-backed model should happen inside an initializer.
* Keep the model up to date when the hidden setting changes.
* Use the correct Mixtral model name and fix the previous data migration.
* URL validation should trigger only when we attempt to update it.
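In outline (the setting and helper names here are assumptions, not necessarily the real ones):

```ruby
# plugin.rb
after_initialize do
  # Seed the SRV-backed model at boot rather than from a migration or
  # callback, so pending migrations can't block the seed.
  seed_srv_backed_model if SiteSetting.ai_vllm_endpoint_srv.present?

  # Keep the seeded model in sync when the hidden setting changes.
  on(:site_setting_changed) do |name, _old, _new|
    seed_srv_backed_model if name == :ai_vllm_endpoint_srv
  end
end
```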
Using the assistant role for the system prompt produces an error because
these APIs expect alternating roles, like user/assistant/user and so on.
Prompts cannot start with the assistant role.
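A minimal sketch of the constraint being enforced (illustrative, not the actual dialect code):

```ruby
def validate_roles!(messages)
  if messages.first[:role] == :assistant
    raise ArgumentError, "prompt cannot start with the assistant role"
  end

  messages.each_cons(2) do |previous, current|
    # user/assistant messages must strictly alternate
    if previous[:role] == current[:role]
      raise ArgumentError, "roles must alternate"
    end
  end
end
```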
1. Repairs the identity on the summary table; we migrated data without resetting it.
2. Adds an index to the ai_summaries table to match the expected retrieval pattern.
This allows summarization to use the new LLM models and migrates away from API-key-based model selection.
Claude 3.5, etc. all work now.
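A sketch of the index matching that pattern (column names are assumptions based on the description):

```ruby
class AddTargetIndexToAiSummaries < ActiveRecord::Migration[7.0]
  def change
    # Summaries are looked up by the record they summarize, so index
    # the polymorphic target columns together.
    add_index :ai_summaries, %i[target_id target_type]
  end
end
```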
---------
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
Follow up to b863ddc94b
Ruby:
* Validate `summary` (the column is `not null`)
* Fix `name` validation (the column has `max_length` 100)
* Fix table annotations
* Accept missing `parameter` attributes (`required`, `enum`, `enum_values`)
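Roughly what the validation fixes amount to (a sketch; the exact options in the commit may differ):

```ruby
class AiTool < ActiveRecord::Base
  # The column is NOT NULL, so enforce presence at the model level too.
  validates :summary, presence: true

  # The column caps out at 100 characters.
  validates :name, presence: true, length: { maximum: 100 }
end
```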
JS:
* Use native classes
* Don't use ember's array extensions
* Add explicit service injections
* Correct class names
* Use `||=` operator
* Use `store` service to create records
* Remove unused service injections
* Extract consts
* Group actions together
* Use `async`/`await`
* Use `withEventValue`
* Sort html attributes
* Use DButtons `@label` arg
* Use `input` elements instead of Ember's `Input` component (same w/ textarea)
* Remove `btn-default` class (automatically applied by DButton)
* Don't mix `I18n.t` and `i18n` in the same template
* Don't track props that aren't used in a template
* Correct invalid `target.value` code
* Remove unused/invalid `this.parameter`/`onChange` code
* Whitespace
* Use the new service import `inject as service` -> `service`
* Use `Object.entries()`
* Add missing i18n strings
* Fix an error in `addEnumValue` (calling `pushObject` on `undefined`)
* Use `TrackedArray`/`TrackedObject`
* Transform tool `parameters` keys (`enumValues` -> `enum_values`)
* FIX: Use base64 encoded images in AI Image Caption via LLaVa
This fixes a regression introduced in #646, where we started sending
scheme-less URLs to our LLaVa service, which doesn't handle them well.
Moving to base64 encoded images solves:
- The service needing to download images
Now the service running LLaVa doesn't need internet access
- Secure uploads compat
Every image is treated the same, less branching for secure uploads
- Image size problems
Discourse is now responsible for ensuring a max size for images
- Troublesome dev env
Prior to this commit, you needed a dev env that was internet
accessible to use LLaVa image captions
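In outline, the request now inlines the image bytes instead of a URL (the payload keys are illustrative):

```ruby
require "base64"

# Build the LLaVa request with the image embedded as base64, so the
# service never has to fetch anything over the network.
def llava_payload(image_path, prompt)
  {
    prompt: prompt,
    image: Base64.strict_encode64(File.binread(image_path)),
  }
end
```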
Introduces custom AI tools functionality.
1. Why it was added:
The PR adds the ability to create, manage, and use custom AI tools within the Discourse AI system. This feature allows for more flexibility and extensibility in the AI capabilities of the platform.
2. What it does:
- Introduces a new `AiTool` model for storing custom AI tools
- Adds CRUD (Create, Read, Update, Delete) operations for AI tools
- Implements a tool runner system for executing custom tool scripts
- Integrates custom tools with existing AI personas
- Provides a user interface for managing custom tools in the admin panel
3. Possible use cases:
- Creating custom tools for specific tasks or integrations (stock quotes, currency conversion, etc.)
- Allowing administrators to add new functionalities to AI assistants without modifying core code
- Implementing domain-specific tools for particular communities or industries
4. Code structure:
The PR introduces several new files and modifies existing ones:
a. Models:
- `app/models/ai_tool.rb`: Defines the AiTool model
- `app/serializers/ai_custom_tool_serializer.rb`: Serializer for AI tools
b. Controllers:
- `app/controllers/discourse_ai/admin/ai_tools_controller.rb`: Handles CRUD operations for AI tools
c. Views and Components:
- New Ember.js components for tool management in the admin interface
- Updates to existing AI persona management components to support custom tools
d. Core functionality:
- `lib/ai_bot/tool_runner.rb`: Implements the custom tool execution system
- `lib/ai_bot/tools/custom.rb`: Defines the custom tool class
e. Routes and configurations:
- Updates to route configurations to include new AI tool management pages
f. Migrations:
- `db/migrate/20240618080148_create_ai_tools.rb`: Creates the ai_tools table
g. Tests:
- New test files for AI tool functionality and integration
The PR integrates the custom tools system with the existing AI persona framework, allowing personas to use both built-in and custom tools. It also includes safety measures such as timeouts and HTTP request limits to prevent misuse of custom tools.
Overall, this PR significantly enhances the flexibility and extensibility of the Discourse AI system by allowing administrators to create and manage custom AI tools tailored to their specific needs.
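As a sketch of those safety measures, the runner bounds both execution time and outbound HTTP (names and limits here are illustrative):

```ruby
require "timeout"

class ToolRunner
  MAX_RUNTIME_SECONDS = 2
  MAX_HTTP_REQUESTS = 20

  def initialize(tool)
    @tool = tool
    @http_requests = 0
  end

  def invoke(arguments)
    # Abort scripts that run too long.
    Timeout.timeout(MAX_RUNTIME_SECONDS) do
      run_script(@tool.script, arguments) # sandboxed execution, not shown
    end
  end

  # Called by the sandbox before every outbound request a script makes.
  def track_http_request!
    @http_requests += 1
    raise "HTTP request limit exceeded" if @http_requests > MAX_HTTP_REQUESTS
  end
end
```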
Co-authored-by: Martin Brennan <martin@discourse.org>
Having this as a callback prevents deploys of sites with a vLLM SRV configured and pending migrations. Additionally, this fixes a bug where we didn't delete/deactivate the companion user after deleting an LLM.
Previously, we stored request parameters like the OpenAI organization and Bedrock's access key and region as site settings. This change stores them in the `llm_models` table instead, letting us drop more settings while also becoming more flexible.
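Conceptually, per-provider request parameters move from global settings onto each `llm_models` row (a sketch; the real column and key names may differ):

```ruby
# Before: one global site setting shared by every model, e.g.
#   SiteSetting.ai_openai_organization
#
# After: stored per model, e.g. in a json column on llm_models
llm = LlmModel.find_by(provider: "aws_bedrock")
llm.provider_params["access_key_id"]
llm.provider_params["region"]
```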
* FEATURE: LLM presets for model creation
Previously, users needed to look up complicated settings
when setting up models.
This introduces an extensible preset system with Google/OpenAI/Anthropic
presets.
These will cover the most common LLMs; we can always add more as
we go.
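The presets boil down to a lookup table of per-provider defaults, roughly like this (values shown for illustration):

```ruby
LLM_PRESETS = {
  anthropic: {
    display_name: "Claude 3.5 Sonnet",
    name: "claude-3-5-sonnet",
    tokenizer: "AnthropicTokenizer",
    max_prompt_tokens: 200_000,
    endpoint: "https://api.anthropic.com/v1/messages",
  },
  open_ai: {
    display_name: "GPT-4o",
    name: "gpt-4o",
    tokenizer: "OpenAiTokenizer",
    max_prompt_tokens: 128_000,
    endpoint: "https://api.openai.com/v1/chat/completions",
  },
}
```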
Additionally:
- Proper support for Anthropic Claude 3.5 Sonnet
- Stop blurring API keys when navigating away; this made it very cumbersome to reuse keys