* DEV: AI bot migration to the Llm pattern.
We added tool and conversation-context support to the Llm service in discourse-ai#366, meaning we met all the conditions to migrate this module.
This PR migrates to the new pattern, so adding a new bot now requires minimal effort as long as the service supports it. On top of this, we introduce the concept of a "Playground" to separate the PM-specific bits from the completion, allowing us to use the bot in other contexts, like chat, in the future. Commands are now called tools, and we simplified all the placeholder logic to perform updates in a single place, making the flow closer to one-way.
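As a rough sketch of how the pieces now fit together (class names follow this description; the call shapes are illustrative, not the exact plugin API):

```ruby
# Illustrative only: the Playground owns the PM-specific plumbing
# (placeholders, streamed updates), so the same bot object could later
# back other surfaces such as chat. `bot_user`, `persona` and `post`
# are assumed to be given.
bot = DiscourseAi::AiBot::Bot.as(bot_user, persona: persona)
playground = DiscourseAi::AiBot::Playground.new(bot)
playground.reply_to(post) # streams the completion into the PM
```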
* Followup fixes based on testing
* Cleanup unused inference code
* FIX: text-based tools could appear in the middle of a sentence
* GPT-4-turbo support
* Use new LLM API
* FIX: AI helper not working correctly with mixtral
This PR introduces a new function on the generic llm called #generate
This will replace the implementation of completion!
#generate introduces a new way to pass temperature, max_tokens and stop_sequences.
LLM implementers then need to implement #normalize_model_params to map
these generic names onto the LLM-specific endpoint's parameters.
This also adds temperature and stop_sequences to completion_prompts,
which allows for much more robust completion prompts.
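As a sketch, an endpoint implementation might normalize the generic names like this (the Anthropic-style parameter name is illustrative):

```ruby
# Illustrative mapping of generic #generate params onto a provider-specific
# endpoint; real endpoints may rename or drop params as needed.
def normalize_model_params(model_params)
  model_params = model_params.dup
  # e.g. generic :max_tokens becomes Anthropic's :max_tokens_to_sample
  if (max_tokens = model_params.delete(:max_tokens))
    model_params[:max_tokens_to_sample] = max_tokens
  end
  # :temperature and :stop_sequences pass through unchanged here
  model_params
end
```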
* port everything over to #generate
* Fix translation
- On Anthropic this no longer emits a random "This is your translation:" preamble
- On Mixtral this actually works
* fix markdown table generation as well
Personas now support providing options for commands.
This PR introduces a single option, "base_query", for the SearchCommand. When supplied, every search the persona performs will also include the pre-supplied filter.
This allows personas to search a subset of the forum (such as documentation).
The system is extensible; we can add options to any command trivially.
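For illustration, a hypothetical persona could scope its searches like this (the class and option wiring are a sketch, not the exact API):

```ruby
# Hypothetical persona: every search it performs is pre-filtered, so the
# bot only ever sees documentation topics.
class DocsHelper < DiscourseAi::AiBot::Personas::Persona
  def commands
    [[DiscourseAi::AiBot::Commands::SearchCommand, { base_query: "category:documentation" }]]
  end
end
```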
We must ensure we can isolate titles, because the models sometimes ignore the example we give them.
Additionally, anonymous users can generate HyDE posts, so we need to check whether the user is nil when attempting to log requests.
Introduces a UI to manage customizable personas (admin only feature)
Part of the change was some extensive internal refactoring:
- AIBot now has a persona set in the constructor; once set, it never changes (see the sketch after this list)
- Command now takes the bot in as a constructor param, so it has the correct persona and is not generating AIBot objects on the fly
- Added a .prettierignore file; due to the way ALE is configured in nvim, it is a prerequisite for prettier to work
- Adds a bunch of validations to the AIPersona model; system personas (artist/creative etc.) are all seeded. We now ensure name uniqueness, and only allow certain properties to be touched for system personas.
- (JS note) the client-side design takes advantage of nested routes: the parent route for personas gets all the personas via this.store.findAll("ai-persona"), then child routes simply reach into this model to find a particular persona
- (JS note) data is sideloaded into the ai-persona model via the meta property supplied from the controller (resultSetMeta)
- This removes ai_bot_enabled_personas and ai_bot_enabled_chat_commands; both are now controlled from the UI on a per-persona basis
- Fixes a long-standing bug in token accounting ... we were doing to_json.length instead of to_json.to_s.length
- Amended it so {commands} are always inserted at the end unconditionally; there is no need to add it to the template of the system message, as that just confuses things
- Adds a concept of required_commands to stock personas; these are commands that must be configured for the stock persona to show up
- Refactored tests so we stop requiring inference_stubs; it was very confusing to need it. Added it to plugin.rb for now, which at least is clearer
- Migrates the persona selector to gjs
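A rough sketch of the constructor-injected persona mentioned above (the persona class, `Bot.as` signature and command constructor are illustrative):

```ruby
# Illustrative only: the persona is fixed when the bot is built, and
# commands receive the bot instead of constructing AIBot objects themselves.
persona = DiscourseAi::AiBot::Personas::Artist.new
bot = DiscourseAi::AiBot::Bot.as(bot_user, persona: persona)
command = DiscourseAi::AiBot::Commands::SearchCommand.new(bot: bot)
```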
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
- New AiPersona model which can store custom personas
- Personas are restricted via group security
- They can contain custom system messages
- They can support a list of commands optionally
To avoid expensive DB calls in the serializer, a multisite-friendly Hash was introduced (which can be expired on transaction commit).
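A minimal sketch of the idea (all names hypothetical): entries are namespaced per site database, and the per-site bucket can be cleared from an after-commit hook on the backing model.

```ruby
require "rails_multisite"

class MultisiteHash
  def initialize
    @hash = {}
  end

  def [](key)
    @hash.dig(current_db, key)
  end

  def []=(key, value)
    (@hash[current_db] ||= {})[key] = value
  end

  # Wire this to e.g. `after_commit { CACHE.expire! }` on the model that
  # feeds the cache, so readers never see stale or rolled-back data.
  def expire!
    @hash.delete(current_db)
  end

  private

  def current_db
    RailsMultisite::ConnectionManagement.current_db
  end
end
```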
Adds an AI Helper function when selecting text while viewing a topic.
---------
Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Roman Rizzi <roman@discourse.org>
Also fixes it so users without the bot in the header can still send it messages.
Prior to this change we would seed all bots with database seeds.
This led to lots of confusion for people who did not enable the AI bot.
Instead:
1. We do not seed any bots **until** the user enables the ai_bot_enabled setting
2. If it is later disabled we will (as sketched below):
a. delete the bot if it created no messages
b. otherwise deactivate the account
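A minimal sketch of that toggle behaviour (the helper names are hypothetical; the real logic lives in the plugin's seeding/setting hooks):

```ruby
# Hypothetical helper mirroring the enable/disable rules above.
def toggle_bot!(bot_user)
  if SiteSetting.ai_bot_enabled
    bot_user ? bot_user.update!(active: true) : seed_bot_user! # seeding helper is hypothetical
  elsif bot_user
    if Post.exists?(user_id: bot_user.id)
      bot_user.update!(active: false) # bot has authored messages: keep the account
    else
      bot_user.destroy! # bot never posted, so it is safe to delete outright
    end
  end
end
```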
* FEATURE: Embeddings to main db
This commit moves our embeddings store from an external configurable PostgreSQL
instance back into the main database. This is done to simplify the setup.
There is a migration that will try to import the external embeddings into
the main DB if it is configured and there are rows.
It removes support for embeddings models that aren't all_mpnet_base_v2 or OpenAI
text_embedding_ada_002. However, it will now be easier to add new models.
It also now takes into account:
- topic title
- topic category
- topic tags
- replies (as much as the model allows)
We introduce an interface so we can eventually support multiple strategies
for handling long topics.
This PR severely degrades semantic search performance, but this is temporary
until we can adapt HyDE to make semantic search use the same embeddings we
have for semantic related topics with good performance.
Here we also have some ground work to add post level embeddings, but this
will be added in a future PR.
Please note that this PR will also block Discourse from booting / updating if
this plugin is installed and the pgvector extension isn't available on the
PostgreSQL instance Discourse uses.
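A hypothetical migration sketch of the resulting shape (the table name is illustrative; all_mpnet_base_v2 produces 768-dimension vectors, text_embedding_ada_002 would need 1536):

```ruby
class CreateTopicEmbeddings < ActiveRecord::Migration[7.0]
  def up
    enable_extension :vector # the pgvector dependency mentioned above
    execute <<~SQL
      CREATE TABLE topic_embeddings (
        topic_id bigint PRIMARY KEY,
        embedding vector(768),
        updated_at timestamptz NOT NULL DEFAULT NOW()
      );
    SQL
  end

  def down
    drop_table :topic_embeddings
    disable_extension :vector
  end
end
```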
Given the latest GPT-3.5 16k model, which is both better steered and supports functions,
we can now support rich bot integration.
Clunky system-message-based steering is removed, and instead we use the
function framework provided by OpenAI.
This changeset connects GPT-based chat with the forum it runs on, allowing it to perform searches, look up tags and categories, and summarize topics.
The integration is currently restricted to public portions of the forum.
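For reference, a function declaration in OpenAI's framework looks roughly like this (a sketch for the search capability; the exact schema the plugin sends is illustrative):

```ruby
# Illustrative function description given to the OpenAI chat API so the
# model can invoke forum search instead of being steered by prose.
search_function = {
  name: "search",
  description: "Searches public topics on the forum",
  parameters: {
    type: "object",
    properties: {
      search_query: { type: "string", description: "Keywords to search for" },
    },
    required: ["search_query"],
  },
}
```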
Changes made:
- Do not run ai reply job for small actions
- Improved composable system prompt
- Trivial summarizer for topics
- Image generator
- Google command for searching via Google
- Corrected trimming of posts' raw (it was being replaced with numbers)
- Bypass of problem specs
The feature works best with GPT-4
---------
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
We'll create one bot user for each available model. When listed in the `ai_bot_enabled_chat_bots` setting, they will reply.
This PR lets us use Claude-v1 in stream mode.
* Minor... use username suggester in case username already exists
* FIX: ensure we truncate long prompts
Previously we:
1. Used raw character length instead of token counts for measuring length
2. Totally dropped a prompt if it was too long
The new implementation truncates "raw" if it gets too long, maintaining
meaning.
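Roughly (the tokenizer calls are illustrative; whichever tokenizer matches the model would be used):

```ruby
# Sketch of token-aware truncation: keep as much of raw as the budget
# allows instead of discarding the whole prompt.
def truncate(raw, max_tokens)
  tokens = tokenizer.encode(raw)
  return raw if tokens.length <= max_tokens
  tokenizer.decode(tokens.first(max_tokens))
end
```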
This module lets you chat with our GPT bot inside a PM. The bot only replies to members of the groups listed in the ai_bot_allowed_groups setting, and only if you invite it to participate in the PM.
* FIX: Remove `null: false` from dropped column
It's causing an issue when attempting to seed data in fresh installs.
The column was dropped in a post-migration.
* Update db/migrate/20230322142028_make_dropped_value_column_nullable.rb
Co-authored-by: David Taylor <david@taylorhq.com>
---------
Co-authored-by: David Taylor <david@taylorhq.com>
A prompt with multiple messages leads to better results, as the AI can learn from the given examples. Alongside this change, we provide a better default proofreading prompt.
We switched to negative integers for seeded prompts, but we also
need to delete the old ones using IDs 1/2/3 if they exist,
or deploys will fail due to the unique index on the name column.
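A sketch of that cleanup as a migration (the completion_prompts table is named earlier in this log; the migration itself is illustrative):

```ruby
class DeleteLegacySeededPrompts < ActiveRecord::Migration[7.0]
  def up
    # Old positive-ID seeds collide with the unique index on name once the
    # negative-ID replacements are inserted, so remove them first.
    execute "DELETE FROM completion_prompts WHERE id IN (1, 2, 3)"
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```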
This change adds two new reviewable types: ReviewableAIPost and ReviewableAIChatMessage. They have the same actions as their existing counterparts: ReviewableFlaggedPost and ReviewableChatMessage.
We'll display the model used and its accuracy when showing these flags in the review queue, and adjust the latter after staff perform an action, tracking a global accuracy per existing model in a separate table (see the sketch below).
* FEATURE: Dedicated reviewables for AI flags
* Store and adjust model accuracy
* Display accuracy in reviewable templates
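A hypothetical sketch of the accuracy bookkeeping (model and column names illustrative): one row per classification model, adjusted whenever staff agree or disagree with a flag.

```ruby
class ModelAccuracy < ActiveRecord::Base
  # Called after staff act on a ReviewableAIPost / ReviewableAIChatMessage.
  def self.adjust!(model_name, staff_agreed:)
    record = find_or_create_by!(model: model_name)
    record.increment!(staff_agreed ? :flags_agreed : :flags_disagreed)
  end

  # Global accuracy shown alongside the flag in the review queue.
  def accuracy
    total = flags_agreed + flags_disagreed
    total.zero? ? 0 : (flags_agreed * 100 / total)
  end
end
```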