491 Commits

Author SHA1 Message Date
Sam
dcafc8032f
FIX: improve embedding generation (#452)
1. On failure we were queuing a job to generate embeddings with the wrong params. This is now fixed and covered by a test.
2. Backfill embeddings in bumped_at order, so the newest content is embedded first; covered by a test.
3. Add a safeguard (hidden site setting) that only allows batches of 50k in an embedding job run.

Previously, old embeddings were updated in a random order; this changes it so we update in a consistent order (see the sketch below).
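
A rough sketch of what the bumped_at-ordered backfill could look like; the relation, the 50_000 cap, and the generate_embedding_for helper are illustrative assumptions, not the plugin's actual code:

limit = 50_000 # assumed value of the hidden batch-size safeguard
Topic
  .where(deleted_at: nil)
  .order(bumped_at: :desc) # newest (most recently bumped) content first
  .limit(limit)
  .pluck(:id)
  .each { |topic_id| generate_embedding_for(topic_id) } # hypothetical helper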
2024-01-31 10:38:47 -03:00
Sam
abcf5ea94a
FEATURE: fine tune llm report to follow instructions more closely (#451)
- Allow users to supply top_p and temperature values, so they can fine-tune randomness (see the sketch below)
- Fix bad localization string
- Fix bad remapping of max tokens in Gemini
- Add support for top_p as a general param to llms
- Amend system prompt so persona stops treating a user as an adversary
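
A minimal sketch of how the tunable sampling params could be passed along; the method name and keyword arguments are assumptions for illustration, not the plugin's exact API:

llm.generate(
  report_prompt,              # hypothetical prompt for the report
  temperature: 0.2,           # lower values make the report more deterministic
  top_p: 0.9,                 # narrows token sampling to the most likely candidates
  user: Discourse.system_user,
)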
2024-01-31 09:58:25 +11:00
Rafael dos Santos Silva
9543ded3ee
DEV: Make per post embeddings a hidden setting (#450) 2024-01-30 15:51:54 -03:00
Rafael dos Santos Silva
b41c5cc31c
FIX: Add table name to remove ambiguous column reference in SQL (#449) 2024-01-30 15:50:26 -03:00
Discourse Translator Bot
57d350c913
Update translations (#448) 2024-01-30 17:03:35 +01:00
Sam
ab7e9e31aa
FEATURE: allow excluding tags and categories from LLM report (#447)
Also

- Better diagnostics: output the model being used
- Prompt the LLM that the true content is being injected in a <context> tag
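
A rough sketch of the tag/category exclusion; excluded_category_ids and excluded_tag_ids are stand-ins for whatever the report settings supply:

topics = Topic.where.not(category_id: excluded_category_ids)
topics = topics.where.not(id: TopicTag.where(tag_id: excluded_tag_ids).select(:topic_id))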
2024-01-30 15:55:05 +11:00
Roman Rizzi
bae71eb047
FIX: Include provider in automation models (#446) 2024-01-29 18:07:29 -03:00
Roman Rizzi
0634b85a81
UX: Validations to LLM-backed features (except AI Bot) (#436)
* UX: Validations to LLM-backed features (except AI Bot)

This change is part of an ongoing effort to prevent enabling a broken feature due to a lack of configuration. We also want to make explicit which provider we are going to use. For example, Claude models are available through AWS Bedrock and Anthropic, but the configuration differs.

Validations are:

* You must choose a model before enabling the feature.
* You must turn off the feature before setting the model to blank.
* You must configure each model's settings before you can select it (see the sketch below).

* Add provider name to summarization options

* vLLM can technically support the same models as HF

* Check we can talk to the selected model

* Check for Bedrock instead of Anthropic, as a site could have both sets of creds set up
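
A minimal sketch of the validator shape, following Discourse's site-setting validator convention; the class name, setting name, and credentials check are assumptions for illustration:

class EnableAiSummarizationValidator # hypothetical name
  def valid_value?(val)
    return true if val == "f" # turning the feature off is always allowed
    model = SiteSetting.ai_summarization_model # assumed setting name
    model.present? && credentials_present_for?(model) # hypothetical per-provider check
  end
end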
2024-01-29 16:04:25 -03:00
Sam
b2b01185f2
FEATURE: add support for new OpenAI embedding models (#445)
* FEATURE: add support for new OpenAI embedding models

This adds support for the just-released text_embedding_3_small and large models.

Note: we have not yet implemented truncation support, which is a new API
feature (triggered using the dimensions parameter).

* Tiny side fix: recalc bots when AI is enabled or disabled

* FIX: downsample to 2000 items per vector, which is a pgvector limitation (see the sketch below)
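
One simple way to cap the stored vector at pgvector's 2000-dimension index limit; the response shape is an assumption, and keeping the first 2000 values is just an illustrative downsampling strategy:

vector = response.dig(:data, 0, :embedding) # assumed OpenAI-style response shape
vector = vector.first(2000) if vector.size > 2000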
2024-01-29 13:24:30 -03:00
Keegan George
4c4b418cff
DEV: Not necessary to show modal with errors (#444) 2024-01-26 09:54:43 -08:00
Sam
092da860e2
FEATURE: support gpt-4-0125 which was just released (#443)
The new model has better performance and is always preferable to
the old one, which has unicode issues during function calls.
2024-01-26 09:08:02 +11:00
Roman Rizzi
b461ebc4ca
FIX: typo in Automation::AVAILABLE_MODELS (#442) 2024-01-25 11:56:28 -03:00
Rafael dos Santos Silva
fa6bc7f409
FIX: Automatic embeddings index could fail if it existed in the backup schema (#441) 2024-01-24 15:57:26 -03:00
Rafael dos Santos Silva
16d666fe69
FIX: Misconfigured OpenAI API for embeddings shouldn't spam logs (#440) 2024-01-24 15:57:18 -03:00
Rafael dos Santos Silva
04bc402aae
FEATURE: Setting to control per post embeddings (#439)
* FEATURE: Setting to control per post embeddings
2024-01-23 22:09:27 -03:00
Discourse Translator Bot
797f5971b6
Update translations (#438) 2024-01-23 18:29:44 +01:00
Kris
900df4e8c8
UX: start progress dot animation instantly if it's the only content (#437) 2024-01-22 13:10:51 -05:00
Jarek Radosz
4b4aedb50f
DEV: Use the new controller/period component for the dashboard (#435) 2024-01-19 13:27:33 +01:00
Jarek Radosz
5802cd1a0c
DEV: Fix various typos (#434) 2024-01-19 12:51:26 +01:00
Rafael dos Santos Silva
d4e23e0df6
FIX: Don't try to generate embeddings of posts in deleted topics (#433) 2024-01-18 16:10:25 -03:00
Dax74
f65314bdab
FIX: typo (#432) 2024-01-18 16:38:29 +01:00
Rafael dos Santos Silva
c70f43f130
FIX: Truncate content for sentiment/toxicity classification (#431) 2024-01-17 15:17:58 -03:00
Roman Rizzi
5bdf3dc1f4
DEV: Stop using shared_examples for endpoint specs (#430) 2024-01-17 15:08:49 -03:00
Gerhard Schlager
8eb1e851fc
DEV: Spec didn't work correctly with translations (#429) 2024-01-16 16:28:24 +01:00
Discourse Translator Bot
14020e7095
Update translations (#428) 2024-01-16 14:54:42 +01:00
Sam
370074ef21
FIX: always ensure #generate gets a valid input (#427)
We were not validating the input to #generate, which led to two tests
not failing correctly despite the functionality being broken.

This ensures that input is validated and, in turn, fixes the broken
specs.
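
A minimal sketch of the kind of guard this implies; a prompt object class name like DiscourseAi::Completions::Prompt is assumed here based on the generic-prompt refactor (#416), and the exact check is an assumption:

def generate(prompt, user:)
  if !prompt.is_a?(DiscourseAi::Completions::Prompt) && !prompt.is_a?(String)
    raise ArgumentError, "prompt must be a Prompt object or a String"
  end
  # ... build the request and call the underlying model as before
end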
2024-01-16 15:21:58 +11:00
Sam
05d8b021f1
FIX: scrub invalid prompts when truncating (#426)
When we trim a prompt, we never want to end up with a "tool" reply
without a corresponding tool call; it makes no sense (sketched below).

Also

- GPT-4-Turbo is 128k; fix that
- Claude was not preserving the username in the prompt
- We were throwing away unicode usernames instead of adding them to the message
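
The orphan "tool" cleanup described above could look roughly like this; the message shape (:type and :id keys) is an assumption for illustration:

tool_call_ids = messages.filter_map { |m| m[:id] if m[:type] == :tool_call }
messages = messages.reject { |m| m[:type] == :tool && !tool_call_ids.include?(m[:id]) }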
2024-01-16 13:48:00 +11:00
Roman Rizzi
ff4da6ace8
FIX: Clean unicode usernames when adding messages through prompt's constructor (#425) 2024-01-15 12:01:40 -03:00
Ted Johansson
37e6ac169e
DEV: Update test setup to work with auto groups (#424)
We're updating core to change TL-based access settings to be group-based. This requires some test updates to keep things working correctly. (The existing test setup gives false positives.)
2024-01-15 20:18:56 +08:00
Sam
825f01cfb2
FEATURE: even smoother streaming (#420)
Account properly for function calls; don't stream through <details> blocks
- Rush cooked content back to client
- Wait longer (up to 60 seconds) before giving up on streaming
- Clean up message bus channels so we don't have leftover data
- Make ai streamer much more reusable and much easier to read
- If the buffer grows quickly, rush the update so you are not artificially waiting
- Refine prompt interface
- Fix lost system message when prompt gets long
2024-01-15 18:51:14 +11:00
Jarek Radosz
6b8a57d957
DEV: Update linting (#423)
Co-authored-by: Keegan George <kgeorge13@gmail.com>
2024-01-13 00:28:06 +01:00
Keegan George
1748ebcb8c
DEV: Prevent HyDE search from being called multiple times (#422) 2024-01-12 11:48:07 -08:00
Roman Rizzi
04eae76f68
REFACTOR: Represent generic prompts with an Object. (#416)
* REFACTOR: Represent generic prompts with an Object.

* Adds a bit more validation for clarity

* Rewrite bot title prompt and fix quirk handling

---------

Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
2024-01-12 14:36:44 -03:00
Rafael dos Santos Silva
705ef986b4
FIX: Set ivfflat.probes using topic count, not post count (#421)
Fixes a regression from 140359c which caused us to set this globally based on post count, making the cost of an index scan on the topics table too high and causing the planner to (correctly) stop using the index.

Hopefully https://github.com/pgvector/pgvector/issues/235 lands soon.
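
A rough sketch of the shape of the fix; the probes heuristic is an assumption for illustration, while SET ivfflat.probes is pgvector's actual session-level knob and DB.exec is Discourse's MiniSql helper:

probes = Math.sqrt(Topic.count / 1_000.0).ceil.clamp(1, 100) # assumed heuristic based on topic count
DB.exec("SET ivfflat.probes = #{probes}")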
2024-01-12 11:20:23 -03:00
Rafael dos Santos Silva
3be76ebd7a
FEATURE: Move the default embeddings model to bge-large-en (#417) 2024-01-11 14:16:25 -03:00
Sam
8df966e9c5
FEATURE: smooth streaming of AI responses on the client (#413)
This PR introduces 3 things:

1. Fake bot that can be used locally so you can test LLMs; to enable it in dev, use:

SiteSetting.ai_bot_enabled_chat_bots = "fake"

2. More elegant smooth streaming of progress on LLM completion

This leans on JavaScript to buffer and trickle LLM results through. It also amends things so the progress
dot is rendered much more consistently.

3. It fixes the Claude dialect.

Claude needs newlines **exactly** in the right spots; amended so it is happy.

---------

Co-authored-by: Martin Brennan <martin@discourse.org>
2024-01-11 15:56:40 +11:00
Martin Brennan
37b957dbbb
DEV: Fix SemanticRelated module load error (#419)
Follow-up to 2636efcd1bf6eaa0a6d0d868affb9d41d49bdda2:
whenever Ruby code was changed locally, this would break
module loading, giving an "uninitialized constant
DiscourseAi::Embeddings::EntryPoint::SemanticRelated" error.
2024-01-11 13:52:50 +10:00
Keegan George
5c9b570562
FIX: Revert AI action not working in Firefox (#418)
* FIX: Revert AI action not working in Firefox

* Make it pretty 💄
2024-01-11 11:43:39 +11:00
Rafael dos Santos Silva
8fcba12fae
FEATURE: Support for SRV records for Discourse services (#414)
This allows admins to configure services with multiple backends using DNS SRV records. This PR also adds support for shared-secret auth via headers for TEI and vLLM endpoints, so they are in line with the other ones.
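
A minimal sketch of an SRV lookup using Ruby's stdlib resolver; the service name is a made-up example, not one of the plugin's actual records:

require "resolv"

records = Resolv::DNS.open do |dns|
  dns.getresources("_tei._tcp.ai.example.internal", Resolv::DNS::Resource::IN::SRV)
end
record = records.min_by { |r| [r.priority, -r.weight] } # prefer lowest priority, then highest weight
endpoint = "#{record.target}:#{record.port}" if record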
2024-01-10 19:23:07 -03:00
Keegan George
9d8bbe32a9
FIX: AI Explain copy button not working (#415) 2024-01-10 10:41:48 -08:00
Keegan George
726cffc8af
FIX: New illustrate post suggestions should be auto tracked (#412) 2024-01-10 09:04:10 -08:00
Roman Rizzi
abde82c1f3
FIX: Use claude-2.1 to enable system prompts (#411) 2024-01-09 14:10:20 -03:00
Discourse Translator Bot
0f4e7723d7
Update translations (#410) 2024-01-09 15:09:46 +01:00
Sam
05f7808057
FEATURE: more elegant progress (#409)
Prior to this change it was very hard to tell whether completion was
stuck.

This introduces a "dot" that follows the completion and starts
flashing after 5 seconds.
2024-01-09 09:20:28 -03:00
Sam
b0a0cbe3ca
FIX: improve bot behavior (#408)
* FIX: improve bot behavior

- Provide more information to the Gemini context after function execution
- Use system prompts for Claude (fixes Dall E)
- Ensure Assistant is properly separated
- Teach Claude to return arrays in JSON vs XML

Also refactors tests so we do not copy the tool preamble everywhere

* System msg is claude-2 only; fix typo

---------

Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
2024-01-08 10:28:03 -03:00
Roman Rizzi
6124f910c1
FIX: Bring back Azure support. (#407)
We thought Azure's latest API version didn't have tool support yet, but it turned out it was complaining about a missing required field in the tool call message.
2024-01-05 17:08:10 -03:00
Sam
17cc09ec9c
FIX: don't include <details> in context (#406)
* FIX: don't include <details> in context

We need to be careful adding <details> into the context of conversations;
it can cause LLMs to hallucinate results (sketched below).

* Fix Gemini multi-turn ctx flattening

---------

Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
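
The <details> scrub mentioned above could be as simple as stripping the blocks before a reply is added back into context; raw_reply is a stand-in name for illustration:

scrubbed = raw_reply.gsub(%r{<details>.*?</details>}m, "").strip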
2024-01-05 15:21:14 -03:00
Keegan George
7201d482d5
FEATURE: Add DallE support to AI helper's illustrate post (#404) 2024-01-05 09:03:23 -08:00
Rafael dos Santos Silva
23b2809638
FEATURE: Generate proper embeddings for posts/topics with embedded content (#401) 2024-01-05 10:27:45 -03:00
Rafael dos Santos Silva
6fc1c9f7a6
FEATURE: Try to automatically handle larger embedding indexes (#403)
* FEATURE: Try to automatically handle larger embedding indexes

* linteeeeeeeer
2024-01-05 09:56:28 -03:00