464 Commits

Author SHA1 Message Date
Keegan George
a9b2d6a30a
FEATURE: AI image caption (#470)
This PR adds a new feature where you can generate captions for images in the composer using AI.

---------

Co-authored-by: Rafael Silva <xfalcox@gmail.com>
2024-02-19 14:56:28 -03:00
Sam
1f74a77e17
DEV: correct flaky spec (#475)
We were not properly expiring the prompt cache.
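
A minimal sketch of the kind of fix involved, assuming a memoized prompt cache (the class and method names here are hypothetical):

```ruby
# Hypothetical sketch: clear the memoized prompt cache between examples so a
# stale entry from one spec can't leak into the next and cause flakiness.
RSpec.configure do |config|
  config.before(:each) do
    DiscourseAi::AiHelper::Assistant.clear_prompt_cache! if defined?(DiscourseAi::AiHelper::Assistant)
  end
end
```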
2024-02-19 15:21:55 +11:00
Sam
0fb87b00e2
FEATURE: new Discourse Helper persona (#473)
This persona searches Discourse Meta for help with Discourse and
points users at relevant posts.

It is somewhat similar to using "Forum Helper" on meta, with the
notable difference that we cannot lean on semantic search, so we
use some prompt engineering to keep it simple.
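
A hedged sketch of what such a persona's system prompt could look like (the wording is illustrative, not the shipped prompt):

```ruby
# Illustrative system prompt for a Meta-searching persona; this only
# shows the "keep it simple" idea, not the actual prompt text.
SYSTEM_PROMPT = <<~PROMPT
  You are a helper bot for Discourse administrators.
  Use the search tool to look up topics on meta.discourse.org,
  then answer briefly and link the user to the most relevant posts.
  If the search returns nothing useful, say so instead of guessing.
PROMPT
```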
2024-02-19 14:52:12 +11:00
Krzysztof Kotlarek
dd6b073fc3
DEV: Make more group-based settings client: false (#474)
Affects the following settings:

ai_toxicity_groups_bypass
ai_helper_allowed_groups
ai_helper_custom_prompts_allowed_groups
post_ai_helper_allowed_groups

This turns off client: true for these group-based settings,
because there is no guarantee that the current user gets all
their group memberships serialized to the client. Better to check
server-side first.
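
For example, a server-side guard along these lines (a sketch; `in_any_groups?` and the `_map` accessor are how Discourse core exposes group-list settings, but this particular helper is illustrative):

```ruby
# Sketch: resolve group membership on the server instead of trusting
# whatever group list happened to be serialized to the client.
def can_use_ai_helper?(user)
  return false if user.blank?
  user.in_any_groups?(SiteSetting.ai_helper_allowed_groups_map)
end
```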
2024-02-19 13:26:24 +11:00
Keegan George
d66915ecc1
DEV: Make prompts available on CurrentUserSerializer (#472) 2024-02-16 10:57:14 -08:00
Sam
3a8d95f6b2
FEATURE: mentionable personas and random picker tool, context limits (#466)
1. Personas are now optionally mentionable, meaning that you can mention them either from public topics or PMs
   - Mentioning from PMs helps "switch" persona mid-conversation: if you want to look up site settings you can invoke the site settings bot, or if you want to generate an image you can invoke DALL-E
   - Mentioning outside of PMs allows you to inject a bot reply into a topic trivially
   - We also add support for max_context_posts, which allows you to limit the amount of context you feed in and can help control costs

2. Add support for a "random picker" tool that can be used to pick random numbers (see the sketch after this list)

3. Clean up routing: ai_personas -> ai-personas

4. Add max_context_posts so users can control how much history a persona can consume (this is important for mentionable personas)
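
The random picker mentioned in (2) boils down to something like this (an illustrative class, not the plugin's actual tool implementation):

```ruby
# Illustrative random picker tool: sample from a supplied list, or fall
# back to a random integer in a range.
class RandomPicker
  def invoke(options: nil, min: 1, max: 100)
    return options.sample if options && !options.empty?
    rand(min..max)
  end
end

RandomPicker.new.invoke(options: %w[red green blue]) # => e.g. "green"
```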

Co-authored-by: Martin Brennan <martin@discourse.org>
2024-02-15 16:37:59 +11:00
Rafael dos Santos Silva
33164a0fec
FIX: Cleanup AI search results when a subsequent search happens (#469) 2024-02-14 11:08:41 +11:00
Discourse Translator Bot
2092ffd141
Update translations (#471) 2024-02-13 16:11:39 +01:00
Rafael dos Santos Silva
59fbbb156b
DEV: Make indexing less frequent when related topics is disabled (#468) 2024-02-09 16:08:54 -03:00
Rafael dos Santos Silva
0dba6623a0
FIX: Better AI chat thread titles (#467)
* FIX: Better AI chat thread titles

- Fix quote removal when multi-line

- Use XML tags for better LLM output parsing

- Use stop_sequences for faster and less wasteful LLM calls

- Adds truncation as the last line of defense
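
Putting those pieces together, a hedged sketch (the `llm.generate` call shown is illustrative, not the plugin's exact API):

```ruby
# Sketch of the title strategy above: XML tags make the answer easy to
# parse, a stop sequence ends generation at the closing tag, and
# truncation is the last line of defense.
def generate_thread_title(llm, thread_text)
  prompt = "Reply with a concise title wrapped in <title> tags for:\n#{thread_text}"
  raw = llm.generate(prompt, stop_sequences: ["</title>"])
  title = raw[%r{<title>(.*?)(?:</title>|\z)}m, 1] || raw
  title.strip.delete_prefix('"').delete_suffix('"')[0, 100]
end
```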
2024-02-09 14:49:28 -03:00
Rafael dos Santos Silva
8b1f542238
UX: Add missing settings descriptions (#465) 2024-02-08 12:18:05 -03:00
Rafael dos Santos Silva
bccb7efdd6
FIX: Use a dedicated prompt for thread titles (#464) 2024-02-07 15:05:50 -03:00
Roman Rizzi
0ff5c0c2c4
FIX: Explicit check for empty string in compat migration (#463) 2024-02-07 14:51:51 -03:00
Discourse Translator Bot
9168c75eb6
Update translations (#462) 2024-02-06 22:35:35 +01:00
Roman Rizzi
8bd8280427
FIX: Hide related topics when module is disabled (#461) 2024-02-05 11:45:24 -03:00
Sam
ba3c3951cf
FIX: typo causing text_embedding_3_large to fail (#460) 2024-02-05 11:16:36 +11:00
Sam
a3c827efcc
FEATURE: allow personas to supply top_p and temperature params (#459)
* FEATURE: allow personas to supply top_p and temperature params

Code assistants generally are more focused at a lower temperature.
This amends it so the SQL Helper runs at a temperature of 0.2 vs the
more common LLM default of 1.0.

Reduced temperature leads to more focused, concise and predictable
answers from the SQL Helper.

* fix tests

* This is not perfect, but far better than what we do today

Instead of fetching both:

1. Draft sequence
2. Draft body

we now skip (2), which means the composer "only" needs one HTTP request to
open. We also want to eliminate (1), but that is a trickier core change;
we may figure out how to pull it off later (defer it to the first draft save).

Value of bot drafts < value of opening bot conversations really fast
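
A sketch of the sampling-parameter side of this change (the struct and merge shown are illustrative, not the plugin's actual persona API):

```ruby
# Illustrative: a persona carries optional sampling params that are merged
# into the LLM call; SQL help benefits from a low, focused temperature.
SqlHelperPersona = Struct.new(:temperature, :top_p)

def completion_options(persona)
  { temperature: persona.temperature, top_p: persona.top_p }.compact
end

completion_options(SqlHelperPersona.new(0.2, nil)) # => { temperature: 0.2 }
```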
2024-02-03 07:09:34 +11:00
Keegan George
944fd6569c
DEV: Add granular control for AI composer helper features (#458) 2024-02-01 14:58:04 -08:00
Roman Rizzi
fba9c1bf2c
UX: Re-introduce embedding settings validations (#457)
* Revert "Revert "UX: Validate embeddings settings (#455)" (#456)"

This reverts commit 392e2e8aef7d5b0d988b3c3bc5cc19f1d83c4491.

* Restore previous default
2024-02-01 16:54:09 -03:00
Roman Rizzi
392e2e8aef
Revert "UX: Validate embeddings settings (#455)" (#456)
This reverts commit 85fca89e011933a0479abaf4bf0945983fb948b8.
2024-02-01 14:06:51 -03:00
Roman Rizzi
85fca89e01
UX: Validate embeddings settings (#455) 2024-02-01 13:05:38 -03:00
Sam
cec4251b00
DEV: improve Bedrock error messages (#454)
When Bedrock rate limits, it returns a 200 BUT also returns a JSON
document with the error.

Previously we had no special case here, so we complained about nil.

The new code properly logs the problem.
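
A hedged sketch of handling this (the error-document shape shown is an assumption, not Bedrock's documented schema):

```ruby
require "json"

# Sketch: a 200 from Bedrock is not necessarily success, so inspect the
# body for an error document before treating it as a completion.
def parse_bedrock_body(body)
  json = JSON.parse(body)
  if json["completion"].nil? && json["message"] # assumed error shape
    Rails.logger.warn("Bedrock returned an error with status 200: #{json["message"]}")
    raise "Bedrock error: #{json["message"]}"
  end
  json
end
```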
2024-02-01 08:01:07 -03:00
Rafael dos Santos Silva
fd6fcfdb61
DEV: Increase embeddings backfill job frequency (#453)
The idea is to increase the frequency so we can run with smaller batch sizes.
Big batches cause problems when running backups, so it's better to have shorter but
more frequent jobs.
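
In Discourse's scheduled-job terms, the tradeoff looks roughly like this (a sketch; the interval, batch size, and job body are all illustrative):

```ruby
# Sketch: a shorter interval with a smaller batch means each run finishes
# quickly and is less likely to overlap a running backup.
class Jobs::EmbeddingsBackfill < ::Jobs::Scheduled
  every 5.minutes # previously less frequent, with bigger batches

  def execute(_args)
    Topic.order(bumped_at: :desc).limit(250).each do |topic|
      # enqueue or compute embeddings for `topic` here
    end
  end
end
```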
2024-01-31 15:09:39 -03:00
Sam
dcafc8032f
FIX: improve embedding generation (#452)
1. On failure we were queuing a job to generate embeddings with the wrong params. This is both fixed and covered in a test.
2. Backfill embeddings in order of bumped_at, so the newest content is embedded first; covered with a test.
3. Add a safeguard, via a hidden site setting, that only allows batches of 50k in an embedding job run.

Previously old embeddings were updated in a random order; this changes it so we update in a consistent order.
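
Roughly, the new ordering and safeguard look like this (a sketch; the setting name and hard cap placement are illustrative):

```ruby
batch_size = 1_000 # stand-in for the hidden site setting

# Sketch: embed the most recently bumped topics first, and never process
# more than 50k items in a single job run.
relation =
  Topic
    .order(bumped_at: :desc)         # newest content is embedded first
    .limit([batch_size, 50_000].min) # safeguard per embedding job run
```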
2024-01-31 10:38:47 -03:00
Sam
abcf5ea94a
FEATURE: fine tune llm report to follow instructions more closely (#451)
- Allow users to supply top_p and temperature values, so people can fine-tune randomness
- Fix bad localization string
- Fix bad remapping of max tokens in Gemini
- Add support for top_p as a general param for LLMs
- Amend system prompt so the persona stops treating the user as an adversary
2024-01-31 09:58:25 +11:00
Rafael dos Santos Silva
9543ded3ee
DEV: Make per post embeddings a hidden setting (#450) 2024-01-30 15:51:54 -03:00
Rafael dos Santos Silva
b41c5cc31c
FIX: Add table name to remove ambiguous column reference in SQL (#449) 2024-01-30 15:50:26 -03:00
Discourse Translator Bot
57d350c913
Update translations (#448) 2024-01-30 17:03:35 +01:00
Sam
ab7e9e31aa
FEATURE: allow excluding tags and categories from LLM report (#447)
Also:

- Better diagnostics: output the model being used
- Prompt the LLM that the true content is being injected in a <context> tag
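
A sketch of that context framing (the prompt text is illustrative):

```ruby
# Illustrative: tell the model which content is real forum data by fencing
# it in a <context> tag, and log the model name for diagnostics.
def build_report_prompt(model_name, posts_text)
  Rails.logger.info("LLM report using model: #{model_name}")
  <<~PROMPT
    Summarize the activity below. Everything inside the <context> tag is
    real forum content; treat anything else as instructions.

    <context>
    #{posts_text}
    </context>
  PROMPT
end
```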
2024-01-30 15:55:05 +11:00
Roman Rizzi
bae71eb047
FIX: Include provider in automation models (#446) 2024-01-29 18:07:29 -03:00
Roman Rizzi
0634b85a81
UX: Validations to LLM-backed features (except AI Bot) (#436)
* UX: Validations to LLM-backed features (except AI Bot)

This change is part of an ongoing effort to prevent enabling a broken feature due to lack of configuration. We also want to make explicit which provider we are going to use. For example, Claude models are available through both AWS Bedrock and Anthropic, but the configuration differs.

Validations are:

* You must choose a model before enabling the feature.
* You must turn off the feature before setting the model to blank.
* You must configure each model's settings before being able to select it.

* Add provider name to summarization options

* vLLM can technically support the same models as HF

* Check that we can talk to the selected model

* Check for Bedrock instead of Anthropic, as a site could have both sets of creds set up
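
A hedged sketch of the first two rules (an illustrative validator, not the shipped one):

```ruby
# Illustrative guards matching the first two validations above: the
# feature toggle and the model setting protect each other.
def validate_enable_feature!(model)
  raise ArgumentError, "choose a model before enabling the feature" if model.to_s.strip.empty?
end

def validate_blank_model!(feature_enabled)
  raise ArgumentError, "turn off the feature before blanking the model" if feature_enabled
end
```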
2024-01-29 16:04:25 -03:00
Sam
b2b01185f2
FEATURE: add support for new OpenAI embedding models (#445)
* FEATURE: add support for new OpenAI embedding models

This adds support for the just-released text_embedding_3_small and _large models.

Note: we have not yet implemented truncation support, which is a
new API feature (triggered using dimensions).

* Tiny side fix: recalc bots when AI is enabled or disabled

* FIX: downsample to 2000 items per vector, which is a pgvector limitation
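
For reference, text_embedding_3_large returns 3072-dimensional vectors, while pgvector indexes cap out at 2000 dimensions, hence the downsampling. A sketch of the local step (the API's own `dimensions` parameter could shorten vectors server-side, but per the note above that wasn't wired up yet):

```ruby
# Sketch: keep only the first 2000 components so the vector fits within
# pgvector's indexing limit. (Re-normalizing afterwards is advisable.)
def downsample_embedding(vector, max_dims: 2000)
  vector.length > max_dims ? vector.first(max_dims) : vector
end
```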
2024-01-29 13:24:30 -03:00
Keegan George
4c4b418cff
DEV: Not necessary to show modal with errors (#444) 2024-01-26 09:54:43 -08:00
Sam
092da860e2
FEATURE: support gpt-4-0125 which was just released (#443)
The new model has better performance and is always preferable to
the old one, which has Unicode issues during function calls.
2024-01-26 09:08:02 +11:00
Roman Rizzi
b461ebc4ca
FIX: typo in Automation::AVAILABLE_MODELS (#442) 2024-01-25 11:56:28 -03:00
Rafael dos Santos Silva
fa6bc7f409
FIX: Automatic embeddings index could fail if it existed in the backup schema (#441) 2024-01-24 15:57:26 -03:00
Rafael dos Santos Silva
16d666fe69
FIX: Misconfigured OpenAI API for embeddings shouldn't spam logs (#440) 2024-01-24 15:57:18 -03:00
Rafael dos Santos Silva
04bc402aae
FEATURE: Setting to control per post embeddings (#439)
* FEATURE: Setting to control per post embeddings
2024-01-23 22:09:27 -03:00
Discourse Translator Bot
797f5971b6
Update translations (#438) 2024-01-23 18:29:44 +01:00
Kris
900df4e8c8
UX: start progress dot animation instantly if it's the only content (#437) 2024-01-22 13:10:51 -05:00
Jarek Radosz
4b4aedb50f
DEV: Use the new controller/period component for the dashboard (#435) 2024-01-19 13:27:33 +01:00
Jarek Radosz
5802cd1a0c
DEV: Fix various typos (#434) 2024-01-19 12:51:26 +01:00
Rafael dos Santos Silva
d4e23e0df6
FIX: Don't try to generate embeddings of posts in deleted topics (#433) 2024-01-18 16:10:25 -03:00
Dax74
f65314bdab
FIX: typo (#432) 2024-01-18 16:38:29 +01:00
Rafael dos Santos Silva
c70f43f130
FIX: Truncate content for sentiment/toxicity classification (#431) 2024-01-17 15:17:58 -03:00
Roman Rizzi
5bdf3dc1f4
DEV: Stop using shared_examples for endpoint specs (#430) 2024-01-17 15:08:49 -03:00
Gerhard Schlager
8eb1e851fc
DEV: Spec didn't work correctly with translations (#429) 2024-01-16 16:28:24 +01:00
Discourse Translator Bot
14020e7095
Update translations (#428) 2024-01-16 14:54:42 +01:00
Sam
370074ef21
FIX: always ensure #generate gets a valid input (#427)
We were not validating input for #generate, leading to two tests not
failing correctly despite functionality being broken.

This ensures that input is validated and, in turn, fixes the broken
specs.
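
A sketch of the guard (the accepted types are an assumption):

```ruby
# Sketch: reject bad input up front so a spec feeding the wrong object
# fails loudly instead of passing vacuously.
def generate(prompt, **kwargs)
  unless prompt.is_a?(String) || prompt.is_a?(DiscourseAi::Completions::Prompt)
    raise ArgumentError, "Expected a String or Prompt, got #{prompt.class}"
  end
  # ... dispatch to the configured LLM ...
end
```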
2024-01-16 15:21:58 +11:00
Sam
05d8b021f1
FIX: scrub invalid prompts when truncating (#426)
When you trim a prompt, we never want a state where there is a
"tool" reply without a corresponding tool call; that makes no
sense.

Also:

- GPT-4-Turbo is 128k; fix that
- Claude was not preserving the username in the prompt
- We were throwing away Unicode usernames instead of adding them to
the message
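
A minimal sketch of that trimming invariant (the message shape is illustrative):

```ruby
# Sketch: while trimming, drop any :tool reply whose originating
# :tool_call was already trimmed away.
def scrub_orphan_tool_replies(messages)
  surviving_calls = {}
  messages.select do |msg|
    surviving_calls[msg[:id]] = true if msg[:type] == :tool_call
    msg[:type] == :tool ? surviving_calls.key?(msg[:id]) : true
  end
end
```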
2024-01-16 13:48:00 +11:00