Commit Graph

30 Commits

Author SHA1 Message Date
Sam c294b6d394
FEATURE: allow llm triage to automatically hide posts (#820)
Prior to this change we could flag, but there was no way
to hide content and treat the flag as spam.

We had the option to hide topics, but this is not desirable for
a spam reply.

The new option allows triage to hide a post if it is a reply; if the
post happens to be the first post in the topic, the topic will
be hidden.
2024-10-04 16:11:30 +10:00
Roman Rizzi eac83eb619
FIX: Triage's search_for_text should be case-insensitive (#767) 2024-08-22 18:32:42 -03:00
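A minimal sketch of what case-insensitive matching could look like here, assuming the raw post text and the configured search_for_text value are plain strings (names are illustrative, not the plugin's exact code):

    # Downcase both sides so "Viagra" still matches a rule configured as "viagra".
    def search_text_matches?(post_raw, search_for_text)
      post_raw.downcase.include?(search_for_text.downcase)
    end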
Roman Rizzi 64641b6175
FEATURE: LLM Triage support for systemless models. (#757)
* FEATURE: LLM Triage support for systemless models.

This change adds support for OSS models that do not support system messages. LlmTriage's system message field is no longer mandatory. We now send the post contents in a separate user message (a sketch follows this entry).

* Models using Ollama can also disable system prompts
2024-08-21 11:41:55 -03:00
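A minimal sketch of the idea, assuming a simple message array; the method and field names are illustrative, not the plugin's actual prompt builder:

    # When the model cannot accept a system message, send the triage
    # instructions and the post contents as ordinary user messages instead.
    def triage_messages(instructions, post_raw, supports_system_message:)
      if supports_system_message
        [{ type: :system, content: instructions },
         { type: :user, content: post_raw }]
      else
        [{ type: :user, content: instructions },
         { type: :user, content: post_raw }]
      end
    end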
Roman Rizzi f789d3ee96
FIX: Triage-flagged posts didn't have a score. (#752)
The score now contains the LLM result, and the change makes sure the flag isn't displayed when a minimum score threshold is present.
2024-08-14 15:54:09 -03:00
Sam b671ffe7fa
FIX: info not working, not suppressing hidden tags from report (#696)
Two small fixes:

1. The info button was not working properly after the refactor
2. Suppress any secured tags from the report input
2024-07-02 16:38:33 +10:00
Roman Rizzi 558574fa87
DEV: Use LlmModels as options in automation rules (#676) 2024-06-21 08:07:17 +10:00
Sam 8eee6893d6
FEATURE: GPT4o support and better auditing (#618)
- Introduce new support for GPT4o (automation / bot / summary / helper)
- Properly account for token counts on OpenAI models
- Track feature that was used when generating AI completions
- Remove custom llm support for summarization as we need better interfaces to control registration and de-registration
2024-05-14 13:28:46 +10:00
Sam 4d8b7742da
FIX: many missing topics when categories excluded (#585)
We were forgetting about NULL parent_category_id handling in
our check for subcategories (see the sketch after this entry).
2024-04-23 08:53:51 +10:00
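A minimal sketch of the pitfall, using illustrative ActiveRecord code rather than the plugin's actual query: a NOT IN comparison never matches rows whose parent_category_id is NULL, so top-level categories silently disappear unless NULL is handled explicitly.

    excluded_ids = [10, 20] # hypothetical excluded category ids

    # Buggy shape: drops every category with a NULL parent_category_id.
    Category.where("parent_category_id NOT IN (?)", excluded_ids)

    # Fixed shape: keep NULL parents, then exclude the categories themselves.
    Category
      .where("parent_category_id IS NULL OR parent_category_id NOT IN (?)", excluded_ids)
      .where.not(id: excluded_ids)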
Sam 5ab86923ff
FIX: when excluding categories also exclude children (#583)
This allows you to exclude trees of categories in a simple way.

It also means you can no longer exclude "just the parent", but
this is a fair compromise.
2024-04-22 16:05:24 +10:00
Rafael dos Santos Silva 253e0b7b39
FEATURE: Mixtral/Mistral/Haiku Automation Support (#571)
Adds new models to automation, and makes LLM output parsing more robust.
2024-04-11 09:50:46 -03:00
Sam 5cac47a30a
FIX: unify automation model translation (#540)
The report runner and LLM triage used different paths to figure out
the underlying model name; unify them so we use the same path.

This fixes Claude 3 based models in LLM triage.
2024-03-21 11:32:35 +11:00
Sam e8b2a200c1
FIX: prompt engineering for summary prompt (#539)
The prompt was steering the output into the wrong language.

The new prompt attempts to be more concise and clear, and provides
better guidance about the size of the summary and how to format it.
2024-03-20 16:33:05 +11:00
Sam 41f1530078
FIX: mention suppression was not working right (#538)
We were only suppressing non-mentions, the ones that become spans.

@sam in the test was not resolving to a mention because the user
did not exist.

depends on: https://github.com/discourse/discourse/pull/26253 for tests to pass.
2024-03-20 13:00:39 +11:00
Sam cc0369dd39
FEATURE: friendlier reply behavior in bot PMs (#535)
- Stop replying as bot, when human replies to another human
- Reply as correct persona when replying directly to a persona
- Fix a paper cut where suppressing notifications was not actually doing so
2024-03-19 20:15:12 +11:00
Sam f62703760f
FEATURE: add Claude 3 sonnet/haiku support for Amazon Bedrock (#534)
This PR consolidates the implementation of the new Anthropic Messages interface for Bedrock Claude endpoints and adds support for the new Claude 3 models (haiku, opus, sonnet).

Key changes:
- Renamed `AnthropicMessages` and `Anthropic` endpoint classes into a single `Anthropic` class (ditto for ClaudeMessages -> Claude)
- Updated `AwsBedrock` endpoints to use the new `/messages` API format for all Claude models
- Added `claude-3-haiku`, `claude-3-opus` and `claude-3-sonnet` model support in both Anthropic and AWS Bedrock endpoints
- Updated specs for the new consolidated endpoints and Claude 3 model support

This refactor removes support for the old non-messages API, which has been deprecated by Anthropic. A sketch of the messages-style payload follows this entry.
2024-03-19 06:48:46 +11:00
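For reference, a minimal sketch of an Anthropic messages-style request body as documented for Bedrock; the exact payload the plugin builds may differ:

    payload = {
      anthropic_version: "bedrock-2023-05-31",
      max_tokens: 1024,
      system: "You are a helpful forum assistant.",
      messages: [
        { role: "user", content: "Summarise the discussion so far." },
      ],
    }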
Sam d7ed8180af
FEATURE: allow suppression of notifications from report generation (#533)
* FEATURE: allow suppression of notifications from report generation

Previously we needed to do this by hand; unfortunately that uses up
too many tokens and is very hard to discover.

The new option means we can trivially disable notifications without
needing any prompt engineering.

* URI.parse is safer, use it
2024-03-16 08:05:03 +11:00
Sam 8b382d6098
FEATURE: support for claude opus and sonnet (#508)
This provides new support for the messages API from Claude.

It is required for access to the latest models.

It also corrects the implementation of function calls.

* Fix message interleaving (see the sketch after this entry)

* fix broken spec

* add new models to automation
2024-03-06 06:04:37 +11:00
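One common way to repair interleaving, shown as an illustrative sketch rather than the plugin's code: the messages API expects user and assistant turns to alternate, so consecutive messages with the same role are merged before sending.

    def interleave(messages)
      messages.each_with_object([]) do |msg, out|
        if out.last && out.last[:role] == msg[:role]
          # Merge back-to-back messages from the same role into one turn.
          out.last[:content] = "#{out.last[:content]}\n#{msg[:content]}"
        else
          out << msg.dup
        end
      end
    end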
Rafael dos Santos Silva 1b72a00d2c
FEATURE: Option for AI triage to send a post to the review queue (#498)
Option for AI triage to send a post to the review queue
2024-02-29 12:33:28 +11:00
Sam abcf5ea94a
FEATURE: fine tune llm report to follow instructions more closely (#451)
- Allow users to supply top_p and temperature values, which means people can fine-tune randomness (see the sketch after this entry)
- Fix bad localization string
- Fix bad remapping of max tokens in gemini
- Add support for top_p as a general param to llms
- Amend system prompt so persona stops treating a user as an adversary
2024-01-31 09:58:25 +11:00
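A minimal sketch of how such sampling controls are typically passed through to a completion call; the option names follow the commit message, everything else is illustrative:

    # Lower temperature and top_p make the report more deterministic;
    # higher values make it more varied.
    llm.generate(prompt, temperature: 0.2, top_p: 0.9, user: Discourse.system_user)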
Sam ab7e9e31aa
FEATURE: allow excluding tags and categories from LLM report (#447)
Also

- Better diagnostics: output the model being used
- Prompt the LLM that true content is being injected in a <context> tag
2024-01-30 15:55:05 +11:00
Roman Rizzi bae71eb047
FIX: Include provider in automation models (#446) 2024-01-29 18:07:29 -03:00
Jarek Radosz 5802cd1a0c
DEV: Fix various typos (#434) 2024-01-19 12:51:26 +01:00
Sam 370074ef21
FIX: always ensure `#generate` gets a valid input (#427)
We were not validating input for generate, leading to 2 tests not
failing correctly despite the functionality being broken.

This ensures that input is validated, and in turn fixes the broken
specs.
2024-01-16 15:21:58 +11:00
Roman Rizzi 04eae76f68
REFACTOR: Represent generic prompts with an Object. (#416)
* REFACTOR: Represent generic prompts with an Object (a shape sketch follows this entry).

* Adds a bit more validation for clarity

* Rewrite bot title prompt and fix quirk handling

---------

Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
2024-01-12 14:36:44 -03:00
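A minimal sketch of what a prompt-as-object shape could look like, with illustrative names and validation (not the plugin's actual class):

    class Prompt
      VALID_TYPES = %i[user model].freeze

      attr_reader :system_message, :messages

      def initialize(system_message, messages: [])
        @system_message = system_message
        @messages = messages
      end

      # A bit of validation keeps malformed turns out of the conversation.
      def push(type:, content:)
        raise ArgumentError, "type must be :user or :model" unless VALID_TYPES.include?(type)
        @messages << { type: type, content: content }
      end
    end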
Sam 03fc94684b
FIX: AI helper not working correctly with mixtral (#399)
* FIX: AI helper not working correctly with mixtral

This PR introduces a new function on the generic LLM called #generate

This will replace the implementation of completion!

#generate introduces a new way to pass temperature, max_tokens and stop_sequences (see the sketch after this entry)

LLM implementers then need to implement #normalize_model_params to
ensure the generic names match the LLM-specific endpoint

This also adds temperature and stop_sequences to completion_prompts;
this allows for much more robust completion prompts

* port everything over to #generate

* Fix translation

- On Anthropic this no longer throws a random "This is your translation:"
- On Mixtral this actually works

* fix markdown table generation as well
2024-01-04 09:53:47 -03:00
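A minimal sketch of the contract described above; the parameter names follow the commit message, while the endpoint internals are assumed for illustration:

    # Generic entry point: callers pass generic sampling options...
    def generate(prompt, temperature: nil, max_tokens: nil, stop_sequences: nil)
      params = {
        temperature: temperature,
        max_tokens: max_tokens,
        stop_sequences: stop_sequences,
      }.compact
      perform_completion!(prompt, normalize_model_params(params))
    end

    # ...and each endpoint maps the generic names onto its own API, e.g. an
    # older Anthropic completion endpoint expecting max_tokens_to_sample.
    def normalize_model_params(params)
      params = params.dup
      params[:max_tokens_to_sample] = params.delete(:max_tokens) if params[:max_tokens]
      params
    end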
Sam a5d240991f
FEATURE: allow sending AI based report to a topic (#377)
This makes the reporting far more flexible because it can target a
far wider audience by pointing it at a topic in a secure category
or an existing PM.
2023-12-22 11:46:23 +11:00
Sam 37dd98c937
FIX: exclude non visible topics from report context (#375)
Generally, non-visible topics are not that interesting; do not add
this noise to the report context.
2023-12-21 19:08:36 +11:00
Sam 8664771b7f
FIX: triage no longer working with claude (#369) 2023-12-20 07:58:38 +11:00
Sam 529703b5ec
FEATURE: support sending AI report to an email address (#368)
Support emailing the AI report to any arbitrary email
2023-12-19 17:51:49 +11:00
Sam d0f54443ae
FEATURE: LLM based periodical summary report (#357)
Introduce a Discourse Automation based periodical report. Depends on Discourse Automation.

Report works best with very large context language models such as GPT-4-Turbo and Claude 2.

- Introduces final_insts to the generic llm format; for Claude to work best it is better to guide the last assistant message (we should add this to other spots as well). A sketch of the key follows this entry.
- Adds GPT-4 Turbo support to the generic llm interface
2023-12-19 12:04:15 +11:00
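A minimal sketch of a generic prompt hash carrying trailing instructions; the final_insts key follows the commit message, the other keys and values are illustrative:

    prompt = {
      insts: "You are a report generator for a Discourse forum.",
      input: "<context>recent posts go here</context>",
      # Guides the last assistant message, which Claude-family models follow closely.
      final_insts: "Reply with the report in markdown and nothing else.",
    }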