* FEATURE: allow mentioning an LLM mid-conversation to switch models
This is an edge-case feature that allows you to start a conversation
in a PM with LLM1 and then use LLM2 to evaluate or continue
the conversation.
* FEATURE: allow auto silencing of spam accounts
A triage rule can now also silence an account automatically.
This prevents spammers from creating additional posts.
A new `feature_context` JSON column was added to `ai_api_audit_logs`.
This allows us to store rich JSON context on any LLM request made.
The new field currently stores the automation id and name.
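A minimal sketch of what the column enables (the model and attribute names here are assumptions beyond what the change describes):

```ruby
# Hypothetical call site: attach automation context to an audit log row.
log = AiApiAuditLog.create!(
  feature_name: "llm_triage",
  feature_context: { automation_id: automation.id, automation_name: automation.name }
)
log.feature_context["automation_name"] # JSON columns round-trip with string keys
```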
Additionally, llm_triage can now specify a maximum number of tokens.
This means you can limit the cost of llm triage by scanning only
the first N tokens of a post, as sketched below.
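Roughly, the triage job can now truncate before sending (a sketch; `tokenizer` and `truncate` are assumed helpers):

```ruby
# Send only the first max_post_tokens tokens of the post to the model.
content = post.raw
content = llm.tokenizer.truncate(content, max_post_tokens) if max_post_tokens.present?
```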
Prior to this change we could flag a post, but there was no way
to hide the content and treat the flag as spam.
We had the option to hide topics, but this is not desirable for
a spam reply.
A new option allows triage to hide a post if it is a reply; if the
post happens to be the first post in the topic, the whole topic is
hidden.
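The rule roughly amounts to the following sketch (using Discourse core methods; the actual triage code may differ):

```ruby
if post.post_number == 1
  # hiding the first post would orphan the topic, so hide the topic itself
  post.topic.update!(visible: false)
else
  post.hide!(PostActionType.types[:spam])
end
```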
* FEATURE: LLM Triage support for systemless models.
This change adds support for OSS models that lack support for system messages. LlmTriage's system message field is no longer mandatory; we now send the post contents in a separate user message.
* Models using Ollama can also disable system prompts
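Conceptually, the triage prompt is now built like this sketch (`supports_system_message?` is an assumed predicate, and the message hashes are illustrative):

```ruby
messages = []
# The system turn is optional and skipped entirely for systemless models:
messages << { type: :system, content: system_prompt } if system_prompt.present? && supports_system_message?
# Post contents always travel as their own user message:
messages << { type: :user, content: post.raw }
```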
- Introduce new support for GPT-4o (automation / bot / summary / helper)
- Properly account for token counts on OpenAI models
- Track the feature that was used when generating AI completions
- Remove custom LLM support for summarization, as we need better interfaces to control registration and de-registration
This allows you to exclude entire trees of categories in a simple way.
It also means you can no longer exclude "just the parent", but
this is a fair compromise.
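A sketch of the subtree exclusion, assuming the setting holds a list of excluded parent category ids (`parent_category_id` is Discourse's real column):

```ruby
# Exclude the listed categories and all of their direct children.
excluded = Category.where(id: exclude_category_ids)
  .or(Category.where(parent_category_id: exclude_category_ids))
topics = Topic.where.not(category_id: excluded.select(:id))
```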
The report runner and llm triage used different paths to figure out the
underlying model name; unify them so both use the same path.
This fixes Claude 3-based models in llm triage.
The prompt was incorrectly steering into the wrong language.
The new prompt is more concise and clear, and provides
better guidance about the size of the summary and how to format it.
We were only suppressing non-mentions, the ones that become spans.
@sam in the test was not resolving to a mention because the user
did not exist.
depends on: https://github.com/discourse/discourse/pull/26253 for tests to pass.
- Stop replying as the bot when a human replies to another human
- Reply as the correct persona when replying directly to a persona
- Fix a paper cut where suppressing notifications did not actually suppress them
This PR consolidates the Anthropic endpoint classes, implements the new Anthropic Messages interface for Bedrock Claude endpoints, and adds support for the new Claude 3 models (Haiku, Opus, Sonnet).
Key changes:
- Merged the `AnthropicMessages` and `Anthropic` endpoint classes into a single `Anthropic` class (ditto for `ClaudeMessages` -> `Claude`)
- Updated `AwsBedrock` endpoints to use the new `/messages` API format for all Claude models
- Added `claude-3-haiku`, `claude-3-opus` and `claude-3-sonnet` model support in both Anthropic and AWS Bedrock endpoints
- Updated specs for the new consolidated endpoints and Claude 3 model support
This refactor removes support for the old non-messages API, which has been deprecated by Anthropic.
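For reference, the `/messages` request shape these endpoints now emit (this follows Anthropic's public API; the request plumbing in the endpoint classes is elided):

```ruby
payload = {
  model: "claude-3-sonnet-20240229",
  max_tokens: 1024,
  # the system prompt is a top-level field, not a message:
  system: "You are a helpful assistant.",
  messages: [{ role: "user", content: "Summarize this topic." }],
}
```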
* FEATURE: allow suppression of notifications from report generation
Previously we needed to do this by hand in the prompt; unfortunately that
uses up too many tokens and is very hard to discover.
The new option means we can trivially disable notifications without
needing any prompt engineering.
* URI.parse is safer, use it
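In short, `URI.parse` raises on malformed input instead of letting a bad string flow downstream:

```ruby
require "uri"

URI.parse("https://example.com/a").host # => "example.com"
URI.parse("not a uri")                  # raises URI::InvalidURIError
```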
This adds support for Claude's Messages API, which is
required for access to the latest models.
It also corrects the implementation of function calls.
* Fix message interleaving
* Fix broken spec
* Add new models to automation
- Allow users to supply top_p and temperature values, so people can fine-tune randomness (see the sketch after this list)
- Fix bad localization string
- Fix bad remapping of max tokens in Gemini
- Add support for top_p as a general param to LLMs
- Amend system prompt so the persona stops treating the user as an adversary
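Illustrative use of the new sampling params through the generic interface (values are arbitrary):

```ruby
# fine-tune randomness per call; both params are optional
llm.generate(prompt, temperature: 0.2, top_p: 0.9, user: Discourse.system_user)
```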
We were not validating input to generate, which left two tests
passing despite the functionality being broken.
This change ensures that input is validated, and in turn fixes the
broken specs.
* REFACTOR: Represent generic prompts with an Object (a sketch follows below).
* Adds a bit more validation for clarity
* Rewrite bot title prompt and fix quirk handling
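A minimal sketch of the Object representation, assuming a constructor that takes the system text and a push helper for subsequent turns (the exact API may differ):

```ruby
prompt = DiscourseAi::Completions::Prompt.new("You generate concise topic titles.")
prompt.push(type: :user, content: "Suggest a title for this post: ...")
```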
---------
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
* FIX: AI helper not working correctly with Mixtral
This PR introduces a new method on the generic LLM called #generate,
which will replace the implementation of completion!
#generate introduces a uniform way to pass temperature, max_tokens and
stop_sequences; LLM implementers then need to implement
#normalize_model_params to map the generic names to the LLM-specific
endpoint (a sketch follows below).
This also adds temperature and stop_sequences to completion_prompts,
which allows for much more robust completion prompts.
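A sketch of the shape this takes; the method names come from the description above, and the bodies are illustrative:

```ruby
# Generic entry point: callers pass provider-agnostic param names.
def generate(prompt, temperature: nil, max_tokens: nil, stop_sequences: nil, user:)
  model_params = normalize_model_params(
    temperature: temperature,
    max_tokens: max_tokens,
    stop_sequences: stop_sequences,
  )
  # hand off to the endpoint-specific completion (method name assumed)
  perform_completion!(prompt, user, model_params)
end

# Each endpoint maps the generic names onto its own API; for an
# OpenAI-style endpoint, stop_sequences becomes stop:
def normalize_model_params(model_params)
  model_params = model_params.dup
  if (stop = model_params.delete(:stop_sequences))
    model_params[:stop] = stop
  end
  model_params.compact
end
```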
* Port everything over to #generate
* Fix translation
- On Anthropic this no longer emits a stray "This is your translation:"
- On Mixtral this actually works
* Fix markdown table generation as well
Introduce a Discourse Automation based periodic report. Depends on Discourse Automation.
The report works best with very large context language models such as GPT-4 Turbo and Claude 2.
- Introduces final_insts to the generic LLM prompt format; Claude works best when the last assistant message is guided (we should add this to other spots as well)
- Adds GPT-4 Turbo support to the generic LLM interface
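Roughly, the generic prompt hash gains one key (the key name comes from the description above; the surrounding structure is illustrative):

```ruby
prompt = {
  insts: "You are a report generator for a Discourse forum.",
  input: report_input,
  # final_insts seeds the final assistant message, which steers Claude
  # more reliably than instructions at the top alone:
  final_insts: "Respond with the report in markdown.",
}
```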