# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
## Evals

The `evals` directory contains AI evals for the Discourse AI plugin.

You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
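One way to do this, assuming you are in the plugin's root directory:

```
cp config/eval-llms.yml config/eval-llms.local.yml
```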
To run them, use:

```
cd evals
./run --help

Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```
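For example, to run a single eval against one model (the eval and model names below are placeholders; use `-l` and `--list-models` to see what is actually available):

```
# placeholder names shown for illustration
./run -e my_eval -m gpt-4o
```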
To run evals, you will need to configure API keys in your environment:

```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
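For a one-off run, you can also set a key inline when invoking the runner (the key value and eval name here are placeholders):

```
# placeholder key value and eval name
OPENAI_API_KEY=your_openai_api_key ./run -e my_eval
```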