Discourse AI Plugin
Plugin Summary
For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
Evals
The directory evals contains AI evals for the Discourse AI plugin.
You may create a local config by copying config/eval-llms.yml to config/eval-llms.local.yml and modifying the values.
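For example, a minimal sketch of that copy step, assuming commands are run from the plugin root:
cp config/eval-llms.yml config/eval-llms.local.yml
# then edit config/eval-llms.local.yml, modifying the values as needed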
To run them, use:
cd evals
./run --help
Usage: evals/run [options]
-e, --eval NAME Name of the evaluation to run
--list-models List models
-m, --model NAME Model to evaluate (will eval all models if not specified)
-l, --list List evals
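For example, to see what is available and then run a single eval against one model (my_eval and my_model are placeholders; use names reported by --list and --list-models):
./run --list
./run --list-models
./run -e my_eval -m my_model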
To run evals you will need to configure API keys in your environment:
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
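Keys can also be supplied inline for a single run, for example (placeholder eval, model, and key values):
OPENAI_API_KEY=your_openai_api_key ./run -e my_eval -m my_model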