Roman Rizzi ff2e18f9ca
FIX: Structured output discrepancies. (#1340)
This change fixes two bugs and adds a safeguard.

The first issue is that the schema Gemini expected differed from the one sent, resulting in 400 errors when performing completions.
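As an illustration only (not the exact payloads from this change), Gemini's structured-output REST API expects the schema nested under generationConfig, so a request built with a different shape or with unsupported schema fields is rejected with a 400 before any completion runs:

    # Hypothetical sketch of the request shape Gemini expects for
    # structured output; field names follow the public REST API.
    payload = {
      contents: [{ parts: [{ text: "Summarize this topic" }] }],
      generationConfig: {
        responseMimeType: "application/json",
        responseSchema: {
          type: "OBJECT",
          properties: { summary: { type: "STRING" } },
          required: ["summary"]
        }
      }
    }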

The second issue was that creating a new persona wouldn't define a method for `response_format`; it has to be explicitly defined when we wrap it inside the Persona class. There was also a mismatch between the default value and what we stored in the DB: some parts of the code expected symbols as keys and others strings.
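A minimal sketch of both fixes, using hypothetical names (the real wrapper lives in the plugin's persona code):

    # Hypothetical sketch: the wrapper class always defines #response_format,
    # falling back to a default, and normalizes hash keys to symbols so the
    # rest of the code sees one shape regardless of how the DB stored it.
    class PersonaWrapper
      DEFAULT_RESPONSE_FORMAT = [{ key: "output", type: "string" }]

      def initialize(record)
        @record = record # a Hash loaded from the DB in this sketch
      end

      def response_format
        raw = @record[:response_format] || @record["response_format"]
        raw ||= DEFAULT_RESPONSE_FORMAT
        raw.map { |entry| entry.transform_keys(&:to_sym) }
      end
    end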

Finally, we add a safeguard for cases where the model, even when asked to, refuses to reply with valid JSON. In that case, we make a best effort to recover and stream the raw response.
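A sketch of that safeguard, assuming a hypothetical helper name:

    require "json"

    # Hypothetical sketch: try to parse the reply as JSON; when parsing
    # fails, fall back to the raw text so the user still sees a response.
    def best_effort_structured_reply(raw)
      JSON.parse(raw)
    rescue JSON::ParserError
      raw
    end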

Discourse AI Plugin

Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

Evals

The evals directory contains AI evals for the Discourse AI plugin. You can create a local config by copying config/eval-llms.yml to config/eval-llms.local.yml and modifying the values.
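For example:

    cp config/eval-llms.yml config/eval-llms.local.yml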

To run them use:

cd evals
./run --help

Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals

To run evals you will need to configure API keys in your environment:

OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
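For example, to run a single eval against one model (the eval and model names below are placeholders; use --list and --list-models to see the real ones):

    OPENAI_API_KEY=your_openai_api_key ./run -e my-eval -m gpt-4o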
