Keegan George e15952031d
UX: Smoother streaming for discoveries (#1154)
## 🔍 Overview
This update makes streaming for discoveries smoother, especially on the first update.


## More details
To help with smoother streaming, the discovery preview (which was being tracked as a separate property in the JS logic) is removed, and the entire discovery content is now shown/hidden via the existing CSS. The preview was already receiving the full update even though it was visually hidden, so removing the separate property shouldn't cause any performance regression. Hiding it with CSS only simplifies the component and allows for smoother streaming. The buffered streaming approach is also replaced with typing timers, similar to what we did for streaming summarization.
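
To illustrate the typing-timer idea, here is a minimal, framework-agnostic sketch (not the plugin's actual component code; `createTypingStreamer`, `onUpdate`, and the interval value are illustrative assumptions):

```js
// Minimal sketch: reveal streamed text with a typing timer instead of
// rendering buffered chunks all at once. `onUpdate` is a hypothetical
// callback that re-renders the component with the currently visible text.
const TYPING_INTERVAL_MS = 15; // assumed pacing; tune as needed

function createTypingStreamer(onUpdate) {
  let target = ""; // full text received from the stream so far
  let shown = 0;   // number of characters currently revealed
  let timer = null;

  function tick() {
    if (shown < target.length) {
      shown += 1;
      onUpdate(target.slice(0, shown));
    } else {
      // Caught up with the stream; stop until the next update arrives.
      clearInterval(timer);
      timer = null;
    }
  }

  return {
    // Call with the latest full text each time a streaming update arrives.
    receive(newText) {
      target = newText;
      if (!timer) {
        timer = setInterval(tick, TYPING_INTERVAL_MS);
      }
    },
  };
}
```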

No related tests as streaming animations are difficult to test.

# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
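
For example, to create the local config:

```
cp config/eval-llms.yml config/eval-llms.local.yml
```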

To run them use:

```
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
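
For example, to run a single eval against a single model (the eval and model names below are placeholders; use `-l` and `--list-models` to see what is actually available):

```
OPENAI_API_KEY=your_openai_api_key ./run -e some_eval -m some_model
```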
