discourse-ai/app
Roman Rizzi ddf2bf7034
DEV: Backfill embeddings concurrently. (#941)
We are adding a new method for generating and storing embeddings in bulk, which relies on `Concurrent::Promises::Future`. Generating an embedding consists of three steps:

1. Prepare the text.
2. Make an HTTP call to retrieve the vector.
3. Save it to the DB.

Each step executes independently on whatever thread the pool gives us.

We use a custom thread pool instead of the global executor because we want control over how many threads we spawn, limiting concurrency. This also keeps us from firing thousands of HTTP requests at once when working with large batches.
2024-11-26 14:12:32 -03:00
controllers/discourse_ai FIX: automatically bust cache for share ai assets (#942) 2024-11-22 11:23:15 +11:00
helpers/discourse_ai/ai_bot FIX: automatically bust cache for share ai assets (#942) 2024-11-22 11:23:15 +11:00
jobs DEV: Backfill embeddings concurrently. (#941) 2024-11-26 14:12:32 -03:00
mailers FEATURE: support sending AI report to an email address (#368) 2023-12-19 17:51:49 +11:00
models PERF: Preload only gists when including summaries in topic list (#948) 2024-11-25 12:24:02 -03:00
serializers FEATURE: improve tool support (#904) 2024-11-12 08:14:30 +11:00
services/discourse_ai REFACTOR: Support of different summarization targets/prompts. (#835) 2024-10-15 13:53:26 -03:00
views FIX: automatically bust cache for share ai assets (#942) 2024-11-22 11:23:15 +11:00