discourse-ai/spec
Sam 6623928b95
FIX: call after tool calls failing on OpenAI / Gemini (#599)
A recent change meant that the llm instance was cached internally; repeat calls
to inference would accumulate data in the Endpoint object, leading the model to
fail.

Both Gemini and OpenAI expect a clean endpoint object because they
set data on it.

This amends internals to make sure llm.generate always operates
on clean objects.
2024-05-01 17:50:58 +10:00
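The bug described above can be sketched as follows. This is a minimal illustration, not the plugin's actual code: the `Endpoint` and `Llm` class names and methods here are hypothetical stand-ins for the internals the commit touches.

```ruby
# Hypothetical sketch of the caching bug and fix described in #599.
# Class and method names are illustrative, not discourse-ai's real API.
class Endpoint
  attr_accessor :data
end

class Llm
  # Buggy pattern: a memoized Endpoint is shared across inference calls,
  # so data set by one call (as OpenAI / Gemini endpoints do) leaks into
  # the next and leads the model to fail.
  def cached_endpoint
    @endpoint ||= Endpoint.new
  end

  # Fixed pattern: each generate call builds a clean Endpoint object,
  # so per-request state never survives between calls.
  def generate
    endpoint = Endpoint.new
    endpoint.data = "request-specific state"
    endpoint
  end
end

llm = Llm.new
llm.generate           # fresh Endpoint
llm.generate           # another fresh Endpoint, no leaked state
```

With the memoized version, `cached_endpoint` returns the same object every time; with the fixed version, each `generate` call gets a distinct object.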
fabricators FEATURE: AI Bot RAG support. (#537) 2024-04-01 13:43:34 -03:00
fixtures FIX: blank metadata leading to errors (#578) 2024-04-17 13:46:40 +10:00
jobs FIX: blank metadata leading to errors (#578) 2024-04-17 13:46:40 +10:00
lib FIX: call after tool calls failing on OpenAI / Gemini (#599) 2024-05-01 17:50:58 +10:00
models FEATURE: allow tuning of RAG generation (#565) 2024-04-12 10:32:46 -03:00
requests FEATURE: Enhance AI debugging capabilities and improve interface adjustments (#577) 2024-04-15 23:22:06 +10:00
serializers DEV: Fix new Rubocop offenses 2024-03-06 15:23:29 +01:00
shared FEATURE: Stable diffusion 3 support (#582) 2024-04-19 18:08:16 +10:00
support FIX: typo causing text_embedding_3_large to fail (#460) 2024-02-05 11:16:36 +11:00
system UX: Highlight AI post helper selection (#520) 2024-04-04 11:35:01 -07:00
tasks FIX: Filter soft-deleted topics when backfilling sentiment (#527) 2024-03-12 21:01:24 -03:00
plugin_spec.rb DEV: Fix new Rubocop offenses 2024-03-06 15:23:29 +01:00