discourse-ai/spec/lib
Sam 6623928b95
FIX: call after tool calls failing on OpenAI / Gemini (#599)
A recent change meant that the llm instance got cached internally; repeat calls
to inference would cache data in the Endpoint object, leading the model
to fail.

Both Gemini and OpenAI expect a clean endpoint object, because they
set data on it.

This amends internals to make sure llm.generate always operates
on clean objects.
2024-05-01 17:50:58 +10:00
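The fix described above can be sketched with a minimal example. The class and method names below are hypothetical illustrations, not the actual discourse-ai internals: the point is that an endpoint which sets per-request state must be constructed fresh for each call rather than cached on the llm instance.

```ruby
# Hypothetical sketch of the caching pitfall: an endpoint that mutates
# internal state during a request cannot safely be reused.
class Endpoint
  def initialize
    @data = nil # per-request state set during inference
  end

  def perform!(prompt)
    # A cached, already-used endpoint would still hold stale data here.
    raise "stale endpoint reused" if @data

    @data = prompt
    "response for #{prompt}"
  end
end

class Llm
  # Build a fresh Endpoint for every call instead of caching one on the
  # instance, so state set by a previous request cannot leak into the next.
  def generate(prompt)
    Endpoint.new.perform!(prompt)
  end
end

llm = Llm.new
llm.generate("first")
llm.generate("second") # safe: each call gets its own clean Endpoint
```

Reusing a single `Endpoint` across calls (the cached-instance behavior the commit removes) would raise on the second request, which mirrors the OpenAI/Gemini failures this change addresses.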
completions FIX: call after tool calls failing on OpenAI / Gemini (#599) 2024-05-01 17:50:58 +10:00
discourse_automation FIX: Avoid replying to the reply user for llm_triage automation (#544) 2024-03-22 12:34:18 +08:00
modules FEATURE: Add Question Consolidator for robust Upload support in Personas (#596) 2024-04-30 13:49:21 +10:00
utils FEATURE: Add basic connection check to DNS SRV resources (#563) 2024-04-12 10:39:19 -03:00