discourse-ai/lib/completions
Sam 6623928b95
FIX: call after tool calls failing on OpenAI / Gemini (#599)
A recent change meant that the llm instance was cached internally; repeat calls
to inference would accumulate data in the Endpoint object, leading the model
to failures.

Both Gemini and OpenAI expect a clean endpoint object because they
set data on it.

This amends internals to make sure llm.generate will always operate
on clean objects.
2024-05-01 17:50:58 +10:00
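A minimal sketch of the pattern this fix implies, with illustrative names (`Llm`, `FakeEndpoint`, `perform_completion!` are hypothetical here, not the actual plugin API): instead of memoizing one endpoint on the llm instance, construct a fresh endpoint per `generate` call so per-request state never leaks between calls.

```ruby
# An endpoint that mutates its own state while serving a request,
# mimicking the OpenAI / Gemini endpoint behavior described above.
class FakeEndpoint
  def initialize(model)
    @model = model
    @data = [] # per-request state that must not survive into the next call
  end

  def perform_completion!(prompt)
    @data << prompt
    "#{@model}: #{prompt}"
  end
end

class Llm
  def initialize(endpoint_class, model)
    @endpoint_class = endpoint_class
    @model = model
  end

  # Build a brand-new endpoint for every call, rather than caching one,
  # so repeat calls after tool calls never see stale data.
  def generate(prompt)
    @endpoint_class.new(@model).perform_completion!(prompt)
  end
end
```

With this shape, two consecutive `generate` calls each get a clean `FakeEndpoint`, so the second call is unaffected by whatever the first one set.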
dialects FEATURE: Gemini 1.5 pro support and Claude Opus bedrock support (#580) 2024-04-17 15:37:19 +10:00
endpoints FIX: call after tool calls failing on OpenAI / Gemini (#599) 2024-05-01 17:50:58 +10:00
function_call_normalizer.rb FIX: more robust function call support (#581) 2024-04-19 06:54:54 +10:00
llm.rb FIX: call after tool calls failing on OpenAI / Gemini (#599) 2024-05-01 17:50:58 +10:00
prompt.rb FEATURE: Add vision support to AI personas (Claude 3) (#546) 2024-03-27 14:30:11 +11:00
upload_encoder.rb FEATURE: Add vision support to AI personas (Claude 3) (#546) 2024-03-27 14:30:11 +11:00