discourse-ai/lib/completions/endpoints
Sam 6623928b95
FIX: call after tool calls failing on OpenAI / Gemini (#599)
A recent change meant the llm instance got cached internally, so repeated
calls to inference would accumulate data in the Endpoint object, leading to
model failures.

Both Gemini and OpenAI expect a clean endpoint object because they set
data on it.

This amends internals to make sure llm.generate will always operate
on clean objects.
2024-05-01 17:50:58 +10:00
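A minimal sketch of the shape of the fix described in the commit message above — the class and method names here are hypothetical stand-ins, not the actual discourse-ai internals. The idea is that the endpoint is no longer memoized on the llm instance; each generate call builds a fresh endpoint, so state that OpenAI/Gemini set during one inference cannot leak into the next.

```ruby
# Hypothetical stand-in for an endpoint that mutates internal state
# while preparing a request, as OpenAI/Gemini endpoints do.
class Endpoint
  def perform_completion(prompt)
    @request_data = { prompt: prompt } # state set per request
    "response to #{prompt}"
  end
end

class Llm
  # Buggy pattern (before the fix): caching one endpoint across calls,
  # e.g. `@endpoint ||= Endpoint.new`, carried stale data between inferences.
  #
  # Fixed pattern: construct a clean endpoint for every generate call.
  def generate(prompt)
    endpoint = Endpoint.new # fresh object; no data from prior calls
    endpoint.perform_completion(prompt)
  end
end
```

With this shape, calling `llm.generate` repeatedly on the same `Llm` instance is safe, because no call can observe the request data of a previous one.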
anthropic.rb FEATURE: add Claude 3 sonnet/haiku support for Amazon Bedrock (#534) 2024-03-19 06:48:46 +11:00
aws_bedrock.rb FEATURE: Gemini 1.5 pro support and Claude Opus bedrock support (#580) 2024-04-17 15:37:19 +10:00
base.rb FIX: more robust function call support (#581) 2024-04-19 06:54:54 +10:00
canned_response.rb UX: Validations to LLM-backed features (except AI Bot) (#436) 2024-01-29 16:04:25 -03:00
cohere.rb FEATURE: Cohere Command R support (#558) 2024-04-11 07:24:17 +10:00
fake.rb FEATURE: Add Question Consolidator for robust Upload support in Personas (#596) 2024-04-30 13:49:21 +10:00
gemini.rb FIX: more robust function call support (#581) 2024-04-19 06:54:54 +10:00
hugging_face.rb UX: Validations to LLM-backed features (except AI Bot) (#436) 2024-01-29 16:04:25 -03:00
open_ai.rb FIX: call after tool calls failing on OpenAI / Gemini (#599) 2024-05-01 17:50:58 +10:00
vllm.rb UX: Validations to LLM-backed features (except AI Bot) (#436) 2024-01-29 16:04:25 -03:00