Sam 6623928b95
FIX: calls after tool calls failing on OpenAI / Gemini (#599)
A recent change caused the llm instance to be cached internally, so repeated
inference calls accumulated data in the Endpoint object, leading to model
failures.

Both Gemini and OpenAI expect a clean endpoint object because they
set data on it.

This amends the internals so that llm.generate always operates
on clean objects.
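
The fix described above can be sketched as follows. This is an illustrative Ruby example, not the plugin's actual code: the `Endpoint` and `Llm` class shapes and the factory lambda are assumptions, showing only the pattern of constructing a fresh endpoint per `generate` call rather than reusing a cached, mutated one.

```ruby
# Hypothetical sketch: adapters (OpenAI / Gemini) mutate endpoint.data
# per request, so a cached endpoint leaks state between calls.
class Endpoint
  attr_reader :data

  def initialize
    @data = {} # mutated by the adapter while building each request
  end
end

class Llm
  def initialize(endpoint_factory)
    @endpoint_factory = endpoint_factory
  end

  # Each call builds a clean Endpoint, so data set during a previous
  # tool-call round cannot leak into the next request.
  def generate(prompt)
    endpoint = @endpoint_factory.call
    endpoint.data[:prompt] = prompt
    endpoint.data
  end
end

llm = Llm.new(-> { Endpoint.new })
first = llm.generate("call tools")
second = llm.generate("follow-up")
first.equal?(second) # false: separate objects, no shared state
```

With the pre-fix caching behavior, `first` and `second` would be the same hash and the second call would see (or overwrite) leftover data from the first.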
2024-05-01 17:50:58 +10:00

Discourse AI Plugin

Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
