6623928b95
A recent change meant that the llm instance got cached internally; repeated calls to inference would cache data in the Endpoint object, leading to model failures. Both Gemini and OpenAI expect a clean endpoint object because they set data on it. This amends the internals to make sure llm.generate always operates on clean objects.
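A minimal sketch of the idea, assuming hypothetical `Llm` and `Endpoint` classes (the names and methods here are illustrative, not the plugin's actual API): build a fresh endpoint per `generate` call instead of memoizing one, so state set by a previous request can never leak into the next.

```ruby
# Sketch only: illustrative class names, not the plugin's real API.

class Endpoint
  def initialize(model)
    @model = model
    @data = nil # providers mutate this while performing a request
  end

  def perform_completion(prompt)
    @data = { model: @model, prompt: prompt } # request state set here
    "response for #{prompt}"
  end
end

class Llm
  def initialize(model)
    @model = model
  end

  # A fresh Endpoint per call: a cached instance would carry the previous
  # request's data into the next call, which providers like Gemini and
  # OpenAI reject because they expect to set data on a clean object.
  def generate(prompt)
    Endpoint.new(@model).perform_completion(prompt)
  end
end

llm = Llm.new("example-model")
puts llm.generate("hello") # each call operates on a clean Endpoint
```

The design trade-off is a small allocation per call in exchange for guaranteed isolation between requests, which is the property the fix restores.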
artist.rb
creative.rb
dall_e_3.rb
discourse_helper.rb
general.rb
github_helper.rb
persona.rb
researcher.rb
settings_explorer.rb
sql_helper.rb