316ea9624e
* FIX: properly truncate !command prompts

### What is going on here?

Prior to this change, when the LLM issued a command it could hallucinate a continuation, e.g.:

```
This is what tags are
!tags
some nonsense here
```

This change introduces safeguards so `some nonsense here` does not creep into the prompt history and poison the LLM results.

In effect this grounds the LLM much better, so it forgets less about command results. The change only impacts Claude at the moment, but will also improve things for Llama 2 in the future.

This also makes it significantly easier to test the bot framework without an LLM, because we avoid a whole bunch of complex stubbing.

* blank is not a valid bot response, do not inject into prompt
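As a rough illustration, a safeguard like this can be pictured as a small post-processing step on the raw completion. The sketch below is a hypothetical Ruby version, not the actual implementation from this commit: the helper names `truncate_at_command` and `valid_bot_reply?`, and the assumption that commands sit alone on a line prefixed with `!`, are all illustrative.

```
# Hypothetical sketch: truncate a raw LLM completion at the first
# !command line so hallucinated continuations never reach the prompt
# history. Assumes commands appear alone on a line starting with "!".
def truncate_at_command(raw_reply)
  lines = raw_reply.lines.map(&:chomp)
  command_index = lines.index { |line| line.strip.start_with?("!") }
  return raw_reply.strip if command_index.nil?

  # Keep everything up to and including the command line; drop the rest.
  lines[0..command_index].join("\n").strip
end

# A blank completion is not a valid bot response; never inject it
# into the prompt history.
def valid_bot_reply?(reply)
  !reply.to_s.strip.empty?
end

truncate_at_command("This is what tags are\n!tags\nsome nonsense here")
# => "This is what tags are\n!tags"
valid_bot_reply?("   ") # => false
```

Dropping everything after the command, rather than trying to validate it, is what keeps the prompt history grounded: the next turn is rebuilt from the real command output instead of the model's guess at it.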
Directory contents at this commit:

- database
- inference
- tokenizer
- chat_message_classificator.rb
- classificator.rb
- post_classificator.rb