316ea9624e
* FIX: properly truncate !command prompts

  ### What is going on here?

  Previously, when a command was issued by the LLM, it could hallucinate a continuation, eg:

  ```
  This is what tags are
  !tags
  some nonsense here
  ```

  This change introduces safeguards so `some nonsense here` does not creep into the prompt history, poisoning the LLM results.

  In effect, this grounds the LLM much better, so it forgets less about results.

  The change only impacts Claude at the moment, but will also improve things for Llama 2 in the future.

  Also, this makes it significantly easier to test the bot framework without an LLM, because we avoid a whole bunch of complex stubbing.

* blank is not a valid bot response, do not inject into prompt
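The safeguard described above can be sketched roughly as follows. This is a hypothetical illustration, not the plugin's actual implementation: it assumes a bot reply in which lines starting with `!` are commands, and that everything the model generates after the first command line should be discarded before the reply enters the prompt history. The method names `truncate_at_command` and `valid_bot_reply?` are invented for this sketch.

```ruby
# Hypothetical sketch of the !command truncation safeguard.
# Anything the LLM hallucinates after the first command line is dropped,
# so it never poisons the prompt history.
def truncate_at_command(reply)
  lines = reply.lines
  command_index = lines.index { |line| line.start_with?("!") }
  return reply.strip if command_index.nil?

  # Keep everything up to and including the command line itself.
  lines[0..command_index].join.strip
end

# Second fix in this commit: a blank reply is not a valid bot response
# and must not be injected into the prompt.
def valid_bot_reply?(reply)
  !truncate_at_command(reply).empty?
end
```

Applied to the example above, `truncate_at_command("This is what tags are\n!tags\nsome nonsense here")` keeps only the text through `!tags`, and `valid_bot_reply?("")` returns `false`.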
Top-level files and directories:

- .github/workflows
- app
- assets
- config
- db
- lib
- spec
- svg-icons
- test/javascripts
- tokenizers
- .discourse-compatibility
- .eslintrc
- .gitignore
- .prettierrc
- .rubocop.yml
- .streerc
- .template-lintrc.js
- Gemfile
- Gemfile.lock
- LICENSE
- README.md
- package.json
- plugin.rb
- translator.yml
- yarn.lock
README.md
Discourse AI Plugin
Plugin Summary
For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco