825f01cfb2
- Account properly for function calls; don't stream through <details> blocks
- Rush cooked content back to client
- Wait longer (up to 60 seconds) before giving up on streaming
- Clean up message bus channels so we don't have leftover data
- Make AI streamer much more reusable and much easier to read
- If buffer grows quickly, rush update so you are not artificially waiting
- Refine prompt interface
- Fix lost system message when prompt gets long
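The "rush update" behavior listed above can be sketched roughly as follows: a streaming buffer that normally flushes on a timer, but flushes immediately when the pending buffer grows quickly past a threshold. This is a minimal, hypothetical sketch; the class name, constants, and publish callback are illustrative assumptions, not the plugin's actual implementation.

```ruby
# Hypothetical sketch: flush early when the buffer grows quickly,
# instead of artificially waiting for the next timer tick.
class StreamBuffer
  FLUSH_INTERVAL = 0.5   # seconds between regular flushes (assumed value)
  RUSH_THRESHOLD = 512   # bytes; flush immediately once exceeded (assumed value)

  def initialize(&publish)
    @publish = publish     # e.g. a MessageBus publish call in Discourse
    @buffer = +""
    @last_flush = Time.now
  end

  def <<(chunk)
    @buffer << chunk
    # Rush the update if the buffer is large, or flush on the normal cadence.
    flush! if @buffer.bytesize >= RUSH_THRESHOLD ||
              Time.now - @last_flush >= FLUSH_INTERVAL
  end

  def flush!
    return if @buffer.empty?
    @publish.call(@buffer.dup)
    @buffer.clear
    @last_flush = Time.now
  end
end
```

A large chunk triggers an immediate publish; small chunks accumulate until the interval elapses or `flush!` is called explicitly.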
.github/workflows
app
assets
config
db
discourse_automation
lib
spec
test/javascripts
tokenizers
.discourse-compatibility
.eslintrc.cjs
.gitignore
.prettierignore
.prettierrc.cjs
.rubocop.yml
.streerc
.template-lintrc.cjs
Gemfile
Gemfile.lock
LICENSE
README.md
package.json
plugin.rb
translator.yml
yarn.lock
README.md
Discourse AI Plugin
Plugin Summary
For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco