Mirror of https://github.com/discourse/discourse-ai.git
The tokenizer was truncating and padding to 128 tokens, while we try to append new post content until we hit 384 tokens. Because the truncated count could never exceed 128, the 384-token budget was never reached, so the tokenizer accepted every post in a topic, wasting CPU and memory.
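A minimal sketch of the failure mode, not the plugin's actual code: the tokenizer here is a stand-in that splits on whitespace, and all names are hypothetical. The point is that counting tokens through a truncating tokenizer caps the count below the budget, so the accumulation loop never stops.

```ruby
MAX_TOKENS = 384

# Stand-in for a tokenizer configured to truncate (and pad) to 128 tokens.
def truncated_token_count(text, limit: 128)
  [text.split.length, limit].min
end

# Counting without truncation, which is what the budget check actually needs.
def full_token_count(text)
  text.split.length
end

posts = Array.new(50) { |i| "post #{i} " * 20 } # 50 posts, ~40 tokens each

# Buggy accumulation: the truncated count never exceeds 128, so the
# 384-token budget never triggers and every post gets appended.
buggy_content = +""
posts.each do |post|
  break if truncated_token_count(buggy_content + post) >= MAX_TOKENS
  buggy_content << post
end

# Fixed accumulation: the real token total stops the loop around 384 tokens.
fixed_content = +""
posts.each do |post|
  break if full_token_count(fixed_content + post) >= MAX_TOKENS
  fixed_content << post
end

puts "buggy posts appended: #{buggy_content.scan(/post \d+/).uniq.length}" # all 50
puts "fixed posts appended: #{fixed_content.scan(/post \d+/).uniq.length}" # ~9
```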
Plugin Name Plugin
Plugin Summary
For more information, please see: url to meta topic
Languages: Ruby 81.2%, JavaScript 15.6%, SCSS 2.2%, CSS 0.4%, HTML 0.4%, Other 0.2%