discourse-ai/lib/tokenizer
Roman Rizzi ec97996905
FIX/REFACTOR: FoldContent revamp (#866)
* FIX/REFACTOR: FoldContent revamp

We hit a snag with our hot topic gist strategy: the regex we used to split the content didn't work, so we couldn't send the original post separately. That separation was important for letting the model focus on what's new in the topic.

The algorithm doesn't give us full control over how prompts are built, and figuring out how to format the content isn't straightforward, so we end up relying on more complicated workarounds, like regex.

To tackle this, I'm suggesting we simplify the approach a bit. Let's focus on summarizing as much as we can upfront, then gradually add new content until there's nothing left to summarize.

Also, the "extend" part is mostly for models with small context windows, which shouldn't pose a problem 99% of the time with the content volume we're dealing with.

* Fix fold docs

* Use #shift instead of #pop to get the first element, not the last
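
  For reference, Ruby's `Array#shift` removes and returns the first element, while `Array#pop` removes and returns the last (a generic example, not code from this PR):

  ```ruby
  queue = ["first post", "reply 1", "reply 2"]
  queue.shift # => "first post"  (removes and returns the first element)
  queue.pop   # => "reply 2"     (removes and returns the last element)
  ```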
2024-10-25 11:51:17 -03:00
all_mpnet_base_v2_tokenizer.rb DEV: port directory structure to Zeitwerk (#319) 2023-11-29 15:17:46 +11:00
anthropic_tokenizer.rb DEV: port directory structure to Zeitwerk (#319) 2023-11-29 15:17:46 +11:00
basic_tokenizer.rb FIX/REFACTOR: FoldContent revamp (#866) 2024-10-25 11:51:17 -03:00
bert_tokenizer.rb DEV: port directory structure to Zeitwerk (#319) 2023-11-29 15:17:46 +11:00
bge_large_en_tokenizer.rb DEV: port directory structure to Zeitwerk (#319) 2023-11-29 15:17:46 +11:00
bge_m3_tokenizer.rb FEATURE: Add BGE-M3 embeddings support (#569) 2024-04-10 17:24:01 -03:00
llama3_tokenizer.rb FEATURE: Llama 3 tokenizer (#615) 2024-05-13 12:45:52 -03:00
mixtral_tokenizer.rb Mixtral (#376) 2023-12-26 14:49:55 -03:00
multilingual_e5_large_tokenizer.rb DEV: port directory structure to Zeitwerk (#319) 2023-11-29 15:17:46 +11:00
open_ai_gpt4o_tokenizer.rb FEATURE: GPT4o Tokenizer (#721) 2024-07-22 15:26:14 -03:00
open_ai_tokenizer.rb FIX/REFACTOR: FoldContent revamp (#866) 2024-10-25 11:51:17 -03:00