Mirror of https://github.com/discourse/discourse-ai.git
* FIX/REFACTOR: FoldContent revamp

  We hit a snag with our hot topic gist strategy: the regex we used to split the content didn't work, so we couldn't send the original post separately. That separation was important for letting the model focus on what's new in the topic. The algorithm doesn't give us full control over how prompts are written, and figuring out how to format the content isn't straightforward, which forces complicated workarounds like the regex.

  To tackle this, I'm suggesting we simplify the approach: summarize as much as we can upfront, then gradually fold in new content until there's nothing left to summarize (see the sketch below). The "extend" step exists mostly for models with small context windows, which shouldn't be an issue 99% of the time given the content volume we're dealing with.

* Fix fold docs

* Use #shift instead of #pop to get the first element, not the last
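The simplified loop might look roughly like the following minimal Ruby sketch. It is illustrative only: fits_in_context? and summarize_batch are hypothetical helpers standing in for the real FoldContent internals, not the actual API.

# Illustrative only: fold posts chronologically, summarizing the largest batch
# that fits alongside the running summary, until everything has been folded in.
def fold(contents, llm)
  summary = nil

  until contents.empty?
    # Take at least one item per pass, using #shift so we consume the oldest
    # post first, then keep adding items while they still fit in the context.
    batch = [contents.shift]
    batch << contents.shift while contents.any? &&
      fits_in_context?(batch + [contents.first], summary, llm)

    # Each pass produces a new summary folding the previous one plus the batch.
    summary = summarize_batch(batch, previous_summary: summary, llm: llm)
  end

  summary
end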
45 lines
1.2 KiB
Ruby
# frozen_string_literal: true

module DiscourseAi
  module Tokenizer
    class OpenAiTokenizer < BasicTokenizer
      class << self
        def tokenizer
          # Memoize the Tiktoken cl100k_base encoder shared by the methods below.
          @@tokenizer ||= Tiktoken.get_encoding("cl100k_base")
        end

        def tokenize(text)
          tokenizer.encode(text)
        end

        def encode(text)
          tokenizer.encode(text)
        end

        def decode(token_ids)
          tokenizer.decode(token_ids)
        end

        def truncate(text, max_length)
          # fast track common case, /2 to handle unicode chars
          # that can take more than 1 token per char
          return text if !SiteSetting.ai_strict_token_counting && text.size < max_length / 2

          tokenizer.decode(tokenize(text).take(max_length))
        rescue Tiktoken::UnicodeError
          # The cut landed inside a multi-byte character; shrink and try again.
          max_length = max_length - 1
          retry
        end

        def below_limit?(text, limit)
          # fast track common case, /2 to handle unicode chars
          # that can take more than 1 token per char
          return true if !SiteSetting.ai_strict_token_counting && text.size < limit / 2

          tokenizer.encode(text).length < limit
        end
      end
    end
  end
end
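For reference, a brief usage sketch of the tokenizer above, assuming a Discourse console where SiteSetting and Tiktoken are loaded; the fast paths depend on the ai_strict_token_counting site setting, and the inputs are illustrative.

tok = DiscourseAi::Tokenizer::OpenAiTokenizer

# Count tokens and compare against a budget.
tok.tokenize("hello world").length   # number of cl100k_base tokens in the string
tok.below_limit?("hello world", 8)   # true when the text encodes to fewer than 8 tokens

# Keep only the first 32 tokens; the rescue/retry in #truncate handles a cut
# that lands inside a multi-byte character.
tok.truncate("a long post body " * 50, 32)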