Mirror of https://github.com/discourse/discourse-ai.git, synced 2025-03-08 18:29:32 +00:00
Prior to this change we relied on explicit loading for all files in Discourse AI. This had a few downsides:

- Busywork whenever you add a file (an extra require_relative)
- We were not keeping to conventions internally ... some places were OpenAI, others OpenAi
- The autoloader did not work, which led to lots of broken full-application reloads while developing

This moves all of Discourse AI into a Zeitwerk-compatible structure. It also leaves a minimal amount of manual loading (automation, which loads into an existing namespace that may or may not be there).

To avoid needing /lib/discourse_ai/... we mount a namespace, so we are able to keep /lib pointed at ::DiscourseAi (see the sketch below). Various files were renamed to satisfy Zeitwerk's rules and minimize the use of custom inflections. Though we could get custom inflections to work, it is not worth it: it would require a Discourse core patch, which means we would create a hard dependency.
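A minimal sketch, assuming plain Zeitwerk, of how a directory can be mounted onto an existing namespace; the loader variable, plugin_root path, and commented-out inflection are illustrative assumptions, not the actual Discourse wiring:

# Hypothetical setup; Discourse core performs the real autoloader wiring.
require "zeitwerk"

module DiscourseAi
end

loader = Zeitwerk::Loader.new
# Mount plugin_root/lib directly on ::DiscourseAi, so
# lib/embeddings/strategies/truncation.rb defines
# DiscourseAi::Embeddings::Strategies::Truncation
# without needing a lib/discourse_ai/ prefix on disk.
loader.push_dir("#{plugin_root}/lib", namespace: DiscourseAi) # plugin_root is an assumed variable
loader.setup

# The kind of custom inflection the commit avoids: without it, Zeitwerk
# expects open_ai.rb to define OpenAi rather than OpenAI.
# loader.inflector.inflect("open_ai" => "OpenAI")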
63 lines
1.5 KiB
Ruby
# frozen_string_literal: true
module DiscourseAi
  module Embeddings
    module Strategies
      class Truncation
        def id
          1
        end

        def version
          1
        end
        def prepare_text_from(target, tokenizer, max_length)
          case target
          when Topic
            topic_truncation(target, tokenizer, max_length)
          when Post
            post_truncation(target, tokenizer, max_length)
          else
            raise ArgumentError, "Invalid target type"
          end
        end
        private

        # Builds the topic context: title, category name and (if tagging is
        # enabled) the tag names, separated by blank lines.
        def topic_information(topic)
          info = +""

          info << topic.title
          info << "\n\n"
          info << topic.category.name if topic&.category&.name
          if SiteSetting.tagging_enabled
            info << "\n\n"
            info << topic.tags.pluck(:name).join(", ")
          end
          info << "\n\n"
        end

        # Appends post raws after the topic information until the token budget
        # is reached, then truncates the result to max_length tokens.
        def topic_truncation(topic, tokenizer, max_length)
          text = +topic_information(topic)

          topic.posts.find_each do |post|
            text << post.raw
            break if tokenizer.size(text) >= max_length # maybe keep a partial counter to speed this up?
            text << "\n\n"
          end

          tokenizer.truncate(text, max_length)
        end
        # A single post is embedded with its topic's information prepended.
        def post_truncation(post, tokenizer, max_length)
          text = +topic_information(post.topic)
          text << post.raw

          tokenizer.truncate(text, max_length)
        end
      end
    end
  end
end
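For illustration, a hedged usage sketch of the strategy above; the tokenizer is an assumption, any object that responds to size and truncate fits the interface this class uses:

# Hypothetical usage; the tokenizer and topic lookup are illustrative.
strategy = DiscourseAi::Embeddings::Strategies::Truncation.new
tokenizer = some_tokenizer        # assumed: responds to #size(text) and #truncate(text, max_length)
topic = Topic.find(topic_id)      # topic_id is illustrative
embedding_input = strategy.prepare_text_from(topic, tokenizer, 512)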