discourse-ai/lib/toxicity/toxicity_classification.rb
Sam 6ddc17fd61
DEV: port directory structure to Zeitwerk (#319)
Prior to this change we relied on explicit loading of files in Discourse AI.

This had a few downsides:

- Busywork whenever you add a file (an extra require_relative)
- We were not keeping to conventions internally: some places were OpenAI, others OpenAi
- The autoloader did not work, which led to lots of broken full-application reloads during development.

This moves all of DiscourseAI into a Zeitwerk compatible structure.

It also leaves a minimal amount of manual loading (automation, which loads into an existing namespace that may or may not be there).

To avoid needing /lib/discourse_ai/... we mount a namespace, which lets us keep /lib pointed at ::DiscourseAi.

Various files were renamed to satisfy Zeitwerk's rules and minimize usage of custom inflections.

Though we could get custom inflections to work, it is not worth it: it would require a Discourse core patch, which would create a hard dependency.
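For context, Zeitwerk's default inflector derives constant names from file paths by camelizing each underscored segment, which is why the directory layout must match the constant nesting. A rough sketch of the mapping (simplified; the `constant_for` helper is illustrative only and ignores Zeitwerk's real inflector API, custom inflections, and eager loading):

```ruby
# Approximate Zeitwerk default inflection: "toxicity_classification" -> "ToxicityClassification".
# Note this yields "OpenAi" from "open_ai", matching the naming convention the port settled on.
def camelize(segment)
  segment.split("_").map(&:capitalize).join
end

# Hypothetical helper: map a path under the mounted lib/ directory to the
# constant Zeitwerk would expect, given lib/ is mounted at ::DiscourseAi.
def constant_for(path)
  segments = path.sub(/\.rb\z/, "").split("/")
  "DiscourseAi::" + segments.map { |seg| camelize(seg) }.join("::")
end

puts constant_for("toxicity/toxicity_classification.rb")
# => "DiscourseAi::Toxicity::ToxicityClassification"
```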
2023-11-29 15:17:46 +11:00


# frozen_string_literal: true

module DiscourseAi
  module Toxicity
    class ToxicityClassification
      CLASSIFICATION_LABELS = %i[
        toxicity
        severe_toxicity
        obscene
        identity_attack
        insult
        threat
        sexual_explicit
      ]

      def type
        :toxicity
      end

      def can_classify?(target)
        content_of(target).present?
      end

      def get_verdicts(classification_data)
        # We only use one model for this classification.
        # classification_data looks like { model_name => classification }
        _model_used, data = classification_data.to_a.first

        verdict =
          CLASSIFICATION_LABELS.any? do |label|
            data[label] >= SiteSetting.send("ai_toxicity_flag_threshold_#{label}")
          end

        { available_model => verdict }
      end

      def should_flag_based_on?(verdicts)
        return false if !SiteSetting.ai_toxicity_flag_automatically

        verdicts.values.any?
      end

      def request(target_to_classify)
        data =
          ::DiscourseAi::Inference::DiscourseClassifier.perform!(
            "#{SiteSetting.ai_toxicity_inference_service_api_endpoint}/api/v1/classify",
            SiteSetting.ai_toxicity_inference_service_api_model,
            content_of(target_to_classify),
            SiteSetting.ai_toxicity_inference_service_api_key,
          )

        { available_model => data }
      end

      private

      def available_model
        SiteSetting.ai_toxicity_inference_service_api_model
      end

      def content_of(target_to_classify)
        return target_to_classify.message if target_to_classify.is_a?(Chat::Message)

        if target_to_classify.post_number == 1
          "#{target_to_classify.topic.title}\n#{target_to_classify.raw}"
        else
          target_to_classify.raw
        end
      end
    end
  end
end
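The core of get_verdicts is a per-label threshold check: the content is flagged when any label's score meets or exceeds its configured threshold. A minimal standalone sketch of that logic, with hypothetical threshold values standing in for the SiteSetting lookups (the real code reads ai_toxicity_flag_threshold_* site settings):

```ruby
# Hypothetical thresholds standing in for SiteSetting.ai_toxicity_flag_threshold_*.
THRESHOLDS = { toxicity: 80, severe_toxicity: 50, insult: 60 }

# Mirrors the verdict computation: true if any label's score
# meets or exceeds its threshold.
def verdict_for(data)
  THRESHOLDS.any? { |label, threshold| data.fetch(label, 0) >= threshold }
end

puts verdict_for(toxicity: 85, severe_toxicity: 5, insult: 10)  # => true
puts verdict_for(toxicity: 30, severe_toxicity: 5, insult: 10)  # => false
```

Because CLASSIFICATION_LABELS uses `any?`, a single label crossing its threshold is enough to produce a flagging verdict for the model.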