Mirror of https://github.com/discourse/discourse-ai.git (synced 2025-03-09 11:48:47 +00:00)
This introduces a comprehensive spam detection system that uses large language models (LLMs) to automatically identify and flag potential spam posts. The system is designed to be both powerful and configurable while minimizing false positives.

Key Features:
* Automatically scans the first 3 posts from new users (TL0/TL1)
* Creates a dedicated AI flagging user to distinguish its flags from system flags
* Tracks false positives/negatives for quality monitoring
* Supports custom instructions to fine-tune detection
* Includes a test interface for trying detection on any post

Technical Implementation:
* New database tables:
  - ai_spam_logs: stores scan history and results
  - ai_moderation_settings: stores the LLM config and custom instructions
* Rate limiting and safeguards (see the sketch below):
  - Minimum 10-minute delay between rescans
  - Only rescans significant edits (>10 character difference)
  - Maximum of 3 scans per post
  - 24-hour maximum age for scannable posts
* Admin UI features:
  - Real-time testing capabilities
  - 7-day statistics dashboard
  - Configurable LLM model selection
  - Custom instruction support

Security and Performance:
* Respects trust levels - only scans TL0/TL1 users
* Skips private messages entirely
* Stops scanning users after 3 successful public posts
* Includes comprehensive test coverage
* Maintains an audit log of all scan attempts

---------

Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
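The rate-limiting and trust-level rules above amount to a per-post eligibility check run before a scan is enqueued. The following is a minimal Ruby sketch of such a guard under stated assumptions, not the plugin's actual code: the scan_count, last_scan_at and public_post_count arguments stand in for state the plugin keeps elsewhere (e.g. in ai_spam_logs), and the raw-length delta is only a stand-in for however the ">10 char difference" is really measured.

# Minimal sketch of the scan-eligibility rules listed above.
# scan_count, last_scan_at and public_post_count are hypothetical,
# caller-supplied values; the real plugin derives this state differently.
module SpamScanGuard
  MAX_SCANS_PER_POST = 3
  MIN_RESCAN_DELAY = 10.minutes
  MIN_EDIT_DELTA = 10 # characters
  MAX_POST_AGE = 24.hours
  MAX_SCANNED_POSTS = 3

  def self.should_scan?(post, scan_count:, last_scan_at:, public_post_count:, previous_raw: nil)
    return false if post.topic&.private_message?            # skip PMs entirely
    return false if post.user.trust_level > TrustLevel[1]   # only TL0/TL1 users
    return false if public_post_count > MAX_SCANNED_POSTS   # user already has 3 successful public posts
    return false if post.created_at < MAX_POST_AGE.ago      # post too old to scan
    return false if scan_count >= MAX_SCANS_PER_POST        # per-post scan cap
    return false if last_scan_at && last_scan_at > MIN_RESCAN_DELAY.ago # rescanned too recently

    if previous_raw
      # On edits, only rescan significant changes; length delta is used here
      # as a stand-in for the actual diff measurement.
      return false if (post.raw.length - previous_raw.length).abs <= MIN_EDIT_DELTA
    end

    true
  end
end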
48 lines · 1.6 KiB · Ruby
# frozen_string_literal: true

module DiscourseAi
  module AiModeration
    class SpamReport
      # Aggregates AI spam-scan outcomes since +min_date+ into a single row:
      # scanned_count, spam_detected, false_positives and false_negatives.
      def self.generate(min_date: 1.week.ago)
        # Reviewable outcomes: approved/deleted mean the post was treated as spam;
        # rejected/ignored mean it was treated as legitimate (ham).
        spam_status = [Reviewable.statuses[:approved], Reviewable.statuses[:deleted]]
        ham_status = [Reviewable.statuses[:rejected], Reviewable.statuses[:ignored]]

        sql = <<~SQL
          WITH spam_stats AS (
            SELECT
              asl.reviewable_id,
              asl.post_id,
              asl.is_spam,
              r.status as reviewable_status,
              r.target_type,
              r.potential_spam
            FROM ai_spam_logs asl
            LEFT JOIN reviewables r ON r.id = asl.reviewable_id
            WHERE asl.created_at > :min_date
          ),
          post_reviewables AS (
            -- Scanned posts that were flagged as spam through other means,
            -- i.e. spam the scanner missed (false negatives).
            SELECT
              target_id post_id,
              COUNT(DISTINCT target_id) as false_negative_count
            FROM reviewables
            WHERE target_type = 'Post'
              AND status IN (:spam)
              AND potential_spam
              AND target_id IN (SELECT post_id FROM spam_stats)
            GROUP BY target_id
          )
          SELECT
            COUNT(*) AS scanned_count,
            SUM(CASE WHEN is_spam THEN 1 ELSE 0 END) AS spam_detected,
            COUNT(CASE WHEN reviewable_status IN (:ham) THEN 1 END) AS false_positives,
            COALESCE(SUM(pr.false_negative_count), 0) AS false_negatives
          FROM spam_stats
          LEFT JOIN post_reviewables pr USING (post_id)
        SQL

        DB.query(sql, spam: spam_status, ham: ham_status, min_date: min_date).first
      end
    end
  end
end
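For context, a sketch of how this report might be pulled up, e.g. from a Rails console. This is a hypothetical usage example, not code from the plugin; it assumes Discourse's DB.query (MiniSql) materializes result rows with one accessor per selected column, and that the default min_date of 1.week.ago matches the admin dashboard's 7-day window.

# Hypothetical console usage; accessors mirror the SELECT aliases above.
report = DiscourseAi::AiModeration::SpamReport.generate(min_date: 7.days.ago)

puts "scanned:         #{report.scanned_count}"
puts "spam detected:   #{report.spam_detected}"
puts "false positives: #{report.false_positives}"
puts "false negatives: #{report.false_negatives}"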