discourse-ai/lib/completions/json_streaming_tracker.rb
Roman Rizzi ff2e18f9ca
FIX: Structured output discrepancies. (#1340)
This change fixes two bugs and adds a safeguard.

The first issue was that the schema Gemini expected differed from the one we sent, resulting in 400 errors when performing completions.

The second issue was that creating a new persona didn't define a method for `response_format`; it has to be explicitly defined when we wrap it inside the Persona class. There was also a mismatch between the default value and what we stored in the DB: some parts of the code expected symbols as keys and others expected strings.
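
A minimal sketch of the key normalization, assuming the value read from the DB is an array of hashes with string keys; `raw_response_format` is an illustrative name, not the actual persona code:

  def response_format
    # stored JSON comes back with string keys; the rest of the code expects symbols
    (raw_response_format || []).map(&:symbolize_keys)
  end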

Finally, we add a safeguard for when the model, even if asked to, refuses to reply with valid JSON. In that case, we make a best-effort attempt to recover and stream the raw response.
2025-05-15 11:32:10 -03:00

54 lines
1.2 KiB
Ruby

# frozen_string_literal: true

module DiscourseAi
  module Completions
    # Feeds streamed JSON fragments into JsonStreamingParser and notifies the
    # stream consumer as keys and values become available. If the model sends
    # invalid JSON, the tracker marks itself as broken and stops streaming.
    class JsonStreamingTracker
      attr_reader :current_key, :current_value, :stream_consumer

      def initialize(stream_consumer)
        @stream_consumer = stream_consumer
        @current_key = nil
        @current_value = nil
        @parser = DiscourseAi::Completions::JsonStreamingParser.new

        @parser.key do |k|
          @current_key = k
          @current_value = nil
        end

        @parser.value do |v|
          if @current_key
            stream_consumer.notify_progress(@current_key, v)
            @current_key = nil
          end
        end
      end

      def broken?
        @broken
      end

      def <<(json)
        # the LLM could send broken JSON; in that case just deal with it
        # later and don't stream
        return if @broken

        begin
          @parser << json
        rescue DiscourseAi::Completions::ParserError
          @broken = true
          return
        end

        if @parser.state == :start_string && @current_key
          # a partial string value is worth notifying
          stream_consumer.notify_progress(@current_key, @parser.buf)
        end

        @current_key = nil if @parser.state == :end_value
      end
    end
  end
end
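
A minimal usage sketch, assuming a consumer object that responds to notify_progress; ExampleConsumer and the raw-response fallback are illustrative, not part of the plugin:

  class ExampleConsumer
    def notify_progress(key, value)
      # stream partial values to the client as they arrive
      puts "#{key}: #{value}"
    end
  end

  tracker = DiscourseAi::Completions::JsonStreamingTracker.new(ExampleConsumer.new)

  # feed chunks as the LLM streams them
  ["{\"answer\": \"Hel", "lo\"}"].each { |chunk| tracker << chunk }

  # if the model never produced valid JSON, fall back to the raw response
  # (the fallback itself is up to the caller)
  puts "recovering raw response" if tracker.broken?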