Implement streaming tool call support for Anthropic and OpenAI. When calling:

    llm.generate(..., partial_tool_calls: true) do ...

partials may contain ToolCall instances flagged with partial: true. These tool calls are partially populated from partially parsed JSON, so for example when performing a search you may get:

    ToolCall(..., { search: "hello" })
    ToolCall(..., { search: "hello world" })

The library used to parse the JSON is https://github.com/dgraham/json-stream. We use a fork because we need access to the parser's internal buffer.

This prepares the internals to perform partial tool calls, but does not implement the feature yet.
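To illustrate the shape of the API above, here is a minimal, self-contained sketch of the consumer side. `FakeLlm` and this `ToolCall` struct are stand-ins invented for the example, not the library's real classes; they only mimic the behavior described above (arguments growing as more JSON arrives, a final non-partial call at the end).

```ruby
# Hypothetical stand-in for the library's ToolCall.
ToolCall = Struct.new(:name, :parameters, :partial, keyword_init: true)

# Fake LLM that simulates a model streaming a tool call whose JSON
# arguments arrive in fragments; each fragment yields a partially
# populated ToolCall, then a final complete one is returned.
class FakeLlm
  def generate(_prompt, partial_tool_calls: false)
    if partial_tool_calls
      yield ToolCall.new(name: "search", parameters: { search: "hello" }, partial: true)
      yield ToolCall.new(name: "search", parameters: { search: "hello world" }, partial: true)
    end
    ToolCall.new(name: "search", parameters: { search: "hello world" }, partial: false)
  end
end

seen = []
final = FakeLlm.new.generate("find me things", partial_tool_calls: true) do |partial|
  # Each partial carries the arguments parsed so far.
  seen << partial.parameters[:search] if partial.partial
end
```

After the stream ends, `seen` holds the successive argument snapshots (`"hello"`, then `"hello world"`) and `final` is the completed, non-partial tool call.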
Updated specs:

- dialects/
- endpoints/
- json_stream_decoder_spec.rb
- llm_spec.rb
- prompt_messages_builder_spec.rb
- prompt_spec.rb
- xml_tag_stripper_spec.rb
- xml_tool_processor_spec.rb
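The "partially parsed JSON" behavior described above can be sketched with a toy best-effort parser: close any open string and any unbalanced braces/brackets, then parse normally. This is an illustration only, not the json-stream fork's actual algorithm, and `parse_partial_json` is a hypothetical name; it assumes no escape sequence is split across the end of the buffer.

```ruby
require "json"

# Best-effort parse of a truncated JSON object: repair the buffer by
# closing any unterminated string and any unbalanced containers, then
# hand it to the regular JSON parser. Returns nil if still unparsable.
def parse_partial_json(buffer)
  repaired = buffer.dup
  closers = []        # stack of closing delimiters still owed
  in_string = false
  escaped = false

  repaired.each_char do |ch|
    if escaped
      escaped = false
      next
    end
    if in_string
      case ch
      when "\\" then escaped = true
      when '"' then in_string = false
      end
    else
      case ch
      when '"' then in_string = true
      when "{" then closers << "}"
      when "[" then closers << "]"
      when "}", "]" then closers.pop
      end
    end
  end

  repaired << '"' if in_string          # terminate a cut-off string
  repaired.sub!(/,\s*\z/, "") unless in_string # drop a dangling comma
  repaired << closers.reverse.join      # close containers inner-first
  JSON.parse(repaired)
rescue JSON::ParserError
  nil
end
```

Feeding it growing prefixes of `{"search": "hello world"}` yields the successively larger argument hashes that the partial ToolCall instances carry.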