Both endpoints provide OpenAI-compatible servers. The only difference is that vLLM doesn't support passing tools as a separate parameter. Even if the tool param were supported, it would ultimately rely on the model's ability to handle native functions, which is not the case for the models we have today. As part of this change, we are dropping support for the StableBeluga/Llama2 models: they don't have a chat_template, meaning the new API can't translate them. These changes let us remove some of our existing dialects and are a first step in our plan to support any LLM by defining them as data-driven concepts. I rewrote the "translate" method to use a template method and extracted the tool-support strategies into their own classes to simplify the code. Finally, these changes bring support for Ollama when running in dev mode. It only works with Mistral for now, but that will change soon.
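To make the template-method shape concrete, here is a rough sketch of the idea (illustrative only: the :type symbols, the constructor, and the omitted token-trimming step are assumptions about the base class, not its exact code). The base Dialect#translate walks the prompt messages and delegates each one to a per-role hook that subclasses, such as the Command dialect below, override:

class Dialect
  attr_reader :prompt

  def initialize(prompt)
    @prompt = prompt
  end

  # Template method: map each prompt message through a role-specific hook.
  def translate
    prompt.messages.map do |msg|
      case msg[:type]
      when :system then system_msg(msg)
      when :user then user_msg(msg)
      when :model then model_msg(msg)
      when :tool_call then tool_call_msg(msg)
      when :tool then tool_msg(msg)
      end
    end
  end

  private

  # Subclasses provide the per-role translations.
  def system_msg(msg)
    raise NotImplementedError
  end
  # ...likewise user_msg, model_msg, tool_call_msg, and tool_msg.
end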
105 lines · 2.5 KiB · Ruby
# frozen_string_literal: true

# see: https://docs.cohere.com/reference/chat
#
module DiscourseAi
  module Completions
    module Dialects
      class Command < Dialect
        class << self
          def can_translate?(model_name)
            %w[command-light command command-r command-r-plus].include?(model_name)
          end

          def tokenizer
            DiscourseAi::Tokenizer::OpenAiTokenizer
          end
        end

        VALID_ID_REGEX = /\A[a-zA-Z0-9_]+\z/

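        # For example, given the messages built by the role hooks below:
        #   [{ role: "SYSTEM", message: "You are a helpful bot" },
        #    { role: "USER", message: "Hello" },
        #    { role: "CHATBOT", message: "Hi!" },
        #    { role: "USER", message: "How are you?" }]
        # #translate returns the Cohere chat payload shape:
        #   { preamble: "You are a helpful bot",
        #     chat_history: [{ role: "USER", message: "Hello" },
        #                    { role: "CHATBOT", message: "Hi!" }],
        #     message: "How are you?" }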
        def translate
          messages = super

          system_message = messages.shift[:message] if messages.first[:role] == "SYSTEM"

          prompt = { preamble: +"#{system_message}" }
          prompt[:chat_history] = messages if messages.present?

          messages.reverse_each do |msg|
            if msg[:role] == "USER"
              prompt[:message] = msg[:message]
              messages.delete(msg)
              break
            end
          end

          prompt
        end

        def max_prompt_tokens
          case model_name
          when "command-light"
            4096
          when "command"
            8192
          when "command-r"
            131_072
          when "command-r-plus"
            131_072
          else
            8192
          end
        end

        private

        def per_message_overhead
          0
        end

        def calculate_message_token(context)
          self.class.tokenizer.size(context[:content].to_s + context[:name].to_s)
        end

        def tools_dialect
          @tools_dialect ||= DiscourseAi::Completions::Dialects::XmlTools.new(prompt.tools)
        end

        def system_msg(msg)
          cmd_msg = { role: "SYSTEM", message: msg[:content] }

          if tools_dialect.instructions.present?
            cmd_msg[:message] = [
              msg[:content],
              tools_dialect.instructions,
              "NEVER attempt to run tools using JSON, always use XML. Lives depend on it.",
            ].join("\n")
          end

          cmd_msg
        end

        def model_msg(msg)
          { role: "CHATBOT", message: msg[:content] }
        end

        def tool_call_msg(msg)
          { role: "CHATBOT", message: tools_dialect.from_raw_tool_call(msg) }
        end

        def tool_msg(msg)
          { role: "USER", message: tools_dialect.from_raw_tool(msg) }
        end

        def user_msg(msg)
          user_message = { role: "USER", message: msg[:content] }
          user_message[:message] = "#{msg[:id]}: #{msg[:content]}" if msg[:id]

          user_message
        end
      end
    end
  end
end
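A rough usage sketch for this dialect follows. The DiscourseAi::Completions::Prompt constructor and the Command.new arguments shown here are assumptions inferred from the rest of the library, so check those classes for the exact signatures:

prompt =
  DiscourseAi::Completions::Prompt.new(
    "You are a helpful bot", # system text, surfaced as the :preamble
    messages: [{ type: :user, content: "How are you?", id: "user1" }],
  )

dialect = DiscourseAi::Completions::Dialects::Command.new(prompt, "command-r")
dialect.translate
# => { preamble: "You are a helpful bot", chat_history: [], message: "user1: How are you?" }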