Mirror of https://github.com/discourse/discourse-ai.git, synced 2025-03-08 02:12:14 +00:00
Introduces a UI to manage customizable personas (admin only feature)

Part of the change was some extensive internal refactoring:

- AIBot now has a persona set in the constructor; once set, it never changes
- Command now takes the bot in as a constructor param, so it has the correct persona and is not generating AIBot objects on the fly
- Added a .prettierignore file; due to the way ALE is configured in nvim it is a prerequisite for prettier to work
- Adds a bunch of validations on the AIPersona model; system personas (artist/creative etc.) are all seeded. We now ensure name uniqueness, and only allow certain properties to be touched for system personas.
- (JS note) the client-side design takes advantage of nested routes: the parent route for personas gets all the personas via this.store.findAll("ai-persona"), then child routes simply reach into this model to find a particular persona
- (JS note) data is sideloaded into the ai-persona model via the meta property supplied from the controller, resultSetMeta
- Removes ai_bot_enabled_personas and ai_bot_enabled_chat_commands; both should be controlled from the UI on a per-persona basis
- Fixes a long-standing bug in token accounting: we were doing to_json.length instead of to_json.to_s.length
- Amended it so {commands} are always inserted at the end unconditionally; there is no need to add it to the template of the system message, as it just confuses things
- Adds a concept of required_commands to stock personas: commands that must be configured for the stock persona to show up
- Refactored tests so we stop requiring inference_stubs; it was very confusing to need it. Added to plugin.rb for now, which at least is clearer
- Migrates the persona selector to gjs

Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
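The "only allow certain properties to be touched for system personas" rule above can be sketched as a plain-Ruby check. The allow-list and method name below are assumptions for illustration, not the plugin's actual schema or API:

```ruby
# Hedged sketch: seeded system personas are mostly locked down, and an
# edit is rejected if it changes anything outside an allow-list.
# EDITABLE_ON_SYSTEM_PERSONAS is a hypothetical attribute list.
EDITABLE_ON_SYSTEM_PERSONAS = %w[enabled allowed_group_ids priority].freeze

# Returns the changed attributes that a system persona edit may NOT touch.
def illegal_system_edits(changed_attributes)
  changed_attributes - EDITABLE_ON_SYSTEM_PERSONAS
end

illegal_system_edits(%w[enabled name]) # => ["name"] (edit rejected)
illegal_system_edits(%w[enabled])      # => [] (edit allowed)
```

In a real model this would live in a custom validation that adds an error whenever the returned list is non-empty.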
70 lines
2.3 KiB
Ruby
# frozen_string_literal: true

RSpec.describe DiscourseAi::Inference::AnthropicCompletions do
  before { SiteSetting.ai_anthropic_api_key = "abc-123" }

  it "can complete a trivial prompt" do
    response_text = "1. Serenity\\n2. Laughter\\n3. Adventure"
    prompt = "Human: write 3 words\n\n"
    user_id = 183
    req_opts = { max_tokens_to_sample: 700, temperature: 0.5 }

    AnthropicCompletionStubs.stub_response(prompt, response_text, req_opts: req_opts)

    completions =
      DiscourseAi::Inference::AnthropicCompletions.perform!(
        prompt,
        "claude-2",
        temperature: req_opts[:temperature],
        max_tokens: req_opts[:max_tokens_to_sample],
        user_id: user_id,
      )

    expect(completions[:completion]).to eq(response_text)

    expect(AiApiAuditLog.count).to eq(1)
    log = AiApiAuditLog.first

    request_body = { model: "claude-2", prompt: prompt }.merge(req_opts).to_json
    response_body = AnthropicCompletionStubs.response(response_text).to_json

    expect(log.provider_id).to eq(AiApiAuditLog::Provider::Anthropic)
    expect(log.request_tokens).to eq(6)
    expect(log.response_tokens).to eq(16)
    expect(log.raw_request_payload).to eq(request_body)
    expect(log.raw_response_payload).to eq(response_body)
  end

  it "supports streaming mode" do
    deltas = ["Mount", "ain", " ", "Tree ", "Frog"]
    prompt = "Human: write 3 words\n\n"
    req_opts = { max_tokens_to_sample: 300, stream: true }
    content = +""

    AnthropicCompletionStubs.stub_streamed_response(prompt, deltas, req_opts: req_opts)

    DiscourseAi::Inference::AnthropicCompletions.perform!(
      prompt,
      "claude-2",
      max_tokens: req_opts[:max_tokens_to_sample],
    ) do |partial, cancel|
      data = partial[:completion]
      content << data if data
      cancel.call if content.split(" ").length == 2
    end

    expect(content).to eq("Mountain Tree ")

    expect(AiApiAuditLog.count).to eq(1)
    log = AiApiAuditLog.first

    request_body = { model: "claude-2", prompt: prompt }.merge(req_opts).to_json

    expect(log.provider_id).to eq(AiApiAuditLog::Provider::Anthropic)
    expect(log.request_tokens).to eq(6)
    expect(log.response_tokens).to eq(3)
    expect(log.raw_request_payload).to eq(request_body)
    expect(log.raw_response_payload).to be_present
  end
end
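The streaming spec above exercises a yield-with-cancel contract: `perform!` yields each partial along with a cancel callable, and invoking it stops delivery of further deltas. That contract can be sketched in isolation; `stream_completion` below is a stand-in for illustration, not the plugin's API:

```ruby
# Stand-in for the streaming contract exercised in the spec above: each
# delta is yielded together with a cancel lambda, and calling cancel
# prevents any further deltas from being delivered.
def stream_completion(deltas)
  cancelled = false
  cancel = -> { cancelled = true }
  deltas.each do |delta|
    break if cancelled
    yield delta, cancel
  end
end

content = +""
stream_completion(["Mount", "ain", " ", "Tree ", "Frog"]) do |partial, cancel|
  content << partial
  # Stop after two whole words, exactly like the spec's block does.
  cancel.call if content.split(" ").length == 2
end
content # => "Mountain Tree " — "Frog" is never delivered
```

This mirrors why the spec expects `"Mountain Tree "`: cancellation fires after the fourth delta, so the fifth is dropped.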