Sam a66b1042cc
FEATURE: scale up result count for search depending on model (#346)
We were unconditionally limiting search to 20 results because we had to
make sure the results always fit in an 8k context window.

Models such as GPT 3.5 Turbo (16k) and GPT 4 Turbo / Claude 2.1 (over 150k)
allow us to return a lot more results.

This gives the model a much richer understanding because the context is
far larger.
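
A minimal sketch of the idea, scaling the result cap to the model's context
window while keeping the old 20-result behaviour for small contexts. The
class, method, and token figures below are illustrative assumptions, not the
plugin's actual API:

```ruby
# Illustrative sketch only - names and token budgets are hypothetical,
# not the plugin's actual API.
module DiscourseAi
  module AiBot
    class SearchSizing
      # assumed context window sizes per model family
      CONTEXT_TOKENS = {
        "gpt-3.5-turbo-16k" => 16_384,
        "gpt-4-turbo" => 128_000,
        "claude-2.1" => 200_000,
      }

      # scale result count with the context window, defaulting to the
      # old 20-result cap for unknown / 8k models and capping the growth
      def self.max_results(model_name)
        tokens = CONTEXT_TOKENS.fetch(model_name, 8_192)
        [(tokens / 8_192.0 * 20).floor, 60].min
      end
    end
  end
end

DiscourseAi::AiBot::SearchSizing.max_results("gpt-3.5-turbo-16k") # => 40
DiscourseAi::AiBot::SearchSizing.max_results("gpt-4-turbo")       # => 60
```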

This also allows a persona to tweak this number; in some cases an admin
may want to be conservative and save on tokens by limiting results.
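
A hedged sketch of how a persona-level override might layer on top of the
model-based default; the option name `max_results` and the surrounding hash
shape are assumptions for illustration:

```ruby
# Hypothetical persona option hash; the real setting name may differ.
persona_options = { "search" => { "max_results" => 10 } }

model_based_default = 60 # e.g. what a large-context model would allow
configured = persona_options.dig("search", "max_results")

# an admin may set a lower cap on the persona to save tokens
results_cap = configured ? configured.to_i : model_based_default
```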

This also tweaks the description of the `limit` param, which GPT-4 liked
to set unnecessarily, to tell the model to only use it when it needs to
(and to describe the default behavior).
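
A sketch of the kind of parameter description this refers to, using a
generic tool-definition hash; the wording and structure are illustrative
assumptions, not the plugin's exact tool schema:

```ruby
# Illustrative tool parameter definition - not the plugin's exact schema.
search_tool_parameters = [
  {
    name: "search_query",
    type: "string",
    description: "Search query to run against the forum",
  },
  {
    name: "limit",
    type: "integer",
    description:
      "Maximum number of results to return. Only set this when you need " \
        "fewer results than usual; if omitted, the default is chosen " \
        "based on the model's context window.",
  },
]
```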
2023-12-11 16:54:16 +11:00

Discourse AI Plugin

Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
