discourse-ai/config
Sam a66b1042cc
FEATURE: scale up result count for search depending on model (#346)
We were limiting search to 20 results unconditionally because we had to
make sure the results always fit in an 8k context window.

Models such as GPT 3.5 Turbo (16k) and GPT 4 Turbo / Claude 2.1 (over 150k)
allow us to return a lot more results.

This gives the model a much richer understanding because the context is
far larger.

This also allows a persona to tweak this number; in some cases an admin
may want to be conservative and save on tokens by limiting results.

This also tweaks the description of the `limit` param, which GPT-4 liked to set,
telling the model to use it only when needed (and documenting the default behavior).
2023-12-11 16:54:16 +11:00
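For illustration, a minimal Ruby sketch of how a result cap could scale with a model's context window while honouring a per-persona override. The method name, model strings, token sizes, and scaling factor are assumptions for this example, not the plugin's actual implementation.

```ruby
# Minimal sketch, not the plugin's actual code: scale the number of search
# results with the model's context window, honouring a per-persona cap.
def max_search_results(model:, persona_max_results: nil)
  # A persona admin may cap results explicitly to save tokens.
  return persona_max_results if persona_max_results&.positive?

  # Rough context sizes in tokens, for illustration only.
  context_tokens =
    case model
    when "gpt-4-turbo", "claude-2.1" then 150_000
    when "gpt-3.5-turbo-16k" then 16_000
    else 8_000
    end

  # Scale results with the window, keeping the old floor of 20 results
  # for 8k models and capping the count for very large windows.
  (context_tokens / 400).clamp(20, 200)
end

max_search_results(model: "gpt-4-turbo")       # => 200
max_search_results(model: "gpt-3.5-turbo-16k") # => 40
max_search_results(model: "gpt-4")             # => 20
```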
locales FEATURE: scale up result count for search depending on model (#346) 2023-12-11 16:54:16 +11:00
routes.rb FEATURE: UI to update ai personas on admin page (#290) 2023-11-21 16:56:43 +11:00
settings.yml FEATURE: implement GPT-4 turbo support (#345) 2023-12-11 14:59:57 +11:00