a66b1042cc
We were limiting search to 20 results unconditionally because we had to make sure results always fit in an 8k context window. Models such as GPT-3.5 Turbo (16k) and GPT-4 Turbo / Claude 2.1 (over 150k) allow us to return a lot more results, which gives the model a much richer understanding because the context is far larger. This also allows a persona to tweak this number; in some cases an admin may want to be conservative and save on tokens by limiting results. This also adjusts the `limit` param, which GPT-4 liked to set, to tell the model only to use it when it needs to (and describes the default behavior).
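As a rough illustration only (not the plugin's actual code), the result cap could be derived from the model's context window along these lines; the token budgets, model names, and function below are assumptions for the sketch:

```python
# Sketch: pick a search result limit from the model's context window.
# All numbers and names here are illustrative assumptions, not the
# plugin's real implementation.

MODEL_CONTEXT_TOKENS = {
    "gpt-3.5-turbo-16k": 16_000,
    "gpt-4-turbo": 128_000,
    "claude-2.1": 200_000,
}

TOKENS_PER_RESULT = 300      # rough budget for one rendered search result
RESERVED_TOKENS = 4_000      # leave room for the prompt and the reply
DEFAULT_MAX_RESULTS = 20     # the old unconditional cap, kept as a floor


def max_search_results(model: str, persona_limit: int | None = None) -> int:
    """Choose how many search results to return for a given model.

    A persona may override the computed value with its own limit,
    e.g. to stay conservative and save on tokens.
    """
    if persona_limit is not None:
        return persona_limit
    context = MODEL_CONTEXT_TOKENS.get(model, 8_000)
    budget = max(context - RESERVED_TOKENS, 0)
    return max(budget // TOKENS_PER_RESULT, DEFAULT_MAX_RESULTS)
```

With these assumed budgets, an 8k model stays at the old cap of 20 results, while a 200k model could return several hundred before exhausting its window.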