Sam 316ea9624e
FIX: properly truncate !command prompts (#227)
* FIX: properly truncate !command prompts

### What is going on here?

Prior to this change, when a command was issued by the LLM, it
could hallucinate a continuation, e.g.:

```
This is what tags are

!tags

some nonsense here
```

This change introduces safeguards so `some nonsense here` does not
creep into the prompt history, poisoning the LLM's results.

This grounds the LLM much more effectively, so it forgets less
about command results.
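The safeguard described above can be sketched roughly as follows. This is an illustrative reconstruction, not the plugin's actual code: the names `COMMAND_PATTERN` and `truncate_at_command` are hypothetical, and the real implementation may detect commands differently.

```ruby
# Hypothetical sketch: truncate an LLM reply at the first line that
# issues a !command, dropping any hallucinated continuation after it.
COMMAND_PATTERN = /\A!\w+/

def truncate_at_command(reply)
  lines = reply.lines
  index = lines.index { |line| line.strip.match?(COMMAND_PATTERN) }
  return reply if index.nil?

  # Keep everything up to and including the command line itself;
  # anything after it is treated as hallucinated and discarded.
  lines[0..index].join
end

reply = <<~REPLY
  This is what tags are

  !tags

  some nonsense here
REPLY

puts truncate_at_command(reply)
# "some nonsense here" is dropped; the prompt history ends at !tags.
```

With this in place, only the text up to and including the command line is fed back into the prompt history.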

The change only impacts Claude at the moment, but will also improve
things for Llama 2 in the future.

Also, this makes it significantly easier to test the bot framework
without an LLM, because we avoid a lot of complex stubbing.

* Blank is not a valid bot response; do not inject it into the prompt
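A minimal sketch of the blank-response guard mentioned above; the helper name `valid_bot_reply?` is illustrative, not the plugin's actual API:

```ruby
# Hypothetical guard: skip blank replies entirely rather than
# injecting an empty bot turn into the prompt history.
def valid_bot_reply?(reply)
  !reply.to_s.strip.empty?
end

valid_bot_reply?("!tags")  # => true
valid_bot_reply?("  \n")   # => false, so nothing is added to the prompt
```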
2023-09-15 07:02:37 +10:00
