3 Commits

Author SHA1 Message Date
Sam
65718f6dbe
FIX: eat all leading spaces LLMs provide when they stream them (#1280)
* FIX: eat all leading spaces LLMs provide when they stream them

* improve so we don't stop replying...
2025-04-24 22:07:26 +10:00
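
The diff itself is not shown in this log. Below is a minimal sketch of the idea behind the commit above, written in Python for illustration only (the `LeadingSpaceFilter` class and its methods are hypothetical, not the project's actual code): strip whitespace only until the first visible character arrives, and swallow all-whitespace chunks without ending the stream, so the bot never stops replying.

```python
class LeadingSpaceFilter:
    """Hypothetical chunk filter: drop the whitespace some LLMs emit before
    the first real token, then pass everything else through untouched."""

    def __init__(self) -> None:
        self.seen_content = False

    def filter(self, chunk: str) -> str:
        if self.seen_content:
            return chunk
        stripped = chunk.lstrip()
        if not stripped:
            # Pure-whitespace chunk: swallow it, but keep the stream open
            # instead of ending the reply.
            return ""
        self.seen_content = True
        return stripped


# Usage sketch with a fake token stream.
filt = LeadingSpaceFilter()
pieces = ["   ", " \n", "  Hello", " world"]
print("".join(filt.filter(p) for p in pieces))  # -> "Hello world"
```

Returning an empty string for whitespace-only chunks (rather than treating them as the end of the stream) is what the "improve so we don't stop replying" follow-up alludes to.
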
Sam
2060426709
FIX: guard against situations where there is no reply, pass thread id (#1279)
2025-04-24 20:31:14 +10:00
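
Only the commit title is available here. A hypothetical guard in the same spirit, sketched in Python (`create_chat_message` stands in for whatever actually posts the message and is not the project's API): skip posting when the model produced no reply, and pass the thread id through explicitly.

```python
from typing import Callable, Optional


def post_streamed_reply(
    reply: Optional[str],
    thread_id: Optional[int],
    create_chat_message: Callable[..., None],
) -> None:
    """Hypothetical guard: only post when the LLM actually produced a reply,
    and always pass the thread id through so the message lands in the
    right chat thread."""
    if reply is None or not reply.strip():
        return  # no reply was generated; do nothing rather than raise
    create_chat_message(raw=reply, thread_id=thread_id)


# Usage sketch with a stand-in poster.
post_streamed_reply("hello", thread_id=42,
                    create_chat_message=lambda **kw: print(kw))
post_streamed_reply(None, thread_id=42,
                    create_chat_message=lambda **kw: print(kw))  # silently skipped
```
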
Sam
2a5c60db10
FEATURE: display more places where AI is used / Chat streamer (#1278)
* FEATURE: display more places where AI is used

- Usage was not showing automation or image caption in the LLM list.
- Also fixed: reasoning models would incorrectly time out after 60 seconds (the limit was raised to 10 minutes).

* correct the enum so it does not enumerate non-configured models

* FEATURE: implement chat streamer

This implements a basic chat streamer. It provides two things:

1. Gives feedback to the user while the LLM is generating
2. Streams content to the client much more efficiently (each call to update the chat can take 100ms or so)
2025-04-24 16:22:19 +10:00
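
The commit explains why a streamer is needed (each chat update call can take around 100ms) but not how it is built. A minimal Python sketch of one way to batch streamed tokens under that assumption; all names here are hypothetical and this is not the project's actual implementation:

```python
import time
from typing import Callable


class ChatStreamer:
    """Hypothetical sketch: batch streamed LLM tokens into periodic chat
    message updates, since each update call can take ~100ms."""

    def __init__(self, append_to_message: Callable[[str], None],
                 min_interval: float = 0.5) -> None:
        self.append_to_message = append_to_message  # e.g. edits the chat post
        self.min_interval = min_interval
        self.buffer: list[str] = []
        self.last_flush = time.monotonic()

    def on_chunk(self, chunk: str) -> None:
        # Accumulate tokens; only hit the expensive update path periodically.
        self.buffer.append(chunk)
        if time.monotonic() - self.last_flush >= self.min_interval:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.append_to_message("".join(self.buffer))
            self.buffer.clear()
        self.last_flush = time.monotonic()

    def done(self) -> None:
        # Final flush so trailing tokens are never dropped.
        self.flush()


# Usage sketch: stream a few chunks, then finish.
streamer = ChatStreamer(append_to_message=lambda text: print(text, end=""))
for token in ["Hello", ", ", "world", "!"]:
    streamer.on_chunk(token)
streamer.done()
```

Flushing on a timer rather than per token keeps the message visibly updating for the user (point 1) while capping the number of expensive update calls (point 2).
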