4 Commits

Sam
c34fcc8a95
FEATURE: forum researcher persona for deep research (#1313)
This commit introduces a new Forum Researcher persona specialized in deep forum content analysis, along with improvements to our AI infrastructure.

Key additions:

    New Forum Researcher persona with advanced filtering and analysis capabilities
    Robust filtering system supporting tags, categories, dates, users, and keywords (see the sketch after this list)
    LLM formatter to efficiently process and chunk research results
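
A minimal sketch of how such a filter might look. This is hypothetical: the Filter struct and every field name here are invented for illustration, and the plugin's real filter API may differ.

```ruby
require "date"

# Hypothetical illustration only: this Filter is invented for the sketch;
# the plugin's real filter class and options may differ.
Filter = Struct.new(:tags, :categories, :after, :usernames, :keywords, keyword_init: true)

filter = Filter.new(
  tags: %w[ai announcements],   # only topics carrying these tags
  categories: ["Support"],      # restrict results to these categories
  after: Date.new(2025, 1, 1),  # posts created on or after this date
  usernames: ["sam"],           # authored by these users
  keywords: ["streaming"]       # full-text keyword match
)

filter.to_h # => consolidated criteria handed to the persona's search step
```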

Infrastructure improvements:

    Implemented CancelManager class to centrally manage AI completion cancellations (sketched after this list)
    Replaced callback-based cancellation with a more robust pattern
    Added systematic cancellation monitoring with callbacks
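
A rough sketch of the pattern, assuming a thread-safe flag plus registered callbacks. This is a reimplementation for illustration only, not the plugin's actual CancelManager.

```ruby
# Illustrative reimplementation; the plugin's actual CancelManager may
# differ in names and details.
class CancelManager
  def initialize
    @cancelled = false
    @callbacks = []
    @mutex = Mutex.new
  end

  def cancelled?
    @mutex.synchronize { @cancelled }
  end

  # Register work to run when cancellation fires, e.g. aborting an
  # in-flight HTTP stream to the LLM.
  def add_callback(&block)
    @mutex.synchronize { @callbacks << block }
  end

  # Central entry point: flip the flag once, then notify every listener.
  def cancel!
    callbacks = @mutex.synchronize do
      return if @cancelled
      @cancelled = true
      @callbacks.dup
    end
    callbacks.each(&:call)
  end
end

manager = CancelManager.new
manager.add_callback { puts "aborting in-flight completion" }
manager.cancel! # prints "aborting in-flight completion"
```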

Other improvements:

    Added configurable default_enabled flag to control which personas are enabled by default (see the sketch after this list)
    Updated translation strings for the new researcher functionality
    Added comprehensive specs for the new components
    Renamed the existing Researcher persona to Web Researcher
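
As an illustration of the default_enabled flag, a persona might opt out of being enabled by default roughly like this. This is a hypothetical sketch; the plugin's actual persona declaration syntax may differ.

```ruby
# Hypothetical persona declaration; the plugin's real syntax may differ.
class Persona
  def self.default_enabled
    true # most personas ship enabled out of the box
  end
end

class ForumResearcher < Persona
  # New personas can ship disabled so admins opt in deliberately.
  def self.default_enabled
    false
  end
end

ForumResearcher.default_enabled # => false
```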

This change makes our AI platform more stable while adding powerful research capabilities that can analyze forum trends and surface relevant content.
2025-05-14 12:36:16 +10:00
Sam
65718f6dbe
FIX: eat all leading spaces LLMs provide when they stream them (#1280)
* FIX: eat all leading spaces LLMs provide when they stream them

* Improve the logic so we don't stop replying
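
A minimal sketch of the idea, assuming the reply arrives as a stream of text deltas. The class name and structure are invented for illustration; the real fix lives inside the plugin's streaming pipeline.

```ruby
# Sketch: swallow whitespace-only deltas until the first visible
# character, strip the leading spaces from that first real delta, then
# pass everything through untouched.
class LeadingSpaceEater
  def initialize
    @started = false
  end

  # Returns the delta to emit, or nil if the whole delta was eaten.
  def process(delta)
    return delta if @started
    stripped = delta.lstrip
    return nil if stripped.empty? # all whitespace: keep waiting, don't end the reply
    @started = true
    stripped
  end
end

eater = LeadingSpaceEater.new
["  ", " Hello", " world"].each do |chunk|
  out = eater.process(chunk)
  print out if out
end
# prints "Hello world"
```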
2025-04-24 22:07:26 +10:00
Sam
2060426709
FIX: guard against situations where there is no reply, pass thread ID (#1279)
2025-04-24 20:31:14 +10:00
Sam
2a5c60db10
FEATURE: display more places where AI is used / Chat streamer (#1278)
* FEATURE: display more places where AI is used

- Usage was not showing automation or image captioning in the LLM list.
- Also: FIX - reasoning models would incorrectly time out after 60 seconds (timeout raised to 10 minutes)

* Correct the enum so it does not enumerate non-configured models

* FEATURE: implement chat streamer

This implements a basic chat streamer (sketched below). It provides two things:

1. Gives feedback to the user while the LLM is generating
2. Streams content to the client much more efficiently (given each chat update call may take 100 ms or so)
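
A rough sketch of the buffering idea, assuming deltas can arrive faster than the client can apply them. All names and the flush interval are invented for illustration; the real streamer integrates with Discourse chat.

```ruby
# Illustrative only: batch streamed deltas and publish at most once per
# FLUSH_INTERVAL, so a slow (~100 ms) chat-update call is not paid per token.
class ChatStreamer
  FLUSH_INTERVAL = 0.3 # seconds between client updates; assumed value

  def initialize(&publish) # publish receives the full accumulated text
    @publish = publish
    @text = +""
    @dirty = false
    @last_flush = 0.0 # so the very first delta flushes immediately (instant feedback)
  end

  def <<(delta)
    @text << delta
    @dirty = true
    flush if monotonic_now - @last_flush >= FLUSH_INTERVAL
  end

  def flush
    return unless @dirty
    @publish.call(@text.dup) # one chat update per flush, not per delta
    @dirty = false
    @last_flush = monotonic_now
  end

  private

  def monotonic_now
    Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end
end

streamer = ChatStreamer.new { |text| puts "chat update: #{text.inspect}" }
["Thinking", " about", " it..."].each { |delta| streamer << delta }
streamer.flush # final flush once the LLM stops generating
```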
2025-04-24 16:22:19 +10:00