Commit Graph

902 Commits

David Taylor 1a10680818
DEV: Replace DSection with body-class helper (#924) 2024-11-20 14:11:15 +00:00
Sam 52c644798d
DEV: improve artifact presentation (#932)
1. Keep source in a "details" block after rendering so it does not overwhelm users

2. Ensure artifacts are never indexed by robots

3. Cache-break our CSS that changed recently
2024-11-20 18:53:19 +11:00
Sam 2652716398
UX: improve artifact styling add direct share link (#930)
Also remove unneeded sandboxing, given this is all handled by
artifacts directly
2024-11-20 13:13:03 +11:00
Sam a0aec48606
FIX: gists are not html safe (#931)
Also allow "Everyone" in ai_hot_topic_gists_allowed_groups
2024-11-20 10:54:49 +11:00
Discourse Translator Bot f09e74c05e
Update translations (#929) 2024-11-20 00:21:34 +01:00
Rafael dos Santos Silva 2c8d81827f
FIX: Misc fixes for sentiment in the admin dashboard (#928)
* FIX: Misc fixes for sentiment in the admin dashboard

- Fixes missing filters for the main graph

- Fixes previous 30 days trend in emotion table

Also moves links to individual cells in the emotion table, so admins can
drill down to the specific time period on their reports.

* lints
2024-11-19 19:16:21 -03:00
Kris a9afa04329
UX: update gist toggle styles (#926) 2024-11-19 15:33:34 -05:00
Roman Rizzi 530a795d43
FIX: Instruct AR that we want to use ai_summaries for filtering. (#927)
We use `includes` instead of `joins` because we want to eager-load summaries, avoiding an extra query when summarizing. However, Rails will complain unless you explicitly inform it that you plan to use that association inside a `WHERE` clause.
2024-11-19 17:32:13 -03:00
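
For context, the standard Active Record way to signal that an eager-loaded association is used for filtering is `references`; a minimal sketch of the pattern (the `summary_type` column is illustrative, not taken from the plugin's schema):

```ruby
# Minimal sketch: eager-load a summary association while filtering on it.
# The ai_summaries association comes from the commit above; the column is illustrative.
topics = Topic
  .includes(:ai_summaries)                       # eager-load to avoid an extra query when summarizing
  .where(ai_summaries: { summary_type: "gist" }) # filter on the eager-loaded table
  .references(:ai_summaries)                     # tell AR the association appears in the WHERE clause
```
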
Roman Rizzi dcde94a393
FIX: Reduce scope of topic gists inclusion. (#925)
The topic query is used differently, and we can't assume the modifier will always receive an AR relation. Let's scope it to `Discourse#filters` instead of applying it to most lists.
2024-11-19 15:37:19 -03:00
Roman Rizzi 3c91f374ac
FIX: Skip gists from PM topic lists (#923) 2024-11-19 12:51:19 -03:00
Roman Rizzi fb80d776d8
FEATURE: Enable gists on all topic lists (#922) 2024-11-19 11:04:34 -03:00
Rafael dos Santos Silva 48d08dedd4
FEATURE: Emotion activity metrics table (#916) 2024-11-19 10:01:10 -03:00
David Taylor b10be23533
FIX: Ensure artifacts are sandboxed, even when visited directly (#921)
It's important that artifacts are never given 'same origin' access to the forum domain, so that they cannot access cookies, or make authenticated HTTP requests. So even when visiting the URL directly, we need to wrap them in a sandboxed iframe.
2024-11-19 11:44:17 +00:00
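
A rough sketch of the idea (not the plugin's actual controller; the helper name is hypothetical): instead of serving artifact HTML on the forum origin, respond with a wrapper page whose iframe carries a `sandbox` attribute without `allow-same-origin`, so the content gets an opaque origin.

```ruby
# Hypothetical Rails controller sketch: wrap directly-visited artifacts in a sandboxed iframe.
class ArtifactsController < ApplicationController
  def show
    artifact_src = artifact_iframe_url(params[:id]) # hypothetical route helper for the raw content

    render layout: false, html: <<~HTML.html_safe
      <!DOCTYPE html>
      <html>
        <body>
          <!-- allow-scripts without allow-same-origin keeps cookies and authenticated requests off-limits -->
          <iframe sandbox="allow-scripts allow-forms" src="#{ERB::Util.html_escape(artifact_src)}"></iframe>
        </body>
      </html>
    HTML
  end
end
```
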
David Taylor 6b9c66054c
DEV: Update eslint config (#917)
* DEV: Update eslint config

* fixup

* pnpm upgrade
2024-11-19 11:57:40 +01:00
Sam 3ae1e4eaf0
FIX: properly bypass CSP for artifacts (#920)
The CSP was meant to be bypassed for artifacts, but the bypass was not implemented correctly
2024-11-19 20:25:07 +11:00
Sam 755b63f31f
FEATURE: Add support for Mistral models (#919)
Adds support for Mistral models (Pixtral and Mistral Large now have presets)

Also corrects token accounting in AWS bedrock models
2024-11-19 17:28:09 +11:00
Sam 0d7f353284
FEATURE: AI artifacts (#898)
This is a significant PR that introduces AI Artifacts functionality to the discourse-ai plugin along with several other improvements. Here are the key changes:

1. AI Artifacts System:
   - Adds a new `AiArtifact` model and database migration
   - Allows creation of web artifacts with HTML, CSS, and JavaScript content
   - Introduces security settings (`strict`, `lax`, `disabled`) for controlling artifact execution
   - Implements artifact rendering in iframes with sandbox protection
   - New `CreateArtifact` tool for AI to generate interactive content

2. Tool System Improvements:
   - Adds support for partial tool calls, allowing incremental updates during generation
   - Better handling of tool call states and progress tracking
   - Improved XML tool processing with CDATA support
   - Fixes for tool parameter handling and duplicate invocations

3. LLM Provider Updates:
   - Updates for Anthropic Claude models with correct token limits
   - Adds support for native/XML tool modes in Gemini integration
   - Adds new model configurations including Llama 3.1 models
   - Improvements to streaming response handling

4. UI Enhancements:
   - New artifact viewer component with expand/collapse functionality
   - Security controls for artifact execution (click-to-run in strict mode)
   - Improved dialog and response handling
   - Better error management for tool execution

5. Security Improvements:
   - Sandbox controls for artifact execution
   - Public/private artifact sharing controls
   - Security settings to control artifact behavior
   - CSP and frame-options handling for artifacts

6. Technical Improvements:
   - Better post streaming implementation
   - Improved error handling in completions
   - Better memory management for partial tool calls
   - Enhanced testing coverage

7. Configuration:
   - New site settings for artifact security
   - Extended LLM model configurations
   - Additional tool configuration options

This PR significantly enhances the plugin's capabilities for generating and displaying interactive content while maintaining security and providing flexible configuration options for administrators.
2024-11-19 09:22:39 +11:00
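
As a rough illustration of the three security levels described above (the setting values come from the list; the method and mode names are hypothetical):

```ruby
# Hypothetical sketch of how strict / lax / disabled could gate artifact execution.
def artifact_embed_mode(security_setting)
  case security_setting
  when "disabled" then :show_source_only     # never execute artifact code
  when "strict"   then :click_to_run_sandbox # require an explicit click before running in the sandbox
  when "lax"      then :auto_run_sandbox     # run automatically, still inside a sandboxed iframe
  else                 :show_source_only
  end
end
```
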
Rafael dos Santos Silva 4fb686a548
FIX: Move emotion /filter logic into a CTE to keep cardinality sane (#915) 2024-11-14 17:16:48 -03:00
Rafael dos Santos Silva 5026ab52d0
FEATURE: Order by emotion on /filter (#913) 2024-11-14 12:45:40 -03:00
Sam 823e8ef490
FEATURE: partial tool call support for OpenAI and Anthropic (#908)
Implements streaming tool calls for Anthropic and OpenAI.

When calling:

llm.generate(..., partial_tool_calls: true) do ...
Partials may contain ToolCall instances with partial: true. These tool calls are partially populated from the partially parsed JSON.

So for example when performing a search you may get:

ToolCall(..., {search: "hello" })
ToolCall(..., {search: "hello world" })

The library used to parse json is:

https://github.com/dgraham/json-stream

We use a fork because we need access to the internal buffer.

This prepares internals to perform partial tool calls, but does not implement it yet.
2024-11-14 06:58:24 +11:00
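
Putting the description above together, consuming partial tool calls looks roughly like this (a sketch based on the commit text; everything other than `partial_tool_calls: true` and the `ToolCall`/`partial` names is an assumption):

```ruby
# Sketch: stream partial tool calls and watch parameters fill in incrementally.
llm.generate(prompt, user: user, partial_tool_calls: true) do |partial|
  next unless partial.is_a?(DiscourseAi::Completions::ToolCall)

  if partial.partial
    # partially populated from the streamed, partially parsed JSON
    puts "search so far: #{partial.parameters[:search]}"
  else
    puts "final tool call: #{partial.parameters.inspect}"
  end
end
```
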
Keegan George f75b13c4fa
FIX: results not being reset when appending to query param (#912)
This PR fixes an issue where the AI search results were not being reset when you append your search to an existing query param (typically when you've come from quick search). This is because `handleSearch()` doesn't get called in this situation, so here we explicitly check for query params, trigger a reset, and run the search in those cases.
2024-11-13 07:19:34 -08:00
Sam 9551b1a4d1
FIX: do not strip empty string during stream processing (#911)
Fixes an issue where the OpenAI provider was eating newlines and spaces
2024-11-13 07:12:00 +11:00
Rafael dos Santos Silva aef9a03d4c
FEATURE: Truncate AI Captions to a reasonable max size (#907) 2024-11-12 15:52:46 -03:00
Sérgio Saquetim 9583964676
DEV: Added compatibility with the Glimmer Post Menu (#887) 2024-11-12 15:46:17 -03:00
Keegan George 2fc05685bb
DEV: Apply text formatting and AI feature naming conventions (#905)
This PR applies Discourse's formatting text guidelines as per [documentation on Meta](https://meta.discourse.org/t/formatting-text-in-discourse-documentation-and-uis/324637). Additionally, applies our AI feature naming conventions, standardizing feature names.
2024-11-12 08:48:30 -08:00
Kris 26bfda4763
UX: reduce topic list title size when gists are enabled (#910) 2024-11-12 09:55:07 -05:00
Discourse Translator Bot 6ef635619f
Update translations (#909) 2024-11-12 14:54:47 +01:00
Sam e817b7dc11
FEATURE: improve tool support (#904)
This re-implements tool support in DiscourseAi::Completions::Llm#generate

Previously, tool support was always returned via XML, and it was the caller's responsibility to parse that XML

New implementation has the endpoints return ToolCall objects.

Additionally this simplifies the Llm endpoint interface and gives it more clarity. Llms must implement

decode, decode_chunk (for streaming)

It is the implementer's responsibility to figure out how to decode chunks; the base class no longer implements this. To make this easy we ship a flexible JSON decoder which is easy to wire up.

Also new:

- Better debugging for PMs: we now have next/previous buttons to see all the LLM messages associated with a PM
- Token accounting is fixed for vLLM (we were not correctly counting tokens)
2024-11-12 08:14:30 +11:00
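
On the caller side this means branching on the returned object instead of parsing XML; a sketch assuming the names above (the helper methods are hypothetical):

```ruby
# Sketch: callers inspect what #generate returns rather than parsing XML themselves.
response = llm.generate(prompt, user: user)

case response
when DiscourseAi::Completions::ToolCall
  run_tool(response)     # hypothetical helper: the endpoint already decoded the tool call
when String
  render_reply(response) # hypothetical helper: plain completion text
end
```
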
Keegan George 644141ff08
FIX: Regenerate summary button still shows cached summary (#903)
This PR fixes an issue where clicking to regenerate a summary was still showing the cached summary. To resolve this we call resetSummary() to reset all the summarization-related properties before creating a new request.
2024-11-07 16:01:18 -08:00
Roman Rizzi fbc74c7467
FEATURE: Extend summary backfill to also generate gists (#896)
Updates default batch size to 0 and max to 10000
2024-11-07 13:40:18 -03:00
Keegan George c421f713a3
DEV: Handle streaming animation within `AiSummaryBox` (#901)
This PR further decouples the streaming animation by handling it entirely within the `AiSummaryBox` component. Previously, handling the streaming animation by calling methods in the `ai-streamer` API was leading to timing issues that made things fall out of sync, resulting in problems such as the last update of streamed text not being shown. Handling streaming directly in the component should simplify things drastically and prevent these issues.
2024-11-07 08:08:32 -08:00
Rafael dos Santos Silva 021e09607d
FIX: Unhide gemini api key setting for embeddings (#902) 2024-11-07 11:55:18 -03:00
Kris 893fa624e4
UX: increase gist size, and adjust surrounding elements to accommodate (#900) 2024-11-07 08:35:05 -05:00
Kris 1ad5321c09
UX: add sparkle icon to related topics for anons (#897) 2024-11-05 17:15:20 -05:00
Osama Sayegh da7c97d294
FIX: Specify type for Rag upload (#895)
Specifying `type` when using `UppyUpload` will be required as of https://github.com/discourse/discourse/pull/29600.
2024-11-05 22:10:54 +03:00
Discourse Translator Bot ab4678275d
Update translations (#894) 2024-11-05 16:55:54 +01:00
Keegan George 99282612a9
DEV: Prefer ENV key for seeded models (#893)
This PR ensures we prefer getting the API key from environment variables when it is a seeded model.
2024-11-05 06:19:13 -08:00
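
The idea is roughly the following (the class internals and the environment variable name are hypothetical, not the plugin's actual code):

```ruby
# Hypothetical sketch: seeded models read their key from the environment when available.
class LlmModel < ActiveRecord::Base
  def effective_api_key
    if seeded? && ENV["DISCOURSE_AI_SEEDED_API_KEY"].present? # hypothetical variable name
      ENV["DISCOURSE_AI_SEEDED_API_KEY"]
    else
      api_key # fall back to the key stored on the record
    end
  end
end
```
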
Roman Rizzi 9505a8976c
FEATURE: Automatically backfill regular summaries. (#892)
This change introduces a job to summarize topics and cache the results automatically. We provide a setting to control how many topics we'll backfill per hour and what the topic's minimum word count is to qualify.

We'll prioritize topics without a summary over topics with outdated ones.
2024-11-04 17:48:11 -03:00
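
A simplified sketch of such a scheduled job (setting names, the association, and the summarize call are illustrative, not the plugin's actual code):

```ruby
# Illustrative scheduled job: summarize a limited batch of qualifying topics each run,
# preferring topics with no summary at all over topics whose summary is outdated.
class BackfillTopicSummaries < ::Jobs::Scheduled
  every 1.hour

  def execute(_args)
    limit = SiteSetting.ai_summary_backfill_maximum_topics_per_hour # illustrative setting
    min_words = SiteSetting.ai_summary_backfill_minimum_word_count  # illustrative setting

    Topic
      .where("word_count >= ?", min_words)
      .left_joins(:ai_summaries)                                    # illustrative association
      .order(Arel.sql("ai_summaries.id IS NULL DESC, ai_summaries.updated_at ASC"))
      .limit(limit)
      .each { |topic| DiscourseAi::Summarization.summarize(topic) } # illustrative call
  end
end
```
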
Sam 98022d7d96
FEATURE: support custom instructions for persona streaming (#890)
This allows us to inject information into the system prompt,
which can help shape replies without repeating it over and over
in messages.
2024-11-05 07:43:26 +11:00
Jarek Radosz fa7ca8bc31
DEV: Use the new more-topics API (#885)
See: https://github.com/discourse/discourse/pull/29143
2024-11-04 17:42:50 +01:00
Rafael dos Santos Silva 772ee934ab
Migrate sentiment to a TEI backend (#886) 2024-11-04 09:14:34 -03:00
Sam bffe9dfa07
FIX: we must properly encode objects prior to escaping (#891)
In cases of arrays, escapeHTML will not work.
2024-11-04 16:16:25 +11:00
Sam c352054d4e
FIX: encode parameters returned from LLMs correctly (#889)
Fixes encoding of params on LLM function calls.

Previously we would improperly return results if a function parameter returned an HTML tag.

Additionally adds some missing HTTP verbs to tool calls.
2024-11-04 10:07:17 +11:00
Roman Rizzi 7e3a543f6f
FEATURE: Double gist length to 40 words (#888) 2024-11-01 13:09:03 -03:00
Kris 32ea421408
UX: in share, use native image dimensions and hide filename (#880) 2024-10-31 13:51:10 -04:00
Roman Rizzi e8f0633141
DEV: Extend truncation to all summarizable content (#884) 2024-10-31 12:17:42 -03:00
Roman Rizzi e8eed710e0
FIX: Truncate OP for gists to help the model focus on the latest posts (#883) 2024-10-31 10:54:56 -03:00
Roman Rizzi 32fb023357
COPY: Include model names in sentiment report descriptions (#882) 2024-10-30 15:50:28 -03:00
Roman Rizzi 00e4a84305
COPY: Update sentiment report descriptions to clarify how it works (#881) 2024-10-30 11:48:32 -03:00
Sam 34a59b623e
FIX: ensure replies are never double streamed (#879)
The custom field "discourse_ai_bypass_ai_reply" was added so
we can signal the post created hook to bypass replying even
if it thinks it should.

Otherwise there are cases where we double-answer user questions,
leading to much confusion.

This also slightly refactors code, making the controller smaller.
2024-10-30 20:24:39 +11:00
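
Conceptually the check looks something like this (only the custom field name comes from the commit above; the surrounding hook code is a sketch):

```ruby
# Sketch: short-circuit the automated reply when the bypass field has been set.
def maybe_schedule_ai_reply(post)
  return if post.custom_fields["discourse_ai_bypass_ai_reply"].present?

  schedule_bot_reply(post) # hypothetical helper that enqueues the AI reply
end

# Code that has already answered the user can mark the post before saving:
#   post.custom_fields["discourse_ai_bypass_ai_reply"] = true
#   post.save_custom_fields
```
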