Add a modifier that will allow us to tune the results returned by suggested topics.
At the moment the modifier allows us to toggle including random results.
This was created for the discourse-ai module, which needs to switch off random
results when it returns related topics.
Longer term we can use it to toggle unread/new and other aspects.
This also demonstrates how to test the contract when adding modifiers.
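A minimal sketch of how a plugin might use this, assuming the DiscoursePluginRegistry modifier API described further down in this log; the modifier name here is hypothetical:
```ruby
# Hypothetical modifier name; illustrative only. The plugin overrides the
# default value so random results are excluded while it is enabled.
DiscoursePluginRegistry.register_modifier(plugin_instance, :include_random_suggested) { |_include_random| false }
```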
topic.url is a dangerous method to use: it is incompatible with our `sign_in`
helper.
This removes a bunch of places where it was erroneously used and helps to avoid
future cargo culting.
Similar to the _map added for group_list SiteSettings in
e62e93f83a, this commit adds
the same extension for simple and compact `list` type SiteSettings,
so developers do not have to do the `.to_s.split("|")` dance
themselves all the time.
For example:
```
SiteSetting.markdown_linkify_tlds
=> "com|net|org|io|onion|co|tv|ru|cn|us|uk|me|de|fr|fi|gov|ddd"
SiteSetting.markdown_linkify_tlds_map
=> ["com", "net", "org", "io", "onion", "co", "tv", "ru", "cn", "us", "uk", "me", "de", "fr", "fi", "gov"]
```
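A minimal sketch of the kind of extension this enables, assuming the generated reader simply splits the raw value on `|`; the method names here are illustrative, not the actual implementation:
```ruby
# Illustrative only: generate a `<setting>_map` class-level reader that splits
# the raw pipe-delimited list value into an array.
def self.define_map_reader(setting_name)
  define_singleton_method("#{setting_name}_map") do
    public_send(setting_name).to_s.split("|")
  end
end
```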
There is no need to validate the user's emails when
promoting or demoting their trust level. Doing so can cause
issues in things like Jobs::Tl3Promotions, and we don't
need to fail in that case when all we are doing is changing
the trust level.
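As a rough sketch of the idea (not necessarily the exact change made here), the trust level can be persisted without re-running unrelated model validations:
```ruby
# Illustrative only: change the trust level without failing on, e.g., an
# email validation that has nothing to do with this update.
user.trust_level = TrustLevel[3]
user.save(validate: false)
```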
Introduces a new API for plugin data modification without class-based extension overhead.
This commit introduces a new API that allows plugins to modify data in cases where they return different data rather than additional data, as is common with filtered_registers in DiscoursePluginRegistry. This API removes the need for defining class-based extension points.
When a plugin registers a modifier, it will automatically be called if the plugin is enabled. The core will then modify the parameter sent to it using the block registered by the plugin:
```ruby
DiscoursePluginRegistry.register_modifier(plugin_instance, :magic_sum_modifier) { |a, b| a + b }
sum = DiscoursePluginRegistry.apply_modifier(:magic_sum_modifier, 1, 2)
expect(sum).to eq(3)
```
Key features of these modifiers:
- Operate in a stack (first registered, first called)
- Automatically disabled when the plugin is disabled
- Pass the cumulative result of all block invocations to the caller
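A small sketch of the stacking behaviour, assuming each registered block receives the result of the previous one; the modifier name and blocks are made up for the example:
```ruby
DiscoursePluginRegistry.register_modifier(plugin_instance, :example_chain) { |value| value + 1 }
DiscoursePluginRegistry.register_modifier(plugin_instance, :example_chain) { |value| value * 10 }

# First registered, first called; the second block receives the first block's result.
DiscoursePluginRegistry.apply_modifier(:example_chain, 2) # => (2 + 1) * 10 => 30
```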
The following are the changes being introduced in this commit:
1. Instead of mapping the query language to various query params on the
client side, we've decided that the benefits of having a more robust
query language far outweigh the benefits of having more human-readable query params in the URL.
As such, the `/filter` route will just accept a single `q` query param
and the query string will be parsed on the server side (an example URL is shown below).
1. On the `/filter` route, the tags filtering query language is now
supported in the input per the example provided below:
```
tags:bug+feature tagged both bug and feature
tags:bug,feature tagged either bug or feature
-tags:bug+feature excluding topics tagged bug and feature
-tags:bug,feature excluding topics tagged bug or feature
```
The `tags` filter can also be specified multiple
times in the query string, like so: `tags:bug tags:feature`, which will
filter for topics that have both the `bug` tag and the `feature` tag. More
complex queries like `tags:bug+feature -tags:experimental` will also work.
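For example, combining the points above, a filtered list could be requested with everything carried in the single `q` param (illustrative encoding):
```
/filter?q=tags%3Abug%2Bfeature+-tags%3Aexperimental
```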
When the time since user.last_seen was less than push_notification_time_window_mins, we
were delaying the notification for the whole
push_notification_time_window_mins PLUS the time the user had already been away.
Originally reported in https://meta.discourse.org/t/-/259688
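A minimal sketch of the intended behaviour, using assumed variable names rather than the actual code:
```ruby
# Illustrative only: delay by the remainder of the window, not the full
# window on top of the time the user has already been away.
time_away = Time.zone.now - user.last_seen_at
delay = [SiteSetting.push_notification_time_window_mins.minutes - time_away, 0].max
```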
`default_categories_*` site settings update the category preferences on user creation. But they shouldn't update the user's category preference if a group's setting has already updated it for that user.
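A rough sketch of the guard described above, assuming the preference lives in CategoryUser; `category_id` and `level` stand for the default being applied:
```ruby
# Illustrative only: don't let the default_categories_* defaults overwrite a
# preference that a group's settings already created for this user.
unless CategoryUser.exists?(user_id: user.id, category_id: category_id)
  CategoryUser.create!(user_id: user.id, category_id: category_id, notification_level: level)
end
```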
This change lays the groundwork for allowing us to filter topic lists
by tags in the following ways (illustrated below):
1. Filter for topics that match all tags in a given set of tags
2. Filter for topics that match any tag in a given set of tags
3. Exclude topics that match all tags in a given set of tags
4. Exclude topics that match any tag in a given set of tags
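For illustration, using the query syntax from the `/filter` entry above, these map roughly to:
```
tags:bug+feature      (1) all tags in the set
tags:bug,feature      (2) any tag in the set
-tags:bug+feature     (3) exclude topics matching all tags in the set
-tags:bug,feature     (4) exclude topics matching any tag in the set
```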
Before, incorrectly filled fields were marked with a red border. Now, additional information is displayed under the field to tell the user what is incorrect.
/t/93696
When we renamed `default_categories_regular` to `default_categories_normal`, we missed a site setting validation method. This allowed duplicate category ids in the `default_categories_normal` site setting and caused problems in the user registration process.
5176c689e9
When setting the ACL for optimized images after setting the
ACL for the linked upload (e.g. via the SyncACLForUploads job),
we were using the optimized image path as the S3 key. This worked
for single sites; however, it would fail silently for multisite
instances, since the path would be incorrect because Discourse.store.upload_path
was not included.
For example, something like this:
somecluster1/optimized/2X/1/3478534853498753984_2_1380x300.png
Instead of:
somecluster1/uploads/somesite1/2X/1/3478534853498753984_2_1380x300.png
The silent failure is still intentional, since we don't want to
break other things because of ACL updates, but now we will update
the ACL correctly for optimized images on multisite instances.
Change the styling of the filter input and remove the button.
This follows the same design pattern we use for search: in the search dropdown we do not have a button to search; we rely on pressing Enter. I've also provided an example of GitHub's PR filter UI at the bottom of this comment.
We also do not have buttons like this on any other topic-list header. On tag and category dropdowns, we also rely on pressing Enter to filter the topic list by the chosen categories and tags.
Co-authored-by: Jordan Vidrine <jordan@jordanvidrine.com>
Our SafeMigrate system is designed to prevent tables/columns from being dropped in pre-deploy migrations. Its regex-based detection was triggering incorrectly on `ALTER COLUMN DROP NOT NULL`.
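A hedged sketch of the distinction the detection needs to make; this is not the actual SafeMigrate pattern:
```ruby
# Illustrative pattern: match real table/column drops but not
# "ALTER COLUMN ... DROP NOT NULL".
UNSAFE_DROP = /\bDROP\s+(TABLE|COLUMN)\b/i

UNSAFE_DROP.match?("ALTER TABLE topics DROP COLUMN views")
# => true (should be blocked pre-deploy)
UNSAFE_DROP.match?("ALTER TABLE topics ALTER COLUMN views DROP NOT NULL")
# => false (safe, should not trigger)
```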
There was a lot of duplication in the svg parsing and coercion code. This reduces that duplication and causes svg sprite parsing to happen earlier so that more computation is cached.
During search indexing we "stuff" the index with additional keywords for
entities that look like domain names.
This allows searches for `cnn` to find URLs for `www.cnn.com`.
The search stuffing attempted to keep the index aligned at the correct positions
by remapping the indexed terms. However, in certain edge cases a single
word can stem into 2 different lexemes. When this happened we had an off-by-one
error which caused the entire indexing to fail.
We now work around this edge case (and carry incorrect index positions) for cases
like this. It is unlikely to impact search quality at all, given that index position
makes almost no difference in the search algorithm.
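For reference, the lexemes and positions Postgres produces for a piece of text can be inspected directly; this only illustrates where an extra lexeme can come from and is not the indexing code itself:
```ruby
# Illustrative only: show the lexemes/positions Postgres generates; a single
# input word can occasionally yield more than one lexeme.
DB.query_single("SELECT to_tsvector('english', 'search for www.cnn.com')::text")
```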