This adds a special filter to topic lists that will filter to tracked and
watched categories.
To use it you can visit:
`https://sitename/?filter=tracked`
`https://sitename/unread?filter=tracked`
and so on.
Note, we do not include explicitly tracked and watched topics **outside** of
the tracked categories and tags.
We can consider a `filter=all_tracked` to cover this edge case.
Before this commit, if you were bulk removing group members and passed in
a user who wasn't currently a member of that group, the whole request
would fail. The request now returns a 200 response listing the users
that were removed and those that were skipped.
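For illustration, a hypothetical sketch of the response shape; the key names are assumptions, not the exact serializer output:
```
# Hypothetical response body: the request succeeds and reports both groups.
response = {
  success: "OK",
  usernames: ["alice"],         # removed from the group
  skipped_usernames: ["bob"],   # not a member, skipped instead of failing
}
```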
To reproduce the initial issue here:
1. A user makes a post, which discourse-akismet marks as spam (I cheated and called `DiscourseAkismet::PostsBouncer.new.send(:mark_as_spam, post)` for this)
2. The post lands in the review queue
3. The category the topic is in has a `reviewable_by_group_id`
4. A user in that group goes and looks at the Review queue, decides the post is not spam, and clicks Not Spam
5. Weird stuff happens because the `PostDestroyer#recover` method didn't handle this (the user who clicked Not Spam was not the owner of the post and was not a staff member, so the post didn't get un-destroyed and post counts didn't get updated)
Users who belong to a group that can review a category now have the ability to fully recover/delete posts.
This adds an option to "delete on owner reply" to bookmarks. If you select this option in the modal, then reply to the topic the bookmark is in, the bookmark will be deleted on reply.
This PR also changes the checkboxes for these additional bookmark options to an Integer column in the DB, with a combobox to select the option you want (sketched after the list below).
The use cases are:
* Sometimes I bookmark a topic to read later. In this case we definitely don't need to keep the bookmark after I have replied to it.
* Sometimes I read a topic on mobile but prefer to reply on a PC later, or I may have to do some research before replying, so I bookmark it to reply to later.
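A minimal sketch, with hypothetical names, of the option-column change mentioned above:
```
# Collapse the option checkboxes into a single integer enum column
# driven by a combobox; names are assumptions, not the real values.
class Bookmark < ActiveRecord::Base
  enum auto_delete_preference: {
    never: 0,
    when_reminder_sent: 1,
    on_owner_reply: 2,
  }
end
```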
* FEATURE: Allow List for PMs
This feature adds a new user setting, disabled by default, that allows
users to specify a list of people who are allowed to send them private
messages. This way they don't have to maintain a large list of users
they don't want to hear from and can instead just list the people they
know they do want to hear from. Staff will still always be able to send
messages to the user.
* Update PR based on feedback
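For illustration, a hedged sketch of the messageability check; the flag and list names are assumptions, not the real columns:
```
# Staff always pass; when the recipient has enabled an allow list,
# only listed users may message them.
def can_send_pm_to?(sender, recipient)
  return true if sender.staff?
  return true unless recipient.enable_allowed_pm_users # hypothetical flag
  recipient.allowed_pm_usernames.include?(sender.username)
end
```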
In 1bd8a075, a hidden site setting was added that causes Email::Styles
to treat its input as a complete document in all cases.
This commit enables that setting by default.
Some tests were removed that were broken by this change. They tested the
behaviour of applying email styles to empty strings. They weren't useful
because:
* Sending an empty email is not something we ever intend to do,
* They were testing incidental behaviour - there are lots of
valid ways to process the empty string,
* Their intent wasn't clear from their descriptions.
It seems there was a discrepancy: background images were attached to the
full slug category class `category-:slug-:id`, while our body class only
had `category-:slug`.
This fix adds support for both formats.
* Remove unneeded bookmark name index.
* Change bookmark search query to use post_search_data. This allows searching on topic title and post content
* Tweak the style/layout of the bookmark list so the search looks better and the whole page fits better on mobile.
* Added scopes UI
* Create scopes when creating a new API key
* Show scopes on the API key show route
* Apply scopes on API requests
* Extend scopes from plugins
* Add missing scopes. A mapping can be associated with multiple controller actions (see the sketch after this list)
* Only send scopes if the "use global key" option is disabled. Use the Discourse plugin registry to add new scopes
* Add not null validations and index for api_key_id
* Annotate model
* DEV: Move default mappings to ApiKeyScope
* Remove unused attribute and improve UI for existing keys
* Support multiple parameters separated by a comma
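For illustration, a hedged sketch of what a scope mapping can look like; the structure is illustrative, not the exact shape of ApiKeyScope's default mappings:
```
# One scope may cover several controller actions and optionally
# constrain which parameters are allowed.
scope_mappings = {
  topics: {
    read:  { actions: %w[topics#show topics#feed], params: %i[topic_id] },
    write: { actions: %w[posts#create] },
  },
}
```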
It's possible through an import or other means to have images larger
than the current max allowed image size in the db.
If this happens, the thumbnail generation job will keep running
indefinitely, trying to download a new copy of the original but
discarding it because it is larger than the max_file_size, eventually
causing this error:
`Job exception: undefined method `path' for nil:NilClass`
because the newly downloaded image is nil.
This fix stops the enqueuing of the `GenerateTopicThumbnails` job for
all images that happen to be larger than the max image size.
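A minimal sketch of the guard; the setting and job names follow the description above, but the wrapper method is hypothetical:
```
# Skip enqueuing when the original is already larger than the allowed
# maximum, since the download would be discarded anyway.
def enqueue_topic_thumbnails(topic, original_upload)
  max_size = SiteSetting.max_image_size_kb.kilobytes
  return if original_upload.filesize > max_size
  Jobs.enqueue(:generate_topic_thumbnails, topic_id: topic.id)
end
```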
Considering document length in search introduced too much variance in
our search results: it made certain searches better, but at the same
time made certain searches worse. Instead, we want a more deterministic
way of ranking search results so that it is easier to reason about why
one post ranks higher in search than another.
The long term plan to tackle repeated terms is to restrict the number of
positions for a given lexeme in our search index.
Follow up to d8c796bc4.
Note that this change increases query time by around 40% in the following
benchmark against `dev.discourse.org`, but this is a tradeoff that has to be made so that relevance
search is accurate.
```
require 'benchmark/ips'
Benchmark.ips do |x|
  x.config(time: 10, warmup: 2)
  x.report("current aggregate search query") do
    DB.exec <<~SQL
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id" FROM "posts" JOIN (SELECT *, row_number() over() row_number FROM (SELECT topics.id, min(posts.post_number) post_number FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3, 4) AND (topics.visible) AND (topics.archetype <> 'private_message') AND (post_search_data.search_data @@ TO_TSQUERY('english', '''postgres'':*ABCD')) AND (categories.id NOT IN (
SELECT categories.id WHERE categories.search_priority = 1
)
) AND ((categories.id IS NULL) OR (NOT categories.read_restricted)) GROUP BY topics.id ORDER BY MAX((
TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''postgres'':*ABCD'),
1|32
) *
(
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
)
) DESC, topics.bumped_at DESC LIMIT 51 OFFSET 0) xxx) x ON x.id = posts.topic_id AND x.post_number = posts.post_number WHERE ("posts"."deleted_at" IS NULL) ORDER BY row_number;
    SQL
  end
  x.report("current aggregate search query with proper ranking") do
    DB.exec <<~SQL
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id" FROM "posts" JOIN (SELECT *, row_number() over() row_number FROM (SELECT subquery.topic_id id, (ARRAY_AGG(subquery.post_number ORDER BY rank DESC, bumped_at DESC))[1] post_number, MAX(subquery.rank) rank, MAX(subquery.bumped_at) bumped_at FROM (SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id", (
TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''postgres'':*ABCD'),
1|32
) *
(
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
)
rank, topics.bumped_at bumped_at FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3, 4) AND (topics.visible) AND (topics.archetype <> 'private_message') AND (post_search_data.search_data @@ TO_TSQUERY('english', '''postgres'':*ABCD')) AND (categories.id NOT IN (
SELECT categories.id WHERE categories.search_priority = 1
)
) AND ((categories.id IS NULL) OR (NOT categories.read_restricted))) subquery GROUP BY subquery.topic_id ORDER BY rank DESC, bumped_at DESC LIMIT 51 OFFSET 0) xxx) x ON x.id = posts.topic_id AND x.post_number = posts.post_number WHERE ("posts"."deleted_at" IS NULL) ORDER BY row_number;
    SQL
  end
  x.compare!
end
```
```
Warming up --------------------------------------
current aggregate search query
1.000 i/100ms
current aggregate search query with proper ranking
1.000 i/100ms
Calculating -------------------------------------
current aggregate search query
18.040 (± 0.0%) i/s - 181.000 in 10.035241s
current aggregate search query with proper ranking
12.992 (± 0.0%) i/s - 130.000 in 10.007214s
Comparison:
current aggregate search query: 18.0 i/s
current aggregate search query with proper ranking: 13.0 i/s - 1.39x (± 0.00) slower
```
Indexing query strings in URLs produces inconsistent results in PG and
pollutes the search data for very little gain.
The following seems to work as expected...
```
discourse_development=# SELECT TO_TSVECTOR('https://www.discourse.org?test=2&test2=3');
to_tsvector
------------------------------------------------------
'2':3 '3':5 'test':2 'test2':4 'www.discourse.org':1
```
However, once a path is present
```
discourse_development=# SELECT TO_TSVECTOR('https://www.discourse.org/latest?test=2&test2=3');
to_tsvector
----------------------------------------------------------------------------------------------
'/latest?test=2&test2=3':3 'www.discourse.org':2 'www.discourse.org/latest?test=2&test2=3':1
```
The lexeme contains both the path and the query string.
```
discourse_development=# SELECT alias, lexemes FROM TS_DEBUG('www.discourse.org');
alias | lexemes
-------+---------------------
host | {www.discourse.org}
discourse_development=# SELECT TO_TSVECTOR('www.discourse.org');
to_tsvector
-----------------------
'www.discourse.org':1
```
Given the above lexeme, we will inject additional lexemes by splitting
the host on `.`. The actual tsvector stored will look something like
```
tsvector
---------------------------------------
'discourse':1 'discourse.org':1 'org':1 'www':1 'www.discourse.org':1
```
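A sketch of the splitting rule in plain Ruby (the helper name is illustrative): emit each label and each suffix of the host alongside the original lexeme.
```
def host_lexemes(host)
  parts = host.split(".")
  lexemes = []
  parts.each_index do |i|
    lexemes << parts[i]             # each label: www, discourse, org
    lexemes << parts[i..].join(".") # each suffix: www.discourse.org, discourse.org, org
  end
  lexemes.uniq.sort
end

host_lexemes("www.discourse.org")
# => ["discourse", "discourse.org", "org", "www", "www.discourse.org"]
```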
Previously, we would only take either the `MIN` or `MAX` of
`post_number` during aggregation, meaning that the ranking was not
considered.
```
require 'benchmark/ips'
Benchmark.ips do |x|
  x.config(time: 10, warmup: 2)
  x.report("current aggregate search query") do
    DB.exec <<~SQL
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id" FROM "posts" JOIN (SELECT *, row_number() over() row_number FROM (SELECT topics.id, min(posts.post_number) post_number FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3, 4) AND (topics.visible) AND (topics.archetype <> 'private_message') AND (post_search_data.search_data @@ TO_TSQUERY('english', '''postgres'':*ABCD')) AND (categories.id NOT IN (
SELECT categories.id WHERE categories.search_priority = 1
)
) AND ((categories.id IS NULL) OR (NOT categories.read_restricted)) GROUP BY topics.id ORDER BY MAX((
TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''postgres'':*ABCD'),
1|32
) *
(
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
)
) DESC, topics.bumped_at DESC LIMIT 51 OFFSET 0) xxx) x ON x.id = posts.topic_id AND x.post_number = posts.post_number WHERE ("posts"."deleted_at" IS NULL) ORDER BY row_number;
    SQL
  end
  x.report("current aggregate search query with proper ranking") do
    DB.exec <<~SQL
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id" FROM "posts" JOIN (SELECT *, row_number() over() row_number FROM (SELECT subquery.topic_id id, (ARRAY_AGG(subquery.post_number))[1] post_number, MAX(subquery.rank) rank, MAX(subquery.bumped_at) bumped_at FROM (SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id", (
TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''postgres'':*ABCD'),
1|32
) *
(
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
)
rank, topics.bumped_at bumped_at FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3, 4) AND (topics.visible) AND (topics.archetype <> 'private_message') AND (post_search_data.search_data @@ TO_TSQUERY('english', '''postgres'':*ABCD')) AND (categories.id NOT IN (
SELECT categories.id WHERE categories.search_priority = 1
)
) AND ((categories.id IS NULL) OR (NOT categories.read_restricted))) subquery GROUP BY subquery.topic_id ORDER BY rank DESC, bumped_at DESC LIMIT 51 OFFSET 0) xxx) x ON x.id = posts.topic_id AND x.post_number = posts.post_number WHERE ("posts"."deleted_at" IS NULL) ORDER BY row_number;
    SQL
  end
  x.compare!
end
```
```
Warming up --------------------------------------
current aggregate search query
1.000 i/100ms
current aggregate search query with proper ranking
1.000 i/100ms
Calculating -------------------------------------
current aggregate search query
17.726 (± 0.0%) i/s - 178.000 in 10.045107s
current aggregate search query with proper ranking
17.802 (± 0.0%) i/s - 178.000 in 10.002230s
Comparison:
current aggregate search query with proper ranking: 17.8 i/s
current aggregate search query: 17.7 i/s - 1.00x (± 0.00) slower
```
On large topics, the cost of sending the entire post ID list back over to the database is significant. Just have the DB recalculate the list of visible posts instead.
Category and tag hashtags used to be handled differently even though
most of the code was very similar. This design was the root cause of
multiple issues related to hashtags.
This commit reduces the number of requests (just one, debounced
better), removes the use of CSS classes which marked resolved hashtags,
and simplifies a lot of the code now that there is a single source of
truth; previous race-condition fixes are now unnecessary.
It also includes a very minor security fix: unauthorized users could
previously guess hidden tags.
When creating a bookmark reminder that deletes the bookmark once the reminder is sent, if the user clicked on the notification and was taken to the post in the topic, the bookmark icon still showed as blue with the reminder clock indicator. This was because the response JSON for reloading a topic post was not including the bookmark attributes, not even the bookmarked boolean.
We now return the correct attributes in the serializer, and if bookmarked is false we clear all the bookmark-related attributes on the post to make sure nothing of the old bookmark remains in the UI.
This was only a problem if the user did not refresh the app completely in between setting the reminder and receiving the notification.
It's a little awkward to test constants by re-assigning them, so
I've added a new parameter to `Discourse.find_compatible_resource`
which can be used by tests.
Instead of loading all of the user bookmarks using all the post IDs in a topic, load all the bookmarks for a user using the topic ID. This eliminates a costly WHERE ID IN query.
We have a couple of examples of enormous amounts of text being entered in the name column of bookmarks. This is not desirable; the name is just meant to be a short note / reminder of why you bookmarked this.
This PR caps the column at 100 characters and truncates existing names in the database to 100 characters.
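A hedged sketch of such a migration (the class name is illustrative):
```
class LimitBookmarkNameLength < ActiveRecord::Migration[6.0]
  def up
    # Truncate existing names before tightening the column limit.
    execute "UPDATE bookmarks SET name = LEFT(name, 100) WHERE LENGTH(name) > 100"
    change_column :bookmarks, :name, :string, limit: 100
  end

  def down
    change_column :bookmarks, :name, :string
  end
end
```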
* FIX: Improve category hashtag lookup
This commit improves support for sub-sub-categories and does not include
the ID of the category in the slug, which fixes the composer preview.
* FIX: Sub-sub-categories can be mentioned using only two levels
* FIX: Remove support for three-level hashtags
* DEV: Simplify code
Adds a new rake task `plugin:checkout_compatible_all` and
`plugin:checkout_compatible[plugin-name]` that check out compatible plugin
versions.
Supports a .discourse-compatibility file in the root of plugins and themes that
lists out a plugin's compatibility with certain Discourse versions:
eg: .discourse-compatibility
```
2.5.0.beta6: some-git-hash
2.4.4.beta4: some-git-tag
2.2.0: git-reference
```
This ensures older Discourse installs are able to find and install older
versions of plugins without intervention, through the manifest only.
It iterates through the versions in descending order. If the current Discourse
version matches an item in the manifest, it checks out the listed plugin target.
If the Discourse version is greater than an item in the manifest, it checks out
the next highest version listed in the manifest.
If no versions match, it makes no change.
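A hedged sketch of that resolution rule, using `Gem::Version` for comparison (the method and variable names are illustrative):
```
require "yaml"

# Pick the lowest manifest version that is >= the current Discourse
# version; nil means no compatible entry, so no change is made.
def compatible_ref(manifest_yaml, discourse_version)
  current = Gem::Version.new(discourse_version)
  YAML.safe_load(manifest_yaml)
    .map { |version, ref| [Gem::Version.new(version), ref] }
    .sort_by(&:first)
    .find { |version, _| version >= current }
    &.last
end

manifest = "2.5.0.beta6: some-git-hash\n2.4.4.beta4: some-git-tag\n2.2.0: git-reference\n"
compatible_ref(manifest, "2.4.0")  # => "some-git-tag"
compatible_ref(manifest, "2.6.0")  # => nil (no change)
```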
If any value, including nil, is passed in as an argument, the default
won't be used, so we need to handle a non-Array value being passed
to the `generate_thumbnails!` method.
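A minimal sketch of the guard, assuming the signature described above:
```
# Coerce the argument so an explicit nil or a scalar behaves like the
# Array default.
def generate_thumbnails!(extra_sizes = [])
  extra_sizes = Array(extra_sizes) # nil => [], 100 => [100]
  # ... enqueue thumbnail generation for each requested size ...
end
```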
This is a very expensive process, and it should only be required in exceptional circumstances. It is possible to run a similar recovery using `rake uploads:recover` (5284d41a8e/lib/upload_recovery.rb (L135-L184))
`OptimizedImage#filesize` calls `Discourse.store.download` with an OptimizedImage as an argument. It would in turn attempt to call `#original_filename` and `#secure?` on that object. Both would fail as these methods do not exist on OptimizedImage, only on Upload. We didn't know about these issues because:
1. `#calculate_filesize` is not called often, because the filesize is saved on OptimizedImage creation, so it's used mostly for manual filesize recalculation
2. we were using `rescue nil` which swallows all errors
Previously, while generating the topic page's canonical URL, we used the current post number. This created an invalid canonical path if the topic had whisper posts. Now we only take visible posts into account when calculating the current page index.
The previous fix (f43c0a5d85) wasn't working for images that were already uploaded.
The "metadata" (eg. 'for_*' and 'secure' attributes) were not added to existing uploads.
Also used 'Upload.get_from_url' in the admin/site_setting controller to properly retrieve
an upload from its URL.
Fixed the Upload::URL_REGEX to use \h (hexadecimal) for the SHA
Follow-up-to: f43c0a5d85
When uploading an image as a site setting, we need to return the "raw" URL, otherwise
when saving the site setting, the upload won't be looked up properly.
Follow-up-to: f11363d446
* FIX: Correct version comparison logic when comparing stable to beta
For example, version 1.3.0 should be considered higher than 1.3.0.beta3. So `Discourse.has_needed_version?('1.3.0', '1.3.0.beta3')` should return true
* Switch to use Gem::Version to compare versions
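This works because `Gem::Version` treats `.betaN` segments as prereleases, so a stable release compares higher than its betas:
```
require "rubygems"

Gem::Version.new("1.3.0") > Gem::Version.new("1.3.0.beta3")  # => true
Gem::Version.new("1.3.0.beta3").prerelease?                  # => true
```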
When rebaking a post we were invalidating _regular_ oneboxes but not inline oneboxes.
DEV: also renamed 'InlineOneboxer.purge' to 'InlineOneboxer.invalidate' to keep
the API consistent with 'Oneboxer.invalidate'
#b19dcac2 improved the serializer so it sends default notification
levels to users, to work around cases where a category edit would
result in clients being left with invalid notification state.
Unfortunately this did not address the root issue.
When we edit categories, we publish state to multiple users, which
means the serializer is executed unscoped, with no user.
The client already handles this case per:
dcad720a4c/app/assets/javascripts/discourse/app/models/site.js (L119-L119)
If a property is not shipped to it, it will leave it alone on the
existing category.
This fix ensures that these wide category info updates do not
include notification state to avoid corruption of local state.
When linking to a topic in the same Discourse, we try to onebox the link to show the title
and other various information depending on whether it's a "standard" or "inline" onebox.
However, we were not properly detecting links to topics that had no slugs (eg. https://meta.discourse.org/t/1234).
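A hedged sketch of the detection; the pattern is illustrative, not the exact one used:
```
# Make the slug segment optional so both /t/:slug/:id and /t/:id match.
TOPIC_URL = %r{/t/(?:(?<slug>[^/]+)/)?(?<topic_id>\d+)}

"https://meta.discourse.org/t/1234"[TOPIC_URL, "topic_id"]             # => "1234"
"https://meta.discourse.org/t/some-topic/1234"[TOPIC_URL, "topic_id"]  # => "1234"
```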
This causes issues when purging old users if they are set up in the
exact condition where they will be demoted into another group but also
do not have a primary email.
* FEATURE: Don't display muted/ignored users under "who liked"
Previously, if you clicked on the heart icon below a post
it would show you the avatar for a user even if you ignored or muted
them.
This commit will instead display a (?) icon. The count of likes will
remain correct, but you needn't be reminded of the person you
preferred not to see.
* Use a circle instead of (?) for unknown user
The `EXECUTE FUNCTION` syntax for `CREATE TRIGGER` statements was introduced in PostgreSQL 11. We need to replace `EXECUTE FUNCTION` with `EXECUTE PROCEDURE` in order to be able to restore backups created with PG12 on PG10.
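For illustration, the rewrite amounts to a simple substitution over the dump before restoring on PostgreSQL older than 11 (the helper name is hypothetical):
```
def downgrade_trigger_syntax(sql_dump)
  # EXECUTE FUNCTION (PG11+) => EXECUTE PROCEDURE (understood by PG10)
  sql_dump.gsub("EXECUTE FUNCTION", "EXECUTE PROCEDURE")
end
```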
Fixed bugs, added specs, extracted the upload downsizing code to a class, added support for non-S3 setups, changed it so that images aren't downloaded twice.
This code has been tested on production and successfully resized ~180k uploads.
Includes:
* DEV: Extract upload downsizing logic
* DEV: Add support for non-S3 uploads
* DEV: Process only images uploaded by users
* FIX: Incorrect usage of `count` and `exist?` typo
* DEV: Spec S3 image downsizing
* DEV: Avoid downloading images twice
* DEV: Update filesizes earlier in the process
* DEV: Return false on invalid upload
* FIX: Download images that are currently above the limit (if the image size limit is decreased, there was no way to resize those images that now fall outside the allowed size range)
* Update script/downsize_uploads.rb (Co-authored-by: Régis Hanol <regis@hanol.fr>)
This came in as a request on meta to include the raw field in the post
webhook serializer.
https://meta.discourse.org/t/-/49045/55?u=blake
Including this field can prevent needing to make a 2nd API request to
get the raw field of a post.
It would be handy down the road if we updated the webhook UI to let you
specify fields or arguments to be included in the serialized data, but
most requests I've seen to update the serializers have been valid
requests that are good to add anyway, so I don't think we have reached
that point yet.
FIX: prevent re-flagging when we have reviewed flags before
Fixes an edge case where a post can be re-flagged when:
1. A user flags it as inappropriate.
2. A moderator rejects the flag.
3. Another user re-flags the post as spam.
Before, anyone was able to re-flag as inappropriate despite it being flagged
previously. With this change, users are unable to re-flag for the same reason
regardless of reviewable status.
This test lasted about 7 years prior to it becoming flaky.
Today... for whatever reason the test suite created 100 users
prior to running this spec. So the new user becomes user id 101.
And... lo and behold:
```
1) PostSerializer a hidden post with add_raw enabled a hidden post shows the raw post only if authorized to see it
Failure/Error: expect(serialized_post_for_user(Fabricate(:user))[:raw]).to eq(nil)
expected: nil
got: "Raw contents of the post."
(compared using ==)
# ./spec/serializers/post_serializer_spec.rb:127:in `block (4 levels) in <main>'
```
We removed pry-nav a while back because it is not up to date with pry, but it is super useful. Luckily pry-byebug is here to save us all from Satan's power.
To get this to work you need to add the following to your $HOME/.pryrc file.
```
if defined?(PryByebug)
  Pry.commands.alias_command 'c', 'continue'
  Pry.commands.alias_command 's', 'step'
  Pry.commands.alias_command 'n', 'next'
  Pry.commands.alias_command 'f', 'finish'
end

Pry::Commands.command /^$/, "repeat last command" do
  pry_instance.run_command Pry.history.to_a.last
end
```
The require-ing of pry, pry-rails, and pry-byebug in specs is controlled by the IMPROVED_SPEC_DEBUGGING flag (disabled by default).
Adds new hidden site settings for rate limits:
30 for logged-in users, 15 for anons
Adds an anon cache for searching, which caches search results for 1 minute
Allow limiting the number of migrations to do at once, both so that a
high-impact migration can be spread across multiple off-peak usage hours to
reduce user impact, and to allow tests that do only a very small number for
test purposes. ("Give me a ping, Vasili. One ping only, please.")
Looks like some HTML elements like `aside` and `section` will throw an error
when checking if they are inline or not. The commit simply handles
```
Job exception: undefined method `inline?' for nil:NilClass
```
and adds a test for it.
Moves the most important checks into a linter. It gets executed by Lefthook, as well as by the docker rake task and GitHub Actions. Doing those checks in rspec takes too long, and it produces errors when the discourse:test Docker image contains old, invalid locale files.
Discourse needs a bunch of data preloaded before it can start up.
Normally we throw blobs of this into the HTML document that is requested
but in some cases that's awkward to retrieve.
For example, with Ember CLI you have a separate JavaScript application
that needs to make its own HTML.
This API endpoint returns a JSON object with all the data Discourse needs to
bootstrap and start up.
In some restricted setups, all JS payloads need tight control.
This setting bans admins from making changes to JS on the site and
requires all themes to be whitelisted before they can be used.
There are edge cases we still need to work through in this mode,
hence it is still experimental and not supported in production.
Use an example like this to enable it:
`DISCOURSE_WHITELISTED_THEME_REPOS="https://repo.com/repo.git,https://repo.com/repo2.git"`
By default this feature is not enabled and no changes are made.
One exception: the default theme id was missing a security check;
this was added for correctness.
If `default email digest frequency` was set to "Never", users would get
a `digest_after_minutes` set to `nil`, which triggered this error
in the logs if/when the site eventually changed that setting and
enabled digests:
```
NoMethodError (undefined method `>=' for nil:NilClass)
/var/www/discourse/app/mailers/user_notifications.rb:227:in `digest'
```
* FEATURE: notify admins about old credentials
Security and API keys should be renewed periodically.
This additional notification should help admins keep their Discourse safe and secure.
Previously the pull hotlinked images job was skipped after system edits. This ensured that we never had an infinite loop of system-edit/pull-hotlinked/system-edit/pull-hotlinked etc.
A side effect was that edits made by system for any other reason (e.g. API, removing full quotes) would prevent pulling hotlinked images. This commit removes the system edit check, and replaces it with another method to avoid an infinite job scheduling loop.
Hostname can vary per-site on a multisite cluster, so this change requires converting the compiler_version from a constant into a class method which is evaluated at runtime. The value is stored in the theme DistributedCache, so performance impact should be negligible.
We previously did not account for completely untagged topics when
looking at muted tags. This caused new/unread counts to be off if:
1. You had muted tags
2. You had an unread/new topic
3. This topic had no tags
This feature allows certain plugins to output tag information
to topic tracking state. This allows per-tag stats, which can be
used by sidebars and other plugins that need them.
Not enabled by default because this would add cost to a critical
query.