Also add a hard limit of 1000 users per job run so we do not clog the
scheduler.
`destroyer.destroy` runs inside a transaction, and this can have serious
complications with the open record set that `find_each` keeps while iterating.
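A minimal sketch of the approach; the job class and user scope below are
illustrative assumptions, not the actual job code:
```
class CleanUpUsersJob
  DESTROY_LIMIT = 1000

  def execute
    destroyer = UserDestroyer.new(Discourse.system_user)

    # Materialize the ids up front instead of holding find_each's open
    # cursor while destroyer.destroy runs its own transaction, and cap the
    # batch at 1000 users per run so the scheduler is not clogged.
    user_ids = User.where(active: false).limit(DESTROY_LIMIT).pluck(:id)

    user_ids.each do |id|
      user = User.find_by(id: id)
      destroyer.destroy(user) if user
    end
  end
end
```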
- Plugin developers using OpenID 2.0 should migrate to OAuth2 or OIDC. OpenID 2.0 APIs will be removed in v2.4.0
- Sites requiring Yahoo login can implement it using the OpenID Connect plugin: https://meta.discourse.org/t/103632
For more information, see https://meta.discourse.org/t/113249
In certain edge cases, the message bus won't deliver the message about the
updated review count to the user, and the count can go out of sync.
This patch synchronizes the review count every time:
1. The user visits the "Needs Review" page
2. The user performs an action
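A hypothetical helper illustrating the approach; the channel name and count
query are assumptions, not the exact Discourse implementation:
```
# Recompute the pending review count from the database and push it to the
# user over the message bus, instead of relying on incremental updates alone.
def sync_review_count!(user)
  count = Reviewable.where(status: Reviewable.statuses[:pending]).count

  MessageBus.publish(
    "/reviewable_counts",
    { reviewable_count: count },
    user_ids: [user.id]
  )
end
```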
Restructure query so it avoids ORs.
It appears Postgres picks suboptimal indexes when too many ORs exist,
no matter how trivial the conditions are.
This removes the conditionals from the query and evaluates them up front.
On meta, for my user, this made a 10x perf difference.
This boils down to either having `OR u.admin` or not having `OR u.admin` in
the query.
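An illustrative sketch of the restructuring; the scope and predicate below
are made up, but they show the shape of the change:
```
# Evaluate the admin check in Ruby up front instead of shipping
# "... OR u.admin" to Postgres, where the extra OR pushes the planner onto
# a worse index.
def unread_topics_for(user)
  scope = Topic.joins(:category)

  # Non-admins get the visibility predicate; admins skip it entirely,
  # so the query never contains the OR branch.
  scope = scope.where("NOT categories.read_restricted") unless user.admin?

  scope
end
```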
Note, to avoid race conditions we set last_unread to 10 minutes ago
if there is nothing unread.
This is safer in the case of in-progress transactions;
we don't want to lose unread state for any window of time.
This optimisation avoids large scans joining the topics table with the
topic_users table.
Previously, when a user carried a lot of read state, we would have to join
the entire read state with the topics table. This operation slowed down the
home page and every topic page. The more read state you accumulated, the
larger the impact.
The optimisation helps people who keep their unread list clean; however, if
you carry unread from years ago it will have only minimal impact.
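A sketch of the bookkeeping described above, assuming a user_stat column
along the lines of the last_unread marker mentioned in the note (names are
illustrative, not the actual schema):
```
# Keep a per-user "last unread" timestamp that bounds the scan, so we only
# join topics newer than it instead of the user's entire read state.
def update_last_unread!(user)
  oldest_unread_at =
    TopicUser
      .joins(:topic)
      .where(user_id: user.id)
      .where("topic_users.last_read_post_number < topics.highest_post_number")
      .minimum("topics.updated_at")

  # When nothing is unread, park the marker at 10 minutes ago rather than
  # "now", so rows written by still-open transactions are not skipped and
  # no unread state is lost for any window of time.
  user.user_stat.update!(last_unread_at: oldest_unread_at || 10.minutes.ago)
end
```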
A new checkbox has been added to the Tags tab of the category settings modal
which is used when some tags and/or tag groups are restricted to the category,
and all other unrestricted tags should also be allowed.
The default is the same as the previous behaviour: only the specified set of
tags and tag groups is allowed in the category.
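A rough sketch of the new behaviour; allow_global_tags stands in for the new
checkbox and the helper is illustrative, not the actual implementation:
```
def tags_allowed_in(category)
  restricted = category.tags + category.tag_groups.flat_map(&:tags)

  if category.allow_global_tags
    # Restricted tags plus every tag that is not reserved for some category.
    restricted | Tag.where.not(id: CategoryTag.select(:tag_id)).to_a
  else
    # Previous behaviour: only the category's own tags and tag groups.
    restricted
  end
end
```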
Sometimes Sidekiq is so fast that it starts jobs before transactions
have committed. This patch defers the message bus publishing until after
the transaction has committed.
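A minimal sketch of the pattern; the model and channel below are made up:
```
class ExampleRecord < ActiveRecord::Base
  # Publishing from inside the transaction lets a Sidekiq worker or
  # MessageBus subscriber react before the row is visible; deferring to
  # after_commit closes that race.
  after_commit :publish_change, on: [:create, :update]

  private

  def publish_change
    # Runs only once the surrounding transaction has committed, so anything
    # triggered by this message can already see the new data.
    MessageBus.publish("/example/#{user_id}", id: id)
  end
end
```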
Such links might be present in old PMs. For example, a notification of
outstanding flags.
New PMs should receive the correct link but this prevents 404s in the
other case.
"Rejecting" a user in the queue is equivalent to deleting them, which
would then making it impossible to review rejected users. Now we store
information about the user in the payload so if they are deleted things
still display in the Rejected view.
Secondly, if a user is destroyed outside of the review queue, it will
now automatically "Reject" that queue item.
Similarly, if a user is deactivated, the reviewable is automatically
rejected.
Before this fix, if a user was not active they'd still show in the
review queue, but without an "Approve" button, which was confusing.
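A sketch of the linkage described above; the lookup and action name are
assumptions, not the exact Reviewable API:
```
# Called when a user is destroyed or deactivated outside the review queue,
# so their pending reviewable is rejected instead of left orphaned.
def reject_pending_user_reviewable(user, performed_by)
  reviewable = ReviewableUser.find_by(
    target: user,
    status: Reviewable.statuses[:pending]
  )

  # The :reject action id is illustrative.
  reviewable&.perform(performed_by, :reject)
end
```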
Previously every rebake would remove and recreate records in this table,
which caused created_at and updated_at to keep changing.
Yes, I know the SQL is somewhat complex, but this makes quote extraction
more efficient because we do everything in two round trips.
This also removes some concurrency protection we should no longer need.
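A simplified Ruby equivalent of the behaviour; the real change does this in
raw SQL in two round trips, so treat this only as a sketch of the delta
logic:
```
# Instead of deleting and re-inserting every quoted-post row on rebake,
# remove only the rows that no longer apply and insert only the new ones,
# so untouched rows keep their created_at / updated_at.
def sync_quoted_posts(post, quoted_post_ids)
  existing = QuotedPost.where(post_id: post.id).pluck(:quoted_post_id)

  QuotedPost
    .where(post_id: post.id, quoted_post_id: existing - quoted_post_ids)
    .delete_all

  (quoted_post_ids - existing).each do |quoted_id|
    QuotedPost.create!(post_id: post.id, quoted_post_id: quoted_id)
  end
end
```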
Some sites have external URLs that don't even match `%/uploads/%` and
some sites surprise me with URLs that contain the default path when the
site is part of a multisite cluster. We can't do anything about those.
User cards triggered in the header were incorrectly positioned in Safari desktop.
Using `position()` instead of `offset()` is more consistent, since the header is a fixed element in this scenario.
If the post ids keep loading, we might end up in a situation where
we're always loading the same post ids over and over again without
indexing anything new.
Follow up to daeda80ada.
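An illustrative sketch only, not the actual reindex job; the version column
check and batch size are assumptions:
```
# Pick posts whose search data is missing or stale. Because the indexing
# pass writes the post_search_data row for every post it touches, these ids
# drop out of the next batch instead of being fetched over and over.
def reindex_next_batch(index_version:, limit: 10_000)
  post_ids = Post
    .joins("LEFT JOIN post_search_data ON post_search_data.post_id = posts.id")
    .where("post_search_data.post_id IS NULL OR post_search_data.version < ?", index_version)
    .order("posts.id DESC")
    .limit(limit)
    .pluck("posts.id")

  Post.where(id: post_ids).find_each { |post| SearchIndexer.index(post, force: true) }
end
```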
Adds the parallel_tests gem, and Redis/Postgres configuration for running RSpec tests in parallel. To use:
```
rake parallel:rake[db:create]
rake parallel:rake[db:migrate]
rake parallel:spec
```
This brings the test suite from 12m20s to 3m11s on my macOS machine.
Handle the case of https://github.com/discourse/DiscoTOC doing this kind of setup:
```
return {
action: "insertDtoc",
icon: "align-left",
label: themePrefix("insert_table_of_contents"),
condition: !composerController.get("model.canCategorize")
};
```
In this case there's no function to call; the value is already set.
This commit fixes the following quality issues with `PostSearchData#raw_data`:
1. URLs are being tokenized, and links with similar hrefs and link text
end up duplicated in the raw data.
`Post#cooked`:
```
<p><a href=\"https://meta.discourse.org/some.png\" class=\"onebox\" target=\"_blank\" rel=\"nofollow noopener\">https://meta.discourse.org/some.png</a></p>
```
`PostSearchData#raw_data` Before:
```
This is a test topic 0 Uncategorized https://meta.discourse.org/some.png discourse org/some png https://meta.discourse.org/some.png discourse org/some png
```
`PostSearchData#raw_data` After:
```
This is a test topic 0 Uncategorized https://meta.discourse.org/some.png meta discourse org
```
2. Lightbox markup being included in search pollutes the
`PostSearchData#raw_data` unnecessarily.
From 28 March 2018 to 28 March 2019, searches for the term `image` on
`meta.discourse.org` had a click-through rate of 2.1%. Non-lightboxed images are not included in indexing for search, yet we were indexing content within a lightbox. Also, searches for terms like `image` were affected because we were using `Pasted image` as the filename for
uploads that were pasted.
`Post#cooked`
```
<p>Let me see how I can fix this image<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://meta.discourse.org/some.png\" title=\"some.png\" rel=\"nofollow noopener\"><img src=\"https://meta.discourse.org/some.png\" width=\"275\" height=\"299\"><div class=\"meta\">\n<svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use xlink:href=\"#far-image\"></use></svg><span class=\"filename\">some.png</span><span class=\"informations\">1750×2000</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use xlink:href=\"#discourse-expand\"></use></svg>\n</div></a></div></p>
```
`PostSearchData#raw_data` Before:
```
This is a test topic 0 Uncategorized Let me see how I can fix this image some.png png https://meta.discourse.org/some.png discourse org/some png some.png png 1750×2000
```
`PostSearchData#raw_data` After:
```
This is a test topic 0 Uncategorized Let me see how I can fix this image
```
In terms of indexing performance, we now have to parse the given HTML
through Nokogiri twice. However, performance is not a huge worry here: scrubbing a string of length 194170 takes only 30ms,
and the indexing takes place in a background job.
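A rough sketch of the scrubbing pass; the helper name and selector are
illustrative, not the actual indexing code:
```
require "nokogiri"

def scrub_cooked_for_search(cooked_html)
  doc = Nokogiri::HTML.fragment(cooked_html)

  # Strip lightbox markup entirely so filenames, dimensions and icon text
  # never reach PostSearchData#raw_data.
  doc.css("div.lightbox-wrapper").remove

  doc.text.squeeze(" ").strip
end
```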