To check if a post contains any embedded media, we check whether the "image_sizes" attribute is present in the new post manager arguments.
We want to show oneboxed links, but we only store the raw content of the post. To work around this, I extracted the onebox logic from the composer editor into a module.
With secure media and the UploadSecurity class, we need a nice way for plugins to register custom upload types that should be considered public and never secure.
This is intended for use by plugins which are building their own topic lists, and want to include PMs alongside regular topics (e.g. discourse-assign). It does not get used directly in core.
* DEV: write the versions of JS files to a JS file that is included by load-script and appended as params to URLs
* Formatting
* Incorporate feedback from PR
* Update filename of public-js-versions
Editing a post didn't update the `post_uploads` right away. Instead it relied on the `CookedPostProcessor`. This can lead to an inconsistent state if uploads are added or removed during an edit and, for some reason, the `ProcessPost` job doesn't run (successfully). This inconsistency leads to missing uploads, because the newly added uploads appear to be unused and will be deleted by the `CleanUpUploads` job. In addition, uploads that were removed during the edit still appear to be in use and won't be deleted by the background job.
This commit ensures that the `post_uploads` are updated during the edit without relying on a background job.
In some cases Discourse admins may opt for sessions not to persist when a
browser is closed.
This is particularly useful in healthcare and education settings where
computers are shared among multiple workers.
The `persistent_sessions` site setting is enabled by default; to opt out you
must disable the site setting.
After restoring a backup it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. To avoid alerts about missing uploads, this change stops the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore it also deletes the count of missing uploads from the PluginStore, so that an alert isn't triggered by an old number.
We must guarantee that "rel=noopener" is set whenever "target=_blank" is present, which is not always the case for trusted users. Also, if the link contains the "nofollow" attribute, it has to have the "ugc" attribute as well.
In c6ceda8c, a bug was introduced where an admin searching their own
private messages would actually end up searching through all private
messages on the site.
Follow-up to c6ceda8c4e
This PR introduces a few important changes to secure media redaction in emails. First of all, two new site settings have been introduced:
* `secure_media_allow_embed_images_in_emails`: If enabled we will embed secure images in emails instead of redacting them.
* `secure_media_max_email_embed_image_size_kb`: The cap on the size of the secure image we will embed, defaulting to 1MB so the email does not become too big (max is 10MB). Works in tandem with `email_total_attachment_size_limit_kb`.
`Email::Sender` will now attach images to the email based on these settings. The sender will also call `inline_secure_images` in `Email::Styles` after secure media is redacted and attachments are added to replace redaction messages with attached images. I went with attachment and `cid` URLs because base64 image support is _still_ flaky in email clients.
All redaction of secure media is now handled in `Email::Styles` and calls out to `PrettyText.strip_secure_media` to do the actual stripping and replacing with placeholders. `app/mailers/group_smtp_mailer.rb` and `app/mailers/user_notifications.rb` no longer do any stripping because they are earlier in the pipeline than `Email::Styles`.
Finally the redaction notice has been restyled and includes a link to the media that the user can click, which will show it to them if they have the necessary permissions.
![image](https://user-images.githubusercontent.com/920448/92341012-b9a2c380-f0ff-11ea-860e-b376b4528357.png)
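As a rough illustration of the size cap described above, here is a minimal sketch (the helper name and structure are assumptions for illustration, not the actual `Email::Sender` code; only the two site setting names come from this PR):

```
# Hypothetical helper, for illustration only.
def embeddable_in_email?(upload)
  return false unless SiteSetting.secure_media_allow_embed_images_in_emails
  # Cap each embedded image at secure_media_max_email_embed_image_size_kb (default 1MB).
  upload.filesize <= SiteSetting.secure_media_max_email_embed_image_size_kb.kilobytes
end
```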
- Lets child components extend color definitions
- Includes default theme color definitions
- Fails gracefully on color stylesheet SCSS errors
- Includes theme variables when extending colors
This PR ensures that new bookmarks cannot be created for deleted posts and topics, and also makes sure that if a bookmark was created and the topic then deleted, the show topic page does not error from trying to retrieve the bookmark's reminder_at.
Adding these classes to the stylesheet link elements in order to toggle dark/light schemes via this theme-component. Eventually this theme-component could possibly be merged into core.
DEV: add plugin hooks for silence message parameters
Allows plugins to add and update extra silence message params for custom
i18n vars.
Allows plugins to override system messages via the `message_title` and
`message_raw` parameters. We can later expose these params where necessary via event
hooks. This also exposes the parameters for the `user_silenced` trigger (see the sketch below).
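For example, a plugin could listen to that trigger roughly like this (a hedged sketch; the payload shape is an assumption, only the `user_silenced` event name comes from this change):

```
# plugin.rb
after_initialize do
  on(:user_silenced) do |details|
    # `details` is expected to carry the silenced user and the extra message params;
    # the exact keys depend on what the trigger exposes.
    Rails.logger.info("user_silenced: #{details.inspect}")
  end
end
```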
* DEV: Switch our fast_xor gem for xorcist
We use the `xor` function as part of password hashing and want it to be
faster than native Ruby XOR, so we use a gem for this (see the usage sketch
below).
fast_xor has been abandoned, and xorcist fixed our initial holdup for
switching in https://github.com/fny/xorcist/issues/4
xorcist also has jruby support so we can remove our jruby fallback
logic.
* Move using statement inside of class
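For reference, a minimal usage sketch based on xorcist's documented `xor`/`xor!` API (treat the exact calls as an assumption, not code from this change):

```
require "xorcist"

a = "abcdef".dup        # xor! mutates its first argument in place
b = "123456"
Xorcist.xor!(a, b)      # byte-wise XOR of b into a
c = Xorcist.xor("abcdef", "123456") # non-mutating variant returns a new string
```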
"en_US" doesn't contain most of the translations, so it falls back to "en". But that behavior stopped translation overrides to work for pluralized strings in "en_US", because it relies on existing translations. This fixes it by looking up the existing translation in all fallback locales.
After thinking about it, I worry that this will potentially leave a site
setting set when people hit ctrl-c ... feels a tiny bit risky, so leaving
it out.
This commit adds a new site setting "allowed_onebox_iframes". By default, all onebox iframes are allowed. When the list of domains is restricted, Onebox will automatically skip engines that require domains which are not allowed, and use a fallback engine.
This is a follow-up for f51ccea028 because specs were failing on macOS. macOS uses bsdtar by default and the --transform option isn't supported by bsdtar. But the -s option is and it works the same.
Originally disabled in 0c0192e. Upload specs now use separate paths for each spec worker.
Fixes an issue in UploadRecovery#recover_from_local – it didn't take into account the testing infix (e.g. test_0) in the uploads/tombstone paths.
With the addition of `PostSearchData#private_message`, a partial
index consisting of only search data from regular posts can be created.
The partial index helps to speed up searches on large sites since PG
will not have to do an index scan on the entire search data index, which
has been shown to be a bottleneck.
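A minimal sketch of what such a partial index can look like in a migration (index name and options are illustrative, not the exact migration in this commit):

```
class AddRegularPostSearchIndex < ActiveRecord::Migration[6.0]
  disable_ddl_transaction!

  def change
    # Only index search data for regular (non private message) posts so that
    # regular searches never have to scan the PM portion of the index.
    add_index :post_search_data, :search_data,
              using: :gin,
              where: "NOT private_message",
              algorithm: :concurrently,
              name: :idx_regular_post_search_data
  end
end
```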
- Introduces uploads:delete_missing_s3 which can be used to "give up" and
delete broken records from the database
- Fixes a bug in fix_missing_s3 - crashing on deleted posts
- Adds more info to analyze_missing_s3
The filter noops if an incorrect username is passed. This filter is not
exposed as part of the UI but is only used when an admin transitions
from a search within a user's personal messages to the full page search.
Follow-up to 4b30799054.
Note the query being generated below, where the filter for a user's
private messages is executed twice.
```sql
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id", (TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''test'':*ABCD'),
0|32
)
* (
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
) rank, topics.bumped_at topic_bumped_at FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3) AND (topics.visible) AND (topics.archetype = 'private_message' AND post_search_data.private_message) AND (posts.topic_id IN (SELECT topic_id
FROM topic_allowed_users
WHERE user_id = 99999
UNION ALL
SELECT tg.topic_id
FROM topic_allowed_groups tg
JOIN group_users gu ON gu.user_id = 99999 AND gu.group_id = tg.group_id
)) AND (post_search_data.search_data @@ TO_TSQUERY('english', '''test'':*ABCD')) AND (posts.topic_id IN (SELECT topic_id
FROM topic_allowed_users
WHERE user_id = 99999
UNION ALL
SELECT tg.topic_id
FROM topic_allowed_groups tg
JOIN group_users gu ON gu.user_id = 99999 AND gu.group_id = tg.group_id
)) AND ((categories.id IS NULL) OR (NOT categories.read_restricted) OR (categories.id IN (999999))) ORDER BY rank DESC, topic_bumped_at DESC
```
Renamed from `private_messages` to `personal_messages` without
deprecation because the `private_messages` advanced search filter never
worked in the first place when it was implemented.
This also ensures that restoring a backup works when it was created with the wrong upload paths in the time between ab4c0a4970 (shortly after v2.6.0.beta1) and this fix.
Because we allow all the other flag types on a deleted post we should be
able to send a pm to the user letting them know why we deleted their
post.
Bug report:
https://meta.discourse.org/t/-/161156
Like "default watching" and "default tracking" categories option now the "regular" categories support is added. It will be useful for sites that are muted by default. The user option will be displayed only if `mute_all_categories_by_default` site setting is enabled.
* FIX: Unlike own posts on ownership transfer
If a user has liked a post that has passed the
`post_undo_action_window_mins` system setting window and ownership of that
post is transferred to that user, they become the owner of a post they have
liked but cannot unlike, resulting in weird UI behavior. This commit
fixes this issue.
The existing tests didn't check for the timeout window for unliking
posts so I added that in.
I couldn't find a good way to do this logic inside of the guardian class
so rather than duplicating behavior of the `PostActionDestroyer` class
inside of the `PostOwnerChanger` I decided to pass in a "bypass"
variable that could be used to check if the calling class is the
'post_owner_changer' and bypass the guardian instead. I went this route
because the guardian `can_delete_post_action` method has no way to allow
a user to unlike their own posts after the timeout window only in the case
of a post owner change.
* use an options hash instead
Enabling the moderators_manage_categories_and_groups site setting will allow moderator users to create/manage groups.
* show New Group form to moderators
* Allow moderators to update groups and read logs, where appropriate
* Rename site setting from create -> manage
* improved tests
* Migration should rename old log entries
* Log group changes, even if those changes mean you can no longer see the group
* Slight reshuffle
* Route to /g if they no longer have permission to view the group
Themes can now declare custom colors that get compiled in core's color definitions stylesheet, thus allowing themes to better support dark/light color schemes.
For example, if you need your theme to use tertiary for an element in a light color scheme and quaternary in a dark scheme, you can add the following SCSS to your theme's `color_definitions.scss` file:
```
:root {
--mytheme-tertiary-or-quaternary: #{dark-light-choose($tertiary, $quaternary)};
}
```
And then use the `--mytheme-tertiary-or-quaternary` variable as the color property of that element. You can also use this file to add color variables that use SCSS color transformation functions (lighten, darken, saturate, etc.) without compromising your theme's compatibility with different color schemes.
This avoids the SameSite cookie-related error on Chrome. It also adds support for Twitter 'GIF' content, and allows videos to resize smoothly for narrow devices.
Convert all IMAP logging to write to a database table for easier inspection. These logs are cleaned up daily if they are > 5 days old.
Logs can easily be watched in dev by setting DISCOURSE_DEV_LOG_LEVEL="debug" and running tail -f development.log | grep IMAP
Previously we did an early return if either SiteSetting.tagging_enabled or SiteSetting.allow_staff_to_tag_pms was false when updating the email on the IMAP server -- however this also stopped us from archiving or deleting emails if either of these were disabled.
Adds support for fetching git commits when they do not exist locally, e.g. with
`clone --depth 1`. Commits can only be fetched via `git fetch --depth 1 {remote} {ref}`,
and the ref needs to be a full, non-ambiguous reference.
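For example, something along these lines (a hedged sketch; the remote name, tag, and use of `Discourse::Utils.execute_command` are illustrative only):

```
# A bare tag or branch name may be ambiguous in a shallow clone,
# so fetch the fully qualified ref.
Discourse::Utils.execute_command(
  "git", "fetch", "--depth", "1", "origin", "refs/tags/v2.4.4.beta4"
)
```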
* If the error doesn't have a message, the class name will help
* example:
before: "Failed to download #{filename} because "
after: "Failed to download #{filename} because Aws::S3::Errors::NotFound"
This fixes an issue where a non-default theme set to use the base color
scheme (i.e. the theme had an empty `color_scheme_id`) was loading the
default theme's color scheme instead.
`rake uploads:analyze_missing` can be used to get rich information regarding
uploads missing from S3 (where verified is false).
`rake uploads:fix_missing` is a work in progress task for automatically
correcting certain historic issues. At the moment it simply rebakes all
posts with missing uploads, but it will improve over time
Adds functionality to reflect topic deletes in Discourse to the IMAP inbox (Gmail only for now) and to reflect Gmail deletes in Discourse.
Adds lots of tests, various refactors and code improvements.
When a Discourse topic is destroyed in PostDestroyer, the topic's incoming email is marked with imap_sync: true; the opposite happens when the post is recovered.
When we run the S3 inventory, mark uploads that exist as verified true, those that don't as verified false, and uploads not included in the check / not yet checked as verified nil.
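Conceptually, the tri-state looks roughly like this (illustrative ActiveRecord only; `inventoried_ids` and `checked_ids` are made-up names):

```
# ids that appear in the S3 inventory
Upload.where(id: inventoried_ids).update_all(verified: true)
# ids that were checked against the inventory but are missing from it
Upload.where(id: checked_ids - inventoried_ids).update_all(verified: false)
# everything else keeps verified: nil, i.e. "not yet checked"
```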
The UI prevents users from trying to add tags to topics when they
don't have permission, but if you tried to add tags to a topic via
the API without permission, before this change it would
silently succeed in creating the topic, just without any tags.
Now a 422 error will be returned with an error message when trying to
create a topic with tags while tagging is disabled or you don't have
a high enough trust level to add tags to a topic.
Bug report: https://meta.discourse.org/t/-/70525/14
When a tab is open but left unattended for a while, the red, green, and blue
pills tend to go out of sync.
So whenever we open the notifications menu, we sync up the notification count
(e.g. blue and green pills) with the server.
However, the reviewable count (e.g. the red pill) is not a notification and
is located in the hamburger menu. This commit adds a new route on the server
side to retrieve the reviewable count for the current user and a ping
(refreshReviewableCount) from the client side to sync the reviewable count
whenever they open the hamburger menu.
REFACTOR: I also refactored the hamburger-menu widget code to prevent repetitive uses
of "this.".
PERF: I improved the performance of the 'notify_reviewable' job by doing only 1 query
to the database to retrieve all the pending reviewables and then tallying based on the
various rights.
Similar to `advanced_filter` I introduced `advanced_order`.
I needed a new option because default orders are evaluated after advanced_filter so I couldn't use it.
Also, that part is a little bit more generic:
```
elsif word =~ /order:\w+/
@order = word.gsub('order:', '').to_sym
nil
```
After those changes, I can use them in plugins in this way:
```
Search.advanced_order(:votes) do |posts|
posts.reorder("COALESCE((SELECT dvvc.counter FROM discourse_voting_vote_counters dvvc WHERE dvvc.topic_id = subquery.topic_id), 0) DESC")
end
```
The poll breakdown modal replaces the grouped pie charts feature.
Includes:
* MODAL: Untangle `onSelectPanel`
Previously, the modal-tab component would, on click, call the onSelectPanel callback with itself (modal-tab) as `this`, which severely limited its usefulness. Now showModal binds the callback to its controller.
"The PR includes a fix/change to d-modal (b7f6ec6) that hasn't been extracted to a separate PR because it's not currently possible to test a change like this in abstract, i.e. with dynamically created controllers/components in tests. The percentage/count toggle test for the poll breakdown feature is essentially a test for that d-modal modification."
If a plugin name contained the name of another plugin, the wrong stylesheets would be reloaded. For example, working on discourse-prometheus-alert-receiver would cause discourse-prometheus stylesheets to be reloaded.
In virtualdom, element 'properties' are not completely synonymous with element 'attributes'. In particular, `data-` properties will not be rendered as attributes. To ensure all attributes are passed through, we need to include them under an `attributes` key. For more info, see https://github.com/Matt-Esch/virtual-dom/blob/master/docs/vnode.md#custom-attributes-data-
This stops syncing of tracking state when the list is filtered; in the past this
would cause the tracking state to go out of whack.
Additionally this introduces an alias for "filter=tracking", called "f=tracking".
This was done because the term "filter" is used internally in 2 different ways;
the main way is for /unread and /new filtering.
Trying to also call a query param "filter" causes enormous amounts of
internal pain, so this circumvents the issue.
In the near future, we will be switching to PG headlines to generate the
search blurb. As such, we need to replace audio and video links in the
raw data used for headline generation. This also means that we avoid
replacing links each time we need to generate the blurb.
This commit should cause no functional change.
- Split into functions to avoid deep nesting
- Register custom field type, and remove manual json parse/serialize
- Recover from deleted upload records
Also adds a test to ensure pull_hotlinked_images redownloads secure images only once
The assign plugin is one of two situations where a post can be both a whisper and a small-action. Check the action_code field to filter out small-actions.
Running the reorder rake task was triggering the following error:
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "post_timings_unique"
I re-worked the queries and refactored to use the same couple of queries for all similar tables/columns.
* Fixed an issue I introduced in the last PR where I was just archiving everything regardless of whether it is actually archived in Discourse :man_facepalming:
* Refactor group list_mailboxes IMAP code to use providers, add specs, and add provider code to get the correct provider
A first step to adding automatic dark mode color scheme switching. Adds a new SCSS file at `color_definitions.scss` that serves to output all SCSS color variables as CSS custom properties. And replaces all SCSS color variables with the new CSS custom properties throughout the stylesheets.
This is an alpha feature at this point, can only be enabled via console using the `default_dark_mode_color_scheme_id` site setting.
Adds an imap_group_id column to IncomingEmail to deal with an issue where we were trying to update emails in the mailbox, calling IncomingEmail.where(imap_sync: true). However, UID and UIDVALIDITY could be the same across accounts. So if group A used IMAP details for Gmail account A, and group B used IMAP details for Gmail account B, and both tried to sync changes to an email with UID of 3 (e.g. changing Labels), one account could affect the other. This even applied to Archiving!
Also in this PR:
* Fix error occurring if we do a uid_fetch and no emails are returned
* Allow for creating labels within the target mailbox (previously we would not do this, only use existing labels)
* Improve consistency for log messages
* Add specs for generic IMAP provider (Gmail specs still to come)
* Add custom archiving support for Gmail
* Only use Message-ID for uniqueness of IncomingEmail if it was generated by us
* Various refactors and improvements
I added delete_when_reminder_sent to ignored_columns because it no longer exists and added a shortcut method delete_when_reminder_sent? to the Bookmark model. However I have been seeing some weird errors like:
> Job exception: unknown attribute 'delete_when_reminder_sent' for Bookmark.
So I am very suspicious. I am just renaming the method to auto_delete_when_reminder_sent? to avoid any potential conflicts.
Also found include_bookmark_delete_on_owner_reply? in PostSerializer which isn't used anywhere; I must have forgotten to delete it before.
For the following conditions, the TopicUser.bookmarked column was not updated correctly:
* When a bookmark was auto-deleted because the reminder was sent
* When a bookmark was auto-deleted because the owner of the bookmark replied to the topic
This adds another migration to fix the out-of-sync column and also some refactors to BookmarkManager to allow for more of these delete cases. BookmarkManager is used instead of directly destroying the bookmark in PostCreator and BookmarkReminderNotificationHandler.
This changes PG text search to only match the given title against
lexemes that are formed from the title. Likewise, the given raw will
only be matched against lexemes that are formed from the post's raw.
If a user posted a topic and Akismet decided it was spam, the topic gets deleted and put into the review queue. If a category moderator for that category marked the post/topic as "Not Spam", the topic did not get recovered correctly, because Guardian.new(@user).can_review_topic?(@post.topic) incorrectly returned false since the topic was deleted.
This adds a special filter to topic lists that will filter to tracked and
watched categories.
To use it you can visit:
`https://sitename/?filter=tracked`
`https://sitename/unread?filter=tracked`
and so on
Note, we do not include explicitly tracked and watched topics **outside** of
the tracked categories and tags.
We can consider a `filter=all_tracked` to cover this edge case.
To reproduce the initial issue here:
1. A user makes a post, which discourse-akismet marks as spam (I cheated and called `DiscourseAkismet::PostsBouncer.new.send(:mark_as_spam, post)` for this)
2. The post lands in the review queue
3. The category the topic is in has a `reviewable_by_group_id`
4. A user in that group goes and looks at the Review queue, decides the post is not spam, and clicks Not Spam
5. Weird stuff happens because the `PostDestroyer#recover` method didn't handle this (the user who clicked Not Spam was not the owner of the post and was not a staff member, so the post didn't get un-destroyed and post counts didn't get updated)
Users who belong to a group that can review a category now have the ability to fully recover/delete posts.
Previously we considered 'upload rows without etags' to be exempt from the check. This is bad, because older/migrated sites might not have etags on all their uploads. We should consider rows without etags to be broken, since we can't check them against the inventory.
This also removes the `by_users` scope. We need all uploads to be working, even ones created by the system user.
This adds an option to "delete on owner reply" to bookmarks. If you select this option in the modal, then reply to the topic the bookmark is in, the bookmark will be deleted on reply.
This PR also changes the checkboxes for these additional bookmark options to an Integer column in the DB with a combobox to select the option you want.
The use cases are:
* Sometimes I bookmark a topic to read later. In this case we definitely don’t need to keep the bookmark after I have replied to it.
* Sometimes I read a topic on mobile and prefer to reply on a PC later, or I may have to do some research before replying, so I bookmark it to reply later.
* FEATURE: Allow List for PMs
This feature adds a new user setting that is disabled by default that
allows them to specify a list of users that are allowed to send them
private messages. This way they don't have to maintain a large list of
users they don't want to hear from and instead just list the people they
know they do want. Staff will still always be able to send messages to
the user.
* Update PR based on feedback
Some definitions rely on others, in particular the c/cpp/c-like ones,
and we were appending the bundle of all files in the folder.
Instead for testing I've limited us to just three definitions. This has
the benefit of being a lot smaller to download/parse in test mode too.
It seems there was a discrepancy in that background images were attached
to the full slug category class: `category-:slug-:id` and our body class
only had `category-:slug`.
This fix adds support for both formats.
* Remove unneeded bookmark name index.
* Change bookmark search query to use post_search_data. This allows searching on topic title and post content
* Tweak the style/layout of the bookmark list so the search looks better and the whole page fits better on mobile.
* Added scopes UI
* Create scopes when creating a new API key
* Show scopes on the API key show route
* Apply scopes on API requests
* Extend scopes from plugins
* Add missing scopes. A mapping can be associated with multiple controller actions
* Only send scopes if the use global key option is disabled. Use the discourse plugin registry to add new scopes
* Add not null validations and index for api_key_id
* Annotate model
* DEV: Move default mappings to ApiKeyScope
* Remove unused attribute and improve UI for existing keys
* Support multiple parameters separated by a comma
Follow up to d8c796bc4.
Note that this change increases query time by around 40% in the following
benchmark against `dev.discourse.org`, but this is a tradeoff that has to be taken so that relevance
search is accurate.
```
require 'benchmark/ips'
Benchmark.ips do |x|
x.config(time: 10, warmup: 2)
x.report("current aggregate search query") do
DB.exec <<~SQL
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id" FROM "posts" JOIN (SELECT *, row_number() over() row_number FROM (SELECT topics.id, min(posts.post_number) post_number FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3, 4) AND (topics.visible) AND (topics.archetype <> 'private_message') AND (post_search_data.search_data @@ TO_TSQUERY('english', '''postgres'':*ABCD')) AND (categories.id NOT IN (
SELECT categories.id WHERE categories.search_priority = 1
)
) AND ((categories.id IS NULL) OR (NOT categories.read_restricted)) GROUP BY topics.id ORDER BY MAX((
TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''postgres'':*ABCD'),
1|32
) *
(
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
)
) DESC, topics.bumped_at DESC LIMIT 51 OFFSET 0) xxx) x ON x.id = posts.topic_id AND x.post_number = posts.post_number WHERE ("posts"."deleted_at" IS NULL) ORDER BY row_number;
SQL
end
x.report("current aggregate search query with proper ranking") do
DB.exec <<~SQL
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id" FROM "posts" JOIN (SELECT *, row_number() over() row_number FROM (SELECT subquery.topic_id id, (ARRAY_AGG(subquery.post_number ORDER BY rank DESC, bumped_at DESC))[1] post_number, MAX(subquery.rank) rank, MAX(subquery.bumped_at) bumped_at FROM (SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id", (
TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''postgres'':*ABCD'),
1|32
) *
(
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
)
rank, topics.bumped_at bumped_at FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3, 4) AND (topics.visible) AND (topics.archetype <> 'private_message') AND (post_search_data.search_data @@ TO_TSQUERY('english', '''postgres'':*ABCD')) AND (categories.id NOT IN (
SELECT categories.id WHERE categories.search_priority = 1
)
) AND ((categories.id IS NULL) OR (NOT categories.read_restricted))) subquery GROUP BY subquery.topic_id ORDER BY rank DESC, bumped_at DESC LIMIT 51 OFFSET 0) xxx) x ON x.id = posts.topic_id AND x.post_number = posts.post_number WHERE ("posts"."deleted_at" IS NULL) ORDER BY row_number;
SQL
end
x.compare!
end
```
```
Warming up --------------------------------------
current aggregate search query
1.000 i/100ms
current aggregate search query with proper ranking
1.000 i/100ms
Calculating -------------------------------------
current aggregate search query
18.040 (± 0.0%) i/s - 181.000 in 10.035241s
current aggregate search query with proper ranking
12.992 (± 0.0%) i/s - 130.000 in 10.007214s
Comparison:
current aggregate search query: 18.0 i/s
current aggregate search query with proper ranking: 13.0 i/s - 1.39x (± 0.00) slower
```
Indexing query strings in URLs produces inconsistent results in PG and
pollutes the search data for very little gain.
The following seems to work as expected...
```
discourse_development=# SELECT TO_TSVECTOR('https://www.discourse.org?test=2&test2=3');
to_tsvector
------------------------------------------------------
'2':3 '3':5 'test':2 'test2':4 'www.discourse.org':1
```
However, once a path is present
```
discourse_development=# SELECT TO_TSVECTOR('https://www.discourse.org/latest?test=2&test2=3');
to_tsvector
----------------------------------------------------------------------------------------------
'/latest?test=2&test2=3':3 'www.discourse.org':2 'www.discourse.org/latest?test=2&test2=3':1
```
The lexeme contains both the path and the query string.
Previously, we would only take either the `MIN` or `MAX` for
`post_number` during aggregation meaning that the ranking is not
considered.
```
require 'benchmark/ips'
Benchmark.ips do |x|
x.config(time: 10, warmup: 2)
x.report("current aggregate search query") do
DB.exec <<~SQL
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id" FROM "posts" JOIN (SELECT *, row_number() over() row_number FROM (SELECT topics.id, min(posts.post_number) post_number FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3, 4) AND (topics.visible) AND (topics.archetype <> 'private_message') AND (post_search_data.search_data @@ TO_TSQUERY('english', '''postgres'':*ABCD')) AND (categories.id NOT IN (
SELECT categories.id WHERE categories.search_priority = 1
)
) AND ((categories.id IS NULL) OR (NOT categories.read_restricted)) GROUP BY topics.id ORDER BY MAX((
TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''postgres'':*ABCD'),
1|32
) *
(
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
)
) DESC, topics.bumped_at DESC LIMIT 51 OFFSET 0) xxx) x ON x.id = posts.topic_id AND x.post_number = posts.post_number WHERE ("posts"."deleted_at" IS NULL) ORDER BY row_number;
SQL
end
x.report("current aggregate search query with proper ranking") do
DB.exec <<~SQL
SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id" FROM "posts" JOIN (SELECT *, row_number() over() row_number FROM (SELECT subquery.topic_id id, (ARRAY_AGG(subquery.post_number))[1] post_number, MAX(subquery.rank) rank, MAX(subquery.bumped_at) bumped_at FROM (SELECT "posts"."id", "posts"."user_id", "posts"."topic_id", "posts"."post_number", "posts"."raw", "posts"."cooked", "posts"."created_at", "posts"."updated_at", "posts"."reply_to_post_number", "posts"."reply_count", "posts"."quote_count", "posts"."deleted_at", "posts"."off_topic_count", "posts"."like_count", "posts"."incoming_link_count", "posts"."bookmark_count", "posts"."score", "posts"."reads", "posts"."post_type", "posts"."sort_order", "posts"."last_editor_id", "posts"."hidden", "posts"."hidden_reason_id", "posts"."notify_moderators_count", "posts"."spam_count", "posts"."illegal_count", "posts"."inappropriate_count", "posts"."last_version_at", "posts"."user_deleted", "posts"."reply_to_user_id", "posts"."percent_rank", "posts"."notify_user_count", "posts"."like_score", "posts"."deleted_by_id", "posts"."edit_reason", "posts"."word_count", "posts"."version", "posts"."cook_method", "posts"."wiki", "posts"."baked_at", "posts"."baked_version", "posts"."hidden_at", "posts"."self_edits", "posts"."reply_quoted", "posts"."via_email", "posts"."raw_email", "posts"."public_version", "posts"."action_code", "posts"."locked_by_id", "posts"."image_upload_id", (
TS_RANK_CD(
post_search_data.search_data,
TO_TSQUERY('english', '''postgres'':*ABCD'),
1|32
) *
(
CASE categories.search_priority
WHEN 2
THEN 0.6
WHEN 3
THEN 0.8
WHEN 4
THEN 1.2
WHEN 5
THEN 1.4
ELSE
CASE WHEN topics.closed
THEN 0.9
ELSE 1
END
END
)
)
rank, topics.bumped_at bumped_at FROM "posts" INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id" INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL) LEFT JOIN categories ON categories.id = topics.category_id WHERE ("posts"."deleted_at" IS NULL) AND "posts"."post_type" IN (1, 2, 3, 4) AND (topics.visible) AND (topics.archetype <> 'private_message') AND (post_search_data.search_data @@ TO_TSQUERY('english', '''postgres'':*ABCD')) AND (categories.id NOT IN (
SELECT categories.id WHERE categories.search_priority = 1
)
) AND ((categories.id IS NULL) OR (NOT categories.read_restricted))) subquery GROUP BY subquery.topic_id ORDER BY rank DESC, bumped_at DESC LIMIT 51 OFFSET 0) xxx) x ON x.id = posts.topic_id AND x.post_number = posts.post_number WHERE ("posts"."deleted_at" IS NULL) ORDER BY row_number;
SQL
end
x.compare!
end
```
```
Warming up --------------------------------------
current aggregate search query
1.000 i/100ms
current aggregate search query with proper ranking
1.000 i/100ms
Calculating -------------------------------------
current aggregate search query
17.726 (± 0.0%) i/s - 178.000 in 10.045107s
current aggregate search query with proper ranking
17.802 (± 0.0%) i/s - 178.000 in 10.002230s
Comparison:
current aggregate search query with proper ranking: 17.8 i/s
current aggregate search query: 17.7 i/s - 1.00x (± 0.00) slower
```
On large topics, the cost of sending the entire post ID list back over to the database is significant. Just have the DB recalculate the list of visible posts instead.
Category and tag hashtags used to be handled differently even though
most of the code was very similar. This design was the root cause of
multiple issues related to hashtags.
This commit reduces the number of requests (just one and debounced
better), removes the use of CSS classes which marked resolved hashtags,
simplifies a lot of the code as there is a single source of truth and
previous race condition fixes are now useless.
It also includes a very minor security fix which let unauthorized users
guess hidden tags.
It's a little awkward to test constants by re-assigning them so
I've added a new parameter to `Discourse.find_compatible_resource`
which can be used by tests.
Instead of loading all of the user bookmarks using all the post IDs in a topic, load all the bookmarks for a user using the topic ID. This eliminates a costly WHERE ID IN query.
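A rough sketch of the difference (illustrative ActiveRecord, not the exact query):

```
# Before (conceptually): ship every post id in the topic back to the database.
Bookmark.where(user_id: user.id, post_id: post_ids) # post_ids can be huge

# After (conceptually): filter on the topic id instead.
Bookmark.joins(:post).where(user_id: user.id, posts: { topic_id: topic.id })
```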
* strip out the href and xlink:href attributes from use elements that
are _not_ anchors in SVGs, which can be used for XSS
* adding content-disposition: attachment ensures that
uploaded SVGs cannot be opened and executed using the XSS exploit.
SVGs embedded using an img tag do not suffer from the same exploit
Before this change:
- first full page load would get category defaults defined in category settings
- a navigation to a topic and then back to the categories list would reset defaults to the ones defined in discovery/topics
Adds a new rake task `plugin:checkout_compatible_all` and
`plugin:checkout_compatible[plugin-name]` that check out compatible plugin
versions.
Supports a .discourse-compatibility file in the root of plugins and themes that
lists out a plugin's compatibility with certain Discourse versions:
eg: .discourse-compatibility
```
2.5.0.beta6: some-git-hash
2.4.4.beta4: some-git-tag
2.2.0: git-reference
```
This ensures older Discourse installs are able to find and install older
versions of plugins without intervention, through the manifest only.
It iterates through the versions in descending order. If the current Discourse
version matches an item in the manifest, it checks out the listed plugin target.
If the Discourse version is greater than an item in the manifest, it checks out
the next highest version listed in the manifest.
If no versions match, it makes no change.
This is a very expensive process, and it should only be required in exceptional circumstances. It is possible to run a similar recovery using `rake uploads:recover` (5284d41a8e/lib/upload_recovery.rb (L135-L184))
`OptimizedImage#filesize` calls `Discourse.store.download` with an OptimizedImage as an argument. It would in turn attempt to call `#original_filename` and `#secure?` on that object. Both would fail as these methods do not exist on OptimizedImage, only on Upload. We didn't know about these issues because:
1. `#calculate_filesize` is not called often, because the filesize is saved on OptimizedImage creation, so it's used mostly for manual filesize recalculation
2. we were using `rescue nil` which swallows all errors
`Nokogiri::HTML.fragment` is a huge hack (a comment in the source code
admits this). The current behavior of `Email::Styles` is to try to
emulate `fragment` using nokogumbo, but it misses some edge cases. In
particular, meta tags in an email template don't make it through to the
final email.
Instead of treating the provided HTML as an indeterminate fragment, this
commit makes `Email::Styles` treat the HTML as a complete document. This
means that the generated HTML for an email will now always contain top
level structure (a doctype, html, head and body tags).
This new behavior is behind a hidden site setting for now and defaults
off.
Previously, while generating the topic page's canonical URL we used the current post number. That creates an invalid canonical path if the topic has whisper posts. Now we only take the visible posts into account for the current page index calculation.
The previous fix (f43c0a5d85) wasn't working for images that were already uploaded.
The "metadata" (eg. 'for_*' and 'secure' attributes) were not added to existing uploads.
Also used 'Upload.get_from_url' in the admin/site_setting controller to properly retrieve
an upload from its URL.
Fixed the Upload::URL_REGEX to use the \h (hexadecimal) for the SHA
Follow-up-to: f43c0a5d85
* Change S3Helper::DOWNLOAD_URL_EXPIRES_AFTER_SECONDS to 5 minutes, which controls presigned URL expiry and secure-media route cache time.
* This is done because the composer preview refreshing while typing causes a lot of requests to be sent to our server due to the short URL expiry. If this ends up being not enough we can always increase the time or explore other avenues (e.g. GitHub has a 7 day validity for secure URLs)
In 91c89df6, I fixed the onebox to support local topics with a slug-less URL.
This commit fixes all the other spots (search, topic links and user badges) where we look up for a local topic.
Follow-up-to: 91c89df6
* FIX: Correct version comparison logic when comparing stable to beta
For example, version 1.3.0 should be considered higher than 1.3.0.beta3. So `Discourse.has_needed_version?('1.3.0', '1.3.0.beta3')` should return true
* Switch to use Gem::Version to compare versions
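A quick sanity check of that behaviour (plain irb, not code from this change):

```
require "rubygems"

Gem::Version.new("1.3.0.beta3").prerelease?                 # => true
Gem::Version.new("1.3.0") > Gem::Version.new("1.3.0.beta3") # => true, stable wins
```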
Added a small helper class for seed data because we need to add the
same filter to multisite:migrate as we have in db:migrate. Having this
filter in both places means we can get rid of the SKIP_SEED flag.
The logic of adding additional search results does not seem to be
needed anymore.
It appears to be a relic of an old implementation.
This saves an entire search query for every search made.
When rebaking a post we were invalidating _regular_ oneboxes but not inline oneboxes.
DEV: also renamed 'InlineOneboxer.purge' to 'InlineOneboxer.invalidate' to keep
the API consistent with 'Oneboxer.invalidate'
When linking to a topic in the same Discourse, we try to onebox the link to show the title
and other various information depending on whether it's a "standard" or "inline" onebox.
However, we were not properly detecting links to topics that had no slugs (eg. https://meta.discourse.org/t/1234).
This was causing issues when purging old users if they were in the
exact condition where they would be demoted into another group but also
did not have a primary email.
A future-dated migration was accidentally introduced by me in 45c399f0. This was removed in b9762afc, but other migrations had already been generated based on its incorrect date. This commit removes the offending data in the schema_migrations table, and corrects the version in the published_pages migration.
This commit also adds a check to db:migrate which raises an error when invalid migration timestamps are used.
When a plugin hooks into TopicView and joins other tables, the query may fail because `created_at` is potentially available on two tables. Therefore we should explicitly define which `created_at` we want.
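For instance (illustrative query only):

```
# Ambiguous once a plugin join adds another table that also has created_at:
#   Topic.joins(:posts).where("created_at > ?", 1.week.ago)
# Qualifying the column removes the ambiguity:
Topic.joins(:posts).where("topics.created_at > ?", 1.week.ago)
```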
The `EXECUTE FUNCTION` syntax for `CREATE TRIGGER` statements was introduced in PostgreSQL 11. We need to replace `EXECUTE FUNCTION` with `EXECUTE PROCEDURE` in order to be able to restore backups created with PG12 on PG10.
Fixed bugs, added specs, extracted the upload downsizing code to a class, added support for non-S3 setups, changed it so that images aren't downloaded twice.
This code has been tested on production and successfully resized ~180k uploads.
Includes:
* DEV: Extract upload downsizing logic
* DEV: Add support for non-S3 uploads
* DEV: Process only images uploaded by users
* FIX: Incorrect usage of `count` and `exist?` typo
* DEV: Spec S3 image downsizing
* DEV: Avoid downloading images twice
* DEV: Update filesizes earlier in the process
* DEV: Return false on invalid upload
* FIX: Download images that currently above the limit (If the image size limit is decreased, then there was no way to resize those images that now fall outside the allowed size range)
* Update script/downsize_uploads.rb (Co-authored-by: Régis Hanol <regis@hanol.fr>)
FIX: prevent re-flagging when we have reviewed flags before
Fixes an edge case where a review can be re-flagged when:
1. A user flags the post as inappropriate.
2. A moderator rejects the flag.
3. Another user re-flags the post as spam.
Before, anyone was able to re-flag as inappropriate despite it being flagged
previously. With this, users are unable to re-flag for the same reason
regardless of reviewable status.
FEATURE: new rake task to update first_post_created_at column
The not-equal operator (`<>`) in PostgreSQL does not compare values
with NULL. We should instead use `IS DISTINCT FROM` when comparing
values with NULL.
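A quick illustration of the difference, using Discourse's DB helper (not the migrated query itself):

```
# With <>, comparing against NULL yields NULL, so a WHERE clause drops the row.
DB.query_single("SELECT NULL <> 1")               # => [nil]
# IS DISTINCT FROM treats NULL as a comparable value.
DB.query_single("SELECT NULL IS DISTINCT FROM 1") # => [true]
```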
Since this is rare, we don't want to check for
`Discourse.pg_readonly_mode?` on every request since we have to reach
for Redis. Instead, just rescue the error here.
Allow limiting the number of migrations to run at once, both so that migrations
with user impact can be spread across multiple off-peak usage hours to reduce that
impact, and to allow tests that run only a very small number for test
purposes. ("Give me a ping, Vasili. One ping only, please.")
Looks like some HTML elements like `aside` and `section` will throw an error
when checking if they are inline or not. This commit simply handles
```
Job exception: undefined method `inline?' for nil:NilClass
```
and adds a test for it.
Moves the most important checks into a linter. It gets executed by Lefthook as well as the docker rake task and GitHub Actions. Doing those checks in rspec takes too long and produces errors when the discourse:test Docker image contains old, invalid locale files.
* DEV: Move `Discourse.getURL` and related functions to a module
* DEV: Remove `Discourse.getURL` and `Discourse.getURLWithCDN`
* FIX: `get-url` is required for server side code
* DEV: Deprecate `BaseUri` too.
In some restricted setups all JS payloads need tight control.
This setting bans admins from making changes to JS on the site and
requires all themes be whitelisted to be used.
There are edge cases we still need to work through in this mode,
hence this is still experimental and not supported in production.
Use an example like this to enable:
`DISCOURSE_WHITELISTED_THEME_REPOS="https://repo.com/repo.git,https://repo.com/repo2.git"`
By default this feature is not enabled and no changes are made.
One exception is that the default theme id was missing a security check;
this was added for correctness.
* DEV: To be pedantic, there is more than EMBER in there now
* DEV: Use less globals. Have `Discourse` start in an initializer
* DEV: Remove another global
Previously the pull hotlinked images job was skipped after system edits. This ensured that we never had an infinite loop of system-edit/pull-hotlinked/system-edit/pull-hotlinked etc.
A side effect was that edits made by system for any other reason (e.g. API, removing full quotes) would prevent pulling hotlinked images. This commit removes the system edit check, and replaces it with another method to avoid an infinite job scheduling loop.
* DEV: new S3 backup layout
Currently, with $S3_BACKUP_BUCKET of "bucket/backups", multisite backups
end up in "bucket/backups/backups/dbname/" and single-site will be in
"bucket/backups/".
Both _should_ be in "bucket/backups/dbname/"
- remove MULTISITE_PREFIX,
- always include dbname,
- method to move to the new prefix
- job to call the method
* SPEC: add tests for `VacateLegacyPrefixBackups` onceoff job.
Co-authored-by: Vinoth Kannan <vinothkannan@vinkas.com>
Fixes a regression in
e8fb9d4066
which caused a bug where you couldn't send a message to a group that
contained an Uppercase letter. Added a test case for this.
Bug report: https://meta.discourse.org/t/-/152999
Previously we had a partial fix in place where non-human users
were not allowed draft sequences; this left edge cases where non-human
users asked for drafts yet had none.
For example, system could already have a few drafts in place.
This also removes an extensibility point we added that is not in use.
This reverts commit 20780a1eee.
* SECURITY: re-adds accidentally reverted commit:
03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
instead of the 03d26cd6 parent (which contains security fixes)
* We now have a site setting "topic_excerpt_maxlength" that is used when the OP is created or revised to generate a topic excerpt.
* However, posts created before this setting was introduced cannot benefit from this change unless they are revised, and if the topic excerpt length setting is changed that situation is also not covered.
* This PR makes a change to rebake! to update the topic excerpt IF the post is the OP.
In some cases, between Discourse forums the hostname of a URL could match if they are hosting S3 files on the same bucket but the S3 bucket path might not. So e.g. https://testbucket.somesite.com/testpath/some/file/url.png vs https://testbucket.somesite.com/prodpath/some/file/url.png. So has_been_uploaded? was returning true for the second URL, even though it may have been uploaded on a different Discourse forum.
This is a very rare case but must be accounted for, because this impacts UrlHelper.is_local which mistakenly thinks the file has already been downloaded and thus allows the URL to be cooked, where we want to return the full URL to be downloaded using PullHotlinkedImages.
* the post_actions table has no FK to users, so if a user has been
deleted we may end up with dangling post_action records, which then
interferes with the bookmarks migration because bookmarks DO have
an FK to users
PG already handles English stop words. The list in cppjieba is
bigger than the list PG uses, which in turn causes confusion because
words such as "volume" are stripped using the cppjieba stop word list.
We will follow up with another commit here to apply the Chinese
stop words, but for now, to eliminate the confusion, we are
skipping applying the stop word list when the dictionary in PG is
in English.
* DEV: Add framework for filtered plugin registers
Plugins often need to add values to a list, and we need to filter those lists at runtime to ignore values from disabled plugins. This commit provides a re-usable way to do that, which should make it easier to add new registers in future, and also reduce repeated code.
Follow-up commits will migrate existing registers to use this new system
* DEV: Migrate user and group custom field APIs to plugin registry
This gives us a consistent system for checking plugin enabled state, so we are repeating less logic. API changes are backwards compatible
Prior to this change we would never clear memory from contexts and
rely on V8 reacting to pressure
This could lead to bloating of PrettyText and Transpiler contexts
This optimisation ensures that we will clear memory 2 seconds after
the last eval on the context.
Previously we would raise a warning in the logs if downloading
a file (from S3) took longer than 60 seconds.
At scale this happens reasonably frequently.
1. Raised the duration to 3 minutes
2. Pulled the resizing mutex out of the downloading mutex
so we have fewer and clearer error logs
* DEV: Standardize table sorting verbiage
This commit creates a common component that tables can use to make their
headers sortable. This commit also standardizes on using `desc` as the
default and passing in the `asc=true` flag to adjust the sorting
direction.
* Add deprecation warnings
Adds deprecation warnings if using previous params and maintains
backwards compatibility. Set the default sort value for group members to
be asc.
* switch group requests to use common table-header-toggle
* update fixture
* PERF: Dematerialize topic_reply_count
It's only ever used for trust level promotions that run daily, or compared to 0. We don't need to track it on every post creation.
* UX: Add symbol in TL3 report if topic reply count is capped
* DEV: Drop user_stats.topic_reply_count column
Co-authored-by: Mark VanLandingham <markvanlan@gmail.com>
Co-authored-by: Robin Ward <robin.ward@gmail.com>
Use a helper method to simplify creating a new register. Previously this would require creating lots of different methods manually, and adding every register to the clear/reset functions
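A hedged sketch of the idea (the helper name `define_filtered_register` and the generated method names are assumptions for illustration):

```
class DiscoursePluginRegistry
  # One declaration creates the register plus its clear/reset plumbing.
  define_filtered_register :demo_widgets
end

# A plugin then contributes values tied to its enabled state, e.g.:
#   DiscoursePluginRegistry.register_demo_widget(value, plugin_instance)
#   DiscoursePluginRegistry.demo_widgets  # => only values from enabled plugins
```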
This commit prevents a 500 error from occurring if someone is trying to
set up their Discourse instance as an SSO provider and they don't pass in
a `return_sso_url` in their payload.
This refactors default_current_user_provider in a few ways:
- Introduce a generic `api_parameter_allowed?` method which checks for whitelisted routes/formats
- Only read the api_key parameter on allowed routes. It is now completely ignored on other routes (previously it would raise a 403)
- Start reading user_api_key parameter on allowed routes
- Refactor tests as end-end integration tests
A plugin API for PARAMETER_API_PATTERNS will be added soon
For pages that do not specify a canonical URL we will default to `https://SITENAME/PATH`.
This ensures that if a URL is crawled on the CDN the search ranking will transfer to the main site.
Additionally we whitelist the `?page` param
Adds a new rake task to auto generate a constants.js file with the
constants present. This makes migrating to Ember CLI easier, but also
slightly speeds up asset compilation by having to do less work.
If the constants change you need to run:
`rake javascripts:update_constants`