Posts from topics with an 'auto delete replies timer' that have more than
skip_auto_delete_reply_likes likes will no longer be deleted. If the setting
is 0, all posts will be deleted.
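A rough sketch, with assumed model and helper names, of how the like-count filter described above might be applied when the timer fires (the exact call site in the auto-delete job may differ):

```ruby
threshold = SiteSetting.skip_auto_delete_reply_likes

replies = topic.posts.where("post_number > 1")
# A non-zero threshold spares replies that earned more likes than it.
replies = replies.where("like_count <= ?", threshold) if threshold > 0
replies.find_each { |reply| PostDestroyer.new(Discourse.system_user, reply).destroy }
```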
My initial implementation didn't consider this case. We should skip imported users if the "imported_id" field is present, even if there are other custom fields.
This commit is dedicated to https://twitter.com/FiloSottile/status/1335666583126073354 for reminding me that like timestamps are valuable data.
Likes additionally include the topic_id and post_number of the post that was acted on, to aid in analysis. The flag export does not include the disposition by staff.
When jobs are enqueued inside a transaction, it's possible that they will be executed before the necessary data is available in the database. This commit ensures all jobs are enqueued in an ActiveRecord after_commit hook.
One potential downside here is if the job fails to enqueue, the transaction will no longer be aborted. However, the chance of that happening is reasonably low, and the impact is significantly lower than the current issue where jobs are scheduled before their data is ready.
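A minimal sketch of the after_commit pattern described above (model and job names are illustrative, not the actual call sites):

```ruby
class ExampleRecord < ActiveRecord::Base
  # Runs only once the surrounding transaction commits, so the job is
  # guaranteed to find the row in the database.
  after_commit :enqueue_processing, on: :create

  private

  def enqueue_processing
    Jobs.enqueue(:process_example_record, record_id: id)
  end
end
```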
The `enqueue_jobs` call was not correctly post-processing the post because the
post is created inside a transaction block. This commit explicitly
enqueues the job outside the transaction block.
When the linked topic is created we no longer hardcode the topic title;
instead we let onebox work its magic so that the title can be updated
automatically.
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency.
b8c676e7 added the 'forever' option to the UI, and this is correctly stored in the database. However, we had a hard-coded limit of 4 months in the cleanup job. This commit removes the limit, so ignores can last forever.
This commit adds a site setting `auto_close_topics_create_linked_topic`
which, when enabled, works in conjunction with the `auto_close_topics_post_count`
setting and creates a new linked topic for the topic just closed.
The auto-created new topic contains links to all the previous topics,
and the topic titles are appended with `(Part {n})`.
The setting is enabled by default.
There is a site setting reply_by_email_enabled which when combined with reply_by_email_address creates a Reply-To header in emails in the format "test+%{reply_key}@test.com" along with a PostReplyKey record, so when replying Discourse knows where to route the reply.
However this conflicts with the IMAP implementation. Since we are sending the email for a group via SMTP and from their actual email account, we want all replies to go to that email account as well, so the IMAP sync job can pick them up and put them in the correct place. So if the group has IMAP enabled and configured, then the Reply-To header will be correct.
This PR also makes a further fix to 64b0b50 by using the correct recipient user for the PostReplyKey record. If the post user is used we encounter this error:
```ruby
if destination.user_id != user.id && !forwarded_reply_key?(destination, user)
  raise ReplyUserNotMatchingError, "post_reply_key.user_id => #{destination.user_id.inspect}, user.id => #{user.id.inspect}"
end
```
This is because the user above is found from the from_address, but the destination, which is the PostReplyKey, is created from post.user, and these can be different people.
Our Email::Sender class accepts an optional user argument, which is used to create a PostReplyKey record when present. This record is used to sub out the %{reply_key} placeholder in the Reply-To mail header, so if we do not pass in the user we get a broken Reply-To header.
This is especially problematic in the IMAP group SMTP situation, because these emails go to customers that we are replying to, and when they reply to us the email bounces! This fixes the issue by passing user to the Email::Sender when sending a group_smtp email but there is still more to do in another PR.
This Email::Sender optional user is a bit of a footgun IMO, especially because most of the time we use it there is a user we can source. I would like to do another PR for this after this one to make the parameter not optional, so we don't end up with these reply issues down the line again.
The dependency on gifsicle, and the allow_animated_avatars and allow_animated_thumbnails
site settings, were all removed. Animated GIF images are still allowed, but
the generated optimized images (which were used for avatars and thumbnails)
are no longer animated.
The added 'animated' column is populated by extracting information using FastImage.
This field was used to selectively reoptimize old animations. This process
happens in the background.
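A sketch of the detection mentioned above; the exact call site in core may differ, but FastImage can report whether an image contains multiple frames:

```ruby
require "fastimage"

# Returns true for multi-frame (animated) images, false otherwise.
def animated_image?(path_or_url)
  !!FastImage.animated?(path_or_url)
end

animated_image?("avatar.gif") # => true for an animated GIF
```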
When posts or topics are deleted we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to go and delete bookmarks attached to posts or topics deleted > 3 days ago.
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even when there were no topics to send. The digest job would exit, no email would be sent, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.
120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.
This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
To avoid blocking the sidekiq queue a limit of 10,000 digests per 30 minutes
is introduced.
This acts as a safety measure that makes sure we don't keep pouring oil on
a fire.
On multisites it is recommended to set the number way lower so sites do not
dominate the backlog. A reasonable default for multisites may be 100-500.
This can be controlled with the environment var
DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE
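A rough sketch (assumed query shape; the real job applies more eligibility filters) of the prioritized, capped selection described above:

```ruby
max_digests = (ENV["DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE"] || 10_000).to_i

# Users whose digests were attempted longest ago (or never) go first.
user_ids = UserStat
  .order("digest_attempted_at ASC NULLS FIRST")
  .limit(max_digests)
  .pluck(:user_id)

user_ids.each do |user_id|
  Jobs.enqueue(:user_email, type: :digest, user_id: user_id)
end
```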
See https://meta.discourse.org/t/changing-a-users-email/164512 for additional context.
Previously when an admin user changed a user's email we assumed that they would need a password reset too because they likely did not have access to their account. This proved to be incorrect, as there are other reasons a user needs an admin to change their email. This PR:
* Changes the admin change email for user flow so the user is sent an email to confirm the change
* Records who the email change was requested by
* If the requesting user is an admin and not the user, notes this in the email sent to the user
* Makes the confirm change email route open to anonymous users, so it can be clicked by the user even if they do not have access to their account. If there is a logged in user we make sure the confirmation matches the current user.
* FEATURE: Export the entire user profile as json, not just bio/website
* FEATURE: Add session log information to user export
Even though the columns are named 'auth_token' etc., the content is not actually usable to log into the forum. Despite that, it is still truncated for export, to avoid any 'token hash cracking' situations.
This PR removes the user reminder topic timers, because that system has been supplanted and improved by bookmark reminders. The option is removed from the UI and all existing user reminder topic timers are migrated to bookmark reminders.
Migration does this (a rough sketch in code follows the list):
* Gets all topic_timers with status_type 5 (reminders)
* Gets all bookmarks where the user ID and topic ID match
* Loops through the found topic timers
* If there is no bookmark for the OP of the topic, then we just create a bookmark with a reminder
* If there is a bookmark for the OP of the topic and it does **not** have a reminder set, then just
update it with the topic timer reminder
* If there is a bookmark for the OP of the topic with a reminder then just discard the topic timer
* Cancels all outstanding user reminder topic timers
* **Trashes (not deletes) all user reminder topic timers**
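As referenced above, a rough sketch of these steps (column names and details assumed; the real migration may differ):

```ruby
# status_type 5 = user reminder
TopicTimer.where(status_type: 5).find_each do |timer|
  op = Post.find_by(topic_id: timer.topic_id, post_number: 1)
  next if op.nil?

  bookmark = Bookmark.find_by(user_id: timer.user_id, post_id: op.id)
  if bookmark.nil?
    # No bookmark for the OP: create one carrying the reminder.
    Bookmark.create!(user_id: timer.user_id, post_id: op.id, reminder_at: timer.execute_at)
  elsif bookmark.reminder_at.nil?
    # Bookmark exists without a reminder: move the reminder over.
    bookmark.update!(reminder_at: timer.execute_at)
  end
  # A bookmark that already has a reminder wins; the topic timer is discarded.

  timer.trash! # trash, don't delete, so data can be recovered if needed
end
```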
Notes:
* For now I have left the user reminder topic timer job class in place; this is so the jobs can be cancelled in the migration. It and the specs will be deleted in the next PR.
* At a later date I will write a migration to delete all trashed user topic timers. They are not deleted here in case there are data issues and they need to be recovered.
* A future PR will change the UI of the topic timer modal to make it look more like the bookmark modal.
After restoring a backup it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. This change prevents alerts about missing uploads by preventing the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore it deletes the count of missing uploads from the PluginStore, so that an alert isn't triggered by an old number.
It is possible that a user could exist without an email; if so, we should
not enqueue a job to download their gravatar.
This commit resolves this error that can occur:
```
Job exception: undefined method `email' for nil:NilClass
/var/www/discourse/app/models/user.rb:1204:in `email'
/var/www/discourse/app/jobs/regular/update_gravatar.rb:12:in `execute'
```
This commit also fixes the original spec, which was actually wrong. The
job was never enqueued in the original spec, so the gravatar was never
actually updated; the test was checking whether the two values were the
same, but they were both null and never updated, so of course they were
the same!
A new test has also been added to make sure the gravatar job isn't
enqueued when a user's email is missing.
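A minimal sketch of the enqueue-time guard described above (the helper name is hypothetical):

```ruby
def enqueue_gravatar_update(user)
  # A user without an email has nothing to look up on Gravatar.
  return if user.nil? || user.email.blank?

  Jobs.enqueue(:update_gravatar, user_id: user.id)
end
```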
* FEATURE: Use predictable filenames inside the user archive export
* FEATURE: Include badges in user archive export
* FEATURE: Add user_visits table to the user archive export
This is in preparation for improvements to the user archive export data.
Some refactors happened along the way, including calling the different _export methods 'components' of the zip file.
Additionally, make the test for post export much more comprehensive.
Copy sources:
app/jobs/regular/export_csv_file.rb
spec/jobs/export_csv_file_spec.rb
With the addition of `PostSearchData#private_message`, a partial
index consisting of only search data from regular posts can be created.
The partial index helps to speed up searches on large sites, since PG
will not have to do an index scan on the entire search data index, which
has been shown to be a bottleneck.
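A sketch of what such a partial index could look like as a Rails migration (index name assumed):

```ruby
class AddRegularPostSearchDataIndex < ActiveRecord::Migration[6.0]
  def change
    # Index only rows for regular (non private-message) posts.
    add_index :post_search_data,
              :search_data,
              using: :gin,
              where: "NOT private_message",
              name: :idx_regular_post_search_data
  end
end
```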
Convert all IMAP logging to write to a database table for easier inspection. These logs are cleaned up daily if they are > 5 days old.
Logs can easily be watched in dev by setting DISCOURSE_DEV_LOG_LEVEL="debug" and running `tail -f development.log | grep IMAP`
* FEATURE: don't notify about changed tags for a private message
Only staff members watching the specific tag should receive a notification
* FIX: remove the other category, which is not used
* FIX: improved specs to ensure that the revision was successful
When a tab is open but left unattended for a while, the red, green, and blue
pills tend to go out of sync.
So whenever we open the notifications menu, we sync up the notification counts
(i.e. the blue and green pills) with the server.
However, the reviewable count (i.e. the red pill) is not a notification and
is located in the hamburger menu. This commit adds a new route on the server
side to retrieve the reviewable count for the current user and a ping
(refreshReviewableCount) from the client side to sync the reviewable count
whenever the hamburger menu is opened.
REFACTOR: I also refactored the hamburger-menu widget code to prevent repetitive uses
of "this.".
PERF: I improved the performance of the 'notify_reviewable' job by doing only 1 query
to the database to retrieve all the pending reviewables and then tallying based on the
various rights.
This commit should cause no functional change
- Split into functions to avoid deep nesting
- Register custom field type, and remove manual json parse/serialize
- Recover from deleted upload records
Also adds a test to ensure pull_hotlinked_images redownloads secure images only once
Was tired of seeing the following warnings in the logs
```
/discourse/app/jobs/scheduled/old_keys_reminder.rb:7: warning: already initialized constant Jobs::OldKeysReminder::OLD_CREDENTIALS_PERIOD
/discourse/app/jobs/scheduled/old_keys_reminder.rb:7: warning: previous definition of OLD_CREDENTIALS_PERIOD was here
```
* FEATURE: notify admins about old credentials
Security and API keys should be renewed periodically.
This additional notification should help admins keep their Discourse safe and secure.
Previously the pull hotlinked images job was skipped after system edits. This ensured that we never had an infinite loop of system-edit/pull-hotlinked/system-edit/pull-hotlinked etc.
A side effect was that edits made by system for any other reason (e.g. API, removing full quotes) would prevent pulling hotlinked images. This commit removes the system edit check, and replaces it with another method to avoid an infinite job scheduling loop.
* DEV: new S3 backup layout
Currently, with $S3_BACKUP_BUCKET of "bucket/backups", multisite backups
end up in "bucket/backups/backups/dbname/" and single-site will be in
"bucket/backups/".
Both _should_ be in "bucket/backups/dbname/"
- remove MULTISITE_PREFIX
- always include dbname
- add a method to move backups to the new prefix
- add a job to call the method
* SPEC: add tests for `VacateLegacyPrefixBackups` onceoff job.
Co-authored-by: Vinoth Kannan <vinothkannan@vinkas.com>
When running jobs in tests, we use `Jobs.run_immediately!`. This means that jobs are run synchronously when they are enqueued. Jobs sometimes enqueue other jobs, which are also executed synchronously. This means that the outermost job will block until the inner jobs have finished executing. In some cases (e.g. process_post with hotlinked images) this can lead to a deadlock.
This commit changes the behavior slightly. Now we will never run jobs inside other jobs. Instead, we will queue them up and run them sequentially in the order they were enqueued. As a whole, they are still executed synchronously. Consider this example:
```ruby
class Jobs::InnerJob < Jobs::Base
def execute(args)
puts "Running inner job"
end
end
class Jobs::OuterJob < Jobs::Base
def execute(args)
puts "Starting outer job"
Jobs.enqueue(:inner_job)
puts "Finished outer job"
end
end
Jobs.enqueue(:outer_job)
puts "All jobs complete"
```
The old behavior would result in:
```
Starting outer job
Running inner job
Finished outer job
All jobs complete
```
The new behavior will result in:
```
Starting outer job
Finished outer job
Running inner job
All jobs complete
```
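A rough sketch (internals and helper names assumed, not the actual implementation) of how run-immediately mode can flatten nested enqueues into the FIFO order shown above:

```ruby
module Jobs
  def self.lookup_job_class(name)
    # :inner_job => Jobs::InnerJob
    const_get(name.to_s.split("_").map(&:capitalize).join)
  end

  def self.enqueue(job_name, opts = {})
    @queue ||= []
    @queue << [job_name, opts]
    return if @draining # an outer job is already draining; don't nest

    @draining = true
    begin
      until @queue.empty?
        name, args = @queue.shift
        lookup_job_class(name).new.execute(args)
      end
    ensure
      @draining = false
    end
  end
end
```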
It might happen that some User records have no associated primary email,
in which case we don't ever want to send them a digest.
Also added a new "user_email_no_email" skipped email log to ensure these cases
are properly handled and surfaced.
* PERF: Dematerialize topic_reply_count
It's only ever used for trust level promotions that run daily, or compared to 0. We don't need to track it on every post creation.
* UX: Add symbol in TL3 report if topic reply count is capped
* DEV: Drop user_stats.topic_reply_count column
This introduces new APIs for obtaining optimized thumbnails for topics. There are a few building blocks required for this:
- Introduces new `image_upload_id` columns on the `posts` and `topics` table. This replaces the old `image_url` column, which means that thumbnails are now restricted to uploads. Hotlinked thumbnails are no longer possible. In normal use (with pull_hotlinked_images enabled), this has no noticeable impact
- A migration attempts to match existing urls to upload records. If a match cannot be found then the posts will be queued for rebake
- Optimized thumbnails are generated during post_process_cooked. If thumbnails are missing when serializing a topic list, then a sidekiq job is queued
- Topic lists and topics now include a `thumbnails` key, which includes all the available images:
```
"thumbnails": [
{
"max_width": null,
"max_height": null,
"url": "//example.com/original-image.png",
"width": 1380,
"height": 1840
},
{
"max_width": 1024,
"max_height": 1024,
"url": "//example.com/optimized-image.png",
"width": 768,
"height": 1024
}
]
```
- Themes can request additional thumbnail sizes by using a modifier in their `about.json` file:
```
"modifiers": {
"topic_thumbnail_sizes": [
[200, 200],
[800, 800]
],
...
```
Remember that these are generated asynchronously, so your theme should include logic to fall back to other available thumbnails if your requested size has not yet been generated
- Two new raw plugin outlets are introduced, to improve the customisability of the topic list. `topic-list-before-columns` and `topic-list-before-link`
We have the `# frozen_string_literal: true` comment on all our
files. This means all string literals are frozen. There is no need
to call #freeze on any literals.
For files with `# frozen_string_literal: true`
```
puts %w{a b}[0].frozen?
=> true
puts "hi".frozen?
=> true
puts "a #{1} b".frozen?
=> true
puts ("a " + "b").frozen?
=> false
puts (-("a " + "b")).frozen?
=> true
```
For more details see: https://samsaffron.com/archive/2018/02/16/reducing-string-duplication-in-ruby
The main thrust of this PR is to take all the conditional checks based on the `enable_bookmarks_with_reminders` away and only keep the code from the `true` path, making bookmarks with reminders the core bookmarks feature. There is also a migration to create `Bookmark` records out of `PostAction` bookmarks for a site.
### Summary
* Remove logic based on whether enable_bookmarks_with_reminders is true. This site setting is now obsolete; the old bookmark functionality is being removed. Retain the setting and set the value to `true` in a migration.
* Use the code from the rake task to create a database migration that creates bookmarks from post actions.
* Change the bookmark report to read from the new table.
* Get rid of old endpoints for bookmarks
* Link to the new bookmarks list from the user summary page
Within 24 hours of signing up, new users were losing their
default trust level of 3. With this fix, demotions from
trust level 3 won't happen when the "default trust level"
setting is 3 or 4.
Previously all topic posters would be added which could lead to major performance issues. Now if there are too many posters, only the acting user will be added as a participant.
The process_post job uses CookedPostProcessor which also uses a
DistributedMutex. There's no good reason for the timeout of the outer
lock to be smaller than the timeout of the inner lock.
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip etc. etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.
This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
* This PR implements the scheduling and notification system for bookmark reminders. Every 5 minutes a schedule runs to check any reminders that need to be sent before now, limited to **300** reminders at a time. Any leftover reminders will be sent in the next run. This is to avoid having to deal with fickle sidekiq and reminders in the far-flung future, which would necessitate having a background job anyway to clean up any missing `enqueue_at` reminders.
* If a reminder is sent its `reminder_at` time is cleared and the `reminder_last_sent_at` time is filled in. Notifications are only user-level notifications for now.
* All JavaScript and frontend code related to displaying the bookmark reminder notification is contained here. The reminder functionality is now re-enabled in the bookmark modal as well.
* This PR also implements the "Remind me next time I am at my desktop" bookmark reminder functionality. When the user is on a mobile device they are able to select this option. When they choose this option we set a key in Redis saying they have a pending at desktop reminder. The next time they change devices we check if the new device is desktop, and if it is we send reminders using a DistributedMutex. There is also a job to ensure consistency of these reminders in Redis (in case Redis drops the ball) and the at desktop reminders expire after 20 days.
* Also in this PR is a fix to delete all Bookmarks for a user via `UserDestroyer`
If a post is being cooked twice (for example after an edit), there is a
chance that the 'raw' and 'cooked' columns will be inconsistent. This reduces
the chances of that happening.
Rails has odd behavior when calling .delete_all on a has_many relation: the
default behavior is to nullify the foreign key fields instead of actually
'DELETE'ing the records.
Additionally, publishing a shared draft topic creates a PostRevision that the
NotifyPostRevision job picks up which is then promptly deleted.
Use destroy_all when cleaning up the revisions and have the NotifyPostRevision
job tolerate deleted PostRevision records.
This takes a small performance hit (several SQL DELETEs instead of just one)
but shouldn't be too much of an issue (high cardinalities range from 30-100).
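A minimal illustration (hypothetical models) of the Rails behavior described above:

```ruby
class PostRevision < ActiveRecord::Base; end

class Post < ActiveRecord::Base
  has_many :post_revisions # no :dependent option, so delete_all nullifies
end

post = Post.first
post.post_revisions.delete_all  # UPDATE post_revisions SET post_id = NULL ...
post.post_revisions.destroy_all # loads and DELETEs each record, running callbacks
```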
Introduces a new site setting `max_notifications_per_user`.
Out-of-the-box this is set to 10,000. If a user exceeds this number of
notifications, we will delete the oldest notifications keeping only 10,000.
To disable this safeguard set the setting to 0.
Enforcement happens weekly.
This is in place to protect the system from pathological states where a
single user has enormous amounts of notifications causing various queries
to time out. In practice nobody looks back more than a few hundred notifications.
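A rough sketch (names assumed) of how the weekly enforcement might trim each user down to the configured limit:

```ruby
def trim_excess_notifications
  limit = SiteSetting.max_notifications_per_user
  return if limit == 0 # 0 disables the safeguard

  User.find_each do |user|
    # Find the id of the oldest notification this user gets to keep.
    oldest_kept_id = Notification
      .where(user_id: user.id)
      .order(id: :desc)
      .offset(limit - 1)
      .limit(1)
      .pluck(:id)
      .first
    next if oldest_kept_id.nil? # user is under the limit

    Notification.where(user_id: user.id).where("id < ?", oldest_kept_id).delete_all
  end
end
```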
Previously we had many places in the app that called `hostname` to get the
hostname of the server. This commit replaces that pattern in 2 ways
1. We cache the result in `Discourse.os_hostname` so it is only ever computed once
2. We prefer to use Socket.gethostname, which avoids making a shell command
This improves performance, as we are not spawning hostname processes throughout
the app's lifetime
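A sketch of the memoization described above (the fallback is an assumption, not necessarily what core does):

```ruby
require "socket"

module Discourse
  def self.os_hostname
    @os_hostname ||=
      begin
        Socket.gethostname # resolved in-process, no shell spawned
      rescue StandardError
        `hostname`.strip # assumed fallback to the old shell-out
      end
  end
end
```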
Basically, say you had already downloaded a certain image from a certain URL
using pull_hotlinked_images and the onebox. The upload would be stored
by its sha1 as an upload record. Whenever you linked to the same URL again
in a post (e.g. in our case an og:image on review.discourse) we would
reuse the original upload record because of the sha1.
However when you turned on secure media this could cause problems as
the first post that uses that upload after secure media is enabled
will set the access control post for the upload to the new post.
Then if the post is deleted every single onebox/link to that same image
URL will fail forever with 403 as the secure-media-uploads URL fails
if the access control post has been deleted.
To fix this, when cooking posts and pulling hotlinked images, we only
allow using an original upload by URL if its access control post
matches the current post, and if the original_sha1 is filled in,
meaning it was uploaded AFTER secure media was enabled. Otherwise
we just redownload the media again to be safe, as the URL will always
be new then.
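A rough sketch (attribute names assumed) of the reuse check described above:

```ruby
def can_reuse_existing_upload?(upload, current_post)
  return true unless SiteSetting.secure_media

  # Reuse only uploads created after secure media was enabled
  # (original_sha1 present) that are access-controlled by this very post;
  # anything else gets redownloaded.
  upload.original_sha1.present? && upload.access_control_post_id == current_post.id
end
```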
Regression was created here:
https://github.com/discourse/discourse/pull/8750
When a tag or category is added and the user is watching that category/tag,
we changed the notification type to `edited` instead of `new post`.
However, the logic here should be a little more sophisticated.
If the user has already seen the post, the notification should be `edited`.
However, when the user hasn't yet seen the post, the notification should be
"new reply". A case for that is when, for example, a topic is under a private
category and set for publishing later. In that case, we modify an
existing topic; however, for the user, it is like a new post.
Discussion on meta:
https://meta.discourse.org/t/publication-of-timed-topics-dont-trigger-new-topic-notifications/139335/13
Previously the badge was granted one month after the last time the badge was granted, so the exact date shifted by one day each month. The new logic tries to always grant the badge at the beginning of a new month by looking at new users of the previous month. The "granted at" date is set to the end of the previous month.
With this change the script:
* Actually removes original large-sized images
* Doesn't save processed files if their size has increased
* Prevents inconsistent state
When pull_hotlinked_images tried to run on posts with secure media (which had already been downloaded from external sources) we were getting a 404 when trying to download the image because the secure endpoint doesn't allow anon downloads.
Also, we were getting into an infinite loop of pull_hotlinked_images because the job didn't consider the secure media URLs as "downloaded" already so it kept trying to download them over and over.
In this PR I have also refactored secure-media-upload URL checks and mutations into a single source of truth in Upload, adding a SECURE_MEDIA_ROUTE constant to check URLs against.