Constants should only be assigned once. The logical OR assignment
of a constant is a relic from before we used zeitwerk for
autoloading, when we had bugs where a file could be loaded twice,
resulting in constant redefinition warnings.
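For example (the constant name is illustrative):
```ruby
# Before (relic): OR-assignment papered over accidental double-loading.
# MAX_RETRIES ||= 5

# After: plain assignment; with zeitwerk each file is loaded exactly once,
# so any redefinition warning now points at a real bug.
MAX_RETRIES = 5
```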
In our production environment, we have been seeing Sidekiq processes
getting stuck randomly when a USR1 signal is sent to the Unicorn master
process. We have not been able to identify the root cause of why the
Sidekiq process gets stuck. We did notice, however, that when the Unicorn
master process receives a USR1 signal, it reopens its own logs first
before sending a USR1 signal for the Unicorn worker processes to reopen
theirs. We figured we should do the same for the Sidekiq process when a
USR1 signal is received.
In this commit, we introduce an arbitrary delay of 1 second before the
Sidekiq process reopens its log files, so as to allow enough time for the
Unicorn master to finish reopening its logs first.
We also skip reopening logs for the Sidekiq process if the `DISCOURSE_LOG_SIDEKIQ`
env var is not present, because there are no logs to reopen.
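Schematically, the handling looks something like this (a simplified sketch; the real hook lives in the process management code):
```ruby
Signal.trap("USR1") do
  # Skip entirely when Sidekiq isn't logging to a file.
  if ENV["DISCOURSE_LOG_SIDEKIQ"]
    Thread.new do
      sleep 1 # arbitrary delay so the Unicorn master finishes reopening its logs first
      Sidekiq.logger.reopen # Logger#reopen re-opens the underlying log device
    end
  end
end
```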
When running checks, we look to the existing problem check trackers and try to grab their ProblemCheck classes.
In some cases the class is no longer in the problem check repository, e.g. when the check was part of a plugin that has since been uninstalled.
In the case where the check was scheduled, this would lead to an error in one of the jobs.
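The fix boils down to a nil guard when resolving the class from the tracker; roughly (method names are assumptions):
```ruby
ProblemCheckTracker.all.each do |tracker|
  check = tracker.check # may be nil, e.g. after the defining plugin was uninstalled
  next if check.nil?

  # ...schedule/run the check as before...
end
```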
This patch removes the `with_service` helper from the code base.
Instead, we can pass a block with actions directly to the `.call` method
of a service.
This simplifies how to use services:
- use `.call` without a block to run the service and get its result
object.
- use `.call` with a block of actions to run the service and execute
arbitrary code depending on the service outcome.
It also means a service is now “self-contained” and can be used anywhere
without having to include a helper.
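For example, with a hypothetical `CreateTopic` service used in a controller context (the `on_success`/`on_failure` actions follow the existing service DSL):
```ruby
# Without a block: run the service and inspect its result object.
result = CreateTopic.call(guardian: guardian, params: topic_params)
Rails.logger.info("topic created") if result.success?

# With a block of actions: execute arbitrary code depending on the outcome.
CreateTopic.call(guardian: guardian, params: topic_params) do
  on_success { render json: success_json }
  on_failure { render json: failed_json, status: 422 }
end
```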
Currently, categories support designating only one group as a moderation group on the category. This commit removes the one-group limitation and makes it possible to designate multiple groups as moderators on a category.
Internal topic: t/124648.
Introduce a new log line for Sidekiq jobs that consume more than
`DISCOURSE_LIVE_SLOTS_SIDEKIQ_LIMIT` live slots. This is useful to
track down jobs that may leak memory.
This is enabled only when Sidekiq's job instrumenter is enabled (set
`DISCOURSE_LOG_SIDEKIQ` to `1`).
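Conceptually, the instrumenter compares heap live slots around each job, along these lines (a sketch; `job` and `args` stand in for the instrumented job):
```ruby
limit = ENV["DISCOURSE_LIVE_SLOTS_SIDEKIQ_LIMIT"].to_i

before = GC.stat(:heap_live_slots)
job.perform(*args)
consumed = GC.stat(:heap_live_slots) - before

if consumed > limit
  # Flag jobs that keep a large number of objects alive — a hint at a leak.
  Sidekiq.logger.warn("#{job.class} consumed #{consumed} live slots")
end
```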
This patch allows using an AR relation as a model in services without
fetching associated records. It only checks whether the relation is
empty. If it is, the execution stops at that point, as expected.
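For instance, a service whose model step returns a relation (names are illustrative; the `model`/`fetch_*` convention is the existing service DSL):
```ruby
class DeactivateStaleUsers
  include Service::Base

  # The relation is only checked for emptiness; records are not fetched.
  # If it is empty, execution stops here, as with any missing model.
  model :users

  private

  def fetch_users
    User.where("last_seen_at < ?", 2.years.ago) # a plain AR relation
  end
end
```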
### Why?
Before, all flags were static. Therefore, they were stored in class variables and serialized by SiteSerializer. Recently, we added an option for admins to add their own flags or disable existing flags. Therefore, the class variable had to be dropped because it was unsafe for a multisite environment. However, it started causing performance problems.
### Solution
When the new Flag system is used, instead of using PostActionType, we can serialize Flags and use a fragment cache for performance reasons.
At the same time, we still support the deprecated `replace_flags` API call. When it is used, we fall back to the old solution and the admin cannot add custom flags. In a couple of months, we will be able to drop that API function and clean up the code properly. However, because it may still be used, a Redis cache was introduced to improve performance.
To test backward compatibility, you can add this code to any plugin:
```ruby
replace_flags do |flag_settings|
  flag_settings.add(
    4,
    :inappropriate,
    topic_type: true,
    notify_type: true,
    auto_action_type: true,
  )
  flag_settings.add(1001, :trolling, topic_type: true, notify_type: true, auto_action_type: true)
end
```
This change ensures native push notifications respect the `push_notification_time_window_mins` site setting. Previously, only web push notifications accounted for the delay. Applying the same delay strategy to both forms of push notifications brings more consistency between Discourse in the browser and the Hub app.
This change is mainly a refactor of the desktop notifications service to improve readability and standardise the values used to track the current user's state with regard to the Notification API and Push API.
It also improves readability when handling push notification jobs, especially when the `push_notification_time_window_mins` site setting is set to 0, which allows sending push notifications instantly.
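A simplified sketch of the shared strategy (the payload and job wiring are elided):
```ruby
window = SiteSetting.push_notification_time_window_mins

if window > 0
  # Defer, giving the user a chance to see the notification in an active session.
  Jobs.enqueue_in(window.minutes, :push_notification, payload)
else
  # A window of 0 means push notifications go out instantly.
  Jobs.enqueue(:push_notification, payload)
end
```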
* FIX: Ensure JsLocaleHelper only outputs up-to-date translations
The old implementation forgot to filter out deprecated
translations, causing them to incorrectly override the new
locale in the frontend.
This commit fills in the missing where clause so that only
up-to-date translations are included.
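The missing filter boils down to something like this (the status column/value names are assumptions):
```ruby
# Only current overrides should reach the frontend locale bundle.
TranslationOverride
  .where(locale: locale)
  .where(status: "up_to_date")
```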
Related meta topic: https://meta.discourse.org/t/outdated-translation-replacement-causing-missing-translation/314352
In development, classes are lazy loaded, so `Jobs::Onceoff.onceoff_job_klasses`
may not have been set. This is not a problem in production because
everything is eager loaded.
Follow-up to f4d06f195d
We are investigating a memory leak in Sidekiq and saw the following line
when comparing heap dumps over time.
`Allocated IMEMO 14775 objects of size 591000/7389528 (in bytes) at:
/var/www/discourse/app/jobs/onceoff/onceoff.rb:36`
The line in question was doing a `.select { |klass| klass < self }` on
`ObjectSpace.each_object(Class)`. For some reason, this allocates a
whole bunch of `IMEMO` objects, which are instruction sequence objects.
Instead of diving deeper into why this might be leaking, we can save
ourselves time by switching to an implementation that is more efficient
and does not require looping through a ton of objects.
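A sketch of the replacement: maintain an explicit registry via the `inherited` hook instead of scanning every class in `ObjectSpace`:
```ruby
module Jobs
  class Onceoff < ::Jobs::Base
    def self.onceoff_job_klasses
      @onceoff_job_klasses ||= Set.new
    end

    # Record each subclass as it is defined, avoiding the allocation-heavy
    # ObjectSpace.each_object(Class) scan on every call.
    def self.inherited(klass)
      super
      Onceoff.onceoff_job_klasses << klass
    end
  end
end
```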
Background:
In order to redrive failed webhook events, an operator currently has to go through and click on each one. This PR adds a mechanism to retry all failed events, helping resolve issues quickly once the underlying failure has been resolved.
What is the change?:
Previously, we had to redeliver each webhook event individually. This merge adds a 'Redeliver Failed' button next to the webhook event filter to redeliver all failed events. If there are no failed webhook events to redeliver, 'Redeliver Failed' is disabled. Clicking it pops up a window asking the operator for confirmation. Failed webhook events are added to a queue, and the webhook event list shows the redelivery progress. Every minute, a job runs through up to 20 events to redeliver. Every hour, another job cleans up redelivering events that have been stored for more than 8 hours.
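A rough sketch of the per-minute job (the model and helper names are assumptions):
```ruby
module Jobs
  class RedeliverWebHookEvents < ::Jobs::Scheduled
    every 1.minute

    BATCH_SIZE = 20

    def execute(args)
      # Work through the queued failed events a batch at a time.
      RedeliveringWebhookEvent.limit(BATCH_SIZE).each do |redelivering|
        resend(redelivering.web_hook_event) # hypothetical helper that re-posts the payload
        redelivering.destroy!
      end
    end
  end
end
```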
I am changing many of these to notes or resolving them as is.
Most of these I have not actively worked on in years, so someone
else can work on them when we get to these areas again.
This commit introduces a hidden `s3_inventory_bucket` site setting which
replaces the `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings.
The `enable_s3_inventory` and `s3_configure_inventory_policy` site
settings are removed because this feature has technically been broken
since it was introduced. When the `enable_s3_inventory` feature is
turned on, the app configures a daily inventory policy for the
`s3_upload_bucket` bucket and stores the inventories under a prefix in
the bucket. The problem is that once the inventories are created,
nothing cleans them up, so whoever enabled this feature has been paying
the cost of storing a whole bunch of inventory files that are never
used. Given that we have not received any complaints about inventory
files inflating S3 storage costs, we think it is very likely that this
feature is no longer being used, and we are looking to drop support for
it in the not too distant future.
For now, we will still support a hidden `s3_inventory_bucket` site
setting which site administrators can configure via the
`DISCOURSE_S3_INVENTORY_BUCKET` env var.
This commit updates `S3Inventory#files` to ignore S3 inventory files
whose `last_modified` timestamp is not at least 2 days older than the
`BackupMetadata.last_restore_date` timestamp.
This check was previously only in `Jobs::EnsureS3UploadsExistence`, but
`S3Inventory` can also be used via Rake tasks, so this protection needs
to be in `S3Inventory` and not in the scheduled job.
This PR introduces a basic AdminNotice model to store these notices. Admin notices are categorized by their source/type (currently only notices from problem checks). They also have a priority.
This commit introduces a few changes as a result of
customer issues with finding why a topic was relisted.
In one case, if a user edited the OP of a topic that was
unlisted and hidden because of too many flags, the topic
would get relisted by directly changing topic.visible,
instead of going via TopicStatusUpdater.
To improve tracking we:
* Introduce a visibility_reason_id to topic which functions
in a similar way to hidden_reason_id on post, this column is
set from the various places we change topic visibility
* Fix Post#unhide! which was directly modifying topic.visible;
instead we use TopicStatusUpdater, which sets visibility_reason_id
and also makes a small action post (see the sketch after this list)
* Show the reason topic visibility changed when hovering the
unlisted icon in topic status on topic titles
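For example, relisting now goes through the status updater, roughly like this (the reason enum value is illustrative):
```ruby
# Relist via TopicStatusUpdater so visibility_reason_id is recorded
# and a small action post is created.
TopicStatusUpdater.new(topic, Discourse.system_user).update!(
  :visible,
  true,
  { visibility_reason_id: Topic.visibility_reasons[:op_unhidden] },
)
```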
In a large forum with millions of users and millions of user_fields,
updating the list of dropdown user field options can currently result
in a 502 due to the large number of fields.
This commit moves the indexing into a background job.
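Schematically, the work is enqueued instead of being done inline in the request (the job name is a placeholder):
```ruby
# Reindexing millions of user field rows synchronously could 502;
# enqueue it instead (job name illustrative).
Jobs.enqueue(:update_user_field_options, user_field_id: user_field.id)
```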
Previously, when a new site was created, no one would receive notifications to review the user approval queue after the first admin login, since only moderators would receive the PMs about it. Also, this PR changes the `pending_users_reminder_delay_minutes` site setting to 5 minutes while the site is in bootstrap mode.
This adds a hidden site setting of `skip_email_bulk_invites`
If set to `true`, the `BulkInvite` job will pass the value to `Invite`, meaning the generated invite won't trigger an email notification being sent to the newly invited user.
(This is useful if you want to manage the sending of the invite emails outside of Discourse.)
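Conceptually, the job threads the setting through to invite generation, along these lines (the exact option name is an assumption):
```ruby
# Inside the BulkInvite job: suppress the invite email when the hidden
# site setting is enabled (option name assumed).
Invite.generate(
  current_user,
  email: invite[:email],
  skip_email: SiteSetting.skip_email_bulk_invites,
)
```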
Doing the following renames:
`Jobs::ProblemChecks` → `Jobs::RunProblemChecks`
`Jobs::ProblemCheck` → `Jobs::RunProblemCheck`
This is to disambiguate the `ProblemCheck` class name, ease fuzzy finding, and avoid needing to use `::` in a bunch of places.
As part of the problem checks refactoring, we're moving some data to be DB-backed. In this PR it's the tracking of problem check execution: when it was last run, when the last problem occurred, when it should run next, how many consecutive checks had problems, etc.
This allows us to implement the `perform_every` feature in scheduled problem checks, for checks that don't need to be run every 10 minutes.
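With the tracker data in the DB, a check can declare a longer cadence, roughly like this (the class attribute follows the ProblemCheck base class; the check itself is illustrative):
```ruby
# A check that only needs to run daily instead of every 10 minutes.
class ProblemCheck::UnusedAccessKeys < ProblemCheck
  self.perform_every = 1.day

  def call
    no_problem # illustrative stub; helper name assumed from the base class
  end
end
```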
Forums can now enroll their sites to be showcased in the Discourse [Discover](https://discourse.org/discover) directory. Once they enable the `include_in_discourse_discover` site setting to enroll their forum, the `CallDiscourseHub` job will ping the `api.discourse.org/api/discover/enroll` endpoint. The Discourse Hub will then fetch basic details from the forum and add it to the review queue. If the site is approved, the forum details will be displayed on the `/discover` page.