Currently, in the list controller, when an unsafe redirect error is
encountered, a 404 is rendered. The problem is that it's done in a way
that also logs a fatal error (because a `Discourse::NotFound` exception
is raised inside a `rescue_from` block).
This patch addresses that issue by simply rendering a 404 without
raising any error.
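A minimal sketch of the approach, assuming Rails'
`ActionController::Redirecting::UnsafeRedirectError` is the error being
rescued (the handler shape and response body are illustrative):

```ruby
rescue_from ActionController::Redirecting::UnsafeRedirectError do
  # Render the 404 directly; raising Discourse::NotFound here would be
  # logged as a fatal error because we are already inside rescue_from.
  render plain: "Not Found", status: 404
end
```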
This commit adds a `MiniSchedulerLongRunningJobLogger` class which will
poll every 60 seconds for mini_scheduler jobs which are stuck. When it
detects that a job is stuck, it will log a warning message with the
current backtrace of the thread that is executing the job.
Note that for scheduled jobs executed at an interval of less than 30
minutes, we will log when the job has been executing for 30 minutes.
For scheduled jobs executed at an interval between 30 minutes and 2
hours, we will log when the job has been executing for longer than its
configured interval.
For scheduled jobs executed at an interval greater than 2 hours, we
will log once the job has been executing for more than 2 hours.
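A minimal sketch of those thresholds, assuming ActiveSupport durations
(the method name and structure are illustrative, not the actual
implementation):

```ruby
def stuck_threshold(interval)
  if interval < 30.minutes
    30.minutes            # frequent jobs: flag after 30 minutes
  elsif interval <= 2.hours
    interval              # mid-range jobs: flag after one full interval
  else
    2.hours               # infrequent jobs: flag after 2 hours
  end
end
```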
These are the steps to move a flag:
1. open menu;
2. click move up - `saving` CSS class is added;
3. request to backend;
4. `saving` CSS class is removed.
To check that the action has finished, we were using these methods:
```ruby
def move_up(key)
  open_flag_menu(key)
  find(".admin-flag-item__move-up").click
  has_saved_flag?(key)
  self
end

def has_saved_flag?(key)
  has_css?(".admin-flag-item.#{key}.saving")
  has_no_css?(".admin-flag-item.#{key}.saving")
end
```
However, sometimes specs were failing with `expected to find CSS ".admin-flag-item.spam.saving" but there were no matches`.
I think the problem is with these two lines:
```ruby
has_css?(".admin-flag-item.#{key}.saving")
has_no_css?(".admin-flag-item.#{key}.saving")
```
If the save action is very fast, then the `saving` class is removed before the first check.
Therefore, to determine that the move action has finished, I am checking that the menu is closed instead.
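The updated helper then looks roughly like this (a sketch assembled
from the methods above, with `has_closed_flag_menu?` as the menu-closed
check):

```ruby
def move_up(key)
  open_flag_menu(key)
  find(".admin-flag-item__move-up").click
  has_closed_flag_menu?
  self
end
```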
Currently, when a badly named category slug is provided, it can lead to
an infinite redirect.
This patch addresses the issue by properly unescaping `request.fullpath`
so the path is successfully rewritten and the redirect happens as
expected.
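A minimal sketch of the idea (the surrounding rewrite logic is omitted;
only the unescaping is the point here):

```ruby
require "cgi"

# Unescape the requested path before rewriting it, so an escaped slug
# is rewritten to its canonical form instead of redirecting to itself.
path = CGI.unescape(request.fullpath)
```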
Currently, to handle stub topics after merging, the only options are to (1) never delete the stub topic or (2) delete the stub topic after X amount of days. This adds an option to immediately delete the stub topic upon merge.
---------
Co-authored-by: Mark VanLandingham <markvanlan@gmail.com>
Co-authored-by: Renato Atilio <renato@discourse.org>
This commit continues on work laid out by 6039b513fe to redesign the /about page. In this commit, we add the site age and a section on the right hand side to show site activities/statistics such as topics, posts, sign-ups, likes etc.
Following a recent refactor, some methods from `FlagSettings` have been
renamed (`custom_types` -> `additional_message_types`). The
`PostActionType` model was using `custom_types`, but when the renaming
was done, the call was changed to `with_additional_message` instead of
`additional_message_types`, which under the right circumstances raises
an error.
Admins can create up to 50 custom flags; the limit exists for performance reasons.
When the limit is reached, the "Add" button is disabled and the backend is protected by a guardian check.
This commit patches `Net::HTTP` to reduce its default timeouts of 60
seconds when we are processing a request. Certain routes in Discourse
make external requests, and if proper timeouts are not set, we risk the
Unicorn master process force-restarting the Unicorn workers once the
30-second timeout is reached. This can potentially become a vector for
DoS attacks, and this commit is aimed at reducing that risk.
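A sketch of the idea (the timeout values are illustrative, not the ones
in the patch):

```ruby
require "net/http"

# Lower Net::HTTP's 60-second defaults so an external call made while
# processing a request cannot outlive the 30-second worker timeout.
Net::HTTP.prepend(
  Module.new do
    def initialize(*args)
      super
      self.open_timeout = 10
      self.read_timeout = 20
      self.write_timeout = 20
    end
  end
)
```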
We support a low-level construct called "inline checks", which you can use to register a problem ad-hoc from within application code.
Problems registered by inline checks never show up in the admin
dashboard. This is because, when loading the dashboard, we run all
realtime checks and look for problems; due to an oversight, inline
checks were also considered "realtime", causing them to be run and to
clear their problem status.
To fix this, we no longer consider inline checks to be realtime, which
prevents them from running when the admin dashboard is loaded.
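A hypothetical sketch of the fix (class and method names are
illustrative, not the actual API):

```ruby
# The dashboard should only run checks that are truly realtime, so
# inline checks must be excluded from that set.
def self.realtime_checks
  checks.select { |check| check.realtime? && !check.inline? }
end
```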
This change is mainly a refactor of the desktop notifications service to improve readability and to standardise the values used to track the current user's state with regards to the Notification API and the Push API.
It also improves readability when handling push notification jobs, especially in scenarios where the `push_notification_time_window_mins` site setting is set to 0, which allows push notifications to be sent instantly.
We were writing theme-transpiler JS files to the filesystem on a per-process basis, and then immediately reading them back in. Plus, there was no cleanup mechanism, so the tmp directory would grow indefinitely.
This commit refactors things so that the `build.js` script outputs the theme-transpiler source to stdout. That way, we can read it directly into the process, and then into mini-racer, without needing to go via the filesystem. No cleanup required!
In production, the theme-transpiler is still cached in a file during `assets:precompile`.
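A sketch of the approach (the script path is illustrative):

```ruby
require "mini_racer"

# The build script prints the theme-transpiler source to stdout; read
# it straight from the child process, no temp file involved.
source = `node theme-transpiler/build.js`
raise "theme-transpiler build failed" unless $?.success?

context = MiniRacer::Context.new
context.eval(source)
```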
In the formkit conversion in 2ca06ba236,
we missed setting a type for the UppyImageUploader for badges. Also,
we were not passing down the `image_url` as form data, so when we used
`data.image` for that field, the badge was not updating in the UI after
page load and the image URL was not loading in the preview.
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
Before this commit, running `rspec --seed 22953 --format documentation spec/requests/admin/site_texts_controller_spec.rb:191 spec/lib/freedom_patches/translate_accelerator_spec.rb:109` would fail.
Setting `I18n.config.available_locales` is equivalent to hardcoding the
locales for the entire process. It should not be set, so that `I18n`
falls back to `backend.locales`.
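A sketch of the leak:

```ruby
# Assigning available_locales pins the locale list for the entire
# process, which can bleed between specs.
I18n.config.available_locales = [:en]

# Clearing it restores the fallback to the backend's locales.
I18n.config.available_locales = nil
```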
Sometimes the backtrace is quite big for failing specs. Setting the
`RSPEC_EXCLUDE_NOISE_IN_BACKTRACE` env var to 1 removes everything but
spec and application code from RSpec backtraces. This makes it easier
to see where the actual failure is coming from; most of the time, the
gem paths are all noise.
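A sketch of how such a flag can be honored (the exclusion pattern is an
assumption, not the actual implementation):

```ruby
if ENV["RSPEC_EXCLUDE_NOISE_IN_BACKTRACE"] == "1"
  RSpec.configure do |config|
    # Hide frames coming from gems; keep spec and application frames.
    config.backtrace_exclusion_patterns = [%r{/gems/}]
  end
end
```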
When creating a shared draft, we record topic view stats on the draft and then pass those on when the draft is published, inflating the actual view count.
This fixes that by not registering topic views if the topic is a shared draft.
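A hypothetical sketch of the guard (names are illustrative):

```ruby
def register_topic_view(topic, user)
  # Shared drafts should not accumulate views before publication.
  return if topic.shared_draft.present?
  # ...record the view as usual...
end
```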
When `SiteSetting.review_every_post` is true and the category has
`require_topic_approval` enabled, the system creates two reviewable
items:
1. Firstly, because the category needs approval, a `ReviewableQueuedPost` record is created - at this stage, no topic is created.
2. An admin approves the review. The topic and first post are created.
3. Because `review_every_post` is true, the `queue_for_review_if_possible` callback is evaluated and a `ReviewablePost` is created.
4. Then the `ReviewableQueuedPost` is linked to the newly generated topic and post.
At the beginning, we were thinking about hooking into these guards:
```ruby
def self.queue_for_review_if_possible(post, created_or_edited_by)
  return unless SiteSetting.review_every_post
  return if post.post_type != Post.types[:regular] || post.topic.private_message?
  return if Reviewable.pending.where(target: post).exists?
  ...
```
And add something like:
```ruby
return if Reviewable.approved.where(target: post).exists?
```
However, because the callback happens in step 3, before the `ReviewableQueuedPost` is linked to the `Topic`, this was not possible.
Therefore, when `ReviewableQueuedPost` creates a `Topic`, a new option called `:reviewed_queued_post` is passed to `PostCreator` to avoid creating a second `Reviewable`.
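A sketch of the shape of the change (not the exact call sites):

```ruby
# When the queued post is approved, the topic is created with the new
# option so the review callback knows this post is already reviewed.
PostCreator.new(user, opts.merge(reviewed_queued_post: true)).create

# ...and queue_for_review_if_possible can then bail out early, e.g.:
# return if options[:reviewed_queued_post]
```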
Currently, descriptions for flag types aren't interpolated, returning
`%{base_path}` in their string, for example. This breaks navigation on
the sites.
The behavior probably changed because of a Ruby upgrade, as two hashes
were passed to `I18n.t` (`vars` and `default`) without using the splat
operator.
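A sketch of the fix, assuming a call shaped like the one described (the
key and variable names are illustrative):

```ruby
# Without the splat, `vars` is passed as a plain positional hash and
# its keys are not used for interpolation; splatting turns them into
# keyword arguments that I18n interpolates as expected.
I18n.t("flags.spam.description", **vars, default: default)
```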
When using `Discourse.cache.fetch` with an expiry, there's a potential for a race condition due to how we read the data from redis.
The code used to be
```ruby
raw = redis.get(key) if !force
entry = read_entry(key) if raw
return entry if raw && !(entry == :__corrupt_cache__)
```
with `read_entry` defined as follow
```ruby
def read_entry(key)
  if data = redis.get(key)
    Marshal.load(data)
  end
rescue => e
  :__corrupt_cache__
end
```
If the value at "key" expired in redis between `raw = redis.get` and `entry = read_entry`, the `entry` variable would be `nil` despite `raw` having a value.
We would then proceed to return `entry` (which is `nil`) thinking it had a value, when it didn't.
The first `redis.get` can be skipped altogether; we can rely on `read_entry` alone to read the data from redis, avoiding the race condition and the double read.
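A sketch of the fixed read path:

```ruby
# A single redis read; read_entry's sentinel distinguishes corrupt
# data from a plain cache miss.
if !force
  entry = read_entry(key)
  return entry if !entry.nil? && entry != :__corrupt_cache__
end
```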
Internal ref - t/132507
* SECURITY: Update default allowed iframes list
Change the default iframe URL list so that all entries include 3 slashes.
* SECURITY: limit group tag's name length
Limit the size of a group tag's name to 100 characters.
Internal ref - t/130059
* SECURITY: Improve sanitization of SVGs in Onebox
---------
Co-authored-by: Blake Erickson <o.blakeerickson@gmail.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: David Taylor <david@taylorhq.com>
Followup 4aea12fdcb
In certain config areas (like About) we want to be able
to fetch specific site settings by name. In this case,
sometimes we need to be able to fetch hidden settings,
in cases where a config area is still experimental.
Splitting out a different endpoint for this purpose
allows us to be stricter with what we return for config
areas without affecting the main site settings UI or
revealing hidden settings before they are ready.
Since switching to MaxMind permalinks to download the databases in
7079698cdf, we have received multiple reports about rebuilds failing:
`maxminddb:refresh` runs during the rebuild, and a failure to download
the databases causes the rebuild to fail.
Downloading MaxMind databases should not sit in the critical rebuild
path, but since we are close to the Discourse 3.3 release, we have
opted to just rescue all errors encountered when downloading the
databases. In the near future, after the Discourse 3.3 release, we will
look at moving the download of MaxMind databases out of the rebuild
path.
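A sketch of the stopgap (the task body is illustrative):

```ruby
task "maxminddb:refresh" => :environment do
  download_maxmind_databases # assumption: stands in for the real download
rescue => e
  # Swallow the error so a MaxMind outage cannot fail the whole rebuild.
  STDERR.puts "Failed to download MaxMind databases: #{e.message}"
end
```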
We have a dedicated admin page (`/admin/customize/email_templates`) that lets admins customize all emails that Discourse sends to users. The page lists all translation strings that are used for emails, but that list is currently hardcoded and hasn't been updated in years. Discourse has gained a number of new emails since, so we should add their templates to the list to let admins easily customize them.
Meta topic: https://meta.discourse.org/t/3-2-x-still-ignores-some-custom-email-templates/308203.
* FIX: Ensure JsLocaleHelper only outputs up-to-date translations
The old implementation forgot to filter out deprecated translations,
causing them to incorrectly override the new locale strings in the
frontend.
This commit adds the missing `where` clause, so only up-to-date
translations are included.
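A sketch of the missing filter, assuming overrides track their
staleness in a status column:

```ruby
# Only up-to-date overrides should make it into the frontend bundle.
TranslationOverride.where(locale: locale, status: "up_to_date")
```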
Related meta topic: https://meta.discourse.org/t/outdated-translation-replacement-causing-missing-translation/314352
This patch fixes the `i18n:check` rake task which has been broken by
the `MessageFormat` upgrade.
It also adds a spec to ensure we generate valid MF code for all our
available locales.
Currently, when adding translation overrides, values aren’t validated
for MF strings. This results in being able to add invalid plural keys or
even strings containing invalid syntax.
This patch addresses this issue by compiling the string when saving an
override if the key is detected as an MF one.
If there's an error from the compiler, it's added to the model errors,
which are then displayed to the user in the admin UI, helping them
understand what went wrong.
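A hypothetical sketch of the validation flow (every name here is
illustrative):

```ruby
validate :message_format_compiles, if: :message_format_key?

def message_format_compiles
  compile_message_format!(locale, value) # assumed compiler entry point
rescue MessageFormatCompileError => e
  # Surface the compiler error on the model so the admin UI can show it.
  errors.add(:base, e.message)
end
```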
When we show user tips, we immediately send an AJAX request to mark the
tip as seen. This is done in the background. However, when system tests
are run, sometimes that request does not complete before the test ends.
This causes the test to be flaky.
One way to fix this is to force the system test run to wait for the AJAX
request to complete. However, this is not ideal because it makes the
test suite slower on each run.
Instead, this commit removes the flaky assertion and adds an alternative
assertion in the frontend tests that ensures the background request is
sent when the user tip is shown.