Admins can create up to 50 custom flags; the limit exists for performance reasons.
When the limit is reached, the "Add" button is disabled and the backend is protected by a guardian check.
This commit patches `Net::HTTP` to reduce the default timeouts of 60
seconds when we are processing a request. There are certain routes in
Discourse which make external requests, and if proper timeouts are
not set, we risk having the Unicorn master process force-restart the
Unicorn workers once the 30-second timeout is reached. This can
potentially become a vector for DoS attacks, and this commit is aimed at
reducing that risk.
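As a rough illustration, the patch amounts to something like the following (a minimal sketch, not the actual code; the 10-second values are assumptions):
```ruby
require "net/http"

# Minimal sketch: prepend a module so Net::HTTP instances start with
# tighter timeouts than the 60-second defaults, failing well before
# Unicorn's 30-second worker timeout.
module TighterNetHTTPTimeouts
  def initialize(*args)
    super
    self.open_timeout = 10  # default is 60
    self.read_timeout = 10  # default is 60
    self.write_timeout = 10 # default is 60
  end
end

Net::HTTP.prepend(TighterNetHTTPTimeouts)
```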
The Safari 15 bugfix has been rolled into `@babel/preset-env` in the most recent version, so we no longer need to carry our vendored copy.
This commit updates `@babel/preset-env`, runs `npx yarn-deduplicate yarn.lock`, and removes the vendored transform.
This commit also refactors our theme transpiler to use `@babel/preset-env`, with the same list of target browsers as our ember-cli build uses. This means we no longer need to maintain a separate list of babel transforms for themes.
We were writing theme-transpiler JS files to the filesystem on a per-process basis, and then immediately reading them back in. Plus, there was no cleanup mechanism, so the tmp directory would grow indefinitely.
This commit refactors things so that the `build.js` script outputs the theme-transpiler source to stdout. That way, we can read it directly into the process, and then into mini-racer, without needing to go via the filesystem. No cleanup required!
In production, the theme-transpiler is still cached in a file during `assets:precompile`.
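A minimal sketch of the new flow (the build command and path are assumptions for illustration):
```ruby
require "open3"
require "mini_racer"

# Capture the transpiler source from the build script's stdout...
source, status = Open3.capture2("node", "lib/theme-transpiler/build.js")
raise "theme-transpiler build failed" unless status.success?

# ...and load it straight into mini-racer: no tmp files, nothing to clean up.
ctx = MiniRacer::Context.new
ctx.eval(source)
```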
When `SiteSetting.review_every_post` is true and the category requires
topic approval (`require_topic_approval`), the system creates two
reviewable items:
1. First, because the category needs approval, a `ReviewableQueuedPost` record is created - at this stage, no topic is created.
2. An admin approves the review. The topic and first post are created.
3. Because `review_every_post` is true, the `queue_for_review_if_possible` callback is evaluated and a `ReviewablePost` is created.
4. Then the `ReviewableQueuedPost` is linked to the newly generated topic and post.
At the beginning, we were thinking about hooking to those guards:
```ruby
def self.queue_for_review_if_possible(post, created_or_edited_by)
  return unless SiteSetting.review_every_post
  return if post.post_type != Post.types[:regular] || post.topic.private_message?
  return if Reviewable.pending.where(target: post).exists?
  ...
```
And add something like
```ruby
return if Reviewable.approved.where(target: post).exists?
```
However, because the callback happens at step 3, before the `ReviewableQueuedPost` is linked to the `Topic`, this was not possible.
Therefore, when the `ReviewableQueuedPost` is creating a `Topic`, a new option called `:reviewed_queued_post` is passed to `PostCreator` to avoid creating a second `Reviewable`.
We have been seeing `ZLib::BufError` when running the `assets:precompile` rake
task.
```
I, [2024-07-30T05:19:58.807019 #1059] INFO -- : Writing /var/www/discourse/public/assets/scripts/discourse-test-listen-boot-9b14a0fc65c689577e6a428dcfd680205516fe211700a71c7adb5cbcf4df2cc5.js
rake aborted!
Zlib::BufError: buffer error (Zlib::BufError)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache/file_store.rb:100:in `<<'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache/file_store.rb:100:in `set'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache.rb:212:in `set'
```
The hypothesis here is that a thread-safety issue is causing the
problem, since we download the MaxMind databases in a thread and run
decompression operations once the gzip file is downloaded.
In the near term, we plan to move the downloading of MaxMind databases
out of the Rake task into a scheduled job, so this patch should be
considered a temporary solution.
The trade-off here is that build time will increase slightly since we
are no longer downloading MaxMind databases while precompiling assets
at the same time.
When using `Discourse.cache.fetch` with an expiry, there's a potential for a race condition due to how we read the data from redis.
The code used to be
```ruby
raw = redis.get(key) if !force
entry = read_entry(key) if raw
return entry if raw && !(entry == :__corrupt_cache__)
```
with `read_entry` defined as follows
```ruby
def read_entry(key)
  if data = redis.get(key)
    Marshal.load(data)
  end
rescue => e
  :__corrupt_cache__
end
```
If the value at "key" expired in redis between `raw = redis.get` and `entry = read_entry`, the `entry` variable would be `nil` despite `raw` having a value.
We would then proceed to return `entry` (which is `nil`) thinking it had a value, when it didn't.
The first `redis.get` can be skipped altogether, and we can rely solely on `read_entry` to read the data from redis, thus avoiding the race condition and removing the double read.
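A hedged sketch of the fixed read path (`write_entry` here stands in for however the cache actually stores values):
```ruby
def fetch(key, force: false, &blk)
  if !force
    # Single read: there is no window between two redis.get calls in
    # which the key could expire.
    entry = read_entry(key)
    return entry if !entry.nil? && entry != :__corrupt_cache__
  end

  # Miss, forced, or corrupt: compute and store the fresh value.
  blk.call.tap { |value| write_entry(key, value) }
end
```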
Internal ref - t/132507
* SECURITY: Update default allowed iframes list
Change the default iframe URL list so that every entry includes three slashes.
* SECURITY: limit group tag's name length
Limit the size of a group tag's name to 100 characters.
Internal ref - t/130059
* SECURITY: Improve sanitization of SVGs in Onebox
---------
Co-authored-by: Blake Erickson <o.blakeerickson@gmail.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: David Taylor <david@taylorhq.com>
Since switching to MaxMind permalinks to download the databases in
7079698cdf, we have received multiple
reports of rebuilds failing: `maxminddb:refresh` runs during the
rebuild, and a failure to download the databases causes the rebuild to
fail.
Downloading MaxMind databases should not sit in the critical rebuild
path, but since we are close to the Discourse 3.3 release, we have opted
to just rescue all errors encountered when downloading the databases.
In the near future, after the Discourse 3.3 release, we will look at
moving the downloading of MaxMind databases out of the rebuild path.
We have a dedicated admin page (`/admin/customize/email_templates`) that lets admins customize all emails that Discourse sends to users. The page lists all translation strings that are used for emails, but that list is currently hardcoded and hasn't been updated in years. We've since added a number of new emails that Discourse sends, so we should add those templates to the list to let admins easily customize them.
Meta topic: https://meta.discourse.org/t/3-2-x-still-ignores-some-custom-email-templates/308203.
* FIX: Ensure JsLocaleHelper only outputs up-to-date translations
The old implementation forgot to filter out deprecated
translations, causing these translations to incorrectly override the new
locale in the frontend.
This commit fills in the missing `where` clause, filtering down to only
the up-to-date translations.
Related meta topic: https://meta.discourse.org/t/outdated-translation-replacement-causing-missing-translation/314352
This patch fixes the `i18n:check` rake task which has been broken by
the `MessageFormat` upgrade.
It also adds a spec to ensure we generate valid MF code for all our
available locales.
We can get translations with invalid plural keys from Crowdin
or from custom overrides. Currently, this will raise an error and the
locales won’t be outputted at all.
This patch addresses this issue by using the new `strict: false` option
of our `messageformat-wrapper` gem, which allows locales to be generated
even if invalid plural keys are present.
* FIX: Add post id to the anchor to prevent two identical anchors
We generate anchors for headings in posts. This works fine if there is
only one post in a topic with anchors. The problem comes when you have
two or more posts with the same heading. PrettyText generates anchors
based on the heading text using the raw context of each post, so it is
entirely possible to generate the same anchor for two posts in the same
topic, especially for topics with template replies:
```
Post1:
# heading
context

Post2:
# heading
context
```
When both posts are on the page at the same time, the anchor will only
work for the first post, according to the [HTML specification](https://html.spec.whatwg.org/multipage/browsing-the-web.html#scroll-to-the-fragment-identifier).
> If there is an a element in the document tree whose root is document
> that has a name attribute whose value is equal to fragment, then
> return the *first* such element in tree order.
This bug is particularly serious in forums with non-Latin languages,
such as Chinese. We do not generate slugs for Chinese, which results in
the heading anchors being completely dependent on their order.
```ruby
[2] pry(main)> PrettyText.cook("# 中文")
=> "<h1><a name=\"h-1\" class=\"anchor\" href=\"#h-1\"></a>中文</h1>"
```
Therefore, the anchors in the two posts end up exactly the same, in the
same order, causing almost all of the anchors in the second post to be
invalid.
This commit solves this problem by adding the `post_id` to the anchor.
The new anchor generation method will add `p-{post_id}` as a prefix when
post_id is available:
```ruby
[3] pry(main)> PrettyText.cook("# 中文", post_id: 1234)
=> "<h1><a name=\"p-1234-h-1\" class=\"anchor\" href=\"#p-1234-h-1\"></a>中文</h1>"
```
This way we can ensure that each anchor name only appears once on the
same topic. Using post id also prevents the potential possibility of the
same anchor name when splitting/merging topics.
Previously, in these two PRs, we introduced a new site setting, `SiteSetting.enforce_second_factor_on_external_auth`:
- https://github.com/discourse/discourse/pull/27547
- https://github.com/discourse/discourse/pull/27674
When disabled, it should enforce 2FA for local login with username and password and skip the requirement when authenticating with OAuth2.
We stored information about the login method in a secure session, but that is not reliable. Therefore, the information about the login method is moved to the database.
Previously, we couldn't change the user agent name dynamically for Onebox requests. In this commit, a new hidden site setting `onebox_user_agent` is created to override the default user agent value specified in the [initializer](c333e9d6e6/config/initializers/100-onebox_options.rb (L15)).
Co-authored-by: Régis Hanol <regis@hanol.fr>
* FEATURE: Clean up previously logged information after permanently deleting posts
When soft deleting a topic or post, we log some details in the
staff log, including the raw content of the post. Before this commit, we
did not clear the information in these records. Therefore, after
permanently deleting a post, `UserHistory` still retained a copy of the
permanently deleted post. This is unexpected behaviour and may raise
some potential legal issues.
This commit changes this: when a post is permanently deleted, the
`details` column of the `UserHistory` records associated with the post
is overwritten with "(permanently deleted)". At the same time, a new
`action_id` is introduced for permanent deletion to distinguish it from
soft deletion.
Related meta topic: https://meta.discourse.org/t/introduce-a-way-to-also-permanently-delete-the-sensitive-info-from-the-staff-logs/292546
This ensures that Elasticsearch doesn't parse it as an object. There are
too many combinations of job opts, so we don't want Elasticsearch to be
parsing and indexing this field as an object.
This improves `TextSentinel` so that we don't consider CJK text to be uppercase and thus fail the validator.
It also optimizes the entropy computation by using native Ruby `.bytes` to get all the bytes from the text.
It also tweaks the `seems_pronounceable?` and `seems_unpretentious?` checks to use the `\p{Alnum}` Unicode regexp group to account for non-Latin languages.
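A hedged sketch of the two tweaks (assumed to approximate, not mirror, the real checks):
```ruby
# Entropy as the number of distinct bytes in the text; String#bytes keeps
# this fast and encoding-agnostic.
def entropy(text)
  text.to_s.bytes.uniq.size
end

# \p{Alnum} matches alphanumerics in any script, so CJK text passes checks
# that a Latin-only [a-z0-9] class would fail.
def seems_pronounceable?(text)
  text.match?(/\p{Alnum}/)
end
```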
Reference - https://meta.discourse.org/t/body-seems-unclear-error-when-users-are-typing-in-chinese/88715
Inspired by https://github.com/discourse/discourse/pull/27900
Co-authored-by: Paulo Magalhaes <mentalstring@gmail.com>
We are investigating reports of errors with messageformat compilation following 301713ef. This commit does not fix the issue, but it introduces some basic error handling to avoid completely breaking affected sites.
We will have a fix for the root cause soon.
When tags contain an underscore, we should allow filtering in the same way; previously, due to the regex, tags with underscores were not found when filtering.
This commit ensures that we reset the `missing_s3_uploads` status count
if there are no inventory files which are at least 2 days older than the
site's restored date.
Otherwise, a site with missing uploads that was subsequently restored
would continue to report missing uploads for 2 days.
Followup 560e8aff75
The linked commit allowed oneboxing private GitHub PRs,
issues, commits, and so on, but it didn't actually allow
oneboxing the root repo, e.g. https://github.com/discourse/discourse-reactions
We didn't have an engine for this; we were relying on OpenGraph
tags in the HTML rendering of the page, as we do with other
oneboxes.
To fix this, we needed a new github engine for repos specifically.
Also, this commit adds a `data-github-private-repo` attribute to
PR, issue, and repo onebox HTML so we have an indicator of
whether the repo was private, which can be used for theme components
and so on.
Our old group SMTP SSL option was a checkbox,
but this was not ideal because there are actually
3 different ways SSL can be used when sending
SMTP:
* None
* SSL/TLS
* STARTTLS
We got around this before with specific overrides
for Gmail, but it's not flexible enough and now people
want to use other providers. It's best to be clear,
though it is a technical detail. We provide a way
to test the SMTP settings before saving them so there
should be little chance of messing this up.
This commit also converts GroupEmailSettings to a glimmer
component.
Allow admins to create custom flags which require an additional message.
I decided to rename the old `custom_flag` to `require_message` as it is more descriptive.
# Context
Currently there is no way to add a custom filter to the experimental `/filter` endpoint. While you can implement a custom `status:`, there is no way to include the user's input in a custom query.
# PR
This PR adds the ability to implement a custom filter, e.g. `CUSTOM_FILTER:foo`
- Add `add_filter_custom_filter` for extension (see the sketch after this list)
- Add specs
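A hedged sketch of how a plugin might use the new extension point; the filter name and the block signature are assumptions for illustration:
```ruby
# In a plugin's plugin.rb: registers a hypothetical `min_words:N` filter
# for the experimental /filter endpoint.
add_filter_custom_filter("min_words") do |scope, values|
  scope.where("topics.word_count >= ?", values.to_s.to_i)
end
```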
Followup 7b627dc14b
In this other commit, I changed the email settings validator
to always use the `login` authentication method for
office365 and outlook, but I didn't change the actual
group SMTP mailer to do this.
This commit fixes that issue and does some minor refactoring.
This commit updates `TopicQuery.validators` to cover all of the
public options listed in `TopicQuery.public_valid_options`. This is done
to fix the app returning a 500 response code when an invalid value, such
as a hash, is passed as a query param when accessing the various topic
list routes.
Followup 560e8aff75
GitHub auth tokens cannot be made with permissions to
access multiple organisations. This is quite limiting.
This commit changes the site setting to be a "secret list"
type, which allows for a key/value mapping where the value
is treated like a password in the UI.
Now when a GitHub URL is requested for oneboxing, the
org name from the URL is used to determine which token
to use for the request.
Just in case anyone used the old site setting already,
there is a migration to create a `default` entry
with that token in the new list setting, and for
a period of time we will consider that token valid to
use for all GitHub oneboxes as well.
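A hedged sketch of the per-org lookup (how the secret-list setting is parsed into a hash here is an assumption):
```ruby
require "uri"

# "https://github.com/discourse/discourse-reactions" -> "discourse"
def github_token_for(url, tokens_by_org)
  org = URI.parse(url).path.split("/")[1]
  tokens_by_org[org] || tokens_by_org["default"] # "default" = migrated legacy token
end

github_token_for(
  "https://github.com/discourse/discourse",
  { "discourse" => "ghp_example", "default" => "ghp_legacy" },
)
```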
Allow admins to create custom flags which require an additional message.
I decided to rename the old `custom_flag` to `require_message` as it is more descriptive.
Currently, when a plugin registers a new reviewable type or extends a
list method (through `register_reviewable_type` and `extend_list_method`
respectively), the new array is statically computed and always returns
the same value. It will continue to return the same value even if the
plugin is disabled (which can be a problem in a multisite environment too).
To address this issue, this patch changes how `extend_list_method`
works. It’s now using `DiscoursePluginRegistry.define_filtered_register`
to create a register on the fly and store the extra values from various
plugins. It then combines the original values with the ones from the
registry. The registry is already aware of disabled plugins, so when a
plugin is disabled, its registered values won’t be returned.
This commit improves the logging of Sidekiq errors when
`ENABLE_LOGSTASH_LOGGER` is set to 1. Prior to this change, we would
only log the message and the backtrace. After this change, useful
information like `job.class`, `job.opts`, `job.problem_db`,
`exception.class` and `exception.message` are included in the log line
as well.
This changes how we present attachments from incoming emails: they are now "hidden" in a "[details]" block so they don't "hang" at the end of the post.
This is especially useful when using Discourse as a support tool where email is the main communication channel. For various reasons, images are often duplicated by email user agents, and hiding them behind the details block helps keep the conversation focused on the issue at hand.
Internal ref t/122333
This patch upgrades the MessageFormat library to version 3.3.0 from
0.1.5.
Our `I18n.messageFormat` method signature is unchanged, and now uses the
new API under the hood.
We don’t need dedicated locale files for handling pluralization rules
anymore as everything is now included by the library itself.
The compilation of the messages now happens through our
`messageformat-wrapper` gem. It then outputs an ES module that includes
all its needed dependencies.
Most of the changes happen in `JsLocaleHelper` and in the `ExtraLocales`
controller.
A new method called `.output_MF` has been introduced in
`JsLocaleHelper`. It handles all the fetching, compiling and
transpiling to generate the proper MF messages in JS. Overrides and
fallbacks are also handled directly in this method.
The other main change is that now the MF translations are served through
the `ExtraLocales` controller instead of being statically compiled in a
JS file, then having to patch the messages using overrides and
fallbacks. Now the MF translations are just another bundle that is
created on the fly and cached by the client.
This commit updates `DiscourseLogstashLogger#add_with_opts` to avoid
logging messages that match the regexp patterns configured in
`Logster.store.ignore`. Those error logs are mostly triggered by clients
and do not serve any useful purpose.
This commit updates the `docker:test` rake task to run core and plugin
QUnit tests in parallel, using half the number of available CPU
processors, to speed up the time it takes to run the tests on hardware
with more CPU cores.
Before this commit, the core QUnit tests run by the `docker:test` rake task were capped at 3 parallel
processes, while plugin QUnit tests were not run in parallel at all.
This commit updates `DiscourseLogstashLogger` to add the
`exception_class` and `exception_message` fields to the log line when the
`progname` of the log message is `web-exception`, which is Logster's
logging of exceptions during a web request.
The `exception_class` and `exception_message` fields allow consumers of
the logs to easily group logs together.
This commit adds the ability to onebox private GitHub
commits, pull requests, issues, blobs, and actions using
a new `github_onebox_access_token` site setting. The token
must be set up correctly to have access to the repos needed.
To do this successfully with the Oneboxer, we need to skip
redirects on the github.com host, otherwise we get a 404
on the URL before it is translated into a GitHub API URL
and has the appropriate headers added.
- Delete vendored copy
- Create a JS entrypoint under `static/` which imports all the modes/themes/extensions we need
- Create an async `load-ace-editor` entrypoint
- Update `<AceEditor` component to use the new entrypoint
- De-jquery-ify `<AceEditor`
- Bump `v1.4.13` -> `v1.35.2`
This change eliminates a couple of instances where subfolder URLs are badly formatted; in most cases we can use `Discourse.base_url_no_prefix` to prevent adding the subfolder to the base URL.
This commit adds a hidden `s3_inventory_bucket_region` site setting to
specify the region of the `s3_inventory_bucket` when the `S3Inventory`
class initializes an instance of the `S3Helper`. By default, the
`S3Helper` class uses the value of the `s3_region` site setting but the
region of the `s3_inventory_bucket` is not always the same as the
`s3_region` configured.
Add a new column - `user_agent` - to the `SearchLog` table.
This column can be null, as we only allow the user-agent string to have a max length of 2000 characters. In case the user-agent string exceeds the max allowed length, we simply nullify the value and save/write the log as normal.
In our official Docker image, running git commands results in the
following error:
```
fatal: detected dubious ownership in repository at '/var/www/discourse'
To add an exception for this directory, call:
git config --global --add safe.directory /var/www/discourse
```
The AdminPlugin JS model uses a similar pattern to chat models,
where it is a plain JS class manually converting provided
snake_case attributes from the serializer to JS camelCase.
However, this doesn't work when it comes to using `add_to_serializer`
in plugins since core does not know about these new attributes.
Instead, we can use a JS function to convert snake_case to camelCase
and use that when initializing AdminPlugin. This commit also moves
similar functions to a new case-converter.js file in
discourse-common/lib.
This commit rewrites `DiscourseLogstashLogger` to not be an instance
of `LogstashLogger`. The reason we don't want it to be an instance of
`LogstashLogger` is because we want the new logger to be chained to
Logster's logger which can then pass down useful information like the
request's env and error backtraces which Logster has already gathered.
Note that this commit does not bother to maintain backwards
compatibility and drops the `LOGSTASH_URI` and `UNICORN_LOGSTASH_URI`
ENV variables which were previously used to configure the destination to
which `logstash-logger` would send the logs. Instead, we introduce
the `ENABLE_LOGSTASH_LOGGER` ENV variable to replace both and remove
the need for the log paths to be specified. Note that the previous
feature was considered experimental as stated in d888d3c54c
and the new feature should be considered experimental as well. The code
may be moved into a plugin in the future.
Fixes the following deprecation warning:
> DEPRECATION WARNING: The `check_pending!` method is deprecated in favor of `check_all_pending!`. The new implementation will loop through all available database configurations and find pending migrations. The prior implementation did not permit this.
Instead of the deprecated 'min trust level to allow ignore', in order to reduce the number of deprecation notices in the logs.
This tweaks a few serializers so that the `can_ignore_users?` property always comes from the server and is properly used on the client side.
Followup 2f2da72747
This commit moves topic view tracking from happening
every time a Topic is requested, which is susceptible
to inflating numbers of views from web crawlers, to
our request tracker middleware.
In this new location, topic views are only tracked when
the following headers are sent:
* HTTP_DISCOURSE_TRACK_VIEW - This is sent on every page navigation when
clicking around the ember app. We count these as browser page views
because we know it comes from the AJAX call in our app. The topic ID
is extracted from HTTP_DISCOURSE_TRACK_VIEW_TOPIC_ID
* HTTP_DISCOURSE_DEFERRED_TRACK_VIEW - Sent when MessageBus initializes
after first loading the page to count the initial page load view. The
topic ID is extracted from HTTP_DISCOURSE_DEFERRED_TRACK_VIEW.
This will bring topic views more in line with the change we
made to page views in the referenced commit and result in
more realistic topic view counts.
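A hedged sketch of the middleware-side decision (the header names come from the commit message above; everything else is illustrative):
```ruby
# Returns the topic id to count a view for, or nil for non-browser requests.
def tracked_topic_id(env)
  if env["HTTP_DISCOURSE_TRACK_VIEW"]
    # Ember app navigation: the topic id travels in a dedicated header.
    env["HTTP_DISCOURSE_TRACK_VIEW_TOPIC_ID"]&.to_i
  elsif (deferred = env["HTTP_DISCOURSE_DEFERRED_TRACK_VIEW"])
    # Initial page load: the deferred header itself carries the topic id.
    deferred.to_i
  end
end
```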
* UX: Add a rake task to monitor progress for long rebakes
When doing mass rebaking, this task will print progress, speed and
expected time to completion (ETC) on a loop. It is meant for rebakes
that take several hours or days.
It will calculate a 10m moving average over the past 6 hours, and print
ETC accordingly.
It also shows Sidekiq stats in case Sidekiq jobs for rebaking were
enqueued separately, instead of using the rebake_posts tasks in this
file (which are 100% synchronous and do not touch Sidekiq at all).
NOTE: only the currently unbaked count at task start time is considered;
this is useful in live communities with lots of traffic, where new posts
might otherwise change the goal posts continuously.
* Satisfy stree
I am changing many of these to notes or resolving them as-is;
most of these I have not actively worked on in years, so someone
else can work on them when we get to these areas again.
This commit continues work laid out by ffec8163b0 for the admin config page for the /about page. The last commit set up the user interface, and this one sets up all the wiring needed to make the input fields and save buttons actually work.
Internal topic: t/128544.
* Rescue mmdb download failure
If the mmdb database fails to download, bootstrap fails. This is a trivial fix for that problem. A more elegant solution might check whether the database was downloaded and provide a helpful error message.
* linting
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
When using the full-page search and filtering down to a specific topic, the sort order was overridden to sort by "post_number".
This was confusing because we allow different types of sort orders on the full search page.
This fixes it by only sorting by post_number when there's no "global" sort order defined.
Since the "new topic map" uses the search endpoint behind the scenes, this also fixes the "most likes" popup.
Context - https://meta.discourse.org/t/searching-order-seems-to-be-broken-when-searching-in-topic/312303
If the `enforce_second_factor_on_external_auth` setting
is disabled and a user logs in with an OAuth method,
they don't automatically get redirected to /preferences/second-factor
on login. However, they can get there manually, and once there
they cannot leave.
This commit fixes the issue and allows them to leave,
and also does some refactoring to indicate to the client
which login method was used, as a followup to
0e1102b332
Followup 96a0781bc1
When sending emails where secure uploads is enabled
and secure_uploads_allow_embed_images_in_emails is
true, we attach the images to the email, and we
do some munging with the final email so the structure
of the MIME parts looks like this:
```
multipart/mixed
  multipart/alternative
    text/plain
    text/html
  image/jpeg
  image/jpeg
  image/png
```
However, we were not specifying the `boundary` of the
`multipart/mixed` main content-type of the email, so
sometimes the email would come through appearing to
have an empty body with the entire thing attached as
one attachment, and some mail parsers considered the
entire email as the "epilogue" and/or "preamble".
This commit fixes the issue by specifying the boundary
in the content-type header per https://www.w3.org/Protocols/rfc1341/7_2_Multipart.html
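A hedged sketch using the mail gem (the surrounding variable names are assumptions):
```ruby
# After munging the parts, make the top-level boundary explicit so parsers
# don't treat the whole body as preamble/epilogue.
if message.multipart? && !message.content_type.to_s.include?("boundary=")
  message.content_type =
    "multipart/mixed; boundary=\"#{message.body.boundary}\""
end
```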
Adds a checkbox to filter untranslated text strings in the admin UI, behind a hidden and default `false` site setting `admin_allow_filter_untranslated_text`.
* DEV: Upgrade Rails to 7.1
* FIX: Remove references to `Rails.logger.chained`
`Rails.logger.chained` was provided by Logster before Rails 7.1
introduced their broadcast logger. Now all the loggers are added to
`Rails.logger.broadcasts`.
Some code in our initializers was still using `chained` instead of
`broadcasts`.
* DEV: Make parameters optional to all FakeLogger methods
* FIX: Set `override_level` on Logster loggers (#27519)
A followup to f595d599dd
* FIX: Don’t duplicate Rack response
---------
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
This ensures that the theme id is resolved as early as possible in the
request cycle. This is necessary for the custom homepage to skip
preloading the wrong data.
* DEV: Upgrade Rails to 7.1
* FIX: Remove references to `Rails.logger.chained`
`Rails.logger.chained` was provided by Logster before Rails 7.1
introduced their broadcast logger. Now all the loggers are added to
`Rails.logger.broadcasts`.
Some code in our initializers was still using `chained` instead of
`broadcasts`.
* DEV: Make parameters optional to all FakeLogger methods
* FIX: Set `override_level` on Logster loggers (#27519)
A followup to f595d599dd
* FIX: Don’t duplicate Rack response
---------
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
We recently fixed a problem where secure upload images weren't re-attached when sending the activity summary e-mail.
This fix contained a bug that would lead to n copies of the e-mail body being included, n being the number of duplicates. This is because `#fix_parts_after_attachments!` was called once per attachment, each time adding more parts to the multipart e-mail.
This PR fixes that by:
- Adding a failing test case for the above.
- Moving the looping over multiple posts into `#fix_parts_after_attachments!` itself.
When a post is cooked the links are extracted and `TopicLink` instances
are created for each of them. These links are used in various places,
including the topic view, user summary page, etc.
In the previous commit 48e5d1a, hotlinked images from Oneboxes were
ignored during link extraction, but hotlinked images turned into
lightboxes were still extracted.
Many site settings can be destructive or have huge side effects
for a site, which the admin may not be aware of when changing them.
This commit introduces a `requires_confirmation` attribute that
can be added to any site setting. When it is true, a confirmation
dialog will open if that setting is changed in the admin UI,
optionally with a custom message that is defined in client.en.yml.
If the admin does not confirm, we reset the setting to its previous
clean value and do not save the new value.
A new admin setting called `enforce_second_factor_on_external_auth`. It allows users to authenticate using external providers even when 2FA is forced with `enforce_second_factor` site setting.
* Revert "FIX: Set `override_level` on Logster loggers (#27519)"
This reverts commit c1b0488c54.
* Revert "DEV: Make parameters optional to all FakeLogger methods"
This reverts commit 3318dad7b4.
* Revert "FIX: Remove references to `Rails.logger.chained`"
This reverts commit f595d599dd.
* Revert "DEV: Upgrade Rails to 7.1"
This reverts commit 081b00391e.
`Rails.logger.chained` was provided by Logster before Rails 7.1
introduced their broadcast logger. Now all the loggers are added to
`Rails.logger.broadcasts`.
Some code in our initializers was still using `chained` instead of
`broadcasts`.
Currently when a cache entry is corrupt, we log the event without doing
anything else. It means the cache is still corrupt, and the proper value
isn’t computed again.
Normally, it’s very rare for the cache to become corrupt, but it can
happen, for example, when upgrading Rails and the cache format changes. This is
normally handled automatically by Rails but since we’re using a custom
cache class, we have to do it ourselves.
This patch takes the same approach the Rails team did: when a cache
entry is corrupt, we treat it as a miss, recomputing the proper value
and caching it in the new format.
`parseBBCodeTag` wasn't quite handling the cases where a closing bracket `]` was used in the value of one of the attributes.
```markdown
[chat quote=user channel="[broken]"]
```
It would not be parsed correctly because we would _greedily_ use the first `]` as the end of the tag, even though it might be a valid character when inside proper quotes.
c39a4de139/app/assets/javascripts/discourse-markdown-it/src/features/bbcode-block.js (L62)
Re-wrote the `parseBBCodeTag` to properly handle the following cases
- A closing tag (aka `[/name]`) which are easy since they don't have any attributes
- An old `[quote=...]` format we used that doesn't use quotes but still has various attributes of the form `key:value`
- All three valid BBCode opening tag formats we support
- `[name]` without any attributes
- `[name=foo]` with a default value
- `[name foo=bar]` with some attributes
Ended up having to fix/rewrite the few bbcode rules that were using the `parseBBCodeTag` function, namely `d-wrap` and `discourse-local-dates`.
While working on this, I think I also found a way to get rid of the shims we had in place so that plugins could use the `parseBBCodeTag` function.
Reference - https://meta.discourse.org/t/having-a-right-bracket-in-a-channel-name-breaks-all-quotes-from-that-channel/308439
This is a duplicate of the `host` field which means we are bloating the
logs unnecessarily.
Just remove it without deprecation for now, but we are open to properly
deprecating it if others depend on this field.
This commit adds ability to fetch a subset of site settings from the `/admin/site_settings` endpoint so that it can be used in all places where the client app needs access to a subset of the site settings.
Additionally, this commit also introduces a new service class called `UpdateSiteSetting` that encapsulates all the logic that surrounds updating a site setting so that it can be used to update site setting(s) anywhere in the backend. This service comes in handy with, for example, the controller for the flags admin config area which may need to update some site settings related to flags.
Internal topic: t/130713.
This commit introduces the `sidekiq_report_long_running_jobs_minutes`
global setting, which allows a site administrator to log a warning in the
Rails log when a Sidekiq job has been running for too long.
The warning is logged with the backtrace of the thread that is
processing the Sidekiq job, to make it easier to figure out what the
Sidekiq job is stuck on.
When we turn on settings automatically for customers,
we sometimes use `.set_and_log` which will make a staff
action log for the site setting change. This is fine, but
there is no context for customers.
This change allows setting a message with `.set_and_log`, which
will be stored in the `details` column of the staff action log
created, which will show up on `/admin/logs/staff_action_logs`
---------
Co-authored-by: Kelv <kelv@discourse.org>
In #26642 we introduced a change that re-attaches securely uploaded images in the digest e-mail. However, this change assumed that the type argument to the Email::Sender constructor would be a symbol, but when it is coming from the UserEmail job it is a string. This PR fixes that.
This introduces the syntax of
`category:a,b,c` which will search across multiple categories.
Previously there was no way to allow search across a wide selection of
categories.
We are seeing the following error in our logs when Sidekiq is sent a
`USR1` signal in production when logrotate happens:
```
log writing failed. stream closed in another thread
Error encountered while starting Sidekiq: can't be called from trap context\n/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/util.rb:71:in `reopen'
```
I'm not quite sure where the error is triggered from so I'm improving
the way we log errors.
We were using `autoclose` as the topic status update
when silently closing topics using the bulk
actions (introduced in 0464ddcd9b).
However, this resulted in a message like this showing in
the topic as a small moderator post:
> This topic was automatically closed after X days.
This is not accurate, the topic was bulk closed by someone.
Instead, we can use `closed` as the status, and a more accurate
> Closed on DATE
message is used. `TopicStatusUpdater` needed an additional
option to keep the same "fake read" behaviour as autoclose
so we can keep the same functionality for silently closing
topics in bulk actions.
decorator-transforms (https://github.com/ef4/decorator-transforms) is a modern replacement for babel's plugin-proposal-decorators. It provides a decorator implementation using modern browser features, without needing to enable babel's full suite of class feature transformations. This improves the developer experience and performance.
In local testing with Google's 'tachometer' tool, this reduces Discourse's 'init-to-render' time by around 3-4% (230ms -> 222ms).
It reduces our initial gzip'd JS payloads by 3.2% (2.43MB -> 2.35MB), or 7.5% (14.5MB -> 13.4MB) uncompressed.
This was previously reverted in 97847f6. This version includes a babel transformation which works around the bug in Safari <= 15.
For Cloudflare compatibility issues, check https://meta.discourse.org/t/311390
Not all HTML elements are converted into Markdown. Some are kept as HTML.
Without this fix, XML/HTML entities that are formatted as text instead of code are swallowed by Discourse.
This also fixes quotes in the `title` attribute of the `<abbr>` tag.
Previously, `HtmlToMarkdown` always converted HTML tables into Markdown tables. That led to some badly formatted Markdown tables, e.g. when the table contained `rowspan` or `colspan`. This solves the issue by using very basic HTML tables in those cases.
Inline the helper functions, avoid creating and then immediately destructuring arrays, use complete strings instead of string interpolation, and use a `Map` instead of a POJO.
This commit introduces a hidden `s3_inventory_bucket` site setting which
replaces the `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings.
The reason the `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings are removed is that this feature has technically been
broken since it was introduced. When the `enable_s3_inventory` feature
is turned on, the app will configure a daily inventory policy for the
`s3_upload_bucket` bucket and store the inventories under a prefix in
the bucket. The problem here is that once the inventories are created,
there is nothing cleaning up all these inventories, so whoever
enabled this feature would have been paying the cost of storing a whole
bunch of inventory files which are never used. Given that we have not
received any complaints about inventory files inflating S3 storage costs,
we think it is very likely that this feature is no longer being
used, and we are looking to drop support for this feature in the not too
distant future.
For now, we will still support a hidden `s3_inventory_bucket` site
setting which site administrators can configure via the
`DISCOURSE_S3_INVENTORY_BUCKET` env.
Continued work on the flag moderation UI.
In this PR, admins are allowed to change the order of flags. The "notify user" flag is always on top, but all other flags can be moved.
Previously "themes frontend" CI job would:
1. pull compatible versions of themes that happened to be in the base image
2. clone all official themes (overriding the compatible versions from 1.)
3. run tests
This commit adds the ability for site administrators to mark users'
passwords as expired. Note that this commit does not add any client side
interface to mark a user's password as expired.
The following changes are introduced in this commit:
1. Adds a `user_passwords` table and `UserPassword` model. While the
`user_passwords` table is currently used to only store expired
passwords, it will be used in the future to store a user's current
password as well.
2. Adds a `UserPasswordExpirer.expire_user_password` method which can
be used from the Rails console to mark a user's password as expired.
3. Updates `SessionsController#create` to check that the user's current
password has not been marked as expired after confirming the
password. If the password is determined to be expired based on the
existence of a `UserPassword` record with the `password_expired_at`
column set, we will not log the user in and will display a password
expired notice. A forgot-password email is automatically sent out to
the user as well.
This commit introduces the following changes:
1. Introduce the `SignalTrapLogger` singleton which starts a single
thread that polls a queue to log messages with the specified logger.
This thread is necessary because most loggers cannot be used inside
the `Signal.trap` context as they rely on mutexes which are not
allowed within the context.
2. Moves the monkey patch in `freedom_patches/unicorn_http_server_patch.rb` to
`config/unicorn.config.rb` which is already monkey patching
`Unicorn::HttpServer`.
3. `Unicorn::HttpServer` will now automatically send a `USR2` signal to
a unicorn worker 2 seconds before the worker is timed out by the
Unicorn master.
4. When a Unicorn worker receives a `USR2` signal, it will now log only
the main thread's backtraces to `Rails.logger`. Previously, it was
printing the backtraces to `STDOUT`, which most people wouldn't read.
Logging it via `Rails.logger` will make the backtraces easily
accessible via `/logs`.
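A hedged sketch of the `SignalTrapLogger` idea (simplified; the real class may differ):
```ruby
require "singleton"

class SignalTrapLogger
  include Singleton

  def initialize
    @queue = Queue.new
    # A dedicated thread does the actual logging, since loggers rely on
    # mutexes that may not be acquired inside a Signal.trap handler.
    @thread =
      Thread.new do
        loop do
          logger, message = @queue.pop
          logger.error(message)
        end
      end
  end

  # Safe to call from within Signal.trap, unlike mutex-based loggers.
  def log(logger, message)
    @queue << [logger, message]
  end
end
```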
This commit introduces the following changes which allows a site
administrator to mark `Upload` records with the `s3_file_missing`
verification status which will result in the `Upload` record being ignored when
`Discourse.store.list_missing_uploads` is run on a site where S3 uploads
are enabled and `SiteSetting.enable_s3_inventory` is set to `true`.
1. Introduce `s3_file_missing` to `Upload.verification_statuses`
2. Introduce `Upload.mark_invalid_s3_uploads_as_missing` which updates
`Upload#verification_status` of all `Upload` records from `invalid_etag` to `s3_file_missing` (a sketch follows this list).
3. Introduce `rake uploads:mark_invalid_s3_uploads_as_missing` Rake task
which allows a site administrator to change `Upload` records with
`invalid_etag` verification status to the `s3_file_missing`
verification status.
4. Update `S3Inventory` to ignore `Upload` records with the
`s3_file_missing` verification status.
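A hedged sketch of the bulk update from item 2 (column and status names are taken from the commit message):
```ruby
def self.mark_invalid_s3_uploads_as_missing
  # Flip every invalid_etag upload to s3_file_missing in one statement.
  Upload.where(
    verification_status: Upload.verification_statuses[:invalid_etag],
  ).update_all(
    verification_status: Upload.verification_statuses[:s3_file_missing],
  )
end
```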
When uploading a video, the composer will now show a thumbnail image in
the composer preview instead of just the video placeholder image.
If `enable_diffhtml_preview` is enabled the video will be rendered in
the composer preview and is playable.
This commit updates all Sidekiq signal handling event logs to go through
Unicorn's logger instead of logging to STDOUT. Going through a proper logger
means the log messages are logged in the format the logger has
configured, which gives us proper timestamps for the log messages.
Introduced back in 2022 in
e3d495850d,
our new more specific message-id format for inbound and
outbound emails has now been in use for a very long time,
we can remove the support for the old formats:
`topic/:topic_id/:post_id.:random@:host`
`topic/:topic_id@:host`
`topic/:topic_id.:random@:host`
In this PR, service objects were moved to core: https://github.com/discourse/discourse/pull/26506
However, `ServiceRunner` should be moved as well, mostly so CI can run effortlessly without loading plugins.
This commit moves the logic for crawler rate limits out of the application controller and into the request tracker middleware. The reason for this move is to apply rate limits to all crawler requests instead of just the requests that make it to the application controller. Some requests are served early from the middleware stack without reaching the Rails app for performance reasons (e.g. `AnonymousCache`) which results in crawlers getting 200 responses even though they've reached their limits and should be getting 429 responses.
Internal topic: t/128810.
This commit adds a `DISCOURSE_DUMP_BACKTRACES_ON_UNICORN_WORKER_TIMEOUT`
environment variable that will allow us to dump the backtraces of all
threads of a Unicorn worker 2 seconds before it times out. In development,
backtraces are dumped to `STDOUT`, and in production we dump them to
`unicorn.stdout.log`.
We want to dump all the backtraces to make it easier to identify the
cause of a Unicorn worker timing out.
This commit updates `S3Inventory#files` to ignore S3 inventory files
which have a `last_modified` timestamp which are not at least 2 days
older than `BackupMetadata.last_restore_date` timestamp.
This check was previously only in `Jobs::EnsureS3UploadsExistence` but
`S3Inventory` can also be used via Rake tasks so this protection needs
to be in `S3Inventory` and not in the scheduled job.
* DEV: allow reply_by_email, visit_link_to_respond strings to be modified by plugins
* DEV: separate visit_link_to_respond and reply_by_email modifiers out
This PR adds bulk actions to the user's bookmarks.
With this feature, all users are able to select multiple bookmarks and perform the actions of "delete" or "clear reminders".
This fixes `PrettyText.make_all_links_absolute` to better handle subfolder installs.
In a subfolder setup, when given the cooked version of a post, links to mentions already include the `Discourse.base_path` prefix, so adding `Discourse.base_url` was doubling the `Discourse.base_path`.
The issue was hidden by the specs, which were stubbing `Discourse.base_url` instead of relying on `Discourse.base_path`.
This fixes both the "algorithm" used in `PrettyText.make_all_links_absolute` to better handle this case, and corrects the specs to properly handle subfolder cases.
There are lots of changes in the specs due to a refactoring to use squiggly heredoc strings for easier reading and less escaping.
This commit adds a step to our tests workflow on GitHub Actions to update
the themes to use their compatible version when not running against the
`main` branch. This is to ensure that we are not running the tests for
themes against an incompatible version of Discourse.
AuthProvider#enabled_setting=, used primarily by plugins, has been deprecated since version 2.9, in favour of Authenticator#enabled?. This PR confirms we are seeing no more usage and removes the method.
Whenever one creates, updates, or deletes a post, we should keep the `topic.word_count` counter in sync.
Context - https://meta.discourse.org/t/-/308062
This commit updates `FinalDestination#get` to not forward the
`Authorization` header on redirects, since most HTTP clients I tested, like
curl and wget, do not forward it.
This also fixes a recent problem in `DiscourseIpInfo.mmdb_download`
where we would fail to download the databases when both `GlobalSetting.maxmind_account_id` and
`GlobalSetting.maxmind_license_key` have been set. The failure is due to
the bug above, where the redirected URL given by MaxMind does not accept
an `Authorization` header.
This commit optimises the database query generated by
`TopicsFilter#filter_categories` when the `-category:*` filter is used.
Previously, the method would add the `topics.category_id NOT IN
(<category ids to be excluded>)` filter to the resulting query. However,
we noticed that the performance of the query degrades as the number of
rows in the `topics` table grows and when the number of category ids to be
excluded is large.
Sample of query we ran on a large database in production to demonstrate
the improvement:
Before:
```
SELECT topics.id FROM topics WHERE topics.category_id NOT IN (83, 136, 149, 143, 153, 165, 161, 123, 155, 163, 144, 134, 69, 135, 158, 141, 151, 160, 131, 133, 89, 104, 150, 147, 132, 145, 108, 146, 122, 100, 128, 154, 95, 102, 140, 139, 88, 91, 87) ORDER BY topics.id DESC LIMIT 5;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=27795.34..27795.34 rows=1 width=4) (actual time=29.317..30.165 rows=5 loops=1)
-> Sort (cost=27795.34..27795.34 rows=1 width=4) (actual time=29.316..30.163 rows=5 loops=1)
Sort Key: id DESC
Sort Method: top-N heapsort Memory: 25kB
-> Gather (cost=1000.10..27795.33 rows=1 width=4) (actual time=0.187..26.132 rows=73478 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on topics (cost=0.10..26795.23 rows=1 width=4) (actual time=0.013..22.252 rows=24493 loops=3)
Filter: (category_id <> ALL ('{83,136,149,143,153,165,161,123,155,163,144,134,69,135,158,141,151,160,131,133,89,104,150,147,132,145,108,146,122,100,128,154,95,102,140,139,88,91,87}'::integer[]))
Rows Removed by Filter: 77276
Planning Time: 0.140 ms
Execution Time: 30.181 ms
```
After:
```
SELECT topics.id FROM topics WHERE NOT EXISTS (
SELECT 1
FROM unnest(array[83, 136, 149, 143, 153, 165, 161, 123, 155, 163, 144, 134, 69, 135, 158, 141, 151, 160, 131, 133, 89, 104, 150, 147, 132, 145, 108, 146, 122, 100, 128, 154, 95, 102, 140, 139, 88, 91, 87]) AS excluded_categories(category_id)
WHERE topics.category_id IS NULL OR excluded_categories.category_id = topics.category_id
) ORDER BY topics.id DESC LIMIT 5 ;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..13.52 rows=5 width=4) (actual time=0.028..0.110 rows=5 loops=1)
-> Nested Loop Anti Join (cost=0.42..179929.62 rows=68715 width=4) (actual time=0.027..0.109 rows=5 loops=1)
Join Filter: ((topics.category_id IS NULL) OR (excluded_categories.category_id = topics.category_id))
Rows Removed by Join Filter: 239
-> Index Scan Backward using forum_threads_pkey on topics (cost=0.42..108925.71 rows=305301 width=8) (actual time=0.012..0.062 rows=44 loops=1)
-> Function Scan on unnest excluded_categories (cost=0.00..0.39 rows=39 width=4) (actual time=0.000..0.001 rows=6 loops=44)
Planning Time: 0.126 ms
Execution Time: 0.124 ms
(8 rows)
```
This commit changes request method for "categories/search" from GET to
POST to make sure that long filters can be passed to the server. For
example, category selectors with many categories are setting the full
list of selected category IDs to ensure these are filtered out from the
list of choices. This can result in a long URL that exceeds the maximum
length.
This reverts commit 0f4520867b.
This has led to two problems:
1. An incompatibility with Cloudflare's "auto minify" feature. They've deprecated this feature because of incompatibility with modern JS syntax. But unfortunately it will remain enabled on existing properties until 2024-08-05.
2. Discourse fails to boot in Safari 15. This is strange, because Safari does support all the required features in our production JS bundles. Even more strangely, things start working as soon as you open the developer tools. That suggests the cause could be a Safari bug rather than a simple incompatibility.
Reverting while we work out a path forward on both those issues.
This commit switches `DiscourseIpInfo.mmdb_download` to use the
permalinks supplied by MaxMind to download the MaxMind databases as
specified in
https://dev.maxmind.com/geoip/updating-databases#directly-downloading-databases
which states:
```
To directly download databases, follow these steps:
1. In the "Download Links" column, click "Get Permalink(s)" for the desired database.
2. Copy the permalink(s) provided in the modal window.
3. Provide your account ID and your license key using Basic Authentication to authenticate.
```
Previously, we were downloading from `https://download.maxmind.com/app/geoip_download`, but this is not
documented anywhere in MaxMind's docs, so this URL could in theory break
in the future without warning. Therefore, we are taking a proactive
approach and downloading the databases from MaxMind in the recommended way
instead of relying on a hidden URL. The old way of downloading the
databases with only a license key will be deprecated in 3.3 and
removed in 3.4.
In 95a82d608d, we lowered the default for
`Onebox.options.max_download_kb` from 10mb to 2mb for security hardening
purposes. However, this resulted in multiple bug reports where seemingly
normal URLs stopped being oneboxed. It turns out that lowering
`Onebox.options.max_download_kb` resulted in `Onebox::Helpers::DownloadTooLarge` being raised
more often for more URLs in `Onebox::Helpers.fetch_response` which
`Onebox::Helpers.fetch_html_doc` relies on. When
`Onebox::Helpers::DownloadTooLarge` is raised in
`Onebox::Helpers.fetch_response`, we throw away whatever response body
which we have already downloaded at that point. This is not ideal
because Nokogiri can parse incomplete HTML documents and there is a
really high chance that the incomplete HTML document still contains the
information which we need for oneboxing.
Therefore, this commit updates `Onebox::Helpers.fetch_html_doc` to not
throw away the response body when the size of the response body exceeds
`Onebox.options.max_download_size`. Instead, we just take whatever
response we have and get Nokogiri to parse it.
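A generic, hedged sketch of the idea (not Onebox's actual code): stream the body up to the cap, keep what arrived, and let Nokogiri parse the truncated document.
```ruby
require "net/http"
require "nokogiri"

def fetch_html_doc(uri, max_bytes: 2 * 1024 * 1024)
  body = +""
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(Net::HTTP::Get.new(uri)) do |response|
      response.read_body do |chunk|
        body << chunk
        break if body.bytesize > max_bytes # stop downloading, keep the partial body
      end
    end
  end
  Nokogiri::HTML(body) # Nokogiri copes fine with incomplete HTML
end

fetch_html_doc(URI("https://example.com"))
```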
This can happen for various reasons including rate limiting and middleware bugs. This should resolve the warning we're seeing in the logs
```
RequestTracker.get_data failed : NoMethodError : undefined method `[]' for nil:NilClass
```
decorator-transforms (https://github.com/ef4/decorator-transforms) is a modern replacement for babel's plugin-proposal-decorators. It provides a decorator implementation using modern browser features, without needing to enable babel's full suite of class feature transformations. This improves the developer experience and performance.
In local testing with Google's 'tachometer' tool, this reduces Discourse's 'init-to-render' time by around 3-4% (230ms -> 222ms).
It reduces our initial gzip'd JS payloads by 3.2% (2.43MB -> 2.35MB), or 7.5% (14.5MB -> 13.4MB) uncompressed.
This keeps coming up in user testing as something
we want to get rid of. The `navigation_menu` setting
has been set to sidebar by default for some time now,
and we are rolling out admin sidebar widely. It just
doesn't make sense to let people turn this off in
the first step of the wizard -- we _want_ people to
use the sidebar.
Whenever a post has already failed "lightweight" validations, we skip all the expensive validations (those that cook the post or run SQL queries) so that we reply as soon as possible.
Also skip validating polls when there's no "[/poll]" in the raw.
Internal ref - t/115890
When running `rake uploads:regenerate_missing_optimized`,
a `Discourse::InvalidAccess` will be raised if an SVG
file is being processed as `OptimizedImage.prepend_decoder!`
doesn't support the svg extension. This commit simply copies
the original SVG file as the thumbnail, just like currently
`OptimizedImage.create_for` does.
Ignored columns can only be dropped once the associated post-deploy
migration has been promoted to a regular migration. This is because
Discourse doesn't rely on a schema file to set up a brand-new
database, and thus the column information will be loaded by the
application before the post-deploy migration runs.
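For illustration, the constraint looks like this (`some_old_column` is hypothetical):
```ruby
class Topic < ActiveRecord::Base
  # Must stay listed here until the post-deploy migration dropping the
  # column has been promoted to a regular migration.
  self.ignored_columns = %w[some_old_column]
end
```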
- Use 'cheap-source-map' webpack config on low-memory machines
This results in worse quality sourcemaps in browser dev tools, but it significantly reduces memory use in our webpack build. In approximate local testing it drops from 1100mb to 590mb. This should make the rebuild process on low-memory machines much faster and less likely to trigger OOM errors.
In development, and on higher-memory machines, the higher-quality 'source-map' option is maintained.
- Disable Webpack's built-in `minimize` feature. Embroider already applies Terser after the webpack build is complete. There is no need to double-minimize the output.
- Update ember-cli-progress-ci to print to stderr instead of stdout. For some reason, pups (used by discourse_docker) buffers the stdout of commands and only prints when they are finished. stderr does not have this same limitation, so switching will mean sysadmins can see the progress of the ember build in real-time.
Given the number of variables it's hard to promise exact numbers. But, in my tests on a DO droplet with 1GB RAM (+2GB swap), this reduced the `ember build` portion of a `./launcher rebuild app` from ~50 minutes to ~15 minutes.
This commit adds an `isValidUrl` helper function to the context in
which theme migrations are run. This helper function makes it
easier for theme developers to check if a string is a valid URL or path
when writing theme migrations. This can be helpful in cases when
migrating a string based setting to `type: objects` which contain `type:
string` properties with URL validations enabled.
This commit also introduces the `UrlHelper.is_valid_url?` method
which actually checks that the URL string is of the valid format instead of
only checking if the URL string is parseable which is what `UrlHelper.relaxed_parse` does
and is not sufficient for our needs.
This is a follow-up to 5fcb7c262d.
It was missing the case where secure uploads is enabled, which creates a copy of the upload no matter what.
So this checks for the original_sha1 of the uploads as well when checking for duplicates.
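A hedged sketch of the broadened duplicate check:
```ruby
# Secure uploads create a copy whose original_sha1 points back at the
# source file's sha1, so match on either column.
def find_existing_upload(sha1)
  Upload.where("sha1 = :sha1 OR original_sha1 = :sha1", sha1: sha1).first
end
```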
This commit introduces a few changes as a result of
customer issues with finding why a topic was relisted.
In one case, if a user edited the OP of a topic that was
unlisted and hidden because of too many flags, the topic
would get relisted by directly changing topic.visible,
instead of going via TopicStatusUpdater.
To improve tracking we:
* Introduce a visibility_reason_id to topic which functions
in a similar way to hidden_reason_id on post, this column is
set from the various places we change topic visibility
* Fix Post#unhide! which was directly modifying topic.visible;
instead we use TopicStatusUpdater, which sets visibility_reason_id
and also makes a small action post (see the sketch after this list)
* Show the reason topic visibility changed when hovering the
unlisted icon in topic status on topic titles
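A hypothetical sketch of the relisting path (the reason name and the exact keyword argument are assumptions rather than the actual API):
```
TopicStatusUpdater.new(topic, Discourse.system_user).update!(
  "visible",
  true,
  visibility_reason_id: Topic.visibility_reasons[:op_unhidden], # assumed name
)
```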
This commit fixes a bug in theme settings migrations where values of `objects` typed theme settings aren't passed to migrations even when there are overriding values for those settings. What causes this bug is that, when creating the hash that contains all the overridden settings and will be passed to migrations, the values of `objects` typed settings are incorrectly retrieved from the `value` column (which is always nil for `objects` type) instead of `json_value`. `objects` settings are different from all other types in that they store their values in the `json_value` column and they need to be special-cased when retrieving their values.
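A minimal sketch of the fix (the method and attribute names are assumed):
```
# objects-typed settings keep their values in json_value; every other
# type uses value, which is always nil for objects.
def overridden_value_for(setting)
  setting.type_name == :objects ? setting.json_value : setting.value
end
```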
Our 'page_view_crawler' / 'page_view_anon' metrics are based purely on the User Agent sent by clients. This means that 'badly behaved' bots which are imitating real user agents are counted towards 'anon' page views.
This commit introduces a new method of tracking visitors. When an initial HTML request is made, we assume it is a 'non-browser' request (i.e. a bot). Then, once the JS application has booted, we notify the server to count it as a 'browser' request. This reliance on a JavaScript-capable browser matches up more closely to dedicated analytics systems like Google Analytics.
Existing data collection and graphs are unchanged. Data collected via the new technique is available in a new 'experimental' report.
This commit fixes a bug in the `themes:update` rake task which resulted
in the ActiveRecord transaction not being rolled back when an error was
encountered. The transaction was first introduced in
7f0682f4f2 which changed a `begin..rescue`
block to `transaction do..rescue`. The problem with that change
prevented the transaction from ever rolling back as the code block
looks something like this:
```
transaction do
begin
update_theme
rescue => e
# surpress error
end
end
```
From the transaction's point of view now, it will never rollback even if
an error was encountered when updating the remote theme because it will
never see the error.
Instead we should have done something like this if we wanted to surpress
the errors encountered while still ensuring that the transaction is
rolled back.
```
begin
transaction do
update_theme
end
rescue => e
# surpress error
end
```
This is essential for us to determine which site is encountering an
error while updating remote themes. We are also including the theme's id
because themes can have the same name.
This service-worker caching functionality was disabled by default in 1c58395bca, and the setting to re-enable was marked as experimental. Now we are dropping all the related logic.
- Run the CSP-nonce-related middlewares on the generated response
- Fix the readonly mode checking to avoid empty strings being passed (the `check_readonly_mode` before_action will not execute in the case of these re-dispatched exceptions)
- Move the BlockRequestsMiddleware cookie-setting to the middleware, so that it is included even for unusual HTML responses like these exceptions
LinkedIn has grandfathered its old OAuth2 provider. This can only be used by existing apps. New apps have to use the new OIDC provider.
This PR adds a linkedin_oidc provider to core. This will exist alongside the discourse-linkedin-auth plugin, which will be kept for those still using the deprecated provider.
For e-mails, secure uploads redacts all secure images, and later uses the access control post to re-attach allowed ones. We pass the ID of this post through the X-Discourse-Post-Id header. As the name suggests, this assumes there's only ever one access control post. This is not true for activity summary e-mails, as they summarize across posts.
This adds a new header, X-Discourse-Post-Ids, which is used the same way as the old header, but also works for the case where an e-mail is associated with multiple posts.
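A minimal sketch of the receiving side (helper shape assumed): prefer the plural header and fall back to the legacy singular one.
```
def access_control_post_ids(headers)
  raw = headers["X-Discourse-Post-Ids"] || headers["X-Discourse-Post-Id"]
  raw.to_s.split(",").map(&:to_i)
end
```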
Automatically add `moderators` and `admins` auto groups to specific site settings.
In the new group-based permissions system, we just want to check the user’s groups, since that more accurately reflects reality.
Affected settings:
- tag_topic_allowed_groups
- create_tag_allowed_groups
- send_email_messages_allowed_groups
- personal_message_enabled_groups
- here_mention_allowed_groups
- approve_unless_allowed_groups
- approve_new_topics_unless_allowed_groups
- skip_review_media_groups
- email_in_allowed_groups
- create_topic_allowed_groups
- edit_wiki_post_allowed_groups
- edit_post_allowed_groups
- self_wiki_allowed_groups
- flag_post_allowed_groups
- post_links_allowed_groups
- embedded_media_post_allowed_groups
- profile_background_allowed_groups
- user_card_background_allowed_groups
- invite_allowed_groups
- ignore_allowed_groups
- user_api_key_allowed_groups
This enables `:array` type attributes for `Contract`
attributes in services; a follow-up to the move
of services from chat to core here:
cab178a405
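For illustration, a contract using the new type might look like this (the service name, attribute, and exact DSL shape are assumptions):
```
class ShelveTopics
  include Service::Base

  contract do
    attribute :topic_ids, :array # the newly supported attribute type
    validates :topic_ids, presence: true
  end
end
```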
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
This SQL tries to insert as much data as possible into the `user_stats` table, either by calculating stats or by approximating them based on existing data. It also fixes an error in the calculation of `reply_count`, which mistakenly contained all posts, not just replies.
This change also disables some steps in the `import:ensure_consistency` rake task by setting the `SKIP_USER_STATS` env variable. Otherwise, the rake task will overwrite the calculated data in the `user_stats` table with inaccurate data. I'm not changing or removing the logic from the rake task yet because other bulk import scripts seem to depend on it.
This modifier allows plugins to alter the outcome of
`should_secure_uploads?` on a Post record, for cases when
plugins need post-attached uploads to always be secure (or
not secure) in specific scenarios.
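For illustration, a plugin might use the modifier roughly like this (the modifier name here is an assumption based on this description, not verified):
```
# In a plugin's plugin.rb:
register_modifier(:post_should_secure_uploads) do |should_secure, post|
  # e.g. force secure uploads for posts in read-restricted categories
  next true if post.topic&.category&.read_restricted?
  should_secure
end
```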
This commit changes enum typed theme objects properties to be optional.
Previously, an enum typed property was always required, but we have found
that this might not be ideal, so we are changing it.
This method name is a bit confusing; with_secure_uploads implies
that it yields or returns the uploads of the post,
and has_secure_uploads implies that it's checking whether the post
is linked to any secure uploads.
should_secure_uploads? communicates the true intent of this method --
which is to say whether uploads attached to this post should be
secure or not.
This commit changes `DiscourseIpInfo.mmdb_download` to use `File.join`
instead of `URI.join` when `GlobalSetting.maxmind_mirror_url` has been
configured. This is necessary because `URI.join` does not work the way I
expected it to when I implemented it previously.
`URI.join("http://www.example.com/mirror", "test.tar.gz")` results in
`http://www.example.com/test.tar.gz` instead of our expected
`http://www.example.com/mirror/test.tar.gz`. For our simple use case
here, `File.join` is sufficient.
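The difference is easy to demonstrate:
```
require "uri"

URI.join("http://www.example.com/mirror", "test.tar.gz").to_s
# => "http://www.example.com/test.tar.gz" ("mirror" has no trailing
#    slash, so relative resolution treats it as a document and replaces it)

File.join("http://www.example.com/mirror", "test.tar.gz")
# => "http://www.example.com/mirror/test.tar.gz"
```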
We will be collecting the logo URL and the site's default locale values along with the existing basic details to display the site on the Discourse Discover listing page. They will be included only if the site has opted in by enabling the "`include_in_discourse_discover`" site setting.
Also, we are no longer going to use the `about.json` and `site/statistics.json` endpoints to retrieve this data. We will use only the `site/basic-info.json` endpoint.
Doesn't actually seem to be used by any of our formatters, but let's send the proper data anyway for future-proofing. Followup to ff6cb1bc05 and 8098876bfa
When a user is manually deactivated, they should not be deleted by our background job that purges inactive users.
In addition, site settings keywords should accept an array of keywords.
In this PR, all references in the UI to the word "`upgrade`" are changed to "`update`". This is to differentiate the update process in self-hosted sites from the plan "upgrade" process in hosted sites.
Follow-up to the PR: https://github.com/discourse/docker_manager/pull/208
Why this change?
This allows downloading the MaxMind databases from a mirror in cases
where downloading directly from MaxMind's API endpoint is problematic
due to API limits.
Why this change?
We currently support `GlobalSetting.refresh_maxmind_db_during_precompile_days`, which
caches the MaxMind databases on disk for the configured number of
days before downloading them again from MaxMind via the API.
This was previously added to help us avoid hitting MaxMind's API rate limit.
However, there was a bug in `copy_maxmind` when we copied the latest
downloaded database to the cache directory. In particular, `FileUtils.cp` was called with
`preserve: true`, which preserves the modified time of the file
being copied. This is problematic because downloading the database from
MaxMind on 2 April 2024 can give us a file with an mtime of 29 March
2024. If `GlobalSetting.refresh_maxmind_db_during_precompile_days` is
set to `2`, for example, the cache will never be used, since our
checks will think the file was downloaded more than 2 days ago.
What is the fix here?
While we want to preserve the owner and group of the file, we do not
want to preserve the modified time and hence we will call
`FileUtils.touch` when copying the file.
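A minimal sketch of the fix (the method signature is assumed):
```
require "fileutils"

# Keep owner and group via preserve: true, then reset the mtime so the
# cache freshness check compares against the time of the copy.
def copy_maxmind(from_path, to_path)
  FileUtils.cp(from_path, to_path, preserve: true)
  FileUtils.touch(to_path)
end
```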
Why this change?
For a schema like this:
```
schema = {
  name: "section",
  properties: {
    category_property: {
      type: "categories",
      required: true,
    },
  },
}
```
When the value of the property is set to an empty array, we were not
raising an error, which we should because the property is marked as
required.
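A minimal sketch of the missing check (names assumed):
```
# A required property must reject an empty array, not just nil.
def validate_required!(name, property, value, errors)
  if property[:required] && (value.nil? || value == [])
    errors << "#{name} must be present"
  end
end
```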
Why this change?
This is a follow-up to 86b2e3a.
Basically, we want to allow people to select more than 1 group as well.
What does this change do?
1. Change `type: group` to `type: groups` and support `min` and `max`
validations for `type: groups` (illustrated in the sketch after this list).
2. Fix the `<SchemaThemeSetting::Types::Groups>` component to support the
`min` and `max` validations and switch it to use the `<GroupChooser>` component
instead of the `<ComboBoxComponent>` component which previously only supported
selecting a single group.
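For illustration, a property definition might now look like this (the exact shape of the validations key is an assumption):
```
schema = {
  name: "section",
  properties: {
    groups_property: {
      type: "groups", # previously type: group
      required: true,
      validations: { min: 1, max: 3 }, # assumed shape
    },
  },
}
```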
Previously the problem check registry simply looked at the subclasses of ProblemCheck. This was causing some confusion in environments where eager loading is not enabled, as the registry would appear empty as a result of the classes never being referenced (and thus never loaded).
This PR changes the approach to a more explicit one. I followed other implementations (bookmarkable and hashtag autocomplete). As a bonus, this now has a neat plugin entry point as well.
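A sketch of what the explicit registration might look like (the entry point name, the check class, and the problem/no_problem helpers are assumptions):
```
# plugin.rb: register an explicit problem check
class ProblemCheck::OutdatedWidgets < ProblemCheck
  def call
    widgets_outdated? ? problem : no_problem
  end

  private

  def widgets_outdated?
    false # plugin-specific condition goes here
  end
end

register_problem_check ProblemCheck::OutdatedWidgets
```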