* FIX: Mark invites flash messages as HTML safe.
This change should be safe as all user input included in the errors is sanitized before being sent back to the client.
Context: https://meta.discourse.org/t/html-tags-are-explicit-after-latest-update/214220
* If somebody adds a new error message that includes user input and doesn't sanitize it, using html-safe suddenly becomes unsafe again. As an extra layer of protection, we make the client sanitize the error message received from the backend.
* Escape user input instead of sanitizing
For now this is still gated behind a `QUNIT_EMBER_CLI=1` environment variable, but will eventually become the default so that we can remove `run-qunit.js`.
If the SiteSetting `allowed_onebox_iframes` contains a value of `*`, it will use the values of `all_iframe_origins` during the Oneboxing process. If `all_iframe_origins` itself contains a value of `*`, `origins_to_regexes` will try to return a "catch-all" regex.
Other code assumes `origins_to_regexes` will return an array, so this change ensures the `*` case will return an array containing only the catch-all regex.
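A minimal sketch of that guarantee, assuming a simplified method body and a hypothetical `allowed_iframe_origins` helper:
```ruby
# Illustrative only -- not the actual Onebox code. `allowed_iframe_origins` is a
# hypothetical helper returning the configured origins.
def origins_to_regexes
  origins = allowed_iframe_origins
  return [/.*/] if origins.include?("*") # catch-all regex, still wrapped in an array
  origins.map { |origin| /\A#{Regexp.escape(origin)}/ }
end
```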
1. `html_doc.css('.Box.md')` always returns a truthy value (e.g. `[]`) so the second branch of the if-elsif never ran
2. `node&.css('text()')` was invalid code that would raise an error
3. Matching on h3 elements is no longer correct with the current html structure returned by GitHub
This reverts commit 2c7906999a.
The changes break some things in local development (putting JS files
into minified files, not allowing debugger, and others)
This reverts commit ea84a82f77.
This is causing problems with `/theme-qunit` on legacy, non-ember-cli production sites. Reverting while we work on a fix
This is quite complex as it means that in production we have to build
Ember CLI test files and allow them to be used by our Rails application.
There is a fair bit of glue we can remove in the future once we move to
Ember CLI completely.
The `plugin:pull_compatible_all` task is intended to take incompatible plugins and downgrade them to an earlier version. Problem is, when running the rake task in development/production environments, the plugins have already been activated. If an incompatible plugin raises an error in `plugin.rb` then the rake task will be unable to start.
This commit centralises our LOAD_PLUGINS detection, adds support for LOAD_PLUGINS=0 in dev/prod, and adds a warning to `plugin:pull_compatible_all` if it's run with plugins enabled.
This commit extends the options which can be passed to
`PrettyText.markdown` so that the Markdown-it rules and Discourse
Markdown plugins used when rendering text can be customized.
Currently, this extension is mainly used by plugins.
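A hedged sketch of what a plugin call might look like; the option names below are assumptions, not necessarily the exact keys added by this commit:
```ruby
# Hypothetical option names -- the real keys may differ.
cooked = PrettyText.markdown(
  raw_text,
  markdown_it_rules: %w[emphasis link],  # limit which markdown-it rules run
  discourse_md_plugins: %w[emoji]        # limit which Discourse Markdown plugins run
)
```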
Having to load `ip_addr` is confusing, especially when that file exists
to monkey patch Ruby's `IPAddr` class. Move it to our freedom patches
folder, which is automatically loaded on initialization.
Also:
* Remove an unused method (#fill_email)
* Replace a method that was used just once (#generate_username) with `SecureRandom.alphanumeric`
* Remove an obsolete dev puma `tmp/restart` file logic
* File.exists? is deprecated and removed in Ruby 3.2 in favor of
File.exist?
* Dir.exists? is deprecated and removed in Ruby 3.2 in favor of
Dir.exist?
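For illustration, the mechanical change is simply (paths are illustrative):
```ruby
# Before (deprecated; removed in Ruby 3.2)
File.exists?("tmp/restart")
Dir.exists?("tmp")

# After
File.exist?("tmp/restart")
Dir.exist?("tmp")
```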
The rake task deleted here was added back in Feb 2020
when bookmarks were first converted from PostAction
records; it is no longer needed. The ignored columns
were removed in ed83d7573e.
This commit adds a check that runs regularly as per
2d68e5d942 which tests the
credentials of groups with SMTP or IMAP enabled. If any issues
are found with those credentials a high priority problem is added to the
admin dashboard.
This commit also formats the admin dashboard differently if
there are high priority problems, bringing them to the top of
the list and highlighting them.
The problem will be cleared if the issue is fixed before the next
problem check, or if the group's settings are updated with a valid
credential.
I plan to use this in an upcoming discourse-reactions PR, where I want to like a post without notifying the user, so I can instead create a reaction notification.
Additionally, we decouple the a11y attributes from the icon itself, which will let us extend the widget's icon without losing them.
This allows authenticators to instruct the Auth::Result to override attributes without using the general site settings. This provides an easy migration path for auth plugins which offer their own "overrides email", "overrides username" or "overrides name" settings. With this new api, they can set `overrides_*` on the result object, and the attribute will be overridden regardless of the general site setting.
ManagedAuthenticator is updated to use this new API. Plugins which consume ManagedAuthenticator will instantly take advantage of this change.
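A hedged sketch of a plugin authenticator opting in; the surrounding class is illustrative, only the `overrides_*` assignment comes from this change:
```ruby
class MyPluginAuthenticator < Auth::ManagedAuthenticator
  def after_authenticate(auth_token, existing_account: nil)
    result = super
    result.overrides_email = true # always trust the provider's email
    result
  end
end
```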
This commit adds API documentation for the new upload
endpoints related to direct + multipart external uploads.
Also included is a rake task which watches the files in
the spec/requests/api directory and calls a script file
(spec/regenerate_swagger_docs) whenever one changes. This
script runs rake rswag:specs:swaggerize and then copies
the openapi.yml file over to the discourse_api_docs repo
directory, and hits a script there to convert the YML to
JSON so the API docs are refreshed while the server is
still running. This makes the loop of making a doc change
and seeing it in the local server much faster.
The rake task is rake autospec:swagger
This commit removes jQuery file uploader from Discourse,
completing the transition to Uppy. The image-uploader
and UploadMixin components are also removed in this commit
as they have already been replaced and are the only things
using jQuery file upload.
.-'~~~`-.
.' `.
| R I P |
| jquery |
| file |
| upload |
| |
\\| 2013-2021 |//
-----------------
Now that d5e380e5c1 has been
committed there is nothing in the codebase that uses either
resumable.js or the old backup-uploader component.
R.I.P resumable.js
This commit introduces scheduled problem checks for the admin dashboard, which are long running or otherwise cumbersome problem checks that will be run every 10 minutes rather than every time the dashboard is loaded. If these scheduled checks add a problem, the problem will remain until it is cleared or until the scheduled job runs again.
An example of a check that should be scheduled is validating credentials against an external provider.
This commit also introduces the concept of a `priority` to the problems generated by `AdminDashboardData` and the scheduled checks. This is `low` by default, and can be set to `high`, but this commit does not change any part of the UI with this information, only adds a CSS class.
I will be making a follow up PR to check group SMTP credentials.
When attempting to Onebox a page if there is no `meta property="og:description"` tag but there is a `meta name="description"` tag, Onebox should try to use that value.
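A minimal sketch of the fallback (illustrative, not Onebox's exact implementation):
```ruby
require "nokogiri"

# Prefer og:description, fall back to the plain description meta tag.
doc = Nokogiri::HTML(html)
description =
  doc.at_css('meta[property="og:description"]')&.[]("content") ||
  doc.at_css('meta[name="description"]')&.[]("content")
```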
Discourse sent only translation overrides for the current language to the client instead of sending overrides from fallback locales as well. This especially impacted en_GB -> en since most overrides would be done in English instead of English (UK).
This also adds lots of tests for previously untested code.
There's a small caveat: The client currently doesn't handle fallback locales for MessageFormat strings. That is why overrides for those strings always have a higher priority than regular translations. So, as an example, the lookup order for MessageFormat strings in German is:
1. override for de
2. override for en
3. value from de
4. value from en
As an example, the lookup order for German was:
1. override for de
2. override for en
3. value from de
4. value from en
After this change the lookup order is the same as on the client:
1. override for de
2. value from de
3. override for en
4. value from en
see /t/16381
A plugin API that allows customizing existing topic-backed static pages, like:
faq, tos, privacy (see: StaticController). The block passed to this
method has to return a SiteSetting name that contains a topic id.
```
add_topic_static_page("faq") do |controller|
current_user&.locale == "pl" ? "polish_faq_topic_id" : "faq_topic_id"
end
```
You can also add new pages in a plugin, but remember to add a route,
for example:
```
get "contact" => "static#show", id: "contact"
```
The discourse base image already contains a postgres installation, so pulling a separate postgres image is a little wasteful. Using the copy of Postgres in the discourse image saves about 20 seconds on every GitHub actions run.
This commit sets up Postgres with a few performance-improving flags, which we were already using for the `rake docker:test` task (used on our internal CI system).
If a theme name contained a double-quote, this problem could lead to invalid/unexpected HTML in the `<head>`
Note that this is not considered a security issue because themes can only be installed/named by administrators, and themes/administrators already have the ability to run arbitrary javascript.
We can fake redis transactions so that `fab!` works for redis and PG
data, but it's too slow to be used indiscriminately. Instead, you can
opt into it with the `use_redis_snapshotting` helper.
Insofar as snapshotting allows us to `fab!` more things, it provides a
speedup.
OAuth2Authenticator is considered deprecated, and isn't used in core. However, some plugins still depend on it, and this was breaking the signup of previously-staged users. There is no easy way to make an end-to-end test of this in core, but I will be adding an integration test in the SAML plugin.
This is a fix to address blurry onebox favicon images if the site you
are linking to happens to have a favicon.ico file that contains multiple
images.
This fix detects if we are trying to create an upload for a favicon.ico
file. We then convert it to a png and not a jpeg like we were doing. We
want a png because it will preserve transparency, otherwise if we
convert it to a jpeg we lose that and it looks bad on dark themed sites.
This fix also addresses the fact that .ico files can include multiple
images. The blurry images we were producing were caused by the
ImageMagick `-flatten` option when the .ico file had multiple images
which then squishes them all together. So for .ico files we are no
longer flattening them and instead we are grabbing the last image in the
.ico bundle and converting that single image to a png.
We previously used ConsolidateNotifications with a threshold of 1 to re-use an existing notification and bump it to the top instead of creating a new one. It produces some jumpiness in the user notification list, and it relies on updating the `created_at` attribute, which is a bit hacky.
As a better alternative, we're introducing a new plan that deletes all the previous versions of the notification, then creates a new one.
This commit introduces a new site setting "google_oauth2_hd_groups". If enabled, group information will be fetched from Google during authentication, and stored in the Discourse database. These 'associated groups' can be connected to a Discourse group via the "Membership" tab of the group preferences UI.
The majority of the implementation is generic, so we will be able to add support to more authentication methods in the near future.
https://meta.discourse.org/t/managing-group-membership-via-authentication/175950
Since 3b13f1146b the email threading
in mail clients has been broken, because the random suffix meant
that the References header would always be different for non-group
SMTP email notifications sent out.
This commit fixes the issue by always using the "canonical" topic
reference ID inside the References header in the format:
topic/TOPIC_ID@HOST
which was the old format. We also add the References header to
notifications sent for the first post arriving, so the threading
works for subsequent emails. The Message-ID header is still random
as per the previous change.
Currently we display pending posts in topics (both for author and staff
members) but the feature is only enabled when there’s an enabled global site
setting related to moderation.
This patch allows the same behavior for a site where there’s
nothing enabled globally but where a moderated category exists. So when
browsing a topic of a moderated category, the presence of pending posts
will be checked whereas nothing will happen in a normal category.
Currently the Message-IDs we send out for outbound email
are not unique; for a post they look like:
topic/TOPIC_ID/POST_ID@HOST
And for a topic they look like:
topic/TOPIC_ID@HOST
This commit changes the outbound Message-IDs to also have
a random suffix before the host, so the new format is
like this:
topic/TOPIC_ID/POST_ID.RANDOM_SUFFIX@HOST
Or:
topic/TOPIC_ID.RANDOM_SUFFIX@HOST
This should help with email deliverability. This change
is backwards-compatible, the old Message-ID format will
still be recognized in the mail receiver flow, so people
will still be able to reply using Message-IDs, In-Reply-To,
and References headers that have already been sent.
This commit also refactors Message-ID related logic
to a central location, and adds judicious amounts of
tests and documentation.
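Illustrative sketch of building the new format; the suffix length and helper names are assumptions:
```ruby
require "securerandom"

suffix = SecureRandom.hex(8)
host   = Discourse.current_hostname

post_message_id  = "<topic/#{post.topic_id}/#{post.id}.#{suffix}@#{host}>"
topic_message_id = "<topic/#{topic.id}.#{suffix}@#{host}>"
```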
Themes that are imported via a ZIP file do have a `remote_theme` record in the database, but the record has a blank value for the `remote_url` field, which means attempting to do an update via git will result in an error.
Under some conditions, these varied responses could lead to cache poisoning, hence the 'security' label.
Previously the Rails application would serve JSON data in place of HTML whenever Ember CLI requested an `application.html.erb`-rendered page. This commit removes that logic, and instead parses the HTML out of the standard response. This means that Rails doesn't need to customize its response for Ember CLI.
* REFACTOR: Improve support for consolidating notifications.
Before this commit, we didn't have a single way of consolidating notifications. For notifications like group summaries, we manually removed old ones before creating a new one. On the other hand, we used an after_create callback for likes and group membership requests, which caused unnecessary work, as we need to delete the record we created to replace it with a consolidated one.
We now have all the consolidation rules centralized in a single place: the consolidation planner class. Other parts of the app looking to create a consolidable notification can do so by calling Notification#consolidate_or_save!, instead of the default Notification#create! method.
Finally, we added two more rules: one for re-using existing group summaries and another for deleting duplicated dashboard problems PMs notifications when the user is tracking the moderator's inbox. Setting the threshold to one forces the planner to apply this rule every time.
I plan to add plugin support for adding custom rules in another PR to keep this one relatively small.
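A hedged sketch of the new entry point; the attribute names are illustrative, and the class-level call is an assumption based on the description above:
```ruby
# Instead of Notification.create!(...), go through the planner so the
# consolidation rules can kick in.
Notification.consolidate_or_save!(
  notification_type: Notification.types[:liked],
  user_id: post.user_id,
  topic_id: post.topic_id,
  post_number: post.post_number,
  data: { display_username: liker.username }.to_json
)
```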
* DEV: Introduces a plugin API for consolidating notifications.
This commit removes the `Notification#filter_by_consolidation_data` scope since plugins may need to define their own criteria. The Plan class now receives two blocks, one to query for an already consolidated notification, which we'll try to update, and another to query for existing ones to consolidate.
It also receives a consolidation window, which accepts an ActiveSupport::Duration object, and filters notifications created since that value.
We have a couple of site setting, `slow_down_crawler_user_agents` and `slow_down_crawler_rate`, that are meant to allow site owners to signal to specific crawlers that they're crawling the site too aggressively and that they should slow down.
When a crawler is added to the `slow_down_crawler_user_agents` setting, Discourse currently adds a `Crawl-delay` directive for that crawler in `/robots.txt`. Unfortunately, many crawlers don't support the `Crawl-delay` directive in `/robots.txt` which leaves the site owners no options if a crawler is crawling the site too aggressively.
This PR replaces the `Crawl-delay` directive with proper rate limiting for crawlers added to the `slow_down_crawler_user_agents` list. On every request made by a non-logged in user, Discourse will check the User Agent string and if it contains one of the values of the `slow_down_crawler_user_agents` list, Discourse will only allow 1 request every N seconds for that User Agent (N is the value of the `slow_down_crawler_rate` setting) and the rest of requests made within the same interval will get a 429 response.
The `slow_down_crawler_user_agents` setting becomes quite dangerous with this PR since it could rate limit lots if not all of anonymous traffic if the setting is not used appropriately. So to protect against this scenario, we've added a couple of new validations to the setting when it's changed:
1) each value added to the setting must be 3 characters or longer
2) each value cannot be a substring of tokens found in popular browser User Agents. The current list of prohibited values is: apple, windows, linux, ubuntu, gecko, firefox, chrome, safari, applewebkit, webkit, mozilla, macintosh, khtml, intel, osx, os x, iphone, ipad and mac.
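A rough sketch of the request-time check described above; the helper name is an assumption, not the actual middleware code:
```ruby
def maybe_slow_down_crawler!(request, current_user)
  crawlers = SiteSetting.slow_down_crawler_user_agents.downcase.split("|")
  ua = request.user_agent.to_s.downcase
  return unless current_user.nil? && crawlers.any? { |crawler| ua.include?(crawler) }

  # RateLimiter#performed! raises when the limit is exceeded, which Discourse
  # renders as a 429 response.
  RateLimiter.new(nil, "slow-down-crawler", 1, SiteSetting.slow_down_crawler_rate).performed!
end
```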
Currently when a user creates posts that are moderated (for whatever
reason), a popup is displayed saying the post needs approval and the
total number of the user’s pending posts. But then this piece of
information is kind of lost and there is nowhere for the user to know
what their pending posts are or how many there are.
This patch solves this issue by adding a new “Pending” section to the
user’s activity page when there are some pending posts to display. When
there are none, then the “Pending” section isn’t displayed at all.
* FEATURE: Optionally send a 'noindex' header in non-canonical responses
This will be used in an SEO experiment.
Co-authored-by: David Taylor <david@taylorhq.com>
The error handling of the theme:update Rake task has been improved. If
an error occurs while updating the default site, then the exception will
be propagated and the process will exit with non-zero status.
This is a follow-up to commit 3f97f884fe.
This commit adds token_hash and scope columns to the email_tokens table.
token_hash is a replacement for the token column to avoid storing email
tokens in plaintext as it can pose a security risk. The new scope column
ensures that email tokens cannot be used to perform a different action
than the one intended.
To sum up, this commit:
* Adds token_hash and scope to email_tokens
* Reuses code that schedules critical_user_email
* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods
* Periodically cleans old, unconfirmed or expired email tokens
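A minimal sketch of the hashing idea; column usage is illustrative, and the real hashing and lookup live inside EmailToken:
```ruby
require "digest"
require "securerandom"

token      = SecureRandom.hex(16)
token_hash = Digest::SHA256.hexdigest(token)

# Only the hash is persisted; the plaintext token goes out in the email link.
EmailToken.create!(user: user, token_hash: token_hash, scope: "password_reset")

# Confirming later: hash the incoming token and look it up.
EmailToken.find_by(token_hash: Digest::SHA256.hexdigest(incoming_token))
```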
This applies only when a single site exists. If a theme update fails
when there are multiple sites, then it will continue updating the
remaining themes.
Use @here to mention all users that were allowed to the topic directly or
through a group and who liked or read the topic. Only the first 10 users
will be notified.
Allow the current user to keep existing tags when adding or removing a tag.
For example, a user could not remove a tag from a topic if the topic
had another tag that was restricted to a different category.
The code that checked this permission was duplicated every time a new
setting of this type was added. This commit changes the behavior of
some functionality because some feature checks were bypassed for staff
members.
* DEV: Swap out optipng with oxipng
The oxipng binary has been added to our base docker image here:
244c9cb110
oxipng is a rust replacement for optipng that provides increased
performance and multi-threading. Check out
https://github.com/shssoichiro/oxipng for more info.
* add instructions for installing oxipng
Similar to site settings, adds support for `refresh` option to theme settings.
```yaml
super_feature_enabled:
type: bool
default: false
refresh: true
```
When rendering the markdown code blocks we replace the
offending characters in the output string with spans highlighting a textual
representation of the character, along with a title attribute with
information about why the character was highlighted.
The list of characters stripped by this fix, which are the bidirectional
characters considered relevant, are:
U+202A
U+202B
U+202C
U+202D
U+202E
U+2066
U+2067
U+2068
U+2069
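A minimal sketch of the substitution over the rendered code block HTML; the markup and title text are illustrative, not the exact output:
```ruby
BIDI_CHARS = %W[\u202A \u202B \u202C \u202D \u202E \u2066 \u2067 \u2068 \u2069]

def highlight_bidi_chars(code_html)
  BIDI_CHARS.reduce(code_html) do |html, char|
    label = "U+#{char.ord.to_s(16).upcase}"
    html.gsub(
      char,
      "<span class=\"bidi-warning\" title=\"This character can change the direction in which text is rendered (#{label})\">#{label}</span>"
    )
  end
end
```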
This commit refactors the direct external upload routes (get presigned
put, complete external, create/abort/complete multipart) into a
helper which is then included in both BackupController and the
UploadController. This is done so UploadController doesn't need
strange backup logic added to it, and so each controller implementing
this helper can do their own validation/error handling nicely.
This is a follow up to e4350bb966
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or higher than the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
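A hedged sketch of the key-selection logic; the real code lives in Discourse's request-tracking middleware and is more involved:
```ruby
def rate_limit_key(request, user)
  min_trust_level = GlobalSetting.skip_per_ip_rate_limit_trust_level # defaults to 1
  if user && user.trust_level >= min_trust_level
    "user/#{user.id}"   # per-user bucket
  else
    "ip/#{request.ip}"  # per-IP bucket
  end
end
```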
Before this commit, Discourse's auth cookie (`_t`) was simply a 32 characters
string that Discourse used to lookup the current user from the database and the
cookie contained no additional information about the user. However, we had to
change the cookie content in this commit so we could identify the user from the
cookie without making a database query before the rate limits logic and avoid
introducing a bottleneck on busy sites.
Besides the 32 characters auth token, the cookie now includes the user id,
trust level and the cookie's generation date, and we encrypt/sign the cookie to
prevent tampering.
Internal ticket number: t54739.
* Revert "DEV: increase lock timeout for multisite migration (#14831)"
This partially reverts commit 337ef60303.
We need to revert the mutex around `db:status:json` because the mutex is not available unless the rails environment is loaded which the `db:status:json` doesn't load before the mutex. We can't load the environment before entering the mutex because the mutex is meant to prevent other instances of the task from loading a rails environment while the database is migrating.
Co-authored-by: David Taylor <david@taylorhq.com>
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.
To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:
* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts
Then, S3Store and S3BackupStore either delegate directly to S3Helper or have their own special methods to call S3Helper for these methods. FileStore.temporary_upload_path has also removed its dependence on upload_path, and can now be used interchangeably between the stores. A similar change was made in the frontend as well, moving the multipart related JS code out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.
Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.
This changeset is not perfect; it introduces some special cases in UploadController to handle backups that were previously in BackupController, because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that may need them) can include these routes in a nicer way.
PERF: Update like count in visible posts without an extra GET per like
Currently when a user is reading a topic and some post in it receive a
like from another user, the Ember app will be notified via MessageBus
and issue a GET to `/posts/{id}` to get the new like count. This worked
fine for us until today, but it can easily create a self-inflicted DDoS
when a topic with a large number of visitors gets a large number of
likes, since we will issue `visitors * likes` GET requests.
This patch optimizes this flow, by sending the new like count down in
the MessageBus notification, removing any need for the extra request.
It shouldn't cause any drift on the count because we send down the full
count rather than just the difference.
Possible follow-ups could include handling like removal.
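A hedged sketch of the publish side; the channel and payload fields are illustrative:
```ruby
MessageBus.publish(
  "/topic/#{post.topic_id}",
  { type: :liked, id: post.id, likes_count: post.like_count },
  group_ids: post.topic.secure_group_ids # hypothetical visibility restriction
)
```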
We are only using list_multipart_parts right now in the
uploads controller for multipart uploads to check if the
upload exists; thus we don't need up to 1000 parts.
Also adding a note for future explorers that list_multipart_parts
only gets 1000 parts max, and adding params for max parts
and starting parts.
Support for invites alongside DiscourseConnect was added in 355d51af. This commit fixes the guardian method so that the bulk invite button functionality also works.
* FIX: Preserve field types when updating revision
When a post was edited quickly twice by the same user, the old post
revision was updated with the newest changes. To check if the change
was reverted (i.e. rename topic A to B and then back to A) a comparison
of the initial value and last value is performed. If the check passes
then the intermediary value is dismissed and only the initial and final
values are preserved. Otherwise, the modification is dismissed
because the field returned to its initial value.
This used to work well for most fields, but failed for "tags" because
the field is an array and the values were transformed to strings to
perform the comparison.
* FIX: Reset last_editor_id if revision is reverted
If a post was revised and then the same revision was reverted,
last_editor_id was still set to the ID of the user who last edited the
post. This was a problem because the same person could then edit the
same post again and because it was the same user and same post, the
system attempted to update the last one (that did not exist anymore).
This commit introduces a new s3:ensure_cors_rules rake task
that is run as a prerequisite to s3:upload_assets. This rake
task calls out to the S3CorsRulesets class to ensure that
the 3 relevant sets of CORS rules are applied, depending on
site settings:
* assets
* direct S3 backups
* direct S3 uploads
This works for both Global S3 settings and Database S3 settings
(the latter set directly via SiteSetting).
As it is, only one rule can be applied, which is generally
the assets rule as it is called first. This commit changes
the ensure_cors! method to be able to apply new rules as
well as the existing ones.
This commit also slightly changes the existing rules to cover
direct S3 uploads via uppy, especially multipart, which requires
some more headers.
FinalDestination's follow_canonical mode used for embedded topics should work when canonical URLs are relative, as specified in [RFC 6596](https://datatracker.ietf.org/doc/html/rfc6596)
We are no longer able to display the image returned by Instagram directly within a Discourse site (either in the composer, or within a cooked post within a topic), so:
- Display an image placeholder in the composer preview
- A cooked post should use an iframe to display the Instagram 'embed' content
The `白名单` term becomes `名单 白名单` after it is processed by
cppjieba in :query mode. However, `白名单` is not tokenized as such by cppjieba when it
appears in a string of text. Therefore, this may lead to failed matches as
the search data generated while indexing may not contain all of the
terms generated by :query mode. We've decided to maintain parity for now
such that both indexing and querying uses the same :mix mode. This may
lead to less accurate search but our plan is to properly support CJK
search in the future.
In preparation for adding automatic CORS rules creation
for direct S3 uploads, I am adding tests here and moving the
CORS rule definitions into a dedicated class so they are all
in the one place.
There is a problem with ensure_cors! as well -- if there is
already a CORS rule defined (presumably the asset one) then
we do nothing and do not apply the new rule. This means that
the S3BackupStore.ensure_cors method does nothing right now
if the assets rule is already defined, and it will mean the
same for any direct S3 upload rules I add for uppy. We need
to be able to add more rules, not just one.
This is not a problem on our hosting because we define the
rules at an infra level.
If the initial `get`/`head` response within `resolve` returns a status code of `103`, attempt to fetch the same URL with the alternative `small_get` method.
* FIX: allowed_theme_ids should not be persisted in GlobalSettings
It was observed that the memoized value of `GlobalSetting.allowed_theme_ids` would be persisted across requests, which could lead to unpredictable/undesired behaviours in a multisite environment.
This change moves that logic out of GlobalSettings so that the returned theme IDs are correct for the current site.
Uses get_set_cache, which ultimately uses DistributedCache, which will take care of multisite issues for us.
* DEV: Sanitize HTML admin inputs
This PR adds on-save HTML sanitization for:
* Client site settings
* Translation overrides
* Badge descriptions
* User field descriptions
I used Rails's SafeListSanitizer, which [accepts the following HTML tags and attributes](018cf54073/lib/rails/html/sanitizer.rb (L108))
* Make sure that the sanitization logic doesn't corrupt settings with special characters
This PR doesn't change any behavior, but just removes code that wasn't in use. This is a pretty dangerous place to change, since it gets called during a user's registration. At the same time, the refactoring is very straightforward and it's clear that this code wasn't doing any work (it still needs to be double-checked during review, though). Also, the test coverage of UserNameSuggester is good.
* PERF: Remove JOIN on categories for PM search
JOIN on categories is not needed when searching in private messages as
PMs are not categorized.
* DEV: Use == for string comparison
* PERF: Optimize query for allowed topic groups
There was a query that checked for all topics a user or their groups
were allowed to see. This used UNION between topic_allowed_users and
topic_allowed_groups which was very inefficient. That was replaced with
an OR condition that checks both tables more efficiently.
By default, Rails only includes the Vary:Accept header in responses when the Accept: header is included in the request. This means that proxies/browsers may cache a response to a request with a missing Accept header, and then later serve that cached version for a request which **does** supply the Accept header. This can lead to some very unexpected behavior in browsers.
This commit adds the Vary:Accept header for all requests, even if the Accept header is not present in the request. If a format parameter (e.g. `.json` suffix) is included in the path, then the Vary header is still omitted. (The format parameter takes precedence over any Accept: header, so the response no longer varies based on the Accept header.)
Searching within a topic currently does not make use of PG search and
we're simply doing an `ilike` against the post raw. Furthermore,
`Post#post_number` is already unique within a topic so the other
ordering will never be used. This change simply makes the query
cleaner to read.
This new app event will fire whenever a bookmark is created,
edited, or deleted for a post or topic, and replaces these old
app events which had inconsistent APIs:
* page:bookmark-post-toggled
* topic:bookmark-toggled
When the event is triggered, the arguments are in this order:
1. bookmark - The bookmark record created or changed. Will be null
if the bookmark was deleted.
2. target - Object with target (post or topic) and targetId (post ID
or topic ID)
User API keys (not the same thing as admin API keys) are currently
leaked to redis when rate limits are applied to them since redis is the
backend for rate limits in Discourse and the API keys are included in
the redis keys that are used to track usage of user API keys in the last
24 hours.
This commit stops the leak by using a SHA-256 representation of the user
API key instead of the key itself to form the redis key.
We don't need to manually delete the existing redis keys that contain
unhashed user API keys because they're not long-lived and will be
automatically deleted within 48 hours after this commit is deployed to
your Discourse instance.
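The essence of the change, sketched; the key prefix is illustrative:
```ruby
require "digest"

# Before: the raw user API key ends up inside the redis key.
redis_key = "user_api_daily_usage:#{user_api_key}"

# After: only a SHA-256 digest of the key is stored.
redis_key = "user_api_daily_usage:#{Digest::SHA256.hexdigest(user_api_key)}"
```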
This removes all custom controllers and redis/messagebus logic from discourse-presence, and replaces it with core's new PresenceChannel system.
All functionality should be retained. This implementation should scale much better to large numbers of users, reduce the number of HTTP requests made by clients, and reduce the volume of messages on the MessageBus.
For more information on PresenceChannel, see 31db8352
Instead of leaking ordering of the posts all around the class, we
centralize it in a method making the code easier to understand. In a
future PR, we will also introduce a plugin API to allow custom ordering
and the change in this commit helps to facilitate that.
The method was only used for mega topics but it was redundant as the
first post can be determined using the condition where
`Post#post_number` equals one.
`/u/username/invited.json?filter=expired` and `/u/username/invited.json?filter=pending` APIs are already returning data to admins. However, the `can_see_invite_details?` boolean was false, which prevented the Ember frontend from showing the tabs correctly. This commit updates the guardian method to match reality.
Sometimes administrators want to permanently delete posts and topics
from the database. To make sure that this is done for good reasons,
administrators can do this only after one minute has passed since the
post was deleted or immediately if another administrator does it.
When a tag with a lot of topics is used, we end up allocating a Ruby
array of all the topic ids. Instead, we can just use a subquery here and
handle all of the exclusion logic in PG.
Follow-up to ae13839f98
We don't want to be using emails as a source for username and name suggestions in cases where it's possible that a user has no chance to intervene and correct a suggested username. It risks exposing email addresses.
* FIX: Remove List-Post email header
This header is used for mailing lists and can confuse some email clients,
such as Thunderbird, into displaying wrong reply options.
* FIX: Replace reply_key in email custom headers
Admins can add custom email headers from site settings. Email sender
will try to replace the reply key if %{reply_key} exists or remove the
header if a reply key does not exist.
Invite is used in two contexts, when inviting a new user to the forum
and when inviting an existing user to a topic. The first case is more
complex and it involves permission checks to ensure that new users can
be created. In the second case, it is enough to ensure that the topic
is visible for both users and that all preconditions are met.
One edge case is the invite to topic via email functionality which
checks for both conditions because the user must first be invited to
create an account and then invited to the topic.
A side effect of these changes is that all site settings related to
invites now refer only to inviting new users.
We aren't translating these settings, so it makes more sense to move them into the code. I added an instance method so plugins can add mappings for custom reasons.
- Allow the `/presence/get` endpoint to return multiple channels in a single request (limited to 50)
- When multiple presence channels are initialized in a single Ember runloop, batch them into a single GET request
- Introduce the `presence-pretender` to allow easy testing of PresenceChannel-related features
- Introduce a `use_cache` boolean (default true) on the server-side PresenceChannel initializer. Useful during testing.
This commit fixes the `eval` and `evalsha` commands/methods and any other methods that don't have a wrapper in `DiscourseRedis` and expect keyword arguments. I noticed this problem in Logster when I was trying to fetch some log messages in JSON format using the rails console and saw the messages were missing the `env` field. Logster uses the `eval` command to fetch messages `env`s:
dc351fd00f/lib/logster/redis_store.rb (L250-L253)
and that code was not fetching anything because `DiscourseRedis` didn't pass the `keys` keyword arg to the redis gem.
Usually, when an email is received a user lookup is performed using the
email address found in the `From` header. When an email has an
`X-Original-From` header, if it is equal to `Reply-To` then it uses that
one instead. The comparison was sensitive to whitespaces and other
insignificant characters such as quotes because it reconstructed the
`From` header.
For the fixture added in this commit, it compared the reconstructed
`From` header `John Doe <johndoe@example.com>` with the `Reply-To`
header `"John Doe" <johndoe@example.com>`.
* DEV: Remove HTML setting type and sanitization logic.
We concluded that we don't want settings to contain HTML, so I'm removing the setting type and sanitization logic. Additionally, we no longer allow the global-notice text to contain HTML.
I searched for usages of this setting type in the `all-the-plugins` repo and found none, so I haven't added a migration for existing settings.
* Mark Global notices containing links as HTML Safe.
FinalDestination now supports the `follow_canonical` option, which will perform an initial GET request, parse the canonical link if present, and perform a HEAD request to it.
We use this mode during embeds to avoid treating URLs with different query parameters as different topics.
Previously, while retrieving upload URLs in a post, S3 CDN URLs with a different path prefix (technically external URLs) were considered uploaded URLs. This created issues when checking for missing uploads.
This commit refactors can_invite_to? to use
can_invite_to_forum? for checking the site-wide permissions and then
performs topic-specific checks.
Similarly, can_invite_to? is always used with a topic object and this is
now enforced.
There was another problem before when `must_approve_users` site setting
was not checked when inviting users to forum, but was checked when
inviting to a topic.
Another minor security issue was that group owners could invite to
group topics even if they did not have the minimum trust level to do
it.
It was possible to see notifications of other users using routes:
- notifications/responses
- notifications/likes-received
- notifications/mentions
- notifications/edits
We weren't showing anything private (like notifications about private messages), only things that are publicly available in other places. But anyway, it feels strange that it's possible to look at notifications of someone else. Additionally, there is a risk that we can unintentionally leak something on these pages in the future.
This commit restricts these routes.
The all inboxes view was introduced in
016efeadf6 but we decided to roll it back
for performance reasons. The main performance challenge here is that PG
has to basically loop through all the PMs that a user is allowed to view
before being able to order by `Topic#bumped_at`. The all inboxes was not
planned as part of the new/unread filter so we've decided not to tackle
the performance issue for the upcoming release.
Follow-up to 016efeadf6
* PERF: Improve database query perf when loading topics for a category.
Instead of left joining the `topics` table against `categories` by filtering with `categories.id`,
we can improve the query plan by filtering against `topics.category_id`
first before joining, which helps to reduce the number of rows in the
topics table that have to be joined against the other tables and also
make better use of our existing index.
The following is a before and after of the query plan for a category
with many subcategories.
Before:
```
                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.28..747.09 rows=30 width=12) (actual time=85.502..2453.727 rows=30 loops=1)
   ->  Nested Loop Left Join  (cost=1.28..566518.36 rows=22788 width=12) (actual time=85.501..2453.722 rows=30 loops=1)
         Join Filter: (category_users.category_id = topics.category_id)
         Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
         ->  Nested Loop Left Join  (cost=1.00..566001.58 rows=22866 width=20) (actual time=85.494..2453.702 rows=30 loops=1)
               Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
               Rows Removed by Filter: 1
               ->  Nested Loop  (cost=0.57..528561.75 rows=68606 width=24) (actual time=85.472..2453.562 rows=31 loops=1)
                     Join Filter: ((topics.category_id = categories.id) AND ((categories.topic_id <> topics.id) OR (categories.id = 11)))
                     Rows Removed by Join Filter: 13938306
                     ->  Index Scan using index_topics_on_bumped_at on topics  (cost=0.42..100480.05 rows=715549 width=24) (actual time=0.010..633.015 rows=464623 loops=1)
                           Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text))
                           Rows Removed by Filter: 105321
                     ->  Materialize  (cost=0.14..36.04 rows=30 width=8) (actual time=0.000..0.002 rows=30 loops=464623)
                           ->  Index Scan using categories_pkey on categories  (cost=0.14..35.89 rows=30 width=8) (actual time=0.006..0.040 rows=30 loops=1)
                                 Index Cond: (id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[]))
               ->  Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu  (cost=0.43..0.53 rows=1 width=16) (actual time=0.004..0.004 rows=0 loops=31)
                     Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
         ->  Materialize  (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=30)
               ->  Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users  (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
                     Index Cond: (user_id = 1103877)
 Planning Time: 1.359 ms
 Execution Time: 2453.765 ms
(23 rows)
```
After:
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=1.28..438.55 rows=30 width=12) (actual time=38.297..657.215 rows=30 loops=1)
-> Nested Loop Left Join (cost=1.28..195944.68 rows=13443 width=12) (actual time=38.296..657.211 rows=30 loops=1)
Filter: ((categories.topic_id <> topics.id) OR (topics.category_id = 11))
Rows Removed by Filter: 29
-> Nested Loop Left Join (cost=1.13..193462.59 rows=13443 width=16) (actual time=38.289..657.092 rows=59 loops=1)
Join Filter: (category_users.category_id = topics.category_id)
Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
-> Nested Loop Left Join (cost=0.85..193156.79 rows=13489 width=20) (actual time=38.282..657.059 rows=59 loops=1)
Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
Rows Removed by Filter: 1
-> Index Scan using index_topics_on_bumped_at on topics (cost=0.42..134521.06 rows=40470 width=24) (actual time=38.267..656.850 rows=60 loops=1)
Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text) AND (category_id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[])))
Rows Removed by Filter: 569895
-> Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu (cost=0.43..1.43 rows=1 width=16) (actual time=0.003..0.003 rows=0 loops=60)
Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
-> Materialize (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=59)
-> Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
Index Cond: (user_id = 1103877)
-> Index Scan using categories_pkey on categories (cost=0.14..0.17 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=59)
Index Cond: (id = topics.category_id)
Planning Time: 1.633 ms
Execution Time: 657.255 ms
(22 rows)
```
* PERF: Optimize index on topics bumped_at.
Replace `index_topics_on_bumped_at` index with a partial index on `Topic#bumped_at` filtered by archetype since there is already another index that covers private topics.
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit soon becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
in a format based on the maximum increment (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
Instead of going to the OP of the topic for topic-level bookmarks
(which are bookmarks where for_topic is true) when clicking on the
bookmark in the quick access menu or on the user bookmark list,
this commit takes the user to the last unread post in
the topic instead. This should be generally more useful than landing
on the unchanging OP.
To make this work nicely, I needed to add the last_read_post_number to
the BookmarkQuery based on the TopicUser association. It should not add
too much extra weight to the query, because it is limited to the user
that we are fetching bookmarks for.
Also fixed an issue where the bookmark serializer highest_post_number was
not taking into account whether the user was staff, which is when we
should use highest_staff_post_number instead.
Allows creating a bookmark with the `for_topic` flag introduced in d1d2298a4c set to true. This happens when clicking on the Bookmark button in the topic footer when no other posts are bookmarked. In a later PR, when clicking on these topic-level bookmarks the user will be taken to the last unread post in the topic, not the OP. Only the OP can have a topic level bookmark, and users can also make a post-level bookmark on the OP of the topic.
I had to do some pretty heavy refactors because most of the bookmark code in the JS topics controller was centred around instances of Post JS models, but the topic level bookmark is not centred around a post. Some refactors were just for readability as well.
Also removes some missed reminderType code from the purge in 41e19adb0d
The display name can have quotes around it, which does not work
with our current comparison of a from field (in this case Reply-To)
and another header (X-Original-From), because we are not comparing
the two values in the same way. This causes an issue where the
commit here: b88d8c8 will not
work properly; the forwarded email gets the From address instead
of the Reply-To address as intended.
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a)
This commit marks the column as readonly, ignores it, and removes
the index, and it will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
This partially reverts commit ddb458343d.
Seeing performance degrade on larger sites so back to drawing board on
this one. Instead of the DISTINCT LEFT JOIN, we switch back to
IN(subquery).
First reported in https://meta.discourse.org/t/-/202482/19
There are two optimizations being applied here:
1. Fetch a user's group ids in a separate query instead of including it
as a sub-query. When I tried a subquery, the query plan becomes very
inefficient.
2. Join against the `topic_allowed_users` and `topic_allowed_groups`
table instead of doing an IN against a subquery where we UNION the
`topic_id`s from the two tables. From my profiling, this enables PG to
do a backwards index scan on the `index_topics_on_timestamps_private`
index.
This commit fixes a bug where listing all messages was incorrectly
excluding topics if a topic has been archived by a group even if the
user did not belong to the group.
This commit also fixes another bug where dismissing private messages
selectively was subjected to the default limit of 30.
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column and it's better to rely on the
post.topic_id as the canonical topic_id for bookmarks, then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
When copying an existing upload stub temporary object
on S3 to its final destination we were not copying across
its additional headers such as content-disposition and
cache-control, which led to issues like attachments not
downloading with their original filename when clicking
the download links in posts.
This is because the metadata_directive = REPLACE option
was not being passed to object.copy_from(), so only the
source object's headers were being used. Added an option
for apply_metadata_to_destination to apply this option
conditionally, because we may not always want to replace
this metadata, but we definitely do when copying a temporary
upload.
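For reference, a hedged sketch of the relevant aws-sdk-s3 call with the directive applied; the header values are illustrative:
```ruby
destination_object.copy_from(
  source_object,
  metadata_directive: "REPLACE", # use the headers given here, not the source object's
  content_disposition: "attachment; filename=\"original-name.pdf\"",
  cache_control: "private, max-age=31536000"
)
```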
When a user archives a personal message, they are redirected back to the
inbox and will refresh the list of the topics for the given filter.
Publishing an event to the user results in an incorrect incoming message
because the list of topics has already been refreshed.
This does mean that if a user has two tabs opened, the non-active tab
will not receive the incoming message but at this point we do not think
the technical trade-offs are worth it to support this feature. We
basically have to somehow exclude a client from an incoming message
which is not easy to do.
Follow-up to fc1fd1b416
In order to include the new/unread count in the browse more message
under suggested topics, a couple of technical changes have to be made.
1. `PrivateMessageTopicTrackingState` is now auto-injected which is
similar to how it is done for `TopicTrackingState`. This is done so
we don't have to attempt to pass the `PrivateMessageTopicTrackingState`
object multiple levels down into the suggested-topics component. While
the object is auto-injected, we only fetch the initial state and start
tracking when the relevant private messages routes have been hit and only
when a private message's suggested topics are loaded. This is
done as we do not want to add the extra overhead of fetching the initial
state to all page loads but instead wait till the private messages
routes are hit.
2. Previously, we would stop tracking once the `user-private-messages`
route has been deactivated. However, that is not ideal since
navigating out of the route and back means we send an API call to the
server each time. Since `PrivateMessageTopicTrackingState` is kept in
sync cheaply via messageBus, we can just continue to track the state
even if the user has navigated away from the relevant stages.
When forwarding emails into the group inbox, we now use the
original sender email as the from_address since
2ac9fd9dff. However, we have not
been saving the original CC addresses of the forwarded email,
which are needed to include those recipients in on the conversation
when replying via the group inbox.
This commit captures the CC addresses on the incoming email, and
makes sure the emails are created as staged users and added to the
list of topic allowed users so they are included on CC's sent by
the GroupSmtpEmail and other jobs.
When 2ac9fd9dff was done, this
affected the small post that is created when forwarding an email
into the group inbox. Instead of using the name and the email of
the user who forwarded the email, it used the original from email
and name to create the small post. So instead of something like
"Discourse Team forwarded the above email" we ended up with
"John Smith forwarded the above email" which is incorrect.
This fixes the issue by creating a staged user for the forwarding
email address (if such a user does not yet exist) and uses that
for the "forwarded" small post instead.
Other locale characters in file names (e.g. é, ä) as well
as special characters can cause issues on S3, notably the S3
copy object operation does not support these special characters.
Instead of storing the original file name in the key, which is
unnecessary, we now generate a random file name with the original
extension for the temporary file and use that for all external
upload stub operations.
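A minimal sketch of the new key generation, assuming the `temp/` prefix mentioned elsewhere in this section:
```ruby
require "securerandom"

extension = File.extname(original_filename)             # e.g. ".png" from "schéma.png"
temp_key  = "temp/#{SecureRandom.hex(16)}#{extension}"   # random basename, original extension kept
```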
The seed-fu gem resets the sequence on all the tables it touches. In some situations, this can cause primary keys to be re-used. This commit introduces a freedom patch which ensures seed-fu only touches the sequence when it is less than the id of one of the seeded records.
PresenceChannel aims to be a generic system for allow the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is a boolean thing - each user is either present, or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` properties can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use.
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";
export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}}
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.
However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.
Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
This bug was introduced by f66007ec83.
In PostJobsEnqueuer we previously did not fire the after_post_create
event and after_topic_create event for private message topics. This was
changed in the above commit in order to publish message bus messages
for topic tracking state updates. Unfortunately this caused the
NotifyMailingListSubscribers job to be enqueued for all posts including
private messages, and admins and the users involved in the PMs got
emailed the contents of the PMs if they had mailing list mode enabled.
Luckily the impact of this was mitigated by a Guardian#can_see? check
for each mailing list mode user in the NotifyMailingListSubscribers job.
We never want to notify mailing list mode subscribers for private messages
so an early return has been added there, plus the logic in PostJobsEnqueuer
has been fixed, and tests have been added to that class where there were
none before.
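The early return is conceptually as simple as the following sketch (simplified; the real job performs additional checks):
```ruby
# Simplified sketch of the guard added to the job.
class Jobs::NotifyMailingListSubscribers < ::Jobs::Base
  def execute(args)
    post = Post.find_by(id: args[:post_id])
    return if post.blank? || post.topic.blank?
    # Never notify mailing list mode subscribers about private messages.
    return if post.topic.private_message?
    # ... enqueue mailing list emails for eligible users ...
  end
end
```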
This is unnecessary, as when the temporary key is created
in S3Store we already include the s3_bucket_folder_path, and
the key will always start with temp/ to assist with lifecycle
rules for multipart uploads.
This was affecting Discourse.store.object_from_path,
Discourse.store.signed_url_for_path, and possibly others.
See also: e0102a5
This can be used to change the list of topic posters. For example,
discourse-solved can use this to move the user who posted the solution
after the original poster.
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
Previously we had temp/ in the middle of the S3 key path like so
* /uploads/default/temp/randomstring/test.png (normal site)
* /sitename/uploads/default/temp/randomstring/test.png (s3 folder path site)
* /standard10/uploads/sitename/temp/randomstring/test.png (multisite site)
However this necessitates making a lifecycle rule to clean up incomplete
S3 multipart uploads for every site, something which we cannot do. It makes
much more sense to have a structure with /temp at the start of the key,
which is what this commit does:
* /temp/uploads/default/randomstring/test.png (normal site)
* /temp/sitename/uploads/default/randomstring/test.png (s3 folder path site)
* /temp/standard10/uploads/sitename/randomstring/test.png (multisite site)
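A rough sketch of building such a key (helper and variable names are illustrative, not the actual S3Store code):
```ruby
require "securerandom"

# "temp" goes first, then any bucket folder path or multisite prefix, then the
# upload path and a random segment.
def temporary_key(filename, upload_path:, bucket_folder_path: nil)
  parts = ["temp"]
  parts << bucket_folder_path if bucket_folder_path
  parts << upload_path # e.g. "uploads/default"
  parts << SecureRandom.hex(16)
  parts << filename
  parts.join("/")
end

temporary_key("test.png", upload_path: "uploads/default")
# => "temp/uploads/default/<randomstring>/test.png"
```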
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload in S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` endpoint is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or, in the case of multipart uploads, uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
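The size check could look roughly like this sketch; `ExternalUploadStub#filesize` comes from the column above, while the error class and ban helper are hypothetical:
```ruby
SizeMismatchError = Class.new(StandardError)

# Compare the filesize recorded when the stub was created against the final
# Content-Length of the object in S3.
def verify_filesize!(stub, s3_object)
  return if stub.filesize == s3_object.content_length

  s3_object.delete
  # Hypothetical helper: temporarily block the user from further uploads.
  ban_user_from_external_uploads!(stub.created_by, duration: 5.minutes)
  raise SizeMismatchError,
        "expected #{stub.filesize} bytes, got #{s3_object.content_length} bytes"
end
```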
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
When emails were forwarded to a group inbox by the email address
of the group, for example when an email ends up in spam and must
be manually forwarded to the group+site@discoursemail.com address,
the OP of the topic ended up being the group's email address instead
of the sender who originally sent the email to the group inbox.
This commit detects that an email has been forwarded using existing
tools, and if the from address matches one of the group incoming
email addresses, then we look at the forwarded email's from address
and use that instead for the incoming email from address as well as
the staged/regular user used for the Topic.user.
This will make it much cleaner to forward emails into a group inbox,
and will prevent issues with PostAlerter where the OP is double-notified
for these emails.
Currently, pinned topics are ordered by the `bumped_at` column. This behavior is not desired because it gives admins no control over the order of pinned topics. This PR makes pinned topics ordered by the `pinned_at` column. A topic that is pinned last appears first in topic lists. If an admin wants an already pinned topic to appear first in the list of pinned topics, they'll have to unpin that topic and pin it again.
Meta topic: https://meta.discourse.org/t/how-do-i-set-the-order-of-pinned-topics/16935/23?u=osama.
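Conceptually the ordering change looks like this (simplified; the real TopicQuery logic also handles per-user unpinning and more):
```ruby
# Pinned topics first, most recently pinned at the top, then the rest by bump date.
pinned  = Topic.where.not(pinned_at: nil).order(pinned_at: :desc)
regular = Topic.where(pinned_at: nil).order(bumped_at: :desc)
topics  = pinned.to_a + regular.to_a
```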
Emails can include the marker in a different language, depending on
site and user settings. The email receiver always looked for the marker
in the default language.
Uploads can be reused between site settings. This change allows the same
upload to be exported only once and then the same file is reused. The
same applies to import.
Allow admins to configure exceptions to our Rails rate limiter.
Configuration happens in the environment variables, and works with both
IPs and CIDR blocks.
Example:
```
env:
DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS: >-
14.15.16.32/27
216.148.1.2
```
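A sketch of how such a list could be parsed and checked; the env variable name comes from the example above, while the parsing itself is illustrative:
```ruby
require "ipaddr"

# IPAddr handles both single addresses and CIDR blocks, and #include? matches
# an address against a range.
EXCEPTIONS = ENV.fetch("DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS", "")
  .split
  .map { |value| IPAddr.new(value) }

def rate_limit_exempt?(request_ip)
  ip = IPAddr.new(request_ip)
  EXCEPTIONS.any? { |range| range.include?(ip) }
end

rate_limit_exempt?("14.15.16.40") # => true with the example configuration above
```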
Sections with unreserved characters will appear URL-encoded and need to
be unescaped before use.
Wikipedia generates two different spans in this case in the same page: one
with an id where the % symbols are replaced with periods, and the other with
the decoded version of the string. For example, for /wiki/foo#A%C3%A1A it
will generate:
<span id="A.C3.A1A"></span>
<span id="AáA">AáA</span>
Unescaping the `m_url_hash_name` should work in all cases to target the
proper section span.
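For illustration, the standard library already does the unescaping we need:
```ruby
require "cgi"

CGI.unescape("A%C3%A1A") # => "AáA", matching the decoded span id above
```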
When a post is flagged with the reason of 'Something Else' a brief message can be added by the user which subsequently creates a `meta_topic` private message. The group `moderators` is automatically added to this topic.
If category group moderation is enabled, and the post belongs to a category with a reviewable group, that group should also be added to the meta_topic.
Note: This extends the `notify_moderators` logic, and will add the reviewable group to the meta_topic, regardless of the settings of that group.
In 2018, a check was added so that the TL1 welcome message is sent unless the user has already been granted the BasicBadge.
I think we should also check if the BasicBadge is even enabled. Otherwise, each time a group is assigned to a user and their trust level is recalculated, they will receive a welcome message.
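A minimal sketch of the combined check, assuming core names like `Badge::BasicUser` and `SiteSetting.enable_badges` (the surrounding method is simplified):
```ruby
# Only send the TL1 welcome message when badges are enabled, the Basic badge
# itself is enabled, and the user does not already have it.
def send_tl1_welcome_message?(user)
  return false unless SiteSetting.enable_badges

  badge = Badge.find_by(id: Badge::BasicUser)
  return false if badge.nil? || !badge.enabled?

  !UserBadge.exists?(user_id: user.id, badge_id: badge.id)
end
```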
The following example message would generate an exception:
```
Return-Path: <discourse@bar.com>
From: Foo Bar <discourse@bar.com>
To: reply+4f97315cc828096c9cb34c6f1a0d6fe8@bar.com
Date: Fri, 15 Jan 2016 00:12:43 +0100
Message-ID: <21@foo.bar.mail>
Mime-Version: 1.0
Content-Type: text/html; charset=UTF-8
</div>
```
Exception:
```
NoMethodError:
undefined method `split' for nil:NilClass
```
This reverts a part of changes introduced by https://github.com/discourse/discourse/pull/13947
In that PR I:
1. Disallowed topic feature links for TL-0 users
2. Additionally, disallowed just putting any URL in topic titles for TL-0 users
Actually, we don't need the second part. It introduced unnecessary complexity for no good reason. In fact, it tries to do the job that anti-spam plugins (like the Akismet plugin) should be doing.
This PR reverts this second change.
This adds an optional ENV variable, `EMBER_CLI_PROD_ASSETS`. If truthy,
compiling production assets will be done via Ember CLI and will replace
the assets Rails would otherwise use.
This disallows putting URLs in topic titles for TL0 users, which means that:
If a TL-0 user puts a link into the title, a topic featured link won't be generated (as if it was disabled in the site settings)
Server methods for creating and updating topics will refuse featured links when they are called by TL-0 users
TL-0 users won't be able to put any link into the topic title. For example, the title "Hey, take a look at https://my-site.com" will be rejected.
Also, it improves server behavior a bit when creating or updating featured links on topics in categories with featured links disabled. Previously, the server just silently ignored a featured link field that was passed to it; now it returns a 422 response.
We are still on a version of pretender since 2017
https://github.com/pretenderjs/pretender/releases/tag/v1.6.1
Since then many changes have been made, including adding support
for xhr.upload. Upgrading will let us write proper acceptance
tests for uppy, which uses XMLHttpRequest internally, including
xhr.upload.
Updates pretender to 3.4.7 and fake-xml-http-request to 2.1.2.
Note: There have been no breaking changes in the releases that would
affect us, mainly dropping support for old node versions.
* FIX: Update draft count when sequence is increased
Sometimes users ended up having a draft count higher than the actual
number of drafts.
* FIX: Do not update draft count twice
The call to DraftSequence.next! above already does it.
Discourse automatically sends a private message after a backup or
restore has finished. The private message used to contain the log inline
even when it was very long. A very long log can create issues because
the length of the post will be over the maximum allowed length of a
post. When that happens, Discourse will try to create an upload with
the logs. If that fails, it will trim the log and inline it.
Inlining secure images with the same name was not possible because they
were indexed by filename. If an email contained two files with the same
name, only the first image was used for both of them. The other file
was still attached to the email.
When the Reply-To header is present for incoming emails we
want to use it instead of the from address. This is usually the
case when forwarding an email via a mailing list into Discourse.
For now we are only using the Reply-To header if the email has
been forwarded via Google Groups, which is why we are checking the
X-Original-From header too. In future we may want to use the Reply-To
header in more cases.
Searching in a category looked only one level down, ignoring the site
setting max_category_nesting. The user interface did not support the
third level of categories and did not display them in the "Categorized"
input of the advanced search options.
* FEATURE: Onebox can match engines based on the content_type
`FinalDestination` now returns the `content_type` of a resolved URL.
`Oneboxer` passes this value to `Onebox` itself. Onebox engines can now specify a `matches_content_type` regex of content_types that the engine can handle, regardless of the URL.
`ImageOnebox` will match URLs with a content type of `image/png`, `jpg`, `gif`, `bmp`, `tif`, etc.
This will allow images that exist at a URL without a file type extension to be correctly rendered, assuming a valid `content_type` is returned.
When a post is created, the draft sequence is increased and then older
drafts are automatically removed by executing a raw SQL query. This skipped
the Draft model callbacks and did not update the user's draft count.
I fixed another problem related to a raw SQL query from the Draft.cleanup!
method.
We shouldn't be checking if a user is allowed to do an action in the logger. We should be checking it just before we perform the action. In fact, guardians in the logger can make things even worse in case of a security bug. Let's say we forgot to check user's permissions before performing some action, but we still have a call to the guardian in the logger. In this case, a user would perform the action anyway, and this action wouldn't even be logged!
I've checked all cases and I confirm that we're safe to delete these calls from the logger.
I've added two calls to guardians in admin/user_controller. We didn't have security bugs there, because regular users can't access admin/... routes at all. But it's good to have calls to the guardian in these methods anyway; neighboring methods have them.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
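A rough sketch of the two paths described above; `DOWNLOAD_LIMIT` and `object_from_path` are mentioned elsewhere in this document, while `copy_to_final_destination`, `download`, and `verify_checksum!` are hypothetical helpers:
```ruby
# Illustrative sketch of the decision between the "copy within S3" path and
# the "download and verify" path.
def promote_to_upload!(stub)
  object = Discourse.store.object_from_path(stub.key)

  if object.content_length > ExternalUploadManager::DOWNLOAD_LIMIT
    # Too large to download: create the Upload record and copy the temp object
    # to its final destination inside S3, then delete the temp file.
    copy_to_final_destination(stub, object)
  else
    file = download(object)
    verify_checksum!(stub, file) # browser SHA1 (S3 metadata) vs server SHA1
    UploadCreator.new(file, stub.original_filename).create_for(stub.created_by_id)
  end
end
```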
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
In badge queries for 'First Share' and 'Nice/Good/Great Share' badges,
check that the user exists.
For 'Nice+ Share' badges, also grant badges if the number of shares is
equal to the threshold count to better match the descriptions.
The cache_fullpath for the Stylesheet::Manager was the same for
every test runner in a parallel test environment, so when other
specs or other places (e.g. the stylesheets_controller_spec) ran
`rm -rf Stylesheet::Manager.cache_fullpath`, this caused errors
for other running specs that went through the
Stylesheet::Manager::Builder#compile path, causing the error
```
Errno::ENOENT:
No such file or directory @ rb_sysopen
```
Also fixed the stylesheet_controller which was interpolating Rails.root + CACHE_PATH
itself instead of just using Stylesheet::Manager.cache_fullpath
Prior to this fix, post whisperers in personal messages were revealed in
the topic's participants list even though non-staff users are unable to
see the whisper.
Using an invalid value was allowed. This commit tries to automatically
fix the color by adding a missing # symbol, or shows an error to the
user if that is not possible and the value is not a CSS color either.
Not specifying an `Accept-Language` should be equivalent to specifying an `Accept-Language` of `*`, however some webservers seem to prefer it if we are explicit about being able to handle a response of content in any language.
There was a bunch of warnings repeated over and over during spec runs:
```
/var/www/discourse/lib/tasks/release_note.rake:3: warning: already initialized constant DATE_REGEX
/var/www/discourse/lib/tasks/release_note.rake:3: warning: previous definition of DATE_REGEX was here
/var/www/discourse/lib/tasks/release_note.rake:5: warning: already initialized constant CHANGE_TYPES
/var/www/discourse/lib/tasks/release_note.rake:5: warning: previous definition of CHANGE_TYPES was here
```
When the Forever option is selected for suspending a user, the user is suspended for 1000 years. Without customizing the site’s text, this time period is displayed to the user in the suspension email that is sent to the user, and if the user attempts to log back into the site. Telling someone that they have been suspended for 1000 years seems likely to come across as a bad attempt at humour.
This PR special-cases the messages when a user is suspended or silenced forever.
This change largely targets dev users, but it could potentially change
behaviour in production.
Jamie Wilson & I debugged a problem where "should not be larger than the
maximum thumbnail size" would fail due to timeouts.
On our systems, on ImageMagick 7.1.0-2, with Inkscape installed, IM would
attempt to rasterise the SVG and then check the resulting filesize, causing
the test to time out.
As of now, we haven't found a way to cause this to behave better, but have a
workaround in that forcing IM to use the internal renderer (`MSVG:`) seems to
make it perform the same on development workstations as it does in our docker
container.
Configuring staged users to watch categories and tags is a way to sign
them up to get many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt-in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. Default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* DEV: Improve rake `release_note:generate` date handling
A commit-ish value like HEAD@{2021-01-01} is based on the **local state** of HEAD on that date. It does not use dates attached to commits.
Instead, the rake task now detects date-like strings and supplies them to `git log` via the `--after` and `--before` flags
* Skip printing plugin when there are no changes found
A list of skipped plugins is printed on a single line at the end of the output
After every new random record created using the `dev:populate` rake task, a new Discourse event is triggered so that plugins can modify the records if needed.
Fixes two issues:
- ignores invalid XML in custom icon sprite SVG file (and outputs an error if sprite was uploaded via admin UI)
- clears SVG sprite cache when deleting an `icons-sprite` upload in a theme
By default, Twitter will return the URL for the avatar image of the tweet poster as the `og:image` value.
However, if the `user_generated` attribute is true, we should not use this as the avatar URL, as it will be a URL of an image in the tweet itself (e.g., an image belonging to a tweeted news story).
If a user had a staged account and logged in using a third-party service,
a different username was suggested. This change will try to use the
username given by the authentication provider first, then the current
staged username, and lastly suggest a new one.
The date shown in topic timeline was one day later if the post at that
position was made near midnight. This happened because the days number
was rounded down.
User flair was given by the user's primary group. This PR separates the
two, adds a new field to the user model for the flair group ID, and users
can now select their flair from their user preferences.
This TODO comment has existed for 8 years. Sort must be working just
fine or we would have prioritized fixing it.
Removing this comment as a tiny step toward keeping our codebase nice
and tidy.
This PR adds uppy to the project with a custom JS build and the shims needed to import it into our JS code. We need a custom build of Uppy because we do not use webpack for our JS modules/build. The only way to get what you want from Uppy is to use the webpack modules or to include the entire Uppy project including all plugins in a single JS file. This way we can just use the plugins we actually want. Future PRs will actually use Uppy!
Take 2 of https://github.com/discourse/discourse/pull/13466.
Fixes a few issues with the original PR:
- color definition stylesheet target now includes the theme id, to avoid themes set to use the default color scheme loading the same stylesheet
- changes the internal cache key for color definition stylesheet to reset the pre-existing cache
`bin/rake annotate` is an alias of `bin/annotate --models`
`bin/rake annotate:clean` generates annotations by using a temporary, freshly migrated database. This should help us to produce more consistent annotations, even if development databases have been polluted by plugin migrations.
A GitHub actions task is also added which generates annotations on a clean database, and raises an error if they differ from the committed annotations.
We renamed the site setting for this long ago, but there
were a few places left in the code base where "ninja edit"
needed to be turned into "grace period". Doing this here
to avoid combative language.
We changed (https://github.com/discourse/discourse/pull/13407) behaviour of the topic level bookmark button recently. That PR made the button be opening the edit bookmark modal when there is only one bookmark on the topic instead of just removing that bookmark as it was before.
This PR fixes the next problems that weren't taken into account in the previous PR:
1. Everything should work fine even on very big topics when a bookmarked post is unloaded from the post stream. I've added code that loads the post we need and makes everything work as expected
2. When at least one bookmark on the topic has a reminder, we should always be showing the icon with a clock on the topic level bookmark button
3. We should show correct tooltips for the topic level bookmark button
This PR makes several changes to the group SMTP email contents to make it look more like a support inbox message.
* Remove the context posts, they only add clutter to the email and replies
* Display email addresses of staged users instead of odd generated usernames
* Add a "please reply above this line" message to sent emails
This PR backtracks a fair bit on this one https://github.com/discourse/discourse/pull/13220/files.
Instead of sending the group SMTP email for each user via `UserNotifications`, we are changing to send only one email with the existing `Jobs::GroupSmtpEmail` job and `GroupSmtpMailer`. We are changing this job and mailer along with `PostAlerter` to make the first topic allowed user the `to_address` for the email and any other `topic_allowed_users` to be the CC address on the email. This is to cut down on emails sent via SMTP, which is subject to daily limits from providers such as Gmail. We log these details in the `EmailLog` table now.
In addition to this, we have changed `PostAlerter` to no longer rely on the email addresses from incoming email records for sending the `GroupSmtpEmail` job. This was unreliable as a user's email could have changed in the meantime. Also, it was a little overcomplicated to use the incoming email records -- it is far simpler to reason about to just use topic allowed users.
This also adds a fix to include cc_addresses in the EmailLog.addressed_to_user scope.
We are a few versions behind on this gem. We need to update it
for S3 multipart uploads. In the current version we are using, we
cannot do this:
```ruby
Discourse.store.s3_helper.object(key).presigned_url(:upload_part, part_number: 1, upload_id: multipart_upload_id)
```
The S3 client raises an error, saying the operation is undefined. Once
I updated the gem this operation works as expected and returns a
presigned URL for the upload_part operation.
Also remove use of Aws::S3::FileUploader::FIFTEEN_MEGABYTES.
This was part of a private API and should not have been used.
The `themes:isolated_test` rake task will now unset all `DISCOURSE_*` env variables if `UNSET_DISCOURSE_ENV_VARS` env var is set and will also spin up a temporary redis server so the unicorn web server that's spun up for the tests doesn't leak into the "main" redis server.
Previously, we were storing custom svg sprite paths in the cache. This is a problem because sprites in themes get stored as uploads, and the returned paths were files in the temporary download cache which could sometimes be cleaned up, resulting in a broken cache.
I previously tried to fix this by skipping the missing files and clearing the cache, but that didn't work out well with CDNs. This PR stores the contents of the files in the custom_svg_sprites cache to avoid the problem of missing temp files.
Also, plugin custom icons are only included if the plugin is enabled.
In #12841, we started setting the ReviewableQueuedPost's target and topic after approving it instead of storing them in the payload. As a result, the reviewable_counts query started to include queued posts.
When a category is set to require approval, every post has an associated reviewable. Pointing out that each post also has an associated queued post is not necessary in this case, so I added a WHERE clause to skip them.
Having a large number of post-deploy migrations running out-of-numerical-sequence with pre-deploy migrations can be problematic. For example, if we have the sequence
- db/migrate/2017... - add column
- db/post_migrate/2018... - drop the column
- db/migrate/2021... - add the same column again
It will work fine in numerical order. But if you run the pre-deploy migrations **followed by** the post-deploy migrations, you will not get the same result.
Our post-deploy system is designed to allow for seamless upgrades of Discourse. However, it is reasonable for us to only support this totally seamless experience for a limited period of time. This commit moves all post_deploy migrations which are more than 1 year old (i.e. more than 2 major Discourse versions ago) into the regular pre-deploy migrations directory. This limits the impact of any edge cases caused by out-of-numerical-sequence migrations.
This adds the following columns to EmailLog:
* cc_addresses
* cc_user_ids
* topic_id
* raw
This is to bring the EmailLog table closer in parity to
IncomingEmail so it can be better utilized for Group SMTP
and IMAP mailing.
The raw column contains the full content of the outbound email,
but _only_ if the new hidden site setting
enable_raw_outbound_email_logging is enabled. Most sites do not
need it, and it's mostly required for IMAP and SMTP sending.
In the next pull request, there will be a migration to backfill
topic_id on the EmailLog table, at which point we can remove the
topic fallback method on EmailLog.
The other processing operations, such as fixing orientation or cropping,
can in rare cases increase the size of the uploaded image. Running the
downsize step after all these operations should create the best image
possible.
We had checks for the chrome binary in 3 different places
for tests and only one of them checked for google-chrome-stable,
which is problematic for Arch linux users (there are dozens of us!)
This PR moves all the code to one place and references it instead
of copying and pasting.
Before this change, calling `StyleSheet::Manager.stylesheet_details`
for the first time resulted in multiple queries to the database. This is
because the code was modelled in a way where each `Theme` was loaded
from the database one at a time.
This PR restructures the code such that it allows us to load all the
theme records in a single query. It also allows us to eager load the
required associations upfront. In order to achieve this, I removed the
support of loading multiple themes per request. It was initially added
to support user selectable theme components but the feature was never
completed and abandoned because it wasn't a feature that we thought was
worth building.
We already reject email replies to public topics via `SiteSetting.disallow_reply_by_email_after_days` and raising the `OldDestinationError`. This PR introduces similar behaviour for group inboxes, but without the rejection, and **only when SMTP is enabled for the group**.
If a reply is sent via email and the post is older than `SiteSetting.disallow_reply_by_email_after_days` days, then we create a new topic instead of replying in the old one, and link back to the original topic. This is done to prevent long-running group inbox discussions.
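A minimal sketch of that age check under the behaviour described above (variable and helper names are illustrative):
```ruby
# Replies to sufficiently old posts in an SMTP-enabled group inbox start a new
# topic that links back to the original.
max_age_days = SiteSetting.disallow_reply_by_email_after_days

too_old = max_age_days.to_i > 0 && post.created_at < max_age_days.days.ago

if too_old && group.smtp_enabled
  create_new_topic_with_link_back(post.topic) # hypothetical helper
else
  create_reply_in(post.topic) # hypothetical helper
end
```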
If a user posted a URL that appeared inside a Onebox, then the user
got a duplicate link notice. This was fixed by skipping those links in
Ruby.
If a user posted a URL that was Oneboxed and contained other links that
appeared in previous posts, then the user got a duplicate link notice.
This was fixed by skipping those links in JavaScript.
Before this fix we would display this exception:
```
Discourse::InvalidParameters:
value
```
After this fix we will display:
```
Discourse::InvalidParameters:
Invalid `x` value for `s3_region`
```
When we are emailing people from a group inbox, we are having
a PM conversation with them, as a support account would. In this
case mailing list headers do not make sense. It is not like a forum
topic where you may have tens or hundreds of participants -- it is a
conversation between the group and a small handful of people
directly contacting the group, often just one person.
The only header left intact was List-Unsubscribe, which is important
for letting people opt out of notifications.
* FIX: Allow SVG uploads if dimensions are a fraction of a unit
`UploadCreator` counts the number of pixels in a file to determine if it is valid. `pixels` is calculated by multiplying the width and height of the image, as determined by FastImage.
SVG files can have their width/height expressed in a variety of different units of measurement. For example, ‘px’, ‘in’, ‘cm’, ‘mm’, ‘pt’, ‘pc’, etc are all valid within SVG files. If an image has a width of `0.5in`, FastImage may interpret this as being a width of `0`, meaning it will report the `size` as being `0`.
However, we don’t need to concern ourselves with the number of ‘pixels’ in SVG files, as that is irrelevant for this file format, so we can skip the check for `pixels == 0` when processing this file type.
* DEV: Speed up getting SVG dimensions
The `-ping` flag prevents the entire image from being rasterized before a result is returned. See:
https://imagemagick.org/script/command-line-options.php#ping
The purpose of this is to allow us to catch regressions for a feature we've built recently that allows theme tests to run in production. We recently had a regression that we didn't notice for days, so to prevent that from happening again we'll use this in our internal CI pipelines.
The first thing we needed here was an enum rather than a boolean to determine how a directory_column was created. Now we have `automatic`, `user_field` and `plugin` directory columns.
This plugin API assumes that the plugin has added a migration to add a column to the `directory_items` table.
This was created to be initially used by discourse-solved. PR with API usage - https://github.com/discourse/discourse-solved/pull/137/
Profiling showed that we were spending roughly 10% of a request's time creating all
the ActiveRecord objects for categories in the `Site` model on a site with 61 categories.
Instead of querying for the categories each time based on which categories the user can see,
we can just preload all of the categories upfront and filter out the
categories that the user can not see.
When dismissing new topics for the Tracked filter, the dismiss was
limited to 30 topics which is the default per page count for TopicQuery.
This happened even if you specified which topic IDs you were
selectively dismissing. This PR fixes that bug, and also moves
the per_page_count into a DEFAULT_PER_PAGE_COUNT for the TopicQuery
so it can be stubbed in tests.
Also moves the unused stub_const method into the spec helpers
for cases like this; it is much better to handle this in one place
with an ensure. In a follow up PR I will clean up other specs that
do the same thing and make them use stub_const.
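The helper is essentially the following pattern (a sketch; the real spec helper may differ in detail):
```ruby
# Temporarily replace a constant for the duration of a block, restoring it in
# an `ensure` so a failing example cannot leak the stubbed value.
def stub_const(owner, const_name, value)
  original = owner.const_get(const_name)
  owner.send(:remove_const, const_name)
  owner.const_set(const_name, value)
  yield
ensure
  owner.send(:remove_const, const_name)
  owner.const_set(const_name, original)
end

# Usage in a spec:
# stub_const(TopicQuery, "DEFAULT_PER_PAGE_COUNT", 1) { ... }
```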
Dir.glob does not guarantee file order and can change when run on different machines.
This means that running asset precompilation on the exact same codebase will output
different content hashes.
Uploading lots of small files can be made significantly faster by parallelizing the `s3.put_object` calls. In testing, an UPLOAD_CONCURRENCY of 10 made a large restore 10x faster. An UPLOAD_CONCURRENCY of 20 made the same restore 18x faster.
This commit is careful to parallelize as little as possible, to reduce the chance of concurrency issues. In the worker threads, no database transactions are performed. All modification of shared objects is controlled with a mutex.
Unfortunately we do not have any existing tests for the `ToS3Migration` class. This change has been tested with a large site backup (120k uploads totalling 45GB)
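The worker structure is roughly the sketch below; only the concurrency/mutex pattern is shown, with `s3`, `bucket_name`, and `files` assumed to be set up elsewhere (this is not the actual `ToS3Migration` code):
```ruby
UPLOAD_CONCURRENCY = 10

queue = Queue.new
files.each { |file| queue << file }
queue.close

mutex = Mutex.new
uploaded = []

threads = UPLOAD_CONCURRENCY.times.map do
  Thread.new do
    # Queue#pop returns nil once the queue is closed and drained.
    while (file = queue.pop)
      File.open(file[:path], "rb") do |body|
        s3.put_object(bucket: bucket_name, key: file[:key], body: body)
      end
      mutex.synchronize { uploaded << file[:key] } # no DB work inside workers
    end
  end
end

threads.each(&:join)
```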
When we call Bookmark.cleanup! we want to make sure that
topic_user.bookmarked is updated for topics linked to the
bookmarks that were deleted. Also when PostDestroyer calls
destroy and recover. We have a job for this already --
SyncTopicUserBookmarked -- so we just utilize that.
There are 2 changes in this PR:
1) Add a new environment variable called `DISCOURSE_SKIP_CSS_WATCHER` to disable our stylesheet watcher, and make the `qunit:test` rake task set this variable on the Unicorn/Rails server it spins up to disable our stylesheet watcher when running the tests because it doesn't really need it.
2) Print more Chrome logs (such as network/security errors) to the console.
Since we use the event to perform additional validations on the file, we should check if it added any errors to the upload before saving it. This change makes the UploadCreator more consistent since we no longer have to rely on exceptions.
Adds a new `smtp_group_id` column to `EmailLog` which is filled in if the mail `from_address` matches a group's `email_username`. This is for easier debugging, so we know which emails have been sent via group SMTP.
Fixes our backend spec suite in GitHub Actions CI. For more information about the Docker issue see: https://github.com/docker/for-linux/issues/1015
(It's possible that error could also happen in dev/production, though thankfully that hasn't happened yet afaik)
When replying to a user_private_message email originating from
a group PM that does _not_ have a reply key (e.g. when replying
directly to the group's SMTP address), we were mistakenly linking
the new post created from the reply to the OP and the user who
created the topic, based on the first IncomingEmail message ID in
the topic, rather than using the correct reply to user and post number
that the user actually replied to.
We now use the In-Reply-To header to look up the corresponding EmailLog
record when the user who replied was sent a user_private_message email,
and use the post from that as the reply_to_user/post.
This also removes superfluous filtering of incoming_email records. After
already filtering by message_id and then addressed_to_user (which only
returns incoming emails where the to, from, or cc address includes any
of the user's emails), we were filtering again but in the ruby code for
the exact same conditions. After removing this all existing tests still
pass.
If force_https is enabled, all resources (including the markdown preview and so on) will be accessed using HTTPS.
If for any reason you attempt to link to content that is not reachable over HTTPS, that content may appear broken.
When `Theme#all_theme_variables` returns an empty array, we were running
a pointless query in `StyleSheet::Manager#uploads_digest`.
`SELECT "sha1" FROM "theme_fields" INNER JOIN "uploads" ON
"uploads"."id" = "theme_fields"."upload_id" WHERE 1=0`
Previously, the `transformed.blah` shortcut could only be used in top-level hbs statements like {{transformed.blah}}. When attempting to use it in a sub-expression like `{{concat "hello" transformed.world}}`, it would raise a "transformed is not defined" error.
This commit updates the shortcut logic to make `transformed.blah` and `attrs.blah` work consistently in all hbs expressions.
Co-authored-by: Jordan Vidrine <jordan@jordanvidrine.com>
Generating the client settings JSON involves sanitizing all string-based
site settings. This is expensive as per our profile in production (~120ms) and one request after each deploy has
to pay this penalty.
See https://github.com/rails/rails/pull/42368
The impact is not quantifiable at the time of this writing but
preliminary investigation shows that `SELECT 1` accounts for 0.09 of CPU
time on a database. Note that Discourse runs thousands of databases so
the small impact may be amplified by the large number of databases that
we run.
* FIX: Quoting Oneboxed content should exclude formatting
When a post is quoted that includes Oneboxed content, we should not include the formatting generated by the Onebox. Rather, we should attempt to collapse the link referenced by the Onebox to a single line text link.
* DEV: fix tests
IMDb movie links were being rendered as posters. This was because
IMDb was sending `og:type` as `image` randomly in some cases. To
fix this we'll now default all IMDb links as article type. This will
ensure that the IMDb onebox link includes all the information instead
of showing just a poster without any context.
This PR changes the `UserNotification` class to send outbound `user_private_message` using the group's SMTP settings, but only if:
* The first allowed_group on the topic has SMTP configured and enabled
* SiteSetting.enable_smtp is true
* The group does not have IMAP enabled, if this is enabled the `GroupSMTPMailer` handles things
The email is sent using the group's `email_username` as both the `from` and `reply-to` address, so when the user replies from their email it will go through the group's SMTP inbox, which needs to have email forwarding set up to send the message on to a location (such as a hosted site email address like meta@discoursemail.com) where it can be POSTed into discourse's handle_mail route.
Also includes a fix to `EmailReceiver#group_incoming_emails_regex` to include the `group.email_username` so the group does not get a staged user created and invited to the topic (which was a problem for IMAP), as well as updating `Group.find_by_email` to find using the `email_username` as well for inbound emails with that as the TO address.
#### Note
This is safe to merge without impacting anyone seriously. If people had SMTP enabled for a group they would have IMAP enabled too currently, and that is a very small number of users because IMAP is an alpha product, and also because the UserNotification change has a guard to make sure it is not used if IMAP is enabled for the group. The existing IMAP tests work, and I tested this functionality by manually POSTing replies to the SMTP address into my local discourse.
There will probably be more work needed on this, but it needs to be tested further in a real hosted environment to continue.
Setting a key/value pair in DistributedCache involves waiting on the
write to Redis to finish. In most cases, we don't need to wait on the
setting of the cache to finish. We just need to take our return value
and move on.
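One way to express that, sketched under the assumption that the write can simply be handed off to Discourse's existing deferral mechanism (the wrapper method is hypothetical):
```ruby
# Return immediately and let the Redis write happen off the request path.
def set_async(cache, key, value)
  Scheduler::Defer.later("set distributed cache") { cache[key] = value }
  value
end
```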
This improves the display of available tags in categories that are
configured to require at least (x) tags from a tag group.
There are two changes included:
- regular users will now see all the available tags in the required tag
group (previously they could see a max. of 5 tags)
- staff users will now see the tags from the required tag group when
the tag group contains more tags than the default limit (also set to 5)
Both changes only apply to the default query (i.e. no search terms).
It was not clear that replace watched words can be used to replace text
with URLs. This introduces a new watched word type that makes it easier
to understand.
The query is being executed each time we try and generate the link path
for a stylesheet within the duration of a request. Categories are not
updated that often so repeating this query multiple times a request is
wasteful.
At the time of this commit, there is a `publish_discourse_stylesheet`
ActiveRecord callback on the `Category` model which clears the cache of
`Stylesheet::Manager` each time a category is saved.
`SvgSprite.custom_sprite_paths` does not just fetch custom SVG sprites
from themes. It also picks up any custom SVG icons from a pre-defined
plugin folder.
Follow-up to 0700e9382d
The XML parsing of SVGs is done whenever the cache expires or on the
first load after a reboot. In one of our production instances, parsing
ranges from 30ms to 70ms, which is not ideal. Instead, we've decided to
make a small memory trade-off here by memoizing the core SVGs once on
boot to avoid parsing the SVG files during the duration of a request.
The memoized hash will take up 57440 bytes or 0.05744 megabytes in size.
I merged this PR in yesterday, finally thinking this was done https://github.com/discourse/discourse/pull/12958 but then a wild performance regression occurred. These are the problem methods:
1aa20bd681/app/serializers/topic_tracking_state_serializer.rb (L13-L21)
Turns out date comparison is super expensive on the backend _as well as_ the frontend.
The fix was to just move the `treat_as_new_topic_start_date` into the SQL query rather than using the slower `UserOption#treat_as_new_topic_start_date` method in ruby. After this change, 1% of the total time is spent with the `created_in_new_period` comparison instead of ~20%.
----
History:
Original PR which had to be reverted **https://github.com/discourse/discourse/pull/12555**. See the description there for what this PR is achieving, plus below.
The issue with the original PR is addressed in 92ef54f402
If you went to the `x unread` link for a tag Chrome would freeze up and possibly crash, or eventually unfreeze after nearly 10 mins. Other routes for unread/new were similarly slow. From profiling the issue was the `sync` function of `topic-tracking-state.js`, which calls down to `isNew` which in turn calls `moment`, a change I had made in the PR above. The time it takes locally with ~1400 topics in the tracking state is 2.3 seconds.
To solve this issue, I have moved these calculations for "created in new period" and "unread not too old" into the tracking state serializer.
When I was looking at the profiler I also noticed this issue which was just compounding the problem. Every time we modify topic tracking state we recalculate the sidebar tracking/everything/tag counts. However this calls `forEachTracked` and `countTags` which can be quite expensive as they go through the whole tracking state (and were also calling the removed moment functions).
I added some logs and this was being called 30 times when navigating to a new /unread route because `sync` is being called from `build-topic-route` (one for each topic loaded due to pagination). So I just added a debounce here and it makes things even faster.
Finally, I changed topic tracking state to use a Map so our counts of the state keys is faster (Maps have .size whereas objects you have to do Object.keys(obj) which is O(n).)
* FIX: return an empty result if response from Amazon is missing attributes
Check we have the basic attributes required to construct a Onebox for Amazon.
This is an attempt to handle scenarios where we receive a valid 200-status response from an Amazon request that does not include the data we’re expecting.
* Update lib/onebox/engine/amazon_onebox.rb
Co-authored-by: Régis Hanol <regis@hanol.fr>
Refactors `TrustLevel` and moves translations from server to client
Additional changes:
* "staff" and "admin" wasn't translatable in site settings
* it replaces a concatenated string with a translation
* uses translation for trust levels in users_by_trust_level report
* adds a DB migration to rename keys of translation overrides affected by this commit
On the web, we display only an excerpt in a monospace font and the rest
of the body is hidden under ellipsis. The email displayed both of them
and it did not use the same style. This commit leaves only the excerpt
in emails and makes it use a monospace font to display it.
Original PR which had to be reverted **https://github.com/discourse/discourse/pull/12555**. See the description there for what this PR is achieving, plus below.
The issue with the original PR is addressed in 92ef54f402
If you went to the `x unread` link for a tag Chrome would freeze up and possibly crash, or eventually unfreeze after nearly 10 mins. Other routes for unread/new were similarly slow. From profiling the issue was the `sync` function of `topic-tracking-state.js`, which calls down to `isNew` which in turn calls `moment`, a change I had made in the PR above. The time it takes locally with ~1400 topics in the tracking state is 2.3 seconds.
To solve this issue, I have moved these calculations for "created in new period" and "unread not too old" into the tracking state serializer.
When I was looking at the profiler I also noticed this issue which was just compounding the problem. Every time we modify topic tracking state we recalculate the sidebar tracking/everything/tag counts. However this calls `forEachTracked` and `countTags` which can be quite expensive as they go through the whole tracking state (and were also calling the removed moment functions).
I added some logs and this was being called 30 times when navigating to a new /unread route because `sync` is being called from `build-topic-route` (one for each topic loaded due to pagination). So I just added a debounce here and it makes things even faster.
Finally, I changed topic tracking state to use a Map so our counts of the state keys is faster (Maps have .size whereas objects you have to do Object.keys(obj) which is O(n).)
This overhauls the user interface for the group email settings management, aiming to make it a lot easier to test the settings entered and confirm they are correct before proceeding. We do this by forcing the user to test the settings before they can be saved to the database. It also includes some quality of life improvements around setting up IMAP and SMTP for our first supported provider, GMail. This PR does not remove the old group email config, that will come in a subsequent PR. This is related to https://meta.discourse.org/t/imap-support-for-group-inboxes/160588 so read that if you would like more backstory.
### UI
Both site settings of `enable_imap` and `enable_smtp` must be true to test this. You must enable SMTP first to enable IMAP.
You can prefill the SMTP settings with GMail configuration. To proceed with saving these settings you must test them, which is handled by the EmailSettingsValidator.
If there is an issue with the configuration or credentials a meaningful error message should be shown.
IMAP settings must also be validated when IMAP is enabled, before saving.
When saving IMAP, we fetch the mailboxes for that account and populate them. A mailbox must be selected and saved for IMAP to work (the feature acts as though it is disabled until the mailbox is selected and saved).
### Database & Backend
This adds several columns to the Groups table. The purpose of this change is to make it much more explicit that SMTP/IMAP is enabled for a group, rather than relying on settings not being null. Also included is an UPDATE query to backfill these columns. These columns are automatically filled when updating the group.
For GMail, we now filter the mailboxes returned. This is so users cannot use a mailbox like Sent or Trash for syncing, which would generally be disastrous.
There is a new group endpoint for testing email settings. This may be useful in the future for other places in our UI, at which point it can be extracted to a more generic endpoint or module to be included.
Discourse shouldn't dynamically calculate the path of uploads and optimized images after a file has been stored on disk or S3. Otherwise it might calculate the wrong path if the SHA1 or extension stored in the database doesn't match the actual file path.
* FIX: Improve GitHub folder regexp in Onebox
It used to match any GitHub URL that was not matched by the other GitHub
Oneboxes and it did not do a good job at handling those. With this
change, the generic Onebox will handle the remaining URLs.
* FEATURE: Add Onebox for GitHub Actions
* FEATURE: Add Onebox for PR check runs
* FIX: Remove image from GitHub folder Oneboxes
It is a generic, auto-generated image which does not provide any value.
* DEV: Add tests
* FIX: Strip HTML comments from PR body
* Move onebox gem in core library
* Update template file path
* Remove warning for onebox gem caching
* Remove onebox version file
* Remove onebox gem
* Add sanitize gem
* Require onebox library in lazy-yt plugin
* Remove onebox web specific code
This code was used in standalone onebox Sinatra application
* Merge Discourse specific AllowlistedGenericOnebox engine in core
* Fix onebox engine filenames to match class name casing
* Move onebox specs from gem into core
* DEV: Rename `response` helper to `onebox_response`
Fixes a naming collision.
* Require rails_helper
* Don't use `before/after(:all)`
* Whitespace
* Remove fakeweb
* Remove poor unit tests
* DEV: Re-add fakeweb, plugins are using it
* Move onebox helpers
* Stub Instagram API
* FIX: Follow additional redirect status codes (#476)
Don’t throw errors if we encounter 303, 307 or 308 HTTP status codes in responses
* Remove an empty file
* DEV: Update the license file
Using the copy from https://choosealicense.com/licenses/gpl-2.0/#
Hopefully this will enable GitHub to show the license UI?
* DEV: Update embedded copyrights
* DEV: Add Onebox copyright notice
* DEV: Add MIT license, convert COPYRIGHT.txt to md
* DEV: Remove an incorrect copyright claim
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: jbrw <jamie@goatforce5.org>
This PR improves the UI of bulk select so that its context is applied to the Dismiss Unread and Dismiss New buttons. Regular users (not just staff) are now able to use topic bulk selection on the /new and /unread routes to perform these dismiss actions more selectively.
For Dismiss Unread, there is a new count in the text of the button and in the modal when one or more topics are selected with the bulk select checkboxes.
For Dismiss New, there is a count in the button text, and we have added functionality to the server side to accept an array of topic ids to dismiss new for, instead of always having to dismiss all new, the same as the bulk dismiss unread functionality. To clean things up, the `DismissTopics` service has been rolled into the `TopicsBulkAction` service.
We now also show the top Dismiss/Dismiss New button based on whether the bottom one is in the viewport, not just based on the topic count.
Re-lands the change initially proposed on #8359 but without a new nginx
location block, so it has less change surface.
Co-authored-by: Jeff Wong <awole20@gmail.com>
Based on feedback from Matt Haughey, we don't need to use so many words when describing a deleted topic or post.
Co-authored-by: Martin Brennan <martin@discourse.org>
The main image_optim gem now includes the timeout feature
that we had in our fork. So it is now safe to switch off of our fork and
back to the image_optim gem.
This is the link to the commit in the image_optim repo that adds the
timeout option:
ec3767dde0
One difference with the new timeout implementation is that image_optim
now handles the timeout exceptions instead of bubbling them up:
1ed0328587/lib/image_optim.rb (L128-L129)
```
rescue Errors::TimeoutExceeded
  handler.result
```
So a timeout will just return `nil`, which is the same response if it
couldn't optimize an image. I don't think we were really watching for
or doing anything about these timeout warnings in our logs, so I think
this is an okay change to have, and we will have fewer warnings in our
logs now too.