Previously we were migrating multisites serially, which is extremely slow,
especially when 200 dbs are involved.
The new implementation defaults to running 20 migrations concurrently, leading
to a 20x speedup.
We also amended it so errors are printed out last, which makes
debugging failures easier.
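A minimal sketch of the concurrency approach (illustrative names, not the actual multisite code): run per-database migrations on a fixed pool of worker threads and collect failures so they can be printed at the end.
```
require "thread"

CONCURRENCY = 20  # assumed default, matching the description above

def migrate_all(db_names)
  queue  = Queue.new
  errors = Queue.new
  db_names.each { |db| queue << db }

  workers = CONCURRENCY.times.map do
    Thread.new do
      while (db = queue.pop(true) rescue nil)
        begin
          migrate_one(db) # hypothetical helper that migrates a single database
        rescue => e
          errors << [db, e]
        end
      end
    end
  end

  workers.each(&:join)

  # Print errors last so failures are easy to spot.
  errors.size.times do
    db, e = errors.pop
    puts "Migration failed for #{db}: #{e.message}"
  end
end
```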
This code is specific to Discourse because we integrate SeedFu with our
migrations and cannot include this in the multisite gem.
If a migration performs no changes it should not produce any output.
Previously we would output information about seeds, which was very noisy.
On multisite this was particularly bad.
If the feature is enabled, staff members can construct a URL and publish a
topic for others to browse without the regular Discourse chrome.
This is useful if you want to use Discourse like a CMS and publish
topics as articles, which can then be embedded into other systems.
* DEPRECATION: Remove support for api creds in query params
This commit removes support for api credentials in query params except
for a few whitelisted routes like rss/json feeds and the handle_mail
route.
Several tests were written to validate these changes, but the bulk of the
spec changes are just switching them over to use header based auth so
that they will pass without changing what they were actually testing.
Original commit that notified admins this change was coming was created
over 3 months ago: 2db2003187
* fix tests
* Also allow iCalendar feeds
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
* FIX: Guardian always got a user, but sometimes that user is anonymous
```
def initialize(user = nil, request = nil)
  @user = user.presence || AnonymousUser.new
  @request = request
end
```
AnonymousUser defines a `blank?` method
```
class AnonymousUser
  def blank?
    true
  end
  ...
end
```
So if we used `@user.present?` it would be correct; however, bare `@user` is always truthy.
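A quick illustration using the classes above:
```
user = nil
guardian_user = user.presence || AnonymousUser.new
guardian_user.present?  # => false, because AnonymousUser#blank? returns true
!!guardian_user         # => true, the object itself is still truthy,
                        #    which is why checking bare @user was wrong
```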
This is so users with a huge number of bookmarks do not have to wait a long time to see results.
* Add a bookmark list and list serializer to server-side to be able to handle paging and load more URL
* Use load-more component to load more bookmark items, 20 at a time in user activity
* Change the way current user is loaded for bookmark ember models because it was breaking/losing resolvedTimezone when loading more items
* FEATURE: add setting `auto_approve_email_domains` to auto approve users
This commit adds a new site setting `auto_approve_email_domains` to
auto approve users based on their email address domain.
Note that if `email_domains_whitelist` is in use, then any domain in
`auto_approve_email_domains` needs to be duplicated there as well,
since users won't be able to register with an email address that is
not allowed in `email_domains_whitelist`.
* Update config/locales/server.en.yml
Co-Authored-By: Robin Ward <robin.ward@gmail.com>
Usage:
```
{{d-button icon="times" label="foo.bar" isLoading=true}}
```
Note that a loading button without an icon will shrink the text size to prevent the button from jumping in size.
A button is disabled while loading.
* FIX: Perform crop using user-specified image sizes
It used to resize the images to max width and height first and then
perform the crop operation. This is wrong because it ignored the user
specified image sizes from the Markdown.
* DEV: Use real images in test
Previously we would consider a user "present" and "last seen" if the
browser window was visible.
This has many edge cases: you could be considered present and around for
days just by having a window open and no screensaver on.
Instead we now also check that you either clicked, transitioned around the app
or scrolled the page in the last minute, in combination with window
visibility.
This will lead to more reliable notifications via email and reduce load on
the message bus for cases where a user walks away from the terminal.
Get rid of the harmful each loop over uploads to update. Instead we put all the unique access control posts for the uploads into a map for fast access (vs. using the slow .find through an array) and look up the post when it is needed when looping through the uploads in batches.
On a Discourse instance with ~93k uploads, a simplified version of the old method takes > 1 minute, and a simplified version of the new method takes ~18s and uses a lot less memory.
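A rough sketch of the change, with illustrative names: load the unique access control posts into a hash once, then do O(1) lookups while batching.
```
# uploads_to_update: an ActiveRecord relation of uploads (illustrative)
post_ids    = uploads_to_update.pluck(:access_control_post_id).compact.uniq
posts_by_id = Post.where(id: post_ids).index_by(&:id)

uploads_to_update.find_in_batches(batch_size: 100) do |batch|
  batch.each do |upload|
    post = posts_by_id[upload.access_control_post_id] # O(1) hash lookup
    update_upload_security(upload, post)              # hypothetical helper
  end
end
```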
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip etc. etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.
This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
* FIX: Do not use original filename to extract the original filename
Prefer extracting the filename from the destination path, which is built
using the extracted image information.
* UX: Show better error images
Prior to this change, slugs for leaves in 3-level nestings would not work.
Our UX picks only the last two levels.
This also makes the results consistent for slugs as it enforces order.
- Define the CSP based on the requested domain / scheme (respecting force_https)
- Update EnforceHostname middleware to allow secondary domains, add specs
- Add URL scheme to anon cache key so that CSP headers are cached correctly
This is mostly useful while developing a plugin, to avoid manual actions of deleting tables and schema_migrations rows.
Usage:
bundle exec rake plugin:migrate:down[discourse-calendar]
A new `duration` attribute is introduced for the `set_or_create_timer` method in commit aad12822b7 for "based on last post" and "auto delete replies" topic timers.
* Preload custom fields for BookmarkQuery and add a preload callback (see the sketch below). Copy the TopicQuery preload methodology to allow plugins to preload data for the BookmarkQuery. This fixes the assign plugin custom fields N+1.
* Include topic tags in the initial query to avoid a tags N+1.
Related: discourse/discourse-assign#63
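The preload-callback registry could look roughly like this; the method names here are hypothetical and only sketch the TopicQuery-style pattern being copied:
```
class BookmarkQuery
  def self.preload_callbacks
    @preload_callbacks ||= []
  end

  # Plugins register a block that receives the loaded bookmarks and the query.
  def self.on_preload(&block)
    preload_callbacks << block
  end

  def preload!(bookmarks)
    self.class.preload_callbacks.each { |cb| cb.call(bookmarks, self) }
  end
end

# A plugin (e.g. discourse-assign) could then batch-load its custom fields:
# BookmarkQuery.on_preload do |bookmarks, _query|
#   preload_assign_custom_fields(bookmarks) # hypothetical plugin helper
# end
```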
* DEV: Replace User.unstage and User#unstage API with User#unstage!
Quoting @SamSaffron:
> User.unstage mixes concerns of both unstaging users and updating params which is fragile/surprising.
> u.unstage destroys notifications and raises a user_unstaged event prior to the user becoming unstaged and the user object being saved.
User#unstage! no longer updates user attributes and saves the object before triggering the `user_unstaged` event.
* Update one more spec
* Assign attributes after unstaging
* Improve the bookmark modal on mobile so it doesn't go all the way to the edge and the custom datetime input is easier to use
* Improve the rake task for syncing so it does not error for topics that no longer exist and batches 2000 inserts at a time, clearing the array each time
As another step toward fully deprecating query parameter authentication
in API requests this change prevents an admin dashboard message from
showing up if using a whitelisted route like rss feeds or the
mail-receiver route.
Rails calls I18n.translate during initialization and by default translation overrides are used. Database migrations would fail if the system tried to migrate from an old version that didn't have the `translation_overrides` table with all its columns yet.
This makes restoring really old backups work again. Running `DISABLE_TRANSLATION_OVERRIDES=1 rake db:migrate` will allow you to upgrade such an old database as well.
Two behaviors in the mail gem collide:
1. Attachments are added as extra parts at the top level,
2. When there are both text and html parts, the content type is set to
'multipart/alternative'.
Since attachments aren't alternative renderings, for emails that contain
attachments and both html and text parts, some coercing is necessary.
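As a hedged sketch of the kind of coercion involved (addresses and filenames are made up): nest the text and html renderings inside their own multipart/alternative part, so the attachment can sit next to it under a multipart/mixed top level.
```
require "mail"

# Build the text and html renderings as their own "alternative" group.
alternative = Mail::Part.new(content_type: "multipart/alternative")
alternative.add_part(Mail::Part.new(content_type: "text/plain; charset=UTF-8",
                                    body: "Plain text body"))
alternative.add_part(Mail::Part.new(content_type: "text/html; charset=UTF-8",
                                    body: "<p>HTML body</p>"))

mail = Mail.new(to: "user@example.com", from: "noreply@example.com", subject: "Example")
mail.add_part(alternative)
# The attachment is a sibling of the alternative group, not another "alternative".
mail.add_file(filename: "notes.txt", content: "attached text")

# When encoded, the top level should render as multipart/mixed with a nested
# multipart/alternative part.
puts mail.to_s
```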
* This PR changes the user activity bookmarks stream to show a new list of bookmarks based on the Bookmark record.
* If a bookmark has a name or reminder it will be shown as metadata above the topic title in the list
* The categories, tags, topic status, and assigned show for each bookmarked post based on the post topic
* Bookmarks can be deleted from the [...] menu in the list
* As well as this, the list of bookmarks from the quick access panel is now drawn from the Bookmarks table for a user:
* All of this new functionality is gated behind the enable_bookmarks_with_reminders site setting
The /bookmarks/ route now redirects directly to /user/:username/activity/bookmarks-with-reminders
* The structure of the Ember for the list of bookmarks is not ideal, this is an MVP PR so we can start testing this functionality internally. There is a little repeated code from topic.js.es6. There is an ongoing effort to start standardizing these lists that will be addressed in future PRs.
* This PR also fixes issues with feature detection for at_desktop bookmark reminders
A custom date and time can now be selected for a bookmark reminder
The reminder will not happen at the exact time but rather at the next 5 minute interval of the bookmark reminder schedule.
This PR also fixes issues with bulk deleting topic bookmarks.
* This PR implements the scheduling and notification system for bookmark reminders. Every 5 minutes a schedule runs to check any reminders that need to be sent before now, limited to **300** reminders at a time. Any leftover reminders will be sent in the next run. This is to avoid having to deal with fickle sidekiq and reminders in the far-flung future, which would necessitate having a background job anyway to clean up any missing `enqueue_at` reminders.
* If a reminder is sent its `reminder_at` time is cleared and the `reminder_last_sent_at` time is filled in. Notifications are only user-level notifications for now.
* All JavaScript and frontend code related to displaying the bookmark reminder notification is contained here. The reminder functionality is now re-enabled in the bookmark modal as well.
* This PR also implements the "Remind me next time I am at my desktop" bookmark reminder functionality. When the user is on a mobile device they are able to select this option. When they choose this option we set a key in Redis saying they have a pending at desktop reminder. The next time they change devices we check if the new device is desktop, and if it is we send reminders using a DistributedMutex. There is also a job to ensure consistency of these reminders in Redis (in case Redis drops the ball) and the at desktop reminders expire after 20 days.
* Also in this PR is a fix to delete all Bookmarks for a user via `UserDestroyer`
* Remove some `.es6` from comments where it does not matter
* Use a post processor for transpilation
This will allow us to eventually use the directory structure to
transpile rather than the extension.
* FIX: Some errors and clean up in confirm-new-email
It would throw an error if the webauthn element wasn't present.
Also I changed things so that no-module is not explicitly
referenced.
* Remove `no-module`
Instead we allow a magic comment: `// discourse-skip-module` to prevent
the asset pipeline from creating a module.
* DEV: Enable babel transpilation based on directory
If it's in `app/assets/javascripts/discourse` it will be transpiled
even without the `.es6` extension.
* REFACTOR: Remove Tilt/ES6ModuleTranspiler
There are three modifiers:
- serialize_topic_excerpts (boolean)
- csp_extensions (array of strings)
- svg_icons (array of strings)
When multiple themes are active, the values will be combined. The combination method varies based on the setting. CSP/SVG arrays will be combined. serialize_topic_excerpts will use `Enumerable#any?`.
* Do not grant badges for posts with no user
* Ensure instructions are correct in Change Owner modal
* Hide user-dependent actions from posts with no user
* Make PostRevisor work with posts with no user
* Ensure posts with no user can be deleted
* discourse-narrative-bot should ignore posts with no user
* Skip TopicLink creation for posts with no user
Due to unicorn env object recycling, request.ip could point at the wrong
IP address by the time the defer block is called. This would usually happen
under load.
This also avoids keeping the entire request object as referenced by the
closure.
* when importing a private theme using the themes:install rake task the SSH key is written out to a file for use by the git-clone command
* if the private key is written out without a newline at end-of-file (i.e. after it's been stripped) it's not recognized as a valid key by SSH
* so: don't strip it when writing it out, we should be fine
This PR replaces `{{{ }}}` usage with a `{{html-safe}}` helper. While it doesn't solve the underlying issue, it gives us a path forward without risking breaking too much existing behavior.
Also introduces an htmlSafe computed macro:
```
import { htmlSafe } from "discourse/lib/computed";
htmlDescription: htmlSafe("description")
```
Over time, `{{html-safe}}` usage should be removed and moved to component properties or specialized components/helpers.
PostMover passes PostCreator a `created_at` that is an `ActiveSupport::TimeWithZone` instance (and also `is_a? Time`). Previously it was always passed through `Time.zone.parse`, so it would lose sub-second information. Now, it takes `Time` input as-is, while still parsing other types.
Rails has an odd behavior for calling .delete_all on a has_many relation - the
default behavior is to nullify the foreign key fields instead of actually
'DELETE'ing the records.
Additionally, publishing a shared draft topic creates a PostRevision that the
NotifyPostRevision job picks up which is then promptly deleted.
Use destroy_all when cleaning up the revisions and have the NotifyPostRevision
job tolerate deleted PostRevision records.
This takes a small performance hit (several SQL DELETEs instead of just one)
but shouldn't be too much of an issue (high cardinalities range from 30-100).
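A minimal sketch of the Rails behavior in question, with illustrative models:
```
class Post < ActiveRecord::Base
  has_many :post_revisions   # no :dependent option
end

post = Post.first

# delete_all on a has_many association defaults to nullifying the foreign key:
#   UPDATE post_revisions SET post_id = NULL WHERE post_id = ?
post.post_revisions.delete_all

# destroy_all loads each record and DELETEs it (running callbacks), which is
# what the shared draft cleanup now does:
post.post_revisions.destroy_all
```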
On startup (including when starting a rails console) we manipulate a
collection of plugin files. Writing these files is done in multiple
observable steps, which presents opportunities for race conditions and
causes temporary corruption.
This commit uses the write, fsync and rename trick to atomically
overwrite these files instead, but reads them first to avoid unnecessary
writes.
c457d3bf was a previous attempt to fix the same problem.
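A minimal sketch of the write/fsync/rename trick, including the read-first optimisation, assuming a simple helper rather than the actual implementation:
```
def atomic_write(path, contents)
  # Avoid unnecessary writes if the file already has the desired contents.
  return if File.exist?(path) && File.read(path) == contents

  temp_path = "#{path}.tmp.#{Process.pid}"
  File.open(temp_path, "w") do |f|
    f.write(contents)
    f.fsync                     # make sure the data hits disk before the rename
  end
  File.rename(temp_path, path)  # rename is atomic on POSIX filesystems
end
```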
The rake task aborted the migration with "Already migrated" when all upload URLs linked to the correct S3 bucket, even though the files didn't exist on S3. By removing the first check we force the rake task to check for the existence of uploads on S3.
Previously on boot we were always removing and adding the same pre-generated
files and symlinks.
This change attempts to avoid writing any automatically generated content if
it is exactly what it should be on disk.
This corrects issues where running a rails console can temporarily corrupt
internal state in production.
When secure media is enabled and an attachment is marked as secure we want to use the full url instead of the short-url so we get the same access control post protections as secure media uploads.
* Add uploads:sync_s3_acls rake task to ensure the ACLs in S3 are the correct (public-read or private) setting based on upload security
* Improved uploads:disable_secure_media to be more efficient and provide better messages to the user.
* Rename uploads:ensure_correct_acl task to uploads:secure_upload_analyse_and_update as it does more than check the ACL
* Many improvements to uploads:secure_upload_analyse_and_update
* Make sure that upload.access_control_post is unscoped so deleted posts are still fetched, because they still affect the security of the upload.
* Add an escape hatch for capture_stdout in the form of RAILS_ENABLE_TEST_STDOUT. If provided, the capture_stdout code will be ignored, so you can see the output if you need to (see the sketch below).
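A hedged sketch of the escape hatch idea; the method body here is illustrative, not the real helper:
```
require "stringio"

def capture_stdout
  # Escape hatch: skip capturing entirely so test output stays visible.
  return yield if ENV["RAILS_ENABLE_TEST_STDOUT"]

  previous = $stdout
  io = StringIO.new
  $stdout = io
  yield
  io.string
ensure
  $stdout = previous
end
```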
In some cases the CTE caused pathologically bad query plans.
This optimises it so the query runs by itself and caches for the lifetime
of the topic query object.
This lightweight caching is done because the topic query will often
execute two queries (one for pinned and one for non-pinned).
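Roughly, the caching amounts to memoising the pinned lookup on the query object; the scope below is illustrative only (the real one also respects category and per-user pin state):
```
def pinned_topic_ids
  @pinned_topic_ids ||= Topic.where.not(pinned_at: nil).pluck(:id)
end

# Both list queries can then reuse the cached ids for the lifetime of this
# TopicQuery instance:
#   pinned:     Topic.where(id: pinned_topic_ids)
#   non-pinned: Topic.where.not(id: pinned_topic_ids)
```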
* Also fixes an issue where if webp was a downloaded hotlinked
image and then secure + sent in an email, it was not being
redacted because webp was not a supported media format in
FileHelper
* Webp was originally removed as an image format in
https://github.com/discourse/discourse/pull/6377
and there was a spec to make sure a .bin webp
file did not get renamed from its type to webp.
However we want to support webp images now to make
sure they are properly redacted if secure media is
on, so change the example in the spec to use tiff,
another banned format, instead
* Attachments (non media files) were being marked as secure if just
SiteSetting.prevent_anons_from_downloading_files was enabled. This
was not correct, as nothing should be marked as actually "secure" in
the DB unless secure media is enabled.
* Also add a proper standalone spec file for the upload security class
Follows up #64b35120
This also corrects it so bytes used for internal storage counts all the space
used; previously it was only counting uploads, not optimized images.
Additionally, we now correctly count storage for optimized images.
A follow-up correction to this change https://github.com/discourse/discourse/pull/9001.
When an admin changes a staff member's email, still enforce confirmation via the old email. Only allow auto-confirm of a new email by an admin IF the target user is not also an admin. If an admin gets locked out of their email, the site admin can use the rails console to solve the issue in a pinch.
When admin changes a user's email from the preferences page of that user:
* The user will not be sent an email to confirm that their
email is changing. They will be sent a reset password email
so they can set the password for their account at the new
email address.
* The user will still be sent an email to their old email to inform
them that it was changed.
* Admin and staff users still need to follow the same old + new
confirm process, as do users changing their own email.
A single SchemaCache instance is maintained by the connection pool, and made available via a schema_cache method on each connection. When the SchemaCache instance is fetched from the pool, its internal connection reference is updated to equal the requesting connection. However, since there is only one instance of SchemaCache, this internal connection reference is updated everywhere, and can ultimately result in multiple threads accessing the same database connection. In Discourse, this could result in Sidekiq jobs getting 'stuck' in database connections.
This patch modifies SchemaCache so that it caches the internal connection on a per-thread basis
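The shape of the per-thread idea, as an illustration only (not the actual patch applied to ActiveRecord):
```
# Instead of one shared @connection ivar, keep one reference per thread.
class ThreadSafeSchemaCacheConnection
  def connection
    Thread.current[:schema_cache_connection]
  end

  def connection=(conn)
    Thread.current[:schema_cache_connection] = conn
  end
end
```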
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
Co-authored-by: Matt Palmer <mpalmer@hezmatt.org>
If a group mention could be notified on preview it was given an `<a>`
tag with the `.notify` class. When cooked it would display differently.
This patch makes the server side cooking match the client preview.
This may be the case when DiscourseLogstashLogger is initialized before
the application (see unicorn.conf.rb)
This commit is a follow-up to 28292d2759.
Co-authored-by: David Taylor <david@taylorhq.com>
Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
This normalizes it so we only have one place for grabbing the disk space size.
It also normalizes the command used so it goes through Discourse.execute_command,
which splits off params in a far cleaner way.
Previously we had many places in the app that called `hostname` to get the
hostname of the server. This commit replaces the pattern in two ways:
1. We cache the result in `Discourse.os_hostname` so it is only ever called once
2. We prefer to use Socket.gethostname, which avoids shelling out
This improves performance as we are not spawning hostname processes throughout
the app lifetime.
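A hedged sketch of the caching; the real Discourse.os_hostname may differ in detail (e.g. the fallback may go through Discourse.execute_command):
```
require "socket"

module Discourse
  def self.os_hostname
    @os_hostname ||=
      begin
        Socket.gethostname    # avoids shelling out
      rescue StandardError
        `hostname`.strip      # fall back to the external command
      end
  end
end
```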
Further on from my earlier PR #8973, also reject an upload as secure if its origin URL contains images/emoji. We still check Emoji.all first to try and be canonical.
This may be a little heavy handed (e.g. if an external URL followed this same path it would be a false positive), but there are a lot of emoji aliases where the actual emoji URL does not show up in Emoji.all, yet another name that aliases it does. For example slight_smile.png does not show up in Emoji.all BUT slightly_smiling_face does, and it aliases slight_smile; e.g. /images/emoji/twitter/slight_smile.png?v=9 and /images/emoji/twitter/slightly_smiling_face.png?v=9 are equivalent.
The rake task was broken, because the addition of the
UploadSecurity check returned true/false instead of the
upload ID to determine which uploads to set secure.
Also it was rebaking the posts in the wrong place and
pretty inefficiently at that. Also it was rebaking before
the upload was being changed to secure in the DB.
This also updates the task to set the access_control_post_id
for all uploads. The first post the upload is linked to is used
for the access control. If the upload doesn't get changed to
secure this doesn't affect anything.
Added a spec for the rake task to cover common cases.
Sometimes PullHotlinkedImages pulls down a site emoji and creates a new upload record for it. In the cases where this happens, the upload is not created via the normal path that custom emoji follow, so we need to check in UploadSecurity whether the origin of the upload is based on a regular site emoji. If it is we never want to mark it as secure (we don't want emoji to be inaccessible from other posts because of secure media).
This only became apparent because the uploads:ensure_correct_acl rake task uses UploadSecurity to check whether an upload should be secure, which would have marked a whole bunch of regular-old-emojis as secure.
After adding a tag as a synonym of another tag,
both tags will have the wrong topic counts. It's
corrected within 12 hours by the EnsureDbConsistency
job. This fix ensures the topic counts are updated
much sooner.
Previously we were caching by user_id, but there are only two possible outcomes. Therefore we only need to cache two values.
This removes another N+1 query when serializing multiple user cards.
* Because custom emoji count as post "uploads" we were
marking them as secure when updating the secure status for post uploads.
* We were also giving them an access control post id, which meant
broken image previews from 403 errors in the admin custom emoji list.
* We now check if an upload is used as a custom emoji and do not
assign the access control post + never mark as secure.
### UI Changes
If `SiteSetting.enable_bookmarks_with_reminders` is enabled:
* Clicking "Bookmark" on a topic will create a new Bookmark record instead of a post + user action
* Clicking "Clear Bookmarks" on a topic will delete all the new Bookmark records on a topic
* The topic bookmark buttons control the post bookmark flags correctly and vice-versa
Disabled selecting the "reminder type" for bookmarks in the UI because the backend functionality is not done yet (of sending users notifications etc.)
### Other Changes
* Added delete bookmark route (but no UI yet)
* Added a rake task to sync the old PostAction bookmarks to the new Bookmark table, which can be run as many times as we want for a site (it will not create duplicates).
This is not used in core or official plugins, and has been printing a deprecation notice since v2.3.0beta4. All OpenID 2.0 code and dependencies have been dropped. The user_open_ids table remains for now, in case anyone has missed the deprecation notice, and needs to migrate their data.
Context at https://meta.discourse.org/t/-/113249
This commit removes logic about spoilers because it should live inside
of the discourse-spoiler-alert plugin.
This PR:
https://github.com/discourse/discourse-spoiler-alert/pull/38
also completely removes spoilers from excerpts in order to keep them
from leaking in topic previews and notifications.
Some auth providers (e.g. Auth0 with default configuration) send the email address in the name field. In Discourse, the name field is made public, so this commit adds a safeguard to prevent emails being made public.
For some reason, we have two ways of associating "custom fields" with a new topic:
using 'meta_data' and 'custom_fields'.
However, if we were to provide both arguments, the 'meta_data' would be overwritten
by any 'custom_fields' provided.
This commit ensures we can use both and merges the 'custom_fields' with the 'meta_data'.
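A minimal sketch of the merge, with made-up keys; custom_fields still wins on conflict, but meta_data keys are no longer silently dropped:
```
meta_data     = { "event_date"  => "2020-05-01" }  # illustrative values
custom_fields = { "imported_id" => 42 }

topic_custom_fields = (meta_data || {}).merge(custom_fields || {})
# => { "event_date" => "2020-05-01", "imported_id" => 42 }
```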
This commit adds support for an optional "logout" parameter in the
payload of the /session/sso_provider endpoint. If an SSO Consumer
adds a "logout=true" parameter to the encoded/signed "sso" payload,
then Discourse will treat the request as a logout request instead
of an authentication request. The logout flow works something like
this:
* User requests logout at SSO-Consumer site (e.g., clicks "Log me out!"
on web browser).
* SSO-Consumer site does whatever it does to destroy User's session on
the SSO-Consumer site.
* SSO-Consumer then redirects browser to the Discourse sso_provider
endpoint, with a signed request bearing "logout=true" in addition
to the usual nonce and the "return_sso_url".
* Discourse destroys User's discourse session and redirects browser back
to the "return_sso_url".
* SSO-Consumer site does whatever it does --- notably, it cannot request
SSO credentials from Discourse without the User being prompted to login
again.
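A hedged sketch of what a consumer-side logout request might look like, using the standard base64 + HMAC-SHA256 SSO signing scheme; the URLs and secret are made up:
```
require "base64"
require "openssl"
require "cgi"
require "securerandom"

sso_secret = "shared-secret"                     # illustrative
params     = {
  "nonce"          => SecureRandom.hex(16),
  "return_sso_url" => "https://consumer.example.com/logged-out",
  "logout"         => "true"                     # the new parameter described above
}

payload = Base64.strict_encode64(params.map { |k, v| "#{k}=#{CGI.escape(v)}" }.join("&"))
sig     = OpenSSL::HMAC.hexdigest("sha256", sso_secret, payload)

redirect_url = "https://discourse.example.com/session/sso_provider" \
               "?sso=#{CGI.escape(payload)}&sig=#{sig}"
```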
Accounting for fractional seconds, a distributed mutex can be held for
almost a full second longer than its validity.
For example: if we grab the lock at 10.5 seconds past the epoch with a
validity of 5 seconds, the lock would be released at 16 seconds past
the epoch. However, in this case, assuming that all other processing
takes a negligible amount of time, the key would be expired at 15.5
seconds past the epoch.
Using expireat, the key is now expired exactly when the lock is released.
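A simplified sketch of the fix, assuming a redis-rb client and an illustrative lock key; the actual DistributedMutex code differs:
```
require "redis"

redis    = Redis.new
lock_key = "mutex:example"           # illustrative

VALIDITY  = 5                        # seconds
now       = Time.now.to_f            # e.g. 10.5
expire_at = now.to_i + VALIDITY + 1  # the second the lock is considered released, e.g. 16

redis.set(lock_key, expire_at)
redis.expireat(lock_key, expire_at)  # the key now disappears exactly at the release time
```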
When we change upload's sha1 (e.g. when resizing images) it won't match the data in the most recent S3 inventory index. With this change the uploads that have been updated since the inventory has been generated are ignored.
When FinalDestination is given a URL it encodes it before doing anything else. However, S3 presigned URLs should not be messed with in any way, otherwise we can end up with 400 errors when downloading the URL, e.g.
<Error><Code>InvalidToken</Code><Message>The provided token is malformed or otherwise invalid.</Message>
The signature of a presigned URL is automatically generated and must be preserved as-is.
This should make the importer more resilient to incomplete or damaged
backups. It will disable some validations and attempt to automatically
repair category permissions before importing.
For example /t/ URLs were being replaced if they contained secure-media-uploads so if you made a topic called "Secure Media Uploads Are Cool" the View Topic link in the user notifications would be stripped out.
Refactored code so this secure URL detection happens in one place.
When 'categories topics' setting is set to 0, the system will
automatically try to find a value to keep the two columns (categories
and topics) symmetrical.
The value is computed as 1.5x the number of top level categories and at
least 5 topics will always be returned.
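A small sketch of that computation (the method name is made up):
```
def suggested_categories_topics(top_level_category_count)
  [5, (1.5 * top_level_category_count).to_i].max
end

suggested_categories_topics(2)  # => 5  (the floor of 5 topics)
suggested_categories_topics(10) # => 15 (1.5x the number of top-level categories)
```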
Previously if somehow a user created a blank markdown document using tag
tricks (eg `<p></p><p></p><p></p><p></p><p></p><p></p>`) and so on, we would
completely strip the document down to blank on post process due to onebox
hack.
Needs a followup because I am still unclear about the reason for empty p stripping
and it can cause some unclear cases when we re-cook posts.
Basically, say you had already downloaded a certain image from a certain URL
using pull_hotlinked_images and the onebox. The upload would be stored
by its sha as an upload record. Whenever you linked to the same URL again
in a post (e.g. in our case an og:image on review.discourse) we would
reuse the original upload record because of the sha1.
However when you turned on secure media this could cause problems as
the first post that uses that upload after secure media is enabled
will set the access control post for the upload to the new post.
Then if the post is deleted every single onebox/link to that same image
URL will fail forever with 403 as the secure-media-uploads URL fails
if the access control post has been deleted.
To fix this when cooking posts and pulling hotlinked images, we only
allow using an original upload by URL if its access control post
matches the current post, and if the original_sha1 is filled in,
meaning it was uploaded AFTER secure media was enabled. Otherwise
we just redownload the media again to be safe, as the URL will always
be new then.
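A hedged sketch of the reuse check described above; method and attribute usage is illustrative rather than the exact implementation:
```
def reuse_existing_upload?(existing_upload, current_post)
  return true unless SiteSetting.secure_media

  existing_upload.original_sha1.present? &&  # uploaded after secure media was enabled
    existing_upload.access_control_post_id == current_post.id
end
# When this returns false we download the media again, so the new post gets
# its own upload with its own access control post.
```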
The new search modifier `in:all` can be used to include both public and personal messages in the same search.
Co-authored-by: adam j hartz <hz@mit.edu>
Previously we would use the date the post was updated at as the grant date,
which caused confusion.
This also tidies up the badges sql file which was using outdated patterns
for multi line strings.
A race condition issue is possible when multiple thread/processes are calling this method.
`ls` prints out to stderr "cannot access '...': No such file or directory" if any of the files it's currently trying to list are being removed by the `xargs rm -rf` in another process. That doesn't affect the result, but it did raise an error before this change.
Tested on a production instance where the original issue was observed.
Co-Authored-By: Régis Hanol <regis@hanol.fr>
This allows us to use `sourceURL` which otherwise does not work. In the
future we hope to have proper source maps in development mode and
disable this again.
When pull_hotlinked_images tried to run on posts with secure media (which had already been downloaded from external sources) we were getting a 404 when trying to download the image because the secure endpoint doesn't allow anon downloads.
Also, we were getting into an infinite loop of pull_hotlinked_images because the job didn't consider the secure media URLs as "downloaded" already so it kept trying to download them over and over.
In this PR I have also refactored secure-media-upload URL checks and mutations into single source of truth in Upload, adding a SECURE_MEDIA_ROUTE constant to check URLs against too.
* DEV: Add a fake Mutex for concurrency testing with Fibers
* DEV: Support running in sleep order in concurrency tests
* FIX: A separate FallbackHandler should be used for each redis pair
This commit refactors the FallbackHandler and Connector:
* There were two different ways to determine whether the redis master
was up. There is now one way and it is the responsibility of the
new RedisStatus class.
* A background thread would be created whenever `verify_master` was
called unless the thread already existed. The thread would
periodically check the status of the redis master. However, checking
that a thread is `alive?` is an ineffective way of determining
whether it will continue to check the redis master in the future
since the thread may be in the process of winding down.
Now, this thread is created when the recorded master status goes from
up to down. Since this thread runs the only part of the code that is
able to bring the recorded status up again, we ensure that only one
thread is probing the redis master at a time and that there is always
a thread probing redis master when it is recorded as being down.
* Each time the status of the redis master was checked periodically, it
would spawn a new thread and immediately join on it. I assume this
happened to isolate the check from the current execution, but since
the join rethrows exceptions in the parent thread, this was not
effective.
* The logic for falling back was spread over the FallbackHandler and
the Connector. The connector is now a dumb object that delegates
responsibility for determining the status of redis to the
FallbackHandler.
* Previously, failing to connect to a master redis instance when it was
not recorded as down would raise an exception. Now, this exception is
passed to `Discourse.warn_exception` and the connection is made to
the slave.
This commit introduces the FallbackHandlers singleton:
* It is responsible for holding the set of FallbackHandlers.
* It adds callbacks to the fallback handlers for when a redis master
comes up or goes down. Main redis and message bus redis may exist on
different or the same redis hosts and so these callbacks may all
exist on the same FallbackHandler or on separate ones.
These objects are tested using fake concurrency provided by the
Concurrency module:
* An `around(:each)` hook is used to cause each test to run inside a
Scenario so that the test body, mocking cleanup and `after(:each)`
callbacks are run in a different Fiber.
* Therefore, halting the execution of the Execution abruptly (so that
the fibers aren't run to completion) prevents the mocking cleanup
and `after(:each)` callbacks from running. I have tried to prevent
this by recovering from all exceptions during an Execution.
* FIX: Create frozen copies of passed in config where possible
* FIX: extract start_reset method and remove method used by tests
Co-authored-by: Daniel Waterworth <me@danielwaterworth.com>
Add TopicUploadSecurityManager to handle post moves. When a post moves around or a topic changes between categories and public/private message status the uploads connected to posts in the topic need to have their secure status updated, depending on the security context the topic now lives in.
For consistency this PR introduces using custom markdown and short upload:// URLs for video and audio uploads, rather than just treating them as links and relying on the oneboxer. The markdown syntax for videos is ![file text|video](upload://123456.mp4) and for audio it is ![file text|audio](upload://123456.mp3).
This is achieved in discourse-markdown-it by modifying the rules for images in markdown-it via md.renderer.rules.image. We return HTML instead of the token when we encounter audio or video after | and the preview renders that HTML. Also, when uploading an audio or video file we insert the relevant markdown into the composer.
When we were pulling hotlinked images for oneboxes in the CookedPostProcessor, we were using the direct S3 URL, which returned a 403 error and thus did not set widths and heights of the images. We now cook the URL first based on whether the upload is secure before handing off to FastImage.
The `sourceURL` directive must be on the same line as the thing it's
referencing. This patch allows it to work again in development mode
because each Javascript file ends up in its own `define(...)` line.
It will strip out any trailing whitespace and put the `sourceURL`
comment on the same line and everything seems to work.
Group membership and `CategoryUser` notification level should be
respected to determine whether to notify staged users about activity in
private categories, instead of only ever generating notifications for staged
users' own topics (which has been the behaviour since
0c4ac2a7bc)
* enqueue spam/dmarc failing emails instead of hiding
* add translations for dmarc/spam enqueued reasons
* unescape quote
* if email_in_authserv_id is blank return gray for all emails
On some customer forums we are randomly getting a "You must select a valid user" error when sending a PM even when all parameters seem to be OK. This is an attempt to track it down with more data.
ReviewableScore#types extends the PostAction types with its own, storing the result inside a class variable. To avoid overwriting an existing flag, we need to calculate the next flag ID using these types instead of the PostAction ones. Since we first call the score types to calculate the id, this list gets memoized, leaving us with an outdated list.
To fix this, we now reload ReviewableScore#types after replacing flags.
Custom emoji, profile background, and card background were being set to secure, which we do not want as they are always in a public context and result in a 403 error from the ACL if linked directly.
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading if `SiteSetting.secure_media` is enabled, we do not check to see if the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value which is the same length as `Upload::SHA1_LENGTH`. The `original_sha1` column is filled with the _real_ sha1 digest of the file.
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we set secure status on upload.
### Viewing Secure Media
* The secure-media-upload URL will take the post that the upload is attached to into account via `Guardian.can_see?` for access permissions
* If there is no `access_control_post` then we just deliver the media. This should be a rare occurrence and shouldn't cause issues as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
Some specs use psql to test database restores and dropping the table after the test needs to happen outside of rspec because of transactions. The previous attempt lead to some changes to be stored in the test database.
FIX: raised a proper NotFound exception when filtering groups by username with invalid username.
FIX: properly filter the groups based on current user visibility when viewing another user's groups.
DEV: Guardian.can_see_group?(group) is now using Guardian.can_see_groups(groups) instead of duplicating the same code.
FIX: spec for groups_controller#index when group directory is disabled for logged in user.
FIX: groups_controller.sortable specs to actually test all sorting combinations.
DEV: s/response_body/body/g for slightly shorter spec code.
FIX: rewrote the "view another user's groups" specs to test all group_visibility and members_group_visibility combinations.
DEV: Various refactoring for cleaner and more consistent code.
An archive containing lots of small files could trigger an error even though the amount of decompressed data was way below the maximum allowed size. This happened because the decompression algorithm used the chunk size for calculating the remaining size instead of the actual size of the decompressed chunk.
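A simplified sketch of the fix using Zlib (constants and method name are illustrative): subtract the size of the decompressed chunk from the remaining allowance, not the size of the compressed chunk that was read.
```
require "zlib"

MAX_DECOMPRESSED_SIZE = 100 * 1024 * 1024
CHUNK_SIZE            = 64 * 1024

def safe_inflate(io)
  inflater  = Zlib::Inflate.new(Zlib::MAX_WBITS + 32) # accept gzip or zlib data
  remaining = MAX_DECOMPRESSED_SIZE
  output    = +""

  while (chunk = io.read(CHUNK_SIZE))
    data = inflater.inflate(chunk)
    remaining -= data.bytesize    # previously the code subtracted CHUNK_SIZE
    raise "decompressed data is too big" if remaining < 0
    output << data
  end

  output
ensure
  inflater&.close
end
```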
The QUnit rake task starts a server in test mode. We need a tweak to allow dynamic CSP hostnames in test mode. This tweak is already present in development mode.
To allow CSP to work, the browser host/port must match what the server sees. Therefore we need to disable the enforce_hostname middleware in test mode. To keep rspec and production as similar as possible, we skip enforce_hostname using an environment variable.
Also move the qunit rake task to use unicorn, for consistency with development and production.
* Add a rake task to disable secure media. This sets all uploads to `secure: false`, changes the upload ACL to public, and rebakes all the posts using the uploads to make sure they point to the correct URLs. This is in a transaction for each upload with the upload being updated the last step, so if the task fails it can be resumed.
* Also allow viewing media via the secure url if secure media is disabled, redirecting to the normal CDN url, because otherwise media links will be broken while we go and rebake all the posts + update ACLs
Previously we had the ability to download a simple .gz file;
new changes mean we have a tar.gz file that needs some levels
of fiddling to get extracted correctly.
MaxMind now requires an account with a license key to download files.
Discourse admins can register for such an account at:
https://www.maxmind.com/en/geolite2/signup
License key generation is available in the profile section.
Once registered you can set the license key using `DISCOURSE_MAXMIND_LICENSE_KEY`
This amends it so we unconditionally skip MaxMind DB downloads if no license key exists.
Many security scanners ship invalid mime types, this ensures we return
a very cheap response to the clients and do not log anything.
The previous attempt still re-dispatched the request to get a proper error page,
but in this specific case we want no error page.
Added a fix to gracefully error with a Webauthn::SecurityKeyError if somehow a user provides an unknown COSE algorithm when logging in with a security key.
If `COSE::Algorithm.find` returns nil we now fail gracefully and log the algorithm used along with the user ID and the security key params for debugging, as this will help us find other common algorithms to implement for webauthn