Previously we migrated multisites serially, which is extremely slow,
especially when 200 databases are involved.
The new implementation defaults to running 20 migrations concurrently, leading
to a 20x speedup.
We also amended it so errors are printed last, which makes
debugging failures easier.
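A minimal sketch of the pattern, assuming the rails_multisite connection manager and an illustrative pool-size constant (the real implementation may differ):

```
require "thread"

# Drain a queue of databases with a fixed pool of worker threads,
# collecting errors so they can be printed last.
CONCURRENCY = 20
queue = Queue.new
RailsMultisite::ConnectionManagement.all_dbs.each { |db| queue << db }
errors = []

threads = CONCURRENCY.times.map do
  Thread.new do
    while (db = (queue.pop(true) rescue nil))
      begin
        RailsMultisite::ConnectionManagement.with_connection(db) do
          ActiveRecord::Tasks::DatabaseTasks.migrate
        end
      rescue => e
        errors << [db, e] # defer error output until the end
      end
    end
  end
end

threads.each(&:join)
errors.each { |db, e| puts "#{db}: #{e.message}" }
```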
This code is specific to Discourse because we integrate SeedFu with our
migrations, so it cannot be included in the multisite gem.
If a migration performs no changes, it should not produce output.
Previously we would output information about seeds, which was very noisy;
on multisite this was particularly bad.
Get rid of the harmful `each` loop over the uploads to update. Instead, we put all the unique access control posts for the uploads into a map for fast access (versus using the slow `.find` through an array) and look up each post as needed while looping through the uploads in batches.
On a Discourse instance with ~93k uploads, a simplified version of the old method takes > 1 minute, and a simplified version of the new method takes ~18s and uses a lot less memory.
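A minimal sketch of the lookup-map pattern, assuming `uploads_to_update` is an ActiveRecord relation of the uploads being processed:

```
# Build a hash of access control posts keyed by id for O(1) lookup,
# instead of calling posts.find { ... } inside the loop (O(n) per upload).
post_ids = uploads_to_update.pluck(:access_control_post_id).uniq
access_control_posts = Post.unscoped.where(id: post_ids).index_by(&:id)

uploads_to_update.find_in_batches(batch_size: 1000) do |batch|
  batch.each do |upload|
    post = access_control_posts[upload.access_control_post_id]
    # ... update the upload's secure status based on the post ...
  end
end
```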
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip etc. etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.
This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
This is mostly useful while developing a plugin, to avoid manual actions of deleting tables and schema_migrations rows.
Usage:
bundle exec rake plugin:migrate:down[discourse-calendar]
* Improve the bookmark modal on mobile so it doesn't go all the way to the edge and the custom datetime input is easier to use
* Improve the rake task for syncing so it does not error for topics that no longer exist and batches 2000 inserts at a time, clearing the array each time
* Add uploads:sync_s3_acls rake task to ensure the ACLs in S3 are set correctly (public-read or private) based on upload security
* Improved uploads:disable_secure_media to be more efficient and provide better messages to the user.
* Rename uploads:ensure_correct_acl task to uploads:secure_upload_analyse_and_update as it does more than check the ACL
* Many improvements to uploads:secure_upload_analyse_and_update
* Make sure that upload.access_control_post is unscoped so deleted posts are still fetched, because they still affect the security of the upload.
* Add an escape hatch for capture_stdout in the form of RAILS_ENABLE_TEST_STDOUT. If provided, the capture_stdout code will be bypassed, so you can see the output when you need it.
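A minimal sketch of the escape hatch, assuming a `capture_stdout` test helper like the one described:

```
require "stringio"

# Capture stdout during a test unless RAILS_ENABLE_TEST_STDOUT is set,
# in which case output is passed through for easier debugging.
def capture_stdout
  return yield if ENV["RAILS_ENABLE_TEST_STDOUT"]

  old_stdout = $stdout
  begin
    io = StringIO.new
    $stdout = io
    yield
    io.string
  ensure
    $stdout = old_stdout
  end
end
```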
The rake task was broken, because the addition of the
UploadSecurity check returned true/false instead of the
upload ID to determine which uploads to set secure.
It was also rebaking the posts in the wrong place, and pretty
inefficiently at that. Worse, it was rebaking before the upload
was changed to secure in the DB.
This also updates the task to set the access_control_post_id
for all uploads. The first post the upload is linked to is used
for the access control. If the upload doesn't get changed to
secure, this doesn't affect anything.
Added a spec for the rake task to cover common cases.
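A minimal sketch of the access control assignment, assuming the PostUpload join model (the real task may differ):

```
# Use the first post each upload is linked to as its access control post.
# This is harmless for uploads that never become secure.
Upload.where(access_control_post_id: nil).find_each do |upload|
  first_post_id =
    PostUpload
      .joins(:post)
      .where(upload_id: upload.id)
      .order("posts.created_at ASC")
      .limit(1)
      .pluck(:post_id)
      .first
  upload.update!(access_control_post_id: first_post_id) if first_post_id
end
```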
### UI Changes
If `SiteSetting.enable_bookmarks_with_reminders` is enabled:
* Clicking "Bookmark" on a topic will create a new Bookmark record instead of a post + user action
* Clicking "Clear Bookmarks" on a topic will delete all the new Bookmark records on a topic
* The topic bookmark buttons control the post bookmark flags correctly and vice-versa
Disabled selecting the "reminder type" for bookmarks in the UI because the backend functionality (sending users notifications etc.) is not done yet.
### Other Changes
* Added delete bookmark route (but no UI yet)
* Added a rake task to sync the old PostAction bookmarks to the new Bookmark table, which can be run as many times as we want for a site (it will not create duplicates).
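A minimal sketch of the idempotent sync (model/column names assumed):

```
# Copy old PostAction bookmarks into the new Bookmark table.
# find_or_create_by! makes the task safe to re-run without duplicates.
bookmark_type = PostActionType.types[:bookmark]
PostAction.where(post_action_type_id: bookmark_type, deleted_at: nil).find_each do |pa|
  post = Post.unscoped.find_by(id: pa.post_id)
  next if post.nil?

  Bookmark.find_or_create_by!(user_id: pa.user_id, post_id: pa.post_id) do |bookmark|
    bookmark.topic_id = post.topic_id
  end
end
```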
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading if `SiteSetting.secure_media` is enabled, we do not check to see if the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value which is the same length as `Upload::SHA1_LENGTH`. The `original_sha1` column is filled with the _real_ sha1 digest of the file.
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we now set the secure status at upload time.
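A minimal sketch of the digest handling, assuming the column names described above and that `file` is the uploaded tempfile:

```
# For secure media we deliberately break sha1-based deduplication:
# the sha1 column gets a random value of the same length as a real
# sha1 digest, and the true digest is kept in original_sha1.
sha1 = Digest::SHA1.file(file.path).hexdigest

if SiteSetting.secure_media?
  upload.original_sha1 = sha1
  upload.sha1 = SecureRandom.hex(20) # 40 hex chars, same as Upload::SHA1_LENGTH
else
  upload.sha1 = sha1
end
```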
### Viewing Secure Media
* The secure-media-upload URL takes into account the post that the upload is attached to via `Guardian.can_see?` for access permissions
* If there is no `access_control_post` then we just deliver the media. This should be a rare occurrence and shouldn't cause issues, as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`
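A minimal sketch of the permission flow (controller shape and store call are illustrative):

```
# Deliver secure media only if the current user can see the access
# control post; with no such post, just deliver the media.
def show_secure
  upload = Upload.find_by(sha1: params[:sha1])
  raise Discourse::NotFound if upload.blank?

  post = upload.access_control_post # unscoped, so deleted posts still count
  if post.nil? || guardian.can_see?(post)
    redirect_to Discourse.store.signed_url_for_path(upload.url)
  else
    raise Discourse::InvalidAccess
  end
end
```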
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
The QUnit rake task starts a server in test mode. We need a tweak to allow dynamic CSP hostnames in test mode. This tweak is already present in development mode.
To allow CSP to work, the browser host/port must match what the server sees. Therefore we need to disable the enforce_hostname middleware in test mode. To keep rspec and production as similar as possible, we skip enforce_hostname using an environment variable.
Also move the qunit rake task to use unicorn, for consistency with development and production.
* Add a rake task to disable secure media. This sets all uploads to `secure: false`, changes the upload ACL to public, and rebakes all the posts using the uploads to make sure they point to the correct URLs. This is done in a transaction for each upload, with the upload updated as the last step, so if the task fails it can be resumed.
* Also allow viewing media via the secure url if secure media is disabled, redirecting to the normal CDN url, because otherwise media links will be broken while we go and rebake all the posts + update ACLs
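A minimal sketch of the per-upload transaction ordering (association and store method names assumed):

```
# Each upload is handled in its own transaction, with the secure flag
# flipped last, so a failed run can simply be resumed.
Upload.where(secure: true).find_each do |upload|
  Upload.transaction do
    Discourse.store.update_upload_ACL(upload)   # set public-read on S3
    upload.posts.each { |post| post.rebake! }   # fix URLs in cooked HTML
    upload.update!(secure: false)               # last step, marks work done
  end
end
```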
This used to work due to side effects.
`rake parallel:migrate` used to work very inconsistently and would only migrate
some of the databases.
This introduces the recommended change to database.yml so the correct database is
found based on TEST_ENV_NUMBER if, for some reason, we did not set it using
RAILS_DB.
This also avoids a bunch of schema dumping, which is not needed when migrating
parallel specs.
DB number 1 is very odd because, for whatever reason, parallel spec does not
set it.
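The usual shape of that database.yml change (a sketch; Discourse's actual config keys may differ):

```
test:
  adapter: postgresql
  database: discourse_test<%= ENV['TEST_ENV_NUMBER'] %>
```

parallel_tests leaves TEST_ENV_NUMBER empty for the first process and uses 2, 3, … for the rest, which fits the odd behavior of DB number 1 noted above.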
API keys are now only visible when first created. After that, only the first four characters are stored in the database for identification, along with a SHA-256 hash of the full key. This makes key usage easier to audit, and ensures attackers would not have access to the live site in the event of a database leak.
This makes the merge lower risk, because we have some time to revert if needed. Once the change is confirmed to be working, we will add a second commit to drop the `key` column.
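A minimal sketch of the hashing scheme (column names assumed):

```
require "securerandom"
require "digest"

# Generate a key, show it to the user once, and persist only a prefix
# plus a SHA-256 hash for later lookup and auditing.
key = SecureRandom.hex(32)

api_key.truncated_key = key[0...4]               # first four chars, for identification
api_key.key_hash = Digest::SHA256.hexdigest(key) # full key is never stored
api_key.save!

# Authenticating a request: hash the presented key and compare.
ApiKey.find_by(key_hash: Digest::SHA256.hexdigest(presented_key))
```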
This is required because bin/rake automatically loads plugins when migrating. In our continuous integration, we don't want plugins to break the core build. They should only be loaded for the plugin build.
* FEATURE: Normalize the service worker route
Update cache headers so they are not immutable outside of the rails app
Add the ability to purge the service worker cache from localhost
Rails -> nginx will pass immutable flags so the file is cached until reloaded.
In most cases, nginx will have its cache flushed on rebuild (new image)
For those needing dynamic re-caching (such as upgrading via the UI),
a rake task for flushing the service worker script is provided
through `assets:flush_sw`
This PR introduces a new secure media setting. When enabled, it prevents unauthorized access to media uploads (files of type image, video and audio). When the `login_required` setting is enabled, all media uploads will be protected from unauthorized (anonymous) access. When `login_required` is disabled, only media in private messages will be protected from unauthorized access.
A few notes:
- the `prevent_anons_from_downloading_files` setting no longer applies to audio and video uploads
- the `secure_media` setting can only be enabled if S3 uploads are already enabled and configured
- upload records have a new column, `secure`, which is a boolean `true/false` of the upload's secure status
- when creating a public post with an upload that has already been uploaded and is marked as secure, the post creator will raise an error
- when enabling or disabling the setting on a site with existing uploads, the rake task `uploads:ensure_correct_acl` should be used to update all uploads' secure status and their ACL on S3
For this to work we need to overwrite `db:rollback` in our Rakefile like
we do for migrate, so that it removes the load_config dependency. This
allows our custom migration paths to work.
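A minimal sketch of the Rakefile override (the task body is illustrative, not the exact implementation):

```
# Clear the Rails-defined task (and its load_config dependency),
# then redefine it against our environment so custom migration
# paths are respected.
Rake::Task["db:rollback"].clear
task "db:rollback" => "environment" do
  step = ENV["STEP"] ? ENV["STEP"].to_i : 1
  ActiveRecord::Base.connection.migration_context.rollback(step)
end
```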
This ensures we only update last_posted_at, which is user facing, for non-messages
and non-whispers.
We still update this date for secure categories, and we do not revert it for
deleted posts.
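A simplified sketch of the condition (the method name is illustrative):

```
# Bump the user-facing date only for regular posts: skip private
# messages and whispers.
def update_last_posted_at(post)
  return if post.topic&.private_message?
  return if post.post_type == Post.types[:whisper]

  post.user.update_column(:last_posted_at, post.created_at)
end
```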
Adds the settings:
raw_email_max_length, raw_rejected_email_max_length, delete_rejected_email_after_days.
These settings control retention of the "raw" emails logs.
raw_email_max_length ensures that if we get a huge incoming email, we truncate it, removing uploads from the raw log.
raw_rejected_email_max_length introduces an even more aggressive truncation for rejected incoming mail.
delete_rejected_email_after_days controls how many days we keep rejected emails for (default 90).
Set `DEBUG_NODE=1` when running `rake smoke:test` and use your favorite tool to debug the smoke tests. See https://nodejs.org/en/docs/guides/debugging-getting-started/ for more information.
The debugger will break at the beginning of the smoke tests when the env variable is set.
Doing .pluck(:column).first is a very common pattern in Discourse and in
most cases a limit clause isn't being added. Instead of adding a limit
clause to all these call sites, this commit adds two new methods to
ActiveRecord::Relation:
pluck_first, equivalent to limit(1).pluck(*columns).first
and pluck_first!, which, like other finder methods, raises an exception
when no record is found.
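A sketch of the two methods as a monkey patch, matching the semantics described:

```
class ActiveRecord::Relation
  # Equivalent to limit(1).pluck(*attributes).first, but spares every
  # call site from remembering the limit clause.
  def pluck_first(*attributes)
    limit(1).pluck(*attributes).first
  end

  # Like other finder methods, raises when no record is found.
  def pluck_first!(*attributes)
    item = pluck_first(*attributes)
    raise ActiveRecord::RecordNotFound if item.nil?
    item
  end
end
```

Usage: `Topic.where(id: topic_id).pluck_first(:title)`.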
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails "require_dependency" anywhere and instead can just use standard
Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
In Rails 6 due to internal changes, the following sequence no longer works:
```
RAILS_ENV=test bin/rake db:migrate
RAILS_ENV=test bin/rake db:schema:dump
dropdb discourse_test
createdb discourse_test
RAILS_ENV=test bin/rake db:schema:load
RAILS_ENV=test bin/rake db:migrate
```
What appears to be happening is that our tracking of plugin migrations is
being missed on schema:dump or load.
A more comprehensive fix restoring schema:dump / load support will be
investigated.
Prior to this change, neither plugin migrations nor multisite
migrations were working.
Rails internals changed so we need to account for it.
Specifically, the semantics of `db:migrate` in Rails changed so it is effectively
a "multisite:migrate".
* FIX: inline_uploads and subfolder
* if subfolder, also look for images with a path containing
cdn_url + relative_url_root
* FIX: migrate_to_s3 task and subfolder
Running this inline makes more sense; otherwise there is extreme risk of
saturating the Sidekiq queue.
This also reworks ordering and selection so we double check if a post needs
rebaking prior to rebaking, this unlocks the ability to run this rake task
from multiple consoles.
* REFACTOR: Rename SiteSetting.disable_edit_notifications to disable_system_edit_notifications
- The older name could cause some confusion because the setting does not disable all edit notifications, only system ones.
* FIX: Add frozen_string_literal: true in the migration
* DEV: Deprecate 'disable_edit_notifications'
Follow up to: [FEATURE: Create a rake task for destroying categories][1]
- `Discourse.system_user` is my friend
- Remove puts statements from rake tasks that don't return anything
- `for_each` is also my friend
- Use `human_users` to also exclude discobot
- Sort/format categories:list
[1]: 092eeb5ca3
Created a rake task for destroying multiple categories along with any
subcategories and topics that belong to those categories.
Also created a rake task for listing all of your categories.
Refactored existing destroy rake tasks to use a new logging method that
allows puts output in the console but prevents it from showing in
the specs.
* DEV: Add a new way to run specs in parallel with better output
This commit:
1. adds a new executable, `bin/interleaved_rspec` which works much like
`rspec`, but runs the tests in parallel.
2. adds a rake task, `rake interleaved:spec` which runs the whole test
suite.
3. makes autospec use this new wrapper by default. You can disable this
by running `PARALLEL_SPEC=0 rake autospec`.
It works much like the `parallel_tests` gem (and relies on it), but
makes each subprocess use a machine-readable formatter and parses this
output in order to provide a better overall summary.
(It's called interleaved, because parallel was taken and naming is
hard).
* Make popen3 invocation safer
* Use FileUtils instead of shelling out
* DRY up reporter
* Moved summary logic into Reporter
* s/interleaved/turbo/g
* Move Reporter into its own file
* Moved run into its own class
* Moved Runner into its own file
* Move JsonRowsFormatter under TurboTests
* Join on threads at the end
* Acted on feedback from eviltrout
This fixes a condition where an intermittent db connection could cause
invalid site settings to be stored.
It also removes a catch-all we had.
Somewhere around Rails 5, `db:create` started wanting the full environment.
This is a problem for Discourse, since it needs to boot up data from the
db.
This removes the catch-all and surgically adds a db / redis bypass to the
db:create task.
* Support private uploads in S3
* Use localStore for local avatars
* Add job to update private upload ACL on S3
* Test multisite paths
* update ACL for private uploads in migrate_to_s3 task
This also corrects FileHelper.download so it supports "follow_redirect"
correctly (it used to always follow 1 redirect) and adds a `validate_url`
param that will bypass all URI validation if set to false (the default is true).
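A sketch of a call with the corrected options (parameter names other than `follow_redirect` and `validate_url` are assumptions):

```
# Follow redirects properly and skip URI validation for this download.
file = FileHelper.download(
  "https://example.com/avatar.png",
  max_file_size: 1.megabyte,
  tmp_file_name: "downloaded-avatar",
  follow_redirect: true,
  validate_url: false # defaults to true
)
```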
This new `DISCOURSE_MAXMIND_BACKUP_PATH` can be used as a secondary location
for the MaxMind db. That way a build machine, for example, can cache it on the
host and reuse it between builds.
Also, per 5bfeef77, this adds proper error raising for download failures from
the dedicated rake task.
This also moves "refresh_maxmind_db_during_precompile_days" to a global
setting; it did not make sense as a site setting.
`rake posts:recover_uploads_from_index`
Searches through all missing uploads in the cluster; if it finds one, it
tries to find it in the "upload index file" and creates a new upload for
it.
Previously we were only catching one type of data export; the new job will
catch every CSV export we have.
The job is pretty safe, as it filters on the system user id / a PM with a
particular slug.
Historically we would keep the user data export posts around but delete
the uploads.
This leaves a lot of broken uploads in the system.
This rake task allows us to clean up the old mess.
The filename on disk may mismatch the sha1 of the file in some old 1.x setups.
This will attempt to recover the file even if the sha1 mismatches; we had an
old bug that caused this.
This also adds `uploads:fix_relative_upload_links` which attempts to replace
urls of the format `/upload/default/...` with `upload://`
Rebaking posts can be expensive; instead of blocking here, simply mark posts
for rebake.
We can then work through them faster in other jobs, plus this should not
hold up a datacenter migration.
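A minimal sketch of the marking step (Discourse tracks a baked_version on posts; clearing it is one way to queue a rebake for background processing):

```
# Cheap: invalidate the baked HTML now, let background jobs rebake later.
Post.where(id: post_ids).update_all(baked_version: nil)
```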
Previously this rake task would only run on a single site, which is a bit
misleading.
This also adds `VERBOSE=1 rake posts:missing_uploads` that will provide a
full report of missing uploads
This allows you to wait up to N seconds for the smoke test URL to come up;
in some cases you want to kick off the smoke test before the smoke test
env is ready to accept connections.
This reduces chances of errors where consumers of strings mutate inputs
and reduces memory usage of the app.
Test suite passes now, but there may be some stuff left, so we will run
a few sites on a branch prior to merging
`#find` raises an error if the id given to it is invalid. As a result,
the conditional to check whether a `group` or `badge` is `present?` will
not be executed if any of the ids are invalid.
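A minimal sketch of the safer pattern:

```
# find raises ActiveRecord::RecordNotFound on an invalid id, so the
# present? checks below would never run; find_by returns nil instead.
group = Group.find_by(id: group_id)
badge = Badge.find_by(id: badge_id)

if group.present? && badge.present?
  # ... safe to proceed ...
end
```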
Follow up to
6ba914033c.
#b9d82818 makes enormous improvements to our bootstrap time; however, we are
going to keep compress for now despite the cost and watch it for a few weeks.
* Do not brotli all locales in precompile
* Try without gzip
* uglify without compressing, always gzip
* skip uglify for unused locales
* FIX: Uglifier needs harmony for ES6 compatibility
* Use node uglifier if available
* Minor refactor
No point moving all optimized image files to tombstone when the store is
changing. Also, `destroy_all` can easily blow memory since we are not
loading in batches.
This removes all uses of both `send` and `public_send` from consumers of
SiteSetting and instead introduces a `get` helper for dynamic lookup
This leads to much cleaner and safer code long term, as we always explicitly
test that a site setting really exists before sending an arbitrary string
to the class.
It also removes a couple of risky stubs from the auth provider test
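A minimal sketch of such a helper (assuming a has_setting? style check exists):

```
# Dynamic but validated lookup; arbitrary strings can no longer reach
# public_send unchecked.
def self.get(name)
  name = name.to_sym
  raise Discourse::InvalidParameters, name unless SiteSetting.has_setting?(name)
  SiteSetting.public_send(name)
end

# Usage: SiteSetting.get(:title) instead of SiteSetting.public_send(user_input)
```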
This change automatically resizes icons for various purposes. Admins can now upload `logo` and `logo_small`, and everything else will be auto-generated. Specific icons can still be uploaded separately if required.
## Core
- Adds a SiteIconManager module which manages automatic resizing and fallback
- Icons are looked up in the OptimizedImage table at runtime, and then cached in Redis. If the resized version is missing for some reason, then most icons will fall back to the original files. Some icons (e.g. PWA Manifest) will return `nil` (because an incorrectly sized icon is worse than a missing icon).
- `SiteSetting.site_large_icon_url` will return the optimized version, including any fallback. `SiteSetting.large_icon` continues to return the upload object. This means that (almost) no changes are required in core/plugins to support this new system.
- Icons are resized whenever a relevant site setting is changed, and during post-deploy migrations
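An illustrative sketch of the lookup-with-fallback behavior (names and signature assumed):

```
# Prefer the resized OptimizedImage; fall back to the original upload,
# or to nil for consumers where a wrongly sized icon is worse than none.
def resolve_icon(upload, width:, height:, fallback_to_original: true)
  return nil if upload.nil?

  resized = OptimizedImage.find_by(upload_id: upload.id, width: width, height: height)
  return resized.url if resized

  fallback_to_original ? upload.url : nil
end
```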
## Wizard
- Allows `requiresRefresh` wizard steps to reload data via AJAX instead of a full page reload
- Add placeholders to the **icons** step of the wizard, which automatically update from the "Square Logo"
- Various copy updates to support the changes
- Remove the "upload-time" resizing for `large_icon`. This is no longer required.
## Site Settings UX
- Move logo/icon settings under a new "Branding" tab
- Various copy changes to support the changes
- Adds placeholder support to the `image-uploader` component
- Automatically reloads site settings after saving. This allows setting placeholders to change based on changes to other settings
- Upload site settings will be assigned a placeholder if SiteIconManager `responds_to?` an icon of the same name
## Dashboard Warnings
- Remove PWA icon and PWA title warnings. Both are now handled automatically.
## Bonus
- Updated the sketch logos to use @awesomerobot's new high-res designs
The compress brotli functionality is no longer optional; it has worked
well for years. The name of the ENV var was also confusing because it did
not have a `DISCOURSE_` prefix, which caused issues with the web upgrader.
Brotli support is now unconditionally on.
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
* improved emoji support
- always optimize images as part of the task
- use the unicode standard ordering/naming for sections
* UX: more height when there are recently used emojis
This test did not support the 'no auth' use case or auth methods other than 'login'. I fixed it by simply calling start() in the right way.
As shown in the source code of Net::SMTP (https://github.com/ruby/ruby/blob/ruby_2_5/lib/net/smtp.rb#L452), the start() function does accept the 'user' and 'secret' arguments. Also, in the do_start() function (https://github.com/ruby/ruby/blob/ruby_2_5/lib/net/smtp.rb#L542), it automatically checks the auth method and args, skips the authentication if 'user' is not provided, and selects the right auth method from 'plain', 'login' or 'cram_md5'. This is exactly what we should do in a connection test; the odd 'auth_login' call in the previous code causes problems.
BTW, I am using 'localhost' as the third argument, which is the same as the default value in start(). This parameter is the domain address sent along with the 'ehlo' command in the SMTP protocol. I have seen some documents, e.g. https://github.com/tpn/msmtp/blob/master/doc/msmtp.1#L455, saying that 'localhost' is fine. It works for me.
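A minimal sketch of the corrected connection test (host/port/credential variables assumed):

```
require "net/smtp"

# start() negotiates EHLO with the given HELO domain and, when a user is
# provided, picks the right auth method from plain/login/cram_md5;
# when user is nil it skips authentication entirely.
smtp = Net::SMTP.new(host, port)
smtp.start("localhost", username, password, nil) do |_conn|
  # reaching here means the connection (and auth, if any) succeeded
end
```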