Commit Graph

88 Commits

David Taylor bfe0eccdd9
FIX: Extension-less secure uploads (#29914)
Previously, the secure-upload redirection logic would fail for extension-less files. This commit updates it to work, and adds a spec for the behavior.

Extension-less file uploads are not allowed by default, so this is a very niche situation.
2024-11-25 12:18:21 +00:00
Osama Sayegh 00196b8652
FIX: Restore and deprecate the `:type` param of `uploads#create` (#29736)
Follow up to 6f8f6a7726

Prior to the linked commit, the `uploads#create` endpoint had `upload_type` and `type` params that acted as aliases for each other and raised an error if both of them were missing. In the linked commit, we removed the `type` param and always required the `upload_type` param, which broke API consumers that only included `type` in their requests.

This commit adds back the `type` param temporarily and introduces a deprecation message for it so that API consumers are made aware of the eventual removal of the `type` param.
2024-11-13 14:07:20 +03:00
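
A minimal sketch of the restored fallback described in the commit above; the parameter handling and the use of the deprecation helper here are assumptions for illustration, not the actual controller code:

```ruby
# Sketch only: accept the deprecated :type param but warn, preferring :upload_type.
def create
  upload_type = params[:upload_type].presence || params[:type].presence
  raise Discourse::InvalidParameters.new(:upload_type) if upload_type.blank?

  if params[:upload_type].blank?
    Discourse.deprecate(
      "The `type` param of `uploads#create` is deprecated, use `upload_type` instead.",
    )
  end

  # ...continue creating the upload, using upload_type as the purpose/target
end
```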
Martin Brennan 0568d36133
FIX: Use dualstack S3 endpoint for direct uploads (#29611)
When we added direct S3 uploads to Discourse, which use
presigned URLs, we never took into account the dualstack
endpoints for IPv6 on S3.

This commit fixes the issue by using the dualstack endpoints
for presigned URLs and requests, which are used in the
get-presigned-put and batch-presign-urls endpoints used when
directly uploading to S3.

It also makes regular S3 requests for `put` and so on use
dualstack URLs. It doesn't seem like there is a downside to
doing this, but a bunch of specs needed to be updated to reflect this.
2024-11-07 11:06:39 +10:00
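
For reference, this is roughly what opting into dualstack endpoints looks like with the aws-sdk-s3 gem; the region, bucket, and key below are placeholders rather than Discourse's actual wiring:

```ruby
require "aws-sdk-s3"

# Dualstack endpoints serve both IPv4 and IPv6 clients.
client = Aws::S3::Client.new(region: "us-east-2", use_dualstack_endpoint: true)

url = Aws::S3::Presigner.new(client: client).presigned_url(
  :put_object,
  bucket: "uploads-bucket",
  key: "temp/site/some-file.png",
  expires_in: 300,
)
# => "https://uploads-bucket.s3.dualstack.us-east-2.amazonaws.com/temp/site/some-file.png?..."
```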
Osama Sayegh 6f8f6a7726
FIX: Pass upload type correctly to uploads#create (#29600)
Prior to Uppy, the `uploads#create` endpoint used to receive a `type` param that indicated the purpose/target of the upload, such as `avatar`, `site_setting` and so on. With the introduction of Uppy, the `type` param became the MIME type of the file being uploaded, and the purpose/target of the upload became a new param called `upload_type`. However, the backend could still fall back to the `type` param (which now contains the MIME type) as the purpose/target of the upload if `upload_type` was absent.

We technically don't need to send the MIME type over the network, but it seems like it's done by Uppy and we have no control over the `type` param that Uppy includes:

758de8167b/app/assets/javascripts/discourse/app/lib/uppy/uppy-upload.js (L146-L151)

This commit does a couple of things:

1. It amends the `uploads#create` endpoint so it always requires the `upload_type` param and doesn't fall back to `type` if `upload_type` is absent
2. It forces consumers of the `UppyUpload` class (and by extension `UppyImageUploader`) to specify the `type` of the upload

Internal topic: t/140945.
2024-11-06 07:00:35 +03:00
Alan Guo Xiang Tan ed6c9d1545
DEV: Call Discourse.redis.flushdb after the end of each test (#29117)
There have been too many flaky tests as a result of state leaking in
Redis, so it is easier to resolve them by ensuring we flush the Redis
database after each test.

Locally on my machine, calling `Discourse.redis.flushdb` takes around
0.1ms which means this change will have very little impact on test
runtimes.
2024-10-09 07:19:31 +08:00
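
The hook is conceptually just an after-each block; a bare-bones sketch (the real setup lives in Discourse's spec helpers):

```ruby
RSpec.configure do |config|
  config.after(:each) do
    # Drop all keys in the test Redis database so no state leaks between examples.
    Discourse.redis.flushdb
  end
end
```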
Guhyoun Nam 9c1812e071
FEATURE: add `system_user_max_attachment_size_kb` site setting (#28351)
* System user attachment size WIP

* spec check

* controller update

* add max to system_user_max_attachment_size_kb

* DEV: update to use static method for `max_attachment_size_for_user`

add test to use large image.
add check for failure.

* DEV: update `system_user_max_attachment_size_kb` default value to 0

remove unnecessary test.
update tests to reflect the new default value of `system_user_max_attachment_size_kb`

* DEV: update maximum_file_size to check when an attachment is made by the system user

Add tests for when `system_user_max_attachment_size_kb` is over and under the limit
Add test for checking interaction with `max_attachment_size_kb`

* DEV: move `max_attachment_size_for_user` to private methods

* DEV: turn `max_attachment_size_for_user` into a static method

* DEV: typo in test case

* DEV: move max_attachment_size_for_user to private class method

* Revert "DEV: move max_attachment_size_for_user to private class method"

This reverts commit 5d5ae0b715.

---------

Co-authored-by: Gabriel Grubba <gabriel@discourse.org>
2024-08-16 11:03:39 -03:00
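
A rough sketch of the lookup the commit describes; the method name follows the commit, but the exact rules (and the treatment of the default value of 0) are assumptions:

```ruby
def self.max_attachment_size_for_user(user)
  system_limit_kb = SiteSetting.system_user_max_attachment_size_kb

  if user&.id == Discourse::SYSTEM_USER_ID && system_limit_kb > 0
    # Let the system user exceed the regular limit when the setting is enabled.
    [system_limit_kb, SiteSetting.max_attachment_size_kb].max
  else
    SiteSetting.max_attachment_size_kb
  end
end
```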
Alan Guo Xiang Tan 003b80e62f
SECURITY: Add rate limits for uploads 2024-03-15 14:24:00 +08:00
Ted Johansson 57ea56ee05
DEV: Remove full group refreshes from tests (#25414)
We have all these calls to Group.refresh_automatic_groups! littered throughout the tests, including tests that are seemingly unrelated to groups. This is because automatic group memberships aren't fabricated when making a vanilla user. There are two places where you'd want to use this:

1. You have fabricated a user that needs a certain trust level (which is now based on group membership).
2. You need the system user to have a certain trust level.

In the first case, we can pass refresh_auto_groups: true to the fabricator instead. This is a more lightweight operation that only considers a single user, instead of all users in all groups.

The second case is no longer a thing after #25400.
2024-01-25 14:28:26 +08:00
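
In practice the swap looks roughly like this (a sketch based on the description above):

```ruby
# Before: refresh membership of every automatic group for every user.
Group.refresh_automatic_groups!

# After: fabricate the one user with its automatic group memberships in place.
fab!(:user) { Fabricate(:user, refresh_auto_groups: true) }
```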
Ted Johansson 1b28823638
SECURITY: Prevent guest users from accessing secure uploads when login required 2024-01-08 08:02:19 -07:00
Krzysztof Kotlarek 1017820012
DEV: Convert allow_uploaded_avatars to groups (#24810)
This change converts the allow_uploaded_avatars site setting to uploaded_avatars_allowed_groups.

See: https://meta.discourse.org/t/283408

* Hides the old setting
* Adds the new site setting
* Adds a deprecation warning
* Updates to use the new setting
* Adds a migration to fill in the new setting if the old setting was changed
* Adds an entry to the site_setting.keywords section
* Updates tests to account for the new change

After a couple of months, we will remove the allow_uploaded_avatars setting entirely.

Internal ref: /t/117248
2023-12-13 10:53:19 +11:00
Daniel Waterworth 6e161d3e75
DEV: Allow fab! without block (#24314)
The most common thing that we do with fab! is:

    fab!(:thing) { Fabricate(:thing) }

This commit adds a shorthand for this which is just simply:

    fab!(:thing)

i.e. If you omit the block, then, by default, you'll get a `Fabricate`d object using the fabricator of the same name.
2023-11-09 16:47:59 -06:00
Martin Brennan fe05fdae24
DEV: Introduce S3 transfer acceleration for uploads behind hidden setting (#24238)
This commit adds an `enable_s3_transfer_acceleration` site setting,
which is hidden to begin with. We are adding this because in certain
regions, using https://aws.amazon.com/s3/transfer-acceleration/ can
drastically speed up uploads, sometimes by as much as 70% depending
on the target bucket region. This is important for
us because we have direct S3 multipart uploads enabled everywhere
on our hosting.

To start, we only want this on the uploads bucket, not the backup one.
Also, this will accelerate both uploads **and** downloads, depending
on whether a presigned URL is used for downloading. This is the case
when secure uploads is enabled, not anywhere else at this time. To
enable the S3 acceleration on downloads more generally would be a
more in-depth change, since we currently store S3 Upload record URLs
like this:

```
 url: "//test.s3.dualstack.us-east-2.amazonaws.com/original/2X/6/123456.png"
```

For acceleration, `s3.dualstack` would need to be changed to `s3-accelerate.dualstack`
here.

Note that for this to have any effect, Transfer Acceleration must be enabled
on the S3 bucket used for uploads per https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration-examples.html.
2023-11-07 11:50:40 +10:00
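
At the SDK level the setting boils down to the `use_accelerate_endpoint` client option; the wiring below is illustrative rather than the actual S3Helper code:

```ruby
options = { region: SiteSetting.s3_region }
options[:use_accelerate_endpoint] = true if SiteSetting.enable_s3_transfer_acceleration

client = Aws::S3::Client.new(**options)
# Presigned PUT URLs built from this client use the
# <bucket>.s3-accelerate.amazonaws.com endpoint instead of the regional one.
```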
Martin Brennan cf42466dea
DEV: Add S3 upload system specs using minio (#22975)
This commit adds some system specs to test uploads with
direct to S3 single and multipart uploads via uppy. This
is done with minio as a local S3 replacement. We are doing
this to catch regressions when uppy dependencies need to
be upgraded or we change uppy upload code, since before
this there was no way to know outside manual testing whether
these changes would cause regressions.

Minio's server lifecycle and the installed binaries are managed
by the https://github.com/discourse/minio_runner gem, though the
binaries are already installed on the discourse_test image we run
GitHub CI from.

These tests are skipped unless the CI=1 or RUN_S3_SYSTEM_SPECS=1 env
vars are set, so they run in CI by default and can be opted into locally.

For a history of experimentation here see https://github.com/discourse/discourse/pull/22381

Related PRs:

* https://github.com/discourse/minio_runner/pull/1
* https://github.com/discourse/minio_runner/pull/2
* https://github.com/discourse/minio_runner/pull/3
2023-08-23 11:18:33 +10:00
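
One plausible way to express that guard in RSpec configuration; the spec metadata tag here is made up for illustration and may not match the actual guard in Discourse:

```ruby
RSpec.configure do |config|
  unless ENV["CI"] || ENV["RUN_S3_SYSTEM_SPECS"] == "1"
    # Skip the S3/minio system specs unless explicitly opted in.
    config.filter_run_excluding(s3_system_specs: true)
  end
end
```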
Martin Brennan 9174716737
DEV: Remove Discourse.redis.delete_prefixed (#22103)
This method is a huge footgun in production, since it calls
the Redis KEYS command. From the Redis documentation at
https://redis.io/commands/keys/:

> Warning: consider KEYS as a command that should only be used in
> production environments with extreme care. It may ruin performance when
> it is executed against large databases. This command is intended for
> debugging and special operations, such as changing your keyspace layout.
> Don't use KEYS in your regular application code.

Since we were only using `delete_prefixed` in specs (now that we
removed the usage in production in 24ec06ff85)
we can remove this and instead rely on `use_redis_snapshotting` on the
particular tests that need this kind of clearing functionality.
2023-06-16 12:44:35 +10:00
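
For contrast, a small redis-rb sketch of why KEYS is dangerous and what the incremental alternative looks like (this is not the code that was removed):

```ruby
require "redis"
redis = Redis.new

# KEYS walks the entire keyspace in one blocking call: fine for debugging,
# risky on a large production database.
redis.keys("report_*")

# SCAN iterates with a cursor instead, yielding matching keys incrementally.
redis.scan_each(match: "report_*") { |key| redis.del(key) }
```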
Martin Brennan 341f87efb7
FIX: Show gif upload size limit error straight away (#21633)
When uploading images via direct to S3 upload, we were
assuming that we could not pre-emptively check the file
size because the client may do preprocessing to reduce
the size, and UploadCreator could also further reduce the
size.

This, however, is not true of gifs, so we had an
issue where you could upload a gif larger than the max_image_size_kb
setting and then had to wait until the upload completed for
this error to show.

Now, instead, when we direct upload gifs to S3, we check
the size straight away and present a file size error to
the user rather than making them wait. This will increase
meme efficiency by approximately 1000%.
2023-05-18 10:36:34 +02:00
Martin Brennan 6feb436303
DEV: Change external upload rate limit maximums to settings (#20577)
Way back when this was introduced in b96c10a903
I didn't have any frame of reference for what these max rate
limit numbers should be, so 10 seemed like a reasonable limit
until a real world case where this did not make sense came
along.

The time has come.

Moving these into site settings, which are hidden since in most
cases there is no need to change these.
2023-03-08 15:27:17 +10:00
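
The pattern is roughly the following; the setting name and the RateLimiter argument order are approximations for illustration, not the exact code:

```ruby
# Sketch: the per-minute maximum now comes from a hidden site setting
# instead of a hard-coded constant.
RateLimiter.new(
  current_user,
  "external-upload-batch-presign",
  SiteSetting.max_batch_presign_multipart_per_minute,
  1.minute,
).performed!
```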
David Taylor cb932d6ee1
DEV: Apply syntax_tree formatting to `spec/*` 2023-01-09 11:49:28 +00:00
Martin Brennan 56eaf91589
FIX: Do not error when anon user looks at secure upload for deleted post (#19792)
If a secure upload's access_control_post was trashed, and an anon user
tried to look at that upload, they would get a 500 error rather than
the correct 403 because of an error inside the PostGuardian logic.
2023-01-09 16:12:10 +10:00
Martin Brennan 8ebd5edd1e
DEV: Rename secure_media to secure_uploads (#18376)
This commit renames all secure_media related settings to secure_uploads_* along with the associated functionality.

This is being done because "media" does not really cover it, we aren't just doing this for images and videos etc. but for all uploads in the site.

Additionally, in future we want to secure more types of uploads, and enable a kind of "mixed mode" where some uploads are secure and some are not, so keeping media in the name is just confusing.

This also keeps compatibility with the `secure-media-uploads` path, and changes new
secure URLs to be `secure-uploads`.

Deprecated settings:

* secure_media -> secure_uploads
* secure_media_allow_embed_images_in_emails -> secure_uploads_allow_embed_images_in_emails
* secure_media_max_email_embed_image_size_kb -> secure_uploads_max_email_embed_image_size_kb
2022-09-29 09:24:33 +10:00
Loïc Guitaut 3eaac56797 DEV: Use proper wording for contexts in specs 2022-08-04 11:05:02 +02:00
Phil Pirozhkov 493d437e79
Add RSpec 4 compatibility (#17652)
* Remove outdated option

04078317ba

* Use the non-globally exposed RSpec syntax

https://github.com/rspec/rspec-core/pull/2803

* Use the non-globally exposed RSpec syntax, cont

https://github.com/rspec/rspec-core/pull/2803

* Comply to strict predicate matchers

See:
 - https://github.com/rspec/rspec-expectations/pull/1195
 - https://github.com/rspec/rspec-expectations/pull/1196
 - https://github.com/rspec/rspec-expectations/pull/1277
2022-07-28 10:27:38 +08:00
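
The "non-globally exposed" syntax in practice means top-level example groups go through the `RSpec` module instead of a bare `describe`; a schematic example:

```ruby
# Before (relies on the globally exposed DSL):
#   describe UploadsController do ... end

# After:
RSpec.describe UploadsController do
  it "refers to the described class as usual" do
    expect(described_class).to eq(UploadsController)
  end
end
```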
Martin Brennan 641c4e0b7a
FEATURE: Make S3 presigned GET URL expiry configurable (#16912)
Previously we hardcoded the DOWNLOAD_URL_EXPIRES_AFTER_SECONDS const
inside S3Helper to be 5 minutes (300 seconds). For various reasons,
some hosted sites may need this to be longer for other integrations.

The maximum expiry time for presigned URLs is 1 week (which is
604800 seconds), so that has been added as a validation on the
setting as well. The setting is hidden because 99% of the time
it should not be changed.
2022-05-26 09:53:01 +10:00
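
Roughly what the configurable expiry looks like when building a presigned GET URL; the setting name is a guess based on the commit description, and the surrounding variables are placeholders:

```ruby
# Previously hard-coded: S3Helper::DOWNLOAD_URL_EXPIRES_AFTER_SECONDS = 300
expires_in = SiteSetting.s3_presigned_get_url_expires_after_seconds

Aws::S3::Presigner.new(client: s3_client).presigned_url(
  :get_object,
  bucket: bucket_name,
  key: upload_key,
  expires_in: expires_in, # validated to be at most 604_800 seconds (one week)
)
```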
Martin Brennan 244836ddd4
FIX: Use hidden site setting for batch presign rate limit (#16692)
Having this as a hard-coded const was causing issues on some sites, because the right limit really is heavily
dependent on upload speed. We request 5-10 URLs at a time with this endpoint; for
a 1.5GB upload with 5MB parts this could mean 60 requests to the server to get all
the part URLs. If the user's upload speed is super fast they may request all 60
batches in a minute; if it is slow they may request 5 batches in a minute.

The other external upload endpoints are not hit as often, so they can stay as constant
values for now. This commit also increases the default to 20 requests/minute.
2022-05-10 11:14:26 +10:00
Martin Brennan 7af01d88d2
FIX: Better 0 file size detection and logging (#16116)
When creating files with create-multipart, if the file
size was somehow zero we were showing a very unhelpful
error message to the user. Now we show a nicer message,
and proactively don't call the API if we know the file
size is 0 bytes in JS, along with extra console logging
to help with debugging.
2022-03-07 12:39:33 +10:00
David Taylor c9dab6fd08
DEV: Automatically require 'rails_helper' in all specs (#16077)
It's very easy to forget to add `require 'rails_helper'` at the top of every core/plugin spec file, and omissions can cause some very confusing/sporadic errors.

By setting this flag in `.rspec`, we can remove the need for `require 'rails_helper'` entirely.
2022-03-01 17:50:50 +00:00
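
The mechanism is RSpec's `.rspec` options file; a minimal version of the flag is shown below (Discourse's actual `.rspec` may carry additional options):

```
# .rspec
--require rails_helper
```

With that flag in place, `rails_helper` is loaded before every spec file, so the per-file `require` lines become unnecessary.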
Jarek Radosz 2fc70c5572
DEV: Correctly tag heredocs (#16061)
This allows text editors to use correct syntax coloring for the heredoc sections.

Heredoc tag names we use:

languages: SQL, JS, RUBY, LUA, HTML, CSS, SCSS, SH, HBS, XML, YAML/YML, MF, ICS
other: MD, TEXT/TXT, RAW, EMAIL
2022-02-28 20:50:55 +01:00
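
An example of the convention, with the heredoc tag telling editors which grammar to highlight inside the string:

```ruby
# DB.exec is Discourse's mini_sql wrapper; the SQL tag enables SQL highlighting.
DB.exec(<<~SQL)
  UPDATE uploads
  SET secure = true
  WHERE id = 42
SQL
```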
Martin Brennan b96c10a903
DEV: Extract shared external upload routes into controller helper (#14984)
This commit refactors the direct external upload routes (get presigned
put, complete external, create/abort/complete multipart) into a
helper which is then included in both BackupController and the
UploadController. This is done so UploadController doesn't need
strange backup logic added to it, and so each controller implementing
this helper can do their own validation/error handling nicely.

This is a follow up to e4350bb966
2021-11-18 09:17:23 +10:00
Martin Brennan e4350bb966
FEATURE: Direct S3 multipart uploads for backups (#14736)
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.

To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:

* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts

Then, S3Store and S3BackupStore either delegate directly to S3Helper or have their own special methods to call S3Helper for these methods. FileStore.temporary_upload_path has also removed its dependence on upload_path, and can now be used interchangeably between the stores. A similar change was made in the frontend as well, moving the multipart related JS code out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.

Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.

This changeset is not perfect; it introduces some special cases in UploadController to handle backups (logic that was previously in BackupController), because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that may need them) can include these routes in a nicer way.
2021-11-11 08:25:31 +10:00
Martin Brennan 6a68bd4825
DEV: Limit list multipart parts to 1 (#14853)
We are only using list_multipart_parts right now in the
uploads controller for multipart uploads to check if the
upload exists; thus we don't need up to 1000 parts.

Also adding a note for future explorers that list_multipart_parts
only gets 1000 parts max, and adding params for max parts
and starting parts.
2021-11-10 08:01:28 +10:00
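
With aws-sdk-s3 the existence check can ask for a single part; the surrounding variables here are placeholders:

```ruby
s3_client.list_parts(
  bucket: bucket_name,
  key: temp_object_key,
  upload_id: external_upload_identifier,
  max_parts: 1,          # previously the API default of up to 1000 parts
  part_number_marker: 0, # start listing from the beginning
)
```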
David Taylor aac3547cc2
DEV: Update AWS API stub following gem version bump (#14673)
The latest version of the gem doesn't send whitespace in this request body, so we need to update the test stub accordingly
2021-10-20 23:04:08 +01:00
Yasuo Honda dbbfad7ed0 FIX: Support Ruby 3 keyword arguments 2021-10-05 11:25:00 -04:00
Martin Brennan dba6a5eabf
FEATURE: Humanize file size error messages (#14398)
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit soon becomes unfriendly to the end
user.

For example, if the max attachment size is set to 512000
KB, this is what the user sees:

> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)

This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
in a format based on the maximum increment (e.g. KB, MB, GB).

This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:

> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)

This humanization also handles decimals, e.g. 1536KB = 1.5 MB
2021-09-22 07:59:45 +10:00
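
Roughly what the humanization amounts to; the real change goes through Discourse's error-message helpers rather than calling ActiveSupport directly:

```ruby
require "active_support/number_helper"

ActiveSupport::NumberHelper.number_to_human_size(1536 * 1024)
# => "1.5 MB" (shown to the user instead of "1536KB")
```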
Martin Brennan 99ec8eb6df
FIX: Capture S3 metadata when calling create_multipart (#14161)
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.

However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.

Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
2021-08-27 09:50:23 +10:00
Martin Brennan d295a16dab
FEATURE: Uppy direct S3 multipart uploads in composer (#14051)
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:

* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.

  After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.

Also added are a few new columns to `ExternalUploadStub`:

* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` is called. This is used for validation.

When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.

Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
2021-08-25 08:46:54 +10:00
Bianca Nenciu ff367e22fb
FEATURE: Make allow_uploaded_avatars accept TL (#14091)
This gives admins more control over who can upload custom profile
pictures.
2021-08-24 10:46:28 +03:00
Martin Brennan 6774c600a4
DEV: Fix uploads controller flaky presigned put spec (#13985)
Was missing RateLimiter.clear_all!, leading to 403 errors
2021-08-10 14:30:22 +10:00
Martin Brennan b500949ef6
FEATURE: Initial implementation of direct S3 uploads with uppy and stubs (#13787)
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads`  settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.

A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used. 

### Starting a direct S3 upload

When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.

Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.

### Completing a direct S3 upload

Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.

1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.

2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.

    We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.

3. Finally we return the serialized upload record back to the client

There are several errors that could happen that are handled by `UploadsController` as well.

Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
2021-07-28 08:42:25 +10:00
Martin Brennan 7911124d3d
FEATURE: Uppy image uploader with UppyUploadMixin (#13656)
This PR adds the first use of Uppy in our codebase, hidden behind an enable_experimental_image_uploader site setting. When the setting is enabled, only the user card background uploader will use the new uppy-image-uploader component added in this PR.

I've introduced an UppyUpload mixin that has feature parity with the existing Upload mixin, and improves it slightly to deal with multiple/single file distinctions and validations better. For now, this just supports the XHRUpload plugin for uppy, which keeps our existing POST to /uploads.json.
2021-07-13 12:22:00 +10:00
Blake Erickson 44153cde18
FIX: Be able to handle long file extensions (#12375)
* FIX: Be able to handle long file extensions

Some applications have really long file extensions, but if we truncate
them weird behavior ensues.

This commit changes the file extension size from 10 characters to 255
characters instead.

See:

https://meta.discourse.org/t/182824

* Keep truncation at 10, but allow uppercase and dashes
2021-03-17 12:01:29 -06:00
David Taylor 821bb1e8cb
FEATURE: Rename 'Discourse SSO' to DiscourseConnect (#11978)
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.

This commit aims to:
- Rename `sso_` site settings. DiscourseConnect specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations

This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately

The risks are:
- There is no backwards compatibility for site settings on the client-side. Accessing auth-related site settings in Javascript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.

A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
2021-02-08 10:04:33 +00:00
Martin Brennan 4193eb0419
FIX: Respect force download when downloading secure media via lightbox (#10769)
The download link on the lightbox for images was not downloading the image if the upload was marked secure, because the code in the upload controller route was not respecting the dl=1 param for force download.

This PR fixes this so the download link works for secure images as well as regular lightboxed images.
2020-09-29 12:12:03 +10:00
Jarek Radosz e00abbe1b7 DEV: Clean up S3 specs, stubs, and helpers
Extracted commonly used spec helpers into spec/support/uploads_helpers.rb, removed unused stubs and let definitions. Makes it easier to write new S3-related specs without copying and pasting setup steps from other specs.
2020-09-28 12:02:25 +01:00
Régis Hanol 48b4ed41f5 FIX: uploading an existing image as a site setting
The previous fix (f43c0a5d85) wasn't working for images that were already uploaded.
The "metadata" (eg. 'for_*' and 'secure' attributes) were not added to existing uploads.

Also used 'Upload.get_from_url' in the admin/site_setting controller to properly retrieve
an upload from its URL.

Fixed the Upload::URL_REGEX to use the \h (hexadecimal) for the SHA

Follow-up-to: f43c0a5d85
2020-07-03 19:16:54 +02:00
Régis Hanol f43c0a5d85 FIX: uploading an image as a site setting
When uploading an image as a site setting, we need to return the "raw" URL, otherwise
when saving the site setting, the upload won't be looked up properly.

Follow-up-to: f11363d446
2020-07-03 13:23:10 +02:00
Vinoth Kannan f11363d446 FIX: return cdn url for uploads if available.
Currently it is displaying non-cdn urls in the composer preview.
2020-07-02 06:36:14 +05:30
Krzysztof Kotlarek f99f6ca111
FIX: randomize file name when created from fixtures (#9731)
* FIX: randomize file name when created from fixtures

When a temporary file is created from fixtures it should have a unique name.
This is to prevent collisions during parallel spec evaluation.

* FIX: use /tmp/pid folder to keep fixture files
2020-05-19 09:09:36 +10:00
David Taylor 96848b7649
UX: Allow secure media URLs to be cached for a short period of time
Signed S3 URLs are valid for 15 seconds, so we can safely allow the browser to cache them for 10 seconds. This should help with large numbers of requests when composing a post with many images.
2020-05-18 15:00:41 +01:00
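
A sketch of what a short-lived, private cache header on the redirect looks like in a Rails controller; the method name and helper below are illustrative placeholders:

```ruby
def show_secure
  signed_url = presigned_s3_url_for(upload) # placeholder helper; URL valid for ~15 seconds

  expires_in 10.seconds, public: false # sets Cache-Control: private, max-age=10
  redirect_to signed_url
end
```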
Jarek Radosz 781e3f5e10
DEV: Use `response.parsed_body` in specs (#9615)
Most of it was autofixed with rubocop-discourse 2.1.1.
2020-05-07 17:04:12 +02:00
Blake Erickson d04ba4b3b2
DEPRECATION: Remove support for api creds in query params (#9106)
* DEPRECATION: Remove support for api creds in query params

This commit removes support for api credentials in query params except
for a few whitelisted routes like rss/json feeds and the handle_mail
route.

Several tests were written to validate these changes, but the bulk of the
spec changes are just switching them over to use header-based auth so
that they will pass without changing what they were actually testing.

Original commit that notified admins this change was coming was created
over 3 months ago: 2db2003187

* fix tests

* Also allow iCalendar feeds

Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
2020-04-06 16:55:44 -06:00
Martin Brennan 097851c135
FIX: Change secure media to encompass attachments as well (#9271)
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip etc. etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.

This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
2020-03-26 07:16:02 +10:00