Because sharing has some hover behavior, it looked slightly clunky when fast edit changed its position. Putting sharing in the last position reduces this effect.
When the loading spinner is removed (e.g. via the loading-slider component), the subcategory list view will persist, even when no longer required. This is because we were conditionally rendering the list into the `header-list-container` outlet. When the condition was false, we were doing nothing. Instead, we should use `disconnectOutlet` to make sure the content is removed from the DOM.
Firefox does not return a PerformanceMeasure object when using
performance.mark and performance.measure, even though MDN says it
should: https://developer.mozilla.org/en-US/docs/Web/API/Performance/measure#return_value
So for now, we guard the upload instrumentation behind a check that a
PerformanceMeasure (or anything, really) is actually returned.
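The guard amounts to something like the following (a minimal sketch; the mark/measure names are illustrative, not the actual instrumentation code):

```javascript
// Minimal sketch: feature-detect whether performance.measure() actually
// returns a PerformanceMeasure (older Firefox returns undefined).
function performanceMeasureIsSupported() {
  performance.mark("discourse-feature-check-start");
  performance.mark("discourse-feature-check-end");
  const measure = performance.measure(
    "discourse-feature-check",
    "discourse-feature-check-start",
    "discourse-feature-check-end"
  );
  performance.clearMarks("discourse-feature-check-start");
  performance.clearMarks("discourse-feature-check-end");
  performance.clearMeasures("discourse-feature-check");
  return Boolean(measure);
}

// Only wire up the upload instrumentation when the check passes.
if (performanceMeasureIsSupported()) {
  // ...set up mark/measure calls around uploads...
}
```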
When creating a reply after already navigating out of the
topic (e.g. open the reply composer, go to a different topic,
then create the post), the _removeDeleteOnOwnerReplyBookmarks
function was erroring because it relied on the topic model
being present.
We can skip this function altogether if the topic model is _not_
present, because the PostCreator already takes care of deleting
bookmarks with the on_owner_reply auto_delete_preference. The
_removeDeleteOnOwnerReplyBookmarks function just cleans up the
in-memory post stream and topic model.
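In other words, the function now starts with a guard along these lines (a sketch of the shape, not the literal code; the property path is assumed):

```javascript
// Sketch: skip the in-memory cleanup entirely when the topic model is
// no longer around (the server-side PostCreator already handled the
// on_owner_reply bookmark deletion).
_removeDeleteOnOwnerReplyBookmarks() {
  const topic = this.get("model.topic"); // property path assumed
  if (!topic) {
    return;
  }
  // ...existing cleanup of the post stream and topic model...
}
```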
In the user bookmark list, when we show the excerpt of the bookmark
(which is usually just the bookmarked post excerpt), we want to show
the first unread post's excerpt instead for for_topic bookmarks. This
is because when the user clicks on that bookmark link, they are taken
to the first unread post in the topic, not the OP, as per:
27699648ef
This commit allows for measuring the time taken for
individual uploads via the new uppy interfaces, only
if the enable_upload_debug_mode site setting is enabled.
Also in this PR, for upload errors with a specific message when running
locally, we return the real message to show in the modal instead of the
generic upload.failed message, so the developer does not have to dig
around in the logs.
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
in a format based on the largest appropriate unit (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
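The humanization itself happens server-side; conceptually it just picks the largest sensible unit, roughly like this sketch (illustrative only, using a 1024-based step):

```javascript
// Illustrative sketch of humanizing a KB-based limit.
function humanizeKb(kb) {
  const units = ["KB", "MB", "GB"];
  let size = kb;
  let unitIndex = 0;
  while (size >= 1024 && unitIndex < units.length - 1) {
    size /= 1024;
    unitIndex++;
  }
  // keep at most one decimal place, e.g. 1536 KB -> "1.5 MB"
  const rounded = Math.round(size * 10) / 10;
  return `${rounded} ${units[unitIndex]}`;
}

humanizeKb(1536); // "1.5 MB"
```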
This commit also hides a number of options which are not used during Discourse development.
Changes have been tested on both the legacy `/qunit` route and the Ember CLI `/tests` route.
This adds support for `qunit_skip_core`, `qunit_skip_plugins` and `qunit_single_plugin` parameters on the Ember CLI `/tests` route using the `addModuleExcludeMatcher` API. Legacy support is maintained for the `/qunit` route.
".search-menu" matches the parent element of the element that was
previously selected. This is a better choice because it offers some
flexibility over the DOM structure without breaking the keyboard
shortcuts.
Instead of going to the OP of the topic for topic-level bookmarks
(which are bookmarks where for_topic is true) when clicking on the
bookmark in the quick access menu or on the user bookmark list,
this commit takes the user to the last unread post in
the topic instead. This should be generally more useful than landing
on the unchanging OP.
To make this work nicely, I needed to add the last_read_post_number to
the BookmarkQuery based on the TopicUser association. It should not add
too much extra weight to the query, because it is limited to the user
that we are fetching bookmarks for.
Also fixed an issue where the bookmark serializer's highest_post_number was
not taking into account whether the user was staff, in which case we
should use highest_staff_post_number instead.
* DEV: Use Active Record `save!` instead of mini sql.
The `save!` method will trigger the `before_save` callback `match_primary_group_changes` for the User model; otherwise `flair_group_id` won't be removed from the user.
* Check whether the `match_primary_group_changes` method is called or not.
Allows creating a bookmark with the `for_topic` flag introduced in d1d2298a4c set to true. This happens when clicking on the Bookmark button in the topic footer when no other posts are bookmarked. In a later PR, when clicking on these topic-level bookmarks the user will be taken to the last unread post in the topic, not the OP. Only the OP can have a topic level bookmark, and users can also make a post-level bookmark on the OP of the topic.
I had to do some pretty heavy refactors because most of the bookmark code in the JS topics controller was centred around instances of Post JS models, but the topic level bookmark is not centred around a post. Some refactors were just for readability as well.
Also removes some missed reminderType code from the purge in 41e19adb0d
We want plugins to be able to skip doing any work under
certain conditions, and to be able to raise their own errors if
a file being uploaded is completely incompatible with the concept
of the plugin if it is enabled. For example, the UppyChecksum plugin
is happy to skip hashing large files, but the UppyUploadEncrypt
plugin from discourse-encrypt relies on the file being encrypted
to do anything with the upload, so it is considered a blocking
error if the user uploads a file that is too large.
This improves the base functions available in uppy-plugin-base and
extendable-uploader to handle this, as well as introducing a
HUGE_FILE_THRESHOLD_BYTES variable which represents 100MB in bytes,
matching the ExternalUploadManager::DOWNLOAD_LIMIT on the
server side.
The discourse-encrypt changes to take advantage of this new functionality
will follow in discourse/discourse-encrypt#141
Also promote the `create_notification_alert` and `push_notification`
methods from instance methods to class methods so that plugins can call
them. This is temporary until we add a more comprehensive API for
extending `PostAlerter`.
After deleting a category, we should soft-delete the category definition topic instead of hard deleting it. Otherwise it causes issues during the user merge action if the source user has an orphan post that belongs to the deleted topic.
- do not reduce the opacity of disabled buttons if they are loading
- replace ‘|’ with single quotes, not double quotes
- always start from index 0
- reduce the amount of work by checking the row's length
- apply the quotefix to the fallback
- do not add 1 to the caret position if the index is 0
The algorithm will now do the following (see the sketch below):
- split the selection to retain only the first line
- remove a possible "* " prefix
- check for a first inclusion
- fall back to the first row if nothing is found
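An illustrative sketch of that flow (not the actual implementation):

```javascript
// Illustrative sketch of the selection-matching flow described above.
function findMatchingRowIndex(selection, rows) {
  // split the selection and retain only the first line
  let needle = selection.split("\n")[0];

  // remove a possible "* " list marker
  needle = needle.replace(/^\*\s/, "");

  // check for the first row that includes the needle
  const index = rows.findIndex((row) => row.includes(needle));

  // fall back to the first row if nothing was found
  return index === -1 ? 0 : index;
}
```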
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a)
This commit marks the column as readonly, ignores it, and removes
the index, and it will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
This partially reverts commit ddb458343d.
Seeing performance degrade on larger sites so back to drawing board on
this one. Instead of the DISTINCT LEFT JOIN, we switch back to
IN(subquery).
First reported in https://meta.discourse.org/t/-/202482/19
There are two optimizations being applied here:
1. Fetch a user's group ids in a separate query instead of including it
as a sub-query. When I tried a subquery, the query plan became very
inefficient.
2. Join against the `topic_allowed_users` and `topic_allowed_groups`
tables instead of doing an IN against a subquery where we UNION the
`topic_id`s from the two tables. From my profiling, this enables PG to
do a backwards index scan on the `index_topics_on_timestamps_private`
index.
This commit fixes a bug where listing all messages was incorrectly
excluding topics if a topic has been archived by a group even if the
user did not belong to the group.
This commit also fixes another bug where dismissing private messages
selectively was subjected to the default limit of 30.
This new column will be used to indicate that a bookmark
is at the topic level. The first post of a topic can be
bookmarked twice after this change -- with for_topic set
to true and with for_topic set to false.
A later PR will use this column for logic to bookmark the
topic, and then topic-level bookmark links will take you
to the last unread post in the topic.
See also 22208836c5
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column, and it's better to rely on the
post.topic_id as the canonical topic_id for bookmarks; then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
Discourse regularly sends messages to admins when potential problems persist. Most of the time they have exactly the same content. In that case, when there are no replies, the old message should be trashed before a new one is created.
This commit sets `tap_failed_tests_only` to `true` in our testem config, so now only the failing tests will show in our GitHub CI Ember test runs, which saves developers from having to hunt through all of the passing tests using GitHub's janky console output scrollback.
There was a check for closed code blocks (which had both opening and
closing markups), but it did not work for the case when the text ends
in an open code block.
Administrators can use a second factor to confirm granting admin access
without using email. The old method of confirmation via email is still
used as a fallback when a second factor is unavailable.
The previous excerpt was a simple truncated raw message. Starting with
this commit, the raw content of the draft is cooked and an excerpt is
extracted from it. The logic for extracting the excerpt mimics the
`ExcerptParser` class, but does not implement all functionality, being
a much simpler implementation.
The two draft controllers have been merged into one and the /draft.json
route has been changed to /drafts.json to be consistent with the other
route names.
This is necessary to allow for large file uploads via
the direct S3 upload mechanism, as we convert the external
file to an Upload record via ExternalUploadManager once
it is complete.
This will allow for files larger than 2,147,483,647 bytes (2.14GB)
to be referenced in the uploads table.
This is a table locking migration, but since it is not as highly
trafficked as posts, topics, or users, the disruption should be minimal.
This is my second try at this. The first b246a63a59 raised an issue
with the event delegation not working because the topic id changed.
This adds support for delegating events to dynamic keys by passing a
function where a static key would normally be needed. This means that
each timeline will have its own unique state key and events will only
delegate to the proper topic.
Translations are often multi-line. Using a regular `<input>` doesn't allow newlines, so if you try to edit a multiline theme translation, all the line breaks will be removed.
This commit updates the theme translations UI to use `<textarea>`, just like the core translation editing UI.
allowUpload can be false for the composer if there are no
allowed file extensions. This causes the _bindMobileUploadButton
code to fail because the button does not get rendered in the
template if !allowUpload. This commit changes composer-editor
to only bind upload functionality if allowUpload.
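The change boils down to a guard like this (method and property names as referenced above; the surrounding structure is assumed):

```javascript
// Sketch: only bind upload handling when uploads are allowed, because
// the upload buttons are not rendered in the template otherwise.
_bindUploadTarget() {
  if (!this.allowUpload) {
    return;
  }
  this._bindMobileUploadButton();
  // ...bind the rest of the upload handling...
}
```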
We've observed an error where the back button is displayed improperly in
the topic timeline. It's unfortunately been hard to reproduce but we
suspect it's related to leftover state when re-rendering.
This fix optimistically tries to fix the error by introducing the
topic's id to the unique key the widgets use for state. We can deploy
this and keep an eye out for the bug in the future.
This fixes an error when trying to upload a profile
background image for the user card when the
enable_direct_s3_uploads setting was true:
> Failed to execute 'send' on 'XMLHttpRequest': The object's state must be OPENED.
This was fixed in the upstream commit by the uppy devs:
5937bf2127
When a user archives a personal message, they are redirected back to the
inbox and will refresh the list of the topics for the given filter.
Publishing an event to the user results in an incorrect incoming message
because the list of topics has already been refreshed.
This does mean that if a user has two tabs opened, the non-active tab
will not receive the incoming message but at this point we do not think
the technical trade-offs are worth it to support this feature. We
basically have to somehow exclude a client from an incoming message
which is not easy to do.
Follow-up to fc1fd1b416
This abstracts interaction with uppy for uppy plugin classes
into base classes for Preprocessor plugins, so anyone
making these uppy plugins doesn't have to think as much about uppy
underneath the hood. This also makes the logging and validation
nicer, and provides a more consistent way to emit progress and
completion events.
In a future commit, we will introduce another base class for
`UploadUploaderPlugin` which will be used to be able to hijack
the upload process to go to a different provider (e.g. for discourse-video)
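For reference, a preprocessor plugin built on this ends up wrapping Uppy's standard preprocessor hooks, roughly like the sketch below (written directly against Uppy's documented `addPreProcessor` API; the Discourse base classes add the consistent logging, validation, and progress/completion handling on top):

```javascript
// On Uppy 2.x the base class is BasePlugin; on 1.x it is `Plugin`.
import { BasePlugin } from "@uppy/core";

// Rough sketch only; not the Discourse base class itself.
export default class ExamplePreprocessorPlugin extends BasePlugin {
  constructor(uppy, opts) {
    super(uppy, opts);
    this.id = opts?.id || "example-preprocessor";
    this.type = "preprocessor";
    this._prepareFiles = this._prepareFiles.bind(this);
  }

  async _prepareFiles(fileIds) {
    for (const fileId of fileIds) {
      const file = this.uppy.getFile(fileId);

      // a plugin could decide to skip work here (e.g. for huge files),
      // or raise a blocking error for incompatible files
      this.uppy.emit("preprocess-progress", file, {
        mode: "indeterminate",
        message: "processing...",
      });

      // ...do the actual per-file work (hashing, optimization, ...)...

      this.uppy.emit("preprocess-complete", file);
    }
  }

  install() {
    this.uppy.addPreProcessor(this._prepareFiles);
  }

  uninstall() {
    this.uppy.removePreProcessor(this._prepareFiles);
  }
}
```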
Short URLs were resolved before diffHTML was loaded and content was
swapped by it, which meant that no URLs were found and the URLs remained
unresolved. This caused image elements to be blank.
* DEV: Updated diffHTML to 1.0.0-beta.20
Watched words of type 'replace' or 'link' replaced the text inside
mentions or hashtags too, which broke these. These types of watched
words must skip any match that has an @ or # before it.
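A rough illustration of the extra check (the actual matching lives in the watched-words logic; this is not the real code):

```javascript
// Illustrative: ignore replacement matches that are part of a mention
// or hashtag, i.e. matches directly preceded by "@" or "#".
function shouldReplaceMatch(text, matchIndex) {
  const previousChar = text.charAt(matchIndex - 1);
  return previousChar !== "@" && previousChar !== "#";
}

shouldReplaceMatch("hello world", 6);  // true  -> replace "world"
shouldReplaceMatch("hello @world", 7); // false -> leave the mention alone
```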
At this point in time, we do not think supporting unread and new when an
admin is looking at another user's messages is worth supporting.
Follow-up to fc1fd1b416
This was problematic if something like an SCSS file threw an error, as
the app would tell Ember CLI to bootstrap as if everything were fine and
not display the error.
The fix is to only hijack the rendering at the end of the template
instead of the beginning.
This commit updates the RSS post content to use email formatting. Many
plugins are using the `reduce_cooked` method to format content that is
not displayed outside of the Discourse application. Using email formatting
also strips the secure media and various other things that are only meant
for the Discourse client side application.
The modification date should always be a meta tag to make this less confusing, especially for imported posts.
That's more in line with how the rest of Discourse presents post dates.
There are a few fixes at play here:
1) We were still not initializing objects to the correct types.
2) If a debounce timed out, it was returning a string instead of an
array which was not appropriately handled.
3) In testing mode we never cancel the search promise for stability.
In order to include the new/unread count in the browse more message
under suggested topics, a couple of technical changes have to be made.
1. `PrivateMessageTopicTrackingState` is now auto-injected which is
similar to how it is done for `TopicTrackingState`. This is done so
we don't have to attempt to pass the `PrivateMessageTopicTrackingState`
object multiple levels down into the suggested-topics component. While
the object is auto-injected, we only fetch the initial state and start
tracking when the relevant private messages routes have been hit and only
when a private message's suggested topics is loaded. This is
done as we do not want to add the extra overhead of fetching the initial
state to all page loads but instead wait till the private messages
routes are hit.
2. Previously, we would stop tracking once the `user-private-messages`
route has been deactivated. However, that is not ideal since
navigating out of the route and back means we send an API call to the
server each time. Since `PrivateMessageTopicTrackingState` is kept in
sync cheaply via messageBus, we can just continue to track the state
even if the user has navigated away from the relevant routes.
The smoke test has been failing with the error:
```
TypeError: Cannot read properties of undefined (reading 'Core')
```
Since de20c46077
and 9873a942e3 this error has been occurring,
possibly now because Uppy is required by a plugin. Adding uppy.js into
the require list for theme_qunit_vendor.js fixes the issue.
This new interface will be used explicitly to add upload
preprocessors in the form of uppy plugins. These will be
run for each upload in the composer (dependent on the logic
of the plugin itself), before the UppyChecksum plugin is
finally run.
Since discourse-encrypt uses the existing addComposerUploadHandler
API for essentially preprocessing an upload and not uploading it
to a different place, it will be the first plugin to use this interface,
along with the register-media-optimization-upload-processor initializer
in core.
Related https://github.com/discourse/discourse-encrypt/pull/131.
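From a plugin initializer, registering a preprocessor through this interface looks roughly like the following (the plugin class, option names, and API version string are placeholders, and the exact signature of `addComposerUploadPreProcessor` is assumed here):

```javascript
import { withPluginApi } from "discourse/lib/plugin-api";
// Placeholder import for an uppy preprocessor plugin class like the one
// sketched earlier in these notes.
import MyUploadPreprocessor from "../lib/my-upload-preprocessor";

export default {
  name: "register-my-upload-preprocessor",

  initialize() {
    withPluginApi("0.8.43", (api) => {
      // Assumed shape: a plugin class plus a function that resolves the
      // options the plugin should be constructed with.
      api.addComposerUploadPreProcessor(MyUploadPreprocessor, () => ({
        someOption: true, // placeholder option
      }));
    });
  },
};
```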
Improves the create account modal for screen readers by doing the following:
* Making the `modal-alert` section into an `aria-role="alert"` region and making it show and hide using height instead of display:none so screen readers pick it up. Made a change so the field-related error messages are always shown beneath the field.
* Add `aria-invalid` and `aria-describedby` attributes to each field in the modal, so the screen reader will read out the error hint on error. This necessitated an Ember component extension to allow both the `aria-*` attributes to be bound and to render on `{{input}}`.
* Moved the social login buttons to the right in the HTML structure so they are not read out first.
* Added `aria-label` attributes to the login buttons so they can have different content for screen readers.
* In some cases for modals, the title that should be used for the `aria-labelledby` attribute is within the modal content and not the discourse-modal-title title. This introduces a new titleAriaElementId property to the d-modal component that is then used by the create-account modal to read out the title
------
This is the same as e0d2de73d8 but
fixes the Ember-input-component-extension to use the public
Ember components TextField and TextArea instead of the private
TextSupport so the extension works in both normal Ember and
Ember CLI.
This adds a new property, `pluginId`, which you can pass to `modifyClass`
to prevent the class from being modified over and over again.
This also includes a fix for polls which was leaking state between tests
which this new functionality exposed.
When using ComposerUpload and/or ComposerUploadUppy, we were
always calling bindMobileUploadButton. However with more composer-like
interfaces being developed, we need this to be optional, as not
everywhere will have a separate mobile upload button to bind to.
Also makes it so the composer extending the ComposerUpload mixins is
responsible for explicitly unbinding the mobile upload button if
it needs to.
DEV: Allow passing cook_method to TopicEmbed.import to override default
This will be used in the rss-polling plugin when we want to have
oneboxes on feed content, like youtube for example.
Previously, a group's `default_notification_level` change would only affect the users added after it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* DEV: Use named parameters for dir-span helper
Follow up to: e50a5c0c73
In order to improve code clarity this change introduces named parameters
for the dir-span helper. This is specifically for the new `htmlSafe`
parameter which you can use instead of just passing in a boolean if the
strings you are passing in have already been escaped.
Before: `{{dir-span category.description false}}`
After: `{{dir-span category.description htmlSafe=true}}`
* Set default value for params arg
PresenceChannel aims to be a generic system for allowing the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is a boolean thing - each user is either present, or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` property can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";

export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}}
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
Uppy V2 includes the S3 multipart batch presigning change
we contributed in d613b849a6
so we need to upgrade it. This also brings both package.json
files into line and accounts for the renaming of Plugin
to BasePlugin in Uppy.
This has been tested and is working locally for both
regular Ember and Ember CLI, for uploads.json
XHR uploads and for direct S3 uploads (single and multipart).
This mixin needs to be shared between the composer and composer-like
user interfaces. This commit makes it so the events and the underlying
data model are configurable by the component extending the ComposerUploadUppy
mixin.
Also removes two MessageBus unsubscribe calls which were unnecessary.
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.
However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.
Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
The user-topic-list template is also in use in other places where we want to improve blank page syndrome, so this PR is a preparation for those changes as well.
- uses tagName=""
- removes the user property, which is not being used
- extracts utility functions
- uses better wording for boolean properties
- initializes all properties
- uses @action
- uses optional chaining
- other minor changes
This rolls uppy back to the previous bundle that was used,
which will break multipart functionality (which is not yet
enabled anywhere).
No other upload functionality should be affected by this change,
it will be as if d295a16dab had
not been merged.
Users can invite people to a topic and they will be automatically
redirected to the topic when logging in after signing up. This commit
ensures an "invited_to_topic" notification is created when the invite is
redeemed.
The same notification is used for the "Notify" sharing method that is
found in the share topic modal.
This bug was introduced by f66007ec83.
In PostJobsEnqueuer we previously did not fire the after_post_create
event and after_topic_create event for private message topics. This was
changed in the above commit in order to publish message bus messages
for topic tracking state updates. Unfortunately this caused the
NotifyMailingListSubscribers job to be enqueued for all posts including
private messages, and admins and the users involved in the PMs got
emailed the contents of the PMs if they had mailing list mode enabled.
Luckily the impact of this was mitigated by a Guardian#can_see? check
for each mailing list mode user in the NotifyMailingListSubscribers job.
We never want to notify mailing list mode subscribers for private messages
so an early return has been added there, plus the logic in PostJobsEnqueuer
has been fixed, and tests have been added to that class where there were
none before.
Since ad3ec5809f when a user chooses
the Dismiss New... option in the New topic list, we send a request
to topics/reset-new.json with ?tracked=false as the only parameter.
This then uses Topic as the scope for topics to dismiss, with no
other limitations. When we do topic_scope.pluck(:id), it gets the
ID of every single topic in the database (that is not deleted) to
pass to TopicsBulkAction, causing a huge query with severe performance
issues.
This commit changes the default scope to use
`TopicQuery.new(current_user).new_results(limit: false)`
which should only use the topics in the user's New list, which
will be a much smaller list, depending on the user's "new_topic_duration_minutes"
setting.
Whenever we `subscribe` to something there should be an equivalent
`unsubscribe` and this implements it for `LogsNotice`.
In the future we should make this closer to what Ember expects a Service
to be, but at least it's properly cleaning up after itself now.
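The pattern being enforced is simply symmetric setup/teardown, along these lines (a sketch; the channel name and injected `messageBus` are assumptions, not the real LogsNotice internals):

```javascript
import EmberObject from "@ember/object";

// Sketch: anything subscribed on setup gets unsubscribed on teardown.
export default EmberObject.extend({
  init() {
    this._super(...arguments);
    this._callback = (data) => this._handleLogNotice(data);
  },

  subscribe() {
    this.messageBus.subscribe("/logs-notice", this._callback); // channel assumed
  },

  unsubscribe() {
    this.messageBus.unsubscribe("/logs-notice", this._callback);
  },

  _handleLogNotice(data) {
    // update state from the payload
  },
});
```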
This can be used to change the list of topic posters. For example,
discourse-solved can use this to move the user who posted the solution
after the original poster.
See the previous commit d66b258b0e as
well.
If enable_upload_debug_mode is true, we do not want to abort the
direct S3 upload, because that will delete the file on S3 and prevent
further inspection of any errors that have come up.
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
Steps to reproduce:
1. Go to activity/bookmarks
2. Search for something that isn’t in your bookmarks, so you get no results
3. Navigate away and then click "Bookmarked" on the sidebar or open the user menu and click the View All Bookmarks button on the bottom of the bookmarks tab, and you get the message "You haven't bookmarked anything yet".
This commit fixes the problem. We have a controller with a query parameter q that contains a search query, and a property searchTerm that is bound to the search box on the page and mirrors the value in q. We were using the value from searchTerm when querying the server, but Ember controllers are singletons, so the searchTerm value persisted between page visits and led to this bug.
To make things work properly, we should use the value from q everywhere, except in the two places where we copy a value from q to searchTerm and vice versa.
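The intended data flow is roughly this (a sketch of the controller shape, not the literal Discourse code; the endpoint path is illustrative):

```javascript
import Controller from "@ember/controller";
import { ajax } from "discourse/lib/ajax";

export default Controller.extend({
  // `q` is the query param and the single source of truth.
  queryParams: ["q"],
  q: null,

  // `searchTerm` only mirrors `q` for the search box.
  searchTerm: null,

  loadItems() {
    // always read from `q`, never from `searchTerm`
    return ajax("/u/bookmarks.json", { data: { q: this.q } }); // path illustrative
  },

  actions: {
    search() {
      // one of the two places where the box value is copied into `q`
      this.set("q", this.searchTerm);
    },
  },
});
```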
Major changes included:
- better support for screen readers
- trapping focus in modals
- better tabbing order in composer
- alerts on no content found/number of items found
- better autofocus in modals
- mini-tag-chooser is now a multi-select component
- each multi-select-component will now display selection on one row
The plugin API now allows adding small action codes dedicated to groups.
This will be used by the assign plugin when a topic is assigned to or unassigned from a group.
When resetting the preprocessor status states, we weren't using
the same default state as when the preprocessor status state is
first initialized with an associated plugin. This commit brings
the two into alignment, fixing a bug where if you cancelled an
upload then tried a new one the "Processing Upload" message would
never change to "Uploading... X", so any subsequent uploads were
uncancellable.
Since the state was not being reset correctly, the properties that
were supposed to be numbers ended up as `undefined`, so when calling
prop-- or prop++, they turned into NaN.
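The reset now reuses the same defaults as the initial setup, something like this (property names are illustrative, not the exact ones in the mixin):

```javascript
// Illustrative: both initialization and reset go through one helper so
// the counters are always numbers (0), never undefined.
_defaultPreProcessorStatus() {
  return {
    needProcessing: 0,
    activeProcessing: 0,
    completeProcessing: 0,
    allComplete: false,
  };
}

_resetPreProcessorStatuses() {
  this._preProcessorStatuses = {};
  this._preProcessorPlugins.forEach((plugin) => {
    this._preProcessorStatuses[plugin.id] = this._defaultPreProcessorStatus();
  });
}
```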
This change only applies when uppy is calling the media-optimization-worker.
Since the old way of calling the worker via jQuery file uploader will
be removed soon, there is no point coming up with some random string
to use in place of the file name for the promise resolvers there; we
can live with this for now.
When we encountered an error with the media-optimization-worker,
we stopped the worker, which made it so further messages were not
received when optimizing images in parallel. Removed this based
on an option.
Also added more debugging lines to help track down issues.
* FIX: Revoking admin or moderator status doesn't require refresh to delete/anonymize/merge user
On the /admin/users/<id>/<username> page, there are action buttons that are either visible or hidden depending on a few fields from the AdminDetailsSerializer: `can_be_deleted`, `can_be_anonymized`, `can_be_merged`, `can_delete_all_posts`.
These fields are updated when granting/revoking admin or moderator status. However, those updates were not being reflected on the page. E.g. if a user is granted moderation privileges, the 'anonymize user' and 'merge' buttons still appear on the page, which is inconsistent with the backend state of the user. It requires refreshing the page to update the state.
This commit fixes that issue, by syncing the client model state with the server state when handling a successful response from the server. Now, when revoking privileges, the buttons automatically appear without refreshing the page. Similarly, when granting moderator privileges, the buttons automatically disappear without refreshing the page.
* Add detailed user response to spec for changed routes.
Add tests to verify that the revoke_moderation, grant_moderation, and revoke_admin routes return a response formatted according to the AdminDetailedUserSerializer.
When a theme's default color scheme is not marked as user selectable, we were outputting the numeric ID in the UI. This outputs "Theme default" instead.
This is useful in the DiscourseHub mobile app. Currently the app queries
the `about.json` endpoint, which can raise a CORS issue in some cases,
for example when the site only accepts logins from an external provider.
I was storing the wrong object as the event listener
reference for the paste and mobile upload button click
events so they were not being cleaned properly on element
destruction.
Also renamed `uploadButton` to the more descriptive
`mobileUploadButton`.
When the composer reply is cancelled and the draft is trashed,
the isUploading and isProcessing statuses were not being reset,
so when the composer was opened again the Uploading... or
Processing... message still showed even when the uploads had
been cancelled correctly.
The regular composer-upload mixin suffered the same problem
as the uppy one, where the Processing/Uploading message was not
reset when a reply was cancelled and the draft destroyed.
When I added the paste event for files in the composer to
send to Uppy, I inadvertently called event.preventDefault()
if the pasted data was text. I removed that now, and I only
return early if the user cannot upload, and if there are no
files on the clipboard nothing happens.
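The corrected handler only intercepts pastes that actually contain files, roughly like this (a sketch; names are illustrative):

```javascript
// Sketch: let plain-text pastes fall through to the browser untouched;
// only call preventDefault() when files are being pasted.
_pasteEventListener(event) {
  if (!this.allowUpload) {
    return;
  }

  const files = event.clipboardData?.files;
  if (!files || files.length === 0) {
    return; // text paste: do nothing, no preventDefault()
  }

  event.preventDefault();
  this._addFiles(Array.from(files)); // upload entry point assumed
}
```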
Adds uppy upload functionality behind an
enable_experimental_composer_uploader site setting (default false,
and hidden).
When enabled this site setting will make the composer-editor-uppy
component be used within composer.hbs, which in turn points to
a ComposerUploadUppy mixin which overrides the relevant
functions from ComposerUpload. This uppy uploader has parity
with all the features of jQuery file uploader in the original
composer-editor, including:
- progress tracking
- error handling
- number of files validation
- pasting files
- dragging and dropping files
- updating upload placeholders
- upload markdown resolvers
- processing actions (the only one we have so far is the media optimization worker by falco, and it works)
- cancelling uploads
For now all uploads still go via the /uploads.json endpoint, direct
S3 support will be added later.
Also included in this PR are some changes to the media optimization
service, to support uppy's different file data structures, and also
to make the promise tracking and resolving more robust. Currently
it uses the file name to track promises, we can switch to something
more unique later if needed.
Does not include custom upload handlers, that will come
in a later PR, it is a tricky problem to handle.
Also, this new functionality will not be used in encrypted PMs because
encrypted PM uploads rely on custom upload handlers.