Ensures that `UserStat#post_count` and `UserStat#topic_count` do not
go below 0. When they do, as happened here, we tend to end up with bugs in our
code since we're usually coding with the assumption that the counts aren't
negative.
To support these constraints, our post and topic fabricators in
tests will now automatically increment the counts on the respective
user's `UserStat` as well. We have to do this because our fabricators
bypass `PostCreator`, which holds the responsibility for updating `UserStat#post_count` and
`UserStat#topic_count`.
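As a rough illustration only (the migration and fabricator shown here are hypothetical sketches, not the actual diff), the constraint and the test-side bump could look something like this:

```ruby
# Hypothetical sketch; constraint names and Rails version are assumptions.
class AddPositiveCountConstraintsToUserStats < ActiveRecord::Migration[6.1]
  def change
    # Keep the counters from ever dropping below zero at the database level.
    add_check_constraint :user_stats, "post_count >= 0", name: "user_stats_post_count_nonnegative"
    add_check_constraint :user_stats, "topic_count >= 0", name: "user_stats_topic_count_nonnegative"
  end
end

# In the test fabricator, bump the owner's UserStat manually because
# Fabrication bypasses PostCreator, which normally maintains these counts.
Fabricator(:post) do
  # ... existing attributes ...
  after_create do |post, _transients|
    UserStat.where(user_id: post.user_id).update_all("post_count = post_count + 1")
  end
end
```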
It was possible to see other users' notifications via these routes:
- notifications/responses
- notifications/likes-received
- notifications/mentions
- notifications/edits
We weren't showing anything private (like notifications about private messages), only things that are publicly available elsewhere. Even so, it feels strange that it's possible to look at someone else's notifications, and there is a risk that we could unintentionally leak something on these pages in the future.
This commit restricts these routes.
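The restriction amounts to a guard along these lines (a minimal sketch; the method name is hypothetical and whether staff keep access is an assumption):

```ruby
# Hypothetical before-action in the notifications controller; raises the
# standard Discourse access error when someone requests another user's
# notifications. Whether admins are exempted is an assumption here.
def ensure_can_see_notifications!(target_user)
  unless current_user == target_user || current_user&.admin?
    raise Discourse::InvalidAccess.new
  end
end
```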
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary S3 upload into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
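For orientation, the S3 calls behind these endpoints look roughly like the following (a simplified sketch using aws-sdk-s3 directly; the bucket, key, and exact wiring in the uploads controller are assumptions):

```ruby
require "aws-sdk-s3"

client = Aws::S3::Client.new(region: "us-east-1")  # region is an assumption
bucket = "uploads-bucket"                          # assumption
key    = "temp/uploads/default/original/abc.bin"   # assumption

# create-multipart.json: start the multipart upload and keep the upload ID
# (stored on the ExternalUploadStub as external_upload_identifier).
upload_id = client.create_multipart_upload(bucket: bucket, key: key).upload_id

# batch-presign-multipart-parts.json: presign an upload_part URL per part number.
presigner = Aws::S3::Presigner.new(client: client)
part_1_url = presigner.presigned_url(
  :upload_part, bucket: bucket, key: key, upload_id: upload_id, part_number: 1
)

# complete-multipart.json: needs every part number plus the ETag returned when
# the client PUT that part to its presigned URL.
client.complete_multipart_upload(
  bucket: bucket, key: key, upload_id: upload_id,
  multipart_upload: { parts: [{ part_number: 1, etag: "\"etag-from-part-upload\"" }] }
)

# abort-multipart.json: discard the upload if the user cancels or loses access.
# client.abort_multipart_upload(bucket: bucket, key: key, upload_id: upload_id)
```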
Also added are a few new columns to `ExternalUploadStub`:
* `multipart` - Whether or not this is a multipart upload
* `external_upload_identifier` - The "upload ID" for an S3 multipart upload
* `filesize` - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` endpoint is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` of the file where it is stored in S3. If the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploaded a different file than what they first specified, or, in the case of multipart uploads, uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
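The size check itself can be pictured like this (a sketch; the limiter key and error class are hypothetical):

```ruby
# Compare the size recorded when the stub was created with what actually
# landed in S3, and penalise the uploader on a mismatch.
actual_size = client.head_object(bucket: bucket, key: key).content_length

if actual_size != external_upload_stub.filesize
  client.delete_object(bucket: bucket, key: key) # remove the mismatched file
  # Hypothetical rate-limit key standing in for the N-minute upload ban.
  RateLimiter.new(user, "external-upload-size-mismatch", 1, 5.minutes).performed!
  raise ExternalUploadSizeMismatchError # hypothetical error class
end
```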
Also included in this PR is an update to vendor/uppy.js, built locally from the latest Uppy source at d613b849a6. This was necessary to get my multipart upload changes into Discourse; when the Uppy team cuts a proper release, we can bump the package.json versions instead.
Configuring staged users to watch categories and tags is a way to sign
them up for a large volume of email. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. Default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.
This commit aims to:
- Rename `sso_` site settings. DiscourseConnect-specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations
This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately
The risks are:
- There is no backwards compatibility for site settings on the client side. Accessing auth-related site settings in JavaScript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.
A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
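The settings copy mentioned above can be thought of as a data migration along these lines (hypothetical sketch; only two renames are shown and the exact mapping is an assumption):

```ruby
# Hypothetical data migration; the real rename map covers many more settings.
class CopySsoSiteSettingsToNewNames < ActiveRecord::Migration[6.0]
  RENAMES = {
    "sso_url"    => "discourse_connect_url",
    "sso_secret" => "discourse_connect_secret",
  }

  def up
    RENAMES.each do |old_name, new_name|
      execute <<~SQL
        INSERT INTO site_settings (name, data_type, value, created_at, updated_at)
        SELECT '#{new_name}', data_type, value, created_at, updated_at
        FROM site_settings
        WHERE name = '#{old_name}'
      SQL
    end
  end

  def down
    RENAMES.each_value { |name| execute("DELETE FROM site_settings WHERE name = '#{name}'") }
  end
end
```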
Previously we were caching by user_id, but there are only two possible outcomes, so we only need to cache two values.
This removes another N+1 query when serializing multiple user cards.
All posts created by the user are counted unless they are deleted,
belong to a PM sent between a non-human user and the user, or belong
to a PM created by the user that doesn't have any other recipients.
It also makes the guardian prevent self-deletes when SSO is enabled.
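Read as a query, the counting rule above looks roughly like this (sketch only; for brevity it excludes every PM post rather than only the two PM cases described):

```ruby
# Simplified sketch: the user's non-deleted posts, leaving out PM posts.
def countable_post_count(user)
  Post.joins(:topic)
      .where(user_id: user.id, deleted_at: nil)
      .where.not(topics: { archetype: Archetype.private_message })
      .count
end
```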
This reduces chances of errors where consumers of strings mutate inputs
and reduces memory usage of the app.
The test suite passes now, but there may be some cases left, so we will run
a few sites on a branch prior to merging.
This is a feature that used to live in discourse-assign but is
much easier to implement in core. It also allows a topic to be assigned
without claiming it for review (and vice versa), and allows it to work with
category group reviewers.
* drafts in user profile: only show to user herself (not to admins), use avatar replying to (instead of topic OP), add keyboard shortcut for drafts, simplify display labels
* use JSON when testing Draft.stream
* add drafts.json endpoint, user profile tab with drafts stream
* improve drafts stream display in user profile
* truncate excerpts in drafts list, better handling for resume draft action
* improve draft stream SQL query, add rspec tests
* if composer is open, quietly close it when user opens another draft from drafts stream; load PM draft only when user is in /u/username/messages (instead of /u/username)
* cleanup
* linting fixes
* apply prettier styling to modified files
* add client tests for drafts, includes a fixture for drafts.json
* improvements to code following review
* refresh drafts route when user deletes a draft open in the composer while being in the drafts route; minor prettier scss fix
* added more spec tests, deleted an acceptance test for removing drafts that was too finicky, formatting and code style fixes, added appEvent for draft:destroyed
* prettier, eslint fixes
* use "username_lower" from users table, added error handling for rejected promises
* adds guardian spec for can_see_drafts, adds improvements following code review
* move DraftsController spec to its own file
* fix failing drafts qunit test, use getOwner instead of deprecated this.container
* limit test fixture for draft.json testing to new_topic request only
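As a usage sketch of the drafts endpoint added above (the path and the JSON key are assumptions for illustration):

```ruby
# Hypothetical request spec for the drafts stream.
describe "drafts stream" do
  fab!(:user) { Fabricate(:user) }

  it "returns only the current user's drafts" do
    sign_in(user)
    Draft.set(user, "new_topic", 0, { title: "my draft" }.to_json)

    get "/drafts.json" # assumed path

    expect(response.status).to eq(200)
    expect(response.parsed_body["drafts"].length).to eq(1) # key name is an assumption
  end
end
```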
implemented review items.
Blocking previous codes - a valid 2-factor auth token can only be authenticated once per 30 seconds (see the sketch at the end of this list).
I played with updating the "last used" timestamp any time a token was attempted, but that seemed like overkill and made it frustrating to work out why a token would fail.
Translatable texts.
Move second factor logic to a helper class.
Move second factor specific controller endpoints to its own controller.
Move serialization logic for 2-factor details in admin user views.
Add a login Ember component for de-duplication
Fix up code formatting
Change verbiage of Google Authenticator
add controller tests:
second factor controller tests
change email tests
change password tests
admin login tests
add qunit tests - password reset, preferences
fix: check for 2FA on change email controller
fix: email controller - only show second factor errors on attempt
fix: check against 'true' to enable second factor.
Add modal explaining what 2FA is, with links to Google Authenticator/FreeOTP
add two factor to email signin link
rate limit if second factor token present
add rate limiter test for second factor attempts
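The "only once per 30 seconds" rule can be pictured with ROTP's verification API (a sketch; that `data` holds the shared secret and that the last accepted timestamp lives in `last_used` are assumptions):

```ruby
require "rotp"

totp = ROTP::TOTP.new(user_second_factor.data) # assumption: data stores the TOTP secret

# `after:` rejects any code generated at or before the last accepted timestamp,
# so a given valid token can only authenticate once per 30-second window.
if (matched_at = totp.verify(token, drift_behind: 30, after: user_second_factor.last_used))
  user_second_factor.update!(last_used: Time.zone.at(matched_at))
  true
else
  false
end
```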
- Show bounce score on user admin page
- Added reset bounce score button on user admin page
- Only whitelisted email types are sent to email addresses with a high bounce score (see the sketch after this list)
- FIX: properly detect bounces even when there is no TO: header in the email
- Don't deactivate a user when they reach the bounce threshold
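A sketch of the kind of guard this implies (the whitelist and setting lookups are assumptions):

```ruby
# Hypothetical check before enqueueing a non-critical email.
CRITICAL_EMAIL_TYPES = %i[signup forgot_password] # assumed whitelist

def should_send_email?(user, email_type)
  return true if CRITICAL_EMAIL_TYPES.include?(email_type)
  user.user_stat.bounce_score < SiteSetting.bounce_score_threshold
end
```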
The "Show more notifications..." link in the notifications dropdown now
links to /my/notifications, which is a historical view of all
notifications you have received.
Notification history is loaded in blocks of 60 at a time.
Admins can see others' notification history. (This was requested for
'debugging purposes', though that's what impersonation is for, IMO.)
All flags should end up in one of three dispositions:
- Agree
- Disagree
- Defer
In the administration area, the *active* flags section displays 4 buttons:
- Agree (hide post + send PM)
- Disagree
- Defer
- Delete
Clicking "Delete" will open a modal that offer to
- Delete Post & Defer Flags
- Delete Post & Agree with Flags
- Delete Spammer (if available)
When the flag has an associated conversation, the list will now display 1
response and 1 reply, plus a "show more..." link if there are more in the
conversation. Replying to the conversation will NOT give a disposition.
Moderators must click the buttons that do that.
If someone clicks one of these buttons, a default moderator message from
that moderator will be added, saying what happened.
The *old* flags section now displays the proper dispositions and is
super duper fast (no more N+9999 queries).
FIX: the old list includes deleted topics
FIX: the lists now properly display the topic states (deleted, closed,
archived, hidden, PM)
FIX: flagging a topic whose first post you've already flagged