This is so users with a huge number of bookmarks do not have to wait a long time to see results.
* Add a bookmark list and list serializer on the server side to handle paging and the load-more URL
* Use the load-more component to load bookmark items 20 at a time in user activity
* Change the way the current user is loaded for bookmark Ember models because it was losing resolvedTimezone when loading more items
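A rough sketch of the paging shape this introduces, with illustrative class, column, and route names rather than the exact Discourse code: the server returns one page of bookmarks plus a URL the load-more component can fetch next.

```ruby
# Illustrative only: names and routes are assumptions, not the exact Discourse code.
class BookmarkList
  PER_PAGE = 20

  def initialize(user:, page: 0)
    @user = user
    @page = page
  end

  def bookmarks
    @bookmarks ||= begin
      rows = Bookmark
        .where(user_id: @user.id)
        .order(created_at: :desc)
        .offset(@page * PER_PAGE)
        .limit(PER_PAGE + 1) # one extra row tells us whether another page exists
        .to_a
      @has_more = rows.size > PER_PAGE
      rows.first(PER_PAGE)
    end
  end

  def more_bookmarks_url
    "/u/#{@user.username_lower}/activity/bookmarks.json?page=#{@page + 1}" if @has_more
  end
end
```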
Introduce the concept of "high priority notifications", which include PM and bookmark reminder notifications. Bookmark reminder notifications now act in the same way as PM notifications (they float to the top of the recent list and show in the green bubble), and most instances of unread_private_messages in the UI have been replaced with unread_high_priority_notifications.
The user email digest now has a section about unread high priority notifications; the unread PM section has been removed.
A high_priority boolean column has been added to the Notification table, with the relevant indexes added to account for it.
unread_private_messages has been kept on the User model purely for backwards compatibility, but it now just returns the unread_high_priority_notifications count, so this may cause some inconsistencies in the UI.
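A hedged sketch of the schema side of this, assuming illustrative migration and index names: a boolean flag plus a partial index so counting unread high priority notifications stays an indexed lookup.

```ruby
# Sketch only; the real migration and index names may differ.
class AddHighPriorityToNotifications < ActiveRecord::Migration[6.0]
  def change
    add_column :notifications, :high_priority, :boolean, null: false, default: false

    # Partial index keeps the "unread high priority" count cheap.
    add_index :notifications, [:user_id, :id],
              where: "high_priority AND NOT read",
              name: "idx_unread_high_priority_notifications"
  end
end

# unread_high_priority_notifications is then roughly:
# Notification.where(user_id: id, read: false, high_priority: true).count
```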
* DEV: Replace User.unstage and User#unstage API with User#unstage!
Quoting @SamSaffron:
> User.unstage mixes concerns of both unstaging users and updating params which is fragile/surprising.
> u.unstage destroys notifications and raises a user_unstaged event prior to the user becoming unstaged and the user object being saved.
User#unstage! no longer updates user attributes; it saves the object before triggering the `user_unstaged` event.
* Update one more spec
* Assign attributes after unstaging
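A rough before/after of the call-site pattern, with illustrative params handling; the point is that `unstage!` only flips staged state and fires `user_unstaged`, while attribute updates happen separately afterwards.

```ruby
# Before (sketch): User.unstage(params) unstaged the user AND applied attributes.

# After (sketch):
user = User.find_by_email(params[:email])
user.unstage! if user&.staged            # destroys notifications, fires :user_unstaged
user&.assign_attributes(name: params[:name], password: params[:password])
user&.save!
```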
Adds 3 config values that allow setting a custom provider of a Gravatar-like API, accessible from gravatar_base_url. gravatar_name is purely cosmetic, but helps associate a name with the service that actually provides the avatars. gravatar_login_url is a link relative to gravatar_base_url which takes the user to the login page of the avatar service.
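For illustration, pointing these settings at a Libravatar-style service might look like this (the values are made up):

```ruby
# Illustrative values only.
SiteSetting.gravatar_name = "Libravatar"
SiteSetting.gravatar_base_url = "https://seccdn.libravatar.org"
SiteSetting.gravatar_login_url = "/account/login"

# Avatar URLs are built from gravatar_base_url; the UI can link users to
# gravatar_base_url + gravatar_login_url and label it with gravatar_name.
```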
* PERF: Allow passing an existing list of user field ids when loading
This avoids the need for running `UserField.pluck(:id)` for each user that is serialized
* Memoize user_fields to avoid rebuilding the hash every time
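A minimal sketch of the idea, assuming illustrative serializer option and method names: pluck the ids once per request, pass them down, and memoize the per-user hash.

```ruby
# Sketch: compute the id list once instead of once per serialized user.
user_field_ids = UserField.pluck(:id)

serialized = users.map do |user|
  UserSerializer.new(user, scope: guardian, user_field_ids: user_field_ids)
end

# Inside the serializer, memoize so the hash is built at most once per instance:
def user_fields
  @user_fields ||= object.user_fields(@options[:user_field_ids])
end
```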
This is not used in core or official plugins, and has been printing a deprecation notice since v2.3.0beta4. All OpenID 2.0 code and dependencies have been dropped. The user_open_ids table remains for now, in case anyone has missed the deprecation notice, and needs to migrate their data.
Context at https://meta.discourse.org/t/-/113249
Previously you'd get a generic server-side error due to a password check
failing. Now the input element has a maxlength attribute and the server
side will respond with a nicer error message if the value is too long.
The following methods have long been deprecated in Ruby due to flaws in their implementation, per http://blade.nagaokaut.ac.jp/cgi-bin/vframe.rb/ruby/ruby-core/29293?29179-31097:
URI.escape
URI.unescape
URI.encode
URI.unencode
escape/encode are just aliases for one another. This PR uses the Addressable gem to replace these methods with its own encode, unencode, and encode_component methods where appropriate.
I have put all references to Addressable::URI into the UrlHelper to keep them corralled in one place, making future changes to this implementation easier.
Addressable is now also an explicit gem dependency.
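A hedged sketch of what the corralling looks like; the wrapper method names here are illustrative, not the exact UrlHelper API.

```ruby
require "addressable/uri"

# Illustrative wrapper; the real UrlHelper methods may be named differently.
module UrlHelper
  def self.escape_uri(uri)
    Addressable::URI.encode(uri)
  end

  def self.unescape_uri(uri)
    Addressable::URI.unencode(uri)
  end

  def self.encode_component(component)
    Addressable::URI.encode_component(component)
  end
end
```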
* Add timezone to user_options table
* Also migrate existing timezone values from UserCustomField,
which is where the discourse-calendar plugin is storing them
* Allow user to change their core timezone from Profile
* Auto guess & set timezone on login & invite accept & signup
* Serialize user_options.timezone for group members. This is so discourse-group-timezones can access the core user timezone, as it is being removed in discourse-calendar.
* Annotate user_option with timezone
* Validate timezone values
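A minimal sketch of the validation step, assuming the value is stored as an IANA zone name on user_options:

```ruby
# Sketch: accept only zone names ActiveSupport knows about (or nil).
class UserOption < ActiveRecord::Base
  validates :timezone,
            inclusion: { in: ActiveSupport::TimeZone.all.map { |tz| tz.tzinfo.name } },
            allow_nil: true
end
```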
* Fix user title logic when badge name customized
* Fix an issue where a user's title was not considered a badge granted title when the user used a badge for their title and the badge name was customized. This affected the effectiveness of revoke_ungranted_titles!, which only operates on badge_granted_titles.
* When a user's title is set now it is considered a badge_granted_title if the badge name OR the badge custom name from TranslationOverride is the same as the title
* When a user's badge is revoked we now also revoke their title if the user's title matches the badge name OR the badge custom name from TranslationOverride
* Add a user history log when the title is revoked to remove confusion about why titles are revoked
* Add granted_title_badge_id to user_profile; now when we set badge_granted_title on a user profile while updating a user's title based on a badge, we also remember which badge matched the title
* When badge name (or custom text) changes update titles of users in a background job
* When the name of a badge changes, or in the case of system badges when their custom translation text changes, we need to update the titles of all corresponding users who have a badge_granted_title and a matching granted_title_badge_id. In the case of system badges we first need to get the proper badge ID based on the translation key, e.g. badges.regular.name
* Add migration to backfill all granted_title_badge_ids for both normal badge name titles and titles using custom badge text.
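A hedged sketch of the matching rule: a title counts as badge granted when it equals either the badge's name or its customized name from TranslationOverride. The translation key format follows the badges.regular.name example above; the helper name and key construction are illustrative.

```ruby
# Sketch only; the real Discourse logic lives elsewhere and may differ in detail.
def title_matches_badge?(title, badge)
  return false if title.blank?

  custom_name = TranslationOverride
    .where(translation_key: "badges.#{badge.name.downcase.tr(' ', '_')}.name")
    .limit(1)
    .pluck(:value)
    .first

  [badge.name, custom_name].compact.include?(title)
end
```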
- Allow revoking keys without deleting them
- Auto-revoke keys after a period of no use (default 6 months)
- Allow multiple keys per user
- Allow attaching a description to each key, for easier auditing
- Log changes to keys in the staff action log
- Move all key management to one place, and improve the UI
Doing .pluck(:column).first is a very common pattern in Discourse and in
most cases, a limit clause isn't being added. Instead of adding a limit
clause to all these callsites, this commit adds two new methods to
ActiveRecord::Relation:
pluck_first, equivalent to limit(1).pluck(*columns).first
and pluck_first! which, like other finder methods, raises an exception
when no record is found
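A minimal sketch of the two helpers (the in-tree implementation may differ slightly):

```ruby
class ActiveRecord::Relation
  def pluck_first(*columns)
    limit(1).pluck(*columns).first
  end

  def pluck_first!(*columns)
    rows = limit(1).pluck(*columns)
    raise ActiveRecord::RecordNotFound if rows.empty?
    rows.first
  end
end

# Usage:
# User.where(username_lower: "sam").pluck_first(:id)
# User.where(username_lower: "nobody").pluck_first!(:id)  # raises if no row
```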
- Increase size of the reviewable's conversation excerpt to prevent truncation of the new copy
- Remove the `domain` parameter from the `flag_linked_posts_as_spam` method in the user model since it is no longer needed
- Remove the `domain` interpolation variable from all translation files
- Add "All posts from this user that include links should be reviewed." to server.en.yml for added clarity on why the posts entered the queue
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails "require_dependency" anywhere and instead can just use standard
Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
Adds a two-factor authentication method via second factor security keys over [WebAuthn](https://developer.mozilla.org/en-US/docs/Web/API/Web_Authentication_API).
Allows a user to authenticate a second factor on the login, login-via-email, admin-login, and change password routes. Adds a registration area within the existing user second factor preferences to register multiple security keys. Supports both external (YubiKey) and built-in (macOS / Android fingerprint reader) authenticators.
Previously, when activating a user via an external provider, the "this account is not activated" message would show on the first attempt, even though the account had been activated correctly.
All posts created by the user are counted unless they are deleted,
belong to a PM sent between a non-human user and the user or belong
to a PM created by the user which doesn't have any other recipients.
It also makes the guardian prevent self-deletes when SSO is enabled.
This is a low severity security fix because it requires a logged in
admin user to update a site setting via the API directly to an invalid
value.
The fix adds validation for the affected site settings, as well as a
secondary fix to prevent injection in the event that bad data somehow
already exists.
Previously we used custom fields to denote that a user was anonymous. This was
risky in that custom fields are prone to race conditions and are not a
properly dedicated store, missing constraints and so on.
The new table `anonymous_users` is properly protected. There is only one
possible shadow account per user, which is enforced using a constraint.
Every anonymous user will have a unique row in the new table.
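A hedged sketch of what the table and its uniqueness guarantees could look like; the column names are illustrative (one column references the shadow account, the other the real account behind it).

```ruby
# Sketch only; the real schema may differ.
class CreateAnonymousUsers < ActiveRecord::Migration[5.2]
  def change
    create_table :anonymous_users do |t|
      t.integer :user_id, null: false         # the anonymous (shadow) account
      t.integer :master_user_id, null: false  # the real account behind it
      t.boolean :active, null: false
      t.timestamps null: false
    end

    # One row per shadow account, and at most one active shadow per real user.
    add_index :anonymous_users, :user_id, unique: true
    add_index :anonymous_users, :master_user_id, unique: true, where: "active"
  end
end
```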
We were blocking user registrations with the same username and password,
but allowing usernames to be changed to be the same as the password later.
Also disallow names that are the same as the password.
This reduces chances of errors where consumers of strings mutate inputs
and reduces memory usage of the app.
Test suite passes now, but there may be some stuff left, so we will run
a few sites on a branch prior to merging
This removes all uses of both `send` and `public_send` from consumers of
SiteSetting and instead introduces a `get` helper for dynamic lookup
This leads to much cleaner and safer code long term, as we always
explicitly test that a site setting really exists before sending an
arbitrary string to the class
It also removes a couple of risky stubs from the auth provider test
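A rough sketch of the helper; the error class and the exact existence check are illustrative.

```ruby
# Sketch: a dynamic lookup that refuses unknown setting names.
class SiteSetting
  def self.get(name)
    unless respond_to?(name)
      raise ArgumentError, "unknown site setting: #{name}"
    end
    public_send(name)
  end
end

# Call sites change from SiteSetting.public_send(dynamic_name)
# to SiteSetting.get(dynamic_name).
```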
`Upload#url` can change from time to time. When it does change, we don't
want to have to look through multiple tables to ensure that the URLs are
all up to date. Instead, we simply associate uploads properly to
`UserProfile` so that the URLs do not have to be replicated in that table.
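A hedged sketch of the association-based approach (the association and column names are illustrative):

```ruby
# Sketch: store upload ids, not URLs, so a changed Upload#url never goes stale.
class UserProfile < ActiveRecord::Base
  belongs_to :card_background_upload, class_name: "Upload", optional: true
  belongs_to :profile_background_upload, class_name: "Upload", optional: true
end

# Rendering reads profile.card_background_upload&.url at display time instead
# of copying the URL into user_profiles.
```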
Minor fixes to add Rails 6 support to Discourse; we now boot
with RAILS_MASTER=1 and all specs pass.
Only one tiny deprecation is left.
The largest change was the way ActiveModel::Errors changed its interface a
bit, but there is a simple backwards-compatible way of working with it.
"Rejecting" a user in the queue is equivalent to deleting them, which
would then make it impossible to review rejected users. Now we store
information about the user in the payload so if they are deleted things
still display in the Rejected view.
Secondly, if a user is destroyed outside of the review queue, it will
now automatically "Reject" that queue item.
Conversely, if a user is deactivated the reviewable should automatically
be rejected.
Before this fix, if a user was not active they'd still show in the
review queue but without an "Approve" button which was confusing.
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
This corrects 2 issues:
First is a regression with d7c08e21: for some reason, dependent: :delete_all
respects default scopes, whereas dependent: :destroy bypasses them.
Secondly, we were keeping orphan user actions around on user destroy; this
ensures we remove all user actions related to the user, not only the ones
that originated from the user.
So, for example: if I like a post by user A, we create a user action saying I
did that, but once user A was deleted we were not removing that action,
leaving an orphan action in the database.
Users can have hundreds of thousands of post and user actions; we do not want
to destroy each one individually because the callback tracking and the number
of queries we would need would be enormous.
This gives up on the `after_commit` hook on `post_actions` which ships a message
to clients to synchronize a post, so some phantom post_actions may remain
in the UX in the rare occasion we delete a user. The phantoms will be gone
on reload.
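A hedged sketch of the bulk cleanup shape described above (the column names and scopes are illustrative): one delete per table rather than instantiating every record, including an unscoped pass so rows hidden by a default scope go too.

```ruby
# Sketch only; the real UserDestroyer logic is more involved.
UserAction
  .where("user_id = :id OR acting_user_id = :id OR target_user_id = :id", id: user.id)
  .delete_all

PostAction.unscoped.where(user_id: user.id).delete_all
```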
Changes to functionality:
- Removed syncing of user metadata including gender, location etc.
These are no longer available to standard Facebook applications.
- Removed the remote 'revoke' functionality. No other providers have
it, and it does not appear to be standard practice in other apps.
- The 'facebook_no_email' event is no longer logged. The system can
cope fine with a missing email address.
Data is migrated to the new user_associated_accounts table.
facebook_user_infos can be dropped once we are confident the data has
been migrated successfully.
At the moment core providers are hard-coded in JavaScript, and plugin providers get added to the JS payload at compile time. This refactor means that we only ship enabled providers to the client.
Introduce new patterns for direct sql that are safe and fast.
MiniSql is not prone to memory bloat that can happen with direct PG usage.
It also has an extremely fast materializer and a very convenient API:
- DB.exec(sql, *params) => runs sql returns row count
- DB.query(sql, *params) => runs sql returns usable objects (not a hash)
- DB.query_hash(sql, *params) => runs sql returns an array of hashes
- DB.query_single(sql, *params) => runs sql and returns a flat one dimensional array
- DB.build(sql) => returns a sql builder
See more at: https://github.com/discourse/mini_sql
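Example usage of those helpers (the queries themselves are illustrative):

```ruby
DB.exec("UPDATE users SET trust_level = :tl WHERE id = :id", tl: 2, id: 1)

rows = DB.query("SELECT id, username FROM users ORDER BY id LIMIT 3")
rows.first.username          # materialized objects, not hashes

ids = DB.query_single("SELECT id FROM users ORDER BY id LIMIT 5")  # flat array

builder = DB.build("SELECT id, title FROM topics /*where*/")
builder.where("user_id = ?", 1)
builder.query
```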
This updates tests to use the latest Rails 5 practices
and updates ALL dependencies that could be updated.
Performance testing shows that performance has not regressed;
if anything, it is marginally faster now.
- remove inactive user report and replace with posts
- clean up internals so grouping by week happens on client
- when switching periods old report was not destroyed leading to bugs
- calculate trend based on previous interval ... not previous 30 days
- show percentages for mau/dau
- be more careful about utc date usage
- show uniques and click-through rate on search panel
- publish key of report with report so we only load the correct one
- subscribe earlier in channel in case of concurrency issues
* Feature: Push notifications for Android
Notification config for desktop and mobile are merged.
Desktop notifications stay as they are for desktop views.
If mobile mode, push notifications are enabled.
Added push notification subscriptions in their own table, rather than through
custom fields.
Notification banner prompts appear for both mobile and desktop when enabled.
* This looks like we're doing the same thing but
we're debugging a race condition where a user
can be created without an email record. Therefore,
we prefer the more obvious method of assigning an
association.
Implemented review items.
Blocking previous codes - valid 2-factor auth tokens can only be authenticated once per 30-second window (see the sketch after this list).
I played with updating the "last used" timestamp any time a token was attempted, but that seemed like overkill and would make it frustrating to figure out why a token failed.
Translatable texts.
Move second factor logic to a helper class.
Move second factor specific controller endpoints to its own controller.
Move serialization logic for 2-factor details in admin user views.
Add a login ember component for de-duplication
Fix up code formatting
Change verbiage of Google Authenticator
add controller tests:
second factor controller tests
change email tests
change password tests
admin login tests
add qunit tests - password reset, preferences
fix: check for 2factor on change email controller
fix: email controller - only show second factor errors on attempt
fix: check against 'true' to enable second factor.
Add modal explaining what 2FA is, with links to Google Authenticator/FreeOTP
add two factor to email signin link
rate limit if second factor token present
add rate limiter test for second factor attempts
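A hedged sketch of the "authenticate once per 30 seconds" rule from the top of this list, using the rotp gem's reuse protection; the record fields here are illustrative, not the exact Discourse schema.

```ruby
require "rotp"

# Sketch: a valid code is rejected if its time step was already consumed.
def authenticate_totp(record, token)
  totp = ROTP::TOTP.new(record.data)

  matched_at = totp.verify(token, drift_behind: 30, after: record.last_used_at&.to_i)
  return false unless matched_at

  record.update!(last_used_at: Time.zone.at(matched_at))
  true
end
```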
Setting a user's default groups based on their email address should only be done once, i.e. when they confirm their email address.
Previously we were doing this every time we'd save a user record 🤷
Revamped system for managing authentication tokens.
- Every user has 1 token per client (web browser)
- Tokens are rotated every 10 minutes
The new system migrates the old tokens to "legacy" tokens,
so users remain logged on.
Also introduces a weekly job to expire old auth tokens.
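A hedged sketch of the rotation idea (column and method names are illustrative): each browser holds one token row, the secret rotates every 10 minutes, and the previous value stays valid briefly so in-flight requests don't get logged out.

```ruby
require "digest"
require "securerandom"

# Sketch only; the real UserAuthToken model differs in detail.
class UserAuthToken < ActiveRecord::Base
  ROTATE_EVERY = 10.minutes

  def self.lookup(raw_token)
    hashed = Digest::SHA256.hexdigest(raw_token)
    where("auth_token = :h OR prev_auth_token = :h", h: hashed).first
  end

  def rotate!
    return false if rotated_at > ROTATE_EVERY.ago

    raw = SecureRandom.hex(16)
    update!(prev_auth_token: auth_token,
            auth_token: Digest::SHA256.hexdigest(raw),
            rotated_at: Time.zone.now)
    raw # the unhashed value goes back into the session cookie
  end
end
```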
also publish seen_notification_id so we can tell what is new and what is old
clean up controller so it correctly checks the user
fix bug around clearing notifications when people click mark read
This feature ensures the session cookie lifespan is extended
while the user is online.
Also decreases session timeout from 90 to 60 days.
Ensures all users (including already logged-on ones) get expiring sessions.