Prior to this change, we had weights for very_high, high, low and
very_low. That meant there were four weights to tweak, and it was hard to
explain which weights to use for the `very_high`/`high` and `very_low`/`low` pairs.
This change makes it so that `very_high` search priority always ensures that
posts are ranked at the top, while `very_low` search priority ensures that
posts are ranked at the very bottom.
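A rough sketch of the intended ordering, assuming a numeric `search_priority` value on categories; the values, weights, and query shape below are illustrative rather than the actual implementation:
```
# Hypothetical sketch: pin very_high to the top and very_low to the bottom,
# and let the remaining priorities compete on the normal weighted rank.
priority_bucket = <<~SQL
  CASE
    WHEN categories.search_priority = 5 THEN 2 -- assumed value for very_high
    WHEN categories.search_priority = 1 THEN 0 -- assumed value for very_low
    ELSE 1
  END
SQL

Post
  .joins(topic: :category)
  .order(Arel.sql("#{priority_bucket} DESC"))
  .order("posts.like_count DESC") # stand-in for the real rank expression
```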
* FEATURE: Do not delete invite if link was copied
* FIX: Show error to user if invite redeeming fails
The error was only displayed in the console.
* UX: Better placement of bulk buttons
Destroy all expired invites should be on the expired tab, not pending.
* FIX: Ensure invited_groups is unique per invite and group
* FIX: Do not refresh topic list if title unchanged
* FIX: Do not close modal on enter
This interferes with the group and topic chooser.
Wrapping everything in a form disables this behavior.
* FIX: Move link and email options outside advanced section
* FIX: Do not close modal if saving a link invite
User may still want to copy the link.
- removes the option from site settings
- deletes the site setting on existing sites that have it
- marks posts using emojis as requiring a rebake
Note that the actual image files are not removed here, the plan is to
remove them in a few weeks/months (when presumably the rebaking of old
posts has been completed).
This commit includes various other improvements to watched words.
The auto_silence_first_post_regex site setting was removed because it overlapped
with 'require approval' watched words.
* FIX: optimise MoveNewSinceToTable
Avoids shuffling all ids back to the app (only the min/max boundaries are used).
Ensures the query for boundaries is ordered by user_id.
The bug was mentioned on meta: https://meta.discourse.org/t/pressing-dismiss-new-doesnt-clear-new-topics/179858
The problem is that sometimes the user has TopicUser records with `last_read_post_number` set to NULL. In that case, the topic is still "new" to them and should be dismissed when they click the dismiss button.
In addition, I added that condition to the post_migration and bumped the number to fix existing records. The migration is written to be idempotent, so it will do no harm to already-deployed instances.
Original PR was reverted because of broken migration https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line
```
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it is left joining a limited number of topics. I tested it on a few databases and it worked quite smoothly.
This migration is quite heavy because of the join to all potential topics which should be `dismissed` for each user. To make it a bit more efficient I did two things:
- move the conditions into the join so it touches fewer rows
- do the work in batches of 1,000 users at a time
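A rough sketch of that batching pattern; the table and column names here (`dismissed_topic_users`, `user_stats.new_since`) are assumptions for illustration, not taken verbatim from the migration:
```
# Illustrative only: walk users in 1,000-id slices using just the MIN/MAX
# boundaries, keeping the filtering condition inside the JOIN so each batch
# materialises as few rows as possible.
min_id, max_id = DB.query_single("SELECT MIN(user_id), MAX(user_id) FROM user_stats")

if min_id
  (min_id..max_id).step(1000) do |start_id|
    DB.exec(<<~SQL, low: start_id, high: start_id + 999)
      INSERT INTO dismissed_topic_users (user_id, topic_id, created_at)
      SELECT us.user_id, t.id, NOW()
      FROM user_stats us
      JOIN topics t ON t.created_at >= us.new_since -- condition lives in the JOIN
      WHERE us.user_id BETWEEN :low AND :high
      ON CONFLICT DO NOTHING
    SQL
  end
end
```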
Follow up https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives exact topic ids which should be marked as `seen`.
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.
This commit aims to:
- Rename `sso_` site settings. DiscourseConnect specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations
This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately
The risks are:
- There is no backwards compatibility for site settings on the client-side. Accessing auth-related site settings in JavaScript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.
A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
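A sketch of what the `site_settings` copy presumably looks like; only two of the renames are shown and the exact mapping is abbreviated:
```
def up
  renames = {
    "sso_url"    => "discourse_connect_url",
    "enable_sso" => "enable_discourse_connect",
  }

  renames.each do |old_name, new_name|
    execute <<~SQL
      INSERT INTO site_settings (name, data_type, value, created_at, updated_at)
      SELECT '#{new_name}', data_type, value, NOW(), NOW()
      FROM site_settings
      WHERE name = '#{old_name}'
        AND NOT EXISTS (SELECT 1 FROM site_settings WHERE name = '#{new_name}')
    SQL
  end
end
```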
Because of a WHERE clause of `duration_minutes != duration` (which never matches
when duration_minutes is NULL), the previous migration to fill the new
duration_minutes column failed to update those rows. This corrects the failed
migration by running the update only where duration_minutes IS NULL and duration
IS NOT NULL.
Previous commit is 4af77f1
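A sketch of the corrective update; the `* 60` conversion assumes the old value was stored in hours, glossing over the hours-vs-days ambiguity described in the next entry:
```
# Only touch the rows the earlier migration skipped.
execute <<~SQL
  UPDATE topic_timers
  SET duration_minutes = duration * 60 -- conversion factor is an assumption
  WHERE duration_minutes IS NULL
    AND duration IS NOT NULL
SQL
```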
This PR allows entering a float value for topic timers, e.g. 0.5 for 30 minutes when entering hours, or 0.5 for 12 hours when entering days. This is achieved by adding a new column to store the duration of a topic timer in minutes, instead of the ambiguous hours-or-days value it could hold before.
This PR has omitted the post migration to delete the duration column in topic timers; it will be done in a subsequent PR to ensure that no data is lost if the UPDATE query to set duration_minutes fails.
I have to keep the old duration keyword in set_or_create_topic_timer for backwards compatibility, and will remove it at a later date after plugins are updated.
This is an attempt to simplify the logic around dismissing new topics, so one solution works in all places: dismiss all new, dismiss new in a specific category, or even in a specific tag.
Adds a new column/setting to groups, allow_unknown_sender_topic_replies, which is default false. When enabled, this scenario is allowed via IMAP:
* OP sends an email to the support email address which is synced to a group inbox via IMAP, creating a group topic
* Group user replies to the group topic
* An email notification is sent to the OP of the topic via GroupSMTPMailer
* The OP has several email accounts and the reply is sent to all of them, or they forward their reply to another email account
* The OP replies from a different email address than the one they originally used (e.g. gloria@gmail.com instead of gloria@hey.com)
* A new staged user is created, the reply is accepted and added to the topic, and the staged user is added to the topic's allowed users
Without allow_unknown_sender_topic_replies enabled the new reply creates an entirely new topic (because the email address it is sent from is not previously part of the topic email chain).
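The schema change itself is presumably as small as this sketch:
```
# New group column, defaulting to off.
add_column :groups, :allow_unknown_sender_topic_replies,
           :boolean, default: false, null: false
```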
This PR adds security_last_changed_at and security_last_changed_reason to uploads. This has been done to make it easier to track down why an upload's secure column has changed and when. This necessitated a refactor of the UploadSecurity class to provide reasons why the upload security would have changed.
As well as this, a source is now provided from the location which called for the upload's security status to be updated, as there are several (e.g. post creator, topic security updater, rake tasks, manual change).
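A sketch of the new columns and of how a caller might record a change; the reason string and the update call are illustrative, not the real UploadSecurity API:
```
add_column :uploads, :security_last_changed_at, :datetime
add_column :uploads, :security_last_changed_reason, :string

# e.g. in a caller such as the post creator (should_be_secure and the reason
# string are stand-ins for whatever UploadSecurity actually returns):
upload.update!(
  secure: should_be_secure,
  security_last_changed_at: Time.zone.now,
  security_last_changed_reason: "access control post created | source: post creator"
)
```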
Run the 'MigrateSearchDataAfterDefaultLocaleRename' post migration in batches of 500k records.
This will hopefully prevent any potential deadlocks on large tables.
This adds a new table, UserNotificationSchedules, which stores Monday-Friday start and end times at which each user would like to receive notifications (with a boolean `enabled` flag to disable the schedule). There is then a background job that runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance. The job is designed so that it can be run at any point in time, and it will not create duplicate records.
When a users saves their notification schedule, the schedule processing service will run and schedule do_not_disturb_timings. If the user should be in DND due to their schedule, the user will immediately be put in DND (message bus publishes this state).
The UI for a user's notification schedule is in user -> preferences -> notifications. By default every day is 8am - 5pm when first enabled.
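A rough sketch of the schema, with one start/end pair per day stored as minutes since midnight; the exact column names and defaults are assumptions:
```
create_table :user_notification_schedules do |t|
  t.references :user, null: false
  t.boolean :enabled, null: false, default: false
  # 480 = 8:00, 1020 = 17:00, matching the default 8am - 5pm window
  t.integer :day_0_start_time, null: false, default: 480
  t.integer :day_0_end_time, null: false, default: 1020
  # ...repeated for day_1 through day_6
  t.timestamps null: false
end
```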
This should make it easier to track down how the incoming email was created, which will be one of four locations:
The POP3 poller (which picks up reply-via-email messages)
The admin email controller #handle_mail (which is where hosted mail is sent)
The IMAP sync tool
The group SMTP mailer, which sends emails when replying to IMAP topics, pre-emptively creating IncomingEmail records to avoid double syncing
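One plausible way to record this is an integer `created_via` column with one value per origin; the mapping below is illustrative:
```
class IncomingEmail < ActiveRecord::Base
  # Illustrative values; the real ones may differ.
  enum created_via: {
    unknown: 0,
    handle_mail: 1,  # admin email controller
    pop3_poll: 2,    # POP3 poller picking up reply-via-email messages
    imap: 3,         # IMAP sync tool
    group_smtp: 4,   # group SMTP mailer
  }
end
```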
Feature for the `Must Approve Users` setup. When a user is rejected, a staff member can optionally set a reason for audit purposes. In addition, a feedback email can be sent to the user.
Meta: https://meta.discourse.org/t/account-rejection-email/103112/8
This PR adds functionality for the IMAP sync code to detect if a UID that is missing from the mail group mailbox is in the Spam/Junk folder for the mail account, and if so delete the associated Discourse topic. This is identical to what we do for emails that are moved to Trash.
If an email is missing but not in Spam or Trash, then we mark the incoming email record with imap_missing: true. This may be used in future to further filter or identify these emails, and perhaps go hunting for them in the email account in bulk.
Note: This adds some code duplication because the trash and spam email detection and handling is very similar. I intend to do more refactors/improvements to the IMAP sync code in time because there is a lot of room for improvement.
These 2 indexes optimise performance on profile pages.
The summary page displays:
1. A list of "Top Links" - links posted by the user, sorted by number of clicks
2. A list of "Top Replies" - replies made by the user that received the most likes
These two areas could devolve into full index or table scans; the new indexes are there to avoid this cost on large databases.
One minor downside is that storage requirements go up a tiny bit to maintain the new indexes.
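The indexes presumably look something like the following; the tables and columns chosen here are assumptions based on the two lists described above:
```
# Top Links: links posted by a user, ordered by clicks.
add_index :topic_links, [:user_id, :clicks], order: { clicks: :desc }

# Top Replies: a user's posts ordered by like count.
add_index :posts, [:user_id, :like_count], order: { like_count: :desc }
```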
There have been production instances where the migration has run but the records
that were meant to be deleted have not been deleted. The root cause is
unknown, but the migration is safe and simple to re-run. The problem is
not reproducible locally, so we're not spending too much time on digging
up the root cause; it's a time vs. business cost tradeoff.
Follow-up fb15da43da
This will only happen if the user_associated_accounts table is currently empty. It's useful for people that may be depending on primary key values in data explorer queries. This change will only have an effect on sites which have not already run this migration.
Themes marked for auto update will be automatically updated when
Discourse is updated. This is triggered by discourse_docker or
docker_manager running Rake task 'themes:update'.
* FIX: Store Reviewable's force_review as a boolean.
Using the `force_review` flag raises the score to hit the minimum visibility threshold. This strategy turned out to be ineffective on sites with a high number of flags, where these values could rapidly fluctuate.
This change adds a `force_review` column on the reviewables table and modifies the `Reviewable#list_for` method to show these items when passing the `status: :pending` option, even if the score is not high enough. ReviewableQueuedPosts and ReviewableUsers are always created using this option.
This commit adds an additional find_user_by_email hook to ManagedAuthenticator so that GitHub login can continue to support secondary email addresses
The github_user_infos table will be dropped in a follow-up commit.
This is the last core authenticator to be migrated to ManagedAuthenticator 🎉
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
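A sketch of the data fix plus the constraint; the backfill interval is an assumption:
```
# Give any corrupt rows an expiry, then enforce NOT NULL at the DB level.
execute <<~SQL
  UPDATE ignored_users
  SET expiring_at = created_at + INTERVAL '4 months' -- assumed default duration
  WHERE expiring_at IS NULL
SQL

change_column_null :ignored_users, :expiring_at, false
```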
This commit adds a site setting `auto_close_topics_create_linked_topic`
which when enabled works in conjunction with `auto_close_topics_post_count`
setting and creates a new linked topic for the topic just closed.
The auto-created new topic contains a link for all the previous topics
and the topic titles are appended with `(Part {n})`.
The setting is enabled by default.
Adds a new slow mode for topics that are heating up. Users will have to wait for a period of time before being able to post again.
We store this interval inside the topics table and track the last time a user posted using the last_posted_at datetime in the TopicUser relation.
The dependency on gifsicle, along with the allow_animated_avatars and
allow_animated_thumbnails site settings, was removed. Animated GIF images are
still allowed, but the generated optimized images (which were used for avatars
and thumbnails) are no longer animated.
The added 'animated' column is populated by extracting information using FastImage.
This field is used to selectively reoptimize old animations, a process that
happens in the background.
Now that we have support for user-selectable color schemes, it makes sense
to simplify seeding and theme updates in the wizard.
We now:
- seed only one theme, named "Default" (previously "Light")
- seed a user-selectable Dark color scheme
- rename the "Themes" wizard step to "Colors"
- update the default theme's color scheme if a default is set
(a new theme is created if there is no default)
This code is a no-op on all sites: even though it looks rather dangerous, this
migration had long since run before anyone tried to exploit it.
That said, the hygiene here is not good.
Remove this legacy code; we do not want it, even in historical migrations.
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even if there are no topics to send. The digest job would exit, no email would send, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.
120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.
This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
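A sketch of how candidates might now be selected; the query shape, interval, and limit are illustrative rather than the exact job:
```
# Prefer users whose digest was attempted longest ago (or never), and skip
# anyone already attempted within the current enqueue interval.
user_ids = UserStat
  .where("digest_attempted_at IS NULL OR digest_attempted_at < ?", 30.minutes.ago)
  .order(Arel.sql("digest_attempted_at ASC NULLS FIRST"))
  .limit(10_000)
  .pluck(:user_id)
```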
See https://meta.discourse.org/t/changing-a-users-email/164512 for additional context.
Previously, when an admin user changed a user's email we assumed that they would need a password reset too because they likely did not have access to their account. This proved to be incorrect, as there are other reasons a user might need an admin to change their email. This PR:
* Changes the admin change email for user flow so the user is sent an email to confirm the change
* We now record who the email change request was requested by
* If the requested by user is admin and not the user we note this in the email sent to the user
* We also make the confirm change email route open to anonymous users, so it can be clicked by the user even if they do not have access to their account. If there is a logged in user we make sure the confirmation matches the current user.
The reviewable was updated despite the user not being approved because a `u.id = r.target_id` condition was missing. It only affected user reviewables that were pending when the migration ran. Users were not auto-approved.
This PR removes the user reminder topic timers, because that system has been supplanted and improved by bookmark reminders. The option is removed from the UI and all existing user reminder topic timers are migrated to bookmark reminders.
The migration does the following (a rough sketch follows the list):
* Get all topic_timers with status_type 5 (reminders)
* Gets all bookmarks where the user ID and topic ID match
* Loops through the found topic timers
* If there is no bookmark for the OP of the topic, then we just create a bookmark with a reminder
* If there is a bookmark for the OP of the topic and it does **not** have a reminder set, then just
update it with the topic timer reminder
* If there is a bookmark for the OP of the topic with a reminder then just discard the topic timer
* Cancels all outstanding user reminder topic timers
* **Trashes (not deletes) all user reminder topic timers**
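A very rough sketch of the mapping above; the model calls, the `status_type` value, and the bookmark lookup are simplified assumptions:
```
TopicTimer.where(status_type: 5).find_each do |timer| # 5 = user reminder (assumed)
  op = timer.topic&.first_post
  next if op.nil?

  bookmark = Bookmark.find_by(user_id: timer.user_id, post_id: op.id)

  if bookmark.nil?
    Bookmark.create!(user_id: timer.user_id, post_id: op.id, reminder_at: timer.execute_at)
  elsif bookmark.reminder_at.nil?
    bookmark.update!(reminder_at: timer.execute_at)
  end
  # otherwise the bookmark already has a reminder and the timer is simply discarded

  timer.trash! # soft delete so the rows can be recovered if something goes wrong
end
```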
Notes:
* For now I have left the user reminder topic timer job class in place; this is so the jobs can be cancelled in the migration. It and the specs will be deleted in the next PR.
* At a later date I will write a migration to delete all trashed user topic timers. They are not deleted here in case there are data issues and they need to be recovered.
* A future PR will change the UI of the topic timer modal to make it look more like the bookmark modal.
On the topic view route we query for reviewables of each post in the stream,
using a query that filters on two unindexed columns. This results in a Parallel Seq Scan
over all rows, which can take quite some time (~20ms was seen) on forums with lots of flags.
After the index is added, the PostgreSQL planner opts for a simple Index Scan and runs in under 1ms.
Before:
```
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize GroupAggregate (cost=11401.08..11404.87 rows=20 width=28) (actual time=19.209..19.209 rows=1 loops=1)
Group Key: r.target_id
-> Gather Merge (cost=11401.08..11404.41 rows=26 width=28) (actual time=19.202..20.419 rows=1 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial GroupAggregate (cost=10401.06..10401.38 rows=13 width=28) (actual time=16.958..16.958 rows=0 loops=3)
Group Key: r.target_id
-> Sort (cost=10401.06..10401.09 rows=13 width=16) (actual time=16.956..16.956 rows=0 loops=3)
Sort Key: r.target_id
Sort Method: quicksort Memory: 25kB
Worker 0: Sort Method: quicksort Memory: 25kB
Worker 1: Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=0.42..10400.82 rows=13 width=16) (actual time=15.894..16.938 rows=0 loops=3)
-> Parallel Seq Scan on reviewables r (cost=0.00..10302.47 rows=8 width=12) (actual time=15.882..16.927 rows=0 loops=3)
Filter: (((target_type)::text = 'Post'::text) AND (target_id = ANY ('{7565483,7565563,7565566,7565567,7565568,7565569,7565579,7565580,7565583,7565586,7565588,7565589,7565601,7565602,7565603,7565613,7565620,7565623,7565624,7565626}'::integer[])))
Rows Removed by Filter: 49183
-> Index Scan using index_reviewable_scores_on_reviewable_id on reviewable_scores s (cost=0.42..12.27 rows=2 width=8) (actual time=0.029..0.030 rows=1 loops=1)
Index Cond: (reviewable_id = r.id)
Planning Time: 0.318 ms
Execution Time: 20.470 ms
```
After:
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GroupAggregate (cost=0.84..342.54 rows=20 width=28) (actual time=0.038..0.038 rows=1 loops=1)
Group Key: r.target_id
-> Nested Loop (cost=0.84..341.95 rows=31 width=16) (actual time=0.020..0.033 rows=1 loops=1)
-> Index Scan using index_reviewables_on_target_id on reviewables r (cost=0.42..96.07 rows=20 width=12) (actual time=0.013..0.026 rows=1 loops=1)
Index Cond: (target_id = ANY ('{7565483,7565563,7565566,7565567,7565568,7565569,7565579,7565580,7565583,7565586,7565588,7565589,7565601,7565602,7565603,7565613,7565620,7565623,7565624,7565626}'::integer[]))
-> Index Scan using index_reviewable_scores_on_reviewable_id on reviewable_scores s (cost=0.42..12.27 rows=2 width=8) (actual time=0.005..0.005 rows=1 loops=1)
Index Cond: (reviewable_id = r.id)
Planning Time: 0.253 ms
Execution Time: 0.067 ms
```
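The added index is presumably just a single-column index on target_id, matching the `index_reviewables_on_target_id` name in the plan above (a sketch, not the exact migration):
```
add_index :reviewables, :target_id
```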
The problem with this index is that on sites with a high non-pm to pm
posts ratio, the index essentially duplicates the existing index on
`PostSearchData#search_data`. If the site is huge, the index ends up
taking up even more disk space.
Very large batches can take an enormous amount of time due to churn.
Limiting to 200k changes at a time gives us a far better chance of finishing
the job without timing out or deadlocking.
Enabling the moderators_manage_categories_and_groups site setting will allow moderator users to create/manage groups.
* show New Group form to moderators
* Allow moderators to update groups and read logs, where appropriate
* Rename site setting from create -> manage
* improved tests
* Migration should rename old log entries
* Log group changes, even if those changes mean you can no longer see the group
* Slight reshuffle
* Route to /g if they no longer have permission to view the group
A giant transaction in a post migration can be very risky.
This splits the large amount of work this migration needs to do into 2 parts:
1. A re-runnable cleanup job prior to transaction
2. A minimally sized transaction to add the database constraint
This avoids large amounts of churn on the table
Convert all IMAP logging to write to a database table for easier inspection. These logs are cleaned up daily if they are > 5 days old.
Logs can easily be watched in dev by setting `DISCOURSE_DEV_LOG_LEVEL="debug"` and running `tail -f development.log | grep IMAP`
When we run the S3 inventory, mark uploads that exist as verified true, those that don't as verified false, and uploads not included in the check / not yet checked as verified nil.
* FEATURE: set notification levels when added to a group
This feature allows admins and group owners to define default
category and tag tracking levels that will be applied to user
preferences automatically at the time when users are added to the
group. Users are free to change those preferences afterwards.
When removed from a group, the user's notification preferences aren't
changed.
Adds an imap_group_id column to IncomingEmail to deal with an issue where we were trying to update emails in the mailbox by calling IncomingEmail.where(imap_sync: true). However, UID and UIDVALIDITY could be the same across accounts, so if group A used IMAP details for Gmail account A, and group B used IMAP details for Gmail account B, and both tried to sync changes to an email with a UID of 3 (e.g. changing labels), one account could affect the other. This even applied to archiving!
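With the new column, the sync query can be scoped per group, e.g. (a sketch):
```
# Before: any group's IMAP-synced emails could match by UID/UIDVALIDITY alone.
IncomingEmail.where(imap_sync: true)

# After: restrict updates to the group whose mailbox is actually being synced.
IncomingEmail.where(imap_sync: true, imap_group_id: group.id)
```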
Also in this PR:
* Fix error occurring if we do a uid_fetch and no emails are returned
* Allow for creating labels within the target mailbox (previously we would not do this, only use existing labels)
* Improve consistency for log messages
* Add specs for generic IMAP provider (Gmail specs still to come)
* Add custom archiving support for Gmail
* Only use Message-ID for uniqueness of IncomingEmail if it was generated by us
* Various refactors and improvements
For the following conditions, the TopicUser.bookmarked column was not updated correctly:
* When a bookmark was auto-deleted because the reminder was sent
* When a bookmark was auto-deleted because the owner of the bookmark replied to the topic
This adds another migration to fix the out-of-sync column and also some refactors to BookmarkManager to allow for more of these delete cases. BookmarkManager is used instead of directly destroying the bookmark in PostCreator and BookmarkReminderNotificationHandler.
Because the previous migration was already deployed and some databases were already migrated, I needed to add some conditions to this migration.
Previous migration - https://github.com/discourse/discourse/blob/master/db/post_migrate/20200629232159_rename_path_whitelist_to_allowed_paths.rb
What will happen when the previous migration was not run:
1. the allowed_paths column will be created
2. allowed_paths will be populated with data from path_whitelist
3. the path_whitelist column will be dropped
What will happen when the previous migration was already run:
1. the allowed_paths column will not be created because it already exists - `unless column_exists?(:embeddable_hosts, :allowed_paths)`
2. data will not be copied because path_whitelist is missing - `if column_exists?(:embeddable_hosts, :path_whitelist) && column_exists?(:embeddable_hosts, :allowed_paths)`
3. path_whitelist column deletion will be skipped - `if column_exists?(:embeddable_hosts, :path_whitelist)`
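Putting the quoted guards together, the migration presumably reads roughly like this:
```
def up
  unless column_exists?(:embeddable_hosts, :allowed_paths)
    add_column :embeddable_hosts, :allowed_paths, :string
  end

  if column_exists?(:embeddable_hosts, :path_whitelist) &&
     column_exists?(:embeddable_hosts, :allowed_paths)
    execute "UPDATE embeddable_hosts SET allowed_paths = path_whitelist"
  end

  if column_exists?(:embeddable_hosts, :path_whitelist)
    remove_column :embeddable_hosts, :path_whitelist
  end
end
```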
This adds an option to "delete on owner reply" to bookmarks. If you select this option in the modal, then reply to the topic the bookmark is in, the bookmark will be deleted on reply.
This PR also changes the checkboxes for these additional bookmark options to an Integer column in the DB with a combobox to select the option you want.
The use cases are:
* Sometimes I bookmark a topic to read it later. In this case we definitely don't need to keep the bookmark after I have replied to it.
* Sometimes I read a topic on mobile and prefer to reply on a PC later, or I have to do some research before replying, so I bookmark it to reply to later.
* FEATURE: Allow List for PMs
This feature adds a new user setting that is disabled by default that
allows them to specify a list of users that are allowed to send them
private messages. This way they don't have to maintain a large list of
users they don't want to hear from, and can instead just list the people
they know they do want to hear from. Staff will still always be able to send messages to
the user.
* Update PR based on feedback
* Remove unneeded bookmark name index.
* Change bookmark search query to use post_search_data. This allows searching on topic title and post content
* Tweak the style/layout of the bookmark list so the search looks better and the whole page fits better on mobile.
* Added scopes UI
* Create scopes when creating a new API key
* Show scopes on the API key show route
* Apply scopes on API requests
* Extend scopes from plugins
* Add missing scopes. A mapping can be associated with multiple controller actions
* Only send scopes if the use global key option is disabled. Use the discourse plugin registry to add new scopes
* Add not null validations and index for api_key_id
* Annotate model
* DEV: Move default mappings to ApiKeyScope
* Remove unused attribute and improve UI for existing keys
* Support multiple parameters separated by a comma
Follow up to https://github.com/discourse/discourse/pull/10188/files
There are still TopicUser records where bookmarked is true even though there are no Bookmark or PostAction records with the type of bookmark for the associated topic and user. This migration corrects this issue by setting bookmarked to false for these cases.
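A sketch of the correction; the bookmark `post_action_type_id` is assumed:
```
execute <<~SQL
  UPDATE topic_users tu
  SET bookmarked = false
  WHERE tu.bookmarked
    AND NOT EXISTS (
      SELECT 1
      FROM bookmarks b
      JOIN posts p ON p.id = b.post_id
      WHERE b.user_id = tu.user_id AND p.topic_id = tu.topic_id
    )
    AND NOT EXISTS (
      SELECT 1
      FROM post_actions pa
      JOIN posts p ON p.id = pa.post_id
      WHERE pa.user_id = tu.user_id
        AND p.topic_id = tu.topic_id
        AND pa.post_action_type_id = 1 -- assumed id for the bookmark action
        AND pa.deleted_at IS NULL
    )
SQL
```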
We have a couple of examples of enormous amounts of text being entered in the name column of bookmarks. This is not desirable; the name is just meant to be a short note / reminder of why you bookmarked this.
This PR caps the column at 100 characters and truncates existing names in the database to 100 characters.
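A sketch of the cap (truncating existing data before tightening the column):
```
execute "UPDATE bookmarks SET name = LEFT(name, 100) WHERE LENGTH(name) > 100"
change_column :bookmarks, :name, :string, limit: 100
```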
* This is causing issues where sometimes bookmarked is out of sync with what is in the Bookmark table. The BookmarkManager handles updating this column now.
* Add migration to fix bookmarked column that is incorrectly marked false when a Bookmark record exists.