Commit Graph

564 Commits

Angus McLeod df3886d6e5
FEATURE: Experimental support for group membership via google auth (#14835)
This commit introduces a new site setting "google_oauth2_hd_groups". If enabled, group information will be fetched from Google during authentication, and stored in the Discourse database. These 'associated groups' can be connected to a Discourse group via the "Membership" tab of the group preferences UI. 

The majority of the implementation is generic, so we will be able to add support for more authentication methods in the near future.

https://meta.discourse.org/t/managing-group-membership-via-authentication/175950
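
As a rough sketch of that generic flow (the method and helper names here are illustrative, not the actual Discourse API), an authenticator could mirror provider groups into associated groups after authentication:

```ruby
# Hypothetical sketch only -- it illustrates the flow described above,
# not the real Discourse implementation. After a successful Google
# authentication, each provider group is mirrored into an "associated
# group" record, which admins can link to a Discourse group via the
# "Membership" tab.
def sync_associated_groups(user, provider_groups)
  provider_groups.each do |group|
    associated = AssociatedGroup.find_or_create_by(
      provider_name: "google_oauth2",
      provider_id: group[:id],
      name: group[:name]
    )
    # Discourse groups linked to this associated group pick up the
    # membership automatically.
    associated.users << user unless associated.users.include?(user)
  end
end
```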
2021-12-09 12:30:27 +00:00
Mark VanLandingham bd140948e3
DEV: Changes to support chat uploads (#15153) 2021-12-01 13:24:16 -06:00
Dan Ungureanu 6e2d4a14ac
FIX: Delete unconfirmed AND expired email tokens only (#15089) 2021-11-25 10:34:30 +02:00
Dan Ungureanu fa8cd629f1
DEV: Hash tokens stored from email_tokens (#14493)
This commit adds token_hash and scope columns to the email_tokens table.
token_hash is a replacement for the token column, to avoid storing email
tokens in plaintext, which can pose a security risk. The new scope column
ensures that email tokens cannot be used to perform a different action
than the one intended.

To sum up, this commit:

* Adds token_hash and scope to email_tokens

* Reuses code that schedules critical_user_email

* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods

* Periodically cleans old, unconfirmed or expired email tokens
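
A minimal pure-Ruby sketch of the idea (not the exact Discourse code):

```ruby
require "digest"
require "securerandom"

# Generate a random token to send in the email, but persist only its
# digest plus the intended scope, so a leaked database row cannot be
# replayed as a working link.
token = SecureRandom.hex(16)                  # emailed to the user
token_hash = Digest::SHA256.hexdigest(token)  # stored in email_tokens
scope = "password_reset"                      # stored alongside the hash

# Confirming re-hashes the presented value and requires a matching scope,
# so a token minted for one action cannot perform another.
def confirmable?(presented_token, stored_hash, stored_scope, expected_scope)
  Digest::SHA256.hexdigest(presented_token) == stored_hash &&
    stored_scope == expected_scope
end

confirmable?(token, token_hash, scope, "password_reset") # => true
confirmable?(token, token_hash, scope, "email_login")    # => false
```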
2021-11-25 09:34:39 +02:00
Bianca Nenciu 1c3c0f04d9
FEATURE: Pull hotlinked images in user bios (#14726) 2021-10-29 17:58:05 +03:00
Vinoth Kannan a6de4a5ce9
DEV: use upload id to save in theme setting instead of URL. (#14341)
Using the URL instead causes problems when the CDN hostname changes.
2021-09-16 07:58:53 +05:30
Krzysztof Kotlarek d99735e24d
FEATURE: remove duplicated messages about new advice (#14319)
Discourse regularly sends messages to admins when potential problems persist. Most of the time these messages have exactly the same content. In that case, when there are no replies, the old message should be trashed before a new one is created.
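
A sketch of that rule (helper names are hypothetical, not the exact Discourse code):

```ruby
# Before creating a recurring admin notice, trash the previous copy if it
# never received a reply; otherwise admins accumulate identical messages.
def create_problem_notice(title, raw)
  previous = Topic.find_by(title: title, subtype: TopicSubtype.system_message)
  # posts_count <= 1 means the notice itself is the only post (no replies)
  previous.trash!(Discourse.system_user) if previous && previous.posts_count <= 1

  send_admin_notice(title, raw) # hypothetical helper that PMs the admins
end
```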
2021-09-15 08:59:25 +10:00
Mark VanLandingham f4cca4af75
DEV: Do not clean up chat message uploads (#14267) 2021-09-07 13:33:48 -05:00
David Taylor 31db83527b DEV: Introduce PresenceChannel API for core and plugin use
PresenceChannel aims to be a generic system for allowing the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.

A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is a boolean thing - each user is either present, or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once.
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` properties can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use.

An example implementation may look something like this. On the server:

```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```

And on the client, a component could be implemented like this:

```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";

export default Component.extend({
  presence: service(),
  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },
  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },
  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```

With this template:

```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}} 
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
2021-08-27 16:26:06 +01:00
Martin Brennan b500949ef6
FEATURE: Initial implementation of direct S3 uploads with uppy and stubs (#13787)
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads`  settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.

A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used. 

### Starting a direct S3 upload

When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.

Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
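
For illustration, a presigned PUT URL can be generated with the AWS SDK roughly like this (the bucket name, key, and metadata below are examples, not Discourse's exact values):

```ruby
require "aws-sdk-s3"

# Generate a short-lived URL that lets the client PUT the file bytes
# directly to a temp key in S3, bypassing the Rails app entirely.
presigner = Aws::S3::Presigner.new(client: Aws::S3::Client.new(region: "us-east-1"))

url = presigner.presigned_url(
  :put_object,
  bucket: "my-uploads-bucket",            # example bucket
  key: "temp/default/abc123/video.mp4",   # temp folder key, per the flow above
  expires_in: 10 * 60,                    # URL is only valid briefly
  metadata: { "sha1-checksum" => "da39a3ee5e6b4b0d3255bfef95601890afd80709" }
)
# The client (uppy) then uploads straight to `url`.
```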

### Completing a direct S3 upload

Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.

1. If the object in S3 is too large (currently 100MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download it and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3, then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.

2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by Ruby. The browser SHA1 checksum is stored on the object in S3 as metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.

    We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.

3. Finally, we return the serialized upload record back to the client.

There are several errors that could happen that are handled by `UploadsController` as well.

Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
2021-07-28 08:42:25 +10:00
Alan Guo Xiang Tan 0d8144b62b DEV: Improve logging of errors in `Jobs::ProcessUserNotificationSchedules`
Gives us the actual error and backtrace to work with. Otherwise, the
logging of the error is not useful at all.
2021-07-21 12:20:44 +08:00
Penar Musaraj 361c8be547
PERF: Add scheduled job to delete old stylesheet cache rows (#13747) 2021-07-16 10:58:01 -04:00
Martin Brennan c6f2459cc4
FIX: Do not prevent other topic timers running on error (#13665)
There was an issue with the TopicTimerEnqueuer where any timer
that raised an error in enqueue_typed_job would prevent all other
pending timers after the one that errored from running.

To mitigate this we just capture the error and log it (so we can
still fix it if needed for bug crushing) and proceed with the
rest of the timer enqueues.

The commit https://github.com/discourse/discourse/pull/13544 highlighted
this issue originally in hosted sites.
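
The shape of the fix, sketched (the surrounding variable names are assumed, not the exact code):

```ruby
# Each timer is enqueued inside its own rescue block, so a single failing
# timer no longer aborts the whole batch.
timers_to_run.each do |timer|
  begin
    enqueue_typed_job(timer)
  rescue => e
    # Log the error so the underlying bug can still be diagnosed and fixed.
    Discourse.warn_exception(e, message: "Error enqueueing topic timer #{timer.id}")
  end
end
```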

2021-07-08 12:49:58 +10:00
Roman Rizzi 2c918a3161
FEATURE: Staff can receive pending user reminders more frequently. (#13422)
We now express the "pending_users_reminder_delay"  in minutes instead of hours so staff can have finer control over the delay.

We need to keep in mind that the reminders could still take up to 20 minutes, even when using a lower value. We send them from a scheduled job.

* Migrate to a new site setting for the reminders delay
2021-06-24 10:02:56 -03:00
Jarek Radosz 046a875222
DEV: Improve `script/downsize_uploads.rb` (#13508)
* Only shrink images that are used in Posts and no other models
* Don't save the upload if the size is the same
2021-06-24 00:09:40 +02:00
Roman Rizzi 4dc8c3c409
FEATURE: Blocking is optional when deleting a user from the review queue. (#13375)
Subclasses must call #delete_user_actions inside build_actions to support user deletion. The method adds a delete user bundle, which has a delete and a delete + block option. Every subclass is responsible for implementing these actions.
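
A sketch of what a subclass might look like (the class name is illustrative):

```ruby
class ReviewableExample < Reviewable
  def build_actions(actions, guardian, args)
    # ...build the subclass's regular approve/reject actions first...

    # Opt into the "delete user" bundle, which provides both the delete
    # and the delete + block options described above.
    delete_user_actions(actions) if guardian.can_delete_user?(target_created_by)
  end
end
```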
2021-06-15 12:35:45 -03:00
Dan Ungureanu d903d4dc5a
DEV: Periodically delete old email change requests (#13054)
Email change requests were never deleted, whether they completed
successfully or not. The abandoned requests have the disadvantage of
showing up as unconfirmed emails on the user's preferences page.
2021-05-14 10:34:56 +03:00
Roman Rizzi d4b5a81b05
FIX: Recalculate scores only when approving or transitioning to pending. (#13009)
Recalculating a ReviewableFlaggedPost's score after rejecting or ignoring it sets the score to 0, which means that we can't find it after reviewing. It doesn't surpass the minimum priority threshold and is hidden.

Additionally, we only want to use agreed flags when calculating the different priority thresholds.
2021-05-10 14:09:04 -03:00
Roman Rizzi 60059a7190
FEATURE: A low priority filter for the review queue. (#12822)
This filter hides reviewables with a score lower than the "reviewable_low_priority_threshold" setting. We only use reviewables that already met this threshold to calculate the Medium and High priority filters.
2021-04-23 15:34:24 -03:00
Gerhard Schlager 92e222d246
FIX: POP3 polling shouldn't stop after exception or old email (#12742) 2021-04-19 10:27:29 +02:00
Osama Sayegh a23d0f9961
UX: Add image uploader widget for uploading badge images (#12377)
Currently the process of adding a custom image to a badge is quite clunky; you have to upload your image to a topic, and then copy the image URL and paste it into a text field. Besides being clunky, if the topic or post that contains the image is deleted, the image will be garbage-collected in a few days and the badge will lose its image, because the application is not aware that the image is referenced by a badge.

This commit improves that by adding a proper image uploader widget for badge images.
2021-03-17 08:55:23 +03:00
David Taylor 4430bc153d
FIX: Do not clean up uploads when they're used by theme settings (#12326)
We intend to move ThemeSetting to use an upload_id column, rather than storing the URL. So this is a short-term solution.
2021-03-09 19:16:45 +00:00
Gerhard Schlager a96a5db0fb
DEV: Add option to send system message to groups (#12256) 2021-03-02 18:51:50 +01:00
Roman Rizzi 4e716e9ce5
FIX: Reduce the time_read threshold to one minute. (#12159)
Five minutes is too much and could fill the queue with false positives.

* Update spec/jobs/enqueue_suspect_users_spec.rb

Co-authored-by: Arpit Jalan <arpit@techapj.com>
2021-02-20 08:25:32 -03:00
Roman Rizzi 95fb363c2a
FEATURE: Use the "time_read" stat to flag users as suspicious. (#12145)
Completing the discobot tutorial gives you ~3m of reading time, so we set the limit at 5m. Additionally, we use an "OR" clause to cover the case when you just scroll through a single topic.
2021-02-19 13:10:19 -03:00
Krzysztof Kotlarek ad3ec5809f
FIX: Dismiss new with better migration (#12062)
The original PR was reverted because of a broken migration: https://github.com/discourse/discourse/pull/12058

I fixed it by adding this line:
```sql
          AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```

This time it left joins a limited number of topics. I tested it on a few databases and it worked quite smoothly.
2021-02-15 08:50:33 +11:00
Penar Musaraj 900d4187ef
DEV: Prevents rate limits for new feature checks on multisite (#12053) 2021-02-12 08:52:59 -05:00
Krzysztof Kotlarek a696cc07d2
Revert "FEATURE: Ability to dismiss all new topics (#12018)" (#12058)
This reverts commits 7426764af4 and f5b18e2a31
2021-02-12 08:50:25 +11:00
Krzysztof Kotlarek f5b18e2a31
FEATURE: Ability to dismiss all new topics (#12018)
Follow up https://github.com/discourse/discourse/pull/11968

Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives the exact topic ids which should be marked as `seen`.
2021-02-11 13:35:09 +11:00
Krzysztof Kotlarek f39e7fe81d
FEATURE: New way to dismiss new topics (#11927)
This is an attempt to simplify the logic around dismissing new topics, so that one solution works in all places - dismiss all new, dismiss new in a specific category, or even in a specific tag.
2021-02-04 11:27:34 +11:00
Penar Musaraj 49e97279c7 FEATURE: Add daily job to check for new features 2021-02-01 10:31:44 +08:00
Mark VanLandingham 809274fe0d
DEV: Replace 'processed' column on notifications with new table (#11864) 2021-01-27 10:29:24 -06:00
Régis Hanol aa1138ff71
FIX: reindex_search job should work on model with no search data (#11819)
Lots of changes but it's mostly a refactoring.

The interesting part that was fixed is the 'load_problem_<model>_ids' methods.
They will now return records with no associated search data, so they can be properly indexed for search.
This "bad" state usually happens after a migration.
2021-01-25 11:23:36 +01:00
Mark VanLandingham 1a7922bea2
FEATURE: Create notification schedule to automatically set do not disturb time (#11665)
This adds a new table, UserNotificationSchedules, which stores Monday-Friday start and end times at which each user would like to receive notifications (with a boolean `enabled` to turn the schedule off). There is then a background job that runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance. The job is designed so that it can be run at any point in time, and it will not create duplicate records.

When a users saves their notification schedule, the schedule processing service will run and schedule do_not_disturb_timings. If the user should be in DND due to their schedule, the user will immediately be put in DND (message bus publishes this state).

The UI for a user's notification schedule is in user -> preferences -> notifications. By default every day is 8am - 5pm when first enabled.
2021-01-20 10:31:52 -06:00
Martin Brennan fb184fed06
DEV: Add created_via column to IncomingEmail (#11751)
This should make it easier to track down how the incoming email was created, which is one of four locations:

* The POP3 poller (which picks up reply-via-email replies)
* The admin email controller #handle_mail (which is where hosted mail is sent)
* The IMAP sync tool
* The group SMTP mailer, which sends emails when replying to IMAP topics, pre-emptively creating IncomingEmail records to avoid double syncing
2021-01-20 13:22:41 +10:00
Martin Brennan 0034cbda8a
DEV: Change Topic Timer from enqueue_at scheduled jobs to incrementally executed jobs (#11698)
Moves the topic timer jobs from being scheduled ahead of time with enqueue_at to a 5 minute scheduled run like bookmark reminders, in a new job called Jobs::EnqueueTopicTimers. Backwards compatibility is maintained by checking if an existing topic timer job is already enqueued in sidekiq for the timer, and if it is not, running it inside the new job.

The functionality to close/open a topic if it is in the opposite state still remains in the after_save block of TopicTimer, with further commentary, which is used for Open/Close Temporarily.

This also removes the ensure_consistency! functionality of topic timers as it is no longer needed; the new job will always pick up the timers because they are not stored in a fragile state of sidekiq.
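
A simplified sketch of the polling pattern (the enqueue helper shown is a stand-in, and the real job also performs the backwards-compatibility check described above):

```ruby
module Jobs
  class EnqueueTopicTimers < ::Jobs::Scheduled
    every 5.minutes

    def execute(args)
      # Pick up every timer that is due now, rather than relying on a
      # sidekiq job that was scheduled far in the future when the timer
      # was created.
      TopicTimer.where("execute_at <= ?", Time.zone.now).find_each do |timer|
        timer.enqueue_typed_job
      end
    end
  end
end
```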
2021-01-19 13:30:58 +10:00
Martin Brennan 5710d5d771
FIX: Do not process pop3 mails > 1 week old (#11740)
This adds a safe default so that, when the pop3 polling option is set up, emails more than 1 week old are not processed. This avoids the situation where an older mailbox is used, which would cause us to go and process all emails in that mailbox, sending out error emails to the senders of emails which cannot be parsed successfully.
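
The age check itself can be sketched with the mail gem (the cutoff matches the description above; the method name is illustrative):

```ruby
require "mail"

ONE_WEEK = 7 * 24 * 60 * 60

# Skip any message whose Date header is more than a week old.
def too_old_to_process?(raw_email)
  mail = Mail.new(raw_email)
  !mail.date.nil? && mail.date.to_time < Time.now - ONE_WEEK
end
```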
2021-01-19 09:49:50 +10:00
Mark VanLandingham 4601f3be7e
FEATURE: Send notification emails when users leave do not disturb mode (#11643) 2021-01-07 10:49:49 -06:00
Roman Rizzi 8a7fe3b276
FIX: Don't enqueue imported users when there're multiple custom fields. (#11559)
My initial implementation didn't consider this case. We should skip imported users if the "imported_id" field is present, even if there are other custom fields.
2020-12-22 14:28:07 -03:00
Gerhard Schlager 333f0af0ec
FIX: Cached badge_count isn't updated after backfilling badges (#11281) 2020-11-18 22:01:56 +01:00
Joffrey JAFFEUX de174ef0c4
DEV: cap notifications per run at 300 as stated in comment (#11252) 2020-11-17 09:08:12 +10:00
David Taylor 5140ec9acf
DEV: Cleanup ignored user logic (#11107)
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
2020-11-03 12:38:54 +00:00
David Taylor 1b7d39fa85
FIX: Remove 4 month limit on IgnoredUser records (#11105)
b8c676e7 added the 'forever' option to the UI, and this is correctly stored in the database. However, we had a hard-coded limit of 4 months in the cleanup job. This commit removes the limit, so ignores can last forever.
2020-11-03 12:12:43 +00:00
Bianca Nenciu be5efc9410
FIX: Ensure old uploads can have animated field updated (#10963)
If admins decreased the maximum file size limit, the ActiveRecord
validations would fail.
2020-10-20 19:11:43 +03:00
Bianca Nenciu 43e52a7dc1
DEV: Remove gifsicle dependency (#10357)
The dependency on gifsicle and the allow_animated_avatars and
allow_animated_thumbnails site settings were all removed. Animated GIF
images are still allowed, but the generated optimized images are no longer
animated (these were used for avatars and thumbnails).

The added 'animated' column is populated by extracting information using
FastImage. This field is used to selectively reoptimize old animations.
This process happens in the background.
2020-10-16 13:41:27 +03:00
Martin Brennan c3cede697d
FEATURE: Add weekly bookmark cleanup code (#10899)
When posts or topics are deleted we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to go and delete bookmarks attached to posts or topics deleted > 3 days ago.
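
The grace-period rule, sketched inside the Rails app (simplified):

```ruby
# The weekly cleanup only removes bookmarks whose post was deleted more
# than 3 days ago, leaving a window to recover them on un-delete.
grace_time = 3.days.ago

Bookmark
  .joins(:post)
  .where("posts.deleted_at IS NOT NULL AND posts.deleted_at < ?", grace_time)
  .destroy_all
```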
2020-10-14 09:38:57 +10:00
Bianca Nenciu 25b8ed740b
DEV: Make site setting type uploaded_image_list use upload IDs (#10401)
It used to be a list of concatenated upload URLs, which was prone to
breaking.
2020-10-13 16:17:06 +03:00
Gerhard Schlager bdbee36961 DEV: Fix typo 2020-10-07 23:43:11 +02:00
David Taylor c0293339b8
PERF: Do not enqueue digest emails when attempted recently (#10849)
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even if there are no topics to send. The digest job would exit, no email would send, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.

120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.

This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
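
The selection change can be sketched as (simplified):

```ruby
# Skip users whose digest was attempted recently, and serve the
# longest-waiting users first instead of sampling at random.
stats = UserStat
  .where("digest_attempted_at IS NULL OR digest_attempted_at < ?", 30.minutes.ago)
  .order(Arel.sql("digest_attempted_at ASC NULLS FIRST"))
```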
2020-10-07 15:30:38 +01:00
Sam 120fa8ad2f
PERF: Introduce absolute limit of digests per 30 minutes (#10845)
To avoid blocking the sidekiq queue, a limit of 10,000 digests per 30 minutes
is introduced.

This acts as a safety measure that makes sure we don't keep pouring oil on
a fire.

On multisites it is recommended to set the number way lower so sites do not
dominate the backlog. A reasonable default for multisites may be 100-500.

This can be controlled with the environment variable:

DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE
2020-10-07 17:30:15 +11:00