A simplified version of the logic used in the function before my fix is as follows:
```ruby
result = []
things = [0,1,2,3]
max_values = 2
every = (things.size.to_f / max_values).ceil
things.each_with_index do |t, index|
  next unless (t % every) === 0
  result << t
end

p result # [0, 2]
# 3 doesn't get included
```
The problem is that if you get unlucky twice you won't get the last tuple(s) and might end up with a very erroneous date.
Double unlucky:
- the index of the last tuple modulo the computed `every` is not 0, so you don't get the last tuple
- the last tuple is related to a post with a very different date than the previous tuples (one year of difference in our case)
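One possible shape of the fix, as a rough sketch (the actual patch may take a different approach, for example recomputing `every` differently): always include the last element so the final tuple, and therefore the most recent date, survives the sampling.
```ruby
result = []
things = [0, 1, 2, 3]
max_values = 2
every = (things.size.to_f / max_values).ceil

things.each_with_index do |t, index|
  # Same sampling as before, but always keep the last element
  # so the newest tuple (and its date) is never dropped.
  next unless (t % every) == 0 || index == things.size - 1
  result << t
end

p result # [0, 2, 3]
```
This can yield one element more than `max_values`, which seems an acceptable trade-off when the goal is never to lose the newest date.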
Improvements to make console access to IncomingEmail more pleasant, and to stop certain IMAP logs from landing in the DB, because they just create too much noise.
This should make it easier to track down how the incoming email was created, which happens in one of four locations:
* The POP3 poller (which picks up reply-via-email replies)
* The admin email controller #handle_mail (which is where hosted mail is sent)
* The IMAP sync tool
* The group SMTP mailer, which sends emails when replying to IMAP topics, pre-emptively creating IncomingEmail records to avoid double syncing
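A hypothetical sketch of how the origin could be recorded, assuming some kind of "created via" column on the record; the column name and the value mapping below are illustrative, not necessarily what the actual change uses:
```ruby
class IncomingEmail < ActiveRecord::Base
  # Illustrative mapping only; the real column name and values may differ.
  CREATED_VIA = { unknown: 0, handle_mail: 1, pop3_poll: 2, imap: 3, group_smtp: 4 }.freeze
end

# Each of the four entry points then tags the record it creates, for example:
IncomingEmail.create!(raw: raw_email, created_via: IncomingEmail::CREATED_VIA[:pop3_poll])
```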
A 429 error code with a 'Retry-After' header was returned if a
RateLimiter::LimitExceeded was raised and left unhandled, but the header was
missing if the request was limited in the 'RequestTracker' middleware.
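A rough sketch of the kind of change involved, in plain Rack terms; this is not the actual `RequestTracker` code, and it assumes the raised error exposes the remaining wait time (shown here as a hypothetical `available_in` reader):
```ruby
# Illustrative Rack middleware, not the actual RequestTracker implementation.
class TrackRequests
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  rescue RateLimiter::LimitExceeded => e
    # The fix: the middleware path also tells clients how long to wait.
    headers = { "Content-Type" => "text/plain", "Retry-After" => e.available_in.to_s }
    [429, headers, ["Too many requests. Please retry later."]]
  end
end
```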
Admins can now edit translations in different languages without having to change their locale. We display a warning when there's a fallback language set.
Transient errors in the migration are ignored, silently corrupting
data. The migration is also incomplete and misses many sources of
uploads, which will lead to an incorrect expectation of independence
from the remote object storage after the migration is announced as
successful, regardless of whether transient errors permanently
corrupted the data.
Remove this migration until such time as it is rewritten to
follow the same pattern as the migration to S3, moving the
core logic out of the task.
This PR fixes a race condition with the IMAP notification code. In the `Email::Receiver` we call the `NewPostManager` to create the post, enqueue jobs, and send alerts via `PostAlerter`. However, if the post alerter reaches the `notify_pm_users` and the `group_notifying_via_smtp` method _before_ the incoming email is updated with the post and topic, we unnecessarily send a notification to the person who just posted. The result is that the IMAP syncer re-imports the email sent to the user about their own post, which then shows up again in the group inbox.
To fix this, we skip the jobs enqueued by `NewPostManager` and only enqueue them with `PostJobsEnqueuer` manually _after_ the incoming email record has been updated with the post and topic.
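Roughly, the new ordering looks like this (a sketch, not the literal diff; option and method names such as `skip_jobs` and `PostJobsEnqueuer#enqueue_jobs` are assumptions about the internal API):
```ruby
# Sketch of the new ordering inside Email::Receiver; names are assumptions.

# 1. Create the post without letting NewPostManager enqueue the alert jobs.
result = NewPostManager.new(user, post_options.merge(skip_jobs: true)).perform
post = result.post

# 2. Link the incoming email to the post and topic first...
incoming_email.update!(post_id: post.id, topic_id: post.topic_id)

# 3. ...then enqueue the jobs, so PostAlerter sees the updated record.
PostJobsEnqueuer.new(post, post.topic, true, post_options).enqueue_jobs
```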
Other improvements:
* Moved code to calculate email addresses from `IncomingEmail` records into the topic, with a group passed in, for easier testing and debugging. It is not the responsibility of the post alerter to figure this stuff out.
* Add shortcut methods on `IncomingEmail` to split or provide an empty array for to and cc addresses to avoid repetition.
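For example, the shortcut methods from the last item could look like this (a sketch, assuming the to/cc addresses are persisted as `;`-joined strings; the actual method names may differ):
```ruby
class IncomingEmail < ActiveRecord::Base
  # Sketch only; assumes to/cc addresses are stored as ";"-joined strings.
  def to_addresses_split
    to_addresses&.split(";") || []
  end

  def cc_addresses_split
    cc_addresses&.split(";") || []
  end
end
```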
Adds a feature for the `Must Approve Users` setup. When a user is rejected, a staff member can optionally set a reason for audit purposes. In addition, a feedback email can be sent to the user.
Meta: https://meta.discourse.org/t/account-rejection-email/103112/8
This PR adds functionality for the IMAP sync code to detect if a UID that is missing from the mail group mailbox is in the Spam/Junk folder for the mail account, and if so delete the associated Discourse topic. This is identical to what we do for emails that are moved to Trash.
If an email is missing but not in Spam or Trash, then we mark the incoming email record with `imap_missing: true`. This may be used in the future to further filter or identify these emails, and perhaps go hunting for them in the email account in bulk.
Note: This adds some code duplication because the trash and spam email detection and handling is very similar. I intend to do more refactors/improvements to the IMAP sync code in time because there is a lot of room for improvement.
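A condensed sketch of the handling described above (hypothetical structure; `found_in_mailbox?` is a stand-in for the real lookup, and the real sync code is more involved):
```ruby
# Hypothetical sketch of handling a UID that disappeared from the group mailbox.
def handle_missing_uid(incoming_email)
  if found_in_mailbox?("Junk", incoming_email) || found_in_mailbox?("Trash", incoming_email)
    # Spam/Junk gets the same treatment as Trash: remove the associated topic.
    incoming_email.topic&.trash!
  else
    # Not in Spam or Trash, so just flag it for later filtering or bulk hunting.
    incoming_email.update!(imap_missing: true)
  end
end
```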
Splits the `ToggleTopicClosed` job into two distinct `OpenTopic` and `CloseTopic` jobs to make the code clearer. The old job cannot be deleted yet because of outstanding sidekiq schedules, so a todo has been added to do so later this year.
Also replaced mentions of `topic_status_update` with `topic_timer` in some files, because the `topic_status_update` model is obsolete and has been replaced by the topic timer.
Added some shortcut methods for checking if a topic is open/whether a user can change an open topic.
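As a sketch, the split means each job does one thing (structure here is illustrative, not the literal implementation):
```ruby
# Illustrative only; the real jobs contain more guards, options, and logging.
module Jobs
  class OpenTopic < ::Jobs::Base
    def execute(args)
      topic = Topic.find_by(id: args[:topic_id])
      return if !topic || !topic.closed?
      topic.update_status("closed", false, Discourse.system_user)
    end
  end

  class CloseTopic < ::Jobs::Base
    def execute(args)
      topic = Topic.find_by(id: args[:topic_id])
      return if !topic || topic.closed?
      topic.update_status("closed", true, Discourse.system_user)
    end
  end
end
```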
If the sliding window size is N seconds, then a moment at the Nth second
should be considered a moment outside of the sliding window.
Otherwise, if the sliding window is already full at the Nth second,
a new call wouldn't be allowed, but the time to wait before the next call
would be equal to zero, which is confusing.
In other words, the end of the time range shouldn't be included in the
sliding window.
Let's say we start at second 0, and the sliding window size is 10
seconds. In the current version of the rate limiter, this sliding window is
considered as the time range [0, 10] (including the end of the range),
which is actually 11 seconds long.
After this fix, the time range will be considered as [0, 10)
(excluding the end of the range), which is exactly 10 seconds in length.
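A small self-contained example of the boundary change (this is a toy limiter, not the actual rate limiter code): with a 10-second window, a call made exactly 10 seconds after the oldest recorded call should treat that oldest call as already outside the window.
```ruby
# Toy sliding-window limiter illustrating the half-open interval [start, start + window).
class SlidingWindow
  def initialize(max_calls, window_seconds)
    @max_calls = max_calls
    @window = window_seconds
    @timestamps = []
  end

  def allowed?(now)
    # A call made at exactly t + @window is no longer counted: use <=, not <.
    @timestamps.reject! { |t| t + @window <= now }
    if @timestamps.size < @max_calls
      @timestamps << now
      true
    else
      false
    end
  end
end

limiter = SlidingWindow.new(10, 10)
10.times { |i| limiter.allowed?(i) } # fills the window during seconds 0..9
limiter.allowed?(10) # => true, the call from second 0 has just left the window
```
With the old inclusive boundary (`<` instead of `<=`), the call at second 10 would be denied even though the time to wait would be zero.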
It was a problem because during this operation only the first frame
is kept. This commit removes the alternative solution to check if a GIF
image is animated.
* DEV: TopicTrackingState calls should happen in the background
It was observed that calling TopicTrackingState on popular topics could result in a large number of calls to Redis, leading to slow response times when posting replies.
These calls should be moved to a background job.
* DEV: PostUpdateTopicTrackingState should execute on default queue
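Roughly, the deferral looks like this (a sketch; the real job likely publishes more state, and the argument names are assumptions):
```ruby
# Sketch of deferring the tracking-state publish to a background job.
module Jobs
  class PostUpdateTopicTrackingState < ::Jobs::Base
    def execute(args)
      post = Post.find_by(id: args[:post_id])
      TopicTrackingState.publish_unread(post) if post
    end
  end
end

# In the request path, instead of publishing inline:
Jobs.enqueue(:post_update_topic_tracking_state, post_id: post.id)
```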
Fixes a rare race condition causing the `Imap::Sync` class to create an incoming email and associated post/topic, which then kicks off the PostAlerter to notify others in the PM about a reply in the topic. For the OP this is not necessary, because the person emailing the IMAP inbox already knows about the OP. Basically, we should never be sending the group SMTP email for the first post in a topic.
Also in this PR:
* Custom attribute accessors for the to/from/cc addresses on `IncomingEmail`, to parse them from an array into a joined string so the logic for this lives in only one place (see the sketch after this list).
* Store extra detail against the `IncomingEmail` created in `GroupSmtpMailer`
* Regex-test the Mail `Reply-To` header as a string instead of a `Field`, which fixes `warning: deprecated Object#=~ is called on Mail::Field; it always returns nil`
* Add DEBUG_IMAP to log all IMAP logs as warnings for easier debugging
* Changed the Rails logging to `ImapSyncLog` in the `GroupSmtpMailer`
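A sketch of the accessor idea mentioned in the first item above (assuming array values are joined with `;` before being written; the exact implementation may differ):
```ruby
class IncomingEmail < ActiveRecord::Base
  # Sketch: accept an array or a string and always persist a ";"-joined string,
  # so the joining logic lives in exactly one place.
  def to_addresses=(value)
    value = value.map(&:to_s).join(";") if value.is_a?(Array)
    super(value)
  end

  def cc_addresses=(value)
    value = value.map(&:to_s).join(";") if value.is_a?(Array)
    super(value)
  end
end
```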
* FIX: Inline onebox should use encoding from Content-Type header when present
* Use Regexp.last_match(1)
Signed-off-by: OsamaSayegh <asooomaasoooma90@gmail.com>
- Only initialize the S3Helper when needed
- Skip initializing the S3Helper for S3Store#cdn_url
- Allow cook_url to be passed a `local` hint to skip unnecessary checks
These are a few small tweaks that slightly improve performance.
- we omitted 1 query from the post guardian which could cause an N+1
- cook_url has been sped up a bit
- url helper avoids re-creating sets for no reason
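For instance, the lazy S3Helper initialization from the first list amounts to something like this (a sketch; `s3_bucket_name`, `cdn_base_url`, and `absolute_base_url` are illustrative helpers, not the real method names):
```ruby
class S3Store
  # Sketch: build the S3Helper lazily so cheap calls never pay its setup cost.
  def s3_helper
    @s3_helper ||= S3Helper.new(s3_bucket_name)
  end

  def cdn_url(url)
    # Plain string manipulation; no S3Helper needed here.
    return url if cdn_base_url.blank?
    url.sub(absolute_base_url, cdn_base_url)
  end
end
```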
Inventory on S3 has always lagged; over the past few weeks we have noticed that
1 day of lag is not enough.
We are increasing this to 2 days to ensure that we do not get false positive
reports.
Final sigma is not lowercased correctly in Ruby, causing issues with routing.
This works around the issue by downcasing all usernames containing a sigma using JS.
You can now use `@me` to search for posts created by yourself, this is particularly handy if you have a long username.
`@me rainbow` will find all posts you created with the word rainbow.
Also cleans up the test suite so it has no warnings.
This allows plugins to call `register_demon_process` with a Class inheriting from Demon::Base. The unicorn master process will take care of spawning, monitoring and restarting the process. This API should be used with extreme caution, but it is significantly cleaner than spawning processes/threads in an `after_initialize` block.
This commit also cleans up the demon spawning logging so that it uses the same format as unicorn worker logging. It also switches to the block form of `fork` to ensure that Demons exit after running, rather than returning execution to where the fork took place.
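For example, a plugin might use it along these lines (a sketch; only `register_demon_process` and `Demon::Base` come from the description above, the demon class itself is hypothetical):
```ruby
# In plugin.rb — illustrative; the demon class body here is hypothetical.
class Demon::EmailSyncer < Demon::Base
  def self.prefix
    "email-syncer"
  end

  # Implement the actual work in whichever hook Demon::Base runs
  # in the forked child process.
end

register_demon_process(Demon::EmailSyncer)
```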