Anonymizing a user changed their email address and destroyed all
associated InvitedUser records, but did not destroy the invites
associated with the user's email.
When we call Bookmark.cleanup! we want to make sure that
topic_user.bookmarked is updated for topics linked to the
bookmarks that were deleted. The same applies when PostDestroyer
calls destroy and recover. We already have a job for this --
SyncTopicUserBookmarked -- so we just utilize that.
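As a sketch of how the cleanup could hand off to that job (`deleted_bookmarks` is a hypothetical collection, and the job arguments are an assumption):
```ruby
# Sketch: after removing orphaned bookmarks, resync topic_user.bookmarked
# for each affected topic. `deleted_bookmarks` is hypothetical.
deleted_bookmarks.map(&:topic_id).uniq.each do |topic_id|
  Jobs.enqueue(:sync_topic_user_bookmarked, topic_id: topic_id)
end
```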
Subclasses must call #delete_user_actions inside build_actions to support user deletion. The method adds a delete user bundle, which has a delete and a delete + block option. Every subclass is responsible for implementing these actions.
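A minimal sketch of a hypothetical subclass wiring this up (the class name and guard condition are assumptions):
```ruby
class ReviewableExample < Reviewable
  def build_actions(actions, guardian, args)
    return unless pending?

    # Adds the "delete user" bundle, with its delete and delete + block options.
    delete_user_actions(actions) if guardian.can_delete_user?(target_created_by)
  end
end
```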
Notifying about a tag change sometimes resulted in loading a large
number of users into memory just to perform an exclusion. This commit
prefers inclusion (i.e. instead of excluding users X, include users
in groups Y) and does it in SQL to avoid fetching unnecessary data
that is later discarded.
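A rough sketch of the shape of that change (`excluded_user_ids` and `allowed_group_ids` are hypothetical):
```ruby
# Before (illustrative): load all users, then exclude in Ruby.
users = User.all.to_a.reject { |u| excluded_user_ids.include?(u.id) }

# After (illustrative): include only users in the allowed groups, in SQL.
user_ids = User
  .joins(:group_users)
  .where(group_users: { group_id: allowed_group_ids })
  .distinct
  .pluck(:id)
```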
The previous commits removed reviewables, leading to a bad user
experience. This commit updates the status, replaces actions with a
message, and greys out the reviewable.
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change:
- comments
- test descriptions
- other low risk areas
Email change requests are never deleted, whether or not they completed
successfully. The abandoned requests have the disadvantage of showing
up as unconfirmed emails on the user's preferences page.
Recalculating a ReviewableFlaggedPost's score after rejecting or ignoring it sets the score to 0, which means that we can't find them after reviewing: they don't surpass the minimum priority threshold and are hidden.
Additionally, we only want to use agreed flags when calculating the different priority thresholds.
* FEATURE: add support for like webhooks
Add support for like webhooks. Webhook events are only sent when the user
is a member of one of the defined webhook group filters.
This also fixes group webhook events; before, this was never used and
the logic was not correct.
This filter hides reviewables with a score lower than the "reviewable_low_priority_threshold" setting. We only use reviewables that already met this threshold to calculate the Medium and High priority filters.
It used to check if an upload record exists, which is wrong because an
invalid upload record can exist even if the upload was not created.
The other improvement is a better log message.
Previously, watched words ignored topic titles when applying auto-tagging rules.
The copy has also been improved to reflect how the system behaves:
the text now hints that we are only watching the first post.
Scopes are incredibly annoying to preload; simply adding :user_emails is not
enough.
Instead of relying on scopes, simply iterate through user_emails, which is
properly preloaded.
This removes two N+1 queries when generating user reports.
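A sketch of the pattern (`report_user_ids` is hypothetical):
```ruby
# Preload the association once, then filter in Ruby instead of via a scope.
users = User.where(id: report_user_ids).includes(:user_emails)

users.each do |user|
  # No extra query per user: user_emails is already in memory.
  secondary_emails = user.user_emails.reject(&:primary).map(&:email)
end
```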
When posts are moved from one topic to another, the `topic_user.bookmarked` column for all users in the new and the old topic needs to be resynced; for example, if a user bookmarks post 12 in topic 1 and the post is then moved to topic 2, the topic_user record for topic 1 should no longer be marked bookmarked. A background job has been added to sync the column for a specified topic, or, when no topic is given, for all topics (like the migration).
Also includes a migration that we have run in the past to fix bad data.
----
This has been addressed in other places in the past:
https://github.com/discourse/discourse/pull/10211
https://github.com/discourse/discourse/pull/10188
When bulk inviting, the uploaded CSV file may contain wrong values for
the user fields. This tries to automatically correct them by finding
the most similar option (ignoring case).
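A minimal sketch of such a correction, assuming hypothetical helper and association names:
```ruby
# Return the canonical option value that matches the uploaded value
# ignoring case, or leave the value unchanged if nothing matches.
def correct_user_field_value(user_field, value)
  option = user_field.user_field_options.find { |o| o.value.casecmp?(value) }
  option ? option.value : value
end
```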
Admins can use bulk invites to pre-populate user fields. The imported
CSV file must have a header with "email" column (first position) and
names of the user fields (exact match).
Under the hood, the bulk invite will create staged users and populate
the user fields of those.
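For illustration, such a CSV could look like this (field names are made up):
```
email,favorite color,pronouns
jane@example.com,Teal,she/her
sam@example.com,orange,they/them
```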
User mailing list mode continued to be silently enabled, and the
UserEmail job checked just that flag, ignoring the site setting
disable_mailing_list_mode.
An additional migration was added to set disable_mailing_list_mode
to false if any users had already enabled mailing list mode.
Currently the process of adding a custom image to a badge is quite clunky: you have to upload your image to a topic, then copy the image URL and paste it into a text field. Besides being clunky, if the topic or post that contains the image is deleted, the image will be garbage-collected in a few days and the badge will lose it, because the application is not aware that the image is referenced by a badge.
This commit improves that by adding a proper image uploader widget for badge images.
It was used both when inviting from a topic page and when creating
invites with "Send to topic on first login", while it should be used
only in the former case.
Onebox content may only be resolved during the process_post job. Since onebox content can change the excerpt, we need to make sure the excerpt is updated accordingly.
The URLs that we generate for mobile post notifications don't take into
account the subfolder path if a site happens to have one configured.
When this happens, tapping a new mobile notification takes you to a URL
that doesn't work because it is missing the subfolder portion.
I honestly think this should be handled in the Post model like we do
with the Topic model. `Post.url` should know how to handle subfolder
installs, but that seemed like a very risky change because there are
lots of other places in the codebase where we tack on the base_path and
I didn't want to risk duplicating it.
I also found a small typo in the topics controller spec.
The user interface has been reorganized to show email and link invites
in the same screen. Staff has more control over creating and updating
invites. Bulk invite has also been improved with better explanations.
On the server side, many code paths for email and link invites have
been merged to avoid duplicated logic. The API returns better responses
with more appropriate HTTP status codes.
* FIX: Reduce the time_read threshold to one minute.
Five minutes is too much and could fill the queue with false positives.
* Update spec/jobs/enqueue_suspect_users_spec.rb
Co-authored-by: Arpit Jalan <arpit@techapj.com>
Completing the discobot tutorial gives you ~3m of reading time, so we set the limit at 5m. Additionally, we use an "OR" clause to cover the case when you just scroll through a single topic.
The original PR was reverted because of a broken migration: https://github.com/discourse/discourse/pull/12058
I fixed it by adding this line:
```sql
AND topics.id IN(SELECT id FROM topics ORDER BY created_at DESC LIMIT :max_new_topics)
```
This time it left joins only a limited number of topics. I tested it on a few databases and it worked quite smoothly.
Follow-up to https://github.com/discourse/discourse/pull/11968
Dismiss all new topics using the same DismissTopicService. In addition, MessageBus receives exact topic ids which should be marked as `seen`.
The bug was mentioned on meta: https://meta.discourse.org/t/users-are-seeing-handling-of-unhandled-tag-again/155367
It was related to users who are watching a specific topic. In that case, when a hidden tag was added to or removed from the topic, they were notified by `NotifyTagChangeJob`.
That job should take hidden tags into consideration. If all changed tags are in a hidden group, it should exclude users who don't belong to that group.
At the same time, if a tag visible to anyone is added or removed, users watching the topic should be notified.
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.
This commit aims to:
- Rename `sso_` site settings. DiscourseConnect specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations
This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately
The risks are:
- There is no backwards compatibility for site settings on the client-side. Accessing auth-related site settings in JavaScript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.
A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
This PR allows entering a float value for topic timers, e.g. 0.5 for 30 minutes when entering hours, or 0.5 for 12 hours when entering days. This is achieved by adding a new column to store the duration of a topic timer in minutes, instead of the ambiguous hours-or-days value it could be before.
This PR has omitted the post migration to delete the duration column in topic timers; it will be done in a subsequent PR to ensure that no data is lost if the UPDATE query to set duration_minutes fails.
I have to keep the old duration keyword in set_or_create_topic_timer for backwards compatibility; it will be removed at a later date after plugins are updated.
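A sketch of the conversion, assuming some timer kinds take hours and others take days (the helper and the `hours_based` flag are hypothetical):
```ruby
require "ostruct"

# `hours_based` is a hypothetical flag; which timer kinds use hours vs. days
# is an assumption for illustration.
def duration_minutes_for(timer)
  minutes_per_unit = timer.hours_based ? 60 : 60 * 24
  (timer.duration * minutes_per_unit).round
end

duration_minutes_for(OpenStruct.new(duration: 0.5, hours_based: true))  # => 30
duration_minutes_for(OpenStruct.new(duration: 0.5, hours_based: false)) # => 720 (12 hours)
```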
This is an attempt to simplify the logic around dismissing new topics so that one solution works in all places: dismiss all new, dismiss new in a specific category, or even in a specific tag.
* FIX: In FastImage 2.2.2 an error is raised with a `nil` path
Sometimes Discourse.store.path_for would return `nil`, which the job
handled gracefully before, but raises an error with the new version of
the gem.
Note the logic of this job is a bit awkward since it depends on `nil`
being a string, but at least now it's no longer filling logs with
errors.
* Update app/jobs/onceoff/fix_invalid_gravatar_uploads.rb
Co-authored-by: Bianca Nenciu <nbianca@users.noreply.github.com>
This PR revamps the topic timer UI, using the time shortcut selector from the bookmark modal.
* Fixes an issue where the duration of hours/days after last reply or auto delete replies was not enforced to be > 0
* Fixes an issue where the timer dropdown options were not reloaded correctly if the topic status changes in the background (use `MessageBus` to publish topic state in the open/close timer jobs)
* Moved the duration input and the "based on last post" option from the `future-date-input` component, as it was only used for topic timers. Also moved out the notice that is displayed which was also only relevant for topic timers.
Lots of changes, but it's mostly a refactoring.
The interesting part that was fixed is the 'load_problem_<model>_ids' methods.
They will now return records with no search data associated, so they can be properly indexed for search.
This "bad" state usually happens after a migration.
This adds a new table, UserNotificationSchedules, which stores Monday-Friday start and end times between which each user would like to receive notifications (with a boolean `enabled` to turn the schedule off). A background job runs every day and creates do_not_disturb_timings for each user with an enabled notification schedule. The job schedules timings 2 days in advance. The job is designed so that it can be run at any point in time without creating duplicate records.
When a user saves their notification schedule, the schedule processing service will run and schedule do_not_disturb_timings. If the user should be in DND due to their schedule, the user will immediately be put in DND (message bus publishes this state).
The UI for a user's notification schedule is in user -> preferences -> notifications. By default every day is 8am - 5pm when first enabled.
This should make it easier to track down how an incoming email was created, which is one of four locations:
- The POP3 poller (which picks up reply-by-email replies)
- The admin email controller #handle_mail (which is where hosted mail is sent)
- The IMAP sync tool
- The group SMTP mailer, which sends emails when replying to IMAP topics, pre-emptively creating IncomingEmail records to avoid double syncing
Moves the topic timer jobs from being scheduled ahead of time with enqueue_at to a 5-minute scheduled run, like bookmark reminders, in a new job called Jobs::EnqueueTopicTimers. Backwards compatibility is maintained by checking whether an existing topic timer job is already enqueued in sidekiq for the timer and, if it is not, running it inside the new job.
The functionality to close/open a topic if it is in the opposite state still remains in the after_save block of TopicTimer, with further commentary, which is used for Open/Close Temporarily.
This also removes the ensure_consistency! functionality of topic timers, as it is no longer needed; the new job will always pick up the timers because they are no longer stored in the fragile state of sidekiq.
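A condensed sketch of what the scheduled job could look like (the backwards-compat check and the per-type dispatch are simplified assumptions):
```ruby
module Jobs
  class EnqueueTopicTimers < ::Jobs::Scheduled
    every 5.minutes

    def execute(args)
      # Pick up timers whose time has come, instead of relying on a
      # long-lived sidekiq enqueue_at schedule that can be lost.
      TopicTimer.where("execute_at <= ?", Time.zone.now).find_each do |timer|
        # (skip here if a legacy sidekiq job is already enqueued for this timer)
        timer.enqueue_typed_job # hypothetical dispatch to the right job class
      end
    end
  end
end
```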
This adds a safe default: when POP3 polling is set up, emails more than one week old are not processed. This avoids the situation where an older mailbox is used, causing us to process every email in that mailbox and send error emails to the senders of any that cannot be parsed successfully.
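A minimal sketch of the age check (constant and helper names are assumptions):
```ruby
MAX_POP3_MAIL_AGE = 1.week # hypothetical constant name

# Skip anything older than the cutoff so a stale mailbox is not replayed.
def process_popmail?(mail)
  received_at = mail.date&.to_time
  received_at.nil? || received_at > MAX_POP3_MAIL_AGE.ago
end
```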
Feature for the `Must Approve Users` setup. When a user is rejected, a staff member can optionally set a reason for audit purposes. In addition, a feedback email can be sent to the user.
Meta: https://meta.discourse.org/t/account-rejection-email/103112/8
Splits the `ToggleTopicClosed` job into two distinct `OpenTopic` and `CloseTopic` jobs to make the code clearer. The old job cannot be deleted yet because of outstanding sidekiq schedules, so a todo has been added to do so later this year.
Also replaced mentions of `topic_status_update` with `topic_timer` in some files, because the `topic_status_update` model is obsolete and replaced by topic timer.
Added some shortcut methods for checking if a topic is open/whether a user can change an open topic.
* DEV: TopicTrackingState calls should happen in the background
It was observed that calling TopicTrackingState on popular topics could result in a large number of calls to redis, resulting in slow response times when posting replies.
These calls should be moved to a background job.
* DEV: PostUpdateTopicTrackingState should execute on default queue
Fixes a rare race condition where the `Imap::Sync` class creates an incoming email and associated post/topic, which then kicks off the PostAlerter to notify others in the PM about a reply in the topic. This is unnecessary for the OP, because the person emailing the IMAP inbox already knows about it. Basically, we should never send the group SMTP email for the first post in a topic.
Also in this PR:
* Custom attribute accessors for the to/from/cc addresses on `IncomingEmail`, to parse them from an array to a joined string so the logic for this is only in one place.
* Store extra detail against the `IncomingEmail` created in `GroupSmtpMailer`
* regex test Mail header Reply-To as string instead of Field, which fixes `warning: deprecated Object#=~ is called on Mail::Field; it always returns nil`
* Add DEBUG_IMAP to log all IMAP logs as warnings for easier debugging
* Changed the Rails logging to `ImapSyncLog` in the `GroupSmtpMailer`
Posts from topics with an 'auto delete replies timer' that have more than
skip_auto_delete_reply_likes likes will no longer be deleted. If the
setting is 0, all posts will be deleted.
My initial implementation didn't consider this case. We should skip imported users if the "imported_id" field is present, even if there are other custom fields.
This commit is dedicated to https://twitter.com/FiloSottile/status/1335666583126073354 for reminding me that like timestamps are valuable data.
Likes additionally include the topic_id and post_number of the post acted on, to aid in analysis. Flag export does not include the disposition by staff.
When jobs are enqueued inside a transaction, it's possible that they will be executed before the necessary data is available in the database. This commit ensures all jobs are enqueued in an ActiveRecord after_commit hook.
One potential downside here is if the job fails to enqueue, the transaction will no longer be aborted. However, the chance of that happening is reasonably low, and the impact is significantly lower than the current issue where jobs are scheduled before their data is ready.
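The pattern looks roughly like this (a simplified sketch, not the exact Discourse wiring):
```ruby
class Post < ActiveRecord::Base
  # Runs only after the surrounding transaction commits, so the enqueued
  # job is guaranteed to find this record in the database.
  after_commit :enqueue_process_post, on: :create

  private

  def enqueue_process_post
    Jobs.enqueue(:process_post, post_id: id)
  end
end
```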
The `enqueue_jobs` call was not correctly post-processing the post, since the
post is being created inside a transaction block. This commit explicitly
enqueues the job outside the transaction block.
When the linked topic is created we'll not hardcode the topic title, and
instead let onebox work its magic so that the title can be updated
automatically.
- IgnoredUser records should all now have an expiring_at value. This commit enforces that in the DB, and fixes any corrupt rows
- Changes to the ignored user list are now handled by the `/u/{username}/notification_level` endpoint. This allows setting expiration dates on the ignore. This commit removes the old logic for saving a list of usernames in the user preferences.
- Many specs were calling `IgnoredUser.create`. This commit changes them to use `Fabricate(:ignored_user)` for consistency
b8c676e7 added the 'forever' option to the UI, and this is correctly stored in the database. However, we had a hard-coded limit of 4 months in the cleanup job. This commit removes the limit, so ignores can last forever.
This commit adds a site setting `auto_close_topics_create_linked_topic`
which when enabled works in conjunction with `auto_close_topics_post_count`
setting and creates a new linked topic for the topic just closed.
The auto-created new topic contains a link for all the previous topics
and the topic titles are appended with `(Part {n})`.
The setting is enabled by default.
There is a site setting reply_by_email_enabled which, when combined with reply_by_email_address, creates a Reply-To header in emails in the format "test+%{reply_key}@test.com", along with a PostReplyKey record, so when a user replies Discourse knows where to route the reply.
However this conflicts with the IMAP implementation. Since we are sending the email for a group via SMTP and from their actual email account, we want all replies to go to that email account as well, so the IMAP sync job can pick them up and put them in the correct place. So if the group has IMAP enabled and configured, then the reply-to header will be correct.
This PR also makes a further fix to 64b0b50 by using the correct recipient user for the PostReplyKey record. If the post user is used we encounter this error:
```ruby
if destination.user_id != user.id && !forwarded_reply_key?(destination, user)
  raise ReplyUserNotMatchingError, "post_reply_key.user_id => #{destination.user_id.inspect}, user.id => #{user.id.inspect}"
end
```
This is because the user above is found from the from_address, but the destination (the PostReplyKey) is created from post.user, and these will be different people.
Our Email::Sender class accepts an optional user argument, which is used to create a PostReplyKey record when present. This record is used to sub out the %{reply_key} placeholder in the Reply-To mail header, so if we do not pass in the user we get a broken Reply-To header.
This is especially problematic in the IMAP group SMTP situation, because these emails go to customers that we are replying to, and when they reply to us the email bounces! This fixes the issue by passing user to the Email::Sender when sending a group_smtp email but there is still more to do in another PR.
This Email::Sender optional user is a bit of a footgun IMO, especially because most of the time we use it there is a user we can source. I would like to do another PR for this after this one to make the parameter not optional, so we don't end up with these reply issues down the line again.
The dependency on gifsicle, along with the allow_animated_avatars and
allow_animated_thumbnails site settings, was removed. Animated GIF images
are still allowed, but the optimized images generated for them (which were
used for avatars and thumbnails) are no longer animated.
The added 'animated' column is populated by extracting information using
FastImage. This field was used to selectively reoptimize old animations.
This process happens in the background.
When posts or topics are deleted we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to go and delete bookmarks attached to posts or topics deleted > 3 days ago.
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even if there are no topics to send. The digest job would exit, no email would send, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.
120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.
This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
To avoid blocking the sidekiq queue, a limit of 10,000 digests per 30 minutes
is introduced.
This acts as a safety measure that makes sure we don't keep pouring oil on
a fire.
On multisites it is recommended to set the number way lower so sites do not
dominate the backlog. A reasonable default for multisites may be 100-500.
This can be controlled with the environment var
DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE
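A simplified sketch of the selection; the real query also filters by digest frequency and last_emailed_at, so the shape below is an assumption:
```ruby
max_digests = ENV.fetch("DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE", 10_000).to_i

# Users whose digests were attempted longest ago (or never) go first.
UserStat
  .order("digest_attempted_at ASC NULLS FIRST")
  .limit(max_digests)
  .pluck(:user_id)
  .each { |user_id| Jobs.enqueue(:user_email, type: :digest, user_id: user_id) }
```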
See https://meta.discourse.org/t/changing-a-users-email/164512 for additional context.
Previously when an admin user changed a user's email we assumed that they would need a password reset too because they likely did not have access to their account. This proved to be incorrect, as there are other reasons a user needs admin to change their email. This PR:
* Changes the admin change email for user flow so the user is sent an email to confirm the change
* We now record who the email change request was requested by
* If the requested by user is admin and not the user we note this in the email sent to the user
* We also make the confirm change email route open to anonymous users, so it can be clicked by the user even if they do not have access to their account. If there is a logged in user we make sure the confirmation matches the current user.
* FEATURE: Export the entire user profile as json, not just bio/website
* FEATURE: Add session log information to user export
Even though the columns are named 'auth_token' etc., the content is not actually usable to log into the forum with. Despite that, it is still truncated for export, to avoid any 'token hash cracking' situations.
This PR removes the user reminder topic timers, because that system has been supplanted and improved by bookmark reminders. The option is removed from the UI and all existing user reminder topic timers are migrated to bookmark reminders.
The migration does this (a condensed sketch follows the list):
* Gets all topic_timers with status_type 5 (reminders)
* Gets all bookmarks where the user ID and topic ID match
* Loops through the found topic timers
* If there is no bookmark for the OP of the topic, creates a bookmark with a reminder
* If there is a bookmark for the OP of the topic and it does **not** have a reminder set, updates it with the topic timer reminder
* If there is a bookmark for the OP of the topic with a reminder, discards the topic timer
* Cancels all outstanding user reminder topic timers
* **Trashes (not deletes) all user reminder topic timers**
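The condensed sketch referenced above (the status_type value and bookmark columns are assumptions; job cancellation is omitted):
```ruby
# Condensed sketch of the migration logic; names are assumptions.
TopicTimer.where(status_type: 5).find_each do |timer|
  post = timer.topic&.first_post
  next if post.nil?

  bookmark = Bookmark.find_by(user_id: timer.user_id, post_id: post.id)
  if bookmark.nil?
    Bookmark.create!(user_id: timer.user_id, post_id: post.id, reminder_at: timer.execute_at)
  elsif bookmark.reminder_at.nil?
    bookmark.update!(reminder_at: timer.execute_at)
  end

  timer.trash! # trashed, not deleted, in case data needs to be recovered
end
```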
Notes:
* For now I have left the user reminder topic timer job class in place; this is so the jobs can be cancelled in the migration. It and the specs will be deleted in the next PR.
* At a later date I will write a migration to delete all trashed user topic timers. They are not deleted here in case there are data issues and they need to be recovered.
* A future PR will change the UI of the topic timer modal to make it look more like the bookmark modal.
After restoring a backup it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. This change prevents alerts about missing uploads by preventing the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore it deletes the count of missing uploads from the PluginStore, so that an alert isn't triggered by an old number.
It is possible that a user could exist without an email; if so, we should
not enqueue a job to download their gravatar.
This commit resolves this error that can occur:
```
Job exception: undefined method `email' for nil:NilClass
/var/www/discourse/app/models/user.rb:1204:in `email'
/var/www/discourse/app/jobs/regular/update_gravatar.rb:12:in `execute'
```
This commit also fixes the original spec, which was actually wrong. The
job was never enqueued in the original spec, so the gravatar was never
actually updated; the test was checking whether the two values were the
same, but they were both nil and never updated, so of course they were
the same!
A new test has also been added to make sure the gravatar job isn't
enqueued when a user's email is missing.
* FEATURE: Use predictable filenames inside the user archive export
* FEATURE: Include badges in user archive export
* FEATURE: Add user_visits table to the user archive export
This is in preparation for improvements to the user archive export data.
Some refactors happened along the way, including calling the different _export methods 'components' of the zip file.
Additionally, make the test for post export much more comprehensive.
Copy sources:
app/jobs/regular/export_csv_file.rb
spec/jobs/export_csv_file_spec.rb
With the addition of `PostSearchData#private_message`, a partial
index consisting of only search data from regular posts can be created.
The partial index helps to speed up searches on large sites since PG
will not have to do an index scan on the entire search data index, which
has been shown to be a bottleneck.
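A sketch of such a partial index in a Rails migration (index name and exact options are assumptions):
```ruby
class AddRegularPostSearchIndex < ActiveRecord::Migration[6.0]
  def change
    # Only index search data belonging to regular (non-PM) posts, so public
    # searches scan a much smaller index.
    add_index :post_search_data,
              :search_data,
              using: :gin,
              where: "NOT private_message",
              name: "idx_regular_post_search_data"
  end
end
```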
Convert all IMAP logging to write to a database table for easier inspection. These logs are cleaned up daily if they are > 5 days old.
Logs can easily be watched in dev by setting DISCOURSE_DEV_LOG_LEVEL="debug" and running `tail -f development.log | grep IMAP`
* FEATURE: don't notify about changed tags for a private message
Only staff members watching the specific tag should receive a notification
* FIX: remove an unused category
* FIX: improved specs to ensure that the revise was successful
When a tab is open but left unattended for a while, the red, green, and blue
pills tend to go out of sync.
So whenever we open the notifications menu, we sync up the notification count
(e.g. blue and green pills) with the server.
However, the reviewable count (e.g. the red pill) is not a notification and
is located in the hamburger menu. This commit adds a new route on the server
side to retrieve the reviewable count for the current user and a ping
(refreshReviewableCount) from the client side to sync the reviewable count
whenever they open the hamburger menu.
REFACTOR: I also refactored the hamburger-menu widget code to prevent repetitive uses
of "this.".
PERF: I improved the performance of the 'notify_reviewable' job by doing only 1 query
to the database to retrieve all the pending reviewables and then tallying based on the
various rights.
This commit should cause no functional change
- Split into functions to avoid deep nesting
- Register custom field type, and remove manual json parse/serialize
- Recover from deleted upload records
Also adds a test to ensure pull_hotlinked_images redownloads secure images only once
Was tired of seeing the following warnings in the logs
```
/discourse/app/jobs/scheduled/old_keys_reminder.rb:7: warning: already initialized constant Jobs::OldKeysReminder::OLD_CREDENTIALS_PERIOD
/discourse/app/jobs/scheduled/old_keys_reminder.rb:7: warning: previous definition of OLD_CREDENTIALS_PERIOD was here
```
* FEATURE: notify admins about old credentials
Security and API keys should be renewed periodically.
This additional notification should help admins keep their Discourse safe and secure.
Previously the pull hotlinked images job was skipped after system edits. This ensured that we never had an infinite loop of system-edit/pull-hotlinked/system-edit/pull-hotlinked etc.
A side effect was that edits made by system for any other reason (e.g. API, removing full quotes) would prevent pulling hotlinked images. This commit removes the system edit check, and replaces it with another method to avoid an infinite job scheduling loop.
* DEV: new S3 backup layout
Currently, with $S3_BACKUP_BUCKET of "bucket/backups", multisite backups
end up in "bucket/backups/backups/dbname/" and single-site will be in
"bucket/backups/".
Both _should_ be in "bucket/backups/dbname/"
- remove MULTISITE_PREFIX
- always include dbname
- method to move to the new prefix
- job to call the method
* SPEC: add tests for `VacateLegacyPrefixBackups` onceoff job.
Co-authored-by: Vinoth Kannan <vinothkannan@vinkas.com>
When running jobs in tests, we use `Jobs.run_immediately!`. This means that jobs are run synchronously when they are enqueued. Jobs sometimes enqueue other jobs, which are also executed synchronously. This means that the outermost job will block until the inner jobs have finished executing. In some cases (e.g. process_post with hotlinked images) this can lead to a deadlock.
This commit changes the behavior slightly. Now we will never run jobs inside other jobs. Instead, we will queue them up and run them sequentially in the order they were enqueued. As a whole, they are still executed synchronously. Consider the example
```ruby
class Jobs::InnerJob < Jobs::Base
def execute(args)
puts "Running inner job"
end
end
class Jobs::OuterJob < Jobs::Base
def execute(args)
puts "Starting outer job"
Jobs.enqueue(:inner_job)
puts "Finished outer job"
end
end
Jobs.enqueue(:outer_job)
puts "All jobs complete"
```
The old behavior would result in:
```
Starting outer job
Running inner job
Finished outer job
All jobs complete
```
The new behavior will result in:
```
Starting outer job
Finished outer job
Running inner job
All jobs complete
```