PERF: correct clean up of inactive users so it does not clog the scheduler

also add a hard limit of 1000 users per job run so we do not clog the
scheduler

destroyer.destroy runs inside a transaction, and that can interact badly
with the open record set that find_each keeps while iterating
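
In effect the job now materialises a bounded list of ids before doing any destructive work, instead of destroying users while a find_each cursor is still open. A minimal sketch of that pattern (assuming Discourse's User, UserDestroyer, SiteSetting and I18n APIs; inactive_users_scope is a hypothetical stand-in for the real query, whose joins and extra conditions are elided here):

# Sketch only, not the actual job code.
destroyer = UserDestroyer.new(Discourse.system_user)

# pluck closes the read query before any destroy transaction opens,
# and the limit caps how many users a single scheduled run will touch.
ids = inactive_users_scope.limit(1000).pluck(:id)

ids.each do |id|
  begin
    user = User.find(id)
    destroyer.destroy(user, context: I18n.t("user.destroy_reasons.inactive_user"))
  rescue => e
    # the real job passes extra context here (truncated in the diff below)
    Discourse.handle_job_exception(e)
  end
end
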
Sam Saffron 2019-04-09 22:24:19 +10:00
parent ad5edc8bb1
commit ec1c3559da
1 changed file with 3 additions and 2 deletions


@@ -14,9 +14,10 @@ module Jobs
         "posts.user_id IS NULL AND users.last_seen_at < ?",
         SiteSetting.clean_up_inactive_users_after_days.days.ago
       )
-      .find_each do |user|
+      .limit(1000)
+      .pluck(:id).each do |id|
         begin
+          user = User.find(id)
           destroyer.destroy(user, context: I18n.t("user.destroy_reasons.inactive_user"))
         rescue => e
           Discourse.handle_job_exception(e,