This ensures we only update `last_posted_at`, which is user facing, for posts
that are not messages and not whispers.
We still update this date for secure categories, and we do not revert it for
deleted posts.
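A minimal sketch of the guard this implies, with hypothetical helper names:

```
# Hypothetical sketch -- method names are illustrative, not the actual patch.
def update_last_posted_at(post)
  return if post.whisper?                  # whispers are not user facing
  return if post.topic&.private_message?   # neither are personal messages
  post.user.update!(last_posted_at: post.created_at)
end
```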
Adds the settings:
raw_email_max_length, raw_rejected_email_max_length, delete_rejected_email_after_days.
These settings control retention of the "raw" email logs.
raw_email_max_length ensures that if we get an incoming email that is huge, we truncate it, removing uploads from the raw log.
raw_rejected_email_max_length introduces an even more aggressive truncation for rejected incoming mail.
delete_rejected_email_after_days controls how many days we keep rejected emails for (default 90).
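For illustration, the two truncation settings could be applied like this sketch (the helper is assumed, not the actual implementation):

```
# Assumed helper: pick the right cap and truncate oversized raw email.
def truncate_raw_email(raw, rejected: false)
  limit =
    if rejected
      SiteSetting.raw_rejected_email_max_length
    else
      SiteSetting.raw_email_max_length
    end
  raw && raw.length > limit ? raw[0...limit] : raw
end
```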
Set `DEBUG_NODE=1` when running `rake smoke:test` and use your favorite tool to debug the smoke tests. See https://nodejs.org/en/docs/guides/debugging-getting-started/ for more information.
The debugger will break at the beginning of the smoke tests when the environment variable is set.
Doing .pluck(:column).first is a very common pattern in Discourse, and in
most cases a limit clause isn't being added. Instead of adding a limit
clause at all these call sites, this commit adds two new methods to
ActiveRecord::Relation:
pluck_first, equivalent to limit(1).pluck(*columns).first,
and pluck_first!, which, like other finder methods, raises an exception
when no record is found.
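A sketch of what the two methods could look like (the actual patch may differ in details):

```
# Sketch only -- not necessarily the exact Discourse implementation.
module PluckFirst
  def pluck_first(*columns)
    limit(1).pluck(*columns).first
  end

  def pluck_first!(*columns)
    rows = limit(1).pluck(*columns)
    raise ActiveRecord::RecordNotFound if rows.empty?
    rows.first
  end
end

ActiveRecord::Relation.include(PluckFirst)

# Usage: User.where(admin: true).pluck_first(:username)
```

Note that pluck_first! raises only when no row matches; a matching row whose column is NULL still returns nil.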
Zeitwerk simplifies working with dependencies in dev and makes it easier to reload class chains.
We no longer need to use Rails' "require_dependency" anywhere and can instead just use standard
Ruby patterns to require files.
This is a far-reaching change and we expect some follow-ups here.
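For illustration only, the kind of change this enables:

```
# Before (classic autoloader): explicit tracking so reloading worked.
require_dependency "post_creator"

# After (Zeitwerk): constants under the autoload paths resolve by naming
# convention, so the explicit require is usually unnecessary; plain
# require/require_relative still work for files outside those paths.
PostCreator
```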
In Rails 6, due to internal changes, the following sequence no longer works:
```
RAILS_ENV=test bin/rake db:migrate
RAILS_ENV=test bin/rake db:schema:dump
dropdb discourse_test
createdb discourse_test
RAILS_ENV=test bin/rake db:schema:load
RAILS_ENV=test bin/rake db:migrate
```
What appears to be happening is that our tracking of plugin migrations is
being missed on schema:dump or load.
A more comprehensive fix restoring schema:dump / load support will be
investigated.
Prior to this change, plugin migrations were not working and multisite
migrations were not working either.
Rails internals changed, so we need to account for that.
Specifically, the semantics of `db:migrate` in Rails changed, so it is now sort of
a "multisite:migrate".
* FIX: inline_uploads and subfolder
* if subfolder, also look for images with a path containing
cdn_url + relative_url_root (see the sketch after this list)
* FIX: migrate_to_s3 task and subfolder
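A loose sketch of that subfolder matching (names like `img_srcs` are assumptions, not the exact patch):

```
# Assumed names, for illustration only.
cdn_url = GlobalSetting.cdn_url   # e.g. "https://cdn.example.com"
prefix  = "#{cdn_url}#{Discourse.base_uri}/uploads"

subfolder_imgs = img_srcs.select { |src| src.include?(prefix) }
```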
Running this inline makes more sense, otherwise there is extreme risk of
saturating the Sidekiq queue.
This also reworks ordering and selection so we double check whether a post needs
rebaking prior to rebaking; this unlocks the ability to run this rake task
from multiple consoles.
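A sketch of that double check, assuming posts track a `baked_version` (illustrative, not the exact query):

```
# Re-verify right before the expensive work so concurrent console runs
# don't rebake the same post twice.
Post.where("baked_version IS NULL OR baked_version <> ?", Post::BAKED_VERSION)
    .find_each do |post|
  post.reload
  next if post.baked_version == Post::BAKED_VERSION # already rebaked elsewhere
  post.rebake!
end
```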
* REFACTOR: Rename SiteSetting.disable_edit_notifications to disable_system_edit_notifications
- The older name could cause some confusion because the setting does not disable all edit notifications, only system ones.
* FIX: Add frozen_string_literal: true in the migration
* DEV: Deprecate 'disable_edit_notifications'
Follow up to: [FEATURE: Create a rake task for destroying categories][1]
- `Discourse.system_user` is my friend
- Remove puts statements from rake tasks that don't return anything
- `find_each` is also my friend
- Use `human_users` to also exclude discobot
- Sort/format categories:list
[1]: 092eeb5ca3
Created a rake task for destroying multiple categories along with any
subcategories and topics that belong to those categories.
Also created a rake task for listing all of your categories.
Refactored existing destroy rake tasks to use a new logging method that
allows puts output in the console but prevents it from showing in
the specs.
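The logging helper could be as simple as this sketch (assumed shape, based on the description above):

```
# Assumed sketch: print in a normal console run, stay quiet in specs.
def log(message)
  puts(message) unless Rails.env.test?
end
```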
* DEV: Add a new way to run specs in parallel with better output
This commit:
1. adds a new executable, `bin/interleaved_rspec` which works much like
`rspec`, but runs the tests in parallel.
2. adds a rake task, `rake interleaved:spec` which runs the whole test
suite.
3. makes autospec use this new wrapper by default. You can disable this
by running `PARALLEL_SPEC=0 rake autospec`.
It works much like the `parallel_tests` gem (and relies on it), but
makes each subprocess use a machine-readable formatter and parses this
output in order to provide a better overall summary.
(It's called interleaved because parallel was taken and naming is
hard.)
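For a sense of the approach, a rough sketch of a line-oriented, machine-readable RSpec formatter (illustrative, not the actual TurboTests code):

```
require "json"
require "rspec/core"

# Emits one JSON object per line so the parent process can parse results
# from each subprocess and interleave them into a single summary.
class JsonRowsFormatter
  RSpec::Core::Formatters.register self, :example_passed, :example_failed

  def initialize(output)
    @output = output
  end

  def example_passed(notification)
    emit(status: "passed", description: notification.example.full_description)
  end

  def example_failed(notification)
    emit(status: "failed", description: notification.example.full_description)
  end

  private

  def emit(row)
    @output.puts(row.to_json)
  end
end
```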
* Make popen3 invocation safer
* Use FileUtils instead of shelling out
* DRY up reporter
* Moved summary logic into Reporter
* s/interleaved/turbo/g
* Move Reporter into its own file
* Moved run into its own class
* Moved Runner into its own file
* Move JsonRowsFormatter under TurboTests
* Join on threads at the end
* Acted on feedback from eviltrout
This fixes a condition where an intermittent db connection could cause
invalid site settings to be stored.
It also removes a catch-all we had.
Somewhere around Rails 5, `db:create` started wanting the full environment;
this is a problem for Discourse since it needs to boot up data from the
db.
This removes the catch-all and surgically adds a db / redis bypass to the
db:create task.
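One hypothetical way to express that surgical bypass (task detection and the flag name are assumptions):

```
# Sketch: only skip db/redis access while booting for db:create.
if defined?(Rake.application) &&
   Rake.application.top_level_tasks.include?("db:create")
  ENV["SKIP_DB_AND_REDIS"] = "1" # assumed flag, checked early during boot
end
```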
* Support private uploads in S3
* Use localStore for local avatars
* Add job to update private upload ACL on S3
* Test multisite paths
* Update ACL for private uploads in migrate_to_s3 task
This also corrects FileHelper.download so it supports "follow_redirect"
correctly (it used to always follow 1 redirect) and adds a `validate_url`
param that will bypass all URI validation if set to false (default is true).
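A hedged usage sketch; keyword names other than `follow_redirect` and `validate_url` are illustrative:

```
# Illustrative call -- only follow_redirect/validate_url come from this change.
file = FileHelper.download(
  url,
  max_file_size: 1.megabyte, # assumed keyword, for illustration
  tmp_file_name: "download", # assumed keyword, for illustration
  follow_redirect: true,     # redirects now followed only when requested
  validate_url: false        # skip URI validation when explicitly disabled
)
```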
This new `DISCOURSE_MAXMIND_BACKUP_PATH` can be used as a secondary location
for the MaxMind db. That way a build machine, for example, can cache it on the
host and reuse it between builds.
Also, per 5bfeef77, added proper error raising for download failures from the
dedicated rake task.
This also moves "refresh_maxmind_db_during_precompile_days" to a global
setting; it did not make sense as a site setting.
`rake posts:recover_uploads_from_index`
Searches through all missing uploads in the cluster; if it finds one, it
tries to find it in the "upload index file" and creates a new upload for
it.
Previously we were only catching one type of data export; the new job will
catch every CSV export we have.
The job is pretty safe, as it filters on the system user id / a PM with a
particular slug.
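A sketch of the kind of filter described (the slug pattern is an assumption):

```
# Assumed sketch: only personal messages sent by the system user whose
# slug marks them as CSV export notifications.
topics =
  Topic
    .private_messages
    .where(user_id: Discourse::SYSTEM_USER_ID)
    .where("slug LIKE ?", "%csv-export%")
```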