Adds a `DISCOURSE_MESSAGE_BUS_REDIS_ENABLED` env var that, when set
to true, allows Discourse to connect to a different Redis
instance for MessageBus.
When enabled, you can configure the same env vars used for Redis,
but prefixed with `MESSAGE_BUS`, e.g.:
`DISCOURSE_MESSAGE_BUS_REDIS_HOST`
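For example, a deployment might point MessageBus at its own instance like this (host and port values are illustrative; the names follow the prefixing convention described above):

```
DISCOURSE_MESSAGE_BUS_REDIS_ENABLED=true
DISCOURSE_MESSAGE_BUS_REDIS_HOST=messagebus-redis.internal
DISCOURSE_MESSAGE_BUS_REDIS_PORT=6379
```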
The new `DISCOURSE_MAXMIND_BACKUP_PATH` env var can be used as a secondary location
for the MaxMind DB. That way a build machine, for example, can cache it on the
host and reuse it between builds.
Also, per 5bfeef77, adds proper error raising for download failures from
the dedicated rake task.
This also moves `refresh_maxmind_db_during_precompile_days` to a global
setting; it did not make sense as a site setting.
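A sketch of how this might be set on a build machine (the path is illustrative, and the second name assumes the usual `DISCOURSE_`-prefixed env mapping for global settings):

```
DISCOURSE_MAXMIND_BACKUP_PATH=/shared/maxmind-cache
DISCOURSE_REFRESH_MAXMIND_DB_DURING_PRECOMPILE_DAYS=2
```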
This cleans up the Logster configuration a bit, since we no longer have to
check whether anything `respond_to?`s, and keeps the Logster limit properly
documented.
Follow-up to da578e92
- `s3_force_path_style` was added as a Minio-specific URL scheme, but it has never been well supported in our codebase.
- Our new `migrate_to_s3` rake task does not work reliably with path-style URLs either.
- Minio has also added support for virtual-style requests, i.e. the same scheme as AWS S3/DO Spaces, so we can rely on that instead of using path-style requests.
- Adds a migration to drop `s3_force_path_style` from the site_settings table.
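A hedged sketch of what that drop migration could look like (class name and migration version are illustrative, not the actual Discourse migration):

```ruby
class DropS3ForcePathStyle < ActiveRecord::Migration[5.2]
  def up
    # Remove the obsolete setting row; installs fall back to
    # virtual-style (default) S3 requests.
    execute "DELETE FROM site_settings WHERE name = 's3_force_path_style'"
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```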
Some cloud providers (e.g. Google Memorystore) do not support any CLIENT commands.
By setting `:id` to nil in the Redis config hash we can avoid these commands.
This adds a special global setting GCE users can enable:
`DISCOURSE_REDIS_SKIP_CLIENT_COMMANDS = true`
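A minimal sketch of the idea, assuming a plain redis-rb client (the host is illustrative, and `GlobalSetting.redis_skip_client_commands` assumes the usual generated accessor for the setting):

```ruby
require "redis"

config = { host: "10.0.0.5", port: 6379 }
# redis-rb uses :id to issue CLIENT SETNAME on connect; a nil :id
# means no CLIENT command is ever sent.
config[:id] = nil if GlobalSetting.redis_skip_client_commands
redis = Redis.new(config)
```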
We have a periodic job that regularly rebakes old posts. This is
used to trickle in updates to cooked markdown. The problem is that each rebake
can issue multiple background jobs (post processing and pulling hotlinked images).
Previously we had no per-cluster limit, so a cluster running hundreds of sites could
flood the Sidekiq queue with rebake-related jobs.
The new system introduces a hard limit of 300 rebakes per 15 minutes across a
cluster to ensure the Sidekiq queue is not dominated by this.
We also reduced `rebake_old_posts_count` to 80, which is a safer default.
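A hedged sketch of how such a cluster-wide cap can be enforced with a shared Redis counter (key name and method are illustrative, not the actual implementation):

```ruby
MAX_REBAKES    = 300
WINDOW_SECONDS = 15 * 60

# Returns true if this rebake fits under the cluster-wide cap.
def rebake_allowed?(redis)
  count = redis.incr("rebakes_in_window")
  # The first increment in a window starts the 15 minute clock.
  redis.expire("rebakes_in_window", WINDOW_SECONDS) if count == 1
  count <= MAX_REBAKES
end
```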
If "logged in" is being forced anonymous on certain routes, trigger
the protection for any requests that spend 50ms queueing
This means that ...
1. You need to trip it by having 3 requests take longer than 1 second in 10 second interval
2. Once tripped, if your route is still spending 50m queueuing it will continue to be protected
This means that site will continue to function with almost no delays while it is scaling up to handle the new load
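A hedged sketch of that trip-and-hold behaviour (names, state shape and the reset rule are illustrative):

```ruby
TRIP_COUNT      = 3     # slow requests needed to trip protection
TRIP_QUEUE_SECS = 1.0   # a request is "slow" past 1s of queueing
TRIP_WINDOW     = 10    # seconds the slow requests must fall within
HOLD_QUEUE_SECS = 0.05  # stay tripped while queueing exceeds 50ms

def force_anonymous?(queue_secs, state, now = Time.now.to_f)
  state[:slow] << now if queue_secs > TRIP_QUEUE_SECS
  state[:slow].reject! { |t| now - t > TRIP_WINDOW }
  state[:tripped] ||= state[:slow].size >= TRIP_COUNT
  state[:tripped] &&= queue_secs > HOLD_QUEUE_SECS # resets once queueing recovers
  state[:tripped]
end
```

Here `state` would start as `{ slow: [], tripped: false }` and be kept per process.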
If a particular path is being hit extremely hard by logged-on users,
revert to the anonymous cached view, as sketched below.
This will only come into effect if 3 requests queue for longer than 2 seconds
on a *single* path.
This can happen if a URL is shared with the entire forum base and everyone
is logged on.
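A hedged, per-path variant of the same idea (names are illustrative, and expiry of the counters is elided for brevity):

```ruby
PATH_TRIP_COUNT = 3
PATH_QUEUE_SECS = 2.0

slow_by_path = Hash.new(0)

# Returns true once a single path has seen enough badly queued requests.
def anon_cache_path?(path, queue_secs, slow_by_path)
  slow_by_path[path] += 1 if queue_secs > PATH_QUEUE_SECS
  slow_by_path[path] >= PATH_TRIP_COUNT
end
```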
* In `pg_dump` 10.3+ and 9.5.12+,
it does a `SELECT pg_catalog.set_config('search_path', '', false)`
which changes the state of the current connection. This is known
to be problematic with PgBouncer, which reuses connections. As such,
we'll always try to connect directly to PG during
the backup/restore process.
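A hedged sketch of bypassing the pooler for backups (option names are illustrative, not the actual Discourse configuration):

```ruby
# Prefer a direct PG connection over the pooled (PgBouncer) one when
# running pg_dump/pg_restore, so the emptied search_path never leaks
# into a reused pooled connection.
def backup_db_config(config)
  config.merge(
    host: config[:backup_host] || config[:host],
    port: config[:backup_port] || 5432 # straight to PG, not the pooler
  )
end
```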
This refactors handling of S3 so it can be specified via GlobalSetting.
This means that in a multisite environment you can configure S3 uploads
without the individual sites knowing the S3 credentials.
It is a critical setting for situations where assets are mirrored to S3.
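For example, a multisite container might carry something like this (names follow the `DISCOURSE_`-prefixed GlobalSetting convention; values are placeholders):

```
DISCOURSE_USE_S3=true
DISCOURSE_S3_REGION=us-east-1
DISCOURSE_S3_ACCESS_KEY_ID=<access key>
DISCOURSE_S3_SECRET_ACCESS_KEY=<secret key>
DISCOURSE_S3_BUCKET=uploads-bucket
```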
Revamped system for managing authentication tokens.
- Every user has 1 token per client (web browser)
- Tokens are rotated every 10 minutes
The new system migrates the old tokens to "legacy" tokens,
so users remain logged in.
It also introduces a weekly job to expire old auth tokens.
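A hedged sketch of the rotation check (model name, columns and helper are illustrative, not the actual Discourse schema):

```ruby
require "securerandom"

ROTATE_AFTER_SECS = 10 * 60

# Rotate the per-client token once it is older than the rotation window.
def validate_and_rotate!(token_record, now = Time.now)
  return nil unless token_record
  if now - token_record.rotated_at > ROTATE_AFTER_SECS
    token_record.update!(auth_token: SecureRandom.hex(16), rotated_at: now)
  end
  token_record
end
```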