This ensures that, on a multisite cluster, when we handle a CDN request,
the requested hostname corresponds to one of the sites -
specifically the default site.
Sometimes we would like to create a base image without any DB access; this
assists in creating custom base images with custom plugins that already
include `public/assets`.
Following this change set you can run:
```
SPROCKETS_CONCURRENT=1 DONT_PRECOMPILE_CSS=1 SKIP_DB_AND_REDIS=1 RAILS_ENV=production bin/rake assets:precompile
```
Then it is straightforward to create a base image without needing a DB or
Redis.
Adds the `DISCOURSE_MESSAGE_BUS_REDIS_ENABLED` env var which, when set
to true, allows Discourse to connect to a different Redis
instance for MessageBus needs.
When enabled you can configure the same env vars used for Redis,
but prefixed with `MESSAGE_BUS`, e.g.:
`DISCOURSE_MESSAGE_BUS_REDIS_HOST`
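For example (host and port values here are illustrative, and the non-`HOST`
variable names are assumed to follow the same prefixing pattern):
```
DISCOURSE_MESSAGE_BUS_REDIS_ENABLED=true
DISCOURSE_MESSAGE_BUS_REDIS_HOST=message-bus-redis.internal
DISCOURSE_MESSAGE_BUS_REDIS_PORT=6379
```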
This reduces the chance of errors where consumers of strings mutate inputs,
and reduces the memory usage of the app.
The test suite passes now, but there may be some issues left, so we will run
a few sites on a branch prior to merging.
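A minimal sketch of the mechanism, assuming the change works by freezing
string literals (e.g. via Ruby's magic comment); the names are illustrative:
```
# frozen_string_literal: true

# With the magic comment above, string literals in this file are frozen:
# accidental in-place mutation raises instead of silently changing shared
# state, and identical literals can be deduplicated, which is where the
# memory savings come from.
GREETING = "hello"

begin
  GREETING << " world" # in-place mutation of a frozen string
rescue RuntimeError => e # FrozenError (a RuntimeError subclass) on newer Rubies
  puts "mutation rejected: #{e.message}"
end
```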
Adds the parallel_tests gem, and Redis/Postgres configuration for running RSpec tests in parallel. To use:
```
rake parallel:rake[db:create]
rake parallel:rake[db:migrate]
rake parallel:spec
```
This brings the test suite from 12m20s to 3m11s on my macOS machine
Some cloud providers (e.g. Google Memorystore) do not support any CLIENT commands.
By setting `:id` to nil in the Redis config hash we can avoid these commands.
This adds a special global setting GCE users can enable:
`DISCOURSE_REDIS_SKIP_CLIENT_COMMANDS = true`
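A minimal sketch of the idea (connection details and the env lookup are
illustrative): the client only issues `CLIENT SETNAME` when a connection id
is configured, so dropping the id avoids the command.
```
require "redis"

# a sketch: with no :id, the client never runs CLIENT SETNAME on connect,
# one of the CLIENT commands Memorystore rejects
config = { host: "localhost", port: 6379, id: "discourse" }
config[:id] = nil if ENV["DISCOURSE_REDIS_SKIP_CLIENT_COMMANDS"] == "true"
redis = Redis.new(config)
```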
When `s3_bucket = "bucket/folder"` in discourse.conf, `absolute_base_url`
was `bucket/folder.s3-region.amazonaws.com`.
These names are bad, but this mirrors `s3_bucket`/`s3_bucket_name` in
`S3Store`.
N.B. the nearby `s3_upload_bucket` _should_ include the folder.
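A hedged sketch of the shape of the fix (variable names and the region are
illustrative, not the actual implementation):
```
# hypothetical: only the bare bucket name belongs in the hostname;
# the folder becomes part of the object path instead
s3_bucket = "bucket/folder"
bucket_name, folder_path = s3_bucket.split("/", 2)

absolute_base_url = "//#{bucket_name}.s3-us-east-1.amazonaws.com"
# => "//bucket.s3-us-east-1.amazonaws.com"
```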
* In `pg_dump` 10.3+ and 9.5.12+,
it does a `SELECT pg_catalog.set_config('search_path', '', false)`
which changes the state of the current connection. This is known
to be problematic with PgBouncer, which reuses connections. As such,
we'll always try to connect directly to PG during
the backup/restore process.
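A minimal sketch of the approach (env var names and defaults are assumptions,
not the actual implementation): resolve a host/port that points straight at
PostgreSQL, bypassing PgBouncer, before shelling out to `pg_dump`.
```
# hypothetical sketch: use a direct-to-PostgreSQL host for backups so the
# search_path change made by pg_dump never lands on a pooled connection
host = ENV["DB_BACKUP_HOST"] || ENV["DB_HOST"] || "localhost"
port = ENV["DB_BACKUP_PORT"] || "5432"
database = ENV["DB_NAME"] || "discourse"

system("pg_dump", "--host=#{host}", "--port=#{port}",
       "--format=custom", "--file=backup.dump", database)
```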
This refactors handling of S3 so it can be specified via GlobalSetting.
This means that in a multisite environment you can configure S3 uploads
without individual sites needing to know the S3 credentials.
It is a critical setting for situations where assets are mirrored to S3.
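For example, credentials could live only in the environment (variable names
assume the usual `DISCOURSE_` prefix mapping of global settings; values are
placeholders):
```
DISCOURSE_USE_S3=true
DISCOURSE_S3_REGION=us-east-1
DISCOURSE_S3_BUCKET=example-uploads
DISCOURSE_S3_ACCESS_KEY_ID=example-key-id
DISCOURSE_S3_SECRET_ACCESS_KEY=example-secret
```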
This ensures we have some handling for Redis `flushall`.
We attempt to recover our in-memory session token once every 30 seconds.
The code is careful to only set the token if it is nil, so that manual
cycling remains safe if needed.
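A minimal sketch of the nil-guarded recovery loop (the class, interval wiring
and token source are assumptions based on the description above):
```
# hypothetical sketch: periodically try to recover the token, but only fill
# it in when it is nil, so a deliberate manual cycle is never overwritten
class SessionTokenHolder
  RECOVERY_INTERVAL = 30 # seconds

  def initialize(&fetcher)
    @token = nil
    @fetcher = fetcher # whatever source we recover the token from
  end

  def start
    Thread.new do
      loop do
        @token ||= @fetcher.call
        sleep RECOVERY_INTERVAL
      end
    end
  end
end
```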
Revamped system for managing authentication tokens.
- Every user has 1 token per client (web browser)
- Tokens are rotated every 10 minutes
The new system migrates the old tokens to "legacy" tokens,
so users remain logged in.
It also introduces a weekly job to expire old auth tokens.
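A rough sketch of the rotation rule (names and storage are assumptions, not
the actual schema): a client's token is replaced once it is 10 minutes old.
```
require "securerandom"

# hypothetical: one token per client, regenerated when it is 10+ minutes old
ROTATE_AFTER = 10 * 60 # seconds

Token = Struct.new(:value, :rotated_at)

def rotate_if_needed(token)
  return token if Time.now - token.rotated_at < ROTATE_AFTER
  Token.new(SecureRandom.hex(16), Time.now)
end
```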