Adds the parallel_tests gem and Redis/Postgres configuration for running RSpec tests in parallel. To use:
```
rake parallel:rake[db:create]
rake parallel:rake[db:migrate]
rake parallel:spec
```
This brings the test suite from 12m20s down to 3m11s on my macOS machine.
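The isolation between workers comes down to giving each one its own databases. A minimal sketch of the idea, with assumed names rather than Discourse's actual config:
```
# parallel_tests exports TEST_ENV_NUMBER per worker ("" for the first,
# then "2", "3", ...); folding it into the Postgres database name and the
# Redis database index keeps the workers from colliding.
worker = ENV["TEST_ENV_NUMBER"].to_i # "" => 0 for the first worker
pg_database = "discourse_test#{worker.zero? ? "" : worker}"
redis_db    = 1 + worker
```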
Some cloud providers (e.g. Google Memorystore) do not support any CLIENT commands. By setting `:id` to nil in the Redis config hash we can avoid these commands.
This adds a special global setting GCE users can enable:
`DISCOURSE_REDIS_SKIP_CLIENT_COMMANDS = true`
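A minimal sketch of the mechanism, not Discourse's exact connector code: redis-rb only issues `CLIENT SETNAME` when a connection `:id` is present, so nil-ing it out keeps the handshake free of CLIENT commands.
```
require "redis"

config = { host: "10.0.0.1", port: 6379, id: "discourse" }
# Drop the connection name so redis-rb never sends CLIENT SETNAME.
config[:id] = nil if ENV["DISCOURSE_REDIS_SKIP_CLIENT_COMMANDS"] == "true"
redis = Redis.new(config)
```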
When `s3_bucket="bucket/folder"` in discourse.conf, absolute_base_url
was `bucket/folder.s3-region.amazonaws.com`, which is not a valid S3 endpoint.
These names are bad, but this mirrors the existing s3_bucket/s3_bucket_name distinction in S3Store.
N.B. that the nearby s3_upload_bucket _should_ include the folder.
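An illustrative sketch of the fix (setting accessors assumed): only the portion of s3_bucket before the first "/" is a hostname-safe bucket name, so it alone goes into the endpoint host.
```
# The folder half of "bucket/folder" belongs in the object key, never in
# the hostname.
bucket = GlobalSetting.s3_bucket.downcase.split("/").first
absolute_base_url = "//#{bucket}.s3-#{GlobalSetting.s3_region}.amazonaws.com"
```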
* In `pg_dump` 10.3+ and 9.5.12+, a patch was introduced that does a
`SELECT pg_catalog.set_config('search_path', '', false)`,
which changes the state of the current connection. This is known
to be problematic with PgBouncer, which reuses connections. As such,
we'll always try to connect directly to PG during
the backup/restore process.
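A hedged sketch of the idea (the direct_host/direct_port names are hypothetical): build the pg_dump invocation against Postgres itself rather than the pooled endpoint.
```
# If a direct PG address is configured, prefer it over the (possibly
# PgBouncer-fronted) default host, so the connection whose search_path
# pg_dump rewrites is never handed back to a pool.
def pg_dump_command(config)
  host = config[:direct_host] || config[:host]
  port = config[:direct_port] || 5432
  ["pg_dump", "--host=#{host}", "--port=#{port}", config[:database]]
end

system(*pg_dump_command(database: "discourse",
                        host: "pgbouncer.internal",
                        direct_host: "postgres.internal"))
```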
This refactors the handling of S3 so it can be specified via GlobalSetting.
This means that in a multisite environment you can configure S3 uploads
without the individual sites knowing the S3 credentials.
It is a critical setting for situations where assets are mirrored to S3.
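A sketch of the precedence this implies (the method body is an assumption, not the actual S3Store code): cluster-wide GlobalSetting values win over per-site SiteSettings, so the credentials only ever live in discourse.conf or environment variables.
```
# Prefer the cluster-level credential when present; fall back to the
# per-site setting otherwise.
def s3_access_key_id
  GlobalSetting.s3_access_key_id.presence || SiteSetting.s3_access_key_id
end
```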
This ensures we have some handling for Redis FLUSHALL.
We attempt to recover our in-memory session token once every 30 seconds.
The code is careful to set the token only if it is nil, so that manual
cycling remains safe if needed.
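A minimal sketch of that recovery step, with the key name and the 30-second scheduler assumed:
```
require "securerandom"

# The in-memory copy is refilled only when it is nil (||=), and Redis is
# re-seeded only when FLUSHALL wiped the key (SETNX), so a manually cycled
# token is never clobbered.
def recover_session_token(redis)
  @session_token ||= redis.get("session_token") || SecureRandom.hex(32)
  redis.setnx("session_token", @session_token)
end
```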
Revamped system for managing authentication tokens.
- Every user has 1 token per client (web browser)
- Tokens are rotated every 10 minutes
The new system migrates the old tokens to "legacy" tokens,
so users remain logged in.
It also introduces a weekly job to expire old auth tokens.
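An illustrative sketch of the rotation rule (the column names are assumptions about the token model):
```
require "securerandom"

ROTATE_EVERY = 10 * 60 # seconds

# Rotate a client's token once it is older than the window, keeping the
# previous value around so requests already in flight still authenticate.
def rotate_if_needed!(token_row)
  return token_row if Time.now - token_row.rotated_at < ROTATE_EVERY
  token_row.update!(
    prev_auth_token: token_row.auth_token,
    auth_token: SecureRandom.hex(16),
    rotated_at: Time.now
  )
  token_row
end
```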