Commit f69dacf979 introduced a bug to the system:
Restore still works fine for multisite, but it stopped working for non-multisite.
The reason is that the `establish_connection` method gained a check for whether the multisite instance is available:
```
def self.instance
  @instance
end

def self.establish_connection(opts)
  # no-op when the multisite instance hasn't been set up
  @instance.establish_connection(opts) if @instance
end
```
However, the `reload` method doesn't have that check:
```
def self.reload
  @instance = new(instance.config_filename)
end
```
To solve it, let's ensure we are in a multisite environment before calling `reload`.
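One possible shape of the guard, mirroring the one in `establish_connection` above (the actual fix may instead check for multisite at the call site):
```
def self.reload
  @instance = new(instance.config_filename) if @instance
end
```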
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails `require_dependency` anywhere and can instead use standard
Ruby patterns to require files.
This is a far-reaching change and we expect some follow-ups here.
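For example (the file path is illustrative, not taken from the actual change):
```
# Before, with the classic Rails autoloader:
require_dependency "backup_restore/backup_store"

# After, with Zeitwerk, a plain Ruby require works:
require "backup_restore/backup_store"
```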
Temporarily recreate already-dropped functions in the `discourse_functions` schema in order to allow restoring backups which still reference dropped functions.
Follow-up to [FIX: Empty backup names with unicode site titles][1]
- Use `.presence` - "It's cleaner"
- Update the spec to use `System.system_user` so it is more readable
[1]: c8661674d4
In order for this to work, the Backuper stores a couple of site settings
in the new `backup_metadata` table, because the old setting values might
not be available anymore on restore.
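A minimal sketch of what such a table could look like (the schema here is an assumption, not the actual migration):
```
class CreateBackupMetadata < ActiveRecord::Migration[5.2]
  def change
    create_table :backup_metadata do |t|
      t.string :name, null: false
      t.string :value
    end
  end
end
```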
* Support private uploads in S3
* Use localStore for local avatars
* Add job to update private upload ACL on S3
* Test multisite paths
* Update ACL for private uploads in the migrate_to_s3 task
This reduces the chance of errors where consumers of strings mutate their
inputs, and reduces the app's memory usage.
The test suite passes now, but there may be some stuff left, so we will run
a few sites on a branch prior to merging.
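Assuming this refers to enabling frozen string literals, a minimal illustration of both effects:
```
# frozen_string_literal: true

# With the magic comment above, all string literals in this file are
# frozen, so accidental mutation raises instead of corrupting shared state:
greeting = "hello"
greeting << " world" # raises FrozenError (RuntimeError on older Rubies)
```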
tar exits with status 1 when uploads are modified or deleted by a Sidekiq job, so we need to treat it like status 0.
According to the documentation, it should be safe to ignore status 1 ("Some files differ"):
> If tar was given `--create', `--append' or `--update' option, this exit code means that some files were changed while being archived and so the resulting archive does not contain the exact copy of the file set.
Status 2 ("Fatal error") still results in an exception.
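A hypothetical sketch of that handling (the method name and tar arguments are illustrative):
```
def add_directory_to_archive(tar_filename, directory)
  system("tar", "--append", "--file", tar_filename, directory)
  exit_status = $?.exitstatus
  # 0 = success, 1 = "some files differ" (safe to ignore), >= 2 = fatal
  raise "tar failed with exit status #{exit_status}" if exit_status > 1
end
```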
* The dashboard doesn't time out anymore when Amazon S3 is used for backups
* Storage stats are now a proper report with the same caching rules
* Changing `backup_location` or `s3_backup_bucket`, or creating and deleting backups, removes the report from the cache
* It shows the number of backups and the backup location
* It shows the used space for the correct backup location instead of always showing the used space on local storage
* It shows the date of the last backup as a relative date
* Logs exceptions during the cleanup phase, but doesn't stop executing subsequent cleanup tasks.
* Notifies the user at the end of the cleanup phase, so that the log contains possible errors during that phase.
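The pattern, roughly (the task names are illustrative):
```
def clean_up
  %i[clean_up_tmp_files unpause_sidekiq disable_readonly_mode].each do |task|
    begin
      send(task)
    rescue => ex
      log "Failed during #{task}", ex # log and continue with the next task
    end
  end

  notify_user # runs even if individual cleanup tasks failed
end
```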
Introduce new patterns for direct SQL that are safe and fast.
MiniSql is not prone to the memory bloat that can happen with direct PG usage.
It also has an extremely fast materializer and a very convenient API:
- DB.exec(sql, *params) => runs SQL, returns the row count
- DB.query(sql, *params) => runs SQL, returns usable objects (not hashes)
- DB.query_hash(sql, *params) => runs SQL, returns an array of hashes
- DB.query_single(sql, *params) => runs SQL, returns a flat one-dimensional array
- DB.build(sql) => returns a SQL builder
See more at: https://github.com/discourse/mini_sql
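Example usage (table and column names are illustrative):
```
DB.exec("UPDATE posts SET score = 0 WHERE score IS NULL") # => row count

ids = DB.query_single("SELECT id FROM posts WHERE topic_id = ?", 42)

DB.query("SELECT id, cooked FROM posts LIMIT ?", 10).each do |row|
  puts "#{row.id}: #{row.cooked}" # materialized objects, not hashes
end
```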
* `rescue nil` is a really bad pattern to use in our code base.
We should rescue the errors that we expect the code to throw,
not rescue everything because we're unsure of what errors the
code would throw. This reduces the amount of pain we face
when debugging why something isn't working as expected. I've
been bitten countless times by errors being swallowed this
way during debugging sessions. (See the illustration after
this list.)
* Since we can no longer restore into a different schema,
we will move tables in the public schema into the backup schema
first, before restoring the dump file into the public schema
(see the sketch after this list). The downside to this approach
is that we increase the downtime experienced during the restore
process: downtime would equal the duration of restoring the
dump file.
* In `pg_dump` 10.3+ and 9.5.12+,
it does a `SELECT pg_catalog.set_config('search_path', '', false)`
which changes the state of the current connection. This is known
to be problematic with PgBouncer, which reuses connections. As such,
we'll always try to connect directly to PG during
the backup/restore process.
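To illustrate the `rescue nil` point above:
```
# Bad: swallows every StandardError, including ones we never
# anticipated (e.g. a NoMethodError from a simple typo)
size = File.size(path) rescue nil

# Better: rescue only the error we actually expect
size =
  begin
    File.size(path)
  rescue Errno::ENOENT
    nil
  end
```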
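And a hypothetical sketch of the table move described above (the schema name and queries are assumptions, not the actual implementation):
```
BACKUP_SCHEMA = "backup"

DB.exec("CREATE SCHEMA IF NOT EXISTS #{BACKUP_SCHEMA}")

table_names = DB.query_single(
  "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"
)

table_names.each do |table|
  DB.exec("DROP TABLE IF EXISTS #{BACKUP_SCHEMA}.#{table} CASCADE")
  DB.exec("ALTER TABLE public.#{table} SET SCHEMA #{BACKUP_SCHEMA}")
end
```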