1. bbcode hashes don't always have exactly 8 characters.
2. colors aren't always hex values; they can also be color names ("red", "blue", etc.).
3. The closing tag of smileys doesn't always include a `:` character (the start of the regex was already correct for this particular issue)
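For illustration, the relaxed patterns look something like this (a hypothetical Ruby sketch; the actual regexes in the parser may differ):

```ruby
# hypothetical example only: the hash part accepts any length ([a-z0-9]+
# instead of [a-z0-9]{8}) and the color accepts a name or a hex value
bbcode = "[color=red:1ab2cd]hello[/color:1ab2cd]"
puts bbcode.gsub(/\[color=(#[0-9a-f]{3,6}|[a-z]+):[a-z0-9]+\](.*?)\[\/color:[a-z0-9]+\]/i, '\2')
# => "hello"
```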
* `File.exists?` is deprecated and removed in Ruby 3.2 in favor of `File.exist?`
* `Dir.exists?` is deprecated and removed in Ruby 3.2 in favor of `Dir.exist?`
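For reference, the fix is a one-character rename (the paths below are just examples):

```ruby
File.exist?("config/discourse.conf") # instead of File.exists?
Dir.exist?("tmp/backups")            # instead of Dir.exists?
```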
The Discourse base image already contains a Postgres installation, so pulling a separate Postgres image is a little wasteful. Using the copy of Postgres in the Discourse image saves about 20 seconds on every GitHub Actions run.
This commit sets up Postgres with a few performance-improving flags, which we were already using for the `rake docker:test` task (used on our internal CI system).
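For context, CI-oriented Postgres tuning usually trades durability for speed with flags along these lines (illustrative only; the exact flags may differ):

```
pg_ctl start -o "-c fsync=off -c synchronous_commit=off -c full_page_writes=off"
```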
Some tables in the database have constraints on columns with dates. Because of these constraints, the script for moving timestamps can fail from time to time. This PR makes the script work with such tables.
In general, in PostgreSQL it is not always possible to defer constraint checks to the transaction commit (primary keys and unique constraints can be deferred, but they must be declared as DEFERRABLE to make that possible; indexes created with CREATE UNIQUE INDEX can't be deferred at all).
Since we can't defer constraint checks, I've made the script work using a little hack. For example, if we need to move all timestamps forward by one day, the script moves them by 1000 years and one day, and then moves them back by 1000 years. The script uses this hack only for columns that have unique constraints.
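A minimal sketch of the two-step move, assuming the `pg` gem and placeholder table/column names:

```ruby
require "pg"

conn = PG.connect(dbname: "discourse") # placeholder connection

# Step 1: jump far beyond any existing value, so a unique constraint on the
# date column can't collide with a row that hasn't been moved yet.
conn.exec("UPDATE posts SET created_at = created_at + INTERVAL '1000 years 1 day'")

# Step 2: come back by exactly 1000 years, leaving a net move of one day.
conn.exec("UPDATE posts SET created_at = created_at - INTERVAL '1000 years'")
```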
This will fix the try-reset build that failed today. This will probably happen again with other tables that have constraints on date columns, so I'm going to modify the script to make it work without ignoring such tables. After that, the only table we'll need to ignore will be the 2FA table.
Until I've fixed that, don't hesitate to tag me if the try-reset build fails again.
Without checking that `t.table_schema = '#{@schema}'`, the SELECT with a JOIN in the script was returning every column twice whenever a 'backup' schema exists with exactly the same tables as the 'public' schema.
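Roughly, the query has this shape (a sketch only; the real query in the script may differ). Without the last predicate, a table that also exists in the 'backup' schema matches the join twice:

```ruby
@schema = "public"

sql = <<~SQL
  SELECT c.table_name, c.column_name
  FROM information_schema.columns c
  JOIN information_schema.tables t ON t.table_name = c.table_name
  WHERE c.table_schema = '#{@schema}'
    AND t.table_schema = '#{@schema}'
SQL
```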
We're going to use this script for updating timestamps on Try, but it can be used with a local database during development as well.
Usage:

```
ruby db_timestamp_updater.rb yesterday <date>   move all timestamps so that <date> becomes yesterday
ruby db_timestamp_updater.rb 100                move all timestamps forward by 100 days
ruby db_timestamp_updater.rb -100               move all timestamps backward by 100 days
```
The script moves all timestamps in the database by the same number of days, forward or backward. There is no need to change the script if we add a new column in the future.
A simpler solution would have been to move timestamps in just a few tables (topics, posts, and so on). I didn't want to go that way because it could generate additional work in the future. For example, if we add a new column with a timestamp that users can see, we'd need to add that column to the script. Or, if we move a post's timestamp to the future but forget to move the timestamp of a topic timer or user action, it can cause weird bugs.
Post-deploy migrations exist to allow for seamless Discourse upgrades. By design, they cause migrations to run out of numerical order. This has the potential to cause some unexpected edge cases. To reduce the likelihood of these edge cases, we will promote historical post_deploy migrations to regular migrations after a full Discourse stable release cycle.
This script is intended to be run at least once during every Discourse release cycle.
This means that truly seamless upgrades will not be possible between non-consecutive Discourse versions. (Upgrades will still work, but may cause some server errors for users during the upgrade.)
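A minimal sketch of what the promotion could look like, assuming Discourse's `db/post_migrate` and `db/migrate` layout and a hypothetical cutoff version:

```ruby
require "fileutils"

CUTOFF_VERSION = 20200101000000 # hypothetical: first version of the previous stable release

Dir.glob("db/post_migrate/*.rb").each do |path|
  version = File.basename(path).to_i # migration filenames start with a numeric timestamp
  FileUtils.mv(path, path.sub("post_migrate", "migrate")) if version < CUTOFF_VERSION
end
```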
Setting a random value in the interval from one week ago to now works better because it spreads digest scheduling over a week; digests are sent one week from the date of the last digest.
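In plain Ruby, the idea is just this (the variable name is a placeholder):

```ruby
# pick a uniformly random moment within the last week
last_digest_sent_at = Time.now - rand(0..7 * 24 * 60 * 60)
```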
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change:
- comments
- test descriptions
- other low risk areas
This is a pretty straightforward bulk importer, just tailored to the vBulletin 5 database structure.
Also made a few minor improvements to the base importer -- they should be self-explanatory in the code.
Wrote up a new script to import from Higher Logic. Nothing too crazy going on here. Two major things about this script:
1. It requires you to convert a Microsoft SQL file to a format MySQL can read.
2. Higher Logic stores posts (at least in the case of the import I ran) with the email thread shown in the post body. The script does its best to truncate this out, but the logic may need to be improved on future imports. For the import I ran, it worked just fine as is. 🤷‍♂️
Made some improvements to the Vanilla MySQL script -- mainly because not all SQL imports require use of the VanillaBodyParser. Still left it as an option to turn on and use if so desired. Also added subcategory support, importing of likes, and solved status.
Includes:
* DEV: Remove external plugin linting (that's covered by CI in their repositories)
* DEV: Move lint stages to a separate workflow (partial de-`if`-ication of workflows)
* DEV: Run CI on `main` branch too
* DEV: Update postgres to 13
* DEV: Update redis to 6.x
Other changes:
* DEV: Remove matrix.os
* DEV: Remove env.BUILD_TYPE
* DEV: Remove env.TARGET
* DEV: Rename `build_types` config option to `build_type`
* DEV: Lowercase `target` and `build_type` names
* DEV: Rename `ci` to `tests`
* DEV: Rename `lint` to `linting`
* DEV: Lower the wizard qunit timeout (30 min -> 10)
* DEV: Ruby version is no longer configurable
* DEV: Run plugin tests only in the `plugins` target
* DEV: Use binstubs where applicable
* DEV: We don't open PRs to `tests-passed`
This is an importer I wrote to restore some users who were accidentally deleted by the purging of old staged users and old unactivated users.
It reads from CSV files exported from a Discourse SQL backup.
When running an import script, many site settings are changed, but we reset them back to their original values after the import. However, there are two settings that we don't roll back:
```
purge_unactivated_users_grace_period_days
purge_deleted_uploads_grace_period_days
```
which could have some unintended consequences.
My first question is: do we *really* have to change these settings? I'm not a huge fan of changing someone's settings without them really knowing they were changed.
If we really do have to change these settings, here is my proposed PR, where we don't alter `purge_unactivated_users_grace_period_days` if it has been disabled.
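A sketch of the proposed guard (the value the import script assigns is a placeholder, and 0 is assumed to mean "disabled"):

```ruby
# leave the setting alone if the site has disabled purging entirely
if SiteSetting.purge_unactivated_users_grace_period_days > 0
  SiteSetting.purge_unactivated_users_grace_period_days = 60 # placeholder value
end
```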
As I'm writing this, another change we could make is to not change either of these site settings if we detect that they aren't set to their default values.
The motivation behind this PR is that there is a Discourse instance which relies on staged users as part of its workflow, and this setting was changed by accident via the import script, causing users to be deleted that shouldn't have been.
You can use `discourse restore --location=local FILENAME` if you want to restore a backup that is stored locally even though the `backup_location` has the value `s3`.