Discourse.cache is a more consistent method to use and offers a clean fallback
if you are skipping Redis.
This is part of a larger change that both optimizes Discourse.cache and removes
the use of setex on $redis in favor of consistently using Discourse.cache.
Benchmarks reveal that Rails.cache and Discourse.cache are 1.25x slower
than redis.setex / get, so a re-implementation will follow prior to porting.
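As a rough illustration of the pattern this change standardizes on (the key name and TTL here are hypothetical):

```
# Before: talking to redis directly, which has no fallback when
# redis is skipped
$redis.setex("user_count", 60, User.count)
count = $redis.get("user_count")

# After: Discourse.cache computes, stores, and expires the value
count = Discourse.cache.fetch("user_count", expires_in: 1.minute) do
  User.count
end
```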
We were having issues in development mode where the JS code had errors due to a bad cache. When starting a server in development mode via bin/unicorn, we now compute the git SHA of the Discourse HEAD and of every plugin, and store them in a file. If any SHA has changed, we delete tmp/cache to refresh the assets cache.
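A minimal sketch of that invalidation logic, assuming a hypothetical marker file path:

```
require "fileutils"

# Collect the HEAD SHA of core plus every plugin checkout
shas = [`git rev-parse HEAD`.strip]
Dir.glob("plugins/*/.git") do |git_dir|
  shas << `git --git-dir #{git_dir} rev-parse HEAD`.strip
end
current = shas.join(",")

marker = "tmp/plugin-hash" # hypothetical path
previous = File.exist?(marker) ? File.read(marker) : nil

if previous != current
  FileUtils.rm_rf("tmp/cache") # stale assets; force a rebuild
  File.write(marker, current)
end
```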
This makes it easy to run multiple commands with the same keyword arguments. The main use is for using `chdir` across multiple commands. The `Dir.chdir` method is not concurrency safe because it switches the working directory of the entire process.
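A sketch of the idea under those constraints (the `CommandRunner` class is hypothetical; Ruby's spawn-based APIs genuinely accept a `chdir:` option):

```
require "open3"

# Dir.chdir changes the working directory of the whole process and
# races with other threads; passing chdir: to the spawn call scopes
# the directory to that single child process instead.
class CommandRunner
  def initialize(**defaults)
    @defaults = defaults # e.g. { chdir: "/var/www/discourse" }
  end

  def run(*command)
    stdout, status = Open3.capture2(*command, **@defaults)
    raise "#{command.join(" ")} failed" unless status.success?
    stdout
  end
end

runner = CommandRunner.new(chdir: "/tmp")
runner.run("pwd") # => "/tmp\n", without touching Dir.pwd
```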
This was not causing any known issue, because the system user ID is always the same across all sites. However, we should cache this on a per-site basis to be safe.
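A hedged sketch of what per-site caching can look like in a multisite app (the lookup itself is illustrative):

```
# Keying the memoized value by the current database means one
# site's cached ID can never leak into another site's requests.
def system_user_id
  @system_user_ids ||= {}
  @system_user_ids[RailsMultisite::ConnectionManagement.current_db] ||=
    User.find_by(username: "system")&.id
end
```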
If the setting is turned on, the user will receive information
about the subject: whether it was deleted or requires membership in
a group (revealed only if the group is public). Otherwise, the user
will receive a generic 404 error message. For now, this change
affects only the topics and categories controllers.
This commit also tries to refactor some of the code related to error
handling. To make error pages more consistent (design-wise), the actual
error page will be rendered server-side.
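A hedged sketch of how the controller side might branch on the setting (the `detailed_404` setting and template names are assumptions):

```
def rescue_discourse_actions(type, status_code)
  if status_code == 404 && SiteSetting.detailed_404
    # Explain why: the topic was deleted, or access requires
    # membership in a (public) group
    render "exceptions/not_found_detailed", status: 404, layout: "no_ember"
  else
    # Reveal nothing beyond a generic, server-rendered 404 page
    render "exceptions/not_found", status: 404, layout: "no_ember"
  end
end
```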
Using popups is becoming increasingly rare. Full page redirects are already used on mobile, and for some providers. This commit removes all logic related to popup authentication, leaving only the full page redirect method.
For more info, see https://meta.discourse.org/t/do-we-need-popups-for-login/127988
We preload to ensure as much memory as possible is shared from the unicorn
master to the various forked workers (sidekiq, unicorn) via copy-on-write.
This migrates the preloading code into the Discourse module for easier
reuse and adds 3 notable preloading changes (see the sketch after this list):
1. We attempt to localize a string on each site, ensuring we warm up
i18n
2. We preload all our templates (compiling .erb to classes)
3. We warm up our search tokenizer, which uses cppjieba, a large
memory consumer; this will only trigger on CJK sites or sites with
the special site setting enabled.
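A hedged sketch of those warmups (method names are approximate, not the exact implementation):

```
def self.preload_rails!
  # 1. Localize a string per site so i18n data loads pre-fork and is
  # shared copy-on-write with unicorn and sidekiq workers
  RailsMultisite::ConnectionManagement.each_connection do
    I18n.t(:posts)
  end

  # 2. Compiling templates (.erb -> classes) would also happen here

  # 3. Tokenize a CJK string so cppjieba loads its large dictionary
  # once in the master rather than in every forked worker
  if SiteSetting.search_tokenize_chinese_japanese_korean
    Search.prepare_data("warm up 中文")
  end
end
```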
* DEV: group_list site settings should store IDs instead of group names
* Ship site setting to know when we should migrate group_list settings
* Migrate existing group_list site settings (a sketch follows below)
* Bump migration timestamp and don't set null when migrating is not possible.
And don't load JavaScript assets if the plugin is disabled.
* precompile auto generated plugin js assets
* SPEC: remove spec test functions
* remove plugin js from test_helper
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* DEV: using equality is slightly easier to read than inequality
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* DEV: use `select` method instead of `find_all` for readability
Co-Authored-By: Régis Hanol <regis@hanol.fr>
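As referenced in the list above, a hedged sketch of migrating name-based group_list values to IDs (the data_type value and guards are assumptions):

```
class MigrateGroupListSiteSettings < ActiveRecord::Migration[5.2]
  def up
    # Convert stored names ("staff|moderators") to IDs ("3|2");
    # rows with no matching groups are skipped rather than set to NULL
    execute <<~SQL
      UPDATE site_settings s
      SET value = m.ids
      FROM (
        SELECT s2.id, string_agg(g.id::text, '|') AS ids
        FROM site_settings s2
        JOIN groups g ON g.name = ANY(string_to_array(s2.value, '|'))
        WHERE s2.data_type = 20 -- hypothetical group_list type
        GROUP BY s2.id
      ) m
      WHERE s.id = m.id
    SQL
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```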
This can cause unbound CPU usage in some cases, and excessive logging in other cases. This commit moves redis readonly information into the local process, but maintains the DistributedCache for postgres readonly state.
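A hedged sketch of keeping the Redis readonly flag process-local (names approximate):

```
# Held in the local process only, so flapping Redis state can't
# trigger cross-process invalidation storms or log floods
def self.received_redis_readonly!
  @redis_last_read_only = Time.zone.now
end

def self.recently_redis_readonly?
  !!@redis_last_read_only && @redis_last_read_only > 15.seconds.ago
end

def self.clear_redis_readonly!
  @redis_last_read_only = nil
end
```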
In sidekiq, jobs are run in multiple threads within the same process. `cd` affects the entire process, so it can cause unexpected issues in other running jobs.
Sometimes we would like to create a base image without any DB access. This
assists in creating custom base images, with custom plugins, that already
include `public/assets`.
Following this change set you can run:
```
SPROCKETS_CONCURRENT=1 DONT_PRECOMPILE_CSS=1 SKIP_DB_AND_REDIS=1 RAILS_ENV=production bin/rake assets:precompile
```
Then it is straight forward to create a base image without needing a DB or
Redis.
This reduces the chance of errors where consumers of strings mutate inputs,
and reduces the memory usage of the app.
Test suite passes now, but there may be some stuff left, so we will run
a few sites on a branch prior to merging
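This reads like a frozen string literal sweep; a minimal illustration of the magic comment's effect:

```
# frozen_string_literal: true

GREETING = "hello"
GREETING << " world" # raises FrozenError: can't modify frozen String

# Identical frozen literals are also deduplicated, so a literal in a
# hot path stops allocating a fresh String on every call
def tag
  "discourse" # the same object every time this method runs
end
```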
If you turn it on now, all existing users default to approved, since they
effectively were before. Also support approving a user that doesn't have a
reviewable record (it will be created first).
This also includes a refactor that moves class method calls to
`DiscourseEvent` into an initializer. Otherwise, the load order of
classes makes a difference in the test environment: some settings
might be triggered and others not, randomly.
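A hedged sketch of the initializer pattern (the path and handler body are illustrative):

```
# config/initializers/012_discourse_event.rb (hypothetical path)
# Registering listeners at boot, after every class has loaded, means
# test-environment load order can no longer skip registrations.
DiscourseEvent.on(:site_setting_changed) do |name, old_value, new_value|
  Rails.logger.info("site setting #{name}: #{old_value.inspect} -> #{new_value.inspect}")
end
```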
Previously jobs would fail silently in test mode. Now they will raise the exception and cause the relevant test to fail. This identified a few broken tests, which I will fix in a follow-up commit.
- Plugin developers using OpenID2.0 should migrate to OAuth2 or OIDC. OpenID2.0 APIs will be removed in v2.4.0
- For sites requiring Yahoo login, it can be implemented using the OpenID Connect plugin: https://meta.discourse.org/t/103632
For more information, see https://meta.discourse.org/t/113249
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
Previously we relied on the provider name matching the name of the icon. Now icon names are explicitly set. Plugin providers which do not define an icon will get the default "sign-in-alt" icon
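A hedged sketch of a plugin declaring its icon explicitly (the authenticator class is hypothetical):

```
# plugin.rb
auth_provider authenticator: Auth::MyAuthenticator.new,
              icon: "fab-github" # omit this and "sign-in-alt" is used
```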
tar exits with status 1 when uploads are modified or deleted by a sidekiq job, so we need to treat it like status 0.
According to the documentation it should be safe to ignore status 1 ("Some files differ"):
> If tar was given `--create', `--append' or `--update' option, this exit code means that some files were changed while being archived and so the resulting archive does not contain the exact copy of the file set.
Status 2 ("Fatal error") still results in an exception.
`SiteSerializer#is_readonly` is cached for an anonymous user so we have
to clear the cache when disabling readonly mode. Otherwise, the site may
appear to be in readonly mode for an extended period of time.
A per-process cache is hard to reason about. During PostgreSQL
failovers, the site may bounce in and out of readonly mode depending
on which server and process a request hits.
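A hedged sketch of the fix when readonly mode is disabled (method names may differ from the real ones):

```
def self.disable_readonly_mode(key = READONLY_MODE_KEY)
  $redis.del(key)
  MessageBus.publish(readonly_channel, false)
  # The anonymous SiteSerializer payload embeds is_readonly, so it
  # must be invalidated or anon users keep seeing a readonly site
  Site.clear_anon_cache!
  true
end
```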
This moves us away from the delayed-drop pattern, which
was problematic on two counts. First, it uses a hardcoded "delay for"
duration, which may be too short for certain deployment strategies.
Second, delayed drop doesn't ensure that it only runs after
the latest application code has been deployed: if the application
code fails to deploy, running the migration once "delay for" has
elapsed will cause the application to blow up.
The new strategy allows post deployment migrations to be skipped if the
env `SKIP_POST_DEPLOYMENT_MIGRATIONS` is provided.
```
SKIP_POST_DEPLOYMENT_MIGRATIONS=1 rake db:migrate
-> deploy app servers
SKIP_POST_DEPLOYMENT_MIGRATIONS=0 rake db:migrate
```
To aid with the generation of a post deployment migration, a generator
has been added. Simply run `rails generate post_migration`.
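A hedged sketch of the file such a generator might emit (timestamp, class, and column are illustrative):

```
# db/post_migrate/20190102030405_drop_old_column.rb (hypothetical)
# Living under post_migrate means it is skipped while
# SKIP_POST_DEPLOYMENT_MIGRATIONS=1 and runs in the second pass,
# after the new application code is fully deployed.
class DropOldColumn < ActiveRecord::Migration[5.2]
  def up
    remove_column :topics, :legacy_flag
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```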