This makes more sense than having the guardian take an accessor.
The logic belongs in the Serializer, where the JSON is calculated.
Also removed some of the DRYness in the spec. It's fewer lines
and makes it easier to test the option on the serializer.
Previously, we would initialize an ImageOptim object each time we resize.
This object init is mega expensive (170ms on a VERY fast machine):
```
[1] pry(main)> Benchmark.measure { FileHelper.image_optim }
=> #<Benchmark::Tms:0x00007f55440c1de0
@cstime=0.055742,
@cutime=0.141031,
@label="",
@real=0.17165619300794788,
@stime=0.0002750000000000252,
@total=0.19890400000000008,
@utime=0.0018560000000000798>
```
This happens because during init it hunts for all the right binaries and sets
up internals.
We now memoize this object to avoid a huge amount of pointless work.
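Roughly, the memoization looks like this (a minimal sketch; any options passed
to ImageOptim.new are omitted):
```
require "image_optim"

module FileHelper
  # Memoize the ImageOptim instance so the binary hunt and internal setup
  # happen once per process instead of on every resize.
  def self.image_optim
    @image_optim ||= ImageOptim.new
  end
end
```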
This ensures that sidekiq processes forked from the unicorn master run with a
lower priority than the web workers. It means that a busy sidekiq is less
likely to impact web performance.
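A hedged sketch of the idea using Ruby's Process.setpriority (where exactly
this hooks into the unicorn config is an assumption):
```
# Fork sidekiq with a lower scheduling priority than the web workers;
# a higher nice value means lower priority.
pid = fork do
  Process.setpriority(Process::PRIO_PROCESS, 0, 5) # renice self to +5
  exec "bundle", "exec", "sidekiq"
end
Process.detach(pid)
```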
Before this patch, a high trust level user could flag something and have an
action taken, with the flag skipping the queue entirely.
Now, if a TL3/TL4 user's flag causes an action, the flag will skip the minimum
visibility check and allow staff to review it.
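As a hypothetical sketch (names like `triggers_action?` and `ReviewQueue` are
illustrative, not actual Discourse internals):
```
def handle_flag(flag)
  # TL3/TL4 flags that trigger an action skip only the minimum visibility
  # check; the flag itself still reaches staff for review.
  if flag.user.trust_level >= 3 && flag.triggers_action?
    flag.skip_minimum_visibility_check = true
  end
  ReviewQueue.add(flag)
end
```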
FIX: buildTranslationTree was erroring when translations overlapped (e.g. ":-)" and ":-))")
FIX: emoji translations weren't working properly when translations overlapped
We need to handle arbitrary exceptions in this task, especially since the
task is not easily resumable.
Simply output problem uploads as you hit them for now.
Some previous migrations to S3 may have bad ACLs set on objects. This
introduces a new rake task (`rake s3:correct_acl`) that will reset ACL on
every S3 object.
The vast majority of users will never have to run it, but if you have ACL
issues this is the atomic solution.
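A hedged sketch of what the task might do with the aws-sdk-s3 gem (the
bucket/region sourcing and target ACL here are assumptions):
```
require "aws-sdk-s3"

client = Aws::S3::Client.new(region: ENV["S3_REGION"])
bucket = ENV["S3_BUCKET"]

# Page through every object and reset its ACL, printing problem uploads
# as we hit them instead of aborting the (non-resumable) task.
client.list_objects_v2(bucket: bucket).each do |page|
  page.contents.each do |object|
    begin
      client.put_object_acl(bucket: bucket, key: object.key, acl: "public-read")
    rescue Aws::S3::Errors::ServiceError => e
      puts "Failed to set ACL on #{object.key}: #{e.message}"
    end
  end
end
```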
This feature ensures optimized images are run through pngquant, which results in significant savings for resized images. Effectively the only impact is that the color palette on small resized images is reduced to 256 colors.
To ensure safety we only apply this optimisation to images smaller than 500k.
This commit also makes a bunch of image specs less fragile.
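A minimal sketch of the size guard, reusing the memoized FileHelper.image_optim
from above (the method name here is illustrative):
```
def optimize_resized_image!(path)
  # Only run the lossy pngquant pass on small images, to be safe.
  return if File.size(path) > 500_000
  FileHelper.image_optim.optimize_image!(path)
end
```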
This avoids a require_dependency on method_profiler and the anon cache.
It means that if there is any change to these files, the reloader will not pick it up.
Previously the reloader was picking up the anon cache twice, causing it to double load on boot.
This caused warnings.
Long term my plan is to give up on require dependency and instead use:
https://github.com/Shopify/autoload_reloader
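For illustration, the difference between the two (file path hypothetical):
```
# require_dependency registers the file with the Rails reloader, which can
# pick it up twice and double load it on boot.
require_dependency "method_profiler"

# A plain require loads the file exactly once; the reloader ignores it, so
# changes need a full restart to take effect.
require "method_profiler"
```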
As per the documentation for KEYS:
```
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout.
```
Instead, use SCAN:
```
Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like KEYS or SMEMBERS that may block the server for a long time (even several seconds) when called against big collections of keys or elements.
```
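A hedged sketch with the redis-rb gem, whose scan_each iterates via SCAN under
the hood (the key pattern is illustrative):
```
require "redis"

redis = Redis.new

# Incrementally walk matching keys instead of blocking the server with
# a single KEYS call.
redis.scan_each(match: "cache:*") do |key|
  redis.del(key)
end
```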
This generates a 10x10 PNG thumbnail for each lightboxed image.
If Image Lazy Loading is enabled (IntersectionObserver API) then
we'll load the low res version when offscreen. As the image scrolls
in we'll swap it for the high res version.
We use a WeakMap to track the old image attributes. It uses much less
memory than storing them as `data-*` attributes and swapping them
back and forth all the time.
This validation makes sure that the s3_upload_bucket and the
s3_backup_bucket have different values. The backup bucket is
allowed to be a subfolder of the upload bucket. The other way
around is forbidden because the backup system searches by
prefix and would return all files stored within the backup
bucket and its subfolders.
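A minimal sketch of the rule (method name illustrative):
```
# Valid:   upload = "bucket",         backup = "bucket/backups"
# Invalid: upload = "bucket/uploads", backup = "bucket"
def backup_bucket_valid?(upload_bucket, backup_bucket)
  return false if upload_bucket == backup_bucket
  !upload_bucket.start_with?("#{backup_bucket}/")
end
```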
* Dashboard doesn't time out anymore when Amazon S3 is used for backups
* Storage stats are now a proper report with the same caching rules
* Changing the backup_location or s3_backup_bucket, as well as creating or deleting backups, removes the report from the cache
* It shows the number of backups and the backup location
* It shows the used space for the correct backup location instead of always showing used space on local storage
* It shows the date of the last backup as relative date
`SiteSerializer#is_readonly` is cached for an anonymous user so we have
to clear the cache when disabling readonly mode. Otherwise, the site may
appear to be in readonly mode for an extended period of time.
A per-process cache is hard to reason about. During PostgreSQL
failovers, the site may bounce in and out of readonly mode depending on
which server and process a request hits.
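A hedged sketch of the fix (the cache key and helpers are assumptions, not the
actual Discourse code):
```
def disable_readonly_mode
  Discourse.redis.del("readonly_mode")
  # Bust the cached anonymous SiteSerializer JSON so anonymous visitors
  # don't keep seeing a stale readonly flag.
  Discourse.cache.delete("site_json")
end
```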
Historically, due to https://meta.discourse.org/t/why-is-discourse-so-slow-on-android/8823,
we halved the page sizes of both the home page and topic pages on Android.
This was done on the server side and, as a side effect, caused page sizes
to differ between Android and non-Android clients.
Unfortunately, about a year ago googlebot started pretending it is Android,
which caused Google to index pages as Android would see them. So it saw
double the number of pages in the index compared to what exists on desktop.
This in turn caused double the amount of indexing work and a large number
of broken links on long topics.
This fix removes all the special behavior, which is no longer needed due to
other performance work in Discourse, including raw handlebars on the home
page and virtual DOM on topic pages.
I tested that we do not need this on a Blu Advance 5.0, which has a 1.3 GHz
MediaTek MT6580 and retails for around $50 USD.
If we decide long term that we want any hacks like this we will shift them
to the client side. It can just hold data in memory without rendering.
This feature is used for deferred loading of images and, in future, for post cloaking
This gives us a polyfill so we can safely use the feature in problem browsers
The polyfill supports "polling" but it does not appear we need it yet.
If we discover anything odd here, consider setting poll interval per:
https://github.com/w3c/IntersectionObserver/tree/master/polyfill
```
var io = new IntersectionObserver(callback);
io.POLL_INTERVAL = 100; // Time in milliseconds.
```
We are keeping the mutation observer because we often mutate the DOM
Previously the 'reconnect' process was a bit magical: if you were already logged into Discourse and followed the auth flow, your account would be reconnected and you would be 'logged in again'.
Now, we explicitly check for a reconnect=true parameter when the flow is started, store it in the session, and then only follow the reconnect logic if that variable is present. Setting this parameter also skips the 'logged in again' step, which means reconnect now works with 2FA enabled.
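A hypothetical sketch of the flow (method and session key names are illustrative):
```
# Record the explicit intent when the flow starts...
def auth_start
  session[:auth_reconnect] = params[:reconnect] == "true"
  redirect_to provider_authorize_url
end

# ...and only follow the reconnect logic if that intent was recorded.
def auth_callback
  if session.delete(:auth_reconnect) && current_user
    reconnect_account(current_user) # also skips the "logged in again" step
  else
    complete_normal_login
  end
end
```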
Some URLs in browsers are non-compliant and contain two `#` characters. This commit adds
special handling for this edge case by auto-encoding any fragments containing `#`.
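A minimal sketch of the idea: split on the first `#` and percent-encode any
further `#` characters inside the fragment (method name illustrative):
```
def encode_extra_fragment_hashes(url)
  prefix, fragment = url.split("#", 2)
  return url if fragment.nil?
  "#{prefix}##{fragment.gsub("#", "%23")}"
end

encode_extra_fragment_hashes("https://example.com/page#sec#sub")
# => "https://example.com/page#sec%23sub"
```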
The wizard searches for:
* a topic with the "is_welcome_topic" custom field
* a topic with the correct slug for the current default locale
* a topic with the correct slug for the English locale
* the oldest globally pinned topic
It gives up if none of the above are found.
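A hypothetical sketch of that lookup order (the helper names are illustrative;
pinned_globally is a real Topic column):
```
def find_welcome_topic
  topic_with_custom_field("is_welcome_topic") ||
    topic_with_slug(welcome_slug(SiteSetting.default_locale)) ||
    topic_with_slug(welcome_slug(:en)) ||
    Topic.where(pinned_globally: true).order(:created_at).first
end
```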