Previously, workbox JS was vendored into our git repository and loaded from the `public/javascripts` directory with a 1-day cache lifetime. The main aim of this commit is to add a 'cachebuster' to the workbox URL so that the cache lifetime can be increased.
- Remove vendored copies of workbox.
- Use ember-cli/broccoli to collect workbox files from node_modules into assets/workbox-{digest}
- Add assets to sprockets manifest so that they're collected from the ember-cli output directory (and uploaded to s3 when configured)
Some of the sprockets-related changes in this commit are not ideal, but we hope to remove sprockets in the not-too-distant future.
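A minimal sketch of the collection step, assuming broccoli-funnel and a precomputed `digest` value (package name and directory layout here are illustrative, not the exact Discourse build code):

```js
const Funnel = require("broccoli-funnel");

// Copy a workbox package out of node_modules into a digest-stamped
// assets directory, so the URL changes whenever the digest does.
function workboxTree(digest) {
  return new Funnel("node_modules/workbox-sw", {
    include: ["build/**/*"],
    destDir: `assets/workbox-${digest}`,
  });
}
```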
This commit implements many changes to topic and comment embedding. It
deprecates the class_name field of EmbeddableHost in favour of the
className parameter. The discourse_username parameter has also been
deprecated; the username is now fetched from the embedded site's author
or discourse-username meta tag.
See the updated code sample on the Admin > Customize > Embedding page;
a sketch follows the list below.
* FEATURE: Add className parameter for Discourse embed
* DEV: Hide class_name from EmbeddableHost
* DEV: Deprecate class_name field of EmbeddableHost
* FEATURE: Use either author or discourse-username meta tag
* DEV: Deprecate discourse_username parameter
* DEV: Improve embed code sample
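A rough sketch of the updated sample, with placeholder values (the canonical snippet lives on the Admin > Customize > Embedding page):

```js
// Hypothetical embed configuration illustrating the new parameter.
window.DiscourseEmbed = {
  discourseUrl: "https://discourse.example.com/",
  topicId: 12345,
  className: "custom-comments", // replaces the deprecated class_name
  // discourse_username is deprecated: the author is now read from the
  // embedded page's `author` or `discourse-username` meta tag, e.g.
  // <meta name="discourse-username" content="some_user">
};
```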
Also, the change in insert-hyperlink (from `this.linkUrl.indexOf("http") === -1` to `!this.linkUrl.startsWith("http")`) was an intentional fix: we don't want to prevent users from looking up topics with "http" in their titles.
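For illustration, with a hypothetical topic title that merely contains "http":

```js
const input = "Why http links should redirect";
input.indexOf("http") === -1; // false -> old code assumed a URL, so
                              // topic lookup never triggered
input.startsWith("http");     // false -> new code correctly treats
                              // this as a search term
```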
* The `javascript:update` rake task failed because recent versions of chart.js use a lowercase filename (`chart.min.js` instead of `Chart.min.js`)
* Changed `loadScript()` to use lowercase keys when looking up scripts
* `svg-arrow.css` seems to have changed slightly (linebreak at the end of file)
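A minimal sketch of the lookup change, not the actual `loadScript()` source: the cache of loaded scripts is keyed on the lowercased URL so a rename like `Chart.min.js` -> `chart.min.js` still hits the same entry.

```js
const loadedScripts = {};

function loadScript(url) {
  const key = url.toLowerCase(); // case-only renames match the cache
  if (!loadedScripts[key]) {
    loadedScripts[key] = new Promise((resolve, reject) => {
      const script = document.createElement("script");
      script.src = url;
      script.onload = resolve;
      script.onerror = reject;
      document.head.appendChild(script);
    });
  }
  return loadedScripts[key];
}
```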
When creating files with create-multipart, if the file
size was somehow zero we were showing a very unhelpful
error message to the user. Now we show a nicer message,
proactively skip the API call when JS reports a file size
of 0 bytes, and add extra console logging to help with
debugging.
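A sketch of the guard, with hypothetical names and messages:

```js
// Abort before calling create-multipart when the file is empty.
function assertNonEmptyFile(file) {
  if (file.size === 0) {
    console.warn(`Skipping multipart upload: ${file.name} is 0 bytes`);
    throw new Error("Cannot upload an empty (0 byte) file.");
  }
}
```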
Occasionally there will be a misconfigured CORS rule or a different
network failure when loading one of the media optimization WASM scripts.
This commit handles load failures and sends a new installFailed message
from the service worker, so that we don't error and hold up the rest
of the uploads if this occurs; the worker will just not process anything
and will keep trying to install itself with subsequent uploads until it succeeds.
This commit also removes the redundant useUppy variable in the worker;
this should have been removed a while ago in f70e6c302f.
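Inside the worker, the failure path looks roughly like this (the installFailed message name is from this commit; the surrounding structure is a sketch):

```js
onmessage = function (e) {
  if (e.data.type === "install") {
    try {
      loadLibs(e.data.settings); // importScripts() of the WASM bundles
      postMessage({ type: "installed" });
    } catch (error) {
      // A misconfigured CORS rule or network failure lands here; tell
      // the caller so uploads can proceed without optimization.
      postMessage({ type: "installFailed", errorMessage: error.message });
    }
  }
};
```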
Previously, `loadLibs` was called inside the `optimize` function of
the media-optimization-worker, which meant that it could be hit
multiple times causing load errors (as seen in b69c2f7311)
This commit moves that call to a specific message handler (the `install` message)
for the service worker, and refactors the service for the media-optimization-worker
to wait for this installation to complete before continuing with processing
image optimizations.
This way, we know for sure based on promises and worker messages
that the worker is installed and has all required libraries
loaded before we continue on with attempting any processing. The
change made in b69c2f7311 is no
longer needed with this commit.
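On the service side the gating looks roughly like this (names are illustrative, not the exact media-optimization service code):

```js
let installPromise = null;

function ensureInstalled(worker, settings) {
  if (!installPromise) {
    installPromise = new Promise((resolve, reject) => {
      worker.addEventListener("message", (e) => {
        if (e.data.type === "installed") {
          resolve();
        } else if (e.data.type === "installFailed") {
          installPromise = null; // retry with the next upload
          reject(new Error(e.data.errorMessage));
        }
      });
      worker.postMessage({ type: "install", settings });
    });
  }
  return installPromise;
}

async function optimizeImage(worker, file, settings) {
  await ensureInstalled(worker, settings); // libs are loaded past here
  worker.postMessage({ type: "compress", file });
}
```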
When the loadLibs function is called, it in turn calls:
importScripts(settings.mozjpeg_script);
importScripts(settings.resize_script);
For the media-optimization-worker service worker, we are getting
an error in Firefox, which balks at wasm_bindgen, a global
variable defined with let, being redefined when the module loads.
This causes image processing to fail in Firefox when more than one
image is uploaded at a time.
The solution to this is to just check whether the scripts are
already imported, and if so do not import them again.
Chrome doesn't seem to care about this variable redefinition
and does not error, and it seems to be expected behaviour that
the script can be loaded multiple times (see https://github.com/w3c/ServiceWorker/issues/1041)
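The guard is essentially the following sketch (the real code may check each imported script separately):

```js
function loadLibs(settings) {
  // `wasm_bindgen` is declared with `let` by the resize script; if it
  // already exists, the scripts were imported earlier in this worker's
  // lifetime and importing them again makes Firefox throw.
  if (typeof wasm_bindgen !== "undefined") return;
  importScripts(settings.mozjpeg_script);
  importScripts(settings.resize_script);
}
```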
Short URLs were resolved before diffHTML was loaded and content was
swapped by it, which meant that no URLs were found and they remained
unresolved. This caused image elements to be blank.
* DEV: Updated diffHTML to 1.0.0-beta.20
This change only applies when uppy is calling the
media-optimization-worker. Since the old way of calling the worker via
jQuery file uploader will be removed soon, there is no point in coming
up with some random string to use in place of the file name for the
promise resolvers there; we can live with this for now.
Adds uppy upload functionality behind an
enable_experimental_composer_uploader site setting (default false,
and hidden).
When enabled this site setting will make the composer-editor-uppy
component be used within composer.hbs, which in turn points to
a ComposerUploadUppy mixin which overrides the relevant
functions from ComposerUpload. This uppy uploader has parity
with all the features of jQuery file uploader in the original
composer-editor, including:
* progress tracking
* error handling
* number of files validation
* pasting files
* dragging and dropping files
* updating upload placeholders
* upload markdown resolvers
* processing actions (the only one we have so far is the media
  optimization worker by falco, this works)
* cancelling uploads
For now all uploads still go via the /uploads.json endpoint; direct
S3 support will be added later.
Also included in this PR are some changes to the media optimization
service, to support uppy's different file data structures and to make
the promise tracking and resolving more robust. Currently it uses the
file name to track promises; we can switch to something more unique
later if needed (see the sketch below).
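Roughly, that bookkeeping looks like this (a sketch with illustrative names; the message shapes are not the exact worker protocol):

```js
const promiseResolvers = {};

function requestOptimization(worker, file) {
  return new Promise((resolve) => {
    promiseResolvers[file.name] = resolve; // keyed by name, for now
    worker.postMessage({ type: "compress", fileName: file.name, file });
  });
}

function handleWorkerMessage(e) {
  const resolve = promiseResolvers[e.data.fileName];
  if (resolve) {
    resolve(e.data.result);
    delete promiseResolvers[e.data.fileName];
  }
}
```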
Does not include custom upload handlers; that will come
in a later PR, as it is a tricky problem to handle.
Also, this new functionality will not be used in encrypted PMs because
encrypted PM uploads rely on custom upload handlers.
On iOS 15 beta, if you select the camera app when uploading an image
and try to upload a freshly taken picture, from the second picture
onwards the resize WASM operation will return an array filled with
zeroes.
Since every 4th byte is alpha, and at this step we are only dealing with
non-transparent images, this gives an O(1) way to detect that the bug
was hit. (On normal images, all 4th bytes are 255 at this point.)
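Sketched, the check amounts to (assuming RGBA byte layout; the function name is illustrative):

```js
// On an opaque image every alpha byte (bytes 3, 7, 11, ...) is 255, so
// a single zero alpha byte means the resize returned all zeroes.
function hitIOS15ResizeBug(rgbaBytes) {
  return rgbaBytes[3] === 0;
}
```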
Also adds a "catch-all" for when the resulting image becomes too small,
to try to accommodate other bugs of the same type. By default we only
trigger this whole operation on images over 1MB, so if the end result
is <20KB something weird happened. Throwing here lets the upload
continue using the original file, so nothing is lost and the user can
continue.
There are some hard limits in browser Canvas implementations that will
throw a runtime exception when crossed. Since those limits are platform
dependent, the best we can do is catch the exception and back off from
trying to optimize a problematic file.
For example, a 60MB PNG can be processed fine by Chrome but Firefox will
fail trying to extract the ImageData from the CanvasRenderingContext2D
with NS_ERROR_FAILURE.
Also cleans up the media-optimization-utils and adds post-resize size logs.
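A sketch of the back-off, assuming extraction happens via getImageData (names illustrative):

```js
function tryGetImageData(canvas) {
  try {
    const ctx = canvas.getContext("2d");
    return ctx.getImageData(0, 0, canvas.width, canvas.height);
  } catch (e) {
    // e.g. NS_ERROR_FAILURE in Firefox on very large PNGs; give up on
    // optimization and let the original file upload proceed.
    console.warn(`Skipping image optimization, canvas limit hit: ${e}`);
    return null;
  }
}
```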
To prevent opaque cache entries, all CDN files will now be requested in
'cors' mode if the cdn_cors_enabled global setting is enabled. Before
enabling the setting, you should enable CORS on the CDN server by adding
the response header `access-control-allow-origin: *` or
`access-control-allow-origin: https://discourse.example.com`.
Also, external file requests other than CDN ones will not be cached if
the response type is opaque.
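In service-worker terms the behaviour is roughly this sketch (only the cdn_cors_enabled setting name comes from this commit):

```js
async function fetchAndMaybeCache(request, isCdnAsset, cdnCorsEnabled) {
  if (isCdnAsset && cdnCorsEnabled) {
    // Re-issue the request in 'cors' mode so the response isn't opaque.
    request = new Request(request.url, { mode: "cors" });
  }
  const response = await fetch(request);
  // Opaque responses (e.g. no-cors third-party fetches) are not cached.
  return { response, cacheable: response.type !== "opaque" };
}
```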
This reverts commit e3de45359f.
We need to improve our strategy by adding a cache breaker with this change; some assets on CDNs and clients may have incorrect CORS headers, which can cause stuff to break.
The poll breakdown modal replaces the grouped pie charts feature.
Includes:
* MODAL: Untangle `onSelectPanel`
Previously, the modal-tab component would, on click, call the onSelectPanel callback with itself (modal-tab) as `this`, which severely limited its usefulness. Now showModal binds the callback to its controller.
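The fix is roughly the following, where `callback` stands in for the onSelectPanel function passed in the modal options (a sketch, not the exact d-modal diff):

```js
// Pre-bind in showModal so that when modal-tab later invokes it,
// `this` is the modal's controller rather than the modal-tab component.
controller.set("onSelectPanel", callback.bind(controller));
```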
"The PR includes a fix/change to d-modal (b7f6ec6) that hasn't been extracted to a separate PR because it's not currently possible to test a change like this in abstract, i.e. with dynamically created controllers/components in tests. The percentage/count toggle test for the poll breakdown feature is essentially a test for that d-modal modification."
This adds support for a `<d-topics-list>` tag you can embed in your site
that will be rendered as a list of discourse topics. Any attributes on
the tag will be passed as filters. For example:
`<d-topics-list discourse-url="URL" category="1234">` will filter to category 1234.
To use this feature, enable the `embed topics list` site setting. Then
on the site you want to embed, include the following javascript:
`<script src="http://URL/javascripts/embed-topics.js"></script>`
Where `URL` is your discourse forum's URL.
Then include the `<d-topics-list discourse-url="URL">` tag in your HTML document and it will
be replaced with the list of topics.
This commit adds a new property "discourseReferrerPolicy" to the
set of supported configuration properties for the comment embed
script. If provided the value will be used to set the "referrerPolicy"
attribute on the iframe created to display the comments. This in turn
will allow embedding pages to define a more lenient referrer policy
on the embed iframe when their default policy is so strict that it
keeps the comment embed from working.
Example:
* Setup:
* Discourse hosted at discourse.example.com
* Comments embedded at example.com
* Referrer-Policy at example.com set to 'same-origin'
* Without this commit:
* Loading the comments fails due to the referer being empty
* With this commit and no adjusted configuration:
* Loading the comments fails due to the referer being empty
(= same behaviour as without the commit)
* With this commit and DiscourseEmbed.discourseReferrerPolicy =
'no-referrer-when-downgrade' as additional configuration (sketched below):
* Loading the comments succeeds
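With the adjusted configuration, the embedding page at example.com would include something like this (values are illustrative aside from the property name):

```js
window.DiscourseEmbed = {
  discourseUrl: "https://discourse.example.com/",
  topicId: 12345,
  // Set as the referrerPolicy attribute on the comments iframe:
  discourseReferrerPolicy: "no-referrer-when-downgrade",
};
```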
Note that this change is of special interest for embedding pages
wanting to restrict data flows under the terms of the GDPR since
it allows selectively whitelisting comment embeds while preventing
referer leaking by default.
This is the first iteration of an effort towards making a very good dashboard.
Until we feel confident that it is good, this dashboard will only be accessible through /admin/dashboard_next