Non-UTF-8 user_agent requests were bypassing logging because PG always
wants UTF-8 strings.
This adds a conversion step so we are always dealing with valid UTF-8.
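A minimal sketch of the conversion, assuming a helper along these lines (the method name and call site are illustrative):
```
# Hypothetical helper: force the user agent into valid UTF-8 before it is
# handed to PG, dropping invalid byte sequences rather than skipping the log row.
def clean_user_agent(user_agent)
  return nil if user_agent.nil?
  user_agent.dup.force_encoding("UTF-8").scrub("")
end
```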
`available_disk_space` calls `df`, which exits with an error if the `uploads` path doesn't exist. That's often the case when `Discourse.store.external?` is true.
By doing the `external?` check first, `disable_if_low_on_disk_space` does less work and doesn't print errors to the console.
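A rough sketch of the reordering, assuming the method shape below (the guard is the point; the low-disk handling itself is elided):
```
def disable_if_low_on_disk_space
  # External stores (e.g. S3) have no local `uploads` path, so bail out
  # before shelling out to `df` and printing an error.
  return false if Discourse.store.external?

  # ... existing `available_disk_space` check and disabling logic ...
end
```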
This fixes the following issues:
* The link element that pops open the lightbox was pointing at the S3 URL with a private ACL instead of the secure media URL for the image
* Use `@post.with_secure_media?` in `CookedPostProcessor` for URL cooking. In some cases, such as when a post is edited and an upload is added, `upload.secure?` can be false, which meant `srcset` URLs were not cooked into secure media upload URLs correctly (see the sketch below)
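Roughly, the URL cooking now keys off the post rather than the individual upload; something along these lines (illustrative, not the exact diff):
```
# Cook based on the post's secure media state; `upload.secure?` can lag
# behind right after an edit adds the upload.
if @post&.with_secure_media?
  url = UrlHelper.cook_url(url, secure: true)
end
```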
This feature adds the ability to define synonyms for tags, and the ability to merge one tag into another while keeping it as a synonym. For example, tags named "js" and "java-script" can be synonyms of "javascript". When searching and creating topics using synonyms, they will be mapped to the base tag.
Along with this change is a new UI found on each tag's page (for example, `/tags/javascript`) where more information about the tag can be shown. It lists the synonyms, the categories the tag is restricted to (if any), and the tag groups it belongs to (if tag group names are made public on the `/tags` page via the "tags listed by group" setting). Staff users will be able to manage tags in this UI, merge tags, and add/remove synonyms.
This bug was causing some unusual behavior when the last post is filtered (e.g. it comes from an ignored user). In some situations this would cause suggested topics to be omitted from the payload.
The next_page specs have been updated to remove most of the stubs.
* Support for custom messages and redirects when creating posts
When a post/topic is created Discourse serializes a `NewPostResult`
object. Normally this contains a status like `created_post` or
errors describing why the post could not be created.
There are times when a plugin might want to take the submitted post
and process it in the background. In this case, the plugin
can return custom `message` and `route_to` attributes in the
`NewPostResult`.
If present, the message will be displayed in an alert, and when "Ok" is
clicked the user will be routed to the new URL.
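A hypothetical plugin sketch (the handler registration and attribute names follow the description above; the exact hook a plugin uses may differ):
```
NewPostManager.add_handler do |manager|
  # ... hand the submitted post off to a background job here ...

  result = NewPostResult.new(:custom, true)
  result.message = I18n.t("my_plugin.post_scheduled")  # shown in an alert
  result.route_to = "/my-plugin/status"                # user is sent here after "Ok"
  result
end
```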
* Destroy the draft in parallel
Note:
```
def foo(bar: 1)
end
foo({bar: 2})
# raises a deprecation warning on Ruby 2.7, instead use:
foo(**{bar: 2})
```
Additionally, when matching regexes always use strings. It does not make
sense to match a non-string against a regex.
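For example (illustrative):
```
value = 1234
value.to_s =~ /\A\d+\z/   # => 0, matches as intended
# value =~ /\A\d+\z/      # matching the Integer relies on Object#=~, which just returns nil
```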
We already cache failed onebox URL requests client-side; we now want to cache this on the server side for extra protection. Failed onebox previews will be cached for 1 hour, and further requests for that URL will fail with a 404 status. Forcing a rebake via the Rebake HTML action will delete the failed URL cache (like how the oneboxer preview cache is deleted).
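The server-side cache might look roughly like this (the key format and method names are hypothetical):
```
require "digest"

FAILED_ONEBOX_KEY = ->(url) { "onebox_failed_#{Digest::SHA1.hexdigest(url)}" }

def cache_onebox_failure!(url)
  Discourse.cache.write(FAILED_ONEBOX_KEY.call(url), true, expires_in: 1.hour)
end

def onebox_recently_failed?(url)
  Discourse.cache.read(FAILED_ONEBOX_KEY.call(url)).present?
end

# The preview endpoint can then fail early:
# raise Discourse::NotFound if onebox_recently_failed?(url)
```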
If a user has more than 60 active sessions, the oldest sessions will be terminated automatically. This protects performance when logging in and when loading the list of recently used devices.
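A sketch of the trimming, assuming a `UserAuthToken`-style model ordered by recency (the constant and column names are assumptions):
```
MAX_SESSION_COUNT = 60

def enforce_session_limit!(user)
  # Keep the 60 most recently used sessions and terminate the rest.
  UserAuthToken
    .where(user_id: user.id)
    .order("rotated_at DESC")
    .offset(MAX_SESSION_COUNT)
    .destroy_all
end
```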
This is a bottom-up rewrite of the Discourse cache to support faster performance
and a limited surface area.
ActiveSupport::Cache::Store accepts many options we do not use; this partial
implementation only picks out the bits we do use and want to support.
Additionally, params are named, which avoids typos such as "expires_at" vs "expires_in".
This also moves a few spots in Discourse to use Discourse.cache over setex.
Performance of setex and Discourse.cache.write is similar.
Discourse.cache is a more consistent method to use and offers a clean fallback
if you are skipping Redis.
This is part of a larger change that both optimizes Discourse.cache and omits
use of setex on $redis in favor of consistently using Discourse.cache.
Benchmarks do reveal that Rails.cache and Discourse.cache are about 1.25x slower
than redis.setex / get, so a re-implementation will follow prior to porting.
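Basic usage of the supported surface looks like this (key names and the block body are illustrative):
```
Discourse.cache.write("top-traffic-report", { posts: 42 }, expires_in: 10.minutes)
Discourse.cache.read("top-traffic-report")   # => { posts: 42 }

Discourse.cache.fetch("expensive-lookup", expires_in: 1.hour) do
  # only runs on a cache miss
  SomeExpensiveQuery.run
end
```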
This amends our API so we provide the draft key when saving a post.
This means PostCreator can clean up the draft consistently even if we are
doing fancy stuff like replying to a new topic or a new PM.
There will be some follow-up work to clean it up so the client never calls destroy
on the draft during normal operation and the #create/#update endpoints take care of it
every time.
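The call shape is roughly this (options other than `draft_key` are just the usual PostCreator arguments; the values are illustrative):
```
PostCreator.create!(
  current_user,
  raw: "post body",
  topic_id: topic.id,
  draft_key: "topic_#{topic.id}"  # lets the creator clean up the matching draft
)
```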
In `post_creator`, the ACL update is only necessary when uploads need to be secured.
This should fix a regression with S3 clones that do not support updating ACLs.
* Add timezone to user_options table
* Also migrate existing timezone values from UserCustomField,
which is where the discourse-calendar plugin is storing them
* Allow users to change their core timezone from Profile
* Auto guess & set timezone on login & invite accept & signup
* Serialize user_options.timezone for group members. This is so discourse-group-timezones can access the core user timezone, as it is being removed in discourse-calendar.
* Annotate user_option with timezone
* Validate timezone values (see the sketch below)
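A sketch of the validation idea (the validator placement is an assumption; `ActiveSupport::TimeZone[]` resolves both Rails zone names and IANA identifiers, returning nil for anything unknown):
```
class TimezoneValidator < ActiveModel::EachValidator
  # Reject anything Rails/TZInfo cannot resolve; a blank value stays allowed.
  def validate_each(record, attribute, value)
    return if value.blank? || ActiveSupport::TimeZone[value].present?
    record.errors.add(attribute, "is not a valid timezone")
  end
end
```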
There was an issue in dev where, when uploading secure media, the href of the media was correctly being replaced in the CookedPostProcessor but the srcset URLs were not. This is because UrlHelper.cook_url was returning the asset host URL for secure media instead of returning early with the secure media proxy URL.
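In rough terms, the fix makes the secure-media branch return before the asset host rewrite; something like this (the helper name is illustrative):
```
def self.cook_url(url, secure: false)
  # Return the secure media proxy URL up front; otherwise the asset host
  # rewrite below would win and srcset entries would point at S3.
  return secure_proxy_url(url) if SiteSetting.secure_media? && secure

  # ... existing scheme / asset host handling ...
end
```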
Previously our custom exception handler was unable to handle situations
where an invalid mime type was sent, resulting in a warning log.
This ensures we pretend a request is HTML for the purpose of rendering
the error page if an invalid mime type from a scanner is shipped to the app.
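The idea, sketched (where exactly the rescue lives is an implementation detail):
```
begin
  request.format
rescue Mime::Type::InvalidMimeType
  # A scanner sent an unparseable Accept header; fall back to HTML so the
  # error page can still render.
  request.format = :html
end
```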
* FEATURE: Normalize the service worker route
Update cache headers so they are not immutable outside of the Rails app
Add the ability to purge the service worker cache from localhost
Rails -> nginx will pass immutable flags so the file is cached until reloaded.
In most cases, nginx will have its cache flushed on rebuild (new image).
For those needing dynamic re-caching (such as upgrading via the UI),
a rake task for flushing the service worker script is provided
through `assets:flush_sw`.
This is only defined in a console environment. For example:
```
[1] pry(main)> SiteSetting.info(:title)
=> {:resolved_value=>"Globally Overridden Title",
:default_value=>"Discourse",
:global_override=>"Globally Overridden Title",
:database_value=>"Test Discourse",
:refresh?=>false,
:client?=>true,
:secret?=>false}
```