Latest Redis introduces a block form of multi / pipelined; this was incorrectly
passed through and not namespaced.
This fix also updates Logster; we had held off on upgrading it due to missing functions.
This fixes a corner case of the perf optimization in d4e35f5.
When you have the same post showing in multiple tabs/devices and like
said post in one place, we updated the like count but didn't flip the
`acted` bool in the front end. This caused a small visual desync.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
A bit of a mixed bag, this addresses several edge areas of bookmarks and makes them compatible with polymorphic bookmarks (hidden behind the `use_polymorphic_bookmarks` site setting). The main ones are:
* ExportUserArchive compatibility
* SyncTopicUserBookmarked job compatibility
* Sending different notifications for the bookmark reminders based on the bookmarkable type
* Import scripts compatibility
* BookmarkReminderNotificationHandler compatibility
This PR also refactors the `register_bookmarkable` API so it accepts a class descended from a `BaseBookmarkable` class instead. This was done because we kept having to add more and more lambdas/properties inline and it was very messy, so a factory pattern is cleaner. The classes can be tested independently as well.
Some later PRs will address some other areas like the discourse narrative bot, advanced search, reports, and the .ics endpoint for bookmarks.
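As a rough sketch of the new registration shape (the exact methods `BaseBookmarkable` requires may differ; the names below are illustrative):

```ruby
# Illustrative only: a bookmarkable implemented as a class rather than
# inline lambdas/properties.
class PostBookmarkable < BaseBookmarkable
  def self.model
    Post
  end

  def self.serializer
    UserPostBookmarkSerializer
  end

  def self.preload_associations
    [:topic]
  end
end

Bookmark.register_bookmarkable(PostBookmarkable)
```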
Admins won't be able to disable strip_image_metadata if they don't
disable composer_media_optimization_image_enabled first, since the latter
will strip the same metadata on the client during upload, making disabling
the former have no effect.
Bug report at https://meta.discourse.org/t/-/223350
`TestLogger` was responsible for some flaky specs runs:
```
Error during failsafe response: undefined method `debug' for #<TestLogger:0x0000556c4b942cf0 @warnings=1>
Did you mean? debugger
```
This commit also cleans up other uses of `FakeLogger`
This happened only for languages other than "en" and when `I18n.t` was called without any interpolation keys. The lib still tried to interpolate keys because it interpreted the `overrides` option as an interpolation key.
When an admin enters a badly formed regular expression in the
permalink_normalizations site setting, a RegexpError exception is
generated every time a URL is normalized (see Permalink.normalize_url).
The new validator validates every regular expression present in the
setting value (delimited by '|').
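A minimal sketch of that validation, assuming rules are split on '|' (the class and method names here are illustrative, not the exact core implementation):

```ruby
class PermalinkNormalizationsValidator
  def valid_value?(value)
    return true if value.nil? || value.empty?

    value.split('|').all? do |rule|
      begin
        Regexp.new(rule) # raises RegexpError for a malformed pattern
        true
      rescue RegexpError
        false
      end
    end
  end
end

PermalinkNormalizationsValidator.new.valid_value?("/(foo") #=> false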
If the identity provider does not provide a precise username value, then we should use our UserNameSuggester to generate one and use it for the override. This makes the override consistent with initial account creation.
This will make future changes to the 'pull hotlinked images' system easier. This commit should not introduce any functional change.
For now, the old post_custom_field data is kept in the database. This will be dropped in a future commit.
We were calling `dup` on the hash and using that to check for changes. However, we were not duplicating the values, so changes to arrays or nested hashes would not be detected.
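The failure mode in miniature (plain Ruby, nothing Discourse-specific):

```ruby
original = { tags: ["a"] }
snapshot = original.dup    # copies the hash, not the array inside it
original[:tags] << "b"

snapshot == original       #=> true, so the change goes undetected

# Duplicating the values as well restores the comparison:
snapshot = original.transform_values(&:dup)
original[:tags] << "c"
snapshot == original       #=> false
```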
This pull request follows on from https://github.com/discourse/discourse/pull/16308. This one does the following:
* Changes `BookmarkQuery` to allow for querying more than just Post and Topic bookmarkables
* Introduces a `Bookmark.register_bookmarkable` method which requires a model, serializer, fields and preload includes for searching. These registered `Bookmarkable` types are then used when validating new bookmarks, and also when determining which serializer to use for the bookmark list. The `Post` and `Topic` bookmarkables are registered by default.
* Adds new specific types for Post and Topic bookmark serializers along with preloading of associations in `UserBookmarkList`
* Changes to the user bookmark list template to allow for more generic bookmarkable types alongside the Post and Topic ones which need to display in a particular way
All of these changes are gated behind the `use_polymorphic_bookmarks` site setting, apart from the .hbs changes where I have updated the original `UserBookmarkSerializer` with some stub methods.
Following this PR will be several plugin PRs (for assign, chat, encrypt) that will register their own bookmarkable types or otherwise alter the bookmark serializers in their own way, also gated behind `use_polymorphic_bookmarks`.
This commit also removes `BookmarkQuery.preloaded_custom_fields` and the functionality surrounding it. It was added in 0cd502a558 but was only used by the discourse-assign plugin, where it has since been removed, so no plugins use it anymore and we don't need it.
We intend to switch to the `:json` serializer, which will stringify all keys. However, we need a clean revert path. This commit ensures that our `_t` cookie handling works with both marshal (the current default) and json (the new default) serialization.
This PR enables custom email dark mode styles by default that were added here.
There is currently poor support for dark mode queries in mail clients. The main beneficiary of these changes will be Apple Mail and Outlook.
Enjoy the darkness 🕶️
When changing upload security using `Upload#update_secure_status`,
we may not have the context of how an upload is being created, because
this code path can be run through scheduled jobs. When calling
update_secure_status, the normal ActiveRecord validations are run,
and ours include validating extensions. In some cases the upload
is created in an automated way, such as user export zips, and the
security is applied later, even though the extension would be
prohibited in a normal upload.
This caused the upload to fail validation on `update_secure_status`,
causing the security change to silently fail. This fixes the issue
by skipping the file extension validation when the upload security
is being changed.
This commit fixes two issues at play. The first was introduced
in f6c852b (or maybe not introduced
but rather revealed). When a user posted a new message in a topic,
they received the unread topic tracking state MessageBus message,
and the Unread (X) indicator was incremented by one: since the
aforementioned perf commit, we "guess" the correct last read post
for the user because we no longer calculate individual users' read
status there. This meant that every time a user posted in a topic
they tracked, the unread indicator was incremented. To get around
this, we can just exclude the user who created the post from the
target users of the unread state message.
The second issue was related to the private message topic tracking
state, and was somewhat similar. Whenever a user created a new private
message, the New (X) indicator was incremented, and could not be
cleared until the page was refreshed. To solve this, we just don't
update the topic state for the user when the new_topic tracking state
message comes through if the user who created the topic is the
same as the current user.
cf. https://meta.discourse.org/t/bottom-of-topic-shows-there-is-1-unread-remaining-when-there-are-actually-0-unread-topics-remaining/220817
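The shape of the first fix, as a sketch (MessageBus really does take a `user_ids` option; the channel and payload here are illustrative):

```ruby
MessageBus.publish(
  "/unread",
  { topic_id: topic.id, last_read_post_number: post.post_number },
  user_ids: tracking_user_ids - [post.user_id] # exclude the poster
)
```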
Discourse has the Discourse Connect Provider protocol that makes it possible to
use a Discourse instance as an identity provider for external sites. As a
natural extension to this protocol, this PR adds a new feature that makes it
possible to use Discourse as a 2FA provider as well as an identity provider.
The rationale for this change is that it's very difficult to implement 2FA
support in a website and if you have multiple websites that need to have 2FA,
it's unrealistic to build and maintain a separate 2FA implementation for each
one. But with this change, you can piggyback on Discourse to take care of all
the 2FA details for you for as many sites as you wish.
To use Discourse as a 2FA provider, you'll need to follow this guide:
https://meta.discourse.org/t/-/32974. It walks you through what you need to
implement on your end/site and how to configure your Discourse instance. Once
you're done, there is only one additional thing you need to do which is to
include `require_2fa=true` in the payload that you send to Discourse.
When Discourse sees `require_2fa=true`, it'll prompt the user to confirm their
2FA using whatever methods they've enabled (TOTP or security keys), and once
they confirm they'll be redirected back to the return URL you've configured and
the payload will contain `confirmed_2fa=true`. If the user has no 2FA methods
enabled however, the payload will not contain `confirmed_2fa`, but it will
contain `no_2fa_methods=true`.
You'll need to be careful to re-run all the security checks and ensure the user
can still access the resource on your site after they return from Discourse.
This is very important because there's nothing that guarantees that the
user who comes back from Discourse after confirming 2FA is the same user
that you've redirected to Discourse.
Internal ticket: t62183.
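For illustration, a provider request with the new flag might be built like this (following the documented DiscourseConnect payload format of a Base64 body plus an HMAC-SHA256 signature; the secret and URLs are placeholders):

```ruby
require "base64"
require "cgi"
require "openssl"
require "securerandom"

secret  = "shared-sso-secret" # placeholder
raw     = "nonce=#{SecureRandom.hex}" \
          "&return_sso_url=#{CGI.escape("https://example.com/sso/return")}" \
          "&require_2fa=true"
payload = Base64.strict_encode64(raw)
sig     = OpenSSL::HMAC.hexdigest("sha256", secret, payload)

url = "https://forum.example.com/session/sso_provider" \
      "?sso=#{CGI.escape(payload)}&sig=#{sig}"
```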
Due to the default CSP, web workers instantiated from CDN-based assets are
still treated as "same-origin", meaning that we had no way of safely
instantiating a web worker from a theme.
This limits the theme system and adds the arbitrary restriction that
WASM-based components cannot be safely used.
To resolve this limitation, all JS assets in about.json are also cached on
the local domain. For example:
```
{
  "name": "Header Icons",
  "assets" : {
    "worker" : "assets/worker.js"
  }
}
```
This can then be referenced in JS via `settings.theme_uploads_local.worker`.
local_js_assets are unconditionally served from the site directly and
bypass the entire CDN, using the pre-existing JavascriptCache
Previous to this change, this code was completely dormant on sites which
used S3-based uploads. This reuses the very well-tested and cached asset
system on S3-based sites.
Note: when creating local_js_assets it is highly recommended to keep the
assets lean and keep all the heavy lifting in CDN-based assets. For example,
wasm files can still live on the CDN, but the lean worker that loads them
can live locally.
This change unlocks wasm in theme components, so wasm is now also allowed
in `theme_authorized_extensions`
* more usages of upload.content
* add a specific test for upload.content
* Adjust logic to ensure that after upgrades we still get a cached local js
on save
There are still some, but those are in actual code that's used outside core, so the change there would need to go through the deprecation cycle. That's a task for another day.
Previously we only supported a single 'required tag group' for a category. This commit allows admins to specify multiple required tag groups, each with their own minimum tag count.
A new category_required_tag_groups database table replaces the existing columns on the categories table. Data is automatically migrated.
Previous to this change an optimisation stripped crawler content from
all mobile browsers.
This had a side effect that meant that when we dropped support for an old
mobile platform we would stop rendering topic and topic list pages.
The new implementation ensures we only perform the optimisation on modern
mobile browsers.
This patch removes some of our freedom patches that have been deprecated
for some time now.
Some of them have been updated so we’re not shipping code based on an
old version of Rails.
This commit introduces a new use_polymorphic_bookmarks site setting
that is default false and hidden, that will be used to help continuous
development of polymorphic bookmarks. This setting **should not** be
enabled anywhere in production yet, it is purely for local development.
This commit uses the setting to enable create/update/delete actions
for polymorphic bookmarks on the server and client side. The bookmark
interactions on topics/posts are all usable. Listing, searching,
sending bookmark reminders, and other edge cases will be handled
in subsequent PRs.
Comprehensive UI tests will be added in the final PR -- we already
have them for regular bookmarks, so it will just be a matter of
changing them to be for polymorphic bookmarks.
Under some conditions, replacing an `<img` with `![]()` can break rendering, and make the image disappear.
Context at https://meta.discourse.org/t/152801
* DEV: add testing for multi del on keys
Following #15905 we were missing some tests, this covers cases where
del is used in the form of .del(key1,key2)
Tags (and tag groups) can be configured so that they can only be used in specific categories and (optionally) restrict topics in these categories to be able to add/use only these tags. These restrictions work as expected when a topic is created without going through the review queue; however, if the topic has to be reviewed by a moderator, these restrictions currently aren't checked before the topic is sent to the review queue; they're only checked later, when a moderator tries to approve the topic. This is a problem because if a user manages to submit a topic that doesn't meet the restrictions, moderators won't be able to approve it and it'll be stuck in the review queue.
This PR prevents topics that don't meet the tag requirements from being sent to the review queue and shows the poster an error message that indicates which tags cannot be used.
Internal ticket: t60562.
Previously, our `upload://` protocol urls were only supported in markdown image tags. This meant that our PullHotlinkedImages job was forced to convert `<img` tags to markdown. Depending on the exact syntax, this can actually cause the image to break.
This commit adds support for `upload://` inside regular HTML `<img` tags. In a future commit, we'll be able to use this to make our PullHotlinkedImages job much more robust.
Context at https://meta.discourse.org/t/152801
PostAnalyzer and CookedPostProcessor both replace URLs with oneboxes.
PostAnalyzer did not use the max_oneboxes_per_post site setting, and
CookedPostProcessor replaced at most max_oneboxes_per_post URLs, ignoring
the oneboxes that had already been replaced by PostAnalyzer.
If the crawled page returned an error, `FinalDestination#safe_get`
yielded `nil` for `uri` and `chunk` arguments. Another problem is that
`get` did not handle the case when `safe_get` failed and did not return
the `location` and `set_cookie` headers.
If a group's messageable_level is set to nobody then staff should not be able to send PMs to it.
Co-authored-by: Martin Brennan <martin@discourse.org>
The `blocked onebox domains` setting lets site owners change what sites
are allowed to be oneboxed. When a link is entered into a post,
Discourse checks the domain of the link against that setting and blocks
the onebox if the domain is blocked. But if there's a chain of
redirects, then only the final destination website is checked against
the site setting.
This commit amends that behavior so that every website in the redirect
chain is checked against the site setting, and if anything is blocked
the original link doesn't onebox at all in the post. The
`Discourse-No-Onebox` header is also checked in every response and the
onebox is blocked if the header is set to "1".
Additionally, Discourse will now include the `Discourse-No-Onebox`
header with every response if the site requires login to access content.
This is done to signal to a Discourse instance that it shouldn't attempt
to onebox other Discourse instances if they're login-only. Non-Discourse
websites can also include that header if they don't wish to have
Discourse onebox their content.
Internal ticket: t59305.
The user can select what happens with a bookmark after it expires. A new
option allows the bookmark's reminder to be kept even after it has expired.
After a bookmark's reminder notification is created, the reminder date
will be highlighted in red until the user resets the reminder date.
The user can do that using the new Clear Reminder button in the dropdown.
The search_ignore_accents site setting can be used to make the search
indexer remove the accents before indexing the content. The unaccent
function from PostgreSQL is better than Ruby's unicode_normalize(:nfkd).
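What the indexer leans on, in one line (sketch using Discourse's mini_sql `DB` helper; requires PostgreSQL's unaccent extension):

```ruby
DB.query_single("SELECT unaccent(:text)", text: "crème brûlée").first
#=> "creme brulee"
```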
Some product pages on Amazon are using a new HTML structure, meaning the previous Onebox engine was unable to gather the price and/or description. This change should allow these pages to be Oneboxed.
This PR adds an extra description to the 2FA page when granting a user admin access. It also introduces a general system for adding customized descriptions that can be used by future actions.
(Follow-up to dd6ec65061)
Discourse users and associated accounts are created or updated when a
user logs in or connects an account using their account preferences.
This new API can be used to create associated accounts and users too,
if necessary.
It's very easy to forget to add `require 'rails_helper'` at the top of every core/plugin spec file, and omissions can cause some very confusing/sporadic errors.
By setting this flag in `.rspec`, we can remove the need for `require 'rails_helper'` entirely.
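The flag in question is RSpec's standard `--require` option, so the `.rspec` file ends up looking something like:

```
--require rails_helper
```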
This allows text editors to use correct syntax coloring for the heredoc sections.
Heredoc tag names we use:
languages: SQL, JS, RUBY, LUA, HTML, CSS, SCSS, SH, HBS, XML, YAML/YML, MF, ICS
other: MD, TEXT/TXT, RAW, EMAIL
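For example, a SQL-tagged heredoc (plain Ruby; the tag name is what editors key off):

```ruby
sql = <<~SQL
  SELECT id
  FROM topics
  WHERE bumped_at > now() - interval '1 day'
SQL
```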
Previously, email validations could fire when deleting posts if, for
whatever reason, any user validations failed on the user object.
This kind of condition could happen in core due to a corruption of a
user record, or via a plugin that introduces a new validation on User.
Lib specs moved in 45cc16098d
Move the new selectable_avatars_mode_validator_spec to the new location
Remove the old selectable_avatars_enabled_validator_spec
follow-up of d1bdb6c65d
This commit handles the edge case where a draft is lost with no warning if the user edits the title (or category/tags) of a topic while they're replying to the same topic. Repro steps are as follows:
1. Start replying to a topic and type enough to get a draft saved.
2. Scroll up to the topic title and click the pencil icon next to the topic title, change the title, category and/or tags, and then save the changes.
3. Reload the page and you'll see that the draft is gone.
This happens because we only allow 1 draft per topic per user, and when you edit the title of a topic that you're replying to, from the server's perspective it looks as if you've submitted your reply, so it advances the draft sequence for the topic and deletes the draft.
The fix in this commit makes `PostRevisor` skip advancing the draft sequence when a topic's title is edited using the pencil button next to the title.
Internal ticket: t60854.
Co-authored-by: Robin Ward <robin.ward@gmail.com>
This option will make it so the [quote] bbcode will always
include the HTML link to the quoted post, even if a topic_id
is not provided in the PrettyText#cook options. This is so
[quote] bbcode can be used in other places, like chat messages,
that always need the link and do not have an "off-topic" ID
to use.
Previously, cached counting made Redis calls and performed the flush in
the main thread.
This could lead to pathological states under extreme heavy load.
This refactor reduces load and cleans up the interface.
Previously we were publishing one messagebus message per user which was 'tracking' a topic. On large sites, this can easily be 1000+ messages. The important information in the message is common between all users, so we can manage with a single message on a shared channel, which will be much more efficient.
For user-specific values (notification_level and last_read_post_number), the JS app can infer values which are 'good enough'. Correct values will be loaded as soon as a topic-list containing the topic is visited.
Themes often cache `nil` values in a DistributedCache. This bug meant that we were re-calculating some values on every request, AND triggering message-bus publishing on every request.
This fix should provide a significant performance improvement for busy sites.
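The bug in miniature (plain-Ruby sketch, not the exact core patch): a bare lookup can't tell a cached `nil` from a cache miss, so a presence check is needed.

```ruby
cache = {}

def fetch(cache, key)
  return cache[key] if cache.key?(key) # hit, even when the value is nil
  cache[key] = yield                   # miss: compute once, nil included
end

fetch(cache, :theme_color) { nil }                 # computed and cached
fetch(cache, :theme_color) { raise "recomputed!" } #=> nil, no recompute
```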
2FA support in Discourse was added and grown gradually over the years: we first
added support for TOTP for logins, then we implemented backup codes, and last
but not least, security keys. 2FA usage was initially limited to logging in,
but it has been expanded and we now require 2FA for risky actions such as
adding a new admin to the site.
As a result of this gradual growth of the 2FA system, technical debt has
accumulated to the point where it has become difficult to require 2FA for more
actions. We now have 5 different 2FA UI implementations and each one has to
support all 3 2FA methods (TOTP, backup codes, and security keys) which makes
it difficult to maintain a consistent UX for these different implementations.
Moreover, there is a lot of repeated logic in the server-side code behind these
5 UI implementations which hinders maintainability even more.
This commit is the first step towards repaying the technical debt: it builds a
system that centralizes as much as possible of the 2FA server-side logic and
UI. The 2 main components of this system are:
1. A dedicated page for 2FA with support for all 3 methods.
2. A reusable server-side class that centralizes the 2FA logic (the
`SecondFactor::AuthManager` class).
From a top-level view, the 2FA flow in this new system looks like this:
1. User initiates an action that requires 2FA;
2. Server is aware that 2FA is required for this action, so it redirects the
user to the 2FA page if the user has a 2FA method, otherwise the action is
performed.
3. User submits the 2FA form on the page;
4. Server validates the 2FA and if it's successful, the action is performed and
the user is redirected to the previous page.
A more technically-detailed explanation/documentation of the new system is
available as a comment at the top of the `lib/second_factor/auth_manager.rb`
file. Please note that the details are not set in stone and will likely change
in the future, so please don't use the system in your plugins yet.
Since this is a new system that needs to be tested, we've decided to migrate
only the 2FA for adding a new admin to the new system at this time (in this
commit). Our plan is to gradually migrate the remaining 2FA implementations to
the new system.
For screenshots of the 2FA page, see PR #15377 on GitHub.
The chat quoting mechanism will need to be able to generate
markdown for all kinds of uploads. The UploadMarkdown class
was missing generation for video and audio uploads. This
commit adds that in, and also expands the server-side regex
recognition of FileHelper types to match those in uploads.js,
and adds a spec for UploadMarkdown
Follow up to 6f7364e48b to add a spec
that tests the full authentication of a Windows Hello algorithm (-257)
webauthn verification. The test added in that commit only tested that
we know about that algorithm, not whether it was actually usable.
* FEATURE: RS512, RS384 and RS256 COSE algorithms
These algorithms are not implemented by cose-ruby, but used in the web
authentication API and were marked as supported.
* FEATURE: Use all algorithms supported by cose-ruby
Previously only a subset of the algorithms were allowed.
* Chinese segmentation will continue to rely on cppjieba
* Japanese segmentation will use our port of TinySegmenter
* Korean currently does not rely on segmentation which was dropped in c677877e4f
* SiteSetting.search_tokenize_chinese_japanese_korean has been split
into SiteSetting.search_tokenize_chinese and
SiteSetting.search_tokenize_japanese respectively
When creating a direct message to a group with group SMTP
set up, and adding another person to that message in the OP,
we send an email to the second person in the OP via the group_smtp
job. This in turn creates an IncomingEmail record to guard against
IMAP double sync.
The issue with this was that this IncomingEmail (which is essentially
a placeholder/dummy one) was having its Message-ID used as the canonical
References Message-ID for subsequent emails sent out to user_private_message
recipients (such as members of the group), causing threading issues in
the mail client. The canonical <topic/ID@HOST> format should be used
instead for these cases.
This commit fixes the issue by only using the IncomingEmail for the
OP's Message-ID if the OP was created via our handle_mail email receiver
pipeline. It does not make sense to use it in other cases.
In an earlier PR, we decided that we only want to block a domain if
the blocked domain in the SiteSetting is the final destination (/t/59305). That
PR used `FinalDestination#get`. `resolve`, however, is used in several
places, but blocks domains along the redirect chain when certain options
are provided.
This commit changes the default options for `resolve` to not do that. Existing
users of `FinalDestination#resolve` are
- `Oneboxer#external_onebox`
- our onebox helper `fetch_html_doc`, which is used in amazon, standard embed
and youtube
- these folks already go through `Oneboxer#external_onebox` which already
blocks correctly
If the SiteSetting `allowed_onebox_iframes` contains a value of `*`, it will use the values of `all_iframe_origins` during the Oneboxing process. If `all_iframe_origins` itself contains a value of `*`, `origins_to_regexes` will try to return a "catch-all" regex.
Other code assumes `origins_to_regexes` will return an array, so this change ensures the `*` case will return an array containing only the catch-all regex.
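The shape of the fix, as an illustrative sketch (not the exact core method body):

```ruby
def origins_to_regexes(origins)
  return [/.*/] if origins.include?("*") # catch-all, still an array
  origins.map { |origin| /\A#{Regexp.escape(origin)}/i }
end
```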
1. `html_doc.css('.Box.md')` always returns a truthy value (e.g. `[]`) so the second branch of the if-elsif never ran
2. `node&.css('text()')` was invalid code that would raise an error
3. Matching on h3 elements is no longer correct with the current html structure returned by GitHub
It is too close to release of 2.8 for incomplete
feature shenanigans. Ignores and drops the columns and drops
the trigger/function introduced in
e21c640a3c.
Will pick this feature back up post-release.
We are planning on attaching bookmarks to more and
more other models, so it makes sense to make a polymorphic
relationship to handle this. This commit adds the new
columns and backfills them in the bookmark table, and
makes sure that any new bookmark changes fill in the columns
via DB triggers.
This way we can gradually change the frontend and backend
to use these new columns, and eventually delete the
old post_id and for_topic columns in `bookmarks`.
* File.exists? is deprecated and removed in Ruby 3.2 in favor of
File.exist?
* Dir.exists? is deprecated and removed in Ruby 3.2 in favor of
Dir.exist?
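The surviving spellings (the plural aliases raise `NoMethodError` on Ruby 3.2):

```ruby
File.exist?("config/database.yml") # ok everywhere
Dir.exist?("tmp")                  # ok everywhere
# File.exists?(...) and Dir.exists?(...) are gone in Ruby 3.2
```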
This commit adds a check that runs regularly as per
2d68e5d942 which tests the
credentials of groups with SMTP or IMAP enabled. If any issues
are found with those credentials a high priority problem is added to the
admin dashboard.
This commit also formats the admin dashboard differently if
there are high priority problems, bringing them to the top of
the list and highlighting them.
The problem will be cleared if the issue is fixed before the next
problem check, or if the group's settings are updated with a valid
credential.
This allows authenticators to instruct the Auth::Result to override attributes without using the general site settings. This provides an easy migration path for auth plugins which offer their own "overrides email", "overrides username" or "overrides name" settings. With this new API, they can set `overrides_*` on the result object, and the attribute will be overridden regardless of the general site setting.
ManagedAuthenticator is updated to use this new API. Plugins which consume ManagedAuthenticator will instantly take advantage of this change.
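A hedged sketch of a plugin authenticator using the new attributes (the hook signature matches ManagedAuthenticator-style authenticators; the class name is illustrative):

```ruby
class MyIdpAuthenticator < Auth::ManagedAuthenticator
  def after_authenticate(auth_token, existing_account: nil)
    result = super
    result.overrides_email = true # override regardless of site settings
    result
  end
end
```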
When attempting to Onebox a page if there is no `meta property="og:description"` tag but there is a `meta name="description"` tag, Onebox should try to use that value.
As an example, the lookup order for German was:
1. override for de
2. override for en
3. value from de
4. value from en
After this change the lookup order is the same as on the client:
1. override for de
2. value from de
3. override for en
4. value from en
see /t/16381
This is a fix to address blurry onebox favicon images if the site you
are linking to happens to have a favicon.ico file that contains multiple
images.
This fix detects if we are trying to create an upload for a favicon.ico
file. We then convert it to a png, and not a jpeg like we were doing. We
want a png because it will preserve transparency; otherwise, if we
convert it to a jpeg, we lose that and it looks bad on dark-themed sites.
This fix also addresses the fact that .ico files can include multiple
images. The blurry images we were producing were caused by the
ImageMagick `-flatten` option, which squishes all the images in a
multi-image .ico file together. So for .ico files we are no longer
flattening them; instead we are grabbing the last image in the
.ico bundle and converting that single image to a png.
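The conversion in miniature (illustrative invocation; core drives ImageMagick with more options, and the `[-1]` frame selector picks the last image in the bundle):

```ruby
# No -flatten: select the last image in the .ico and emit a single png.
system("convert", "favicon.ico[-1]", "favicon.png")
```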
Since 3b13f1146b the email threading
in mail clients has been broken, because the random suffix meant
that the References header would always be different for non-group
SMTP email notifications sent out.
This commit fixes the issue by always using the "canonical" topic
reference ID inside the References header in the format:
topic/TOPIC_ID@HOST
Which was the old format. We also add the References header to
notifications sent for the first post arriving, so the threading
works for subsequent emails. The Message-ID header is still random
as per the previous change.
Currently the Message-IDs we send out for outbound email
are not unique; for a post they look like:
topic/TOPIC_ID/POST_ID@HOST
And for a topic they look like:
topic/TOPIC_ID@HOST
This commit changes the outbound Message-IDs to also have
a random suffix before the host, so the new format is
like this:
topic/TOPIC_ID/POST_ID.RANDOM_SUFFIX@HOST
Or:
topic/TOPIC_ID.RANDOM_SUFFIX@HOST
This should help with email deliverability. This change
is backwards-compatible, the old Message-ID format will
still be recognized in the mail receiver flow, so people
will still be able to reply using Message-IDs, In-Reply-To,
and References headers that have already been sent.
This commit also refactors Message-ID related logic
to a central location, and adds judicious amounts of
tests and documentation.
We have a couple of site settings, `slow_down_crawler_user_agents` and `slow_down_crawler_rate`, that are meant to allow site owners to signal to specific crawlers that they're crawling the site too aggressively and that they should slow down.
When a crawler is added to the `slow_down_crawler_user_agents` setting, Discourse currently adds a `Crawl-delay` directive for that crawler in `/robots.txt`. Unfortunately, many crawlers don't support the `Crawl-delay` directive in `/robots.txt` which leaves the site owners no options if a crawler is crawling the site too aggressively.
This PR replaces the `Crawl-delay` directive with proper rate limiting for crawlers added to the `slow_down_crawler_user_agents` list. On every request made by a non-logged in user, Discourse will check the User Agent string and if it contains one of the values of the `slow_down_crawler_user_agents` list, Discourse will only allow 1 request every N seconds for that User Agent (N is the value of the `slow_down_crawler_rate` setting) and the rest of requests made within the same interval will get a 429 response.
The `slow_down_crawler_user_agents` setting becomes quite dangerous with this PR since it could rate limit lots if not all of anonymous traffic if the setting is not used appropriately. So to protect against this scenario, we've added a couple of new validations to the setting when it's changed:
1) each value added to the setting must be 3 characters or longer
2) each value cannot be a substring of tokens found in popular browser User Agents. The current list of prohibited values is: apple, windows, linux, ubuntu, gecko, firefox, chrome, safari, applewebkit, webkit, mozilla, macintosh, khtml, intel, osx, os x, iphone, ipad and mac.
This commit adds the RailsMultisite middleware in test mode when Rails.configuration.multisite is true. This allows for much more realistic integration testing. The `multisite_spec.rb` file is rewritten to avoid needing to simulate a middleware stack.
This commit introduces a new s3:ensure_cors_rules rake task
that is run as a prerequisite to s3:upload_assets. This rake
task calls out to the S3CorsRulesets class to ensure that
the 3 relevant sets of CORS rules are applied, depending on
site settings:
* assets
* direct S3 backups
* direct S3 uploads
This works for both Global S3 settings and Database S3 settings
(the latter set directly via SiteSetting).
As it is, only one rule can be applied, which is generally
the assets rule as it is called first. This commit changes
the ensure_cors! method to be able to apply new rules as
well as the existing ones.
This commit also slightly changes the existing rules to cover
direct S3 uploads via uppy, especially multipart, which requires
some more headers.
We are no longer able to display the image returned by Instagram directly within a Discourse site (either in the composer, or within a cooked post within a topic), so:
- Display an image placeholder in the composer preview
- A cooked post should use an iframe to display the Instagram 'embed' content
Not resetting the registry could lead to assets still being registered, for example.
This flaky spec was reproducible with this call: `bundle exec rspec --seed 9472 spec/components/discourse_plugin_registry_spec.rb spec/components/svg_sprite/svg_sprite_spec.rb`
Which would trigger the following error:
```
Failures:
1) DiscoursePluginRegistry#register_asset registers vendored_core_pretty_text properly
Failure/Error: expect(registry.javascripts.count).to eq(0)
expected: 0
got: 1
(compared using ==)
# ./spec/components/discourse_plugin_registry_spec.rb:248:in `block (3 levels) in <top (required)>'
# ./spec/rails_helper.rb:280:in `block (2 levels) in <top (required)>'
# /Users/joffreyjaffeux/.gem/ruby/2.7.3/gems/webmock-3.14.0/lib/webmock/rspec.rb:37:in `block (2 levels) in <top (required)>'
```
The all inboxes view was introduced in
016efeadf6 but we decided to roll it back
for performance reasons. The main performance challenge here is that PG
has to basically loop through all the PMs that a user is allowed to view
before being able to order by `Topic#bumped_at`. The all inboxes view was
not planned as part of the new/unread filter so we've decided not to tackle
the performance issue for the upcoming release.
Follow-up to 016efeadf6
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit soon becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
in a format based on the maximum increment (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
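The gist of the humanization, matching the 1536KB example above (sketch; core uses its own helper):

```ruby
require "active_support/number_helper"

ActiveSupport::NumberHelper.number_to_human_size(1536 * 1024)
#=> "1.5 MB"
```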
Instead of going to the OP of the topic for topic-level bookmarks
(which are bookmarks where for_topic is true) when clicking on the
bookmark in the quick access menu or on the user bookmark list,
this commit takes the user to the last unread post in
the topic instead. This should be generally more useful than landing
on the unchanging OP.
To make this work nicely, I needed to add the last_read_post_number to
the BookmarkQuery based on the TopicUser association. It should not add
too much extra weight to the query, because it is limited to the user
that we are fetching bookmarks for.
Also fixed an issue where the bookmark serializer highest_post_number was
not taking into account whether the user was staff, which is when we
should use highest_staff_post_number instead.
Allows creating a bookmark with the `for_topic` flag introduced in d1d2298a4c set to true. This happens when clicking on the Bookmark button in the topic footer when no other posts are bookmarked. In a later PR, when clicking on these topic-level bookmarks the user will be taken to the last unread post in the topic, not the OP. Only the OP can have a topic level bookmark, and users can also make a post-level bookmark on the OP of the topic.
I had to do some pretty heavy refactors because most of the bookmark code in the JS topics controller was centred around instances of Post JS models, but the topic level bookmark is not centred around a post. Some refactors were just for readability as well.
Also removes some missed reminderType code from the purge in 41e19adb0d
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a)
This commit marks the column as readonly, ignores it, and removes
the index, and it will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
First reported in https://meta.discourse.org/t/-/202482/19
There are two optimizations being applied here:
1. Fetch a user's group ids in a separate query instead of including it
as a sub-query. When I tried a subquery, the query plan became very
inefficient.
2. Join against the `topic_allowed_users` and `topic_allowed_groups`
tables instead of doing an IN against a subquery where we UNION the
`topic_id`s from the two tables. From my profiling, this enables PG to
do a backwards index scan on the `index_topics_on_timestamps_private`
index.
This commit fixes a bug where listing all messages was incorrectly
excluding topics if a topic has been archived by a group even if the
user did not belong to the group.
This commit also fixes another bug where dismissing private messages
selectively was subjected to the default limit of 30.
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column and it's better to rely on the
post.topic_id as the canonical topic_id for bookmarks, then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
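Ignoring a column ahead of dropping it uses Rails' standard mechanism:

```ruby
class Bookmark < ActiveRecord::Base
  self.ignored_columns = ["topic_id"] # column still exists; AR stops reading it
end
```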
This bug was introduced by f66007ec83.
In PostJobsEnqueuer we previously did not fire the after_post_create
event and after_topic_create event for private message topics. This was
changed in the above commit in order to publish message bus messages
for topic tracking state updates. Unfortunately this caused the
NotifyMailingListSubscribers job to be enqueued for all posts including
private messages, and admins and the users involved in the PMs got
emailed the contents of the PMs if they had mailing list mode enabled.
Luckily the impact of this was mitigated by a Guardian#can_see? check
for each mailing list mode user in the NotifyMailingListSubscribers job.
We never want to notify mailing list mode subscribers for private messages
so an early return has been added there, plus the logic in PostJobsEnqueuer
has been fixed, and tests have been added to that class where there were
none before.
Discourse automatically sends a private message after backup or
restore finished. The private message used to contain the log inline
even when it was very long. A very long log can create issues because
the length of the post will be over the maximum allowed length of a
post. When that happens, Discourse will try to create an upload with
the logs. If that fails, it will trim the log and inline it.
Searching in a category looked only one level down, ignoring the site
setting max_category_nesting. The user interface did not support the
third level of categories and did not display them in the "Categorized"
input of the advanced search options.
* FEATURE: Onebox can match engines based on the content_type
`FinalDestination` now returns the `content_type` of a resolved URL.
`Oneboxer` passes this value to `Onebox` itself. Onebox engines can now specify a `matches_content_type` regex of content_types that the engine can handle, regardless of the URL.
`ImageOnebox` will match URLs with a content type of `image/png`, `jpg`, `gif`, `bmp`, `tif`, etc.
This will allow images that exist at a URL without a file type extension to be correctly rendered, assuming a valid `content_type` is returned.
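A hedged sketch of an engine opting in by content type (the surrounding engine DSL may differ from this illustration):

```ruby
class ImageOnebox
  include Onebox::Engine

  # Match on the resolved content_type, regardless of the URL's extension.
  matches_content_type(/\Aimage\/(png|jpe?g|gif|bmp|tiff?)/)
end
```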
By default, Twitter will return the URL for the avatar image of the tweet poster as the `og:image` value.
However, if the `user_generated` attribute is true, we should not use this as the avatar URL, as this will be a URL of an image in the tweet itself (e.g., an image belonging to a tweeted news story).
Mixing multisite and standard specs can lead to issues (e.g. when using `fab!`)
Disabled the (upcoming https://github.com/discourse/rubocop-discourse/pull/11) rubocop rule for two files that have thoroughly tangled both types of specs.
Before this change, calling `StyleSheet::Manager.stylesheet_details`
for the first time resulted in multiple queries to the database. This is
because the code was modelled in a way where each `Theme` was loaded
from the database one at a time.
This PR restructures the code such that it allows us to load all the
theme records in a single query. It also allows us to eager load the
required associations upfront. In order to achieve this, I removed the
support of loading multiple themes per request. It was initially added
to support user selectable theme components but the feature was never
completed and abandoned because it wasn't a feature that we thought was
worth building.
* FIX: Allow SVG uploads if dimensions are a fraction of a unit
`UploadCreator` counts the number of pixels in a file to determine if it is valid. `pixels` is calculated by multiplying the width and height of the image, as determined by FastImage.
SVG files can have their width/height expressed in a variety of different units of measurement. For example, ‘px’, ‘in’, ‘cm’, ‘mm’, ‘pt’, ‘pc’, etc are all valid within SVG files. If an image has a width of `0.5in`, FastImage may interpret this as being a width of `0`, meaning it will report the `size` as being `0`.
However, we don’t need to concern ourselves with the number of ‘pixels’ in a SVG files, as that is irrelevant for this file format, so we can skip over the check for `pixels == 0` when processing this file type.
* DEV: Speed up getting SVG dimensions
The `-ping` flag prevents the entire image from being rasterized before a result is returned. See:
https://imagemagick.org/script/command-line-options.php#ping
Leaking state and non-obvious order (before :all runs *before* RailsHelper.test_setup) are not worth it.
A replacement PR for #13370. Fixes some flaky specs, e.g.
```
bin/rspec './spec/components/freedom_patches/translate_accelerator_spec.rb[1:3]' './spec/jobs/clean_up_user_export_topics_spec.rb[1:1]' --tag ~type:multisite --seed 35994
```
Also included:
* DEV: No need for locale reset (we do it anyway in rails_helper in `test_setup`)
Since we use the event to perform additional validations on the file, we should check if it added any errors to the upload before saving it. This change makes the UploadCreator more consistent since we no longer have to rely on exceptions.
If force_https is enabled, all resources (including markdown previews and so on) will be accessed using HTTPS.
If for any reason you attempt to link to content that is not reachable over HTTPS, that content may appear broken.
* FIX: Quoting Oneboxed content should exclude formatting
When a post is quoted that includes Oneboxed content, we should not include the formatting generated by the Onebox. Rather, we should attempt to collapse the link referenced by the Onebox to a single line text link.
* DEV: fix tests
Over the years we have found that a few communities never discovered tags.
Instead of having them default off we now have them default on, ensuring
that everyone finds out about them.
Co-authored-by: Dan Ungureanu <dan@ungureanu.me>
IMDb movie links were being rendered as posters. This was because
IMDb was sending `og:type` as `image` randomly in some cases. To
fix this we'll now default all IMDb links as article type. This will
ensure that the IMDb onebox link includes all the information instead
of showing just a poster without any context.
* FIX: return an empty result if response from Amazon is missing attributes
Check we have the basic attributes required to construct a Onebox for Amazon.
This is an attempt to handle scenarios where we receive a valid 200-status response from an Amazon request that does not include the data we’re expecting.
* Update lib/onebox/engine/amazon_onebox.rb
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Some specs failed when `LOAD_PLUGINS=1` was set while migrating the test DB and the narrative-bot plugin disabled the `send_welcome_message` site setting.
This overhauls the user interface for the group email settings management, aiming to make it a lot easier to test the settings entered and confirm they are correct before proceeding. We do this by forcing the user to test the settings before they can be saved to the database. It also includes some quality of life improvements around setting up IMAP and SMTP for our first supported provider, GMail. This PR does not remove the old group email config, that will come in a subsequent PR. This is related to https://meta.discourse.org/t/imap-support-for-group-inboxes/160588 so read that if you would like more backstory.
### UI
Both site settings of `enable_imap` and `enable_smtp` must be true to test this. You must enable SMTP first to enable IMAP.
You can prefill the SMTP settings with GMail configuration. To proceed with saving these settings you must test them, which is handled by the EmailSettingsValidator.
If there is an issue with the configuration or credentials a meaningful error message should be shown.
IMAP settings must also be validated when IMAP is enabled, before saving.
When saving IMAP, we fetch the mailboxes for that account and populate them. This mailbox must be selected and saved for IMAP to work (the feature acts as though it is disabled until the mailbox is selected and saved):
### Database & Backend
This adds several columns to the Groups table. The purpose of this change is to make it much more explicit that SMTP/IMAP is enabled for a group, rather than relying on settings not being null. Also included is an UPDATE query to backfill these columns. These columns are automatically filled when updating the group.
For GMail, we now filter the mailboxes returned. This is so users cannot use a mailbox like Sent or Trash for syncing, which would generally be disastrous.
There is a new group endpoint for testing email settings. This may be useful in the future for other places in our UI, at which point it can be extracted to a more generic endpoint or module to be included.
* FIX: Improve GitHub folder regexp in Onebox
It used to match any GitHub URL that was not matched by the other GitHub
Oneboxes and it did not do a good job at handling those. With this
change, the generic Onebox will handle the remaining URLs.
* FEATURE: Add Onebox for GitHub Actions
* FEATURE: Add Onebox for PR check runs
* FIX: Remove image from GitHub folder Oneboxes
It is a generic, auto-generated image which does not provide any value.
* DEV: Add tests
* FIX: Strip HTML comments from PR body
* Move onebox gem in core library
* Update template file path
* Remove warning for onebox gem caching
* Remove onebox version file
* Remove onebox gem
* Add sanitize gem
* Require onebox library in lazy-yt plugin
* Remove onebox web specific code
This code was used in standalone onebox Sinatra application
* Merge Discourse specific AllowlistedGenericOnebox engine in core
* Fix onebox engine filenames to match class name casing
* Move onebox specs from gem into core
* DEV: Rename `response` helper to `onebox_response`
Fixes a naming collision.
* Require rails_helper
* Don't use `before/after(:all)`
* Whitespace
* Remove fakeweb
* Remove poor unit tests
* DEV: Re-add fakeweb, plugins are using it
* Move onebox helpers
* Stub Instagram API
* FIX: Follow additional redirect status codes (#476)
Don’t throw errors if we encounter 303, 307 or 308 HTTP status codes in responses
* Remove an empty file
* DEV: Update the license file
Using the copy from https://choosealicense.com/licenses/gpl-2.0/#
Hopefully this will enable GitHub to show the license UI?
* DEV: Update embedded copyrights
* DEV: Add Onebox copyright notice
* DEV: Add MIT license, convert COPYRIGHT.txt to md
* DEV: Remove an incorrect copyright claim
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: jbrw <jamie@goatforce5.org>
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
When the admin creates a new custom field they can specify if that field should be searchable or not.
That setting is taken into consideration for quick search results.
This commit allows themes and theme components to have QUnit tests. To add tests to your theme/component, create a top-level directory in your theme and name it `test`, and Discourse will save all the files in that directory (and its sub-directories) as "tests files" in the database. While tests files/directories are not required to be organized in a specific way, we recommend that you follow Discourse core's tests [structure](https://github.com/discourse/discourse/tree/master/app/assets/javascripts/discourse/tests).
Writing theme tests should be identical to writing plugins or core tests; all the `import` statements and APIs that you see in core (or plugins) to define/setup tests should just work in themes.
You do need a working Discourse install to run theme tests, and you have 2 ways to run theme tests:
* In the browser at the `/qunit` route. `/qunit` will run tests of all active themes/components as well as core and plugins. The `/qunit` now accepts a `theme_name` or `theme_url` params that you can use to run tests of a specific theme/component like so: `/qunit?theme_name=<your_theme_name>`.
* In the command line using the `themes:qunit` rake task. This task is meant to run tests of a single theme/component, so you need to provide it with a theme name or URL like so: `bundle exec rake themes:qunit[name=<theme_name>]` or `bundle exec rake themes:qunit[url=<theme_url>]`.
There are some refactors to how Discourse processes JavaScript that comes with themes/components, and these refactors may break your JS customizations; see https://meta.discourse.org/t/upcoming-core-changes-that-may-break-some-themes-components-april-12/186252?u=osama for details on how you can check if your themes/components are affected and what you need to do to fix them.
This commit also improves theme error handling in Discourse. We will now be able to catch errors that occur when theme initializers are run and prevent them from breaking the site and other themes/components.
* DEV: upgrade mini_sql
Even though we are not planning on using this quite yet, mini_sql now supports
prepared statements.
Would like this upgrade merged so we can do some benchmarking.
Note, this will not work with pg_bouncer, but sites that are not using it
may benefit from the feature.
* implement multisite friendly prepared statements
Users can now pin bookmarks from their bookmark list. This will anchor the bookmark to the top of the list, and show a pin icon next to it. This also applies in the nav bookmarks panel. If there are multiple pinned bookmarks they sort by last updated order.
* FIX: Be able to handle long file extensions
Some applications have really long file extensions, but if we truncate
them weird behavior ensues.
This commit changes the file extension size from 10 characters to 255
characters instead.
See:
https://meta.discourse.org/t/182824
* Keep truncation at 10, but allow uppercase and dashes
SVG files can have dimensions expressed in inches, centimeters, etc., which may lead to the dimensions being misinterpreted (e.g. “8in” ends up as 8 pixels).
If the file type is `svg`, ask ImageMagick to work out what size the SVG file should be rendered on screen.
NOTE: The `pencil.svg` file was obtained from https://freesvg.org/1534028868, which has placed the file in to the public domain.
`convert_to_jpeg!` is only called if `convert_png_to_jpeg?` and/or `should_alter_quality?` is true.
`convert_png_to_jpeg?` can be disabled by setting `SiteSetting.png_to_jpg_quality` to 100.
However, `should_alter_quality?` could be true if `SiteSetting.recompress_original_jpg_quality` was lower than the quality of the uploaded file, regardless of file type.
This commit changes `should_alter_quality?` so that uploaded png files will use the `SiteSetting.png_to_jpg_quality` value, rather than the `SiteSetting.recompress_original_jpg_quality` value.
You can use `discourse restore --location=local FILENAME` if you want to restore a backup that is stored locally even though the `backup_location` has the value `s3`.
The 'Discourse SSO' protocol is being rebranded to DiscourseConnect. This should help to reduce confusion when 'SSO' is used in the generic sense.
This commit aims to:
- Rename `sso_` site settings. DiscourseConnect specific ones are prefixed `discourse_connect_`. Generic settings are prefixed `auth_`
- Add (server-side-only) backwards compatibility for the old setting names, with deprecation notices
- Copy `site_settings` database records to the new names
- Rename relevant translation keys
- Update relevant translations
This commit does **not** aim to:
- Rename any Ruby classes or methods. This might be done in a future commit
- Change any URLs. This would break existing integrations
- Make any changes to the protocol. This would break existing integrations
- Change any functionality. Further normalization across DiscourseConnect and other auth methods will be done separately
The risks are:
- There is no backwards compatibility for site settings on the client-side. Accessing auth-related site settings in Javascript is fairly rare, and an error on the client side would not be security-critical.
- If a plugin is monkey-patching parts of the auth process, changes to locale keys could cause broken error messages. This should also be unlikely. The old site setting names remain functional, so security-related overrides will remain working.
A follow-up commit will be made with a post-deploy migration to delete the old `site_settings` rows.
This PR adds security_last_changed_at and security_last_changed_reason to uploads. This has been done to make it easier to track down why an upload's secure column has changed and when. This necessitated a refactor of the UploadSecurity class to provide reasons why the upload security would have changed.
As well as this, a source is now provided from the location which called for the upload's security status to be updated, as there are several (e.g. post creator, topic security updater, rake tasks, manual change).
A simplified version of the logic used in the function before my fix is as follows:
```ruby
result = []
things = [0,1,2,3]
max_values = 2
every = (things.size.to_f / max_values).ceil
things.each_with_index do |t, index|
  next unless (t % every) === 0
  result << t
end
p result # [0, 2]
# 3 doesn’t get included
```
The problem is that if you get unlucky two times you won't get last tuple(s) and might get a very erroneous date.
Double unlucky:
- the last tuple's index % the computed every !== 0, so you don't get the last tuple
- the last tuple is related to a post with a very different date than the previous tuples (one year difference in our case)
It was a problem because during this operation only the first frame
is kept. This commit removes the alternative solution to check if a GIF
image is animated.
Previously it matched the behavior of standard ActiveRecord after_commit callbacks. They do not work well within `joinable: false` nested transactions. Now `DB.after_commit` callbacks will only be run when the outermost transaction has been committed.
Tests always run inside transactions, so this also introduces some logic to run callbacks once the test-wrapping transaction is reached.
Animated emojis were converted to static images. This commit moves the
responsibility onto site admins to optimize their animated emojis before
uploading them (gifsicle is no longer used).
Plugin client.en.yml and server.en.yml files can now be named client/server-(1-100).en.yml. 1 is the lowest priority, and 100 is the highest priority. This allows plugins to set their priority higher than other plugins, so that they can override each other's translations.
Per Google, sites are encouraged to upgrade from Universal Analytics v3 `analytics.js` to v4 `gtag.js` for Google Analytics tracking. We're giving admins the option to stay on the v3 API or migrate to v4. Admins can change the implementation they're using via the `ga_version` site setting. Eventually Google will deprecate v3, but our implementation gives admins the choice on what to use for now.
We chose this implementation to make the change less error prone, as many site admins are using custom events via the v3 UA API. With the site setting defaulted to `v3_analytics`, site analytics won't break until the admin is ready to make the migration.
Additionally, in the v4 implementation, we do not enable automatic pageview tracking (on by default in the v4 API). Instead we rely on Discourse's page change API to report pageviews on transition to avoid double-tracking.
There are issues around displaying images on published pages when secure media is enabled. This PR temporarily makes it appear as if published pages are enabled if secure media is also enabled.
* FEATURE - Add SiteSettings to control JPEG image quality
`recompress_original_jpg_quality` - the maximum quality of a newly
uploaded file.
`image_preview_jpg_quality` - the maximum quality of OptimizedImages
It pauses Sidekiq, clears Redis (namespaced to the current site), clears Sidekiq jobs for the current site, restores the database and unpauses Sidekiq. Previously it stayed paused until the end of the restore.
Redis is cleared because we don't want any old data lying around (e.g. old Sidekiq jobs). Most data in Redis is prefixed with the name of the multisite, but Sidekiq jobs in a multisite are all stored in the same keys. So, deleting those jobs requires a little bit more logic.
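A minimal sketch of that extra logic, assuming each job's first argument carries the site name (as Discourse jobs do via `current_site_id`):
```ruby
require "sidekiq/api"

db = RailsMultisite::ConnectionManagement.current_db

Sidekiq::Queue.all.each do |queue|
  queue.each do |job|
    args = job.args.first
    job.delete if args.is_a?(Hash) && args["current_site_id"] == db
  end
end
```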
Per Google, sites are encouraged to upgrade from `analytics.js` to `gtag.js` for Google Analytics tracking. This commit updates core Discourse to use the new `gtag.js` API Google is asking sites to use. This API has feature parity with `analytics.js` but does not use trackers.
* FIX: second factor cannot be enabled if SSO is enabled
If the `enable_sso` setting is enabled then an admin should not be able to
enable the `enforce_second_factor` setting, as that would lock users out.
Co-authored-by: Robin Ward <robin.ward@gmail.com>
UploadRecovery only worked on missing Upload records. Now it also works with existing ones that have an invalid_etag status.
The specs (the first ones that test the S3 path) are a bit of a stub-a-palooza, but that's how much this class interacts with the outside world. 🤷‍♂️
Extracted commonly used spec helpers into spec/support/uploads_helpers.rb, removed unused stubs and let definitions. Makes it easier to write new S3-related specs without copy and pasting setup steps from other specs.
With secure media and the UploadSecurity class, we need a nice way for plugins to register custom upload types that should be considered public and never secure.
After restoring a backup it takes up to 48 hours for uploads stored on S3 to appear in the S3 inventory. This change prevents alerts about missing uploads by preventing the EnsureS3UploadsExistence job from running in the first 48 hours after a restore. During the restore it deletes the count of missing uploads from the PluginStore, so that an alert isn't triggered by an old number.
This PR ensures that new bookmarks cannot be created for deleted posts and topics, and also makes sure that if a bookmark was created and the topic then deleted, the show topic page does not error from trying to retrieve the bookmark's `reminder_at`.
Originally disabled in 0c0192e. Upload specs now use separate paths for each spec worker.
Fixes an issue in UploadRecovery#recover_from_local – it didn't take into account the testing infix (e.g. test_0) in the uploads/tombstone paths.
This also ensures that restoring a backup works when it was created with the wrong upload paths in the time between ab4c0a4970 (shortly after v2.6.0.beta1) and this fix.
Adds functionality to reflect topic deletes in Discourse to the IMAP inbox (Gmail only for now) and to reflect Gmail deletes in Discourse.
Adding lots of tests, various refactors and code improvements.
When a Discourse topic is destroyed in PostDestroyer, mark the topic's incoming email as imap_sync: true, and do the opposite when the post is recovered.
* Fixed an issue I introduced in the last PR where I was just archiving everything regardless of whether it was actually archived in Discourse 🤦
* Refactor group list_mailboxes IMAP code to use providers, add specs, and add provider code to get the correct provider
Adds an imap_group_id column to IncomingEmail to deal with an issue where we were trying to update emails in the mailbox, calling IncomingEmail.where(imap_sync: true). However UID and UIDVALIDITY could be the same across accounts. So if group A used IMAP details for Gmail account A, and group B used IMAP details for Gmail account B, and both tried to sync changes to an email with a UID of 3 (e.g. changing Labels), one account could affect the other. This even applied to archiving!
Also in this PR:
* Fix error occurring if we do a uid_fetch and no emails are returned
* Allow for creating labels within the target mailbox (previously we would not do this, only use existing labels)
* Improve consistency for log messages
* Add specs for generic IMAP provider (Gmail specs still to come)
* Add custom archiving support for Gmail
* Only use Message-ID for uniqueness of IncomingEmail if it was generated by us
* Various refactors and improvements
For the following conditions, the TopicUser.bookmarked column was not updated correctly:
* When a bookmark was auto-deleted because the reminder was sent
* When a bookmark was auto-deleted because the owner of the bookmark replied to the topic
This adds another migration to fix the out-of-sync column and also some refactors to BookmarkManager to allow for more of these delete cases. BookmarkManager is used instead of directly destroying the bookmark in PostCreator and BookmarkReminderNotificationHandler.
This adds an option to "delete on owner reply" to bookmarks. If you select this option in the modal, then reply to the topic the bookmark is in, the bookmark will be deleted on reply.
This PR also changes the checkboxes for these additional bookmark options to an Integer column in the DB with a combobox to select the option you want.
The use cases are:
* Sometimes I bookmark a topic to read later. In that case we definitely don't need to keep the bookmark after I have replied to it.
* Sometimes I read a topic on mobile and would prefer to reply on PC later, or I may have to do some research before replying, so I bookmark it to reply later.
* Remove unneeded bookmark name index.
* Change bookmark search query to use post_search_data. This allows searching on topic title and post content
* Tweak the style/layout of the bookmark list so the search looks better and the whole page fits better on mobile.
We have a couple of examples of enormous amounts of text being entered in the name column of bookmarks. This is not desirable...it is just meant to be a short note / reminder of why you bookmarked this.
This PR caps the column at 100 characters and truncates existing names in the database to 100 characters.
The `EXECUTE FUNCTION` syntax for `CREATE TRIGGER` statements was introduced in PostgreSQL 11. We need to replace `EXECUTE FUNCTION` with `EXECUTE PROCEDURE` in order to be able to restore backups created with PG12 on PG10.
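A minimal sketch of the rewrite applied to the dump (assumed shape, not the exact restore code):
```ruby
def self.downgrade_trigger_syntax(sql_line)
  # PG12 dumps use the PG11+ syntax; PG10 only understands EXECUTE PROCEDURE.
  sql_line.gsub("EXECUTE FUNCTION", "EXECUTE PROCEDURE")
end
```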
Fixed bugs, added specs, extracted the upload downsizing code to a class, added support for non-S3 setups, changed it so that images aren't downloaded twice.
This code has been tested on production and successfully resized ~180k uploads.
Includes:
* DEV: Extract upload downsizing logic
* DEV: Add support for non-S3 uploads
* DEV: Process only images uploaded by users
* FIX: Incorrect usage of `count` and `exist?` typo
* DEV: Spec S3 image downsizing
* DEV: Avoid downloading images twice
* DEV: Update filesizes earlier in the process
* DEV: Return false on invalid upload
* FIX: Download images that currently above the limit (If the image size limit is decreased, then there was no way to resize those images that now fall outside the allowed size range)
* Update script/downsize_uploads.rb (Co-authored-by: Régis Hanol <regis@hanol.fr>)
* DEV: new S3 backup layout
Currently, with $S3_BACKUP_BUCKET of "bucket/backups", multisite backups
end up in "bucket/backups/backups/dbname/" and single-site will be in
"bucket/backups/".
Both _should_ be in "bucket/backups/dbname/"
- remove MULTISITE_PREFIX
- always include dbname
- add a method to move backups to the new prefix
- add a job to call the method
* SPEC: add tests for `VacateLegacyPrefixBackups` onceoff job.
Co-authored-by: Vinoth Kannan <vinothkannan@vinkas.com>
This reverts commit 20780a1eee.
* SECURITY: re-adds accidentally reverted commit:
03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
instead of the 03d26cd6 parent (which contains security fixes)
PG already handles English stop words. The list in cppjieba is
bigger than the list PG uses, which causes confusion because
words such as "volume" are stripped by the cppjieba stop word list.
We will follow up with another commit to apply the Chinese
stop words, but for now, to eliminate the confusion, we skip
applying the stop word list when the PG dictionary is English.
We now show an options gear icon next to the bookmark name.
When expanded we show the "delete bookmark when reminder sent" option. The value of this checkbox is saved in local storage for the user.
If this is ticked, when a reminder is sent for the bookmark the bookmark itself is deleted. This is so people can use the reminder functionality by itself.
Also remove the blue alert reminder section from the "Edit Bookmark" modal as it just added clutter; the user can already see they had a reminder set.
Adds a default false boolean column `delete_when_reminder_sent` to bookmarks.
Locale files get precompiled after deployment and they contained translations from the `default_locale`. That's especially bad in multisites, because the initial `default_locale` is `en_US`. Sites where the `default_locale` isn't `en_US` could see missing translations. The same thing could happen when users are allowed to choose a different locale.
This change simplifies the logic by not using the `default_locale` in the locale chain. It always falls back to `en` in case of missing translations.
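A sketch of the resulting fallback chain (assumed shape, not the exact code):
```ruby
# The chain goes from the selected locale to its base language to `en`,
# and never through the site's default_locale.
def fallback_chain(locale)
  [locale, locale.to_s.split("_").first.to_sym, :en].uniq
end

fallback_chain(:de_AT) # => [:de_AT, :de, :en]
```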
We were sharing `Discourse` both as an application object and a
namespace which complicated things for Ember CLI. This patch
moves raw templates into `__DISCOURSE_RAW_TEMPLATES` and adds
a couple helper methods to create/remove them.
This introduces new APIs for obtaining optimized thumbnails for topics. There are a few building blocks required for this:
- Introduces new `image_upload_id` columns on the `posts` and `topics` table. This replaces the old `image_url` column, which means that thumbnails are now restricted to uploads. Hotlinked thumbnails are no longer possible. In normal use (with pull_hotlinked_images enabled), this has no noticeable impact
- A migration attempts to match existing urls to upload records. If a match cannot be found then the posts will be queued for rebake
- Optimized thumbnails are generated during post_process_cooked. If thumbnails are missing when serializing a topic list, then a sidekiq job is queued
- Topic lists and topics now include a `thumbnails` key, which includes all the available images:
```
"thumbnails": [
{
"max_width": null,
"max_height": null,
"url": "//example.com/original-image.png",
"width": 1380,
"height": 1840
},
{
"max_width": 1024,
"max_height": 1024,
"url": "//example.com/optimized-image.png",
"width": 768,
"height": 1024
}
]
```
- Themes can request additional thumbnail sizes by using a modifier in their `about.json` file:
```
"modifiers": {
"topic_thumbnail_sizes": [
[200, 200],
[800, 800]
],
...
```
Remember that these are generated asynchronously, so your theme should include logic to fall back to other available thumbnails if your requested size has not yet been generated
- Two new raw plugin outlets are introduced, to improve the customisability of the topic list. `topic-list-before-columns` and `topic-list-before-link`
* When hovering over the bookmark icon for a post, show the name of the bookmark at the end of the tooltip _if_ it has been set.
* Order bookmarks by `updated_at DESC` in the user list and show that instead of created at.
Make sure the topic_user.bookmarked column is set correctly when a user bookmarks/unbookmarks any post in a topic. For example, if you bookmarked a post in the topic that was not the OP, the bookmark icon in the topic list would not be shown. Same when deleting a bookmark for the last bookmarked post in a topic: the bookmark icon in the topic list would not be removed.
Previously this was only set to true when bookmarking the OP/topic, which was not correct -- we want to show the icon on the topic list if any post is bookmarked.
Also set it to false when unbookmarking the last bookmarked post in the topic.
Also in this PR is a migration to correct any out of sync topic_user.bookmarked columns, based on the new logic.
The main thrust of this PR is to take all the conditional checks based on the `enable_bookmarks_with_reminders` away and only keep the code from the `true` path, making bookmarks with reminders the core bookmarks feature. There is also a migration to create `Bookmark` records out of `PostAction` bookmarks for a site.
### Summary
* Remove logic based on whether enable_bookmarks_with_reminders is true. This site setting is now obsolete, the old bookmark functionality is being removed. Retain the setting and set the value to `true` in a migration.
* Use the code from the rake task to create a database migration that creates bookmarks from post actions.
* Change the bookmark report to read from the new table.
* Get rid of old endpoints for bookmarks
* Link to the new bookmarks list from the user summary page
Otherwise we were trying to update the reminder type with a string, which often evaluated to 0 (At Desktop) and caused reminders to come through early.
There is now an explicit "Delete Bookmark" button in the edit modal. A confirmation is shown before deleting.
Along with this, when the bookmarked post icon is clicked the modal is now shown instead of just deleting the bookmark. Also, the "Delete Bookmark" button from the user bookmark list now confirms the action.
Add a `d d` shortcut in the modal to delete the bookmark.
Users can now edit the bookmark name and reminder time from their list of bookmarks.
We use "Custom" for the date and time in the modal because if the user set a reminder for "tomorrow" then edit the reminder "tomorrow", the definition of what "tomorrow" is has changed.
* Count user summary bookmarks from new Bookmark table if bookmarks with reminders enabled
* Update topic user bookmarked column when new topic bookmark changed
* Make in:bookmarks search work with new bookmarks
* Fix batch inserts for bookmark rake task (and thus migration). We were only inserting one bookmark at a time, completely defeating the purpose of batching!
This adds rubocop-rspec, and enables some cops that were either already passing or are passing now, after fixing them in this commit.
Some new cops are disabled for now, with annotation: "TODO" or "To be decided". Those either need to be discussed first, or require manual changes, or the number of found and fixed offenses is too large to bundle them up in a single PR.
Includes:
* DEV: Update rubocop's `TargetRubyVersion` to 2.6
* DEV: Enable RSpec/VoidExpect
* DEV: Enable RSpec/SharedContext
* DEV: Enable RSpec/EmptyExampleGroup (Removed an obsolete empty spec file)
* DEV: Enable RSpec/ItBehavesLike
* DEV: Remove RSpec/ScatteredLet (It's too strict, as it doesn't recognize fab! as a let-like)
* DEV: Remove RSpec/MultipleExpectations
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip etc. etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.
This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
- Define the CSP based on the requested domain / scheme (respecting force_https)
- Update EnforceHostname middleware to allow secondary domains, add specs
- Add URL scheme to anon cache key so that CSP headers are cached correctly
* Preload custom fields for BookmarkQuery and add preload callback. Copy the TopicQuery preload methodology to allow plugins to preload data for the BookmarkQuery. This fixes an N+1 for assigned plugin custom fields
* Include topic tags in the initial query to avoid a tags N+1
Related: discourse/discourse-assign#63
Rails calls I18n.translate during initialization and by default translation overrides are used. Database migrations would fail if the system tried to migrate from an old version that didn't have the `translation_overrides` table with all its columns yet.
This makes restoring really old backups work again. Running `DISABLE_TRANSLATION_OVERRIDES=1 rake db:migrate` will allow you to upgrade such an old database as well.
* This PR changes the user activity bookmarks stream to show a new list of bookmarks based on the Bookmark record.
* If a bookmark has a name or reminder it will be shown as metadata above the topic title in the list
* The categories, tags, topic status, and assigned show for each bookmarked post based on the post topic
* Bookmarks can be deleted from the [...] menu in the list
* As well as this, the list of bookmarks from the quick access panel is now drawn from the Bookmarks table for a user:
* All of this new functionality is gated behind the enable_bookmarks_with_reminders site setting
The /bookmarks/ route now redirects directly to /user/:username/activity/bookmarks-with-reminders
* The structure of the Ember code for the list of bookmarks is not ideal; this is an MVP PR so we can start testing this functionality internally. There is a little repeated code from topic.js.es6. There is an ongoing effort to start standardizing these lists that will be addressed in future PRs.
* This PR also fixes issues with feature detection for at_desktop bookmark reminders
A custom date and time can now be selected for a bookmark reminder
The reminder will not happen at the exact time but rather at the next 5 minute interval of the bookmark reminder schedule.
This PR also fixes issues with bulk deleting topic bookmarks.
* This PR implements the scheduling and notification system for bookmark reminders. Every 5 minutes a scheduled job checks for any reminders that need to be sent before now, limited to **300** reminders at a time (see the sketch after this list). Any leftover reminders will be sent in the next run. This is to avoid having to deal with fickle Sidekiq and reminders in the far-flung future, which would necessitate having a background job anyway to clean up any missing `enqueue_at` reminders.
* If a reminder is sent its `reminder_at` time is cleared and the `reminder_last_sent_at` time is filled in. Notifications are only user-level notifications for now.
* All JavaScript and frontend code related to displaying the bookmark reminder notification is contained here. The reminder functionality is now re-enabled in the bookmark modal as well.
* This PR also implements the "Remind me next time I am at my desktop" bookmark reminder functionality. When the user is on a mobile device they are able to select this option. When they choose this option we set a key in Redis saying they have a pending at desktop reminder. The next time they change devices we check if the new device is desktop, and if it is we send reminders using a DistributedMutex. There is also a job to ensure consistency of these reminders in Redis (in case Redis drops the ball) and the at desktop reminders expire after 20 days.
* Also in this PR is a fix to delete all Bookmarks for a user via `UserDestroyer`
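A sketch of the 5-minute polling job described in the first item (class and method names assumed, not the exact implementation):
```ruby
class BookmarkReminderNotifications < ::Jobs::Scheduled
  every 5.minutes

  MAX_REMINDERS_PER_RUN = 300

  def execute(args)
    Bookmark
      .where("reminder_at IS NOT NULL AND reminder_at <= ?", Time.zone.now)
      .order("reminder_at ASC")
      .limit(MAX_REMINDERS_PER_RUN)
      .each do |bookmark|
        # sends the notification, clears reminder_at, fills reminder_last_sent_at
        BookmarkReminderNotificationHandler.send_notification(bookmark)
      end
  end
end
```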
* Remove some `.es6` from comments where it does not matter
* Use a post processor for transpilation
This will allow us to eventually use the directory structure to
transpile rather than the extension.
* FIX: Some errors and clean up in confirm-new-email
It would throw an error if the webauthn element wasn't present.
Also I changed things so that no-module is not explicitly
referenced.
* Remove `no-module`
Instead we allow a magic comment: `// discourse-skip-module` to prevent
the asset pipeline from creating a module.
* DEV: Enable babel transpilation based on directory
If it's in `app/assets/javascripts/discourse` it will be transpiled
even without the `.es6` extension.
* REFACTOR: Remove Tilt/ES6ModuleTranspiler
There are three modifiers:
- serialize_topic_excerpts (boolean)
- csp_extensions (array of strings)
- svg_icons (array of strings)
When multiple themes are active, the values will be combined. The combination method varies per modifier: the CSP/SVG arrays are concatenated, while serialize_topic_excerpts uses `Enumerable#any?`.
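A sketch of those combination rules (`theme.modifier` here is an assumed accessor, not the exact implementation):
```ruby
def combined_modifier(themes, modifier_name)
  values = themes.map { |theme| theme.modifier(modifier_name) }.compact
  case modifier_name
  when :csp_extensions, :svg_icons
    values.flatten.uniq # arrays from all active themes are combined
  when :serialize_topic_excerpts
    values.any? # true if any active theme requests it
  end
end
```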
* Add uploads:sync_s3_acls rake task to ensure the ACLs in S3 are the correct (public-read or private) setting based on upload security
* Improved uploads:disable_secure_media to be more efficient and provide better messages to the user.
* Rename uploads:ensure_correct_acl task to uploads:secure_upload_analyse_and_update as it does more than check the ACL
* Many improvements to uploads:secure_upload_analyse_and_update
* Make sure that upload.access_control_post is unscoped so deleted posts are still fetched, because they still affect the security of the upload.
* Add escape hatch for capture_stdout in the form of RAILS_ENABLE_TEST_STDOUT. If provided the capture_stdout code will be ignored, so you can see the output if you need.
* Also fixes an issue where if a webp file was a downloaded hotlinked
image and then made secure and sent in an email, it was not being
redacted because webp was not a supported media format in
FileHelper
* Webp was originally removed as an image format in
https://github.com/discourse/discourse/pull/6377
and there was a spec to make sure a .bin webp
file did not get renamed from its type to webp.
However, we want to support webp images now to make
sure they are properly redacted when secure media is
on, so the example in that spec was changed to use tiff,
another banned format, instead
* Attachments (non-media files) were being marked as secure if just
SiteSetting.prevent_anons_from_downloading_files was enabled. This
was not correct, as nothing should be marked as actually "secure" in
the DB without the secure media site setting enabled
* Also add a proper standalone spec file for the upload security class
* DEV: Use Ember 3.12.2
* Add Ember version to ThemeField's DEPENDENT_CONSTANTS
* DEV: Use `id` instead of `elementId` (See: https://github.com/emberjs/ember.js/issues/18147)
* FIX: Don't leak event listeners (bug introduced in 999e2ff)
Add TopicUploadSecurityManager to handle post moves. When a post moves around or a topic changes between categories and public/private message status the uploads connected to posts in the topic need to have their secure status updated, depending on the security context the topic now lives in.
Custom emoji, profile background, and card background were being set to secure, which we do not want as they are always in a public context and result in a 403 error from the ACL if linked directly.
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading if `SiteSetting.secure_media` is enabled, we do not check to see if the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value which is the same length as `Upload::SHA1_LENGTH`. The `original_sha1` column is filled with the _real_ sha1 digest of the file.
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we set secure status on upload.
### Viewing Secure Media
* The secure-media-upload URL will take the post that the upload is attached to into account via `Guardian.can_see?` for access permissions
* If there is no `access_control_post` then we just deliver the media. This should be a rare occurrence and shouldn't cause issues as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
Some specs use psql to test database restores, and dropping the table after the test needs to happen outside of rspec because of transactions. The previous attempt led to some changes being stored in the test database.
Added a fix to gracefully error with a Webauthn::SecurityKeyError if somehow a user provides an unknown COSE algorithm when logging in with a security key.
If `COSE::Algorithm.find` returns nil we now fail gracefully and log the algorithm used along with the user ID and the security key params for debugging, as this will help us find other common algorithms to implement for webauthn
- Refactor source_url to avoid using eval in development
- Precompile handlebars in development
- Include template compilers when running qunit
- Remove unsafe-eval in development CSP
- Include unsafe-eval only for qunit routes in development
* Add timezone to user_options table
* Also migrate existing timezone values from UserCustomField,
which is where the discourse-calendar plugin is storing them
* Allow user to change their core timezone from Profile
* Auto guess & set timezone on login & invite accept & signup
* Serialize user_options.timezone for group members. This is so discourse-group-timezones can access the core user timezone, as it is being removed in discourse-calendar.
* Annotate user_option with timezone
* Validate timezone values
* Abort CensoredWordsValidator early if censored_words_regexp is nil. Sometimes `censored_words_regexp` can end up nil, erroring the validator. This handles the nil condition and also adds a spec for the validator
The secure media functionality relied on `SiteSetting.enable_s3_uploads?` which, as we found in dev, did not take into account global S3 settings via `GlobalSetting.use_s3?`. We now use `SiteSetting.Upload.enable_s3_uploads` instead to be more consistent.
Also, we now validate `enable_s3_uploads` changes, because if `GlobalSetting.use_s3?` is true users should NOT be enabling S3 uploads manually.
This PR introduces a new secure media setting. When enabled, it prevents unauthorized access to media uploads (files of type image, video and audio). When the `login_required` setting is enabled, all media uploads will be protected from unauthorized (anonymous) access. When `login_required` is disabled, only media in private messages will be protected from unauthorized access.
A few notes:
- the `prevent_anons_from_downloading_files` setting no longer applies to audio and video uploads
- the `secure_media` setting can only be enabled if S3 uploads are already enabled and configured
- upload records have a new column, `secure`, which is a boolean `true/false` of the upload's secure status
- when creating a public post with an upload that has already been uploaded and is marked as secure, the post creator will raise an error
- when enabling or disabling the setting on a site with existing uploads, the rake task `uploads:ensure_correct_acl` should be used to update all uploads' secure status and their ACL on S3
These specs are inherently fragile when they are run in a concurrent mode
in CI.
One process will create an upload, another will destroy the upload on disk
at the same time. We need a new safe mechanism here.
I was searching for the reason behind the randomly failing jobs_base_spec.rb. The reason was that after restorer_spec, the database was not restored to default.
After restorer_spec, RailsMultisite::ConnectionManagement.all_dbs returns an array of ['default', 'second'].
The base job execution is then evaluated twice:
```
dbs = RailsMultisite::ConnectionManagement.all_dbs
dbs.each do |db|
  execute(opts)
end
```
Commit f69dacf979 introduced a bug to the system: restore works fine for multisite, but stopped working for non-multisite.
The reason was that the `establish_connection` method gained a check for whether the multisite instance is available:
```
def self.instance
  @instance
end

def self.establish_connection(opts)
  @instance.establish_connection(opts) if @instance
end
```
However, the reload method doesn't have that check:
```
def self.reload
  @instance = new(instance.config_filename)
end
```
To solve it, let's ensure we are in a multisite environment before calling reload.
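A sketch of the guard (assumed shape of the fix):
```ruby
# Only reload the multisite config when an instance actually exists.
if RailsMultisite::ConnectionManagement.instance
  RailsMultisite::ConnectionManagement.reload
end
```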
It was possible to add a category to more than one default group, e.g. "default categories muted" and "default categories watching first post".
The bug was caused by category validations inadvertently comparing strings and numbers.
* FEATURE: Adds an extra protection layer when decompressing files.
* Rename exporter/importer to zip importer. Update old locale
* Added a new composite class to decompress a file with multiple strategies
* Set max file size inside a site setting
* Ensure that file is deleted after compression
* Sanitize path and files before compressing/decompressing
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails' "require_dependency" anywhere and instead can just use standard
Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
Adds a second factor authentication method via security keys, using [Web Authentication](https://developer.mozilla.org/en-US/docs/Web/API/Web_Authentication_API).
Allows a user to authenticate a second factor on login, login-via-email, admin-login, and change password routes. Adds registration area within existing user second factor preferences to register multiple security keys. Supports both external (yubikey) and built-in (macOS/android fingerprint readers).
Follow up to [FIX: Empty backup names with unicode site titles][1]
- Use .presence - "It's cleaner"
- Update spec to use System.system_user so it is more readable
[1]: c8661674d4
This is a low severity security fix because it requires a logged in
admin user to update a site setting via the API directly to an invalid
value.
The fix adds validation for the affected site settings, as well as a
secondary fix to prevent injection in the event that bad data somehow
already exists.
* Support private uploads in S3
* Use localStore for local avatars
* Add job to update private upload ACL on S3
* Test multisite paths
* update ACL for private uploads in migrate_to_s3 task
* Introduced fab!, a helper that creates database state for a group of tests
It's almost identical to let_it_be, except:
1. It creates a new object for each test by default,
2. You can disable it using PREFABRICATION=0
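Usage looks like `let!`, but with prefabrication:
```ruby
describe "topics" do
  fab!(:topic) { Fabricate(:topic) }

  it "sees the prefabricated record" do
    expect(topic).to be_persisted
  end
end
```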
This removes all uses of both `send` and `public_send` from consumers of
SiteSetting and instead introduces a `get` helper for dynamic lookup
This leads to much cleaner and safer code long term as we are always explicit
to test that a site setting is really there before sending an arbitrary
string to the class
It also removes a couple of risky stubs from the auth provider test
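For illustration, the difference at call sites (a sketch of intent, not the exact diff):
```ruby
# Before: arbitrary dynamic dispatch, anything callable could slip through.
SiteSetting.public_send(setting_name)

# After: explicit lookup that can verify the setting really exists.
SiteSetting.get(setting_name)
```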
`Upload#url` is likely to change from time to time. When it
does change, we don't want to have to look through multiple tables to
ensure that the URLs are all up to date. Instead, we simply associate
uploads properly to `UserProfile` so that it does not have to replicate
the URLs in the table.
This change both speeds up specs (fewer strings to allocate) and helps catch
cases where methods in Discourse are mutating inputs.
Overall we will be migrating everything to use `# frozen_string_literal: true`;
it will take a while, but this is the first and safest move in this direction.
Previously we would bypass touching `Topic.updated_at` for whispers and post
recovery / deletions.
This meant that certain types of caching could not be done where we rely on
this information for cache accuracy.
For example, if we know we have zero unread topics as of yesterday and a whisper
is made, we need to bump this date so the cache remains accurate.
This is only half of a larger change but provides the groundwork.
Confirmed none of our serializers leak out Topic.updated_at, so this is a safe
spot for this info.
At the moment edits still do not change this but it is not relevant for the
unread cache.
This commit also cleans up some specs to use the new `eq_time` matcher for
millisecond fidelity comparison of times
Previously `freeze_time` would fudge this which is not that clean.
It is not a setting, and only relevant in specs. The new API is:
```
Jobs.run_later! # jobs will be thrown on the queue
Jobs.run_immediately! # jobs will run right away, avoid the queue
```
Previously if you wanted to have jobs execute in test mode, you'd have
to do `SiteSetting.queue_jobs = false`, because the opposite of queue
is to execute.
I found this very confusing, so I created a test helper called
`run_jobs_synchronously!` which is much more clear about what it does.
- Themes can supply translation files in a format like `/locales/{locale}.yml`. These files should be valid YAML, with a single top level key equal to the locale being defined. For now these can only be defined using the `discourse_theme` CLI, importing a `.tar.gz`, or from a GIT repository.
- Fallback is handled on a global level (if the locale is not defined in the theme), as well as on individual keys (if some keys are missing from the selected interface language).
- Administrators can override individual keys on a per-theme basis in the /admin/customize/themes user interface.
- Theme developers should access defined translations using the new theme prefix variables:
JavaScript: `I18n.t(themePrefix("my_translation_key"))`
Handlebars: `{{theme-i18n "my_translation_key"}}` or `{{i18n (theme-prefix "my_translation_key")}}`
- To design for backwards compatibility, theme developers can check for the presence of the `themePrefix` variable in JavaScript
- As part of this, the old `{{themeSetting.setting_name}}` syntax is deprecated in favour of `{{theme-setting "setting_name"}}`
This commit introduces a new helper to enable transactional fixtures
when testing multisite. Previously this would show up as tests that passed the
first time then failed the second time due to stale data being left over.
This feature ensures optimized images are run through pngquant, which results in extreme savings for resized images. Effectively the only impact is that the color palette on small resized images is reduced to 256 colors.
To ensure safety we only apply this optimisation to images smaller than 500k.
This commit also makes a bunch of image specs less fragile.
This validation makes sure that the s3_upload_bucket and the
s3_backup_bucket have different values. The backup bucket is
allowed to be a subfolder of the upload bucket. The other way
around is forbidden because the backup system searches by
prefix and would return all files stored within the backup
bucket and its subfolders.
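For illustration, the combinations this validation is meant to accept or reject:
```
s3_upload_bucket: "bucket"          s3_backup_bucket: "bucket/backups" -> OK (backup is a subfolder)
s3_upload_bucket: "bucket/uploads"  s3_backup_bucket: "bucket"         -> rejected (prefix search would match uploads)
s3_upload_bucket: "bucket"          s3_backup_bucket: "bucket"         -> rejected (same value)
```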
* Dashboard doesn't timeout anymore when Amazon S3 is used for backups
* Storage stats are now a proper report with the same caching rules
* Changing the backup_location, s3_backup_bucket or creating and deleting backups removes the report from the cache
* It shows the number of backups and the backup location
* It shows the used space for the correct backup location instead of always showing used space on local storage
* It shows the date of the last backup as relative date
The wizard searches for:
* a topic with the "is_welcome_topic" custom field
* a topic with the correct slug for the current default locale
* a topic with the correct slug for the English locale
* the oldest globally pinned topic
It gives up if it didn't find any of the above.
* FEATURE: allow plugins and themes to extend the default CSP
For plugins:
```
extend_content_security_policy(
script_src: ['https://domain.com/script.js', 'https://your-cdn.com/'],
style_src: ['https://domain.com/style.css']
)
```
For themes and components:
```
extend_content_security_policy:
type: list
default: "script_src:https://domain.com/|style_src:https://domain.com"
```
* clear CSP base url before each test (we have a test that stubs `Rails.env.development?` to true)
* Only allow extending directives that core includes, for now
* Add missing icons to set
* Revert FA5 revert
This reverts commit 42572ff
* use new SVG syntax in locales
* Noscript page changes (remove login button, center "powered by" footer text)
* Cast wider net for SVG icons in settings
- include any _icon setting for SVG registry (offers better support for plugin settings)
- let themes store multiple pipe-delimited icons in a setting
- also replaces broken onebox image icon with SVG reference in cooked post processor
* interpolate icons in locales
* Fix composer whisper icon alignment
* Add support for stacked icons
* SECURITY: enforce hostname to match discourse hostname
This ensures that the hostname rails uses for various helpers always matches
the Discourse hostname
* load SVG sprite with pre-initializers
* FIX: enable caching on SVG sprites
* PERF: use JSONP for SVG sprites so they are served from CDN
This avoids needing to deal with CORS for loading of the SVG
Note, added the svg- prefix to the filename so we can quickly tell in
dev tools what the file is
* Add missing SVG sprite JSONP script to CSP
* Upgrade to FA 5.5.0
* Add support for all FA4.7 icons
- adds complete frontend and backend for renamed FA4.7 icons
- improves performance of SvgSprite.bundle and SvgSprite.all_icons
* Fix group avatar flair preview
- adds an endpoint at /svg-sprites/search/:keyword
- adds frontend ajax call that pulls icon in avatar flair preview even when it is not in subset
* Remove FA 4.7 font files
This also adjusts the algorithm to expect
- 30% saving for JPEG conversion
AND
- Minimum of 75K bytes saved
The reasoning for the increased saving requirements is that a PNG may have been
uploaded unoptimized; a 30% saving on a PNG is very possible.
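The acceptance rule above, as arithmetic (a sketch; the method name is illustrative):
```ruby
def worth_keeping?(original_bytes, converted_bytes)
  saved = original_bytes - converted_bytes
  saved >= 75_000 && saved >= original_bytes * 0.30
end

worth_keeping?(1_000_000, 650_000) # => true  (350K and 35% saved)
worth_keeping?(200_000, 130_000)   # => false (35% saved, but only 70K)
```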
* Cuts number of queries from 273 to 89
* Add some specs
* For a table with 500 posts, benchmarks locally shows a runtime
reduction from 0.046929135 to 0.032694705.
* FEATURE: Added MaxMindDb to resolve IP information.
* FEATURE: Added browser detection based on user agent.
* FEATURE: Added recently used devices in user preferences.
* DEV: Added acceptance test for recently used devices.
* UX: Do not show 'Show more' button if there aren't more tokens.
* DEV: Fix unit tests.
* DEV: Make changes after code review.
* Add more detailed unit tests.
* Improve logging messages.
* Minor coding style fixes.
* DEV: Use DropdownSelectBoxComponent and run Prettier.
* DEV: Fix unit tests.
In the past, the filename of the original was used as the source
of the file extension when optimizing on upload.
We now use the actual extension calculated from the upload data.