We were changing the user's user_option.bookmark_auto_delete_preference
to whatever they picked in the bookmark modal, and using that as the default
for future bookmarks. However, this led to a lot of confusion:
if you wanted to set it for just one bookmark, you had to remember to
change it back on the next one.
This commit removes that automatic functionality, and instead moves
the bookmark auto delete preference to User Preferences > Interface
in an explicit dropdown.
This commit adds a new notification that gets sent to admins when the site gets new features after an upgrade/deploy. Clicking on the notification takes the admin to the admin dashboard at `/admin` where they can see the new features under the "New Features" section.
Internal topic: t/87166.
Depending on the current state of things, sometimes the homepage style
wouldn't update because we were incorrectly blocking updates to the
`desktop_category_page_style` site setting if the first item in the top
menu was 'categories'.
Added a test case to handle this situation.
See https://meta.discourse.org/t/248354
The guardian is useful for plugins to determine if the callback should
do anything. A common use case is to not do anything in the callback if
the user is anonymous.
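For illustration, a sketch of how a plugin callback might use the guardian it now receives (the registration method and block arguments here are hypothetical, not the exact API):

```ruby
# plugin.rb — hypothetical registration; the guardian checks are the point here
on_some_event_with_guardian do |guardian, payload|
  # Skip the work entirely for anonymous users
  next if guardian.anonymous?

  # Otherwise the plugin can make permission-aware decisions
  do_something(payload) if guardian.is_staff?
end
```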
* FEATURE: Add chat and sidebar toggles to the setup wizard
- Fix CSS alignment
- Add Enable Chat Toggle
- Add Enable Sidebar Toggle
* Check for the chat plugin
* Account for new sidebar step
* update chat and sidebar description
* UI: add checkmark as a visual indicator that it is enabled
* use new navigation_menu site setting for enabling the sidebar
* fix tests
* Add tests
* Update lib/wizard/step_updater.rb
Use HEADER_DROPDOWN instead of LEGACY
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* Fix spec. Use HEADER_DROPDOWN instead of LEGACY
Co-authored-by: Ella <ella.estigoy@gmail.com>
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
The tsquery used for searching is generated using functions from both
Ruby and PostgreSQL (for example, the unaccent function). Depending on the
term used, it could generate an invalid tsquery. For example "can’t"
generated "''can''t''" instead of "''can''''t''".
* FIX: Use Category.secured(guardian) for hashtag datasource
Follow up to comments in #19219, changing the category
hashtag datasource to use Category.secured(guardian) instead
of Site.new(guardian).categories here since the latter does
more work for not much benefit, and the query time is the
same. Also eliminates some Hash -> Model back and forth
busywork. Add some more specs too.
* FIX: Server-side hashtag lookup cooking user loading
When we were using the PrettyText.options.currentUser
and parsing back and forth with JSON for the hashtag
lookups server-side, we had a bug where the user's
secure categories were not loaded since we never actually
loaded a User model from the database, only parsed it
from JSON.
This commit fixes the issue by instead using the
PrettyText.options.userId and looking up the user directly
from the database when calling hashtag_lookup via the
PrettyText::Helpers code when cooking server-side. Added
the missing spec to check for this as well.
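Roughly, the server-side helper now does something like the following when cooking (a sketch, not the verbatim code):

```ruby
# PrettyText::Helpers (sketch): load a real User record so secure categories
# and other guardian-dependent data are available during hashtag_lookup
cooking_user = User.find_by(id: user_id) || Discourse.system_user
guardian = Guardian.new(cooking_user)
```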
If configured, this will be used for static JS assets which are stored on S3. This can be useful if you want to use different CDN providers/configurations for uploads and JS.
This new site setting replaces the
`enable_experimental_sidebar_hamburger` and `enable_sidebar` site
settings as the sidebar feature exits the experimental phase.
Note that we're replacing this without deprecation since the previous
site setting was considered experimental.
Internal Ref: /t/86563
When looking up hashtags which were conflicting (e.g.
management::tag and management) where the user did
not have permission for one of them, we ended up returning
the one they did have permission to see (e.g. the tag) twice
because of the way the lookup fallback code worked. This
fixes the issue, and another related one where the
::type was not added to the found item's .ref, and
so the hashtag replacement on the client was not working
correctly.
Update failing spec which previously used non-staff user to create
hidden posts.
Also add new spec for non-staff use cases to prevent future
regressions.
In some cases (e.g. user notification emails) we
are passing an excerpted/stripped version of the
post HTML to Email::Styles, at which point the
<span> elements surrounding the hashtag text have
been stripped. This caused an error when trying to
remove that element to replace the text.
Instead we can just remove all elements inside
a.hashtag-cooked and replace with the raw #hashtag
text which will work in more cases.
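A rough sketch of the more defensive replacement (assuming a Nokogiri document; the attribute used for the slug here is an assumption):

```ruby
# Replace whatever is left inside the cooked hashtag link with plain "#slug" text,
# regardless of whether the inner <span> survived earlier excerpting/stripping.
doc.css("a.hashtag-cooked").each do |hashtag|
  hashtag.content = "##{hashtag["data-slug"]}"
end
```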
The centralization helps in reducing code duplication in our code base
and more importantly, centralizing logic for guardian checks into a
single spot.
This is closer to git's redirect following behaviour. We prevented git
following redirects when we clone in order to prevent SSRF attacks.
Follow-up-to: 291bbc4fb9
When doing local oneboxes we sometimes want to allow
SVGs in the final preview HTML. The main case currently
is for the new cooked hashtags, which include an SVG
icon.
SVGs will be included in local oneboxes via `ExcerptParser` _only_
if they have the d-icon class, and if the caller for `post.excerpt`
specifies the `keep_svg: true` option.
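Illustrative call (the length argument is an example only):

```ruby
# Keep d-icon SVGs in the generated excerpt instead of stripping them
post.excerpt(200, keep_svg: true)
```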
Adds stats for API and user API requests similar to regular page views.
This comes with a new report to visualize API requests per day like the
consolidated page views one.
This commit introduces a new API for registering callbacks, which we'll execute when a user gets destroyed and the `delete_posts` opt is true. The chat plugin registers one callback and queues a job to destroy every message from that user in batches.
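A hypothetical sketch of how a plugin might use the new API (the registration method and job name below are illustrative, not the exact identifiers added by this commit):

```ruby
# plugin.rb — run only when the user is destroyed with delete_posts: true
register_user_destroyed_callback do |user, opts|          # hypothetical method name
  if opts[:delete_posts]
    Jobs.enqueue(:delete_user_messages, user_id: user.id) # hypothetical job name
  end
end
```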
Previously we would unconditionally fetch all images via HTTP to grab
original sizing from cooked post processor in 2 different spots.
This was wasteful as we already calculate and cache this info in upload records.
This also simplifies some specs and reduces use of mocks.
The `decodedMap` prop comes from https://github.com/terser/terser/pull/1190
> This also exposes a new `decodedMap` property on the result object. Decoded maps are free to create (it's a shallow clone of the `GenMapping` instance), and passing them to `@jridgewell/trace-mapping` is copy-free. With Babel [recently](https://github.com/babel/babel/pull/14497) adding a `decodedMap` field, a dev could pass from the Babel transpilation to Terser without any added memory use for sourcemaps.
Raw paths like `/test/path` are not supported natively in the CSP. This commit prepends the site's base URL to these paths. This allows plugins to add 'local' assets to the CSP without needing to hardcode the site's hostname.
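For example, a plugin can keep using a relative path and it will now be expanded against the site's base URL (a sketch assuming the existing `extend_content_security_policy` plugin API; the path is illustrative):

```ruby
# plugin.rb
extend_content_security_policy(
  script_src: ["/plugins/my-plugin/javascripts/widget.js"]
)
# ends up in the CSP as "#{Discourse.base_url}/plugins/my-plugin/javascripts/widget.js"
```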
In this PR, we're making sure that a theme upload which is used in the theme's CSS but is missing won't break the stylesheet precompilation process. See also: 6ebd2cecda
Adds the description as a title="" attribute on the hashtag
autocomplete search items for tags, categories, and channels.
These descriptions can be seen by the user since they are
able to see the results that are returned by the search via
Guardian checks.
This fix changes the hashtag-raw hashtags, which are
the ones that do not actually match anything, back
to the old style which does not look like mentions.
This will be used by plugins to handle the client side of their custom
post validations without having to overwrite the whole composer save
action, as was done in other plugins.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
This commit fleshes out and adds functionality for the new `#hashtag` search and
lookup system, still hidden behind the `enable_experimental_hashtag_autocomplete`
feature flag.
**Serverside**
We have two plugin API registration methods that are used to define data sources
(`register_hashtag_data_source`) and hashtag result type priorities depending on
the context (`register_hashtag_type_in_context`). Reading the comments in plugin.rb
should make it clear what these are doing. Reading the `HashtagAutocompleteService`
in full will likely help a lot as well.
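As a rough illustration of the two registration calls (the argument order and values here are assumptions; plugin.rb has the authoritative documentation):

```ruby
# plugin.rb of a hypothetical plugin
register_hashtag_data_source("channel", MyPlugin::ChannelHashtagDataSource)
register_hashtag_type_in_context("channel", "chat-composer", 200)
```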
Each data source is responsible for providing its own **lookup** and **search**
method that returns hashtag results based on the arguments provided. For example,
the category hashtag data source has to take into account parent categories and
how they relate, and each data source has to define their own icon to use for the
hashtag, and so on.
The `Site` serializer has two new attributes that source data from `HashtagAutocompleteService`.
There is `hashtag_icons` that is just a simple array of all the different icons that
can be used for allowlisting in our markdown pipeline, and there is `hashtag_context_configurations`
that is used to store the type priority orders for each registered context.
When sending emails, we cannot render the SVG icons for hashtags, so
we need to change the HTML hashtags to the normal `#hashtag` text.
**Markdown**
The `hashtag-autocomplete.js` file is where I have added the new `hashtag-autocomplete`
markdown rule, and like all of our rules this is used to cook the raw text on both the clientside
and on the serverside using MiniRacer. Only on the server side do we actually reach out to
the database with the `hashtagLookup` function; on the clientside we just render a plainer
version of the hashtag HTML. Only in the composer preview do we do further lookups based
on this.
This rule is the first one (that I can find) that uses the `currentUser` based on a passed
in `user_id` for guardian checks in markdown rendering code. This is the `last_editor_id`
for both the post and chat message. In some cases we need to cook without a user present,
so the `Discourse.system_user` is used in this case.
**Chat Channels**
This also contains the changes required for chat so that chat channels can be used
as a data source for hashtag searches and lookups. This data source will only be
used when `enable_experimental_hashtag_autocomplete` is `true`, so we don't have
to worry about channel results suddenly turning up.
------
**Known Rough Edges**
- Onebox excerpts will not render the icon svg/use tags, I plan to address that in a follow up PR
- Selecting a hashtag + pressing the Quote button will result in weird behaviour, I plan to address that in a follow up PR
- Mixed hashtag contexts for hashtags without a type suffix will not work correctly, e.g. #ux which is both a category and a channel slug will resolve to a category when used inside a post or within a [chat] transcript in that post. Users can get around this manually by adding the correct suffix, for example ::channel. We may get to this at some point in future
- Icons will not show for the hashtags in emails since SVG support is so terrible in email (this is not likely to be resolved, but still noting for posterity)
- Additional refinements and review fixes will follow.
The hidden site setting `suppress_secured_categories_from_admin` will
suppress visibility of categories without explicit access from admins
in a few key areas (category drop downs and topic lists)
It is not intended to be a security wall since admins can amend any site
setting. Instead, it is a feature that allows hiding the categories from the
UI.
Admins will still be able to see topics in categories without explicit
access using direct URLs or flags.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* FEATURE: Default Composer Category Site Setting
- Create the default_composer_category site setting
- Replace general_category_id logic for auto selecting the composer
category
- Prevent Uncategorized from being selected if not allowed
- Add default_composer_category option to seeded categories
- Create a migration to populate the default_composer_category site
setting if there is a general_category_id populated
- Added some tests
* Add missing translation for the new site setting
* fix some js tests
* Just check that the header value is null
Currently, moderators are able to set the primary group for users
irrespective of the `moderators_manage_categories_and_groups` site
setting value.
This change updates Guardian implementation to honour it.
Previously the stylesheet cachebusting hash was based on the maximum mtime of files. This works well in development and during in-container updates (e.g. via docker_manager). However, when a fresh docker image is created for each deploy, the file mtimes will change even if the contents have not.
This commit changes the production logic to calculate the cachebuster from the filenames and contents of the relevant assets. This should be consistent across deploys, thereby improving cache hits and improving page load times.
Since the system user is a regular user, it can have its
`allow_private_messages` user option turned off, which
with our current `can_send_private_message?(Discourse.system_user)`
check inside the CurrentUserSerializer, will prevent any
user from sending messages in the UI if the system user is not
accepting PMs.
This commit adds a new `can_send_private_messages?` method to
the Guardian, which can be used in serializers and not depend
on the system user. When the user actually sends a message
we still rely on the old `can_send_private_message?(target)`
call to see if they are allowed to send the message to the target.
The new method is just to say they can "generally" send
private messages.
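In short, the two checks now look like this (a sketch of the intended usage, not the Guardian internals):

```ruby
guardian = Guardian.new(current_user)

guardian.can_send_private_messages?         # "can this user PM at all?" — safe for serializers
guardian.can_send_private_message?(target)  # still checked when actually sending to a target
```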
Before this commit, there was no way for us to efficiently check which
topics in an array a user can see. Therefore, this commit
introduces the `TopicGuardian#can_see_topic_ids` method which accepts an
array of `Topic#id`s and filters out the ids which the user is not
allowed to see. The `TopicGuardian#can_see_topic_ids` method is meant to
maintain feature parity with `TopicGuardian#can_see_topic?` at all
times so a consistency check has been added in our tests to ensure that
`TopicGuardian#can_see_topic_ids` returns the same result as
`TopicGuardian#can_see_topic?`. In the near future, the plan is for us
to switch to `TopicGuardian#can_see_topic_ids` completely but I'm not
doing that in this commit as we have to be careful with the performance
impact of such a change.
This method is currently not being used in the current commit but will
be relied on in a subsequent commit.
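Illustrative usage (the keyword argument name is an assumption):

```ruby
guardian = Guardian.new(user)
visible_ids = guardian.can_see_topic_ids(topic_ids: topics.map(&:id))
# => the subset of the given ids that the user is allowed to see
```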
Linking a commit from a GitHub pull request included the complete commit
message, instead of just the first line. The rest of the commit message
will be added to the body of the Onebox.
Building does not persist the object in the database which is
unrealistic since we're mostly dealing with persisted objects in
production.
In theory, this will result in our test suite taking longer to run since we
now have to write to the database. However, I don't expect the increase
to be significant and it is actually no different than us adding new
tests which fabricate more objects.
Staged users are allowed to view topics they created in a read restricted category
when the category has `Category#email_in` and
`Category#email_in_allow_strangers` configured.
When PostRevisor is called with 'skip_validations: true' it can save
the post twice and one of the calls passes the correct 'validate: false'
argument, but the other one does not.
The filenames (minus the extensions) were being used as keys in a hash to pass to Terser, which meant that colocated connector files would overwrite each other. This commit moves the un-colocating earlier in the pipeline so that the fixed filenames are passed to Terser.
Followup to be3d6a56ce
Theme javascript is now minified using Terser, just like our core/plugin JS bundles. This reduces the amount of data sent over the network.
This commit also introduces sourcemaps for theme JS. Browser developer tools will now be able show each source file separately when browsing, and also in backtraces.
For theme test JS, the sourcemap is inlined for simplicity. Network load is not a concern for tests.
Previously, compiling theme 'extra_js' was done with a number of steps. Each theme_field would be compiled into its own value_baked column, and then the JavascriptCache content would be built by concatenating all of those compiled values.
This commit streamlines things by removing the value_baked step. The raw value of all extra_js theme_fields are passed directly to the ThemeJavascriptCompiler, and then the result is stored in the JavascriptCache.
In itself, this commit should not cause any behavior change. It is designed to open the door to more advanced compilation features which have interdependencies between different source files (e.g. template colocation, sourcemaps).
The previous implementation would attempt to fetch groups using the end-user's Google auth token. This only worked for admin accounts, or users with 'delegated' access to the `admin.directory.group.readonly` API.
This commit changes the approach to use a single 'service account' for fetching the groups. This removes the need to add permissions to all regular user accounts. I'll be updating the [meta docs](https://meta.discourse.org/t/226850) with instructions on setting up the service account.
This is technically a breaking change in behavior, but the existing implementation was marked experimental, and is currently unusable in production google workspace environments.
Previously, when the array had both nil and string values it returned the error "comparison of NilClass with String failed". Now I added the `.compact` method to prevent this issue as per @martin-brennan's suggestion https://github.com/discourse/discourse/pull/18431#discussion_r984204788
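A minimal illustration of the failure and the fix:

```ruby
[nil, "tag"].sort          # ArgumentError: comparison of NilClass with String failed
[nil, "tag"].compact.sort  # => ["tag"]
```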
* Revert "Revert "FEATURE: Preload resources via link header (#18475)" (#18511)"
This reverts commit 95a57f7e0c.
* put behind feature flag
* env -> global setting
* declare global setting
* forgot one spot
Experiment moving from preload tags in the document head to preload information in the response headers.
While this is a minor improvement in most browsers (headers are parsed before the response body), this allows smart proxies like Cloudflare to "learn" from those headers and build HTTP 103 Early Hints for subsequent requests to the same URI, which will allow the user agent to download and parse our JS/CSS while we are waiting for the server to generate and stream the HTML response.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
When a user with an email matching those inside the
DISCOURSE_DEVELOPER_EMAILS env var logs in, we make
them into admin users if they are not already. This
is used when setting up the first admin user for
self-hosters, since the discourse-setup script sets
the provided admin emails into DISCOURSE_DEVELOPER_EMAILS.
The issue being fixed here is that the new admins were
not being automatically added to the staff and admins
automatic groups, which was causing issues with the site
settings that are group_list based that don't have an explicit
staff override. All we need to do is refresh the automatic
staff and admin groups when admin is granted to the user.
cf. e62e93f83a
This PR also makes it so `bot` (negative ID) and `system` users are always allowed
to send PMs, since the old conditional was just based on `enable_personal_messages`
Static topics are the seeded topics that are automatically created for every Discourse instance to hold the content for the FAQ, ToS and Privacy pages. These topics are allowed to bypass the minimum title length checks when they're edited by admins:
ba27ee1637/app/assets/javascripts/discourse/app/models/composer.js (L487-L496)
However, on the server-side, the "quality title" validations aren't skipped for static topics and that can cause confusion for admins when they change the title of a static topic to something that's short enough to fail the quality title validations. This commit ignores all quality title validations on static topics when they're edited by admins.
Internal topic: t/75745.
By default, only staff members have to confirm their old email when
changing it. This commit adds a site setting that when enabled will
always ask the user to confirm old email.
This commit renames all secure_media related settings to secure_uploads_* along with the associated functionality.
This is being done because "media" does not really cover it, we aren't just doing this for images and videos etc. but for all uploads in the site.
Additionally, in future we want to secure more types of uploads, and enable a kind of "mixed mode" where some uploads are secure and some are not, so keeping media in the name is just confusing.
This also keeps compatibility with the `secure-media-uploads` path, and changes new
secure URLs to be `secure-uploads`.
Deprecated settings:
* secure_media -> secure_uploads
* secure_media_allow_embed_images_in_emails -> secure_uploads_allow_embed_images_in_emails
* secure_media_max_email_embed_image_size_kb -> secure_uploads_max_email_embed_image_size_kb
* FEATURE: add composer warning when a user hasn't been seen in a long time
When a user creates a PM and adds a recipient that hasn't been seen in a
long time then we'll now show a warning in composer indicating that the
user hasn't been seen in a long time.
* FIX: Recursively tag topics with missing ancestor tags
Given only a child tag, walk up the ancestry chain and get all of its
ancestors for use in tagging a topic
* FIX: Ensure only one parent tag is returned for topic tagging
The current implementation selects and returns the first parent tag if the
child tag has multiple parents.
This change updates recursive parent tag implementation to only return
parent tags via only one ancestry line.
* DEV: Add test case for tag cycles
Given we aren't performing a strict graph traversal to get a tag's
parent, cycles do not have any effect on the tags returned for topic
tagging.
This will replace `enable_personal_messages` and
`min_trust_to_send_messages`, this commit introduces
the setting `personal_message_enabled_groups`
and uses it in all places that `enable_personal_messages`
and `min_trust_to_send_messages` currently apply.
A migration is included to set `personal_message_enabled_groups`
based on the following rules:
* If `enable_personal_messages` was false, then set
`personal_message_enabled_groups` to `3`, which is
the staff auto group
* If `min_trust_to_send_messages` is not default (1)
and the above condition is false, then set the
`personal_message_enabled_groups` setting to
the appropriate auto group based on the trust level
* Otherwise just set `personal_message_enabled_groups` to
11 which is the TL1 auto group
After follow-up PRs to plugins using these old settings, we will be
able to drop the old settings from core, in the meantime I've added
DEPRECATED notices to their descriptions and added them
to the deprecated site settings list.
This commit also introduces a `_map` shortcut method definition
for all `group_list` site settings, e.g. `SiteSetting.personal_message_enabled_groups`
also has `SiteSetting.personal_message_enabled_groups_map` available,
which automatically splits the setting by `|` and converts it into
an array of integers.
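For example (the values shown are illustrative):

```ruby
SiteSetting.personal_message_enabled_groups      # => "11|3"
SiteSetting.personal_message_enabled_groups_map  # => [11, 3]
```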
See https://meta.discourse.org/t/discourse-email-messages-are-incorrectly-threaded/233499
for thorough reasoning.
This commit changes how we generate Message-IDs and do email
threading for emails sent from Discourse. The main changes are
as follows:
* Introduce an outbound_message_id column on Post that
is either a) filled with a Discourse-generated Message-ID
the first time that post is used for an outbound email
or b) filled with an original Message-ID from an external
mail client or service if the post was created from an
incoming email.
* Change Discourse-generated Message-IDs to be more consistent
and static, in the format `discourse/post/:post_id@:host`
* Do not send References or In-Reply-To headers for emails sent
for the OP of topics.
* Make sure that In-Reply-To is filled with either a) the OP's
Message-ID if the post is not a direct reply or b) the parent
post's Message-ID
* Make sure that In-Reply-To has all referenced post's Message-IDs
* Make sure that References is filled with a chain of Message-IDs
from the OP down to the parent post of the new post.
We also are keeping X-Discourse-Post-Id and X-Discourse-Topic-Id,
headers that we previously removed, for easier visual debugging
of outbound emails.
Finally, we backfill the `outbound_message_id` for posts that have
a linked `IncomingEmail` record, using the `message_id` of that record.
We do not need to do that for posts that don't have an incoming email
since they are backfilled at runtime if `outbound_message_id` is missing.
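A sketch of the new static format described above (the host lookup shown is illustrative):

```ruby
# Deterministic Message-ID for a post, stored in posts.outbound_message_id
# (wrapped in angle brackets when written to the email header)
message_id = "discourse/post/#{post.id}@#{Discourse.current_hostname}"
```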
We previously had a system which would generate a 10x10px preview of images and add their URLs in a data-small-upload attribute. The client would then use that as the background-image of the `<img>` element. This works reasonably well on fast connections, but on slower connections it can take a few seconds for the placeholders to appear. The act of loading the placeholders can also break or delay the loading of the 'real' images.
This commit replaces the placeholder logic with a new approach. Instead of a 10x10px preview, we use imagemagick to calculate the average color of an image and store it in the database. The hex color value is then added as a `data-dominant-color` attribute on the `<img>` element, and the client can use this as a `background-color` on the element while the real image is loading. That means no extra HTTP request is required, and so the placeholder color can appear instantly.
Dominant color will be calculated:
1. When a new upload is created
2. During a post rebake, if the dominant color is missing from an upload, it will be calculated and stored
3. Every 15 minutes, 25 old upload records are fetched and their dominant color calculated and stored. (part of the existing PeriodicalUpdates job)
Existing posts will continue to use the old 10x10px placeholder system until they are next rebaked
Upgrading to Markdown.it v13 broke empty inline BBCodes. This works around the problem by adding an empty token before a closing token if the previous token was a BBCode token.
It also removes the unused `jump` attribute which was removed in Markdown.it v12.3
Previously we were relying on a highly-customized version of the unmaintained Barber gem for theme template compilation. This commit switches us to use our own DiscourseJsProcessor, which makes use of more modern patterns and will be easier to maintain going forward.
In summary:
- Refactors DiscourseJsProcessor to move multiline JS heredocs into a companion `discourse-js-processor.js` file
- Use MiniRacer's `.call` method to avoid manually escaping JS strings
- Move Theme template AST transformers into DiscourseJsProcessor, and formalise interface for extending RawHandlebars AST transformations
- Update Ember template compilation to use a babel-based approach, just like Ember CLI. This gives each template its own ES6 module rather than directly assigning `Ember.TEMPLATES` values
- Improve testing of template compilation (and move some tests from `theme_javascript_compiler_spec.rb` to `discourse_js_processor_spec.rb`)
* FIX: hide welcome topic banner as soon as the welcome topic is edited
This commit adds a message bus listener on client to hide the welcome
topic banner as soon as the welcome topic is edited.
* update test
* only subscribe when show_welcome_topic_banner is true
* Do not look up the messageBus service if it's not required
* Remove unneeded code
* Cache result for Site.show_welcome_topic_banner
* Update tests per latest changes
* Changes per PR review
* FIX: Only seed general category on new sites
If the site already has human users (users with an id > 0) don't seed
the categories.
Follow up to: a6ad74c759
* use human_users scope
We don't want to save the auto_delete_preference for bookmarks to the
user options if it was passed through as nil from the frontend,
this leads to confusion for the end user since they did not explicitly set it.
It's fine to create the bookmark with the default of "never" if no
auto_delete_preference is provided since it applies only to the
single bookmark, not future bookmarks.
- Seed the General category so that the general chat channel will have
a home
- Do not seed the Lounge category anymore
- Move the "Welcome to Site" topic to the General category
This commit makes a number of improvements to the DiscourseJsProcessor:
1. Remove dependence on the out-of-date Ember template compiler from the ember-rails gem; switch to modern template compiler
2. Refactor to make use of a proper module system with `define`/`require`
3. Introduce `babel-plugin-ember-template-compilation` to enable inline hbs compilation
The `mini-loader` is upgraded to support relative lookup and `require.has`, so that these new JS packages work correctly.
Topic allowed user records were created for small actions, which led to
the system user being invited in many private topics when the user
removed themselves or if a group was invited but some members already
had access.
This commit skips creating topic allowed user records. They are already skipped
for whisper posts.
When preloading topic_list data we were giving it a 'preload key' which was loosely based on the parameters of the list. However, it did not include all parameters, and mismatches between client/server-side logic would cause the preloaded data to be ignored.
This commit simplifies things by using a single key for all topic_list preloading. This works on the assumption that "The first topic_list the JS app will load is the one which was preloaded". That assumption also existed to some extent in the old design, so we don't expect any regressions here.
In a multisite setup, Discourse reported that no backup was running after 60 seconds because the Redis key expired. Also, the thread that listens for a shutdown signal stopped running immediately because it didn't detect a running operation.
Certain HTML can be rejected by nokogumbo, specifically in cases where there
are enormous numbers of attributes.
This ensures that malformed HTML is simply skipped instead of leaking out
an exception and terminating downstream processes.
* FIX: Do not allow to remove like if topic is archived
* FIX: Always show like button
The like button used to be hidden if the topic was archived and it had
no likes. This commit changes that to always show the like button, but
with a not-allowed cursor if the topic is archived.
`TopicQueryParams` allows for `match_all_tags` to be passed as a query parameter. `TagsController` forces the value to be true.
This change allows a value to be passed, and only sets it to true if no value has been set. It then uses `ActiveModel::Type::Boolean.new.cast` to compare the value.
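The casting behaviour relied on here, for reference:

```ruby
ActiveModel::Type::Boolean.new.cast("true")   # => true
ActiveModel::Type::Boolean.new.cast("false")  # => false
ActiveModel::Type::Boolean.new.cast(nil)      # => nil
```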
The maximum_staged_users_per_email site setting controls how many
staged users will be invited to the topic created from an incoming
email. Previously, it counted only the new staged users.
Twitter removed OpenGraph tags from their pages. We can no longer
extract all the information (for example, the quoted tweet) we need
to render Oneboxes without using their API.
Dead and large images are replaced with a placeholder, either a broken
chain icon or a short text. This commit no longer applies this
transformation for images inside Oneboxes, but removes them instead.
This is a much better description of its function. It performs idempotent normalization of a URL. If consumers truly need to `encode` a URL (including double-encoding of existing encoded entities), they can use the existing `.encode` method.
normalized_encode in addressable has a number of issues, including https://github.com/sporkmonger/addressable/issues/472
To temporarily work around those issues for the majority of cases, we try parsing with `::URI`. If that fails (e.g. due to non-ascii characters) then we will fall back to addressable.
Hopefully we can simplify this back to `Addressable::URI.normalized_encode` in the future.
This commit also adds support for unicode domain names and emoji domain names with escape_uri.
This removes an unneeded hack checking for pre-signed urls, which are now handled by the general case due to starting off valid and only being minimally normalized. Previous test case continues to pass.
UrlHelper.s3_presigned_url? which was somewhat wide was removed.
Followup to d66115d918
* Makes sure the `actor_preferences` all initialize with an empty array instead of nil if there are no preferences, e.g. the actor is not ignoring anyone
* If the actor has disabled all PMs make `actor_disallowing_pms?` always return true
* FIX: don't memoize site setting in guardian
Memoizing site settings can make tests more fragile and harder to debug
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
This commit introduces several fine-grained methods
to UserCommScreener which can be used to show the actor
who they are ignoring/muting/blocking DMs from in order
to prevent them initiating conversation with those users
or to display relevant information in the UI to the
actor.
This will be used in a companion PR in discourse-chat,
and is a follow up to 74584ff3ca
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Co-authored-by: Osama Sayegh <asooomaasoooma90@gmail.com>
* FEATURE: track stats around failing scheduled jobs
Discourse.job_exception_stats can now be used to gather stats around how
many regular scheduled jobs failed in the current process.
This will be consumed by the Prometheus plugin and potentially other
monitoring plugins.
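Illustrative shape of the data (the keys and counts shown are assumptions):

```ruby
Discourse.job_exception_stats
# => { Jobs::SomeScheduledJob => 2, Jobs::AnotherScheduledJob => 1 }
```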
* FEATURE: Add case-sensitivity flag to watched_words
Currently, all watched words are matched case-insensitively. This flag
allows a watched word to be flagged for case-sensitive matching.
To allow for backwards compatibility, the flag is set to false by
default.
* FEATURE: Support case-sensitive creation of Watched Words via API
Extend admin creation and upload of Watched Words to support a
case-sensitive flag. This lays the groundwork for supporting
case-sensitive matching of Watched Words.
Support for an extra column has also been introduced for the Watched
Words upload CSV file. The new column structure is as follows:
word,replacement,case_sensitive
* FEATURE: Enable case-sensitive matching of Watched Words
WordWatcher's word_matcher_regexp now returns a list of regular
expressions instead of one case-insensitive regular expression.
With the ability to flag a Watched Word as case-sensitive, an action
can have words of both sensitivities. This makes the use of the global
Regexp::IGNORECASE flag added to all words problematic.
To get around platform limitations around the use of subexpression level
switches/flags, a list of regular expressions is returned instead, one for each
case sensitivity.
Word matching has also been updated to use this list of regular expressions
instead of one.
* FEATURE: Use case-sensitive regular expressions for Watched Words
Update Watched Words regular expressions matching and processing to handle
the extra metadata which comes along with the introduction of
case-sensitive Watched Words.
This allows case-sensitive Watched Words to be matched as such.
* DEV: Simplify type casting of case-sensitive flag from uploads
Use builtin semantics instead of a custom method for converting
string case flags in uploaded Watched Words to boolean.
* UX: Add case-sensitivity details to Admin Watched Words UI
Update Watched Word form to include a toggle for case-sensitivity.
This also adds support for case-sensitive testing and matching of Watched Words
in the admin UI.
* DEV: Code improvements from review feedback
- Extract watched word regex creation out to a utility function
- Make JS array presence check more explicit and readable
* DEV: Extract Watched Word regex creation to utility function
Clean-up work from review feedback. Reduce code duplication.
* DEV: Rename word_matcher_regexp to word_matcher_regexp_list
Since a list is returned now instead of a single regular expression,
change `word_matcher_regexp` to `word_matcher_regexp_list` to better communicate
this change.
* DEV: Incorporate WordWatcher updates from upstream
Resolve conflicts and ensure apply_to_text does not remove non-word characters in matches
that aren't at the beginning of the line.
Fixes edge case from fa5f3e228c.
In case the acting user is sent in with the target_user_ids,
we do not need to load those preferences, because even if the
acting user is preventing PMs or muting, etc., they always need to be able to
send themselves messages.
The "everyone" group is an automatic group and GroupUser records do not
exist for it. This commit allows all users if the group everyone is one
of the groups in the setting "pm_tags_allowed_for_groups".
* FIX: Rejected emails should not be cleaned up before their logs
If we delete the rejected emails before we delete their associated logs
we will receive 404 errors trying to inspect an email message for that
log.
* don't add a blank line
* test for max value as well
* pr cleanup and add migration
* Fix failing test
* FEATURE: revamped wizard
* UX: Wizard redesign (#17381)
* UX: Step 1-2
* swap out images
* UX: Finalize all steps
* UX: mobile
* UX: Fix test
* more test
* DEV: remove unneeded wizard components
* DEV: fix wizard tests
* DEV: update rails tests for new wizard
* Remove empty hbs files that were created because of rebase
* Fixes for rebase
* Fix wizard image link
* More rebase fixes
* Fix rails tests
* FIX: Update preview for new color schemes: (#17481)
* UX: make layout more responsive, update images
* fix typo
* DEV: move discourse logo svg to template only component
* DEV: formatting improvements
* Remove unneeded files
* Add tests for privacy step
* Fix banner image height for step "ready"
Co-authored-by: Jordan Vidrine <30537603+jordanvidrine@users.noreply.github.com>
Co-authored-by: awesomerobot <kris.aubuchon@discourse.org>
Currently when generating oneboxes, if the connection times out and we're
using the `FinalDestination#get` method, then it raises an exception.
We already catch this exception when using the
`FinalDestination#resolve` method so this patch just applies the same
logic to `FinalDestination#get`.
All other tests that set grace_period use either a unitless `0`, `1.minute`, or `5.minutes`, so it wasn't clear whether `5` was meant to be seconds (it was).
Our theme system is very complex and it can take a while to figure out how to invalidate the various types of caches that are used throughout the theme system. So, having a single helper method that invalidates everything can be useful in emergency situations where there is no time to read through the code and figure out how to clear the various caches.
Internal ticket: t64732.