Follow up to 4d2a95ffe6. Sometimes
due to the original UploadReference migration or other issues,
multiple UploadReference records can have the exact same
created_at date and time. To tiebreak and correct the SQL order
when this happens, we can add a secondary `id ASC` ordering
when we check for the first upload reference.
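A minimal sketch of the ordering described above, assuming an ActiveRecord lookup (the exact query in the security check may differ):
```ruby
# `upload` stands for the upload being checked; tie-break identical
# created_at values with id so the "first" reference is deterministic.
first_reference =
  UploadReference
    .where(upload: upload)
    .order(created_at: :asc, id: :asc)
    .first
```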
Way back when this was introduced in b96c10a903
I didn't have any frame of reference for what these max rate
limit numbers should be, so 10 seemed like a reasonable limit
until a real world case where this did not make sense came
along.
The time has come.
Moving these into site settings, which are hidden since in most
cases there is no need to change them.
Similar spirit to e195e6f614,
this moves the Bookmarkable registration to DiscoursePluginRegistry
so plugins which are not enabled do not register additional
bookmarkable classes.
When a theme setting of type `upload` has a default upload, it should return the URL of the specified default upload until a custom upload is used for the setting. However, currently this isn't the case and we get null instead of the default upload URL.
The reason for this is that the `super` method of `#value` already returns the default upload URL (if there's one), so we can't pass that to `cdn_url` which expects an upload ID:
c961dcc757/lib/theme_settings_manager.rb (L212)
This commit fixes the bug by skipping the call to `cdn_url` when we fall back to the default upload for the setting value.
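A minimal sketch of the fix, assuming the setting class wraps `#value` roughly like this (`has_record?` is an illustrative name, not the actual implementation):
```ruby
def value
  # super already returns the default upload's URL when no custom upload is
  # set, so only pass it through cdn_url when there is an actual upload ID.
  has_record? ? cdn_url(super) : super
end
```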
* UX: add type tag and design update
* UX: clarify status copy in reviewQ
* DEV: switch to selectKit
* UX: color approve/reject buttons in RQ
* DEV: regroup actions
* UX: add type tag and design update
* UX: clarify status copy in reviewQ
* Join questions for flagged post with "or" with new I18n function
* Move ReviewableScores component out of context
* Add CSS classes to reviewable-item based on human type
* UX: add table header for scoring
* UX: don't display % score
* UX: prefix modifier class with dash
* UX: reviewQ flag table styling
* UX: consistent use of ignore icon
* DEV: only show context question on pending status
* UX: only show table headers on pending status
* DEV: reviewQ regroup actions for hidden posts
* UX: reviewQ > approve/reject buttons
* UX: reviewQ add fadeout
* UX: reviewQ styling
* DEV: move scores back into component
* UX: reviewQ mobile styling
* UX: score table on mobile
* UX: reviewQ > move meta info outside table
* UX: reviewQ > score layout fixes
* DEV: readd `agree_and_keep` and fix the spec tests.
* Fix the spec tests
* fix the qunit test
* DEV: readd deleting replies
* UX: reviewQ copy tweaks
* DEV: readd test for ignore + delete replies
* Remove old
* FIX: Add perform_ignore back in for backwards compat
* DEV: add an action alias `ignore` for `ignore_and_do_nothing`.
---------
Co-authored-by: Martin Brennan <martin@discourse.org>
Co-authored-by: Vinoth Kannan <svkn.87@gmail.com>
Follow up to 098ab29d41. Since
we just used a `cattr_reader` on `About` this was not safe
for multisite, since some sites could have the chat plugin
enabled and some may not. Using `DiscoursePluginRegistry` gets
around this issue, and makes it so the chat stats only show
for a site if `chat_enabled` is true.
Sometimes we get Maps URLs containing a zoom level as a float (17.5z and
not 17z), but this doesn’t work with our current onebox implementation.
While Google accepts those float zoom levels, it automatically removes
the fractional part from the URL (thus when visiting a Maps URL containing
17.5z, the URL will be rewritten shortly after as 17z). When putting a
float zoom level in an embedded URL, this actually breaks (the Maps API
returns a 400 error).
This patch addresses the issue by allowing the onebox engine to match a
zoom level expressed as a float, but we only keep the integer part, thus
rendering maps properly.
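A rough sketch of the idea with an illustrative pattern (not the engine's actual regex):
```ruby
# Match "17z" as well as "17.5z", capturing only the integer part of the zoom
# so the embedded URL we build stays acceptable to the Maps API.
zoom_re = /(?<zoom>\d+)(?:\.\d+)?z/
"@48.8584,2.2945,17.5z".match(zoom_re)[:zoom] # => "17"
```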
The ensure_consistency rake task was not marking posted as true for post authors in the TopicUser table, post migration. Create another step to set posted='t'.
Forcing the distributed mutex to raise when a notify reviewable job is running
leads to excessive errors in the logs under many conditions.
The new pattern
1. Optimises the counting of reviewables so it is a lot faster
2. Holds the distributed lock for 2 minutes (max)
The downside is the job queue can get blocked up when tons of notify
reviewables are running at the same time. However this should be very
rare in the real world, as we only notify when stuff is flagged which
is fairly infrequent.
This also gives a fair bit more time for the notifications, which may be
a little slow on large sites with tons of mods.
This commit implements many changes to topic and comments embedding. It
deprecates the class_name field of EmbeddableHost and suggests using
the className parameter instead. The discourse_username parameter has also
been deprecated; the username will now be fetched from the embedded site via
the author or discourse-username meta tag.
See the updated code sample from Admin > Customize > Embedding page.
* FEATURE: Add className parameter for Discourse embed
* DEV: Hide class_name from EmbeddableHost
* DEV: Deprecate class_name field of EmbeddableHost
* FEATURE: Use either author or discourse-username meta tag
* DEV: Deprecate discourse_username parameter
* DEV: Improve embed code sample
This commit introduces a few experimental changes to the New topics list and "Everything" link in the sidebar:
1. Make the New topics list include unread topics
2. Make the Everything section in the sidebar link to the New topics list (`/new`)
3. Remove "unread" or "new" text next to the count and keep the count
4. The count is a sum of new and unread topics counts
All of these changes are behind an off-by-default feature flag. I've not written extensive tests for these changes because they're highly experimental.
Internal topic: t/77234.
When invoking e.g. `can_see?(Foo.new)`, the guardian checks if there's a method `#can_see_foo?` defined and if so uses that to determine whether the user can see it or not.
When such a method is not defined, the guardian currently returns `true`, but it is probably a better call (pun intended) to make it "safe by default" and return `false` instead. I.e. if you can't explicitly see it, you can't see it at all.
This change makes `Guardian#can_see?` fall back to `false` if no visibility check method is defined.
For `#can_see_user?` and `#can_see_tag?` we don't have any particular logic that prevents viewing. We previously relied on the implicit `true` value, but since that's now changed to `false`, I have explicitly implemented these two methods in the `UserGuardian` and `TagGuardian` modules. If in the future we want to add some logic for it, this would be the place.
To be clear, **the behaviour remains the same**, but the `true` value is now explicit rather than implicit.
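A minimal sketch of the resulting dispatch, assuming the per-type method is looked up by class name (the real method handles more edge cases):
```ruby
def can_see?(obj)
  return false if obj.nil?

  # Look for a per-type check such as #can_see_topic? or #can_see_user?;
  # fall back to false ("safe by default") when none is defined.
  check = :"can_see_#{obj.class.name.underscore}?"
  respond_to?(check) ? public_send(check, obj) : false
end
```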
We were only supporting the main name of each HighlightJS language. So, by default, you could not use `js` or `jsx` to highlight Javascript, given they are aliases for `javascript`.
This PR adds a list of aliases as a constant to core (built via a rake task), and then checks against the `highlighted_languages` site setting plus the list of aliases when processing a code block.
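A rough sketch of the check; `HIGHLIGHTJS_ALIASES` stands in for the rake-task-built constant and is not the actual name:
```ruby
# Accept either an explicitly enabled language or any known alias for one,
# e.g. "js" and "jsx" both resolving to "javascript".
def highlightable_language?(lang)
  enabled = SiteSetting.highlighted_languages.split("|")
  enabled.include?(lang) || enabled.include?(HIGHLIGHTJS_ALIASES[lang])
end
```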
This commit 57caf08e13 broke
`bin/turbo_rspec` timing recording via `TurboTests::Runner`,
because we changed to using all `spec/*` folders except
`spec/system` as default for the runner, rather than
the old `['spec']` array, which is what `TurboTests::Runner`
was relying on to determine whether to record test run
time with `ParallelTests::RSpec::RuntimeLogger`.
Instead, we can just pass a new `use_runtime_info` boolean to the
runner class and use it when running against the default set of
spec files using `bin/turbo_rspec` and the turbo rspec rake task.
The current default timeout is hardcoded to 2 seconds which is proving
too low for certain cases, and resulting in sporadic timeouts due to slow DNS queries.
As of ba3f62f576, handlebars templates are colocated with js files so the path to hbs templates referenced by this rake task is no longer valid. This commit fixes the path to hbs templates and updates a couple of files that are generated by the rake task.
We call `post.update_uploads_secure_status` in both
`PostCreator` and `PostRevisor`. Only the former was checking
if `SiteSetting.secure_uploads?` was enabled, but the latter
was not. There is no need to enqueue the job
`UpdatePostUploadsSecureStatus` if secure_uploads is not
enabled for the site.
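A minimal sketch of the guard now applied in both call sites (argument shape assumed):
```ruby
# Skip enqueuing UpdatePostUploadsSecureStatus entirely when the feature is off.
post.update_uploads_secure_status(source: "post revisor") if SiteSetting.secure_uploads?
```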
- Reduce duplication of terms in post index from unlimited to 6. This will
result in reduced index size and reduced weighting for posts containing
a huge amount of duplicate terms. (Eg: a post containing "sam sam sam sam
sam sam sam sam", will index as "sam sam sam sam sam sam", only including
the word up to 6 times.) This corrects a flaw where title weighting could
be ignored.
- Prioritize exact matches of words in titles. Our search always performs
a prefix match. However we want to give special weight to exact title matches
meaning that a search for "sum" will find topics such as "the sum of us" vs
"summer in spring".
- Pick up fixes to our search algorithm which are missing from old indexes.
Specifically pick up the fix that indexes URLs properly. (`https://happy.com`
was stemmed to `happi` in keywords and then was not searchable)
see also:
https://meta.discourse.org/t/refinements-to-search-being-tested-on-meta/254158
Indexing will take a while and work in batches, in the background.
* Add username and name_or_username variables to SystemMessage defaults
* Allow username and name variables on welcome_user email template overrides
* Satisfy linting
* Add test
This commit adds backend support for a new topics list that combines both the current unread and new topics lists. We're going to experiment with this new list (name TBD) internally and decide if this feature is something that we want to fully build.
Internal topic: t/77234.
Currently, clicking on the unsubscribe link with a `key` associated with a
deleted topic results in an HTTP 500 response.
This change fixes that by skipping any attempt to run topic-related flow if the topic
isn't present.
* FIX: do not notify admins on suppressed categories
Avoid notifying admins on categories where they are not explicitly members
in cases where SiteSetting.suppress_secured_categories_from_admin is
enabled.
This helps keep the notification stream clean and avoids admins mistakenly
being invited to discussions that should be suppressed.
This is the combined work of Martin Brennan, Loïc Guitaut, and Joffrey Jaffeux.
---
This commit implements a base service object when working in chat. The documentation is available at https://discourse.github.io/discourse/chat/backend/Chat/Service.html
Generating documentation has been made as part of this commit with a bigger goal in mind of generally making it easier to dive into the chat project.
Working with services generally involves 3 parts:
- The service object itself, which is a series of steps where a few of them are specialized (model, transaction, policy)
```ruby
class UpdateAge
  include Chat::Service::Base

  model :user, :fetch_user
  policy :can_see_user
  contract
  step :update_age

  class Contract
    attribute :age, :integer
  end

  def fetch_user(user_id:, **)
    User.find_by(id: user_id)
  end

  def can_see_user(guardian:, **)
    guardian.can_see_user(user)
  end

  def update_age(age:, **)
    user.update!(age: age)
  end
end
```
- The `with_service` controller helper, handling success and failure of the service and making it easy to return a proper response from the controller
```ruby
def update
  with_service(UpdateAge) do
    on_success { render_serialized(result.user, BasicUserSerializer, root: "user") }
  end
end
```
- RSpec matchers and a steps inspector, improving the dev experience while creating specs for a service
```ruby
RSpec.describe(UpdateAge) do
  subject(:result) do
    described_class.call(guardian: guardian, user_id: user.id, age: age)
  end

  fab!(:user) { Fabricate(:user) }
  fab!(:current_user) { Fabricate(:admin) }

  let(:guardian) { Guardian.new(current_user) }
  let(:age) { 1 }

  it { expect(user.reload.age).to eq(age) }
end
```
Note that in case of an unexpected failure in your spec, the output will give all the relevant information:
```
1) UpdateAge when no channel_id is given is expected to fail to find a model named 'user'
Failure/Error: it { is_expected.to fail_to_find_a_model(:user) }
Expected model 'foo' (key: 'result.model.user') was not found in the result object.
[1/4] [model] 'user' ❌
[2/4] [policy] 'can_see_user'
[3/4] [contract] 'default'
[4/4] [step] 'update_age'
/Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/update_age.rb:32:in `fetch_user': missing keyword: :user_id (ArgumentError)
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:202:in `instance_exec'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:202:in `call'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:219:in `call'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:417:in `block in run!'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:417:in `each'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:417:in `run!'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:411:in `run'
from <internal:kernel>:90:in `tap'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/app/services/base.rb:302:in `call'
from /Users/joffreyjaffeux/Code/pr-discourse/plugins/chat/spec/services/update_age_spec.rb:15:in `block (3 levels) in <main>'
```
The #pluck_first freedom patch, first introduced by @danielwaterworth, has served us well, and is used widely throughout both core and plugins. It seems to have been a common enough use case that Rails 6 introduced its own method #pick with the exact same implementation. This allows us to retire the freedom patch and switch over to the built-in ActiveRecord method.
There is no replacement for #pluck_first!, but a quick search shows we are using this in a very limited capacity, and in some cases incorrectly (by assuming a nil return rather than an exception), which can quite easily be replaced with #pick plus some extra handling.
Ember CLI will automatically run babel transformations in parallel when the config is 'serializable', and can therefore be applied in multiple processes automatically. If any plugin is defined in an unserializable way, parallelisation will be disabled.
Our discourse-widget-hbs transformer was causing parallelisation to be disabled. This commit fixes that, and also enables the throwUnlessParallelizable flag so that we catch this kind of issue more easily in future.
This commit also refactors our deprecation silencing system into its own file, and uses a fake babel plugin to ensure deprecations are silenced in babel worker processes.
In our GitHub CI jobs, this doubles the speed of ember builds (1m30s -> 45s). It should also improve production deploy times, and cold-start dev builds.
This PR is a major change to Sass compilation in Discourse.
The new version of sass-ruby moves to dart-sass, putting us back on a supported version of Sass. It does so while keeping compatibility with the existing method signatures, so minimal changes are needed in Discourse.
This moves us
From:
- sassc 2.0.1 (Feb 2019)
- libsass 3.5.2 (May 2018)
To:
- dart-sass 1.58
This update applies the following breaking changes:
>
> These breaking changes are coming soon or have recently been released:
>
> [Functions are stricter about which units they allow](https://sass-lang.com/documentation/breaking-changes/function-units) beginning in Dart Sass 1.32.0.
>
> [Selectors with invalid combinators are invalid](https://sass-lang.com/documentation/breaking-changes/bogus-combinators) beginning in Dart Sass 1.54.0.
>
> [/ is changing from a division operation to a list separator](https://sass-lang.com/documentation/breaking-changes/slash-div) beginning in Dart Sass 1.33.0.
>
> [Parsing the special syntax of @-moz-document will be invalid](https://sass-lang.com/documentation/breaking-changes/moz-document) beginning in Dart Sass 1.7.2.
>
> [Compound selectors could not be extended](https://sass-lang.com/documentation/breaking-changes/extend-compound) in Dart Sass 1.0.0 and Ruby Sass 4.0.0.
SCSS files have been migrated automatically using `sass-migrator division app/assets/stylesheets/**/*.scss`
Allows users to configure their own custom sidebar sections with links within a Discourse instance. Links can be passed as a relative path, for example "/tags", or as a full URL.
Only the path is saved in the DB, so when the Discourse domain is changed, links will still be valid.
The feature is hidden behind SiteSetting.enable_custom_sidebar_sections. This hidden setting determines the group whose members have access to this new feature.
Previously due to an error archived topics were more prominent in search
than closed topics.
This amends our internal logic to ensure archived topics are bumped down
the list.
Previously to_tsquery would split terms and join with &
In PG 14 terms are split and use <-> which means followed directly by.
In PG 13:
```
discourse_test=# SELECT to_tsquery('english', '''hello world''');
     to_tsquery
---------------------
 'hello' & 'world'
(1 row)
```
In PG 14:
```
discourse_test=# SELECT to_tsquery('english', '''hello world''');
     to_tsquery
---------------------
 'hello' <-> 'world'
(1 row)
```
The change is very unobtrusive: we simply amend our to_tsquery to behave like
it used to and make no use of the `<->` operator.
More detail at: https://akorotkov.github.io/blog/2021/05/22/pg-14-query-parsing/
Note that plainto_tsquery used elsewhere in Discourse keeps the exact
same function.
This also corrects a faulty test that was passing by a fluke on older
versions of PG.
We've had a couple of problems with the R2 gem where it generated a broken RTL CSS bundle that caused a badly broken layout when Discourse is used in an RTL language, see a3ce93b and 5926386. For this reason, we're replacing R2 with `rtlcss` that can handle modern CSS features better than R2 does.
`rtlcss` is written in JS and available as an npm package. Calling `rtlcss` from rubyland is done via the `rtlcss_wrapper` gem which contains a distributable copy of the `rtlcss` package and loads/calls it with Mini Racer. See https://github.com/discourse/rtlcss_wrapper for more details.
Internal topic: t/76263.
`--d-hover` is calculated to be equivalent to primary-100 in light mode, or primary-low in dark mode
`--d-selected` is calculated to be equivalent to primary-low in light mode, or primary-100 in dark mode
`lib/color_math` is introduced to provide some utilities for making these calculations.
The new `prioritize_exact_search_match` can be used to force the search
algorithm to prioritize exact term matches in title when ranking results.
This is scoped narrowly to titles for cases such as a topic titled:
"organisation chart" and a search of "org chart".
If we scoped this wider, all discussion about "org chart" would float to
the top and leave a very common title de-prioritized.
This is a hidden site setting and it has some performance impact due
to double ranking.
That said, the performance impact is somewhat mitigated because ranking on
title alone is a very cheap operation.
Many users seem surprised by prefix matching in search leading to
unexpected results.
Over the years we have always returned results starting with a search term
rather than requiring exact matches.
Meaning a search for `abra` would find `abracadabra`.
This introduces the Site Setting `enable_search_prefix_matching` which
defaults to true. (behavior unchanged)
We plan to experiment on select sites with exact matches to see if the
results are less surprising
* FIX: Ensure soft-deleted topics can be deleted
The topic was not found during the deletion process because it was
deleted and `@post.topic` was nil.
* DEV: Use @topic instead of finding the topic every time
Under some situations, we would inadvertently return a public (unauthenticated) result to an authenticated API request. This commit adds the `Api-Key` header to our anonymous cache bypass logic.
Under scenarios of extremely high load where large numbers of `Reviewable*` items are being created, it has been observed that multiple instances of the `NotifyReviewable` job may run simultaneously.
These jobs will work satisfactorily if the concurrency is limited to 1, and the different types of jobs (items reviewable by admins, vs moderators, vs particular groups, etc.) are run eventually.
This change introduces a new option to `DistributedMutex` which allows the `max_get_lock_attempts` to be specified. If the number is exceeded an error will be raised, which will cause Sidekiq to requeue the job. Sidekiq has existing logic to back-off on retry times for jobs that have failed multiple times.
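A hedged usage sketch of the new option; the exact keyword placement on `DistributedMutex.synchronize` may differ from the real signature:
```ruby
# Give up after a bounded number of lock attempts; the raised error makes
# Sidekiq requeue the job with its usual retry back-off.
DistributedMutex.synchronize("notify_reviewable", max_get_lock_attempts: 3) do
  # count pending reviewables and publish notifications while holding the lock
end
```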
There was an issue where if hashtag-cooked HTML was sent
to the ExcerptParser without the keep_svg option, we would
end up with stray </use> and </svg> tags on the parts of the
excerpt where the hashtag was, in this case when a post
push notification was sent.
Fixed this, and also added a way to only display a plaintext
version of the hashtag for cases like this via PrettyText#excerpt.
When checking whether an existing upload should be secure
based on upload references, do not count deleted posts, since
there is still a reference attached to them. This can lead to
issues where e.g. an upload is used for a post then later on
a custom emoji.
Currently, `Tag#topic_count` is a count of all regular topics regardless of whether the topic is in a read restricted category or not. As a result, any user can technically poll a sensitive tag to determine if a new topic is created in a category which the user does not have access to. We classify this as a minor leak in sensitive information.
The following changes are introduced in this commit:
1. Introduce `Tag#public_topic_count` which only count topics which have been tagged with a given tag in public categories.
2. Rename `Tag#topic_count` to `Tag#staff_topic_count` which counts the same way as `Tag#topic_count`. In other words, it counts all topics tagged with a given tag regardless of the category the topic is in. The rename is also done so that we indicate that this column contains sensitive information.
3. Change all previous spots which relied on `Tag#topic_count` to rely on `Tag.topic_column_count(guardian)` which will return the right "topic count" column to use based on the current scope.
4. Introduce `SiteSetting.include_secure_categories_in_tag_counts` site setting to allow site administrators to always display the tag topics count using `Tag#staff_topic_count` instead.
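A rough sketch of the column selection described in point 3, under the assumption that the guardian and the new site setting drive the choice (not the exact implementation):
```ruby
def self.topic_column_count(guardian)
  if guardian&.is_staff? || SiteSetting.include_secure_categories_in_tag_counts
    "staff_topic_count"
  else
    "public_topic_count"
  end
end
```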
This fixes a longstanding issue for sites with the
secure_uploads setting enabled. What would happen is a scenario
like this, since we did not check all places an upload could be
linked to whenever we used UploadSecurity to check whether an
upload should be secure:
* Upload is created and used for site setting, set to secure: false
since site setting uploads should not be secure. Let's say favicon
* Favicon for the site is used inside a post in a private category,
e.g. via a Onebox
* We changed the secure status for the upload to true, since it's been
used in a private category and we don't check if its originator
was a public place
* The site favicon breaks :'(
This was a source of constant consternation. Now, when an upload is _not_
being created, and we are checking if an existing upload should be
secure, we now check to see what the first record in the UploadReference
table is for that upload. If it's something public like a site setting,
then we will never change the upload to `secure`.
Prior to this change, we were parsing `Post#cooked` every time we
serialize a post to extract the usernames of mentioned users in the
post. However, the only reason we have to do this is to support
displaying a user's status beside each mention in a post on the client side when
the `enable_user_status` site setting is enabled. When
`enable_user_status` is disabled, we should avoid having to parse
`Post#cooked` since there is no point in doing so.
Since the new hashtag format has been added, we want site
admins to be able to rebake old posts with the old hashtag
format. This can now be done with `rake hashtags:mark_old_format_for_rebake`
which goes and marks posts with the old cooked version of hashtags
in this format for rebake:
```
<a class=\"hashtag\" href=\"/c/ux/14\">#<span>ux</span></a>
```
c.f. https://meta.discourse.org/t/what-rebake-is-required-for-the-new-autocomplete-styling/249642/12
This commit fixes the following issue:
* User creates a post
* Akismet or some other thing like requiring posts to be approved puts
the post in the review queue, deleting it
* Admin approves the post
* Email is never sent to mailing list mode subscribers
We intentionally do not enqueue this for every single post when
recovering a topic (i.e. recovering the first post) since the topics
could have a lot of posts with emails already sent, and we don't want
to clog sidekiq with thousands of notify jobs.
TL4 users can already list and unlist topics, but they can't see
the unlisted topics. This change brings this up to par by allowing
TL4 users to also see unlisted topics.
So it can easily be overwritten in a plugin for example.
### Added more tests to provide better coverage
We previously only had `u.silenced_till IS NULL` but I made it consistent with pretty much every other place where we check for "active" users.
These two new lines do change the query a tiny bit though.
**Before**
- You could not get the badge if you were currently silenced (no matter what period is being checked)
- You could get the badge if you were suspended 😬
**After**
- You can't get the badge if you were silenced during the past year
- You can't get the badge if you were suspended during the past year
### Improved the performance of the query by using `NOT EXISTS` instead of `LEFT JOIN / COUNT() = 0`
There is no difference in behaviour between
```sql
LEFT JOIN user_badges AS ub ON ub.user_id = u.id AND ...
[...]
HAVING COUNT(ub.*) = 0
```
and
```sql
NOT EXISTS (SELECT 1 FROM user_badges AS ub WHERE ub.user_id = u.id AND ...)
```
The only difference is performance-wise. The `NOT EXISTS` is 10-30% faster on very large databases (aka. posts and users in X millions). I checked on 3 of the largest datasets I could find.
When the thread is aborted, an exception is raised before the `start` of a job is set, and therefore raises an exception in the `ensure` block. This commit checks that `start` exists, and also adds `abort_on_exception=true` so that this issue would have caused test failures.
The `enable_new_notifications_menu` site setting allows sites that have
`navigation_menu` set to `legacy` to use the redesigned notifications
menu before switching to the new sidebar navigation menu.
This is a very subtle one. Setting the redirect URL is done by passing
a hash through a Discourse event. This is broken on Ruby 2 since the
support for keyword arguments in events was added.
In Ruby 2 the last argument is cast to keyword arguments if it is a
hash. The key point here is that this creates a new copy of the hash, so
what the plugin is modifying is not the hash that was passed.
This patch introduces a cookies rotator as indicated in the Rails
upgrade guide. This allows migrating from the old SHA1 digest to the
new SHA256 digest.
This will give us some aggregate stats on the defer queue performance.
It is limited to 100 entries (for safety) which are stored in an LRU cache.
Scheduler::Defer.stats can then be used to get an array that denotes:
- number of runs and completions (queued, finished)
- error count (errors)
- total duration (duration)
We can look later at exposing these metrics to gain visibility on the reason
the defer queue is clogged.
There was an issue with channel archiving, where at times the topic
creation could fail which left the archive in a bad state, as read-only
instead of archived. This commit does several things:
* Changes the ChatChannelArchiveService to validate the topic being
created first and if it is not valid report the topic creation errors
in the PM we send to the user
* Changes the UI message in the channel with the archive status to reflect
that topic creation failed
* Validate the new topic when starting the archive process from the UI,
and show the validation errors to the user straight away instead of
creating the archive record and starting the process
This also fixes another issue in the discourse_dev config: YAML parsing
no longer enables all classes by default, which was making the seeding
rake task for chat fail.
* Firefox now finally returns PerformanceMeasure from performance.measure
* Some TODOs were really more NOTE or FIXME material or no longer relevant
* retain_hours is not needed in ExternalUploadsManager, it doesn't seem like anywhere in the UI sends this as a param for uploads
* https://github.com/discourse/discourse/pull/18413 was merged so we can remove JS test workaround for settings
Currently when generating a onebox for Discourse topics, some important
context is missing such as categories and tags.
This patch addresses this issue by introducing a new onebox engine
dedicated to displaying this information when available. To get this
new information, categories and tags are exposed in the topic metadata
as opengraph tags.
If unaccent is called with quote-like Unicode characters then it can
generate invalid queries because some of the transformed quotes by
unaccent are not escaped and to_tsquery fails because of bad input.
This commit replaces more quote-like Unicode characters before
unaccent is called.
Fixes the support for kwargs in `DiscourseEvent.trigger()` on Ruby 3, e.g.
```rb
DiscourseEvent.trigger(:before_system_message_sent, message_type: type, recipient: @recipient, post_creator_args: post_creator_args, params: method_params)
```
Fixes https://github.com/discourse/discourse-local-site-contacts
If a secure upload's access_control_post was trashed, and an anon user
tried to look at that upload, they would get a 500 error rather than
the correct 403 because of an error inside the PostGuardian logic.
This commit does a couple of things:
1. Changes the limit of tags to include a subject for a
notification email to the `max_tags_per_topic` setting
instead of the arbitrary 3 limit
2. Adds both an X-Discourse-Tags and X-Discourse-Category
custom header to outbound emails containing the tags
and category from the subject, so people on mail clients
that allow advanced filtering (i.e. not Gmail) can filter
mail by tags and category, which is useful for mailing
list mode users
c.f. https://meta.discourse.org/t/headers-for-email-notifications-so-that-gmail-users-can-filter-on-tags/249982/17
This commit fixes an issue where the chat message bookmarks
did not respect the user's `bookmark_auto_delete_preference`
which they select in their user preference page.
Also, it changes the default for that value to "keep bookmark and clear reminder"
rather than "never", which ends up leaving a lot of expired bookmark
reminders around which are a pain to clean up.
When sending emails out via group SMTP, if we
are sending them to non-staged users we want
to mask those emails with BCC, just so we don't
expose them to anyone we shouldn't. Staged users
are ones that have likely only interacted with
support via email, and will likely include other
people who were CC'd on the original email to the
group.
Co-authored-by: Martin Brennan <martin@discourse.org>
This commit changes the default return value of `Auth::ManagedAuthenticator#primary_email_verified?` to false. We're changing the default to force developers to think about email verification when building a new authentication method. All existing authenticators (in core and official plugins) have been updated to explicitly define the `primary_email_verified?` method in their subclass of `Auth::ManagedAuthenticator` (example commit 65f57a4d05).
Internal topic: t/82084.
The `TopicView#bookmarks` method is called by `TopicViewSerializer` and `PostSerializer`
so we want to avoid running a meaningless query when the user is not
present.
When loading posts in a topic, the topic level guardian
checks are run multiple times even though all the posts belong to the
same topic. Profiling in production revealed that this accounted for a
significant amount of request time for a user that is not staff or anon.
Therefore, we're optimizing this by memoizing the topic level
calls in `PostGuardian`. Specifically, the results of
`TopicGuardian#can_see_topic?` and `PostGuardian#can_create_post?`
method calls are memoized per topic.
Locally profiling shows a significant improvement for normal users
loading a topic with 100 posts.
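A minimal sketch of the memoization, using a hypothetical helper name and a per-guardian cache keyed by topic id (the real code may structure this differently):
```ruby
def memoized_can_see_topic?(topic)
  @can_see_topic_cache ||= {}
  # Every post in a TopicView belongs to the same topic, so compute once per topic.
  @can_see_topic_cache.fetch(topic.id) do
    @can_see_topic_cache[topic.id] = can_see_topic?(topic)
  end
end
```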
Benchmark script command: `ruby script/bench.rb --unicorn --skip-bundle-assets --iterations 100`
Before:
```
topic user:
50: 114
75: 117
90: 122
99: 209
topic.json user:
50: 67
75: 69
90: 72
99: 162
```
After:
```
topic user:
50: 101
75: 104
90: 107
99: 184
topic.json user:
50: 53
75: 53
90: 56
99: 138
```
This commit removes 3 redundant DB queries when loading posts.
1. `@posts` will eventually have to be loaded so we can avoid two
additional queries.
2. No need to preload topic association of posts as we're already
dealing with a fixed topic in `TopicView`.
In some situations (e.g. disaster recovery), it may make sense to spin up a temporary readonly version of a cluster. In that situation, the s3 `expire_missing_assets` job would delete assets which are still in use by the canonical read-write version of the cluster.
To avoid that, this commit will skip deletion if the site is currently in readonly mode.
1. Fix bug where we were not waiting for all unicorn workers to start up
before running benchmarks.
2. Fix a bug where headers were not used when benchmarking. Admin
benchmarks were basically running as anon user.
3. Disable rate limits when in profile env. We're pretty much going to
hit the rate limit every time as a normal user.
4. Benchmark against a topic with a fixed posts count of 100. Previously the profiling script was just randomly creating posts
and we would benchmark against a topic with a fixed posts count of 30.
Sometimes, the script fails because no topic with a posts count of 30
exists.
5. Benchmarks are now run against a normal user on top of anon and
admin.
6. Add script option to select tests that should be run.
# Context
When a topic is reviewable by a group we give those group moderators some admin abilities including the ability to delete a topic.
# Problem
There are two main problems:
1. Currently when a group moderator deletes a topic they are redirected to root (not the same for staff)
2. Viewing the categories deleted topics (`c/foo/1/?status=deleted`) does not display the deleted topic to the group moderator (not the same for staff).
# Fix
If the `deleted_by` user is part of a group that matches the `reviewable_by_group` on a topic then don't redirect. This is the default interaction for staff to give them the ability to do things like restore the topic in case it was accidentally deleted.
To render the deleted topics as expected for the group moderator I am utilizing [the guardian scope of `guardian.can_see_deleted_topics?` for said category](https://github.com/discourse/discourse/pull/19618/files#diff-288e61b8bacdb29d9c2e05b42da6837b0036dcf1867332d977ca7c5e74a44297R802-R803)
Previously, browser logs would be printed to STDOUT halfway through the test run. This commit changes the behaviour so that the logs are included in the failure summary along with other rspec failure information.
There is an issue where chat message processing breaks due to
unhandled `SocketError` exceptions originating in the SSRF check,
specifically in `FinalDestination::Resolver`.
This change gives `FinalDestination::SSRFDetector` a new error class
to wrap the `SocketError` in, and has the `RetrieveTitle` class
handle that error gracefully.
A few specs in `dashboard_controller_spec.rb` set some state in redis but don't clean it up afterwards which causes other specs to fail when they're run after `dashboard_controller_spec.rb`.
Related commit: 18467d4.
At the time of writing, this is what the `TopicPosterSerializer` looks
like:
```
class TopicPosterSerializer < ApplicationSerializer
  attributes :extras, :description

  has_one :user, serializer: PosterSerializer
  has_one :primary_group, serializer: PrimaryGroupSerializer
  has_one :flair_group, serializer: FlairGroupSerializer
end
```
Within `PosterSerializer`, the `primary_group` and `flair_group`
associations are requested on the `user` object. However, the associations
have not been loaded on the `user` object at this point leading to the
N+1 queries problem. One may wonder
why the associations have not been loaded when the `TopicPosterSerializer`
has `has_one :primary_group` and `has_one :flair_group`. It turns out that `TopicPoster`
is just a struct containing the `user`, `primary_group` and
`flair_group` objects. The `primary_group` and `flair_group`
ActiveRecord objects are loaded separately in `UserLookup` and not preloaded when querying for
the users. This is done for performance reasons so that we are able to
load the `primary_group` and `flair_group` records in a single query
without duplication.
We were adding to the resolver's work queue before setting up the `@lookup` and `@parent` information. That could lead to the lookup being performed on the wrong (or `nil`) hostname. This also led to some flakiness in specs.
This task sometimes fails in CI due to temporary network issues. Retrying twice should help resolve those situations without needing to manually restart the job.
* UX: Wizard Step Enhancements
- Remove illustrations
- Add Emoji graphic to top of steps
- Add description below step title
- Move point of contact to last step
* Move step count to header, plus some button navigation tweaks
* add remaining emoji to step headers
* fix button logic on steps
* Update Point of Contact
* remove automated messages field
* adjust styling for counter, title, and emoji
* Update wording for logos
* Fix tests
* fix prettier
* fix specs
* set same width for steps except for styling screen
* use sentence case; remove duplicate copy under your organization fields
* fix missing buttons on small screens
* add spacing to buttons; adjust font weight to labels
* adjust styling for community logo step; use sentence case for button
* update copy for point of contact text helper
* use sentence case for field labels
* fix ui tests
* use btn-back class to fix ui tests
* reduce bottom margin for toggle fields
* clean up
Co-authored-by: Ella <ella.estigoy@gmail.com>
Follow up to a review in #18937, this commit changes the HashtagAutocompleteService to no longer use class variables to register hashtag data sources or types in context priority order. This is to address multisite concerns, where one site could e.g. have chat disabled and another might not. The filtered plugin registers I added will not be included if the plugin is disabled.
In both ChatMessage#rebake! and in ChatMessageProcessor
when we were calling ChatMessage.cook we were missing the
user_id to cook with, which causes missed hashtag cooks
because of missing permissions.
Previously, a restricted category's chat channel was available to all groups - even `readonly`. From now on, only users who belong to a group with `create_post` or `full` permissions can access that chat channel.
* DEV: Remove enable_whispers site setting
Whispers are enabled as long as there is at least one group allowed to
whisper, see whispers_allowed_groups site setting.
* DEV: Always enable whispers for admins if at least one group is allowed.
We were changing the user's user_option.bookmark_auto_delete_preference
to whatever they changed it to in the bookmark modal to use as default
for future bookmarks. However this was leading to a lot of confusion
since if you wanted to set it for one bookmark you had to remember to
change it back on the next one.
This commit removes that automatic functionality, and instead moves
the bookmark auto delete preference to User Preferences > Interface
in an explicit dropdown.
This commit adds a new notification that gets sent to admins when the site gets new features after an upgrade/deploy. Clicking on the notification takes the admin to the admin dashboard at `/admin` where they can see the new features under the "New Features" section.
Internal topic: t/87166.
This introduces another "section" of queries to the
hashtag autocomplete search, which returns results for
each type that start with the search term. So now results
will be in this order, and within these sections ordered
by the types in priority order:
1. Exact matches sorted by type
2. "starts with" sorted by type
3. Everything else sorted by type then name within type
Depending on the current state of things, sometimes the homepage style
wouldn't update because we were incorrectly blocking updates the
`desktop_category_page_style` site setting if the first item in the top
menu was 'categories'.
Added a test case to handle this situation.
See https://meta.discourse.org/t/248354
When originally introduced, `attempts` was only used in the read-only check
context.
With the introduction of the exponential backoff in cda370db, `attempts` was
also used to count loop iterations, but was left inside the if block instead of
being incremented every loop, meaning the exponential backoff was only
happening when the site was recently readonly.
Co-authored-by: jbrw <jamie.wilson@discourse.org>
* FEATURE: Add chat and sidebar toggles to the setup wizard
- Fix css alignment
- Add Enable Chat Toggle
- Add Enable Sidebar Toggle
* Check for the chat plugin
* Account for new sidebar step
* update chat and sidebar description
* UI: add checkmark as a visual indicator that it is enabled
* use new navigation_menu site setting for enabling the sidebar
* fix tests
* Add tests
* Update lib/wizard/step_updater.rb
Use HEADER_DROPDOWN instead of LEGACY
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* Fix spec. Use HEADER_DROPDOWN instead of LEGACY
Co-authored-by: Ella <ella.estigoy@gmail.com>
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
The tsquery used for searching is generated using functions from both
Ruby and PostgreSQL (for example, the unaccent function). Depending on the
term used, it generated an invalid tsquery. For example "can’t"
generated "''can''t''" instead of "''can''''t''".
* FIX: Use Category.secured(guardian) for hashtag datasource
Follow up to comments in #19219, changing the category
hashtag datasource to use Category.secured(guardian) instead
of Site.new(guardian).categories here since the latter does
more work for not much benefit, and the query time is the
same. Also eliminates some Hash -> Model back and forth
busywork. Add some more specs too.
* FIX: Server-side hashtag lookup cooking user loading
When we were using the PrettyText.options.currentUser
and parsing back and forth with JSON for the hashtag
lookups server-side, we had a bug where the user's
secure categories were not loaded since we never actually
loaded a User model from the database, only parsed it
from JSON.
This commit fixes the issue by instead using the
PrettyText.options.userId and looking up the user directly
from the database when calling hashtag_lookup via the
PrettyText::Helpers code when cooking server-side. Added
the missing spec to check for this as well.
If configured, this will be used for static JS assets which are stored on S3. This can be useful if you want to use different CDN providers/configuration for Uploads and JS
This commit allows us to type # in the UI and present autocomplete
results immediately with the following logic for the topic composer,
and reversed for the chat composer:
* Categories the user can access and has not muted sorted by `topic_count`
* Tags the user can access and has not muted sorted by `topic_count`
* Chat channels the user is a member of sorted by `messages_count`
So in effect, we allow searching for hashtags without a search term.
To do this we add a new `search_without_term` to each data source so
each one can define how it wants to handle this logic.
This new site setting replaces the
`enable_experimental_sidebar_hamburger` and `enable_sidebar` site
settings as the sidebar feature exits the experimental phase.
Note that we're replacing this without deprecation since the previous
site setting was considered experimental.
Internal Ref: /t/86563
When under extremely high load, it has been observed that updating the categories table when creating a new topic can become a bottleneck.
This change will reduce the two updates to one when a new topic is created within a category, and therefore should help with performance when under extremely high load.
Note that we don't have a database table and a model for post mentions yet, and I decided to implement it without adding one to avoid heavy data migrations. Still, we may want to add such a model later; that would be convenient, and we already have such a model for mentions in chat.
Note that status appears on all mentions on all posts in a topic except in the case when you have just posted a new post and it appears at the bottom of the topic. On such posts, status won't be shown immediately for now (you'll need to reload the page to see the status). I'll take care of it in one of the following PRs.
In some cases (e.g. user notification emails) we
are passing an excerpted/stripped version of the
post HTML to Email::Styles, at which point the
<span> elements surrounding the hashtag text have
been stripped. This caused an error when trying to
remove that element to replace the text.
Instead we can just remove all elements inside
a.hashtag-cooked and replace with the raw #hashtag
text which will work in more cases.
This is closer to git's redirect following behaviour. We prevented git
following redirects when we clone in order to prevent SSRF attacks.
Follow-up-to: 291bbc4fb9
When doing local oneboxes we sometimes want to allow
SVGs in the final preview HTML. The main case currently
is for the new cooked hashtags, which include an SVG
icon.
SVGs will be included in local oneboxes via `ExcerptParser` _only_
if they have the d-icon class, and if the caller for `post.excerpt`
specifies the `keep_svg: true` option.
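A hedged usage sketch based on the option described above:
```ruby
# Keep d-icon SVGs (e.g. cooked hashtag icons) in the excerpt used for a local onebox.
post.excerpt(200, keep_svg: true)
```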
Adds stats for API and user API requests similar to regular page views.
This comes with a new report to visualize API requests per day like the
consolidated page views one.
This commit introduces a new API for registering callbacks, which we'll execute when a user gets destroyed and the `delete_posts` opt is true. The chat plugin registers one callback and queues a job to destroy every message from that user in batches.
This reverts commit a71f6cf09b.
The github UI had an error I didn't notice which resulted
in a security commit being merged _after_ the bump, now
I have to redo the bump.
Previously we would unconditionally fetch all images via HTTP to grab
original sizing from cooked post processor in 2 different spots.
This was wasteful as we already calculate and cache this info in upload records.
This also simplifies some specs and reduces use of mocks.
This hasn't been necessary for many years, and is no longer supported following 84bec1cb. Only extremely old plugins might be trying to do this. All the affected open-source plugins I can find have already been updated.
We now use Ember CLI (core/plugins) and DiscourseJSProcessor (themes) for all Ember and template compilation. This commit removes the remnants of the legacy Sprockets-based Ember compilation system.
Sprockets, and its DiscourseJSProcessor-based Babel transformations, is still in use for a few assets. Ideally that will be removed/replaced in the near future.
Raw paths like `/test/path` are not supported natively in the CSP. This commit prepends the site's base URL to these paths. This allows plugins to add 'local' assets to the CSP without needing to hardcode the site's hostname.
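An illustrative sketch of the normalization (helper name is an assumption, not the actual extension code):
```ruby
# Turn raw local paths like "/test/path" into full URLs the CSP understands.
def normalize_csp_source(source)
  source.start_with?("/") ? "#{Discourse.base_url}#{source}" : source
end
```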
We're going to change the default return value of the `primary_email_verified?` method of `Auth::ManagedAuthenticator` to false, so we need to explicitly define the method on authenticators to return true where it makes sense to do so.
Internal topic: t/82084.
In some setups the keys start with "original/" and "optimized/" and in some setups the key is something like "foo/original/", so let's make the filter less strict.
This will be used by plugins to handle the client side of their custom
post validations without having to overwrite the whole composer save
action as it was done in other plugins.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
This commit fleshes out and adds functionality for the new `#hashtag` search and
lookup system, still hidden behind the `enable_experimental_hashtag_autocomplete`
feature flag.
**Serverside**
We have two plugin API registration methods that are used to define data sources
(`register_hashtag_data_source`) and hashtag result type priorities depending on
the context (`register_hashtag_type_in_context`). Reading the comments in plugin.rb
should make it clear what these are doing. Reading the `HashtagAutocompleteService`
in full will likely help a lot as well.
Each data source is responsible for providing its own **lookup** and **search**
method that returns hashtag results based on the arguments provided. For example,
the category hashtag data source has to take into account parent categories and
how they relate, and each data source has to define their own icon to use for the
hashtag, and so on.
The `Site` serializer has two new attributes that source data from `HashtagAutocompleteService`.
There is `hashtag_icons` that is just a simple array of all the different icons that
can be used for allowlisting in our markdown pipeline, and there is `hashtag_context_configurations`
that is used to store the type priority orders for each registered context.
When sending emails, we cannot render the SVG icons for hashtags, so
we need to change the HTML hashtags to the normal `#hashtag` text.
**Markdown**
The `hashtag-autocomplete.js` file is where I have added the new `hashtag-autocomplete`
markdown rule, and like all of our rules this is used to cook the raw text on both the clientside
and on the serverside using MiniRacer. Only on the server side do we actually reach out to
the database with the `hashtagLookup` function, on the clientside we just render a plainer
version of the hashtag HTML. Only in the composer preview do we do further lookups based
on this.
This rule is the first one (that I can find) that uses the `currentUser` based on a passed
in `user_id` for guardian checks in markdown rendering code. This is the `last_editor_id`
for both the post and chat message. In some cases we need to cook without a user present,
so the `Discourse.system_user` is used in this case.
**Chat Channels**
This also contains the changes required for chat so that chat channels can be used
as a data source for hashtag searches and lookups. This data source will only be
used when `enable_experimental_hashtag_autocomplete` is `true`, so we don't have
to worry about channel results suddenly turning up.
------
**Known Rough Edges**
- Onebox excerpts will not render the icon svg/use tags, I plan to address that in a follow up PR
- Selecting a hashtag + pressing the Quote button will result in weird behaviour, I plan to address that in a follow up PR
- Mixed hashtag contexts for hashtags without a type suffix will not work correctly, e.g. #ux which is both a category and a channel slug will resolve to a category when used inside a post or within a [chat] transcript in that post. Users can get around this manually by adding the correct suffix, for example ::channel. We may get to this at some point in future
- Icons will not show for the hashtags in emails since SVG support is so terrible in email (this is not likely to be resolved, but still noting for posterity)
- Additional refinements and review fixes wil
The UPDATE statement could lock the `uploads` table for a very long time
when the `verification_status` of lots of uploads changed. Splitting up
and simplifying the UPDATE solves that problem.
Also, this change ensures that only the needed data from the inventory
gets inserted into the `TEMP TABLE`. For example, there's no need to
have records for optimized images in that table when the `uploads` table
gets updated.
The hidden site setting `suppress_secured_categories_from_admin` will
suppress visibility of categories without explicit access from admins
in a few key areas (category drop downs and topic lists)
It is not intended to be a security wall since admins can amend any site
setting. Instead it is a feature that allows hiding the categories from the
UI.
Admins will still be able to see topics in categories without explicit
access using direct URLs or flags.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
These two dropdown fields in the setup wizard are pre-populated, and
there is no way to de-select a value, you can only change it. So we can
remove the required attribute so that an asterisk doesn't show up in the
UI.
Previously we were forcing node's max-old-space-size to be 2GB. This override was added in a01b1dd6 to avoid issues caused by a lower default node heap_size_limit on machines with less memory.
This commit makes that `max-old-space-size` override more specific so that it only applies to machines with less memory. Other machines will use Node's defaults.
The override is also lowered to 1GB. This is still high enough for the build to complete, while reducing memory usage.
https://meta.discourse.org/t/245547
* FEATURE: Default Composer Category Site Setting
- Create the default_composer_category site setting
- Replace general_category_id logic for auto selecting the composer
category
- Prevent Uncategorized from being selected if not allowed
- Add default_composer_category option to seeded categories
- Create a migration to populate the default_composer_category site
setting if there is a general_category_id populated
- Added some tests
* Add missing translation for the new site setting
* fix some js tests
* Just check that the header value is null
Currently, moderators are able to set primary group for users
irrespective of the `moderators_manage_categories_and_groups` site
setting value.
This change updates Guardian implementation to honour it.
* Remove old bookmark column ignores to follow up b22450c7a8
* Change some group site setting checks to use the _map helper
* Remove old secure_media helper stub for chat
* Change attr_accessor to attr_reader for preloaded_custom_fields to follow up 70af45055a
Previously the stylesheet cachebusting hash was based on the maximum mtime of files. This works well in development and during in-container updates (e.g. via docker_manager). However, when a fresh docker image is created for each deploy, the file mtimes will change even if the contents has not.
This commit changes the production logic to calculate the cachebuster from the filenames and contents of the relevant assets. This should be consistent across deploys, thereby improving cache hits and improving page load times.
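A rough sketch of the digest described above, assuming sorted filenames are hashed together with their contents (the real code may differ):
```ruby
require "digest"

# Stable across image rebuilds as long as the asset names and contents match.
def stylesheet_cachebuster(paths)
  Digest::SHA1.hexdigest(
    paths.sort.map { |p| "#{p}:#{Digest::SHA1.file(p).hexdigest}" }.join("|")
  )
end
```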
- Ensure it works with prefixed S3 buckets
- Perform a sanity check that all current assets are present on S3 before starting deletion
- Remove the lifecycle rule configuration and delete expired assets immediately. This task should be run post-deploy anyway, so adding a 10-day window is not required
Since the system user is a regular user, it can have its
`allow_private_messages` user option turned off, which
with our current `can_send_private_message?(Discourse.system_user)`
check inside the CurrentUserSerializer, will prevent any
user from sending messages in the UI if the system user is not
accepting PMs.
This commit adds a new `can_send_private_messages?` method to
the Guardian, which can be used in serializers and not depend
on the system user. When the user actually sends a message
we still rely on the old `can_send_private_message?(target)`
call to see if they are allowed to send the message to the target.
The new method is just to say they can "generally" send
private messages.
Previously, we didn't have a site-wide setting to set the default behavior for user profile visibility and user presence features. But we already have a user preference for that.
This task is supposed to skip uploading if the asset is already present in S3. However, when a bucket 'folder path' was configured, this logic was broken and so the assets would be re-uploaded every time.
This commit fixes that logic to include the bucket 'folder path' in the check
This should fix fetching from GitLab.
In order to get SSRF protection, we had to prevent redirects when cloning via git, but some repos are behind redirects and we want to support those too. We use `FinalDestination` before cloning to try to simulate git with redirects, but this isn't quite how git works, so there are some discrepancies between our SSRF-protected cloning behavior and normal git behavior that I'm trying to work around.
This is a temporary fix. It would be better to use `FinalDestination` to simulate the first request that git makes. I aim to make it work like that in the not too distant future, but this is better for now.
Depends on: #18806
We have a banner that prompts to edit the welcome topic, so let's not
show it in the topic list until it has been edited. Previously this
banner covered the welcome topic; now the banner will be above the topic
list, so we need to hide the welcome topic.
Before this commit, there was no way for us to efficiently check which
topics in an array a user can see. Therefore, this commit
introduces the `TopicGuardian#can_see_topic_ids` method which accepts an
array of `Topic#id`s and filters out the ids which the user is not
allowed to see. The `TopicGuardian#can_see_topic_ids` method is meant to
maintain feature parity with `TopicGuardian#can_see_topic?` at all
times so a consistency check has been added in our tests to ensure that
`TopicGuardian#can_see_topic_ids` returns the same result as
`TopicGuardian#can_see_topic?`. In the near future, the plan is for us
to switch to `TopicGuardian#can_see_topic_ids` completely but I'm not
doing that in this commit as we have to be careful with the performance
impact of such a change.
This method is not used anywhere in this commit but will
be relied on in a subsequent one.
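The consistency check mentioned above could look something like this RSpec-style sketch (the keyword argument name and the test setup are assumed):

```ruby
# Both code paths must agree on which topics the user may see.
batch_ids  = Guardian.new(user).can_see_topic_ids(topic_ids: topics.map(&:id))
one_by_one = topics.select { |t| Guardian.new(user).can_see_topic?(t) }.map(&:id)

expect(batch_ids).to contain_exactly(*one_by_one)
```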
Linking a commit from a GitHub pull request included the complete commit
message, instead of just the first line. The rest of the commit message
will be added to the body of the Onebox.
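In practice this is just a first-line split, something like the following (variable names are illustrative):

```ruby
# First line of the commit message becomes the onebox heading; the
# remainder goes into the onebox body.
first_line, rest = commit_message.split("\n", 2)
title = first_line
body  = rest&.strip
```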
When PostRevisor is called with `skip_validations: true` it can save
the post twice; one of the calls passes the correct `validate: false`
argument, but the other one does not.
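A simplified illustration of the fix (not the actual PostRevisor internals):

```ruby
# Every save performed during the revision must derive its validate flag
# from the skip_validations option, not just the first one.
validate = !opts[:skip_validations]
post.save(validate: validate)
```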
The filenames (minus the extensions) were being used as keys in a hash to pass to Terser, which meant that colocated connector files would overwrite each other. This commit moves the un-colocating earlier in the pipeline so that the fixed filenames are passed to Terser.
Followup to be3d6a56ce
This commit adds a new `/hashtag/search` endpoint and both
relevant JS and ruby plugin APIs to handle plugins adding their
own data sources and priority orders for types of things to search
when `#` is pressed.
A `context` param is added to `setupHashtagAutocomplete` which
a corresponding chat PR https://github.com/discourse/discourse-chat/pull/1302
will now use.
The UI calls `registerHashtagSearchParam` for each context that will
require a `#` search (e.g. the topic composer), for each type of record that
the context needs to search for, as well as a priority order for that type. Core
uses this call to add the `category` and `tag` data sources to the topic composer.
The `register_hashtag_data_source` ruby plugin API call is for plugins to
add a new data source for the hashtag searching endpoint, e.g. discourse-chat
may add a `channel` data source.
This functionality is hidden behind the `enable_experimental_hashtag_autocomplete`
flag, except for the change to `setupHashtagAutocomplete` since only core and
discourse-chat are using that function. Note this PR does **not** include required
changes for hashtag lookup or new styling.
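For example, a plugin might hook into this roughly as follows (the exact signature of `register_hashtag_data_source` and the data source class named here are hypothetical):

```ruby
# plugin.rb -- sketch only; the real API shape may differ.
after_initialize do
  if respond_to?(:register_hashtag_data_source)
    # e.g. expose chat channels as a searchable hashtag type
    register_hashtag_data_source(:channel, Chat::ChannelHashtagDataSource)
  end
end
```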
Theme javascript is now minified using Terser, just like our core/plugin JS bundles. This reduces the amount of data sent over the network.
This commit also introduces sourcemaps for theme JS. Browser developer tools will now be able to show each source file separately when browsing, and also in backtraces.
For theme test JS, the sourcemap is inlined for simplicity. Network load is not a concern for tests.
Previously, compiling theme 'extra_js' was done with a number of steps. Each theme_field would be compiled into its own value_baked column, and then the JavascriptCache content would be built by concatenating all of those compiled values.
This commit streamlines things by removing the value_baked step. The raw values of all extra_js theme_fields are passed directly to the ThemeJavascriptCompiler, and then the result is stored in the JavascriptCache.
In itself, this commit should not cause any behavior change. It is designed to open the door to more advanced compilation features which have interdependencies between different source files (e.g. template colocation, sourcemaps).
RS256 was added for Windows Hello and as a side effect we speculatively added
RS384 and RS512. These ciphers were not tested and are now failing on Solo
keys. It may be the case that the ciphers are not configured correctly on
our side. It may be the case that this is a Solo key bug.
Regardless, we are removing the ciphers and will only consider adding them
again if absolutely needed.
The previous implementation would attempt to fetch groups using the end-user's Google auth token. This only worked for admin accounts, or users with 'delegated' access to the `admin.directory.group.readonly` API.
This commit changes the approach to use a single 'service account' for fetching the groups. This removes the need to add permissions to all regular user accounts. I'll be updating the [meta docs](https://meta.discourse.org/t/226850) with instructions on setting up the service account.
This is technically a breaking change in behavior, but the existing implementation was marked experimental, and is currently unusable in production google workspace environments.
Previously, when the array had both nil and string values it raised the error "comparison of NilClass with String failed". I added the `.compact` method to prevent this issue, as per @martin-brennan's suggestion https://github.com/discourse/discourse/pull/18431#discussion_r984204788
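A minimal illustration of the failure and the fix:

```ruby
values = [nil, "beta", "alpha"]
values.sort          # ArgumentError: comparison of NilClass with String failed
values.compact.sort  # => ["alpha", "beta"]
```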
* Revert "Revert "FEATURE: Preload resources via link header (#18475)" (#18511)"
This reverts commit 95a57f7e0c.
* put behind feature flag
* env -> global setting
* declare global setting
* forgot one spot
* FEATURE: Hide Privacy Policy and TOS topics
As a way to simplify new sites this change will hide the privacy policy
and the TOS topics from the topic list. They can still be accessed and
edited though.
* add tests
Experiment moving from preload tags in the document head to preload information in the response headers.
While this is a minor improvement in most browsers (headers are parsed before the response body), this allows smart proxies like Cloudflare to "learn" from those headers and build HTTP 103 Early Hints for subsequent requests to the same URI, which will allow the user agent to download and parse our JS/CSS while we are waiting for the server to generate and stream the HTML response.
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
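The response-header form of the same preload information looks roughly like this (asset paths are placeholders):

```ruby
# Emit preload hints via the Link header instead of <link> tags in <head>;
# smart proxies can turn these into HTTP 103 Early Hints.
response.headers["Link"] = [
  "</assets/discourse.js>; rel=preload; as=script",
  "</stylesheets/desktop.css>; rel=preload; as=style",
].join(", ")
```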
Adds a new upload field for a second dark mode category logo.
This alternative will be used when the browser is in dark mode (similar to the global site setting for a dark logo).
These errors tend to indicate that the upload is missing on the remote store. This is bad, but we don't want it to block the dominant-color calculation process. This commit catches HTTP errors during the download, and fixes the `base_store.rb` implementation when `FileHelper.download` returns nil.
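A sketch of the defensive shape described here (the rescued error class, the download arguments, and `calculate_dominant_color` are assumptions, not the exact implementation):

```ruby
def dominant_color_for(upload)
  file = FileHelper.download(
    upload.url,
    max_file_size: 10.megabytes,
    tmp_file_name: "dominant_color"
  )
  return nil if file.nil? # download failed or the remote object is empty

  calculate_dominant_color(file)
rescue OpenURI::HTTPError => e
  # Missing remote object: log and move on instead of blocking the batch.
  Rails.logger.warn("Dominant color skipped for upload #{upload.id}: #{e.message}")
  nil
end
```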
When a user with an email matching one of those inside the
DISCOURSE_DEVELOPER_EMAILS env var logs in, we make
them into an admin user if they are not already. This
is used when setting up the first admin user for
self-hosters, since the discourse-setup script sets
the provided admin emails into DISCOURSE_DEVELOPER_EMAILS.
The issue being fixed here is that the new admins were
not being automatically added to the staff and admins
automatic groups, which was causing issues with the site
settings that are group_list-based and don't have an explicit
staff override. All we need to do is refresh the automatic
staff and admin groups when admin is granted to the user.
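The fix boils down to something like this sketch, placed wherever the developer-email admin grant happens (exact call site and surrounding checks are omitted):

```ruby
# After auto-granting admin for a DISCOURSE_DEVELOPER_EMAILS match, rebuild
# the automatic groups so group_list-based site settings pick the user up.
user.grant_admin!
Group.refresh_automatic_groups!(:admins, :staff)
```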