What does this change do?
This change improves the `upload_theme` system test helper method by
automatically setting the uploaded theme as the default theme for the
site. This is to make it easier for users to use the theme instead of
having to fiddle with theme previews. The default behaviour of setting
the uploaded theme as the site's default theme can be disabled by
passing `false` to the `set_theme_as_default` keyword argument.
This change also introduces a new `upload_theme_component` system test
helper method for uploading theme components. The difference from the
`upload_theme` helper method is that the theme component is
automatically added to the site's default theme when uploaded. The theme
which the theme component is added to can be configured via the
`parent_theme_id` keyword argument.
For both methods, we also no longer require the path to the theme to be
provided. Instead, both methods look through the callstack and
figure out the theme's directory based on the convention that the
theme's system tests are placed in the `spec/system` directory of the
theme folder. This change simplifies the usage of the methods for users
and helps to remove code like `upload_theme_component(File.expand_path("../..", __dir__))`.
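For illustration, a theme's system spec could then look something like the following (file path, selectors, and expectations are hypothetical):
```rb
# spec/system/header_spec.rb inside the theme repository
# frozen_string_literal: true

RSpec.describe "My theme header", type: :system do
  # No path needed: the helper infers the theme directory from the callstack
  # and sets the uploaded theme as the site's default theme.
  let!(:theme) { upload_theme }

  it "renders the custom header" do
    visit("/")
    expect(page).to have_css(".my-theme-header")
  end
end

# For a theme component, upload_theme_component adds it to the default theme,
# or to a specific theme via the parent_theme_id keyword argument:
#   let!(:component) { upload_theme_component }
# To skip making the uploaded theme the default:
#   let!(:theme) { upload_theme(set_theme_as_default: false) }
```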
Prior to this change, when both `normalize_emails` and `hide_email_address_taken`
were enabled, the expected `account_exists` email was only sent on exact email
matches.
This expands it so it also sends an email to the canonical email owner.
Why this change?
Currently, we do not have an easy way to test themes and theme components
using Rails system tests. While we support QUnit acceptance tests for
themes and theme components, QUnit acceptance tests stub out the server
and setting up the fixtures for server responses is difficult and can lead to a
frustrating experience. System tests on the other hand allow authors to
set up the test fixtures using our fabricator system which is much
easier to use.
What does this change do?
In order for us to allow authors to run system tests with their themes
installed, we are adding an `upload_theme` helper that is made available
when writing system tests. The `upload_theme` helper requires a single
`directory` parameter, where `directory` is the theme's local directory,
and returns a `Theme` record.
Doing this because the same issue exists for minio as it did for chromedriver,
fixed by TGX in X. We need time to add support for parallel
tests in the minio_runner gem so this doesn't happen:
```
Failure/Error:
File.open(dest, "wb", s.stat.mode) do |f|
IO.copy_stream(s, f)
f.chmod(f.lstat.mode)
end
Errno::ETXTBSY:
Text file busy @ rb_sysopen - /github/home/.minio_runner/minio
./lib/freedom_patches/copy_file.rb:10:in `copy_file'
./vendor/bundle/ruby/3.2.0/gems/minio_runner-0.1.1/lib/minio_runner/binary_manager.rb:49:in `block in download_binary'
./vendor/bundle/ruby/3.2.0/gems/minio_runner-0.1.1/lib/minio_runner/network.rb:72:in `download'
./vendor/bundle/ruby/3.2.0/gems/minio_runner-0.1.1/lib/minio_runner/binary_manager.rb:48:in `download_binary'
./vendor/bundle/ruby/3.2.0/gems/minio_runner-0.1.1/lib/minio_runner/binary_manager.rb:29:in `install'
./vendor/bundle/ruby/3.2.0/gems/minio_runner-0.1.1/lib/minio_runner/binary_manager.rb:9:in `install'
./vendor/bundle/ruby/3.2.0/gems/minio_runner-0.1.1/lib/minio_runner.rb:62:in `install_binaries'
./vendor/bundle/ruby/3.2.0/gems/minio_runner-0.1.1/lib/minio_runner.rb:50:in `start'
./spec/support/system_helpers.rb:157:in `setup_s3_system_test'
```
This adds a new secure_uploads_pm_only site setting. When secure_uploads
is true with this setting, only uploads created in PMs will be marked
secure; no uploads in secure categories will be marked as secure, and
the login_required site setting has no bearing on upload security
either.
This is meant to be a stopgap solution to prevent secure uploads
in a single place (private messages) for sensitive admin data exports.
Ideally we would want a more comprehensive way of saying that certain
upload types get secured, which would be a hybrid/mixed mode for secure uploads,
but for now this will do the trick.
They're both constant per-instance values, so there is no need to store them
in the session. This also makes the code a bit more readable by moving
the `session_challenge_key` method up to the `DiscourseWebauthn` module.
This commit adds some system specs to test uploads with
direct to S3 single and multipart uploads via uppy. This
is done with minio as a local S3 replacement. We are doing
this to catch regressions when uppy dependencies need to
be upgraded or we change uppy upload code, since before
this there was no way to know outside manual testing whether
these changes would cause regressions.
Minio's server lifecycle and the installed binaries are managed
by the https://github.com/discourse/minio_runner gem, though the
binaries are already installed on the discourse_test image we run
GitHub CI from.
These tests will only run in CI unless you specifically use the
CI=1 or RUN_S3_SYSTEM_SPECS=1 env vars.
For a history of experimentation here see https://github.com/discourse/discourse/pull/22381
Related PRs:
* https://github.com/discourse/minio_runner/pull/1
* https://github.com/discourse/minio_runner/pull/2
* https://github.com/discourse/minio_runner/pull/3
When we receive the stream parameter, we'll queue a job that periodically publishes partial updates, and after the summarization finishes, a final one with the completed version, plus metadata.
`summary-box` listens to these updates via MessageBus, and updates state accordingly.
Context of this change:
There are two site settings which an admin can configure to set the
default categories and tags that are shown for a new user. `default_navigation_menu_categories`
is used to determine the default categories while
`default_navigation_menu_tags` is used to determine the default tags.
Prior to this change, when seeding the defaults, we would filter out the
categories/tags that the user does not have permission to see. However,
this means that when the user does eventually gain permission down the
line, the default categories and tags do not appear.
What does this change do?
With this commit, we have changed it such that all the categories and tags
configured in the `default_navigation_menu_categories` and
`default_navigation_menu_tags` site settings are seeded regardless of
the user's visibility of the categories or tags. During
serialization, we will then filter out the categories and tags which the
user does not have visibility of.
Why this change?
We were verifying that a url for a section link in a custom sidebar
section is valid by passing the url string to `Router#recognize`.
If a `rootURL` has been set on the router, the url string that is passed
to `Router#recognize` has to start with the `rootURL`.
This commit fixes the problem by ensuring that `RouteInfoHelper` adds
the application subfolder path before calling `Router#recognize` on the
url string.
Why this change?
We're already displaying a category's description as the title attribute
on the category section link. We should do the same for tags as well.
We need a nice way to only return some hashtag data
sources based on various site settings. This commit
adds an enabled? method that every hashtag data source
must implement. If this returns false the data source
will not be used at all for hashtag lookups or search.
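A hypothetical data source sketch; only the `enabled?` contract comes from this change, and the class, setting, and other method names are assumptions:
```rb
# Hypothetical hashtag data source gated behind a site setting.
class FooHashtagDataSource
  # Required by this change: when this returns false, the data source is
  # skipped entirely for hashtag lookups and search.
  def self.enabled?
    SiteSetting.foo_enabled # assumed setting name, for illustration only
  end

  def self.lookup(guardian, slugs)
    [] # return hashtag items for exact slug matches
  end

  def self.search(guardian, term, limit)
    [] # return hashtag items matching the search term
  end
end
```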
This introduces a PLATFORM_KEY_MODIFIER const that
can be used both client and server side, to determine
whether we should be using the Meta or Ctrl key based
on whether the user is on Windows/Linux or Mac.
Updates the interface for implementing summarization strategies and adds a cache layer to summarize topics once.
The cache stores the final summary and each chunk used to build it, which will be useful when we have to extend or rebuild it.
This PR splits up the preference that controls the count vs dot and destination of sidebar links, which is really hard to understand, into 2 simpler checkboxes.
The new preferences/checkboxes are off by default, but there are database migrations to switch the old preference to the new ones so that existing users don't have to update their preferences to keep their preferred behavior of sidebar links when this change is rolled out.
Internal topic: t/103529.
We use it like this:
expect(message.created_at).to eq_time(created_at)
The problem is that if one or both of the values are `nil`, the matcher fails
with this error:
NoMethodError: undefined method `-' for nil:NilClass
This commit adds support for `nil` values. If both time values are `nil` they are equal;
if only one value is `nil` they aren't.
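A minimal sketch of the nil-aware comparison (the matcher internals and tolerance here are illustrative; the real matcher lives in the spec support code):
```rb
RSpec::Matchers.define :eq_time do |expected|
  match do |actual|
    if actual.nil? || expected.nil?
      actual.nil? && expected.nil? # both nil => equal; only one nil => not equal
    else
      (actual - expected).abs < 0.001 # tolerance is illustrative
    end
  end
end
```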
https://meta.discourse.org/t/markdown-preview-and-result-differ/263878
The result of this markdown differed between the composer preview and the post. This is solved by updating Loofah to the latest version and using HTML5 fragments, as our user had reported. While the change was only needed in cooked_post_processor.rb for this fix, other areas also had to be updated due to various side effects.
Different environments run on different hardware, so response times vary.
Instead of hardcoding the timeout for
`SystemHelpers#try_until_success` to 2 seconds, we change it such that
it follows `Capybara.default_wait_timeout` which we have configured to
be higher in environments which run on lousier hardware.
This change should reduce the amount of flakiness we're seeing on CI
with tests that rely on `SystemHelpers#try_until_success`.
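A minimal sketch of the idea, assuming Capybara's standard `default_max_wait_time` setting stands in for the configured timeout (the real helper may differ):
```rb
def try_until_success(timeout: Capybara.default_max_wait_time, frequency: 0.01)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)

  begin
    yield
  rescue RSpec::Expectations::ExpectationNotMetError
    # Keep retrying until the block's expectations pass or the timeout is hit.
    raise if Process.clock_gettime(Process::CLOCK_MONOTONIC) - start > timeout
    sleep frequency
    retry
  end
end
```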
* FEATURE: Content custom summarization strategies.
This PR establishes a pattern for plugins to register alternative ways of summarizing content by extending a class that defines an interface.
Core controls which strategy we'll use and who has access to it through the `summarization_strategy` and `custom_summarization_allowed_groups` site settings. It also defines the UI for summarizing topics.
Other plugins can access this summarization mechanism and implement their features, removing cross-plugin customizations, as it currently happens between chat and the discourse-ai plugin.
* Group membership validation and rate limiting
* Work with objects instead of classes
* Port summarization feature from discourse-ai to chat
* Rename available summaries to 'Top Replies' and 'Summary'
This patch sets some limits on custom fields:
- an entity can’t have more than 100 custom fields defined on it
- a custom field can’t hold a value longer than 10,000,000 characters
The current implementation of custom fields is relatively complex and
does an upsert in SQL at some point, thus preventing us from simply adding an
`ActiveRecord` validation on the custom field model without having to
rewrite a part of the existing logic.
That’s one of the reasons this patch implements the validations in the
`HasCustomFields` module, adding them to the model that includes the module.
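A hypothetical shape of those validations; the constant and method names are assumptions, only the limits come from this patch:
```rb
module HasCustomFields
  MAX_CUSTOM_FIELDS_COUNT = 100
  MAX_CUSTOM_FIELD_VALUE_LENGTH = 10_000_000

  def validate_custom_fields
    if custom_fields.size > MAX_CUSTOM_FIELDS_COUNT
      errors.add(:base, "can't have more than #{MAX_CUSTOM_FIELDS_COUNT} custom fields")
    end

    custom_fields.each do |name, value|
      if value.to_s.length > MAX_CUSTOM_FIELD_VALUE_LENGTH
        errors.add(:base, "custom field '#{name}' value is too long")
      end
    end
  end
end
```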
Clicking on TOC heading anchors in a subfolder setup was breaking the current URL for users.
Other than the fix this change introduces the ability to test the subfolder setup in system specs.
Why is this change required?
We had a system test fail right after visiting a URL because an
element that was supposed to be on screen wasn't. In the screenshot
captured, the app was still stuck on the splash screen. We're not sure
if this has been causing flakiness in our system tests, but there is no
good reason to enable the splash screen, which adds some overhead to each
page load.
What is the problem?
In the test environment, we were calling `SiteSetting.setting` directly
to introduce new site settings. However, this leads to changes in the state of the SiteSettings
hash that is stored in memory as tests run. Changing or leaking state
when running tests is one of the major contributors to test flakiness.
An example of how this resulted in test flakiness is our `spec/integrity/i18n_spec.rb` spec file, which
had a test case that would fail because a new "plugin_setting" site
setting was registered in another test case but did not have any
translations set.
What is the fix?
There are a couple of changes being introduced in this commit:
1. Make `SiteSetting.setting` a private method as it is not safe to be
exposed as a public method of the `SiteSetting` class
2. Change test cases to use existing site settings in Discourse instead
of creating custom site settings. Existing site settings are not
removed often so we don't really need to dynamically add new site
settings in test cases. Even if the site settings being used in test
cases are removed, updating the test cases to rely on other site
settings is a very easy change.
3. Set up a plugin instance in the test environment as a "fixture"
instead of having each test create its own plugin instance.
- Improves styleguide support
- Adds toggle color scheme to styleguide
- Adds properties mutators to styleguide
- Attempts to quit a session as soon as we're done with it in system specs; this should at least free resources faster
- Refactors fabricators to simplify them
- Adds more fabricators (uploads for example)
- Starts implementing components pattern in system specs
- Uses the Chat::Message creator to create messages in system specs; this should help produce more realistic specs, as the side effects now happen
SearchIndexer is only automatically disabled in `before_all` and `before` blocks, which means at the start
of test runs. Enabling the SearchIndexer in one `fab!` block will affect
all other `fab!` blocks, which is not ideal as we may be indexing stuff
for search when we don't need to.
Currently, some links aren’t properly built in Discobot when Discourse
is hosted in a subfolder. This is because we’re providing
`Discourse.base_url` to the Rails helpers which contains the base URL
*with* the prefix. But Rails helpers already handle this prefix so the
resulting link gets the prefix twice.
The fix is quite simple: use `Discourse.base_url_no_prefix` instead of
`Discourse.base_url`.
This is used when calling click_message_action_mobile to wait
for the message actions menu to finish animating up before
attempting to click on it using capybara. Without this, in
the time between capybara getting the x,y position of a menu
item to click on and the click being fired, the animating menu
can move that item out of the way.
With the new helper, we constantly compare x,y client rect positions
for the animating element and wait for them to stabilise. Once they
do, it means the animation is done, and it is safe to click on
anything within the element.
Re-enables mobile system specs for chat that were ignored because
of this.
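A sketch of how such a helper can detect that the animation has settled (the helper name, polling interval, and timeout are illustrative):
```rb
# Polls the element's bounding rect until two consecutive reads match,
# i.e. the element has stopped moving and is safe to click.
def wait_for_animation(element, timeout: 5)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + timeout
  previous_position = nil

  loop do
    position = element.evaluate_script("(r => [r.x, r.y])(this.getBoundingClientRect())")
    break if position == previous_position

    if Process.clock_gettime(Process::CLOCK_MONOTONIC) > deadline
      raise "element did not stop animating within #{timeout}s"
    end

    previous_position = position
    sleep 0.05
  end
end
```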
Similar spirit to e195e6f614,
this moves the Bookmarkable registration to DiscoursePluginRegistry
so plugins which are not enabled do not register additional
bookmarkable classes.
Small js fix for fast edit to allow posts to save changes when the post contains apostrophes and quotation marks. Replaces unicode characters in text prior to saving the edit.
Includes system tests for fast edit and introduces a new system spec component for fast edit usage.
If a post contains a domain with a word that stems to a non-prefix,
single-word searches will not match it.
For example: in happy.com, `happy` stems to `happi`. Thus searches for happy
will not find URLs with it included.
This bloats the index a tiny bit, but impact is limited.
Will require a full reindex of search to take effect.
When we are done refining search we can consider a full version bump.
* DEV: Rename channel path to just c
Also swap the channel id and channel slug params to be consistent with core.
* linting
* channel_path
* Drop slugify helper and channel route without slug
* Request slug and route models through the channel model if possible
* DEV: Pass messageId as a dynamic segment instead of a query param
* Ensure change is backwards-compatible
* drop query param from oneboxes
* Correctly extract channelId from routes
* Better route organization using siblings for regular and near-message
* Ensures sessions are unique even when using parallelism
* prevents didReceiveAttrs from clearing input mid-test
* we disable animations in capybara so sometimes the message was barely showing
* adds wait
* ensures finished loading
* is it causing more harm than good?
* this check is slowing things for no reason
* actually target the button
* more resilient select chat message
* apply similar fix to bookmark
* fix
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
This commit allows us to set the channel slug when creating new chat
channels. As well as this, it introduces a new `SlugsController` which can
generate a slug using `Slug.for` and a name string for input. We call this
after the user finishes typing the channel name (debounced) and fill in
the autogenerated slug in the background, and update the slug input
placeholder.
This autogenerated slug is used by default, but if the user writes anything
else in the input it will be used instead.
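For reference, the slug generation behind this boils down to a call like the following (the output shown is illustrative and depends on the configured slug generation method):
```rb
Slug.for("Support & Feedback") # => "support-feedback"
```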
Currently, `Tag#topic_count` is a count of all regular topics regardless of whether the topic is in a read restricted category or not. As a result, any user can technically poll a sensitive tag to determine if a new topic is created in a category which the user does not have access to. We classify this as a minor leak of sensitive information.
The following changes are introduced in this commit:
1. Introduce `Tag#public_topic_count` which only counts topics which have been tagged with a given tag in public categories.
2. Rename `Tag#topic_count` to `Tag#staff_topic_count` which counts the same way as `Tag#topic_count`. In other words, it counts all topics tagged with a given tag regardless of the category the topic is in. The rename is also done so that we indicate that this column contains sensitive information.
3. Change all previous spots which relied on `Tag#topic_count` to rely on `Tag.topic_column_count(guardian)` which will return the right "topic count" column to use based on the current scope.
4. Introduce `SiteSetting.include_secure_categories_in_tag_counts` site setting to allow site administrators to always display the tag topics count using `Tag#staff_topic_count` instead.
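A hypothetical shape of that column switch (the real guardian logic may differ):
```rb
# Picks which counter column to expose for the current scope.
def self.topic_column_count(guardian)
  if guardian&.is_staff? || SiteSetting.include_secure_categories_in_tag_counts
    :staff_topic_count
  else
    :public_topic_count
  end
end
```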
Fixes the support for kwargs in `DiscourseEvent.trigger()` on Ruby 3, e.g.
```rb
DiscourseEvent.trigger(:before_system_message_sent, message_type: type, recipient: @recipient, post_creator_args: post_creator_args, params: method_params)
```
Fixes https://github.com/discourse/discourse-local-site-contacts
We are all in on system specs, so this commit moves all the chat quoting acceptance tests (some of which have been skipped for a while) into system specs.
Previously, calling `sign_in` would cause the browser to be redirected to `/`, and would cause the Ember app to boot. We would then call `visit()`, causing the app to boot for a second time.
This commit adds a `redirect=false` option to the `/session/username/become` route. This avoids the unnecessary boot of the app, and leads to significantly faster system spec run times.
In local testing, this takes the full system-spec suite for chat from ~6min to ~4min.
Follow up to a review in #18937, this commit changes the HashtagAutocompleteService to no longer use class variables to register hashtag data sources or types in context priority order. This is to address multisite concerns, where one site could e.g. have chat disabled and another might not. The filtered plugin registers I added will not be included if the plugin is disabled.
In #15474 we introduced dedicated support for date ranges. As part of that
change we added a fallback of "magic" date ranges, which treats dates in
any paragraph with exactly two dates as a range. There were discussions
about migrating all such paragraphs to use the new date range element, but
it was ultimately decided against.
This change removes the fallback and, as a bonus, adds support for multiple
date ranges in the same paragraph.
binding.pry gives a nicer syntax-highlighted environment
and better formatting for inspecting objects, and we still
have the byebug continue/step/next commands (which you can
also alias via .pryrc) via the pry-byebug gem
This helper is intended only for dev purposes. It allows you to pause a test while still being able to interact with the browser.
Usage:
```
it "works" do
visit("/")
pause_test
expect(page).to have_css(".foo")
end
```
This new site setting replaces the
`enable_experimental_sidebar_hamburger` and `enable_sidebar` site
settings as the sidebar feature exits the experimental phase.
Note that we're replacing these without deprecation since the previous
site settings were considered experimental.
Internal Ref: /t/86563
Fixes broken behaviour of arrow buttons for certain users as the interval to scroll menu can be cancelled before the scrolling actually happens.
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
This commit fleshes out and adds functionality for the new `#hashtag` search and
lookup system, still hidden behind the `enable_experimental_hashtag_autocomplete`
feature flag.
**Serverside**
We have two plugin API registration methods that are used to define data sources
(`register_hashtag_data_source`) and hashtag result type priorities depending on
the context (`register_hashtag_type_in_context`). Reading the comments in plugin.rb
should make it clear what these are doing. Reading the `HashtagAutocompleteService`
in full will likely help a lot as well.
Each data source is responsible for providing its own **lookup** and **search**
method that returns hashtag results based on the arguments provided. For example,
the category hashtag data source has to take into account parent categories and
how they relate, and each data source has to define their own icon to use for the
hashtag, and so on.
The `Site` serializer has two new attributes that source data from `HashtagAutocompleteService`.
There is `hashtag_icons` that is just a simple array of all the different icons that
can be used for allowlisting in our markdown pipeline, and there is `hashtag_context_configurations`
that is used to store the type priority orders for each registered context.
When sending emails, we cannot render the SVG icons for hashtags, so
we need to change the HTML hashtags to the normal `#hashtag` text.
**Markdown**
The `hashtag-autocomplete.js` file is where I have added the new `hashtag-autocomplete`
markdown rule, and like all of our rules this is used to cook the raw text on both the clientside
and on the serverside using MiniRacer. Only on the server side do we actually reach out to
the database with the `hashtagLookup` function, on the clientside we just render a plainer
version of the hashtag HTML. Only in the composer preview do we do further lookups based
on this.
This rule is the first one (that I can find) that uses the `currentUser` based on a passed
in `user_id` for guardian checks in markdown rendering code. This is the `last_editor_id`
for both the post and chat message. In some cases we need to cook without a user present,
so the `Discourse.system_user` is used in this case.
**Chat Channels**
This also contains the changes required for chat so that chat channels can be used
as a data source for hashtag searches and lookups. This data source will only be
used when `enable_experimental_hashtag_autocomplete` is `true`, so we don't have
to worry about channel results suddenly turning up.
------
**Known Rough Edges**
- Onebox excerpts will not render the icon svg/use tags; I plan to address that in a follow-up PR
- Selecting a hashtag + pressing the Quote button will result in weird behaviour; I plan to address that in a follow-up PR
- Mixed hashtag contexts for hashtags without a type suffix will not work correctly, e.g. #ux which is both a category and a channel slug will resolve to a category when used inside a post or within a [chat] transcript in that post. Users can get around this manually by adding the correct suffix, for example ::channel. We may get to this at some point in future
- Icons will not show for the hashtags in emails since SVG support is so terrible in email (this is not likely to be resolved, but still noting for posterity)
- Additional refinements and review fixes will follow.
* Remove old bookmark column ignores to follow up b22450c7a8
* Change some group site setting checks to use the _map helper
* Remove old secure_media helper stub for chat
* Change attr_accessor to attr_reader for preloaded_custom_fields to follow up 70af45055a
Before this commit, there was no way for us to efficiently check which
topics in an array a user can see. Therefore, this commit
introduces the `TopicGuardian#can_see_topic_ids` method which accepts an
array of `Topic#id`s and filters out the ids which the user is not
allowed to see. The `TopicGuardian#can_see_topic_ids` method is meant to
maintain feature parity with `TopicGuardian#can_see_topic?` at all
times so a consistency check has been added in our tests to ensure that
`TopicGuardian#can_see_topic_ids` returns the same result as
`TopicGuardian#can_see_topic?`. In the near future, the plan is for us
to switch to `TopicGuardian#can_see_topic_ids` completely but I'm not
doing that in this commit as we have to be careful with the performance
impact of such a change.
This method is currently not being used in the current commit but will
be relied on in a subsequent commit.
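Illustrative usage (the keyword argument name is assumed):
```rb
# Filters the given ids down to just the topics this user is allowed to see.
visible_topic_ids = Guardian.new(user).can_see_topic_ids(topic_ids: topics.map(&:id))
```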
This commit makes pending reviewables show up in the main tab (a.k.a. "all notifications" tab). Pending reviewables along with unread notifications are always shown first and they're sorted based on their creation date (most recent comes first).
The dismiss button currently only shows up if there are unread notifications and it doesn't dismiss pending reviewables. We may follow up with another change soon that makes the dismiss button work with reviewables and remove them from the list without taking any action on them.
Follow-up to 079450c9e4.
cf. e62e93f83a
This PR also makes it so `bot` (negative ID) and `system` users are always allowed
to send PMs, since the old conditional was just based on `enable_personal_messages`
* FEATURE: Make General the default category
* Set general as the default category in the composer model instead
* use semicolon
* Enable allow_uncategorized_topics in create_post spec helper for now
* Check if general_category_id is set
* Enable allow_uncategorized_topics for test env
* Provide an option to the create_post helper to not set allow_uncategorized_topics
* Add tests to check that category… is not present and that General is selected automatically
This commit renames all secure_media related settings to secure_uploads_* along with the associated functionality.
This is being done because "media" does not really cover it, we aren't just doing this for images and videos etc. but for all uploads in the site.
Additionally, in future we want to secure more types of uploads, and enable a kind of "mixed mode" where some uploads are secure and some are not, so keeping media in the name is just confusing.
This also keeps compatibility with the `secure-media-uploads` path, and changes new
secure URLs to be `secure-uploads`.
Deprecated settings:
* secure_media -> secure_uploads
* secure_media_allow_embed_images_in_emails -> secure_uploads_allow_embed_images_in_emails
* secure_media_max_email_embed_image_size_kb -> secure_uploads_max_email_embed_image_size_kb
This commit introduces rails system tests run with chromedriver, selenium,
and headless chrome to our testing toolbox.
We use the `webdrivers` gem and `selenium-webdriver` which is what
the latest Rails uses so the tests run locally and in CI out of the box.
You can use `SELENIUM_VERBOSE_DRIVER_LOGS=1` to show extra
verbose logs of what selenium is doing to communicate with the system
tests.
By default JS logs are verbose so errors from JS are shown when
running system tests; you can disable this with
`SELENIUM_DISABLE_VERBOSE_JS_LOGS=1`
You can use `SELENIUM_HEADLESS=0` to run the system
tests inside a chrome browser instead of headless, which can be useful to debug things
and see what the spec sees. See note above about `bin/ember-cli` to avoid
surprises.
I have modified `bin/turbo_rspec` to exclude `spec/system` by default,
support for parallel system specs is a little shaky right now and we don't
want them slowing down the turbo by default either.
### PageObjects and System Tests
To make querying and inspecting parts of the page easier
and more reusable between system tests, we are using the
concept of [PageObjects](https://www.selenium.dev/documentation/test_practices/encouraged/page_object_models/) in
our system tests. A "Page" here is generally corresponds to
an overarching ember route, e.g. "Topic" for `/t/324345/some-topic`,
and this contains logic for querying components within the topic
such as "Posts".
I have also split "Modals" into their own entity. Further down the
line we may want to explore creating independent "Component"
contexts.
Capybara DSL should be included in each PageObject class,
reference for this can be found at https://rubydoc.info/github/teamcapybara/capybara/master#the-dsl
For system tests, since they are so slow, we want to focus on
the "happy path" and not do every different possible context
and branch check using them. They are meant to be overarching
tests that check a number of things are correct using the full stack
from JS and ember to rails to ruby and then the database.
### CI Setup
Whenever a system spec fails, a screenshot
is taken and a build artifact is produced _after the entire CI run is complete_,
which can be downloaded from the Actions UI in the repo.
Most importantly, a step to build the Ember app using Ember CLI
is needed, otherwise the JS assets cannot be found by capybara:
```
- name: Build Ember CLI
run: bin/ember-cli --build
```
A new `--build` argument has been added to `bin/ember-cli` for this
case, which is not needed locally if you already have the discourse
rails server running via `bin/ember-cli -u` since the whole server is built and
set up by default.
Co-authored-by: David Taylor <david@taylorhq.com>
We have a .ics endpoint for user bookmarks; this
commit makes it so polymorphic bookmarks work on
that endpoint, using the serializer associated with
the RegisteredBookmarkable.
A bit of a mixed bag, this addresses several edge areas of bookmarks and makes them compatible with polymorphic bookmarks (hidden behind the `use_polymorphic_bookmarks` site setting). The main ones are:
* ExportUserArchive compatibility
* SyncTopicUserBookmarked job compatibility
* Sending different notifications for the bookmark reminders based on the bookmarkable type
* Import scripts compatibility
* BookmarkReminderNotificationHandler compatibility
This PR also refactors the `register_bookmarkable` API so it accepts a class descended from a `BaseBookmarkable` class instead. This was done because we kept having to add more and more lambdas/properties inline and it was very messy, so a factory pattern is cleaner. The classes can be tested independently as well.
Some later PRs will address some other areas like the discourse narrative bot, advanced search, reports, and the .ics endpoint for bookmarks.
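A hypothetical registration under the refactored API; the class body is illustrative, only the class-based registration comes from this change:
```rb
# Behaviour previously passed in as lambdas/properties would now live on the
# class itself (model, serializer, preloads, reminder handling, and so on).
class PostBookmarkable < BaseBookmarkable
end

register_bookmarkable(PostBookmarkable)
```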
Discourse has the Discourse Connect Provider protocol that makes it possible to
use a Discourse instance as an identity provider for external sites. As a
natural extension to this protocol, this PR adds a new feature that makes it
possible to use Discourse as a 2FA provider as well as an identity provider.
The rationale for this change is that it's very difficult to implement 2FA
support in a website and if you have multiple websites that need to have 2FA,
it's unrealistic to build and maintain a separate 2FA implementation for each
one. But with this change, you can piggyback on Discourse to take care of all
the 2FA details for you for as many sites as you wish.
To use Discourse as a 2FA provider, you'll need to follow this guide:
https://meta.discourse.org/t/-/32974. It walks you through what you need to
implement on your end/site and how to configure your Discourse instance. Once
you're done, there is only one additional thing you need to do which is to
include `require_2fa=true` in the payload that you send to Discourse.
When Discourse sees `require_2fa=true`, it'll prompt the user to confirm their
2FA using whatever methods they've enabled (TOTP or security keys), and once
they confirm they'll be redirected back to the return URL you've configured and
the payload will contain `confirmed_2fa=true`. If the user has no 2FA methods
enabled however, the payload will not contain `confirmed_2fa`, but it will
contain `no_2fa_methods=true`.
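A sketch of what the consumer site's side might look like; the helper/variable names are illustrative, and only the payload keys (`require_2fa`, `confirmed_2fa`, `no_2fa_methods`) come from the description above:
```rb
require "securerandom"

# Build the outgoing DiscourseConnect payload with the 2FA requirement added.
payload = {
  nonce: SecureRandom.hex,
  return_sso_url: "https://example.com/session/sso_callback",
  require_2fa: true,
}
# ...Base64-encode, sign with the shared secret, and redirect to the Discourse
# provider endpoint as usual...

# On the way back, after verifying the signature and decoding the payload:
returned_params = {} # decoded payload from Discourse (verification omitted here)

if returned_params["confirmed_2fa"] == "true"
  # the user confirmed 2FA on Discourse
elsif returned_params["no_2fa_methods"] == "true"
  # the user has no 2FA methods enabled; decide how to handle this case
end
```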
You'll need to be careful to re-run all the security checks and ensure the user
can still access the resource on your site after they return from Discourse.
This is very important because there's nothing that guarantees that the user
who comes back from Discourse after confirming 2FA is the same user that
you've redirected to Discourse.
Internal ticket: t62183.
This PR adds an extra description to the 2FA page when granting a user admin access. It also introduces a general system for adding customized descriptions that can be used by future actions.
(Follow-up to dd6ec65061)
2FA support in Discourse was added and grown gradually over the years: we first
added support for TOTP for logins, then we implemented backup codes, and last
but not least, security keys. 2FA usage was initially limited to logging in,
but it has been expanded and we now require 2FA for risky actions such as
adding a new admin to the site.
As a result of this gradual growth of the 2FA system, technical debt has
accumulated to the point where it has become difficult to require 2FA for more
actions. We now have 5 different 2FA UI implementations and each one has to
support all 3 2FA methods (TOTP, backup codes, and security keys) which makes
it difficult to maintain a consistent UX for these different implementations.
Moreover, there is a lot of repeated logic in the server-side code behind these
5 UI implementations which hinders maintainability even more.
This commit is the first step towards repaying the technical debt: it builds a
system that centralizes as much as possible of the 2FA server-side logic and
UI. The 2 main components of this system are:
1. A dedicated page for 2FA with support for all 3 methods.
2. A reusable server-side class that centralizes the 2FA logic (the
`SecondFactor::AuthManager` class).
From a top-level view, the 2FA flow in this new system looks like this:
1. User initiates an action that requires 2FA;
2. Server is aware that 2FA is required for this action, so it redirects the
user to the 2FA page if the user has a 2FA method, otherwise the action is
performed.
3. User submits the 2FA form on the page;
4. Server validates the 2FA and if it's successful, the action is performed and
the user is redirected to the previous page.
A more technically-detailed explanation/documentation of the new system is
available as a comment at the top of the `lib/second_factor/auth_manager.rb`
file. Please note that the details are not set in stone and will likely change
in the future, so please don't use the system in your plugins yet.
Since this is a new system that needs to be tested, we've decided to migrate
only the 2FA for adding a new admin to the new system at this time (in this
commit). Our plan is to gradually migrate the remaining 2FA implementations to
the new system.
For screenshots of the 2FA page, see PR #15377 on GitHub.
The warnings on git 2.28+ are:
```
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
```
Also:
* Remove an unused method (#fill_email)
* Replace a method that was used just once (#generate_username) with `SecureRandom.alphanumeric`
* Remove obsolete dev puma `tmp/restart` file logic
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or higher than the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
Before this commit, Discourse's auth cookie (`_t`) was simply a 32-character
string that Discourse used to look up the current user from the database, and the
cookie contained no additional information about the user. However, we had to
change the cookie content in this commit so we could identify the user from the
cookie without making a database query before the rate limits logic and avoid
introducing a bottleneck on busy sites.
Besides the 32-character auth token, the cookie now includes the user id,
trust level and the cookie's generation date, and we encrypt/sign the cookie to
prevent tampering.
Internal ticket number: t54739.
When dismissing new topics for the Tracked filter, the dismiss was
limited to 30 topics which is the default per page count for TopicQuery.
This happened even if you specified which topic IDs you were
selectively dismissing. This PR fixes that bug, and also moves
the per_page_count into a DEFAULT_PER_PAGE_COUNT for the TopicQuery
so it can be stubbed in tests.
Also moves the unused stub_const method into the spec helpers
for cases like this; it is much better to handle this in one place
with an ensure. In a follow up PR I will clean up other specs that
do the same thing and make them use stub_const.
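A sketch of the kind of stub_const helper described above (the real helper lives in the spec helpers and may differ in detail):
```rb
# Temporarily replaces a constant for the duration of the block, e.g.:
#   stub_const(TopicQuery, :DEFAULT_PER_PAGE_COUNT, 5) { ... }
def stub_const(klass, constant, value)
  original = klass.const_get(constant)
  klass.send(:remove_const, constant)
  klass.const_set(constant, value)
  yield
ensure
  # Restore the original value even if the block raises.
  klass.send(:remove_const, constant) if klass.const_defined?(constant)
  klass.const_set(constant, original)
end
```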
* Move onebox gem in core library
* Update template file path
* Remove warning for onebox gem caching
* Remove onebox version file
* Remove onebox gem
* Add sanitize gem
* Require onebox library in lazy-yt plugin
* Remove onebox web specific code
This code was used in standalone onebox Sinatra application
* Merge Discourse specific AllowlistedGenericOnebox engine in core
* Fix onebox engine filenames to match class name casing
* Move onebox specs from gem into core
* DEV: Rename `response` helper to `onebox_response`
Fixes a naming collision.
* Require rails_helper
* Don't use `before/after(:all)`
* Whitespace
* Remove fakeweb
* Remove poor unit tests
* DEV: Re-add fakeweb, plugins are using it
* Move onebox helpers
* Stub Instagram API
* FIX: Follow additional redirect status codes (#476)
Don’t throw errors if we encounter 303, 307 or 308 HTTP status codes in responses
* Remove an empty file
* DEV: Update the license file
Using the copy from https://choosealicense.com/licenses/gpl-2.0/#
Hopefully this will enable GitHub to show the license UI?
* DEV: Update embedded copyrights
* DEV: Add Onebox copyright notice
* DEV: Add MIT license, convert COPYRIGHT.txt to md
* DEV: Remove an incorrect copyright claim
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: jbrw <jamie@goatforce5.org>
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
This PR allows invitations to be used when the DiscourseConnect SSO is enabled for a site (`enable_discourse_connect`) and local logins are disabled. Previously invites could not be accepted with SSO enabled simply because we did not have the code paths to handle that logic.
The invitation methods that are supported include:
* Inviting people to groups via email address
* Inviting people to topics via email address
* Using invitation links generated by the Invite Users UI in the /my/invited/pending route
The flow works like this:
1. User visits an invite URL
2. The normal invitation validations (redemptions/expiry) happen at that point
3. We store the invite key in a secure session
4. The user clicks "Accept Invitation and Continue" (see below)
5. The user is redirected to /session/sso then to the SSO provider URL then back to /session/sso_login
6. We retrieve the invite based on the invite key in secure session. We revalidate the invitation. We show an error to the user if it is not valid. An additional check here for invites with an email specified is to check the SSO email matches the invite email
7. If the invite is OK we create the user via the normal SSO methods
8. We redeem the invite and activate the user. We clear the invite key in secure session.
9. If the invite had a topic we redirect the user there, otherwise we redirect to /
Note that we decided for SSO-based invites the `must_approve_users` site setting is ignored, because the invite is a form of pre-approval, and because regular non-staff users cannot send out email invites or generally invite to the forum in this case.
Also deletes some group invite checks as per https://github.com/discourse/discourse/pull/12353
This test failure was caused by rails calling `.debug` on our FakeLogger, which did not support it, resulting in more errors than the test was expecting.
Moves the topic timer jobs from being scheduled ahead of time with enqueue_at to a 5 minute scheduled run like bookmark reminders, in a new job called Jobs::EnqueueTopicTimers. Backwards compatibility is maintained by checking if an existing topic timer job is enqueued in sidekiq for the timer, and if it is not running it inside the new job.
The functionality to close/open a topic if it is in the opposite state still remains in the after_save block of TopicTimer, with further commentary, which is used for Open/Close Temporarily.
This also removes the ensure_consistency! functionality of topic timers as it is no longer needed; the new job will always pick up the timers because they are not stored in a fragile state in sidekiq.
It used to insert block Oneboxes inside paragraphs which resulted in
invalid HTML. This needed additional parsing to remove empty
paragraphs, and the resulting HTML could still be invalid.
This commit ensures that block Oneboxes are inserted correctly by
splitting the paragraph containing the link and putting the block
between the two. Paragraphs left with nothing but whitespaces will
be removed.
Follow up to 7f3a30d79f.
There are issues around displaying images on published pages when secure media is enabled. This PR temporarily makes it appear as if published pages are enabled if secure media is also enabled.