* FEATURE: Export topics to markdown
The route `/raw/TOPIC_ID` will now export whole topics (paginated to 100
posts) in a markdown format.
See https://meta.discourse.org/t/-/152185/12
This reverts commit 2c7906999a.
The changes break some things in local development (putting JS files
into minified files, not allowing debugger, and others)
This reverts commit ea84a82f77.
This is causing problems with `/theme-qunit` on legacy, non-ember-cli production sites. Reverting while we work on a fix
This is quite complex as it means that in production we have to build
Ember CLI test files and allow them to be used by our Rails application.
There is a fair bit of glue we can remove in the future once we move to
Ember CLI completely.
An admin could search for all screened IP addresses in a block by
using wildcards. 192.168.* returned all IPs in the range 192.168.0.0/16.
This feature allows admins to search for a single IP address in all
screened IP blocks. 192.168.0.1 returns all IP blocks that match it,
for example 192.168.0.0/16.
* FEATURE: Remove roll up button for screened IPs
* FIX: Match more specific screened IP address first
The new warnings cover more cases and are more accurate. Most of the
warnings will be visible only to staff members because otherwise they
would leak information about users' preferences.
Also:
* Remove an unused method (#fill_email)
* Replace a method that was used just once (#generate_username) with `SecureRandom.alphanumeric`
* Remove the obsolete dev puma `tmp/restart` file logic
Adding a spec for documenting the delete post API endpoint for our api
docs. As part of this added detailed info for the `force_destroy`
parameter for permanently deleting a post.
Tests fail in Ruby 3.0 and later due to separation of positional and
keyword arguments. RSpec treats the hash at the end of include_examples
as keyword arguments when it should be passed as a positional argument.
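A minimal sketch of the failure mode and fix described above (the shared example here is illustrative, not one from the codebase):

```ruby
RSpec.describe "pagination" do
  shared_examples "a paginated endpoint" do |options|
    it "receives the options hash positionally" do
      expect(options[:per_page]).to eq(50)
    end
  end

  # Under Ruby 3.x a trailing bare hash is separated into keyword arguments,
  # so the shared example no longer receives it as the positional `options` hash:
  #   include_examples "a paginated endpoint", per_page: 50
  # Wrapping the hash in braces passes it positionally again:
  include_examples "a paginated endpoint", { per_page: 50 }
end
```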
* File.exists? is deprecated and removed in Ruby 3.2 in favor of
File.exist?
* Dir.exists? is deprecated and removed in Ruby 3.2 in favor of
Dir.exist?
This allows authenticators to instruct the Auth::Result to override attributes without using the general site settings. This provides an easy migration path for auth plugins which offer their own "overrides email", "overrides username" or "overrides name" settings. With this new API, they can set `overrides_*` on the result object, and the attribute will be overridden regardless of the general site setting.
ManagedAuthenticator is updated to use this new API. Plugins which consume ManagedAuthenticator will instantly take advantage of this change.
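A hedged sketch of how a plugin authenticator might adopt the new API (the plugin class and provider name are hypothetical):

```ruby
class MyProviderAuthenticator < Auth::ManagedAuthenticator
  def name
    "my_provider" # hypothetical provider
  end

  def after_authenticate(auth_token, existing_account: nil)
    result = super
    # Always trust the provider's email, regardless of the general
    # "auth overrides email" site setting:
    result.overrides_email = true
    result
  end
end
```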
This commit adds API documentation for the new upload
endpoints related to direct + multipart external uploads.
Also included is a rake task which watches the files in
the spec/requests/api directory and calls a script file
(spec/regenerate_swagger_docs) whenever one changes. This
script runs rake rswag:specs:swaggerize and then copies
the openapi.yml file over to the discourse_api_docs repo
directory, and hits a script there to convert the YML to
JSON so the API docs are refreshed while the server is
still running. This makes the loop of making a doc change
and seeing it in the local server much faster.
The rake task is rake autospec:swagger
* FEATURE: hide_email_address_taken forces use of email in forgot password form
This strengthens this site setting which is meant to be used to harden sites
that are experiencing abuse on forgot password routes.
Previously we would only avoid revealing whether forgot password worked or not.
The new change also bans usage of a username for forgot password when the setting is enabled.
Discourse sent only translation overrides for the current language to the client instead of sending overrides from fallback locales as well. This especially impacted en_GB -> en since most overrides would be done in English instead of English (UK).
This also adds lots of tests for previously untested code.
There's a small caveat: The client currently doesn't handle fallback locales for MessageFormat strings. That is why overrides for those strings always have a higher priority than regular translations. So, as an example, the lookup order for MessageFormat strings in German is:
1. override for de
2. override for en
3. value from de
4. value from en
Some time ago, we made this fix to external authentication – https://github.com/discourse/discourse/pull/13706. We didn't address Discourse Connect (https://meta.discourse.org/t/discourseconnect-official-single-sign-on-for-discourse-sso/13045) at that moment, so I wanted to fix it for Discourse Connect as well.
It turned out, though, that Discourse Connect doesn't have this problem and already handles staged users correctly. This PR adds tests that confirm it. Also, I've extracted two functions in the Discourse Connect implementation along the way and decided to merge this refactoring too (the refactoring is supported with tests).
A plugin API that allows customizing existing topic-backed static pages, like:
faq, tos, privacy (see: StaticController). The block passed to this
method has to return a SiteSetting name that contains a topic id.
```
add_topic_static_page("faq") do |controller|
  current_user&.locale == "pl" ? "polish_faq_topic_id" : "faq_topic_id"
end
```
You can also add new pages in a plugin, but remember to add a route,
for example:
```
get "contact" => "static#show", id: "contact"
```
We can fake redis transactions so that `fab!` works for redis and PG
data, but it's too slow to be used indiscriminately. Instead, you can
opt into it with the `use_redis_snapshotting` helper.
Insofar as snapshotting allows us to `fab!` more things, it provides a
speedup.
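A minimal sketch of opting in, assuming the helper is invoked at the top of a describe block (the redis key and spec content are illustrative only):

```ruby
RSpec.describe "redis-backed counter" do
  use_redis_snapshotting

  fab!(:seeded_counter) { Discourse.redis.set("counter", "1") }

  it "sees the fabricated redis data" do
    expect(Discourse.redis.get("counter")).to eq("1")
  end
end
```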
This commit introduces a new site setting "google_oauth2_hd_groups". If enabled, group information will be fetched from Google during authentication, and stored in the Discourse database. These 'associated groups' can be connected to a Discourse group via the "Membership" tab of the group preferences UI.
The majority of the implementation is generic, so we will be able to add support to more authentication methods in the near future.
https://meta.discourse.org/t/managing-group-membership-via-authentication/175950
We don't need it anymore. Actually, I removed its usage on the client side a long time ago, when I was working on improving blank page syndrome on user activity pages (see https://github.com/discourse/discourse/pull/14311).
This PR also removes some old resource strings that we don't use anymore. We have new strings for blank pages.
Documenting the `/u/:username/emails.json` endpoint.
Also removing some email fields from user api responses because they
aren't actually included in the response unless you are querying
yourself.
When redirecting to login, we store a destination_url cookie, which the user is then redirected to after login. We never want the user to be redirected to a JSON URL. Instead, we should return a 403 in these situations.
This should also be much less confusing for API consumers - a 403 is a better representation than a 302.
Related to: 20f736aa11.
`auto_update` is true by default at the database level, but it doesn't make sense for `auto_update` to be true on themes that are not imported from a Git repository.
Under some conditions, these varied responses could lead to cache poisoning, hence the 'security' label.
Previously the Rails application would serve JSON data in place of HTML whenever Ember CLI requested an `application.html.erb`-rendered page. This commit removes that logic, and instead parses the HTML out of the standard response. This means that Rails doesn't need to customize its response for Ember CLI.
Recently, an incorrect new behavior appeared: we started to suggest usernames like "user1" to invited users.
To reproduce:
1. Create an invitation with default settings, do not restrict it to email
2. Copy an invitation link and follow it in incognito mode
The username field is already filled with e.g. "user1" (see screenshot); it should be empty.
This bug was very likely introduced by my recent changes to UserNameSuggester.
We have a couple of site settings, `slow_down_crawler_user_agents` and `slow_down_crawler_rate`, that are meant to allow site owners to signal to specific crawlers that they're crawling the site too aggressively and that they should slow down.
When a crawler is added to the `slow_down_crawler_user_agents` setting, Discourse currently adds a `Crawl-delay` directive for that crawler in `/robots.txt`. Unfortunately, many crawlers don't support the `Crawl-delay` directive in `/robots.txt` which leaves the site owners no options if a crawler is crawling the site too aggressively.
This PR replaces the `Crawl-delay` directive with proper rate limiting for crawlers added to the `slow_down_crawler_user_agents` list. On every request made by a non-logged in user, Discourse will check the User Agent string and if it contains one of the values of the `slow_down_crawler_user_agents` list, Discourse will only allow 1 request every N seconds for that User Agent (N is the value of the `slow_down_crawler_rate` setting) and the rest of the requests made within the same interval will get a 429 response.
The `slow_down_crawler_user_agents` setting becomes quite dangerous with this PR since it could rate limit much, if not all, of the anonymous traffic if the setting is not used appropriately. So to protect against this scenario, we've added a couple of new validations to the setting when it's changed:
1) each value added to the setting must be 3 characters or longer
2) each value cannot be a substring of tokens found in popular browser User Agents. The current list of prohibited values is: apple, windows, linux, ubuntu, gecko, firefox, chrome, safari, applewebkit, webkit, mozilla, macintosh, khtml, intel, osx, os x, iphone, ipad and mac.
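A conceptual sketch of the per-crawler check (illustrative only, not the actual middleware code; the method and rate-limit key names are assumptions):

```ruby
def crawler_rate_limited?(user_agent)
  slowed_agents = SiteSetting.slow_down_crawler_user_agents.downcase.split("|")
  return false if slowed_agents.none? { |token| user_agent.downcase.include?(token) }

  # Allow 1 request every `slow_down_crawler_rate` seconds per matched agent.
  RateLimiter.new(nil, "slow-down-crawler-#{user_agent}", 1, SiteSetting.slow_down_crawler_rate).performed!
  false
rescue RateLimiter::LimitExceeded
  true # the caller responds with a 429
end
```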
Currently when a user creates posts that are moderated (for whatever
reason), a popup is displayed saying the post needs approval and the
total number of the user’s pending posts. But then this piece of
information is kind of lost and there is nowhere for the user to see
what their pending posts are or how many there are.
This patch solves this issue by adding a new “Pending” section to the
user’s activity page when there are some pending posts to display. When
there are none, then the “Pending” section isn’t displayed at all.
* FEATURE: Optionally send a 'noindex' header in non-canonical responses
This will be used in a SEO experiment.
Co-authored-by: David Taylor <david@taylorhq.com>
This commit adds token_hash and scopes columns to email_tokens table.
token_hash is a replacement for the token column to avoid storing email
tokens in plaintext as it can pose a security risk. The new scope column
ensures that email tokens cannot be used to perform a different action
than the one intended.
To sum up, this commit:
* Adds token_hash and scope to email_tokens
* Reuses code that schedules critical_user_email
* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods
* Periodically cleans old, unconfirmed or expired email tokens
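A sketch of the general approach (the digest and scope name used here are assumptions for illustration):

```ruby
token = SecureRandom.hex(16)

EmailToken.create!(
  user: user,
  token_hash: Digest::SHA256.hexdigest(token), # only the hash is persisted
  scope: EmailToken.scopes[:password_reset]    # token can't be redeemed for another action
)

# The plaintext `token` goes into the email link and is never stored;
# confirming hashes the submitted value and looks up the row by token_hash.
```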
This commit refactors the direct external upload routes (get presigned
put, complete external, create/abort/complete multipart) into a
helper which is then included in both BackupController and the
UploadController. This is done so UploadController doesn't need
strange backup logic added to it, and so each controller implementing
this helper can do their own validation/error handling nicely.
This is a follow up to e4350bb966
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or higher than the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
Before this commit, Discourse's auth cookie (`_t`) was simply a 32 characters
string that Discourse used to lookup the current user from the database and the
cookie contained no additional information about the user. However, we had to
change the cookie content in this commit so we could identify the user from the
cookie without making a database query before the rate limits logic and avoid
introducing a bottleneck on busy sites.
Besides the 32 characters auth token, the cookie now includes the user id,
trust level and the cookie's generation date, and we encrypt/sign the cookie to
prevent tampering.
Internal ticket number: t54739.
Uppy adds the file name as the "name" parameter in the
payload by default, which means that for things like the
emoji uploader which have a name param used by the controller,
that param will be passed as the file name. We already use
the existing file name if the name param is null, so this
commit just does further cleanup of the name param, removing
the extension if it is a filename so we don't end up with
emoji names like blah_png.
Uppy and Resumable slice up their chunks differently, which causes a difference
in this algorithm. Let's take a 131.6MB file (137951695 bytes) with a 5MB (5242880 bytes)
chunk size. For resumable, there are 26 chunks, and for uppy there are 27. This is
controlled by forceChunkSize in resumable which is false by default. The final
chunk size is 6879695 (chunk size + remainder) whereas in uppy it is 1636815 (just remainder).
This means that the current condition of uploaded_file_size + current_chunk_size >= total_size
is hit twice by uppy, because it uses a more correct number of chunks. This
can be solved for both uppy and resumable by checking the _previous_ chunk
number * chunk_size as the uploaded_file_size.
An example of what is happening before that change, using the current
chunk number to calculate uploaded_file_size.
chunk 26: resumable: uploaded_file_size (26 * 5242880) + current_chunk_size (6879695) = 143194575 >= total_size (137951695) ? YES
chunk 26: uppy: uploaded_file_size (26 * 5242880) + current_chunk_size (5242880) = 141557760 >= total_size (137951695) ? YES
chunk 27: uppy: uploaded_file_size (27 * 5242880) + current_chunk_size (1636815) = 143194575 >= total_size (137951695) ? YES
An example of what this looks like after the change, using the previous
chunk number to calculate uploaded_file_size:
chunk 26: resumable: uploaded_file_size (25 * 5242880) + current_chunk_size (6879695) = 137951695 >= total_size (137951695) ? YES
chunk 26: uppy: uploaded_file_size (25 * 5242880) + current_chunk_size (5242880) = 136314880 >= total_size (137951695) ? NO
chunk 27: uppy: uploaded_file_size (26 * 5242880) + current_chunk_size (1636815) = 137951695 >= total_size (137951695) ? YES
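A minimal sketch of the corrected check (variable names assumed), using the previous chunk number so both uppy and resumable complete exactly once on their final chunk:

```ruby
def upload_complete?(chunk_number:, chunk_size:, current_chunk_size:, total_size:)
  previously_uploaded = (chunk_number - 1) * chunk_size
  previously_uploaded + current_chunk_size >= total_size
end

upload_complete?(chunk_number: 26, chunk_size: 5_242_880,
                 current_chunk_size: 5_242_880, total_size: 137_951_695) # => false (uppy)
upload_complete?(chunk_number: 27, chunk_size: 5_242_880,
                 current_chunk_size: 1_636_815, total_size: 137_951_695) # => true  (uppy)
```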
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.
To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:
* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts
Then, S3Store and S3BackupStore either delegate directly to S3Helper or have their own special methods to call S3Helper for these methods. FileStore.temporary_upload_path has also removed its dependence on upload_path, and can now be used interchangeably between the stores. A similar change was made in the frontend as well, moving the multipart related JS code out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.
Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.
This changeset is not perfect; it introduces some special cases in UploadController to handle backups that was previously in BackupController, because UploadController is where the multipart routes are located. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that may need them) can include these routes in a nicer way.
We are only using list_multipart_parts right now in the
uploads controller for multipart uploads to check if the
upload exists; thus we don't need up to 1000 parts.
Also adding a note for future explorers that list_multipart_parts
only gets 1000 parts max, and adding params for max parts
and starting parts.
Support for invites alongside DiscourseConnect was added in 355d51af. This commit fixes the guardian method so that the bulk invite button functionality also works.
* FIX: allowed_theme_ids should not be persisted in GlobalSettings
It was observed that the memoized value of `GlobalSetting.allowed_theme_ids` would be persisted across requests, which could lead to unpredictable/undesired behaviours in a multisite environment.
This change moves that logic out of GlobalSettings so that the returned theme IDs are correct for the current site.
Uses get_set_cache, which ultimately uses DistributedCache, which will take care of multisite issues for us.
By default, Rails only includes the Vary:Accept header in responses when the Accept: header is included in the request. This means that proxies/browsers may cache a response to a request with a missing Accept header, and then later serve that cached version for a request which **does** supply the Accept header. This can lead to some very unexpected behavior in browsers.
This commit adds the Vary:Accept header for all requests, even if the Accept header is not present in the request. If a format parameter (e.g. `.json` suffix) is included in the path, then the `Vary: Accept` header is still omitted. (The format parameter takes precedence over any Accept: header, so the response no longer varies based on the Accept header.)
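A conceptual sketch of that behaviour (not the actual Discourse code):

```ruby
class ApplicationController < ActionController::Base
  after_action :add_vary_header

  private

  def add_vary_header
    # Skip when an explicit format (e.g. the `.json` suffix) was requested,
    # since the Accept header is ignored in that case.
    response.headers["Vary"] = "Accept" if params[:format].blank?
  end
end
```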
`/u/username/invited.json?filter=expired` and `/u/username/invited.json?filter=pending` APIs are already returning data to admins. However, the `can_see_invite_details?` boolean was false, which prevented the Ember frontend from showing the tabs correctly. This commit updates the guardian method to match reality.
* FIX: do not display add to calendar for past dates
There is no value in saving past dates into the calendar
* FIX: remove postId and move ICS to frontend
postId is not necessary, and removing it makes the solution more generic for dates which don't belong to a specific post.
Also, the ICS file can be generated in JavaScript to avoid calling the backend.
Sometimes administrators want to permanently delete posts and topics
from the database. To make sure that this is done for good reasons,
administrators can do this only after one minute has passed since the
post was deleted, or immediately if another administrator does it.
- Allow the `/presence/get` endpoint to return multiple channels in a single request (limited to 50)
- When multiple presence channels are initialized in a single Ember runloop, batch them into a single GET request
- Introduce the `presence-pretender` to allow easy testing of PresenceChannel-related features
- Introduce a `use_cache` boolean (default true) on the server-side PresenceChannel initializer. Useful during testing.
Change the type for default_calendar to a string.
The type specified for the default calendar in the api docs wasn't a
valid type. The linting in the api docs repo reports:
```
`type` can be one of the following only: "object", "array", "string", "number", "integer", "boolean", "null".
```
This linting currently is only in the `discourse_api_docs` repo.
* DEV: Add include_subcategories param to api docs
Adding the `include_subcategories=true` query param to the
`/categories.json` api docs.
Follow up to: fe676f334a
* fix spec
It allows saving a local date to the calendar.
A modal gives the option to pick between ICS and Google. The user's choice can be remembered as a default for the next actions.
* FEATURE: Return subcategories on categories endpoint
When using the API subcategories will now be returned nested inside of
each category response under the `subcategory_list` param. We already
return all the subcategory ids under the `subcategory_ids` param, but
you then would have to make multiple separate API calls to fetch each of
those subcategories. This way you can get **ALL** of the categories
along with their subcategories in a single API response.
The UI will not be affected by this change because you need to pass in
the `include_subcategories=true` param in order for subcategories to be
returned.
In a follow up PR I'll add the API scoping for fetching categories so
that a readonly API key can be used for the `/categories.json` endpoint. This
endpoint should be used instead of the `/site.json` endpoint for
fetching a sites categories and subcategories.
* Update PR based on feedback
- Have spec check for specific subcategory
- Move comparison check out of loop
- Only populate subcategory list if option present
- Remove empty array initialization
- Update api spec to allow null response
* More PR updates based on feedback
- Use a category serializer for the subcategory_list
- Don't include the subcategory_list param if empty
- For the spec check for the subcategory by id
- Fix spec to account for param not present when empty
This commit refactors can_invite_to? to use
can_invite_to_forum? for checking the site-wide permissions and then
performs topic-specific checks.
Similarly, can_invite_to? is always used with a topic object and this is
now enforced.
There was another problem before when `must_approve_users` site setting
was not checked when inviting users to forum, but was checked when
inviting to a topic.
Another minor security issue was that group owners could invite to
group topics even if they did not have the minimum trust level to do
it.
It was possible to see notifications of other users using routes:
- notifications/responses
- notifications/likes-received
- notifications/mentions
- notifications/edits
We weren't showing anything private (like notifications about private messages), only things that are publicly available in other places. But anyway, it feels strange that it's possible to look at notifications of someone else. Additionally, there is a risk that we can unintentionally leak something on these pages in the future.
This commit restricts these routes.
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit soon becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
in a format based on the maximum increment (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
Allows creating a bookmark with the `for_topic` flag introduced in d1d2298a4c set to true. This happens when clicking on the Bookmark button in the topic footer when no other posts are bookmarked. In a later PR, when clicking on these topic-level bookmarks the user will be taken to the last unread post in the topic, not the OP. Only the OP can have a topic level bookmark, and users can also make a post-level bookmark on the OP of the topic.
I had to do some pretty heavy refactors because most of the bookmark code in the JS topics controller was centred around instances of Post JS models, but the topic level bookmark is not centred around a post. Some refactors were just for readability as well.
Also removes some missed reminderType code from the purge in 41e19adb0d
Due to the way that rswag expands shared components we were getting this
warning when linting our api docs:
```
Component: "user_response" is never used.
```
This change refactors the `api/users_spec.rb` file so that it uses the
new way of doing things with a separate `user_get_response.json` schema
file rather than the old way of loading a shared response inside of the
swagger_helper.rb file.
The color_scheme_id needs to be an integer not a string.
This is one of the failing tests that showed this error:
https://github.com/discourse/discourse/runs/3598414971
It showed this error
`POSSIBLE ISSUE W/: /user_themes/0/color_scheme_id`
And this is part of the site.json response:
```
...
"user_themes"=>[{"theme_id"=>149, "name"=>"Cool theme 111", "default"=>false, "color_scheme_id"=>37}]
...
```
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a)
This commit marks the column as readonly, ignores it, and removes
the index, and it will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column and it's better to rely on the
post.topic_id as the canonical topic_id for bookmarks, then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
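Both of the bookmark column changes above lean on the same Rails mechanism; a sketch (the actual model code may differ):

```ruby
class Bookmark < ActiveRecord::Base
  # Keep the columns in the database for now but hide them from the
  # application, so they can be dropped safely in a later migration.
  self.ignored_columns = %w[reminder_type topic_id]
end
```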
Administrators can use their second factor to confirm granting admin access
without using email. The old method of confirmation via email is still
used as a fallback when second factor is unavailable.
The previous excerpt was a simple truncated raw message. Starting with
this commit, the raw content of the draft is cooked and an excerpt is
extracted from it. The logic for extracting the excerpt mimics the
`ExcerptParser` class, but does not implement all functionality, being
a much simpler implementation.
The two draft controllers have been merged into one and the /draft.json
route has been changed to /drafts.json to be consistent with the other
route names.
When a user archives a personal message, they are redirected back to the
inbox and will refresh the list of the topics for the given filter.
Publishing an event to the user results in an incorrect incoming message
because the list of topics has already been refreshed.
This does mean that if a user has two tabs opened, the non-active tab
will not receive the incoming message but at this point we do not think
the technical trade-offs are worth it to support this feature. We
basically have to somehow exclude a client from an incoming message
which is not easy to do.
Follow-up to fc1fd1b416
At this point in time, we do not think that unread and new, when an
admin is looking at another user's messages, are worth supporting.
Follow-up to fc1fd1b416
From the openapi spec:
https://spec.openapis.org/oas/latest.html#fixed-fields-7
each endpoint needs to have an `operationId`:
> Unique string used to identify the operation. The id MUST be unique
> among all operations described in the API. The operationId value is
> case-sensitive. Tools and libraries MAY use the operationId to uniquely
> identify an operation, therefore, it is RECOMMENDED to follow common
> programming naming conventions.
Running the linter on our openapi.json file with this command:
`npx @redocly/openapi-cli lint openapi.json`
produced the following warning on all of our endpoints:
> Operation object should contain `operationId` field
This commit resolves these warnings by adding an operationId field to
each endpoint.
Previously, a change to a group's `default_notification_level` would only affect users added after the change.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
PresenceChannel aims to be a generic system for allowing the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is a boolean thing - each user is either present, or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` property can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";
export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}}
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.
However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.
Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
Users can invite people to topic and they will be automatically
redirected to the topic when logging in after signing up. This commit
ensures a "invited_to_topic" notification is created when the invite is
redeemed.
The same notification is used for the "Notify" sharing method that is
found in share topic modal.
Since ad3ec5809f when a user chooses
the Dismiss New... option in the New topic list, we send a request
to topics/reset-new.json with ?tracked=false as the only parameter.
This then uses Topic as the scope for topics to dismiss, with no
other limitations. When we do topic_scope.pluck(:id), it gets the
ID of every single topic in the database (that is not deleted) to
pass to TopicsBulkAction, causing a huge query with severe performance
issues.
This commit changes the default scope to use
`TopicQuery.new(current_user).new_results(limit: false)`
which should only use the topics in the user's New list, which
will be a much smaller list, depending on the user's "new_topic_duration_minutes"
setting.
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
* FIX: Revoking admin or moderator status doesn't require refresh to delete/anonymize/merge user
On the /admin/users/<id>/<username> page, there are action buttons that are either visible or hidden depending on a few fields from the AdminDetailsSerializer: `can_be_deleted`, `can_be_anonymized`, `can_be_merged`, `can_delete_all_posts`.
These fields are updated when granting/revoking admin or moderator status. However, those updates were not being reflected on the page. E.g. if a user is granted moderation privileges, the 'anonymize user' and 'merge' buttons still appear on the page, which is inconsistent with the backend state of the user. It requires refreshing the page to update the state.
This commit fixes that issue, by syncing the client model state with the server state when handling a successful response from the server. Now, when revoking privileges, the buttons automatically appear without refreshing the page. Similarly, when granting moderator privileges, the buttons automatically disappear without refreshing the page.
* Add detailed user response to spec for changed routes.
Add tests to verify that the revoke_moderation, grant_moderation, and revoke_admin routes return a response formatted according to the AdminDetailedUserSerializer.
This is useful in the DiscourseHub mobile app. Currently the app queries
the `about.json` endpoint, which can raise a CORS issue in some cases,
for example when the site only accepts logins from an external provider.
`nullable` is no longer a valid type, and types also can't be an empty
string, so this just brings a number of issues with types into compliance
with the openapi spec.
When a user signs up via an external auth method, a new link is added to the signup modal which allows them to connect an existing Discourse account. This will only happen if:
- There is at least 1 other auth method available
and
- The current auth method permits users to disconnect/reconnect their accounts themselves
This handles a few edge cases which are extremely rare (due to the UI layout), but still technically possible:
- Ensure users are authenticated before attempting association.
- Add a message and logic for when a user already has an association for a given auth provider.
This reverts a part of changes introduced by https://github.com/discourse/discourse/pull/13947
In that PR I:
1. Disallowed topic feature links for TL-0 users
2. Additionally, disallowed just putting any URL in topic titles for TL-0 users
Actually, we don't need the second part. It introduced unnecessary complexity for no good reason. In fact, it tries to do the job that anti-spam plugins (like the Akismet plugin) should be doing.
This PR reverts this second change.
This disallows putting URLs in topic titles for TL0 users, which means that:
If a TL-0 user puts a link into the title, a topic featured link won't be generated (as if it was disabled in the site settings)
Server methods for creating and updating topics will refuse featured links when they are called by TL-0 users
TL-0 users won't be able to put any link into the topic title. For example, the title "Hey, take a look at https://my-site.com" will be rejected.
Also, it improves the server behavior a bit when creating or updating featured links on topics in categories with featured links disabled. Before, the server just silently ignored a featured link field that was passed to it; now it returns a 422 response.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
* DEV: Return 400 instead of 500 for invalid top period
This change will prevent a fatal 500 error when passing in an invalid
period param value to the `/top` route.
* Check if the method exists first
I couldn't get `ListController.respond_to?` to work, but was still able
to check if the method exists with
`ListController.action_methods.include?`. This way we can avoid relying
on the `NoMethodError` exception which may be raised during the course
of executing the method.
* Just check if the period param value is valid
* Use the new TopTopic.validate_period method (sketched below)
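A minimal sketch of the guard, assuming `TopTopic.validate_period` returns a boolean and that `Discourse::InvalidParameters` is rendered as a 400:

```ruby
def top(options = nil)
  period = params[:period]
  raise Discourse::InvalidParameters.new(:period) unless TopTopic.validate_period(period)

  # ... render the top topic list for the validated period
end
```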
* Copy remove_member to new `leave` method
* Remove unneeded code from the leave method
* Rearrange the leave method
* Remove unneeded code from the remove_member method
* Add tests
* Implement on the client side
* Copy the add_members method to the new join method
* Remove unneeded code from the join method
* Rearrange the join method
* Remove unneeded stuff from the add_members method
* Extract add_user_to_group method
* Implement of the client side
* Tests
* Doesn't inline users.uniq
* Return promise from join.then()
* Remove unnecessary begin and end
* Revert "Return promise from join.then()"
This reverts commit bda84d8d
* Remove variable already_in_group
Will show the last 6 seen users as filtering suggestions when typing @ in quick search. (Previously the user suggestion required a character after the @.)
This also adds a default limit of 6 to the user search query, previously the backend was returning 20 results but a maximum of 6 results was being shown anyway.
When configured, all topics in the category inherit the slow mode
duration from the category's default.
Note that currently there is no way to remove the slow mode from the
topics once it has been set.
When the Forever option is selected for suspending a user, the user is suspended for 1000 years. Without customizing the site’s text, this time period is displayed to the user in the suspension email that is sent to the user, and if the user attempts to log back into the site. Telling someone that they have been suspended for 1000 years seems likely to come across as a bad attempt at humour.
This PR special-cases the messages when a user is suspended or silenced forever.
Flips content_security_policy_frame_ancestors default to enabled, and
removes HTTP_REFERER checks on embed requests, as the new referer
privacy options made the check fragile.
* No need to return anything except a status code from the server
* Switch a badge state before sending a request and then switch it back in case of an error
Currently when bulk-awarding a badge that can be granted multiple times, users in the CSV file are granted the badge once no matter how many times they're listed in the file and only if they don't have the badge already.
This PR adds a new option to the Badge Bulk Award feature so that it's possible to grant users a badge even if they already have the badge and as many times as they appear in the CSV file.
This PR adds the first use of Uppy in our codebase, hidden behind an enable_experimental_image_uploader site setting. When the setting is enabled only the user card background uploader will use the new uppy-image-uploader component added in this PR.
I've introduced an UppyUpload mixin that has feature parity with the existing Upload mixin, and improves it slightly to deal with multiple/single file distinctions and validations better. For now, this just supports the XHRUpload plugin for uppy, which keeps our existing POST to /uploads.json.
If a user had a staged account and logged in using a third party service,
a different username was suggested. This change will try to use the
username given by the authentication provider first, then the current
staged username and last suggest a new one.
When a staged user tried to redeem an invite, a different username was
suggested and manually typing the staged username failed because the
username was not available.
User flair was given by the user's primary group. This PR separates the
two, adds a new field to the user model for flair group ID and users
can select their flair from user preferences now.
Mixing multisite and standard specs can lead to issues (e.g. when using `fab!`)
Disabled the (upcoming https://github.com/discourse/rubocop-discourse/pull/11) rubocop rule for two files that have thoroughly tangled both types of specs.
* FEATURE: Redirect logged in user to invite topic
Users who were already logged in and were given an invite link to a
topic used to see an error message saying that they already have an
account and cannot redeem the invite. This commit amends that behavior
and redirects the user directly to the topic, if they can see it.
* FEATURE: Add logged in user to invite groups
Users who were already logged in and were given an invite link to a
group used to see an error message saying that they already have an
account and cannot redeem the invite. This commit amends that behavior
and adds the user to the group.
And also move all the "top topics by period" routes to use a query string param.
/top/monthly => /top?period=monthly
/c/:slug/:id/l/top/monthly => /c/:slug/:id/l/top?period=monthly
/tag/:slug/l/top/daily => /tag/:slug/l/top?period=daily (new)
Badges that are awarded multiple times can be favorited and not favorited
at the same time. This caused a few problems when users tried to favorite
them, as they were counted multiple times or their state was incorrectly
displayed.
Before this change, calling `StyleSheet::Manager.stylesheet_details`
for the first time resulted in multiple queries to the database. This is
because the code was modelled in a way where each `Theme` was loaded
from the database one at a time.
This PR restructures the code such that it allows us to load all the
theme records in a single query. It also allows us to eager load the
required associations upfront. In order to achieve this, I removed the
support of loading multiple themes per request. It was initially added
to support user selectable theme components but the feature was never
completed and abandoned because it wasn't a feature that we thought was
worth building.
The first thing we needed here was an enum rather than a boolean to determine how a directory_column was created. Now we have `automatic`, `user_field` and `plugin` directory columns.
This plugin API assumes that the plugin has added a migration adding a column to the `directory_items` table.
This was created to be initially used by discourse-solved. PR with API usage - https://github.com/discourse/discourse-solved/pull/137/
When dismissing new topics for the Tracked filter, the dismiss was
limited to 30 topics which is the default per page count for TopicQuery.
This happened even if you specified which topic IDs you were
selectively dismissing. This PR fixes that bug, and also moves
the per_page_count into a DEFAULT_PER_PAGE_COUNT for the TopicQuery
so it can be stubbed in tests.
Also moves the unused stub_const method into the spec helpers
for cases like this; it is much better to handle this in one place
with an ensure. In a follow up PR I will clean up other specs that
do the same thing and make them use stub_const.
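A minimal sketch of the kind of helper described (implementation assumed), showing the `ensure`-based cleanup:

```ruby
def stub_const(target, const_name, value)
  old_value = target.const_get(const_name)
  target.send(:remove_const, const_name)
  target.const_set(const_name, value)
  yield
ensure
  target.send(:remove_const, const_name)
  target.const_set(const_name, old_value)
end

# Illustrative usage in a spec:
#   stub_const(TopicQuery, :DEFAULT_PER_PAGE_COUNT, 1) { ... }
```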
Subclasses must call #delete_user_actions inside build_actions to support user deletion. The method adds a delete user bundle, which has a delete and a delete + block option. Every subclass is responsible for implementing these actions.
The `ember_jquery` bundle contains production builds of Ember and jQuery
which doesn't work with tests. This commits introduces a new
`theme_qunit_vendor` bundle which is copy of the `vendor` bundle but
doesn't contain `ember_jquery`.
This commit is a partial revert of
409c8585e4
Over the years we have found that a few communities never discovered tags.
Instead of having them default off we now have them default on, ensuring
that everyone finds out about them.
Co-authored-by: Dan Ungureanu <dan@ungureanu.me>
In Ember CLI addons get put into the vendor bundle, as opposed to their
own bundle like we're doing in the Rails app. We never use pretty-text
without our vendor bundle so this should have no difference on
performance.
We need to keep the pretty-text bundle for server side cooking.
It used to require SiteSetting.min_trust_level_to_allow_invite to
invite a user to a group, even if the user existed and the inviter was
a group owner.
In Ember CLI, the vendor bundle includes Ember/jQuery, so this brings
our app closer to that configuration.
We have a couple pages (Reset Password / Confirm New Email) where we need
`ember_jquery` without vendor so the file still exists for those cases.
Some specs failed when `LOAD_PLUGINS=1` was set while migrating the test DB and the narrative-bot plugin disabled the `send_welcome_message` site setting.
This overhauls the user interface for the group email settings management, aiming to make it a lot easier to test the settings entered and confirm they are correct before proceeding. We do this by forcing the user to test the settings before they can be saved to the database. It also includes some quality of life improvements around setting up IMAP and SMTP for our first supported provider, GMail. This PR does not remove the old group email config, that will come in a subsequent PR. This is related to https://meta.discourse.org/t/imap-support-for-group-inboxes/160588 so read that if you would like more backstory.
### UI
Both site settings of `enable_imap` and `enable_smtp` must be true to test this. You must enable SMTP first to enable IMAP.
You can prefill the SMTP settings with GMail configuration. To proceed with saving these settings you must test them, which is handled by the EmailSettingsValidator.
If there is an issue with the configuration or credentials a meaningful error message should be shown.
IMAP settings must also be validated when IMAP is enabled, before saving.
When saving IMAP, we fetch the mailboxes for that account and populate them. This mailbox must be selected and saved for IMAP to work (the feature acts as though it is disabled until the mailbox is selected and saved):
### Database & Backend
This adds several columns to the Groups table. The purpose of this change is to make it much more explicit that SMTP/IMAP is enabled for a group, rather than relying on settings not being null. Also included is an UPDATE query to backfill these columns. These columns are automatically filled when updating the group.
For GMail, we now filter the mailboxes returned. This is so users cannot use a mailbox like Sent or Trash for syncing, which would generally be disastrous.
There is a new group endpoint for testing email settings. This may be useful in the future for other places in our UI, at which point it can be extracted to a more generic endpoint or module to be included.
We need to be careful when stubbing this method. SessionController#become won't be defined if production is set to true, so if these tests run first, calling #sign_in will fail for other tests.
Calling sign_in before stubbing guarantees the method is defined because the check happens when the class is loaded.
There are two methods which the server uses to verify an invite is being redeemed with a matching email:
1) The email token, supplied via a `?t=` parameter
2) The validity of the email, as provided by the auth provider
Only one of these needs to be true for the invite to be redeemed successfully on the server. The frontend logic was previously only checking (2). This commit updates the frontend logic to match the server.
This commit does not affect the invite redemption logic. It only affects the 'show' endpoint, and the UI.
The previous commits removed reviewables leading to a bad user
experience. This commit updates the status, replaces actions with a
message and greys out the reviewable.
This PR improves the UI of bulk select so that its context is applied to the Dismiss Unread and Dismiss New buttons. Regular users (not just staff) are now able to use topic bulk selection on the /new and /unread routes to perform these dismiss actions more selectively.
For Dismiss Unread, there is a new count in the text of the button and in the modal when one or more topic is selected with the bulk select checkboxes.
For Dismiss New, there is a count in the button text, and we have added functionality to the server side to accept an array of topic ids to dismiss new for, instead of always having to dismiss all new, the same as the bulk dismiss unread functionality. To clean things up, the `DismissTopics` service has been rolled into the `TopicsBulkAction` service.
We now also show the top Dismiss/Dismiss New button based on whether the bottom one is in the viewport, not just based on the topic count.
When editing the first post for the topic we do two AJAX requests
to two separate controllers in this order:
PUT /t/topic-name
PUT /posts/2489523
This causes two post revisor calls, which end up triggering the
:post_edited DiscourseEvent twice. This is then picked up and sent
as a WebHook event twice. However we do not need to send a :post_edited
webhook event if the first post is being edited and topic_changed is
true from the :post_edited DiscourseEvent, because a second event will
shortly come through for just the post.
See https://meta.discourse.org/t/post-webhook-fires-two-times-on-post-edited-for-first-post-in-a-topic/162408
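A sketch of the dedup idea (the event payload is simplified and the enqueue helper is hypothetical):

```ruby
# Skip the webhook for the first post when the topic also changed; a second
# :post_edited event for just the post follows shortly and will be sent.
DiscourseEvent.on(:post_edited) do |post, topic_changed|
  next if post.is_first_post? && topic_changed
  enqueue_post_edited_web_hook(post) # hypothetical helper
end
```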
Continued on from https://github.com/discourse/discourse/pull/10590
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
* FIX: Ensure the same email cannot be invited twice
When creating a new invite with a duplicated email, the old invite will
be updated and returned. When updating an invite with a duplicated email
address, an error will be returned.
* FIX: `not` Ember helper does not exist
* FIX: Sync can_invite_to_forum? and can_invite_to?
The two methods should perform the same basic set of checks, such as
checking the must_approve_users site setting.
Ideally, one of the methods would call the other or the two would be
merged; that will happen in the future.
* FIX: Show invite to group if user is group owner
* FIX: flaky specs after topic view custom filters
When ensuring TopicView class variables return to their original state, an empty Hash should be used instead of an empty Array. See
https://github.com/discourse/discourse/blob/master/lib/topic_view.rb#L60
* FIX: convert to string for topic view custom filter
Some emails coming in via the mail receiver can still end up
with bad encoding when trying to enqueue the job. This catches
the remaining encoding issue by forcing ISO-8859-1 and re-encoding
to UTF-8 to circumvent the issue.
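The fallback amounts to something like this (a simplified sketch, not the exact code):

```ruby
# Treat the raw bytes as ISO-8859-1 and re-encode to UTF-8 so the job
# arguments can be serialized safely.
safe_email = raw_email
  .force_encoding(Encoding::ISO_8859_1)
  .encode(Encoding::UTF_8)
```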
Currently, when the target is not available we're returning the error message "`You are not permitted to view the requested resource`" which is not clear.
We have found when receiving and posting inbound emails to the handle_mail route, it is better to POST the payload as a base64 encoded string to avoid strange encoding issues. This introduces a new param of `email_encoded` and maintains the legacy param of `email`, showing a deprecation warning. Eventually the old `email` param will be dropped and `email_encoded` will be the only way to use handle_mail.
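A hypothetical client sketch of posting with the new param (the host and route path here are placeholders):

```ruby
require "uri"
require "base64"
require "net/http"

# Prefer email_encoded; the plain email param still works but is deprecated.
uri = URI("https://forum.example.com/admin/email/handle_mail") # assumed route
Net::HTTP.post_form(uri, "email_encoded" => Base64.strict_encode64(raw_email))
```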
Uncategorized was sometimes visible even if allow_uncategorized_topics
was false. This was especially happening on mobile, if at least one
topic was uncategorized.
* FEATURE: Small improvements to the topic list embed
- Ability to wrap the list in a custom class so you can style different
lists using specific CSS
- Adds a topic link to the thumbnail when using the complete template
* FIX: Be more strict about allowed chars in class name
This commit allows site admins to run theme tests in production via a new `/theme-qunit` route. When you visit `/theme-qunit`, you'll see a list of the themes/components installed on your site that have tests, and from there you can select a theme or component and run its tests.
We also have a new rake task `themes:install_and_test` that can be used to install a list of themes/components on a temporary database and run the tests of the themes/components that are installed. This rake task can be useful when upgrading/deploying a Discourse instance to make sure that the installed themes/components are compatible with the new Discourse version being deployed, and if the tests fail you can abort the build/deploy process so you don't end up with a broken site.
If the "use_site_small_logo_as_system_avatar" setting is enabled, the site's small logo is displayed as the selected option by the avatar-selector. Choosing a different avatar disables the setting.
The old share modal used to host both share and invite functionality,
under two tabs. The new "Share Topic" modal can be used only for
sharing, but has a link to the invite modal.
Among the sharing methods, there is also "Notify" which points out
that existing users will simply be notified (this was not clear
before). Staff members can notify as many users as they want, but
regular users are restricted to one at a time and to no more than
max_topic_invitations_per_day. The user will not receive another
notification if they have been notified of the same topic in the past hour.
The "Create Invite" modal also suffered some changes: the two radio
boxes for selecting the type (invite or email) have been replaced by a
single checkbox (is email?) and then the two labels about emails have
been replaced by a single one, some fields were reordered and the
advanced options toggle was moved to the bottom right of the modal.
Rails 6.1.3.1 deprecates a few APIs and has some internal changes that break our test suite, so this commit fixes all the deprecations and errors and now Discourse should be fully compatible with Rails 6.1.3.1. We also have a new release of the rails_failover gem that's compatible with Rails 6.1.3.1.
The server used to respond with a generic 'error, contact admin' message
which did not offer any hint what the error was. This happened even when
the error could be easily corrected by the user (for example, if they
chose a very common password).
When invited by email, users will receive an invite URL which contains
a token. If that token is present when the invite is redeemed, their
account will be automatically activated.
This PR adds a new category setting which is a column in the `categories` table, `allow_unlimited_owner_edits_on_first_post`.
What this does is:
* Inside the `can_edit_post?` method of `PostGuardian`, if the current user editing a post is the owner of the post, it is the first post, and the topic's category has `allow_unlimited_owner_edits_on_first_post`, then we bypass the check for `LimitedEdit#edit_time_limit_expired?` on that post.
* Also, similar to wiki topics, in `PostActionNotifier#after_create_post_revision` we send a notification to all users watching a topic when the OP is edited in a topic with the category setting `allow_unlimited_owner_edits_on_first_post` enabled.
This is useful for forums where there is a Marketplace or similar category, where topics are created and then updated indefinitely by the OP rather than the OP making new topics or additional replies. In a way this acts similar to a wiki that only one person can edit.
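A condensed sketch of the guardian bypass described above (method name and surrounding wiring are assumptions):

```ruby
# Skip the edit-time-limit check when the editor owns the first post and the
# topic's category has allow_unlimited_owner_edits_on_first_post enabled.
def unlimited_owner_edits_on_first_post?(post)
  post.is_first_post? &&
    post.user_id == @user.id &&
    post.topic&.category&.allow_unlimited_owner_edits_on_first_post
end
```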
This makes behavior consistent with documentation:
API:
> Will send an email with this message when present
Web UI:
> Optionally, provide more information about the suspension and it will be emailed to the user
This commit allows themes and theme components to have QUnit tests. To add tests to your theme/component, create a top-level directory in your theme and name it `test`, and Discourse will save all the files in that directory (and its sub-directories) as "test files" in the database. While test files/directories are not required to be organized in a specific way, we recommend that you follow Discourse core's test [structure](https://github.com/discourse/discourse/tree/master/app/assets/javascripts/discourse/tests).
Writing theme tests should be identical to writing plugins or core tests; all the `import` statements and APIs that you see in core (or plugins) to define/setup tests should just work in themes.
You do need a working Discourse install to run theme tests, and you have 2 ways to run theme tests:
* In the browser at the `/qunit` route. `/qunit` will run tests of all active themes/components as well as core and plugins. The `/qunit` now accepts a `theme_name` or `theme_url` params that you can use to run tests of a specific theme/component like so: `/qunit?theme_name=<your_theme_name>`.
* In the command line using the `themes:qunit` rake task. This task is meant to run tests of a single theme/component so you need to provide it with a theme name or URL like so: `bundle exec rake themes:qunit[name=<theme_name>]` or `bundle exec rake themes:qunit[url=<theme_url>]`.
There are some refactors to how Discourse processes JavaScript that comes with themes/components, and these refactors may break your JS customizations; see https://meta.discourse.org/t/upcoming-core-changes-that-may-break-some-themes-components-april-12/186252?u=osama for details on how you can check if your themes/components are affected and what you need to do to fix them.
This commit also improves theme error handling in Discourse. We will now be able to catch errors that occur when theme initializers are run and prevent them from breaking the site and other themes/components.
We introduced a cap on the number of bookmarks the user can add in be145ccf2f. However this has caused unintended side effects; when the `jobs/scheduled/bookmark_reminder_notifications.rb` runs we get this error for users who already had more bookmarks than the limit:
> Job exception: Validation failed: Sorry, you have too many bookmarks, visit #{url}/my/activity/bookmarks to remove some.
This is because the `clear_reminder!` call was triggering a bookmark validation, which raised an error because the user already had too many, holding up other reminders.
This PR also adds `max_bookmarks_per_user` hidden site setting (default 2000). This replaces the BOOKMARK_LIMIT const so we can raise it for certain sites.
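A minimal sketch of the fix idea: clear the reminder without running the full bookmark validations, so users already over the cap are still processed (column names are assumptions):

```ruby
# update_columns skips validations, so the "too many bookmarks" check no
# longer blocks reminder clearing for users over the limit.
bookmark.update_columns(reminder_at: nil, reminder_last_sent_at: Time.zone.now)
```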
This commit allows themes and theme components to have QUnit tests. To add tests to your theme/component, create a top-level directory in your theme and name it `test`, and Discourse will save all the files in that directory (and its sub-directories) as "test files" in the database. While test files/directories are not required to be organized in a specific way, we recommend that you follow Discourse core's test [structure](https://github.com/discourse/discourse/tree/master/app/assets/javascripts/discourse/tests).
Writing theme tests should be identical to writing plugins or core tests; all the `import` statements and APIs that you see in core (or plugins) to define/setup tests should just work in themes.
You do need a working Discourse install to run theme tests, and you have 2 ways to run theme tests:
* In the browser at the `/qunit` route. `/qunit` will run tests of all active themes/components as well as core and plugins. The `/qunit` now accepts a `theme_name` or `theme_url` params that you can use to run tests of a specific theme/component like so: `/qunit?theme_name=<your_theme_name>`.
* In the command line using the `themes:qunit` rake task. This task is meant to run tests of a single theme/component so you need to provide it with a theme name or URL like so: `bundle exec rake themes:qunit[name=<theme_name>]` or `bundle exec rake themes:qunit[url=<theme_url>]`.
There are some refactors to internal code that's responsible for processing themes/components in Discourse, most notably:
* `<script type="text/discourse-plugin">` tags are automatically converted to modules.
* The `theme-settings` service is removed in favor of a simple `lib` file responsible for managing theme settings. This was done to allow us to register/lookup theme settings very early in our Ember app lifecycle and because there was no reason for it to be an Ember service.
These refactors should be 100% backward compatible and invisible to theme developers.
In "Improve invite system", a newly created link-only invite cannot
be retrieved via the API with the invitee's email once created. A new
route, /invites/retrieve, is introduced to fetch an already
created invite by email address.
For 'local logins', the UX for staged users is designed to be identical to unregistered users. However, staged users logging in via external auth were being automatically unstaged, and skipping the registration/invite flow. In the past this made sense because the registration/invite flows didn't work perfectly with external auth. Now, both registration and invites work well with external auth, so it's best to leave the 'unstage' logic to those endpoints.
This problem was particularly noticeable when using the 'bulk invite' feature to invite users with pre-configured User Fields. In that situation, staged user accounts are used to preserve the user field data.
Admins can use bulk invites to pre-populate user fields. The imported
CSV file must have a header with an "email" column (first position) and
the names of the user fields (exact match).
Under the hood, the bulk invite will create staged users and populate
their user fields.
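A hypothetical CSV layout (the user field names here are only examples; they must exactly match the fields configured on the site):

```ruby
bulk_invite_csv = <<~CSV
  email,What is your job title?,Company
  jane@example.com,Engineer,Acme
  sam@example.com,Designer,Initech
CSV
```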
Find & Replace and Autotag watched words were not completely exported
and import did not work with these either. This commit changes the
input and output format to CSV, which allows for a secondary column.
This change is backwards compatible because a CSV file with only one
column has one value per line.
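For example, an exported Replace list in the new format might look like this (values are illustrative), while a plain block list still has one value per line:

```ruby
replace_watched_words_csv = <<~CSV
  recieve,receive
  teh,the
CSV
```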
This PR allows invitations to be used when the DiscourseConnect SSO is enabled for a site (`enable_discourse_connect`) and local logins are disabled. Previously invites could not be accepted with SSO enabled simply because we did not have the code paths to handle that logic.
The invitation methods that are supported include:
* Inviting people to groups via email address
* Inviting people to topics via email address
* Using invitation links generated by the Invite Users UI in the /my/invited/pending route
The flow works like this:
1. User visits an invite URL
2. The normal invitation validations (redemptions/expiry) happen at that point
3. We store the invite key in a secure session
4. The user clicks "Accept Invitation and Continue" (see below)
5. The user is redirected to /session/sso then to the SSO provider URL then back to /session/sso_login
6. We retrieve the invite based on the invite key in secure session. We revalidate the invitation. We show an error to the user if it is not valid. An additional check here for invites with an email specified is to check the SSO email matches the invite email
7. If the invite is OK we create the user via the normal SSO methods
8. We redeem the invite and activate the user. We clear the invite key in secure session.
9. If the invite had a topic we redirect the user there, otherwise we redirect to /
Note that we decided for SSO-based invites the `must_approve_users` site setting is ignored, because the invite is a form of pre-approval, and because regular non-staff users cannot send out email invites or generally invite to the forum in this case.
Also deletes some group invite checks as per https://github.com/discourse/discourse/pull/12353
* FIX: Be able to handle long file extensions
Some applications have really long file extensions, but if we truncate
them, weird behavior ensues.
This commit changes the file extension size from 10 characters to 255
characters instead.
See:
https://meta.discourse.org/t/182824
* Keep truncation at 10, but allow uppercase and dashes
Currently the process of adding a custom image to a badge is quite clunky; you have to upload your image to a topic, then copy the image URL and paste it into a text field. Besides being clunky, if the topic or post that contains the image is deleted, the image will be garbage-collected in a few days and the badge will lose its image, because the application is not aware that the image is referenced by a badge.
This commit improves that by adding a proper image uploader widget for badge images.
The cluster name can be configured by setting the `DISCOURSE_CLUSTER_NAME` environment variable. If set, you can then call /srv/status with a `?cluster=` parameter. If the cluster does not match, an error will be returned. This is useful if you need a load balancer to be able to verify the identity, as well as the presence, of an application container.
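A sketch of how a health check might use this (host and cluster name are placeholders):

```ruby
require "uri"
require "net/http"

# Returns success only if the container is up AND belongs to the expected
# cluster, so a load balancer can verify identity as well as presence.
uri = URI("https://app.internal.example.com/srv/status?cluster=web-primary")
healthy = Net::HTTP.get_response(uri).is_a?(Net::HTTPSuccess)
```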
When creating a PM, target_usernames has been deprecated for some time
now, but the API docs have yet to reflect this.
The API docs now specify to use target_recipients.
See: eef21625c6
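A hypothetical create-PM payload showing the documented parameter (other fields abbreviated):

```ruby
payload = {
  title: "Quarterly report",
  raw: "Draft attached below...",
  archetype: "private_message",
  target_recipients: "alice,bob" # replaces the deprecated target_usernames
}
```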
The user and an admin could create multiple email change requests for
the same user. If any of the requests was validated and it became
primary, the other request could not be deleted anymore.
* FEATURE: allow category group moderators to pin/unpin topics
Category group moderators should be able to pin/unpin any topics within a category where they have appropriate category group moderator permissions.
Previously, we blocked search engines on tag pages since they may get marked as duplicate content.
* DEV: block tag inner pages from search engine crawling.
Prior to this change, we had weights for very_high, high, low and
very_low. This meant there were 4 weights to tweak, and it was hard to
explain which weights to use for the `very_high/high` and `very_low/low` pairs.
This change makes it such that `very_high` search priority will always
ensure that the posts are ranked at the top while `very_low` search
priority will ensure that the posts are ranked at the very bottom.
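Conceptually the new behaviour is closer to a pin/bury than a tunable weight; a rough sketch of the idea (not the actual ranking SQL):

```ruby
# very_high always floats posts to the top and very_low sinks them to the
# bottom; only the middle priorities remain relative weights to tune.
def adjusted_rank(base_rank, priority)
  case priority
  when :very_high then base_rank + 1_000_000
  when :very_low  then base_rank - 1_000_000
  else base_rank
  end
end
```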
* FIX: Do not show expired invites under Pending tab
* DEV: Controller action was renamed in previous commit
* FEATURE: Add 'Expired' tab to invites
* FEATURE: Refresh model after removing expired invites
* FEATURE: Do not immediately add invite to the list
Opening the 'create-invite' modal used to automatically generate an
invite to reserve an invite link. If the user did not save it and
closed the modal, the invite would be destroyed. This operation caused
the invite list to change in the background and confuse users.
* FEATURE: Sort redeemed users by creation time
* UX: Improve show / hide advanced options link
* FIX: Show redeemed users even if invites were trashed
* UX: Change modal title when editing invite
* UX: Remove Get Link button
Users can get it from the edit modal
* FEATURE: Add limit for invite links generated by regular users
* FEATURE: Add option to skip email
* UX: Show better error messages
* FIX: Show "Invited by" even if invite was trashed
Follow up to 1fdfa13a099d8e46edd0c481b3aaaafe40455ced.
* FEATURE: Add button to save without sending email
Follow up to c86379a465f28a3cc64a4a8c939cf32cf2931659.
* DEV: Use a buffer to hold all changed data
* FEATURE: Close modal after save
* FEATURE: Rate limit resend invite email
* FEATURE: Make the save buttons smarter
* FEATURE: Do not always send email even for new invites
Mailing list mode can generate significant email volume, especially on sites with a large user base. Disable mailing list mode by default (via the site setting) so sites don't experience an unexpectedly large cost from outgoing email.
The URLs that we generate for mobile post notifications don't take into
account the subfolder path if a site happens to have one configured. When
that's the case, tapping a new mobile notification takes you to
a URL that doesn't work because it is missing the subfolder portion.
I honestly think this should be handled in the Post model like we do
with the Topic model. `Post.url` should know how to handle subfolder
installs, but that seemed like a very risky change because there are
lots of other places in the codebase where we tack on the base_path and
I didn't want to risk duplicating it.
I also found a small typo in the topics controller spec.
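The fix amounts to prefixing the generated URL with the configured subfolder, roughly (a simplified sketch):

```ruby
# Discourse.base_path is empty for root installs and "/forum"-style for
# subfolder installs, so the resulting URL works in both cases.
def mobile_notification_url(post)
  "#{Discourse.base_path}#{post.url}"
end
```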