The method was only used for mega topics, but it was redundant as the
first post can be determined using the condition where
`Post#post_number` equals one.
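For reference, a minimal sketch of that lookup, assuming the standard `Topic`/`Post` association:

```ruby
# The first post of any topic, mega or not, is simply the post
# whose post_number is 1; no dedicated method is needed.
first_post = topic.posts.find_by(post_number: 1)
```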
* FEATURE: Cache CORS preflight requests for 2h
Browsers will cache this for 5 seconds by default. If using MessageBus
on a different domain, Discourse will issue a new long-polling request,
by default, every 30s or so. This means we would be issuing a new
preflight request **every time**. This can be incredibly wasteful, so
let's cache the authorization in the client for 2h, which is the maximum
Chromium allows us as of today.
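A minimal sketch of the idea as a tiny Rack app, not Discourse's actual CORS middleware (the origin is a placeholder):

```ruby
require "rack"

# Answer CORS preflight (OPTIONS) requests with a long
# Access-Control-Max-Age so the browser caches the authorization
# for 2 hours (7200 seconds, Chromium's current maximum) instead
# of the 5-second default.
preflight_cache = lambda do |env|
  if env["REQUEST_METHOD"] == "OPTIONS"
    [204, {
      "Access-Control-Allow-Origin" => "https://chat.example.com",
      "Access-Control-Allow-Methods" => "GET, POST, OPTIONS",
      "Access-Control-Max-Age" => "7200"
    }, []]
  else
    [200, { "Content-Type" => "text/plain" }, ["ok"]]
  end
end
```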
* fix tests
`/u/username/invited.json?filter=expired` and `/u/username/invited.json?filter=pending` APIs are already returning data to admins. However, the `can_see_invite_details?` boolean was false, which prevented the Ember frontend from showing the tabs correctly. This commit updates the guardian method to match reality.
* FIX: do not display add to calendar for past dates
There is no value in saving past dates into a calendar.
* FIX: remove postId and move ICS to frontend
The postId is not necessary, and removing it makes the solution more generic for dates that don't belong to a specific post.
Also, ICS file can be generated in JavaScript to avoid calling backend.
Sometimes administrators want to permanently delete posts and topics
from the database. To make sure that this is done for good reasons,
administrators can do this only after one minute has passed since the
post was deleted, or immediately if another administrator does it.
We don't want to be using emails as a source for username and name suggestions in cases where a user may have no chance to intervene and correct a suggested username. It risks exposing email addresses.
Previously, quotes from original topics were rendered incorrectly because the moved posts were not rebaked.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* FIX: Remove List-Post email header
This header is used for mailing lists and can confuse some email clients,
such as Thunderbird, into displaying wrong reply options.
* FIX: Replace reply_key in email custom headers
Admins can add custom email headers from site settings. The email
sender will replace `%{reply_key}` if it exists in a header value, or
remove the header if a reply key does not exist.
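A sketch of that substitution, assuming `custom_headers` is a name-to-value hash and `reply_key` may be nil:

```ruby
# Replace the %{reply_key} placeholder when a reply key exists;
# drop the header entirely when it does not.
def apply_custom_headers(headers, custom_headers, reply_key)
  custom_headers.each do |name, value|
    if value.include?("%{reply_key}")
      headers[name] = value.sub("%{reply_key}", reply_key) if reply_key
    else
      headers[name] = value
    end
  end
  headers
end

apply_custom_headers({}, { "X-List" => "list-%{reply_key}" }, "abc123")
# => { "X-List" => "list-abc123" }
```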
Calling create_notification_alert could still send a notification to a
suspended user. This just moves the check for whether the user is
suspended to right before sending the notification.
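A sketch of the guard, with the signature simplified from the real method:

```ruby
# Checking right before delivery covers every code path that can
# reach this point, not just the ones that checked earlier.
def self.create_notification_alert(user:, post:, notification_type:)
  return if user.suspended?
  # ... build the payload and publish the alert ...
end
```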
Invite is used in two contexts: when inviting a new user to the forum
and when inviting an existing user to a topic. The first case is more
complex and involves permission checks to ensure that new users can
be created. In the second case, it is enough to ensure that the topic
is visible to both users and that all preconditions are met.
One edge case is the invite-to-topic-via-email functionality, which
checks both conditions because the user must first be invited to
create an account and then to the topic.
A side effect of these changes is that all site settings related to
invites now refer to inviting new users only.
We aren't translating these settings, so it makes more sense to move them into the code. I added an instance method so plugins can add mappings for custom reasons.
- Allow the `/presence/get` endpoint to return multiple channels in a single request (limited to 50)
- When multiple presence channels are initialized in a single Ember runloop, batch them into a single GET request
- Introduce the `presence-pretender` to allow easy testing of PresenceChannel-related features
- Introduce a `use_cache` boolean (default true) on the server-side PresenceChannel initializer. Useful during testing.
Change the type for default_calendar to a string.
The type specified for the default calendar in the api docs wasn't a
valid type. The linting in the api docs repo reports:
```
`type` can be one of the following only: "object", "array", "string", "number", "integer", "boolean", "null".
```
This linting currently is only in the `discourse_api_docs` repo.
* DEV: Add include_subcategories param to api docs
Adding the `include_subcategories=true` query param to the
`/categories.json` api docs.
Follow up to: fe676f334a
* fix spec
This commit fixes the `eval` and `evalsha` commands/methods and any other methods that don't have a wrapper in `DiscourseRedis` and expect keyword arguments. I noticed this problem in Logster when I was trying to fetch some log messages in JSON format using the rails console and saw the messages were missing the `env` field. Logster uses the `eval` command to fetch messages' `env`s:
dc351fd00f/lib/logster/redis_store.rb (L250-L253)
and that code was not fetching anything because `DiscourseRedis` didn't pass the `keys` keyword arg to the redis gem.
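A sketch of the fix, assuming the usual `method_missing` delegation pattern in `DiscourseRedis`:

```ruby
# Forward keyword arguments explicitly; with a bare *args splat,
# a call like `eval(script, keys: [...])` silently loses `keys`.
class DiscourseRedis
  def method_missing(meth, *args, **kwargs, &block)
    if @redis.respond_to?(meth)
      @redis.public_send(meth, *args, **kwargs, &block)
    else
      super
    end
  end

  def respond_to_missing?(meth, include_private = false)
    @redis.respond_to?(meth, include_private) || super
  end
end
```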
It allows saving a local date to a calendar.
A modal gives the option to pick between ICS and Google. The user's choice can be remembered as a default for future actions.
* FEATURE: Return subcategories on categories endpoint
When using the API subcategories will now be returned nested inside of
each category response under the `subcategory_list` param. We already
return all the subcategory ids under the `subcategory_ids` param, but
you then would have to make multiple separate API calls to fetch each of
those subcategories. This way you can get **ALL** of the categories
along with their subcategories in a single API response.
The UI will not be affected by this change because you need to pass in
the `include_subcategories=true` param in order for subcategories to be
returned.
In a follow up PR I'll add the API scoping for fetching categories so
that a readonly API key can be used for the `/categories.json` endpoint. This
endpoint should be used instead of the `/site.json` endpoint for
fetching a site's categories and subcategories.
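A usage sketch (the hostname is hypothetical; the response shape follows this commit's description):

```ruby
require "net/http"
require "json"

uri = URI("https://forum.example.com/categories.json?include_subcategories=true")
body = JSON.parse(Net::HTTP.get(uri))

body["category_list"]["categories"].each do |category|
  # subcategory_list is only present when the param is passed
  subcategories = category["subcategory_list"] || []
  puts "#{category["name"]}: #{subcategories.size} subcategories"
end
```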
* Update PR based on feedback
- Have spec check for specific subcategory
- Move comparison check out of loop
- Only populate subcategory list if option present
- Remove empty array initialization
- Update api spec to allow null response
* More PR updates based on feedback
- Use a category serializer for the subcategory_list
- Don't include the subcategory_list param if empty
- For the spec check for the subcategory by id
- Fix spec to account for param not present when empty
Usually, when an email is received a user lookup is performed using the
email address found in the `From` header. When an email has an
`X-Original-From` header, if it is equal to `Reply-To` then it uses that
one instead. The comparison was sensitive to whitespace and other
insignificant characters such as quotes because it reconstructed the
`From` header.
For the fixture added in this commit, it compared the reconstructed
`From` header `John Doe <johndoe@example.com>` with the `Reply-To`
header `"John Doe" <johndoe@example.com>`.
We relied on backticks to identify and replace site setting names with links. Unfortunately, some translations don't follow this convention, breaking this feature.
Additionally, this lets us linkify `category settings` and `watched words` without using HTML in the translations.
You may notice that I split the texts we want to linkify into two groups. I did this on purpose to separate those that should be translated (regular_links) from those that shouldn't (site_settings_link). If you can think of a better solution, I'm open to suggestions.
* DEV: Remove HTML setting type and sanitization logic.
We concluded that we don't want settings to contain HTML, so I'm removing the setting type and sanitization logic. Additionally, we no longer allow the global-notice text to contain HTML.
I searched for usages of this setting type in the `all-the-plugins` repo and found none, so I haven't added a migration for existing settings.
* Mark Global notices containing links as HTML Safe.
FinalDestination now supports the `follow_canonical` option, which will perform an initial GET request, parse the canonical link if present, and perform a HEAD request to it.
We use this mode during embeds to avoid treating URLs with different query parameters as different topics.
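A hypothetical usage sketch (the option name comes from this commit; `resolve` is assumed to be FinalDestination's existing entry point):

```ruby
# Two embed URLs that differ only in query parameters should resolve
# to the same canonical URL, and therefore the same topic.
fd = FinalDestination.new(
  "https://blog.example.com/post?utm_source=feed",
  follow_canonical: true
)
fd.resolve # => URI of the <link rel="canonical"> target, when present
```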
Previously, while retrieving upload URLs in a post, S3 CDN URLs with a different path prefix (technically external URLs) were considered uploaded URLs. This created issues when checking for missing uploads.
This commit refactors can_invite_to? to use
can_invite_to_forum? for checking the site-wide permissions and then
performs topic-specific checks.
Similarly, can_invite_to? is always used with a topic object and this is
now enforced.
There was another problem before when `must_approve_users` site setting
was not checked when inviting users to forum, but was checked when
inviting to a topic.
Another minor security issue was that group owners could invite to
group topics even if they did not have the minimum trust level to do
it.
It was possible to see notifications of other users using routes:
- notifications/responses
- notifications/likes-received
- notifications/mentions
- notifications/edits
We weren't showing anything private (like notifications about private messages), only things that are publicly available elsewhere. Still, it feels strange that it's possible to look at someone else's notifications, and there is a risk that we could unintentionally leak something on these pages in the future.
This commit restricts these routes.
The all inboxes filter was introduced in
016efeadf6 but we decided to roll it back
for performance reasons. The main performance challenge here is that PG
has to basically loop through all the PMs that a user is allowed to view
before being able to order by `Topic#bumped_at`. The all inboxes filter was not
planned as part of the new/unread filter so we've decided not to tackle
the performance issue for the upcoming release.
Follow-up to 016efeadf6
* PERF: Improve database query perf when loading topics for a category.
Instead of left joining the `topics` table against `categories` by filtering with `categories.id`,
we can improve the query plan by filtering against `topics.category_id`
first before joining, which helps to reduce the number of rows in the
topics table that have to be joined against the other tables and also
makes better use of our existing index.
The following is a before and after of the query plan for a category
with many subcategories.
Before:
```
                                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.28..747.09 rows=30 width=12) (actual time=85.502..2453.727 rows=30 loops=1)
   ->  Nested Loop Left Join  (cost=1.28..566518.36 rows=22788 width=12) (actual time=85.501..2453.722 rows=30 loops=1)
         Join Filter: (category_users.category_id = topics.category_id)
         Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
         ->  Nested Loop Left Join  (cost=1.00..566001.58 rows=22866 width=20) (actual time=85.494..2453.702 rows=30 loops=1)
               Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
               Rows Removed by Filter: 1
               ->  Nested Loop  (cost=0.57..528561.75 rows=68606 width=24) (actual time=85.472..2453.562 rows=31 loops=1)
                     Join Filter: ((topics.category_id = categories.id) AND ((categories.topic_id <> topics.id) OR (categories.id = 11)))
                     Rows Removed by Join Filter: 13938306
                     ->  Index Scan using index_topics_on_bumped_at on topics  (cost=0.42..100480.05 rows=715549 width=24) (actual time=0.010..633.015 rows=464623 loops=1)
                           Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text))
                           Rows Removed by Filter: 105321
                     ->  Materialize  (cost=0.14..36.04 rows=30 width=8) (actual time=0.000..0.002 rows=30 loops=464623)
                           ->  Index Scan using categories_pkey on categories  (cost=0.14..35.89 rows=30 width=8) (actual time=0.006..0.040 rows=30 loops=1)
                                 Index Cond: (id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[]))
               ->  Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu  (cost=0.43..0.53 rows=1 width=16) (actual time=0.004..0.004 rows=0 loops=31)
                     Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
         ->  Materialize  (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=30)
               ->  Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users  (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
                     Index Cond: (user_id = 1103877)
 Planning Time: 1.359 ms
 Execution Time: 2453.765 ms
(23 rows)
```
After:
```
                                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.28..438.55 rows=30 width=12) (actual time=38.297..657.215 rows=30 loops=1)
   ->  Nested Loop Left Join  (cost=1.28..195944.68 rows=13443 width=12) (actual time=38.296..657.211 rows=30 loops=1)
         Filter: ((categories.topic_id <> topics.id) OR (topics.category_id = 11))
         Rows Removed by Filter: 29
         ->  Nested Loop Left Join  (cost=1.13..193462.59 rows=13443 width=16) (actual time=38.289..657.092 rows=59 loops=1)
               Join Filter: (category_users.category_id = topics.category_id)
               Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
               ->  Nested Loop Left Join  (cost=0.85..193156.79 rows=13489 width=20) (actual time=38.282..657.059 rows=59 loops=1)
                     Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
                     Rows Removed by Filter: 1
                     ->  Index Scan using index_topics_on_bumped_at on topics  (cost=0.42..134521.06 rows=40470 width=24) (actual time=38.267..656.850 rows=60 loops=1)
                           Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text) AND (category_id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[])))
                           Rows Removed by Filter: 569895
                     ->  Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu  (cost=0.43..1.43 rows=1 width=16) (actual time=0.003..0.003 rows=0 loops=60)
                           Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
               ->  Materialize  (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=59)
                     ->  Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users  (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
                           Index Cond: (user_id = 1103877)
         ->  Index Scan using categories_pkey on categories  (cost=0.14..0.17 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=59)
               Index Cond: (id = topics.category_id)
 Planning Time: 1.633 ms
 Execution Time: 657.255 ms
(22 rows)
```
* PERF: Optimize index on topics bumped_at.
Replace `index_topics_on_bumped_at` index with a partial index on `Topic#bumped_at` filtered by archetype since there is already another index that covers private topics.
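A sketch of what such a migration might look like (the index name and Rails version are illustrative, not necessarily what shipped):

```ruby
class ReplaceIndexTopicsOnBumpedAt < ActiveRecord::Migration[6.1]
  def change
    remove_index :topics, name: :index_topics_on_bumped_at
    # Partial index: private_message topics are excluded because a
    # separate index already covers them.
    add_index :topics, :bumped_at,
              where: "archetype <> 'private_message'",
              name: :index_topics_on_bumped_at_public
  end
end
```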
In the user bookmark list, when we show the excerpt of the bookmark
(which is usually just the bookmarked post's excerpt), for for_topic
bookmarks we want to show the first unread post's excerpt instead. This
is because when the user clicks on that bookmark link, they are taken
to the first unread post in the topic, not the OP, as per:
27699648ef
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit soon becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
in a format based on the maximum increment (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
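A minimal sketch of the conversion (not Discourse's actual helper, which may round differently):

```ruby
# Humanize a KB-based limit using decimal steps so that
# 512_000 KB reads as "512 MB" and 1536 KB as "1.5 MB".
def humanize_kb(kilobytes)
  units = %w[KB MB GB]
  size = kilobytes.to_f
  unit = units.shift
  while size >= 1000 && units.any?
    size /= 1000
    unit = units.shift
  end
  size = (size % 1).zero? ? size.to_i : size.round(1)
  "#{size} #{unit}"
end

humanize_kb(512_000) # => "512 MB"
humanize_kb(1536)    # => "1.5 MB"
```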
Instead of going to the OP of the topic for topic-level bookmarks
(which are bookmarks where for_topic is true) when clicking on the
bookmark in the quick access menu or on the user bookmark list,
this commit takes the user to the last unread post in
the topic instead. This should be generally more useful than landing
on the unchanging OP.
To make this work nicely, I needed to add the last_read_post_number to
the BookmarkQuery based on the TopicUser association. It should not add
too much extra weight to the query, because it is limited to the user
that we are fetching bookmarks for.
Also fixed an issue where the bookmark serializer highest_post_number was
not taking into account whether the user was staff, which is when we
should use highest_staff_post_number instead.
* DEV: use active record `save!` instead of mini sql.
The `save!` method will trigger the before_save callback `match_primary_group_changes` for the User model. Otherwise `flair_group_id` won't be removed from the user.
* Check whether the method `match_primary_group_changes` is called or not.
Allows creating a bookmark with the `for_topic` flag introduced in d1d2298a4c set to true. This happens when clicking on the Bookmark button in the topic footer when no other posts are bookmarked. In a later PR, when clicking on these topic-level bookmarks the user will be taken to the last unread post in the topic, not the OP. Only the OP can have a topic level bookmark, and users can also make a post-level bookmark on the OP of the topic.
I had to do some pretty heavy refactors because most of the bookmark code in the JS topics controller was centred around instances of Post JS models, but the topic level bookmark is not centred around a post. Some refactors were just for readability as well.
Also removes some missed reminderType code from the purge in 41e19adb0d
Due to the way that rswag expands shared components we were getting this
warning when linting our api docs:
```
Component: "user_response" is never used.
```
This change refactors the `api/users_spec.rb` file so that it uses the
new way of doing things with a separate `user_get_response.json` schema
file rather than the old way of loading a shared response inside of the
swagger_helper.rb file.
The display name can have quotes around it, which does not work
with our current comparison of a from field (in this case Reply-To)
and another header (X-Original-From), because we are not comparing
the two values in the same way. This causes an issue where the
commit here: b88d8c8 will not
work properly; the forwarded email gets the From address instead
of the Reply-To address as intended.
After deleting a category, we should soft-delete the category definition topic instead of hard-deleting it. Otherwise it causes issues during the user merge action if the source user has an orphan post that belongs to the deleted topic.
The color_scheme_id needs to be an integer, not a string.
This is one of the failing tests that showed this error:
https://github.com/discourse/discourse/runs/3598414971
`POSSIBLE ISSUE W/: /user_themes/0/color_scheme_id`
And this is part of the site.json response:
```
...
"user_themes"=>[{"theme_id"=>149, "name"=>"Cool theme 111", "default"=>false, "color_scheme_id"=>37}]
...
```
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a)
This commit marks the column as readonly, ignores it, and removes
the index, and it will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
First reported in https://meta.discourse.org/t/-/202482/19
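The readonly/ignore pattern looks roughly like this (a sketch, not the exact diff):

```ruby
class Bookmark < ActiveRecord::Base
  # Keep ActiveRecord from reading or writing the column while it
  # still physically exists; the actual DROP COLUMN ships in a
  # later migration.
  self.ignored_columns = %w[reminder_type]
end
```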
There are two optimizations being applied here:
1. Fetch a user's group ids in a separate query instead of including it
as a sub-query. When I tried a subquery, the query plan became very
inefficient.
2. Join against the `topic_allowed_users` and `topic_allowed_groups`
table instead of doing an IN against a subquery where we UNION the
`topic_id`s from the two tables. From my profiling, this enables PG to
do a backwards index scan on the `index_topics_on_timestamps_private`
index.
This commit fixes a bug where listing all messages was incorrectly
excluding topics if a topic has been archived by a group even if the
user did not belong to the group.
This commit also fixes another bug where dismissing private messages
selectively was subjected to the default limit of 30.
This new column will be used to indicate that a bookmark
is at the topic level. The first post of a topic can be
bookmarked twice after this change -- with for_topic set
to true and with for_topic set to false.
A later PR will use this column for logic to bookmark the
topic, and then topic-level bookmark links will take you
to the last unread post in the topic.
See also 22208836c5
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column and it's better to rely on the
post.topic_id as the canonical topic_id for bookmarks, then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
Discourse regularly sends messages to admins when potential problems persist. Most of the time these messages have exactly the same content. In that case, when there are no replies, the old message should be trashed before a new one is created.
Administrators can use a second factor to confirm granting admin access
without using email. The old method of confirmation via email is still
used as a fallback when a second factor is unavailable.
The previous excerpt was a simple truncated raw message. Starting with
this commit, the raw content of the draft is cooked and an excerpt is
extracted from it. The logic for extracting the excerpt mimics the
`ExcerptParser` class, but does not implement all of its functionality,
being a much simpler implementation.
The two draft controllers have been merged into one and the /draft.json
route has been changed to /drafts.json to be consistent with the other
route names.
When copying an existing upload stub temporary object
on S3 to its final destination we were not copying across
its additional headers such as content-disposition and
cache-control, which led to issues like attachments not
downloading with their original filename when clicking
the download links in posts.
This is because the metadata_directive = REPLACE option
was not being passed to object.copy_from(), so only the
source object's headers were being used. Added an option
for apply_metadata_to_destination to apply this option
conditionally, because we may not always want to replace
this metadata, but we definitely do when copying a temporary
upload.
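A sketch of the copy with aws-sdk-s3 (the bucket, keys, and header values are placeholders):

```ruby
require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")
destination = Aws::S3::Object.new("uploads-bucket", "original/1X/final.pdf", client: s3)

# Without metadata_directive: "REPLACE", S3 copies the source
# object's headers verbatim and ignores the ones passed here.
destination.copy_from(
  copy_source: "uploads-bucket/temp/1X/stub.pdf",
  metadata_directive: "REPLACE",
  content_disposition: %(attachment; filename="report.pdf"),
  cache_control: "max-age=31556952, public, immutable"
)
```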
When a user archives a personal message, they are redirected back to the
inbox and will refresh the list of the topics for the given filter.
Publishing an event to the user results in an incorrect incoming message
because the list of topics has already been refreshed.
This does mean that if a user has two tabs opened, the non-active tab
will not receive the incoming message but at this point we do not think
the technical trade-offs are worth it to support this feature. We
basically have to somehow exclude a client from an incoming message
which is not easy to do.
Follow-up to fc1fd1b416
Watched words of type 'replace' or 'link' replaced the text inside
mentions or hashtags too, which broke these. These types of watched
words must skip any match that has an @ or # before it.
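A sketch of the guard using Ruby's pre-match variable (simplified from the actual watched-word processing):

```ruby
def replace_watched_word(text, word_regexp, replacement)
  text.gsub(word_regexp) do |match|
    preceding = $`[-1] # the character just before this match
    # Leave mentions and hashtags untouched.
    ["@", "#"].include?(preceding) ? match : replacement
  end
end

replace_watched_word("hello @hello hello", /hello/, "hi")
# => "hi @hello hi"
```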
At this point in time, we do not think it is worth supporting unread and
new when an admin is looking at another user's messages.
Follow-up to fc1fd1b416
One of the fields that should be present for openapi docs is the "license"
field.
https://spec.openapis.org/oas/latest.html#infoObject
Our API docs already had a license, so this commit just specifies that
and provides a link to it.
In order to include the new/unread count in the browse more message
under suggested topics, a couple of technical changes have to be made.
1. `PrivateMessageTopicTrackingState` is now auto-injected which is
similar to how it is done for `TopicTrackingState`. This is done so
we don't have to attempt to pass the `PrivateMessageTopicTrackingState`
object multiple levels down into the suggested-topics component. While
the object is auto-injected, we only fetch the initial state and start
tracking when the relevant private messages routes have been hit and only
when a private message's suggested topics are loaded. This is
done as we do not want to add the extra overhead of fetching the initial
state to all page loads but instead wait till the private messages
routes are hit.
2. Previously, we would stop tracking once the `user-private-messages`
route has been deactivated. However, that is not ideal since
navigating out of the route and back means we send an API call to the
server each time. Since `PrivateMessageTopicTrackingState` is kept in
sync cheaply via messageBus, we can just continue to track the state
even if the user has navigated away from the relevant stages.
When forwarding emails into the group inbox, we now use the
original sender email as the from_address since
2ac9fd9dff. However, we have not
been saving the original CC addresses of the forwarded email,
which are needed to include those recipients in on the conversation
when replying via the group inbox.
This commit captures the CC addresses on the incoming email, and
makes sure the emails are created as staged users and added to the
list of topic allowed users so they are included on CC's sent by
the GroupSmtpEmail and other jobs.
When 2ac9fd9dff was done, this
affected the small post that is created when forwarding an email
into the group inbox. Instead of using the name and the email of
the user who forwarded the email, it used the original from email
and name to create the small post. So instead of something like
"Discourse Team forwarded the above email" we ended up with
"John Smith forwarded the above email" which is incorrect.
This fixes the issue by creating a staged user for the forwarding
email address (if such a user does not yet exist) and uses that
for the "forwarded" small post instead.
Other locale characters in file names (e.g. é, ä) as well
as special characters can cause issues on S3, notably the S3
copy object operation does not support these special characters.
Instead of storing the original file name in the key, which is
unnecessary, we now generate a random file name with the original
extension for the temporary file and use that for all external
upload stub operations.
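A sketch of the key generation (the helper name is illustrative):

```ruby
require "securerandom"

# Keep only the extension; the rest of the user's file name may
# contain characters that S3's copy operation cannot handle.
def temp_object_file_name(original_filename)
  "#{SecureRandom.hex(16)}#{File.extname(original_filename)}"
end

temp_object_file_name("résumé (final).pdf")
# => e.g. "0b7c9f2e4a1d8c3b5e6f7a8b9c0d1e2f.pdf"
```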
From the openapi spec:
https://spec.openapis.org/oas/latest.html#fixed-fields-7
each endpoint needs to have an `operationId`:
> Unique string used to identify the operation. The id MUST be unique
> among all operations described in the API. The operationId value is
> case-sensitive. Tools and libraries MAY use the operationId to uniquely
> identify an operation, therefore, it is RECOMMENDED to follow common
> programming naming conventions.
Running the linter on our openapi.json file with this command:
`npx @redocly/openapi-cli lint openapi.json`
produced the following warning on all of our endpoints:
> Operation object should contain `operationId` field
This commit resolves these warnings by adding an operationId field to
each endpoint.
DEV: Allow passing cook_method to TopicEmbed.import to override default
This will be used in the rss-polling plugin when we want to have
oneboxes on feed content, like youtube for example.
Previously, a change to a group's `default_notification_level` would only affect users added after the change.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
The seed-fu gem resets the sequence on all the tables it touches. In some situations, this can cause primary keys to be re-used. This commit introduces a freedom patch which ensures seed-fu only touches the sequence when it is less than the id of one of the seeded records.
PresenceChannel aims to be a generic system for allow the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is a boolean thing - each user is either present, or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` property can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";

export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
{{#each presenceChannel.users as |user|}}
<li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
{{/each}}
</ul>
```
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.
However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.
Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
Users can invite people to a topic and they will be automatically
redirected to the topic when logging in after signing up. This commit
ensures an "invited_to_topic" notification is created when the invite is
redeemed.
The same notification is used for the "Notify" sharing method that is
found in share topic modal.
This bug was introduced by f66007ec83.
In PostJobsEnqueuer we previously did not fire the after_post_create
event and after_topic_create event for private message topics. This was
changed in the above commit in order to publish message bus messages
for topic tracking state updates. Unfortunately this caused the
NotifyMailingListSubscribers job to be enqueued for all posts including
private messages, and admins and the users involved in the PMs got
emailed the contents of the PMs if they had mailing list mode enabled.
Luckily the impact of this was mitigated by a Guardian#can_see? check
for each mailing list mode user in the NotifyMailingListSubscribers job.
We never want to notify mailing list mode subscribers for private messages
so an early return has been added there, plus the logic in PostJobsEnqueuer
has been fixed, and tests have been added to that class where there were
none before.
Since ad3ec5809f when a user chooses
the Dismiss New... option in the New topic list, we send a request
to topics/reset-new.json with ?tracked=false as the only parameter.
This then uses Topic as the scope for topics to dismiss, with no
other limitations. When we do topic_scope.pluck(:id), it gets the
ID of every single topic in the database (that is not deleted) to
pass to TopicsBulkAction, causing a huge query with severe performance
issues.
This commit changes the default scope to use
`TopicQuery.new(current_user).new_results(limit: false)`
which should only use the topics in the user's New list, which
will be a much smaller list, depending on the user's "new_topic_duration_minutes"
setting.
This is unnecessary, as when the temporary key is created
in S3Store we already include the s3_bucket_folder_path, and
the key will always start with temp/ to assist with lifecycle
rules for multipart uploads.
This was affecting Discourse.store.object_from_path,
Discourse.store.signed_url_for_path, and possibly others.
See also: e0102a5
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
Previously we had temp/ in the middle of the S3 key path like so
* /uploads/default/temp/randomstring/test.png (normal site)
* /sitename/uploads/default/temp/randomstring/test.png (s3 folder path site)
* /standard10/uploads/sitename/temp/randomstring/test.png (multisite site)
However this necessitates making a lifecycle rule to clean up incomplete
S3 multipart uploads for every site, something which we cannot do. It makes
much more sense to have a structure with /temp at the start of the key,
which is what this commit does:
* /temp/uploads/default/randomstring/test.png (normal site)
* /temp/sitename/uploads/default/randomstring/test.png (s3 folder path site)
* /temp/standard10/uploads/sitename/randomstring/test.png (multisite site)
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary S3 upload into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
When emails were forwarded to a group inbox by the email address
of the group, for example when an email ends up in spam and must
be manually forwarded to the group+site@discoursemail.com address,
the OP of the topic ended up being the group's email address instead
of the sender who originally sent the email to the group inbox.
This commit detects that an email has been forwarded using existing
tools, and if the from address matches one of the group incoming
email addresses, then we look at the forwarded email's from address
and use that instead for the incoming email from address as well as
the staged/regular user used for the Topic.user.
This will make it much cleaner to forward emails into a group inbox,
and will prevent issues with PostAlerter where the OP is double-notified
for these emails.
Currently, pinned topics are ordered by the `bumped_at` column. This behavior is not desired because it gives admins no control over the order of pinned topics. This PR makes pinned topics ordered by the `pinned_at` column. A topic that is pinned last appears first in topic lists. If an admin wants an already pinned topic to appear first in the list of pinned topics, they'll have to unpin that topic and pin it again.
Meta topic: https://meta.discourse.org/t/how-do-i-set-the-order-of-pinned-topics/16935/23?u=osama.
* FIX: Revoking admin or moderator status doesn't require refresh to delete/anonymize/merge user
On the /admin/users/<id>/<username> page, there are action buttons that are either visible or hidden depending on a few fields from the AdminDetailsSerializer: `can_be_deleted`, `can_be_anonymized`, `can_be_merged`, `can_delete_all_posts`.
These fields are updated when granting/revoking admin or moderator status. However, those updates were not being reflected on the page. E.g. if a user is granted moderation privileges, the 'anonymize user' and 'merge' buttons still appear on the page, which is inconsistent with the backend state of the user. It requires refreshing the page to update the state.
This commit fixes that issue, by syncing the client model state with the server state when handling a successful response from the server. Now, when revoking privileges, the buttons automatically appear without refreshing the page. Similarly, when granting moderator privileges, the buttons automatically disappear without refreshing the page.
* Add detailed user response to spec for changed routes.
Add tests to verify that the revoke_moderation, grant_moderation, and revoke_admin routes return a response formatted according to the AdminDetailedUserSerializer.
Emails can include the marker in a different language, depending on
site and user settings. The email receiver always looked for the marker
in the default language.
This is useful in the DiscourseHub mobile app, currently the app queries
the `about.json` endpoint, which can raise a CORS issue in some cases,
for example when the site only accepts logins from an external provider.
Allow admins to configure exceptions to our Rails rate limiter.
Configuration happens in the environment variables, and works with both
IPs and CIDR blocks.
Example:
```
env:
DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS: >-
14.15.16.32/27
216.148.1.2
```
`nullable` is no longer a valid type, and types also can't be an empty
string, so this brings a number of issues with types into compliance
with the openapi spec.
When a staff member clicks on a user's number of flagged posts, we redirect them to the review queue, so it makes sense to use the number of items there to calculate the count.
We used to look at post action items to calculate this number, which doesn't match the number of items in the queue if old flags exist.
When a post is flagged with the reason of 'Something Else' a brief message can be added by the user which subsequently creates a `meta_topic` private message. The group `moderators` is automatically added to this topic.
If category group moderation is enabled, and the post belongs to a category with a reviewable group, that group should also be added to the meta_topic.
Note: This extends the `notify_moderators` logic, and will add the reviewable group to the meta_topic, regardless of the settings of that group.
The latest openapi spec version is v3.1.0
https://spec.openapis.org/oas/v3.1.0
Specifying the latest version will allow our openapi spec linter to use
this version and allow us to use the new type format that allows for
specifying a type as "null", which we need because sometimes our api
responses include null values instead of a "string", "integer", or
"object" type.
See: https://stackoverflow.com/a/48114322/588458
In 2018, a check was added so that the TL1 welcome message is sent unless the user already has the BasicBadge granted.
I think we should also check if the BasicBadge is even enabled. Otherwise, each time a group is assigned to a user and their trust level is recalculated, they will receive a welcome message.
When a user signs up via an external auth method, a new link is added to the signup modal which allows them to connect an existing Discourse account. This will only happen if:
- There is at least 1 other auth method available
and
- The current auth method permits users to disconnect/reconnect their accounts themselves
This handles a few edge cases which are extremely rare (due to the UI layout), but still technically possible:
- Ensure users are authenticated before attempting association.
- Add a message and logic for when a user already has an association for a given auth provider.
We weren't calling clear_all! for the rate limiter which
was the first problem, and the second problem was that it
is very odd to do state cleanup before tests instead of after,
so we moved the disabling and clear_all! to after.
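The corrected shape of the spec hooks, sketched with RSpec and Discourse's existing `RateLimiter` class methods:

```ruby
before { RateLimiter.enable }

after do
  RateLimiter.disable
  # Clean up after the test so leftover rate-limit state can never
  # leak into whichever test happens to run next.
  RateLimiter.clear_all!
end
```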
This reverts a part of changes introduced by https://github.com/discourse/discourse/pull/13947
In that PR I:
1. Disallowed topic feature links for TL-0 users
2. Additionally, disallowed just putting any URL in topic titles for TL-0 users
Actually, we don't need the second part. It introduced unnecessary complexity for no good reason. In fact, it tries to do the job that anti-spam plugins (like Akismet plugin) should be doing.
This PR reverts this second change.
This disallows putting URLs in topic titles for TL0 users, which means that:
If a TL-0 user puts a link into the title, a topic featured link won't be generated (as if it was disabled in the site settings)
Server methods for creating and updating topics will refuse featured links when they are called by TL-0 users
TL-0 users won't be able to put any link into the topic title. For example, the title "Hey, take a look at https://my-site.com" will be rejected.
Also, it improves server behavior a bit when creating or updating featured links on topics in categories where featured links are disabled. Before, the server just silently ignored a featured link field that was passed to it; now it will return a 422 response.
* FIX: Update draft count when sequence is increased
Sometimes users ended up having a draft count higher than the actual
number of drafts.
* FIX: Do not update draft count twice
The call to DraftSequence.next! above already does it.
Group flair is not removed when removing a user from the group, since the `before_save` callback methods are not triggered when using the `update_columns` method.
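In other words, a minimal sketch of the difference:

```ruby
# update_columns writes straight to the database and skips
# callbacks, so match_primary_group_changes never runs:
user.update_columns(primary_group_id: nil)

# save (or save!) runs before_save callbacks, which is what
# clears flair_group_id:
user.primary_group_id = nil
user.save!
```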
Discourse automatically sends a private message after backup or
restore finished. The private message used to contain the log inline
even when it was very long. A very long log can create issues because
the length of the post will be over the maximum allowed length of a
post. When that happens, Discourse will try to create an upload with
the logs. If that fails, it will trim the log and inline it.
Inlining secure images with the same name was not possible because they
were indexed by filename. If an email contained two files with the same
name, only the first image was used for both of them. The other file
was still attached to the email.
When merging two user accounts, don't merge the source user's email address if the target user is not a human.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
When the Reply-To header is present for incoming emails we
want to use it instead of the from address. This is usually the
case when forwarding an email via a mailing list into Discourse.
For now we are only using the Reply-To header if the email has
been forwarded via Google Groups, which is why we are checking the
X-Original-From header too. In future we may want to use the Reply-To
header in more cases.
Searching in a category looked only one level down, ignoring the site
setting max_category_nesting. The user interface did not support the
third level of categories and did not display them in the "Categorized"
input of the advanced search options.
* FEATURE: Onebox can match engines based on the content_type
`FinalDestination` now returns the `content_type` of a resolved URL.
`Oneboxer` passes this value to `Onebox` itself. Onebox engines can now specify a `matches_content_type` regex of content_types that the engine can handle, regardless of the URL.
`ImageOnebox` will match URLs with a content type of `image/png`, `jpg`, `gif`, `bmp`, `tif`, etc.
This will allow images that exist at a URL without a file type extension to be correctly rendered, assuming a valid `content_type` is returned.
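A rough sketch of an engine opting in (the `matches_content_type` name comes from this commit's description):

```ruby
# An engine that matches on the resolved content_type, regardless
# of whether the URL ends in a recognizable file extension.
class Onebox::Engine::ImageOnebox
  include Onebox::Engine

  matches_content_type(/\Aimage\/(png|jpe?g|gif|bmp|tiff?)/)
end
```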
When a post is created, the draft sequence is increased and then older
drafts are automatically removed by executing a raw SQL query. This
skipped the Draft model callbacks and did not update the user's draft
count.
I fixed another problem related to a raw SQL query from the
Draft.cleanup! method.
We shouldn't be checking if a user is allowed to do an action in the logger. We should be checking it just before we perform the action. In fact, guardians in the logger can make things even worse in case of a security bug. Let's say we forgot to check user's permissions before performing some action, but we still have a call to the guardian in the logger. In this case, a user would perform the action anyway, and this action wouldn't even be logged!
I've checked all cases and I confirm that we're safe to delete these calls from the logger.
I've added two calls to guardians in admin/user_controller. We didn't have security bugs there, because regular users can't access admin/... routes at all. But it's good to have calls to the guardian in these methods anyway; neighboring methods have them.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
This commit adds the number of drafts a user has next to the "Draft"
label in the user preferences menu and activity tab. The count is
updated via MessageBus when a draft is created or destroyed.
In badge queries for 'First Share' and 'Nice/Good/Great Share' badges,
check that the user exists.
For 'Nice+ Share' badges, also grant badges if the number of shares is
equal to the threshold count to better match the descriptions.
* DEV: Return 400 instead of 500 for invalid top period
This change will prevent a fatal 500 error when passing in an invalid
period param value to the `/top` route.
* Check if the method exists first
I couldn't get `ListController.respond_to?` to work, but was still able
to check if the method exists with
`ListController.action_methods.include?`. This way we can avoid relying
on the `NoMethodError` exception which may be raised during the course
of executing the method.
* Just check if the period param value is valid
* Use the new TopTopic.validate_period method
Prior to this fix, whisperers in personal messages were revealed in
the topic's participants list even though non-staff users are unable to
see the whispers.
* Copy remove_member to new `leave` method
* Remove unneeded code from the leave method
* Rearrange the leave method
* Remove unneeded code from the remove_member method
* Add tests
* Implement on the client side
Using an invalid value was allowed. This commit tries to automatically
fix the color by adding a missing # symbol, or will show an error to the
user if that is not possible and the value is not a CSS color either.
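A sketch of the auto-fix (the helper name is illustrative):

```ruby
# Prepend the missing "#" when the value is a bare 3- or 6-digit
# hex string; otherwise return it unchanged for CSS-color validation.
def normalize_hex_color(value)
  value = value.to_s.strip
  return value if value.start_with?("#")
  value.match?(/\A\h{3}(\h{3})?\z/) ? "##{value}" : value
end

normalize_hex_color("c0ffee") # => "#c0ffee"
normalize_hex_color("tomato") # => "tomato" (valid CSS color, left as-is)
```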
* Copy the add_members method to the new join method
* Remove unneeded code from the join method
* Rearrange the join method
* Remove unneeded stuff from the add_members method
* Extract add_user_to_group method
* Implement on the client side
* Tests
* Doesn't inline users.uniq
* Return promise from join.then()
* Remove unnecessary begin and end
* Revert "Return promise from join.then()"
This reverts commit bda84d8d
* Remove variable already_in_group
Will show the last 6 seen users as filtering suggestions when typing @ in quick search. (Previously the user suggestion required a character after the @.)
This also adds a default limit of 6 to the user search query, previously the backend was returning 20 results but a maximum of 6 results was being shown anyway.
When configured, all topics in the category inherit the slow mode
duration from the category's default.
Note that currently there is no way to remove the slow mode from the
topics once it has been set.
When the Forever option is selected for suspending a user, the user is suspended for 1000 years. Without customizing the site’s text, this time period is displayed to the user in the suspension email that is sent to the user, and if the user attempts to log back into the site. Telling someone that they have been suspended for 1000 years seems likely to come across as a bad attempt at humour.
This PR special-cases the messages shown when a user is suspended or silenced forever.
We now use the group's full name in group SMTP emails, so we are dropping the `via #{site_name}` part. If group owners still want this, they can just change the full name of the group.
Configuring staged users to watch categories and tags is a way to sign
them up to get many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt-in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. Default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Flips content_security_policy_frame_ancestors default to enabled, and
removes HTTP_REFERER checks on embed requests, as the new referer
privacy options made the check fragile.
There was a bug with changing timestamps using the topic wrench button. Under some circumstances, a topic was disappearing from the top of the latest tab after changing timestamps. Steps to reproduce:
- Choose a topic on the latest tab (the topic should be created some time ago, but has recent posts)
- Change topic timestamps (for example, move them one day forward):
- Go back to the latest tab and see that topic has disappeared.
This PR fixes this. We were setting topic.bumped_at to the timestamp the user specified in the modal. This is incorrect: instead, we should set topic.bumped_at to the created_at timestamp of the last regular post on the topic (not a whisper and so on).
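A sketch of the corrected assignment (the query shape is illustrative):

```ruby
# Sketch: bump to the created_at of the last regular post, rather than
# to the user-specified timestamp from the modal.
last_regular_post = topic.posts
  .where(post_type: Post.types[:regular])
  .order(:created_at)
  .last

topic.update!(bumped_at: last_regular_post.created_at) if last_regular_post
```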
* No need to return anything except a status code from the server
* Switch a badge state before sending a request and then switch it back in case of an error
Currently, when bulk-awarding a badge that can be granted multiple times, users in the CSV file are granted the badge at most once, no matter how many times they're listed in the file, and only if they don't already have the badge.
This PR adds a new option to the Badge Bulk Award feature so that it's possible to grant users a badge even if they already have the badge and as many times as they appear in the CSV file.
A couple of weeks ago we made a change that skipped compressing assets used by the theme qunit page: https://github.com/discourse/discourse/pull/13619. This is a follow-up PR to stop the application helper from generating the assets for the theme qunit page with `.br` or `.gzip` extensions when a site uses S3 as a CDN.
Fixes two issues:
- ignores invalid XML in custom icon sprite SVG file (and outputs an error if sprite was uploaded via admin UI)
- clears SVG sprite cache when deleting an `icons-sprite` upload in a theme
This PR fixes a couple of issues related to group SMTP:
1. When running the group SMTP job, we were exiting early if the email was for the OP because of an IMAP race condition. However, this causes issues when replying as a new topic to an existing SMTP topic, as the recipient does not get the OP email, which can cause threading problems.
2. When sending emails for a new topic spun out as in issue 1, we were not maintaining the original subject/topic title, because that comes from the incoming email record, which was never created since the group SMTP email was never sent due to issue 1.
By default, Twitter will return the URL for the avatar image of the tweet poster as the `og:image` value.
However, if the `user_generated` attribute is true, we should not use this as the avatar URL, as it will be a URL of an image in the tweet itself (e.g., an image belonging to a tweeted news story).
Previously we would re-calculate topic_user.liked for all users who have ever viewed the source or destination topic. This can be very expensive on large sites. Instead, we can use the array of moved post ids to find which users are actually affected by the move, and restrict the update query to only check/update their records.
On an example site this reduced the `update_post_action_cache` time from ~27s to 300ms
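A sketch of the narrowed recalculation, assuming a `moved_post_ids` array and `source_topic`/`destination_topic` records are in scope (variable names are illustrative):

```ruby
# Sketch: only users who liked one of the moved posts can have a stale
# topic_user.liked flag, so restrict the check/update to them.
like = PostActionType.types[:like]

affected_user_ids =
  PostAction.where(post_id: moved_post_ids, post_action_type_id: like)
            .distinct
            .pluck(:user_id)

TopicUser
  .where(topic_id: [source_topic.id, destination_topic.id])
  .where(user_id: affected_user_ids)
  .find_each do |tu|
    liked = PostAction
      .joins(:post)
      .where(user_id: tu.user_id, post_action_type_id: like, deleted_at: nil)
      .where(posts: { topic_id: tu.topic_id })
      .exists?
    tu.update_column(:liked, liked) if tu.liked != liked
  end
```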
This PR adds the first use of Uppy in our codebase, hidden behind an `enable_experimental_image_uploader` site setting. When the setting is enabled, only the user card background uploader will use the new `uppy-image-uploader` component added in this PR.
I've introduced an `UppyUpload` mixin that has feature parity with the existing `Upload` mixin, and improves it slightly to deal with multiple/single file distinctions and validations better. For now, this just supports the `XHRUpload` plugin for Uppy, which keeps our existing POST to `/uploads.json`.
If a user had a staged account and logged in using a third-party service
a different username was suggested. This change will try to use the
username given by the authentication provider first, then the current
staged username and last suggest a new one.
The date shown in topic timeline was one day later if the post at that
position was made near midnight. This happened because the number of
days was rounded down.
When a staged user tried to redeem an invite, a different username was
suggested and manually typing the staged username failed because the
username was not available.
When secure media is enabled or when upload secure status
is updated, we also try to update the upload ACL. However
if the object storage provider does not implement this we
get an Aws::S3::Errors::NotImplemented error. This PR handles
this error so the update_secure_status method does not error
out and still returns whether the secure status changed.
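The handling looks roughly like this (assuming the store exposes an `update_upload_ACL` method; `Aws::S3::Errors::NotImplemented` is the real error class):

```ruby
# Sketch: treat a NotImplemented response from the provider as
# "nothing to do" so the secure-status update still succeeds.
begin
  Discourse.store.update_upload_ACL(upload)
rescue Aws::S3::Errors::NotImplemented => e
  Rails.logger.info(
    "Skipped ACL update for upload #{upload.id}: " \
    "provider does not implement ACLs (#{e.message})"
  )
end
```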
User flair was given by the user's primary group. This PR separates the
two, adds a new field to the user model for the flair group ID, and lets
users select their flair in their user preferences.
There was an issue with the `TopicTimerEnqueuer` where any timer
whose `enqueue_typed_job` call raised an error would prevent
all of the pending timers after it from running.
To mitigate this we just capture the error and log it (so we can
still fix it if needed for bug squashing) and proceed with the
rest of the timer enqueues.
The commit https://github.com/discourse/discourse/pull/13544 originally
highlighted this issue on hosted sites.
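The mitigation is essentially a per-timer rescue (the surrounding loop is illustrative; `enqueue_typed_job` is the method named above):

```ruby
# Sketch: rescue per timer so one failure cannot halt the rest of the
# pending timer enqueues; log so the error is still visible.
timers.each do |timer|
  begin
    enqueue_typed_job(timer)
  rescue => e
    Discourse.warn_exception(
      e,
      message: "Failed to enqueue topic timer #{timer.id}"
    )
  end
end
```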
Mixing multisite and standard specs can lead to issues (e.g. when using `fab!`)
Disabled the upcoming rubocop rule (https://github.com/discourse/rubocop-discourse/pull/11) for two files that have thoroughly tangled both types of specs.
* FEATURE: Redirect logged in user to invite topic
Users who were already logged in and were given an invite link to a
topic used to see an error message saying that they already have an
account and cannot redeem the invite. This commit amends that behavior
and redirects the user directly to the topic, if they can see it.
* FEATURE: Add logged in user to invite groups
Users who were already logged in and were given an invite link to a
group used to see an error message saying that they already have an
account and cannot redeem the invite. This commit amends that behavior
and adds the user to the group.
Take 2 of https://github.com/discourse/discourse/pull/13466.
Fixes a few issues with the original PR:
- color definition stylesheet target now includes the theme id, to avoid themes set to use the default color scheme loading the same stylesheet
- changes the internal cache key for color definition stylesheet to reset the pre-existing cache
Also moves all the "top topics by period" routes to a query string param.
/top/monthly => /top?period=monthly
/c/:slug/:id/l/top/monthly => /c/:slug/:id/l/top?period=monthly
/tag/:slug/l/top/daily => /tag/:slug/l/top?period=daily (new)
Users can invite people to topics from a secured category, but they will
not be redirected to the topic after signing up unless they have the
permissions to view the topic. This commit shows a warning when invite
is saved if the topic is in a secured category and none of the invite
groups are allowed to see it.
Skip group SMTP email (and add a log entry, see the sketch after the list) if:
* topic is deleted
* post is deleted
* SMTP has been disabled for the group
Skip without log if:
* enable_smtp site setting is false
* disable_emails site setting is yes
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
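A condensed sketch of that decision (the `log_skip` and `deliver` helpers are hypothetical):

```ruby
# Sketch of the skip logic: silent skips for the global settings,
# logged skips for the per-topic/post/group conditions.
def send_group_smtp_email(post, group)
  return if !SiteSetting.enable_smtp || SiteSetting.disable_emails == "yes"

  return log_skip(post, "topic is deleted") if post.topic.blank? || post.topic.trashed?
  return log_skip(post, "post is deleted") if post.trashed?
  return log_skip(post, "SMTP is disabled for the group") unless group.smtp_enabled

  deliver(post, group) # hypothetical delivery helper
end
```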
For other private messages we have the site setting
personal_email_time_window_seconds (default 20s) which allows
people to edit their post etc. before the email is sent.
This PR makes the Jobs::GroupSmtpEmail enqueuer in the
PostAlerter use the same delay.
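Roughly (job arguments are illustrative; `Jobs.enqueue_in` is the standard Discourse job helper):

```ruby
# Sketch: give group SMTP emails the same editing grace period as other
# personal message emails before the delivery job runs.
Jobs.enqueue_in(
  SiteSetting.personal_email_time_window_seconds,
  :group_smtp_email,
  group_id: group.id,
  post_id: post.id,
  email: recipient_email
)
```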