Meta thread: https://meta.discourse.org/t/cant-dismiss-unread-if-last-post-is-an-assign-or-whisper/131823/7
* When sending a whisper, highest_staff_post_number is set in Topic's
next_post_number method, but highest_post_number is left alone. This leaves
a situation where highest_staff_post_number > highest_post_number.
* When TopicsBulkAction#dismiss_posts was run, it only set the topic_user's
highest_seen_post_number from the topic's highest_post_number, so if the user
was staff and the last post in the topic was a whisper, their highest seen
number was not updated and the topic stayed unread.
Testing showed that the bug had nothing to do with Assign/Unassign, since those do not affect post numbers; only whispering does.
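As a rough illustration of the staff-aware dismissal idea (this is a hedged sketch, not the actual patch; the helper name is made up):
```
# Hypothetical helper; Discourse's real dismissal code differs.
def dismissed_post_number(topic, staff:)
  if staff
    # Staff can see whispers, so take the larger of the two counters.
    [topic.highest_post_number, topic.highest_staff_post_number].max
  else
    topic.highest_post_number
  end
end
```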
In a category's settings, the Tags tab has two new fields to
specify the number of tags that must be added to a topic
from a tag group. When creating a new topic, an error will be
shown to the user if the requirement isn't met.
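A minimal sketch of what that validation could look like; the accessor names (`required_tag_group`, `min_tags_from_required_group`) are assumptions for illustration, not necessarily the real column names:
```
# Hypothetical check run when a new topic is created; names are illustrative.
def enough_required_group_tags?(category, tag_names)
  group = category.required_tag_group
  return true if group.nil?

  group_tag_names = group.tags.pluck(:name)
  (tag_names & group_tag_names).size >= category.min_tags_from_required_group
end
```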
The routes for categories are changing. The scheme that I intend to move
us to is:
/c/*slug_path/(:id)/ENDPOINT
/c/*slug_path/(:id)
This commit adds support for the new scheme to the server side without
dropping support for existing URLs. It is necessary to support existing
URLs for two reasons:
* This commit does not change any client side routing code,
* Posts that contain category hashtags referring to a root category are baked
into URLs that do not fit this new scheme (/c/[id]-[slug]).
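For illustration (slugs and ids made up):
/c/announcements/5 and /c/support/installation/26 fit the new scheme,
/c/announcements/5/l/latest adds an ENDPOINT,
/c/5-announcements is an old-style URL (/c/[id]-[slug]) that must keep working.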
This was not causing any known issue, because the system user ID is always the same across all sites. However, we should cache this on a per-site basis to be safe.
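A minimal sketch of per-site memoization, keyed by the current multisite database; the method below is illustrative only, not Discourse's actual caching code:
```
# Illustrative only: memoize the system user id per multisite database.
def system_user_id
  @system_user_ids ||= {}
  @system_user_ids[RailsMultisite::ConnectionManagement.current_db] ||=
    User.find_by(username_lower: "system")&.id
end
```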
This is a major change to draft internals. Previously there were quite a
few cases where the draft system would say "draft saved", when in fact
we just skipped saving.
This commit ensures the draft system deals with draft ownership handover in
a predictable way.
For example:
- Window 1 editing draft
- Window 2 editing same draft at the same time
Previously we would allow windows 1 and 2 to fight over the same draft, each
window overwriting it over and over.
This commit introduces an ownership concept where either window 1 or window 2
wins, and the user is prompted in the losing window to reload the screen to
correct the issue.
This also corrects edge cases where a user has multiple browser windows open,
posts in one window, and later posts in the second window. Previously drafts
would break in the second window; this is now handled correctly.
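A small, self-contained sketch of the ownership idea (not the real Draft API): each window holds an owner token, and a save from a non-owning window is rejected, which is what lets the losing window prompt the user to reload.
```
class DraftStore
  OwnershipError = Class.new(StandardError)

  def initialize
    @drafts = {}
    @owners = {}
  end

  # owner: an opaque token identifying the browser window doing the save
  def save(user_id, key, data, owner:)
    current = @owners[[user_id, key]]
    raise OwnershipError, "another window owns this draft" if current && current != owner

    @owners[[user_id, key]] = owner
    @drafts[[user_id, key]] = data
  end
end
```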
This ensures we only update last_posted_at, which is user facing, for posts
that are neither personal messages nor whispers.
We still update this date for secure categories, and we do not revert it for
deleted posts.
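A hedged sketch of the guard; the helper name is made up and the real check lives in the posting pipeline:
```
# Illustrative only: last_posted_at should move for regular, visible posts.
def bumps_last_posted_at?(post)
  return false if post.whisper?
  return false if post.topic.private_message?
  true
end
```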
* Require q param in /tags/filter/search route.
* If not provided, this route caused a 500 error when
DiscourseTagging.clean_tag was called, because .downcase was called
on the nil param.
* Now return a 400 error instead.
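A minimal sketch of the guard, assuming Rails' default handling of missing required params (ActionController::ParameterMissing maps to a 400); the actual controller code may differ:
```
def search
  params.require(:q) # raises ActionController::ParameterMissing -> 400 Bad Request

  tag = DiscourseTagging.clean_tag(params[:q])
  # ... rest of the search
end
```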
Adds the settings:
raw_email_max_length, raw_rejected_email_max_length, delete_rejected_email_after_days.
These settings control retention of the "raw" email logs.
raw_email_max_length ensures that if we get a huge incoming email, we truncate it, removing uploads from the raw log.
raw_rejected_email_max_length applies an even more aggressive truncation to rejected incoming mail.
delete_rejected_email_after_days controls how many days we keep rejected emails for (default 90).
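As a rough illustration of how the two length settings could be applied (the helper is made up; the real truncation happens in the email receiving/logging code):
```
# Illustrative only: pick a limit based on whether the email was rejected.
def truncated_raw(raw, rejected:)
  limit = rejected ? SiteSetting.raw_rejected_email_max_length : SiteSetting.raw_email_max_length
  raw && raw[0...limit]
end
```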
I was searching for the reason jobs_base_spec.rb was randomly failing. The reason was that after restorer_spec, the database is not restored to the default.
After restorer_spec, RailsMultisite::ConnectionManagement.all_dbs returns an array of ['default', 'second'],
so base job execution is evaluated twice:
```
dbs = RailsMultisite::ConnectionManagement.all_dbs
dbs.each do |db|
  execute(opts)
end
```
* FEATURE: Site setting/ui to allow users to set their primary group
* prettier and remove logic from account template
* added 1 to 43 to make web_hook_user_serializer_spec pass
Previously every hour we would run a full scan of the entire DB searching
for expired uploads that need to be moved to the tombstone folder.
This commit amends it so we only run the job twice per clean_orphan_uploads_grace_period_hours.
There is an upper bound of 7 days, so even if the grace period is set really
high the job will still run at least once a week.
By default we have a 48 hour grace period, so this amends the cleanup to run
daily instead of hourly, eliminating 23 of the 24 daily runs of this ultra
expensive query.
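The scheduling math described above can be illustrated with plain Ruby (the constant and method names are made up):
```
MAX_INTERVAL_HOURS = 7 * 24 # never wait longer than a week between runs

def cleanup_interval_hours(grace_period_hours)
  # run the job twice per grace period, capped at once a week
  [grace_period_hours / 2, MAX_INTERVAL_HOURS].min
end

cleanup_interval_hours(48)    # => 24 (daily with the default 48 hour grace period)
cleanup_interval_hours(1000)  # => 168 (capped, still at least weekly)
```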
The query to count how many new users there are since a given date
is expensive. It's the least personalized stat and the one we fall back to
last when no better number can be found for the target user.
We give up some accuracy so we can aggressively cache the user counts
that appear in this email.
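A hedged sketch of the trade-off: cache the count with a generous TTL instead of recomputing it per recipient (the cache key and expiry below are made up):
```
# Illustrative only; the real caching code may use different keys/expiry.
def cached_new_user_count(since)
  Rails.cache.fetch("new-user-count-#{since.to_date}", expires_in: 1.day) do
    User.where("created_at > ?", since).count
  end
end
```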
Break up a single large example into multiple examples, using fab! to maintain performance. On my machine this speeds up the test slightly and also makes it more readable.
Commit f69dacf979 introduced a bug to the system.
Restore works fine for multisite; however, it stopped working for non-multisite.
The reason was that the `establish_connection` method gained a check for whether the multisite instance is available:
```
def self.instance
  @instance
end

def self.establish_connection(opts)
  @instance.establish_connection(opts) if @instance
end
```
However, the reload method doesn't have that check:
```
def self.reload
  @instance = new(instance.config_filename)
end
```
To solve it, let's ensure we are in a multisite environment before calling reload.
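A minimal sketch of the guard at the call site, using the `instance` accessor shown above; the exact call site in the restorer may differ:
```
# Only reload the multisite configuration if a multisite instance exists.
if RailsMultisite::ConnectionManagement.instance
  RailsMultisite::ConnectionManagement.reload
end
```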
This fix ensures that searches that contain a null byte return a 400
error instead of causing a 500 error.
For some reason, under rspec we reach the raise statement inside the
`rescue_from ArgumentError` block, but outside of rspec the raise statement
is not executed, so a 500 is thrown instead of reaching the
`rescue_from Discourse::InvalidParameters` block in the application
controller.
This fix raises Discourse::InvalidParameters directly from the search
controller instead of relying on `PG::Connection.escape_string` to
raise the `ArgumentError`.
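A hedged sketch of that early check in the search controller; the action and parameter names are illustrative:
```
# Illustrative only: reject null bytes up front with a 400 instead of letting
# PG::Connection.escape_string blow up later.
def show
  raise Discourse::InvalidParameters.new(:q) if params[:q]&.include?("\0")
  # ... continue with the normal search path
end
```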
The payload when receiving a notification webhook is pointless without
knowing which user the notification is for. This fix adds user_id to
the notification serializer so that when you receive a notification
webhook you can properly identify that user.
See
https://meta.discourse.org/t/getting-the-target-user-for-notification-webhook-events/129052?u=blake
for more details.
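A hedged sketch of the serializer change; the class name and surrounding attributes are illustrative, only user_id is the point:
```
class NotificationSerializer < ApplicationSerializer
  attributes :id,
             :notification_type,
             :user_id # newly exposed so webhook consumers know the target user
end
```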