Previously, whispers were only available to staff members.
The config has been changed to allow configuring privileged groups with access to whispers. A post-migration was added to move the old setting into the new one.
I considered adding a boolean `whisperer` column to the user model, similar to `admin/moderator`, for performance reasons. In the end, I decided to keep looking up groups, since queries are only run for the current user and I didn't notice any N+1 queries.
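As a rough sketch of the group-based lookup described above (the setting and method names here are assumptions, not necessarily the actual implementation):

```ruby
# Sketch only: setting and method names are assumptions.
class User < ActiveRecord::Base
  # Whisper access is derived from the configured privileged groups rather
  # than a dedicated boolean column; the lookup only runs for the current user.
  def whisperer?
    allowed_group_ids = SiteSetting.whispers_allowed_groups_map
    allowed_group_ids.present? && groups.where(id: allowed_group_ids).exists?
  end
end
```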
This reverts commit 94c3bbc2d1.
At this point in time, we do not have enough data on whether
this centralisation is worth the trade-offs of coupling features into a
single channel.
Currently, when a user creates posts that are moderated (for whatever
reason), a popup is displayed saying the post needs approval, along with
the total number of the user’s pending posts. But then this piece of
information is lost, and there is nowhere for the user to see what their
pending posts are or how many there are.
This patch solves the issue by adding a new “Pending” section to the
user’s activity page when there are pending posts to display. When
there are none, the “Pending” section isn’t displayed at all.
In b8c8909a9d, we introduced a regression
where users may have had their `UserStat.first_unread_pm_at` set
incorrectly. This commit introduces a migration to reset `UserStat.first_unread_pm_at` back to
`User#created_at`.
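A minimal sketch of what such a reset migration could look like (the actual migration in the commit may differ):

```ruby
# Sketch only; column and table names follow the description above.
class ResetFirstUnreadPmAt < ActiveRecord::Migration[6.1]
  def up
    execute <<~SQL
      UPDATE user_stats
      SET first_unread_pm_at = users.created_at
      FROM users
      WHERE users.id = user_stats.user_id
    SQL
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```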
Follow-up to b8c8909a9d.
The inefficiency here is that we were previously fetching all the
records from `TopicAllowedUser` before filtering against a limited subset of
users based on `User#last_seen_at`.
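A rough illustration of the shape of the fix, narrowing down users by `User#last_seen_at` before touching `TopicAllowedUser` (the helper name and cutoff are assumptions):

```ruby
# Sketch only: filter the users first, then fetch just the relevant
# TopicAllowedUser rows, instead of loading them all up front.
def allowed_users_seen_since(topic_id, since)
  recent_user_ids = User.where("last_seen_at > ?", since).select(:id)
  TopicAllowedUser.where(topic_id: topic_id, user_id: recent_user_ids)
end
```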
When a post is created, the draft sequence is increased and then older
drafts are automatically removed by executing a raw SQL query. This skipped the
Draft model callbacks and did not update the user's draft count.
I also fixed another problem related to a raw SQL query in the `Draft.cleanup!`
method.
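One way to keep the count correct, sketched under the assumption that the deletion goes through the model so callbacks run (the actual fix may instead refresh the count after the raw query):

```ruby
# Sketch only: destroy superseded drafts through the model so callbacks
# (including the draft count update) run, rather than via a raw DELETE.
def remove_superseded_drafts(user, draft_key, new_sequence)
  Draft
    .where(user_id: user.id, draft_key: draft_key)
    .where("sequence < ?", new_sequence)
    .destroy_all
end
```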
This commit adds the number of drafts a user has next to the "Draft"
label in the user preferences menu and activity tab. The count is
updated via MessageBus when a draft is created or destroyed.
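As a sketch, the publish side could look roughly like this (the channel name and payload shape are assumptions):

```ruby
# Sketch only: published whenever a draft is created or destroyed, so the
# client can update the count next to the "Draft" label without a reload.
def publish_draft_count(user)
  MessageBus.publish(
    "/user-drafts/#{user.id}",
    { draft_count: Draft.where(user_id: user.id).count },
    user_ids: [user.id]
  )
end
```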
Over the years we have accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change:
- comments
- test descriptions
- other low risk areas
Previously, Jobs::EnqueueDigestEmails would enqueue a digest job for every user, even if there were no topics to send. The digest job would exit, no email would be sent, and last_emailed_at would not change. 30 minutes later, Jobs::EnqueueDigestEmails would run again and re-enqueue jobs for the same users.
120fa8ad introduced a temporary mitigation for this issue, by randomly selecting a subset of those users each time.
This commit adds a new `digest_attempted_at` column to the `user_stats` table. This column is updated every time a digest job completes for a user. Using this, we can avoid scheduling digest jobs for the same user every 30 minutes. This also removes the random user selection in 120fa8ad, and instead prioritizes users who had digests attempted the longest time ago.
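A sketch of how the new column can drive scheduling, prioritising the users whose digests were attempted longest ago (the query shape is illustrative, not the exact one used):

```ruby
# Sketch only: NULLS FIRST puts users who have never had a digest attempt
# at the front of the queue; everyone else is ordered oldest attempt first.
target_user_ids = UserStat
  .order("digest_attempted_at ASC NULLS FIRST")
  .limit(10_000)
  .pluck(:user_id)
```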
* PERF: Dematerialize topic_reply_count
It's only ever used for trust level promotions that run daily, or compared to 0. We don't need to track it on every post creation.
* UX: Add symbol in TL3 report if topic reply count is capped
* DEV: Drop user_stats.topic_reply_count column
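Since the value is only needed for the daily promotion check (or compared with 0), it can be derived on demand. A sketch of such a query (column names follow the usual posts/topics schema; this is not taken from the commit itself):

```ruby
# Sketch only: count distinct topics in which the user replied, i.e. posts
# after the first post in topics the user did not create.
def topic_reply_count(user_id)
  DB.query_single(<<~SQL, user_id: user_id).first
    SELECT COUNT(DISTINCT posts.topic_id)
    FROM posts
    JOIN topics ON topics.id = posts.topic_id
    WHERE posts.user_id = :user_id
      AND posts.post_number > 1
      AND topics.user_id <> :user_id
  SQL
end
```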
Note: to avoid race conditions we are setting last_unread to 10 minutes ago
if there is nothing unread.
This is safer in the case of in-progress transactions;
we don't want to lose unread state for any window of time.
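In other words, something along these lines (names illustrative):

```ruby
# Sketch only: when nothing is unread, back-date the marker by 10 minutes
# so rows committed by still-open transactions are not silently skipped.
def new_first_unread_at(newest_unread_created_at)
  newest_unread_created_at || 10.minutes.ago
end
```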
This optimisation avoids large scans joining the topics table with the
topic_users table.
Previously, when a user carried a lot of read state we would have to join
the entire read state with the topics table. This operation would slow down
the home page and every topic page. The more read state you accumulated, the
larger the impact.
The optimisation helps people who clean up their unread; however, if you carry
unread from years ago it will have only minimal impact.
UserStat has some special logic to keep adding time read if repeat calls
are made at intervals of less than 100 seconds. This is called regularly
when we update read timings on a topic.
We only need to cache this key in redis for 100 seconds; however, previously
we would keep it forever, one key per user. This has the potential to bloat
redis with a very large number of keys for users who are no longer active.
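A minimal sketch of the bounded-TTL version (the key name is illustrative):

```ruby
# Sketch only: cache the marker with a 100 second TTL so the key expires on
# its own instead of living in redis forever, one key per user.
MAX_TRACKED_SECS = 100

def cache_time_read_marker(user_id, now = Time.zone.now)
  Discourse.redis.setex("user-read-marker-#{user_id}", MAX_TRACKED_SECS, now.to_i)
end
```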
Introduce new patterns for direct SQL that are safe and fast.
MiniSql is not prone to the memory bloat that can happen with direct PG usage.
It also has an extremely fast materializer and a very convenient API:
- DB.exec(sql, *params) => runs sql, returns the row count
- DB.query(sql, *params) => runs sql, returns usable objects (not hashes)
- DB.query_hash(sql, *params) => runs sql, returns an array of hashes
- DB.query_single(sql, *params) => runs sql, returns a flat one-dimensional array
- DB.build(sql) => returns a sql builder
See more at: https://github.com/discourse/mini_sql
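For example (table and column names here are just for illustration):

```ruby
# Illustrative usage of the API above.
DB.exec("UPDATE user_stats SET days_visited = days_visited + 1 WHERE user_id = ?", 1)

row = DB.query("SELECT id, username FROM users WHERE id = ?", 1).first
row&.username # materialized object with accessors, not a hash

DB.query_hash("SELECT id, username FROM users LIMIT 2")
# => [{ "id" => 1, "username" => "sam" }, { "id" => 2, "username" => "jane" }]

DB.query_single("SELECT id FROM users ORDER BY id LIMIT 3")
# => [1, 2, 3]

builder = DB.build("SELECT id FROM topics /*where*/ /*limit*/")
builder.where("archetype = ?", "regular").limit(10)
builder.query_single
```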
Figuring out what unread topics a user has is a very expensive
operation over time.
Users can easily accumulate tens of thousands of tracking state rows
(one for every topic they have ever visited).
When figuring out what a user has that is unread we need to join
the tracking state records to the topic table. This can very quickly
lead to cases where you need to scan through the entire topic table.
This commit optimises it so we always keep track of the "first" date at which
a user has unread topics. Then we can easily filter all earlier
topics out of the join.
We use PG functions, instead of nested queries, here to assist the
planner.
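Conceptually, along the lines of this sketch (function, column, and table names are illustrative rather than the exact implementation):

```ruby
# Sketch only. A SQL function (rather than a nested query) exposes the
# user's "first unread" date so the planner can prune old topics up front.
DB.exec(<<~SQL)
  CREATE OR REPLACE FUNCTION first_unread_topic_date(uid int)
  RETURNS timestamp LANGUAGE SQL STABLE AS $$
    SELECT first_unread_at FROM user_stats WHERE user_id = uid
  $$
SQL

# Only topics bumped since that date can possibly be unread, so the join
# no longer needs to scan the whole topics table.
def unread_topic_ids(user_id)
  DB.query_single(<<~SQL, user_id: user_id)
    SELECT t.id
    FROM topics t
    JOIN topic_users tu ON tu.topic_id = t.id AND tu.user_id = :user_id
    WHERE t.updated_at >= first_unread_topic_date(:user_id)
      AND tu.last_read_post_number < t.highest_post_number
  SQL
end
```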
* cut down on storage of the word "Topic", 3 times per row (in 2 indexes)
* only store one view per user per topic
* only store one view per ip per topic
Introduced badge triggers, and introduced the concept of a badge that is granted due to a post but keeps the post hidden.
Delta badge grants happen once a minute, backed by redis.