This commit updates all Sidekiq signal handling event logs to go through
Unicorn's logger instead of logging to STDOUT. Going through a proper logger
means the messages are emitted in whatever format the logger has configured,
so we get proper timestamps on the log messages.
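As a rough sketch of the pattern (the message text and call site are placeholders, not the actual code), assuming this runs inside a Unicorn hook where `server` is in scope:
```ruby
# before: a bare STDOUT line with no timestamp or severity
STDOUT.puts "Issuing TERM to Sidekiq"

# after: goes through Unicorn's logger and picks up its configured format, e.g.
# I, [2024-05-01T10:00:00.000000 #123]  INFO -- : Issuing TERM to Sidekiq
server.logger.info "Issuing TERM to Sidekiq"
```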
Delay rendering sidebar sections after sidebar is shown
Showing the popup takes about 100ms, and rendering each section
can take up to an additional 200ms, which puts the total just
over 300ms. If we cheat by rendering the popup first and then
the sections in the next frame, it improves our paint time.
Introduce DeferredRender to encapsulate 'paint later'
This uses a new nav style with the hierarchy:
```
Breadcrumbs
|- Title
|- Description
|- Third-Level Navigation
```
The navigation bar uses transparent, red-underlined
buttons similar to those on the user activity page.
Over time all admin pages will use this style, but this starts
with the new plugin show page.
---------
Co-authored-by: Ella <ella.estigoy@gmail.com>
This commit updates various spots in `config/unicorn.conf.rb` that were
doing `STDERR.puts` to instead use either `server.logger` (Unicorn's logger)
or `Rails.logger` (Rails' logger). The reason for this is that `STDERR.puts`
doesn't format the logs properly, which is a problem especially when custom
loggers with structured formatting are enabled.
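A minimal sketch of the same pattern for the `STDERR.puts` call sites (the messages here are placeholders), assuming the surrounding Unicorn hook yields a `server` object:
```ruby
# before: unformatted output on STDERR
STDERR.puts "Something went wrong while reloading Sidekiq"

# after: use Unicorn's logger inside Unicorn hooks...
server.logger.error "Something went wrong while reloading Sidekiq"

# ...or the Rails logger where Rails is already booted, so the message
# respects any custom/structured log formatting that is configured
Rails.logger.error "Something went wrong while reloading Sidekiq"
```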
Was "removing" (rather not re-applying) the `[spoiler]` BBCode because we were testing the **whole** class of the `span`/`div` was `spoiled` but we added another class and thus broke this functionnality.
In order to fix this issue, the test to determine whether a `span`/`div` is a spoiler, now uses a regular expression to check whether the `class` **contains** the word `spoiled`.
Reference - https://meta.discourse.org/t/quoting-spoiler-text-doesnt-include-spoiler-tags-in-the-quote/170145
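The actual check lives in the spoiler plugin's quote/markdown conversion code, but the idea is the same as this Ruby illustration (the values are made up):
```ruby
element_class = "spoiled another-class" # class attribute once extra classes were added

# old check: only a spoiler when the whole class attribute equals "spoiled"
element_class == "spoiled"           # => false, so the [spoiler] tag was dropped

# new check: a spoiler whenever the class list contains the word "spoiled"
element_class.match?(/\bspoiled\b/)  # => true
```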
We need to register a waiter so that `settled()` will wait for `runAfterFramePaint()` callbacks to be run before proceeding.
Re-lands 63b7b598cb, but wrapped with `isTesting()` to avoid production errors.
# Context
We currently have a tracked value of `topic` in the header service that we utilize across the app for determining the presence of a topic.
A simple example is: If you are in a topic, and scroll down the page, we need to communicate to the header that a topic is present and we change the styling of the header.
The issue with this logic is that when entering a topic (while you are at the top of the page), we **haven't** set the topic on the header service yet. We only set the topic once you have scrolled down the page (set by `app/components/discourse-topic.js`).
This is unhelpful behavior when you are utilizing a plugin outlet that receives the `topic` from the header, since the `topic` won't be present until you scroll down the page:
17add599e3/app/assets/javascripts/discourse/app/components/header/topic/info.gjs (L85)
# Changes
This PR adds a tracked boolean `inTopic` value to the header service. This lets the app know
> Yes, we are scrolled within a topic
It also sets the tracked `topic` value immediately when you are loading a topic, instead of waiting for a scroll, so the necessary data is populated for the plugin outlets on page load.
Adds a placeholder image + CTA in chat, for empty channel and DM lists.
On desktop in drawer mode, we split chat into tabs (like on mobile).
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
Co-authored-by: David Battersby <info@davidbattersby.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Previously, avatars would be 'sticky' when:
1. The post was longer than the viewport
OR
2. You were scrolling up
The difference in behavior based on scroll direction doesn't 'feel' quite right. This commit makes the behavior consistent, so sticky avatar logic is applied to all posts regardless of scroll direction.
For some reason, despite the iframe also including a
```
<meta name="robots" content="noindex">
```
Google is still indexing the embed/comment URLs. This causes links like http://\<site>/embed/comments\?topic_id\=6366 to be indexed instead of the topic.
This commit adds it explicitly in the header.
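One way to do that, sketched here under the assumption that the embed/comment responses go through a Rails controller (the controller and filter names are illustrative), is to send the same directive as an `X-Robots-Tag` response header:
```ruby
class EmbedController < ApplicationController
  before_action :add_noindex_header

  private

  def add_noindex_header
    # Same directive as <meta name="robots" content="noindex">,
    # but delivered at the HTTP level so crawlers see it even if
    # they never parse the iframe's markup.
    response.headers["X-Robots-Tag"] = "noindex"
  end
end
```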
Introduced back in 2022 in
e3d495850d,
our new, more specific message-id format for inbound and
outbound emails has now been in use for a very long time,
so we can remove support for the old formats:
- `topic/:topic_id/:post_id.:random@:host`
- `topic/:topic_id@:host`
- `topic/:topic_id.:random@:host`
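For illustration only, the removed formats correspond roughly to patterns like the following (the exact regexes in the codebase may differ):
```ruby
LEGACY_MESSAGE_ID_PATTERNS = [
  %r{\Atopic/(\d+)/(\d+)\.(\w+)@(.+)\z}, # topic/:topic_id/:post_id.:random@:host
  %r{\Atopic/(\d+)@(.+)\z},              # topic/:topic_id@:host
  %r{\Atopic/(\d+)\.(\w+)@(.+)\z},       # topic/:topic_id.:random@:host
]

LEGACY_MESSAGE_ID_PATTERNS.any? { |re| "topic/123/456.abcd@discourse.example".match?(re) } # => true
```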
Service objects were moved to core in this PR: https://github.com/discourse/discourse/pull/26506
However, `ServiceRunner` should be moved as well, mostly so CI can run effortlessly without loading plugins.
We are seeing a weird name resolution error on GitHub Actions with the
following backtrace:
```
Failure/Error:
visit File.join(
GlobalSetting.relative_url_root || "",
"/session/#{user.encoded_username}/become.json?redirect=false",
)
Socket::ResolutionError:
getaddrinfo: Temporary failure in name resolution
```
Switch to use `127.0.0.1` instead of forcing a name resolution.
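A minimal sketch of the workaround, assuming Capybara drives these system specs (the exact configuration location is an assumption):
```ruby
# Point the test server at the loopback address so visiting pages in
# system specs never depends on hostname resolution.
Capybara.server_host = "127.0.0.1"
Capybara.app_host = "http://127.0.0.1"
```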
In 958437e7dd we ensured that email summaries are properly sent based on `digest_attempted_at` for people who barely/never visit the forum.
This fixed the "frequency" of the email summaries but introduced a bug where, for some users, the digest would be sent even though there wasn't anything new since their last summary.
The logic we use to compute the threshold date for the content to be included in the digest was
```ruby
@since = opts[:since] || user.last_seen_at || user.user_stat&.digest_attempted_at || 1.month.ago
```
It was working as expected for users who have never been seen, but for users who have connected at least once, we would use their `last_seen_at` date as the "threshold date" for the content to be sent in a summary 😬
This fix changes the logic to use the most recent date amongst `last_seen_at`, `digest_attempted_at`, and `1.month.ago`, so it correctly handles cases where
- user has never been seen nor emailed a summary
- user has been seen recently but hasn't been sent a summary in a while
- user has been sent a summary recently but hasn't been seen in a while.
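A sketch of the corrected computation, reusing the names from the snippet above (the real code may differ slightly in shape):
```ruby
@since = opts[:since] || [
  user.last_seen_at,
  user.user_stat&.digest_attempted_at,
  1.month.ago,
].compact.max # the most recent of the candidate dates
```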
This commit moves the logic for crawler rate limits out of the application controller and into the request tracker middleware. The reason for this move is to apply rate limits to all crawler requests instead of just the requests that make it to the application controller. For performance reasons, some requests are served early from the middleware stack without reaching the Rails app (e.g. `AnonymousCache`), which results in crawlers getting 200 responses even though they've reached their limits and should be getting 429 responses.
Internal topic: t/128810.
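The shape of the change, sketched as a generic Rack middleware (the class and helper names are assumptions, not the actual Discourse implementation):
```ruby
class CrawlerRateLimiter
  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)

    # Check the limit before any early-return caching layer can serve the
    # request, so crawlers over their quota get a 429 instead of a cached 200.
    if crawler?(request) && rate_limit_exceeded?(request)
      return [429, { "Retry-After" => "60" }, ["Too Many Requests"]]
    end

    @app.call(env)
  end

  private

  def crawler?(request)
    # placeholder: real detection inspects the User-Agent and more
    request.user_agent.to_s.match?(/bot|crawler|spider/i)
  end

  def rate_limit_exceeded?(request)
    # placeholder: real implementation consults a shared rate limiter
    false
  end
end
```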
This commit splits the updating of `TopicUser#last_read_post_number` in
`TopicUser.ensure_consistency!` out into a new
`TopicUser.update_last_read_post_number` method, which
`PostTiming.pretend_read` now calls instead. Previously,
`PostTiming.pretend_read` called `TopicUser.ensure_consistency!`, which in
turn calls `TopicUser.update_post_action_cache`. That call is unnecessary
for `PostTiming.pretend_read`, since it does not affect the
`TopicUser#liked` or `TopicUser#bookmarked` columns which
`TopicUser.update_post_action_cache` updates. As the query in
`TopicUser.update_post_action_cache` can be expensive, we should avoid
calling it when it isn't necessary. One such scenario is when we are
closing a topic.
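A rough sketch of the split (the method names come from the commit; bodies and signatures are assumptions, with the real SQL elided):
```ruby
class TopicUser < ActiveRecord::Base
  def self.ensure_consistency!(topic_id = nil)
    # still performs the full, more expensive sync
    update_post_action_cache(topic_id: topic_id)
    update_last_read_post_number(topic_id: topic_id)
  end

  def self.update_last_read_post_number(topic_id: nil)
    # SQL that syncs TopicUser#last_read_post_number from post timings,
    # previously inlined in ensure_consistency!
  end
end

# PostTiming.pretend_read only needs the cheaper half:
TopicUser.update_last_read_post_number(topic_id: topic.id)
```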