The `EXECUTE FUNCTION` syntax for `CREATE TRIGGER` statements was introduced in PostgreSQL 11. We need to replace `EXECUTE FUNCTION` with `EXECUTE PROCEDURE` in order to be able to restore backups created with PG12 on PG10.
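As a minimal illustration (not the actual backup/restore code), the substitution amounts to:
```ruby
# PG11+ emits "EXECUTE FUNCTION" in dumps, but PG10 only accepts
# "EXECUTE PROCEDURE", so the newer syntax is rewritten before restoring.
sql = "CREATE TRIGGER t AFTER INSERT ON posts FOR EACH ROW EXECUTE FUNCTION update_counts()"
sql_for_pg10 = sql.gsub("EXECUTE FUNCTION", "EXECUTE PROCEDURE")
```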
Fixed bugs, added specs, extracted the upload downsizing code to a class, added support for non-S3 setups, changed it so that images aren't downloaded twice.
This code has been tested on production and successfully resized ~180k uploads.
Includes:
* DEV: Extract upload downsizing logic
* DEV: Add support for non-S3 uploads
* DEV: Process only images uploaded by users
* FIX: Incorrect usage of `count` and `exist?` typo
* DEV: Spec S3 image downsizing
* DEV: Avoid downloading images twice
* DEV: Update filesizes earlier in the process
* DEV: Return false on invalid upload
* FIX: Download images that are currently above the limit (if the image size limit is decreased, there was previously no way to resize images that now fall outside the allowed size range)
* Update script/downsize_uploads.rb (Co-authored-by: Régis Hanol <regis@hanol.fr>)
This came in as a request on meta to include the raw field in the post
webhook serializer.
https://meta.discourse.org/t/-/49045/55?u=blake
Including this field can prevent needing to make a 2nd API request to
get the raw field of a post.
It would be handy down the road if we updated the webhook ui to specify
fields or arguments that you wanted to be included in the serialized
data, but most requests I've seen to update the serializers have been
valid requests that are good to add anyways, so I don't think we have
reached that point yet.
FIX: prevent re-flagging when we have reviewed flags before
Fixes an edge case where a review can be reflagged when:
1. User flags as inappropriate.
2. Moderator rejects the flag.
3. Another user re-flags the post as spam.
Before, anyone was able to re-flag as inappropriate despite it being flagged
previously. With this, users are unable to re-flag for the same reason
regardless of reviewable status.
This test lasted about 7 years before becoming flaky.
Today ... for whatever reason the test suite created 100 users
prior to running this spec, so the new user got user id 101.
And... lo and behold:
```
1) PostSerializer a hidden post with add_raw enabled a hidden post shows the raw post only if authorized to see it
Failure/Error: expect(serialized_post_for_user(Fabricate(:user))[:raw]).to eq(nil)
expected: nil
got: "Raw contents of the post."
(compared using ==)
# ./spec/serializers/post_serializer_spec.rb:127:in `block (4 levels) in <main>'
```
We removed pry-nav a while back because it is not up to date with pry, but it is super useful. Luckily pry-byebug is here to save us all from Satan's power.
To get this to work, add the following to your $HOME/.pryrc file:
```
if defined?(PryByebug)
  Pry.commands.alias_command 'c', 'continue'
  Pry.commands.alias_command 's', 'step'
  Pry.commands.alias_command 'n', 'next'
  Pry.commands.alias_command 'f', 'finish'
end

Pry::Commands.command /^$/, "repeat last command" do
  pry_instance.run_command Pry.history.to_a.last
end
```
The require-ing of pry, pry-rails, and pry-byebug in specs is controlled by the IMPROVED_SPEC_DEBUGGING flag (disabled by default).
Adds new hidden site settings for rate limits:
30 for logged in users, 15 for anon
Adds an anon cache for searching, caches results of searches for 1 minute
Allow limiting the number of migrations to do at once, both to do migrations that
have impact limited to multiple off-peak usage hours to reduce user impact from
a migration, and to allow tests that do only a very small number for test
purposes. ("Give me a ping, Vasili. One ping only, please.")
Looks like some html elements like `aside` and `section` will throw an error
when checking if they are inline or not. The commit simply handles
```
Job exception: undefined method `inline?' for nil:NilClass
```
and adds a test for it.
Moves the most important checks into a linter. It gets executed by Lefthook as well as the docker rake task and Github actions. Doing those checks in rspec takes too long and it produces errors when the discourse:test Docker image contains old, invalid locale files.
Discourse needs a bunch of data preloaded before it can start up.
Normally we throw blobs of this into the HTML document that is requested
but in some cases that's awkward to retrieve.
For example with Ember CLI you have a separate javascript application
that needs to make its own HTML.
This API endpoint returns a JSON object with all the data Discourse needs to
bootstrap and start up.
In some restricted setups all JS payloads need tight control.
This setting bans admins from making changes to JS on the site and
requires all themes be whitelisted to be used.
There are edge cases we still need to work through in this mode,
hence it is still experimental and not supported in production.
Use an example like this to enable:
`DISCOURSE_WHITELISTED_THEME_REPOS="https://repo.com/repo.git,https://repo.com/repo2.git"`
By default this feature is not enabled and no changes are made.
One exception: the default theme id was missing a security check;
this was added for correctness.
If `default email digest frequency` was set to "Never", users would get
a `digest_after_minutes` set to `nil` which triggered this error
in the logs if/when the site eventually changed that setting and
enabled digests:
```
NoMethodError (undefined method `>=' for nil:NilClass)
/var/www/discourse/app/mailers/user_notifications.rb:227:in `digest'
```
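A minimal sketch of the kind of nil guard that avoids this comparison, assuming the `digest_after_minutes` user option; the actual fix in `user_notifications.rb` may differ:
```ruby
# Hedged sketch: treat a missing or zero delay as "digests disabled"
# instead of comparing nil with >=.
def digest_enabled_for?(user)
  delay = user.user_option&.digest_after_minutes
  !delay.nil? && delay > 0
end
```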
* FEATURE: notify admins about old credentials
Security and API keys should be renewed periodically.
This additional notification should help admins keep their Discourse safe and secure.
Previously the pull hotlinked images job was skipped after system edits. This ensured that we never had an infinite loop of system-edit/pull-hotlinked/system-edit/pull-hotlinked etc.
A side effect was that edits made by system for any other reason (e.g. API, removing full quotes) would prevent pulling hotlinked images. This commit removes the system edit check, and replaces it with another method to avoid an infinite job scheduling loop.
Hostname can vary per-site on a multisite cluster, so this change requires converting the compiler_version from a constant into a class method which is evaluated at runtime. The value is stored in the theme DistributedCache, so performance impact should be negligible.
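A hedged sketch of that pattern, assuming Discourse's `DistributedCache` and `Discourse.current_hostname` helpers; the cache key and hashed input are illustrative only:
```ruby
require "digest"

# Illustrative only: compute the value per hostname at runtime and memoize it
# in a DistributedCache so the runtime lookup stays cheap.
def self.compiler_version
  @compiler_versions ||= DistributedCache.new("compiler_versions")
  @compiler_versions[Discourse.current_hostname] ||=
    Digest::SHA1.hexdigest(Discourse.current_hostname) # real inputs include more than the hostname
end
```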
We previously did not account for completely untagged topics when
looking at muted tags, this caused new/unread counts to be off if
1. You had muted tags
2. You had an unread/new topic
3. This topic had no tags
This feature allows certain plugins to output tag information
to topic tracking state, enabling per-tag stats that can be
used by sidebars and other plugins that need them.
Not enabled by default because this would add cost to a critical
query.
* DEV: new S3 backup layout
Currently, with $S3_BACKUP_BUCKET of "bucket/backups", multisite backups
end up in "bucket/backups/backups/dbname/" and single-site will be in
"bucket/backups/".
Both _should_ be in "bucket/backups/dbname/"
- remove MULTISITE_PREFIX,
- always include dbname,
- method to move to the new prefix
- job to call the method
* SPEC: add tests for `VacateLegacyPrefixBackups` onceoff job.
Co-authored-by: Vinoth Kannan <vinothkannan@vinkas.com>
When running jobs in tests, we use `Jobs.run_immediately!`. This means that jobs are run synchronously when they are enqueued. Jobs sometimes enqueue other jobs, which are also executed synchronously. This means that the outermost job will block until the inner jobs have finished executing. In some cases (e.g. process_post with hotlinked images) this can lead to a deadlock.
This commit changes the behavior slightly. Now we will never run jobs inside other jobs. Instead, we will queue them up and run them sequentially in the order they were enqueued. As a whole, they are still executed synchronously. Consider this example:
```ruby
class Jobs::InnerJob < Jobs::Base
  def execute(args)
    puts "Running inner job"
  end
end

class Jobs::OuterJob < Jobs::Base
  def execute(args)
    puts "Starting outer job"
    Jobs.enqueue(:inner_job)
    puts "Finished outer job"
  end
end

Jobs.enqueue(:outer_job)
puts "All jobs complete"
```
The old behavior would result in:
```
Starting outer job
Running inner job
Finished outer job
All jobs complete
```
The new behavior will result in:
```
Starting outer job
Finished outer job
Running inner job
All jobs complete
```
Fixes a regression in
e8fb9d4066
which caused a bug where you couldn't send a message to a group whose
name contained an uppercase letter. Added a test case for this.
Bug report: https://meta.discourse.org/t/-/152999
* FIX: add X-Robots-Tag header for check_xhr-covered GET actions, too
see https://meta.discourse.org/t/missing-x-robots-tag/152593/3 for context
* test: a spec making sure X-Robots-Tag header is present when needed
/groups path responds to anonymous requests and doesn't skip `check_xhr` method, so we can use it here.
It might happen that some User records have no associated primary email,
in which case we don't ever want to send them a digest.
Also added a new "user_email_no_email" skipped email log to ensure these cases
are properly handled and surfaced.
* FEATURE: notify admins about old credentials
Security and API keys should be renewed periodically.
This additional notification should help admins keep their Discourse safe and secure.
Previously we had a partial fix in place where non-human users
were not allowed draft sequences; this left edge cases where non-human
users asked for drafts yet had none.
For example, system could already have a few drafts in place.
This also removes an extensibility point we added that is not in use.
This reverts commit 20780a1eee.
* SECURITY: re-adds accidentally reverted commit:
03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
instead of the 03d26cd6 parent (which contains security fixes)
* We now have a site setting "topic_excerpt_maxlength" that is used when the OP is created or revised to generate a topic excerpt.
* However, posts created before this setting was introduced cannot benefit from this change unless they are revised, and if the topic excerpt length setting is changed that situation is also not covered.
* This PR makes a change to rebake! to update the topic excerpt IF the post is the OP.
If a user is created with an id of 999, an `upload.user_id ==
user_avatar.user_id` check will return true. This fix increases the id of the
upload to something that we will not hit in the foreseeable future.
Adds a new topic_excerpt_maxlength site setting.
* When topic excerpt is requested for a post, use the new topic_excerpt_maxlength site setting to limit the size of the excerpt
* Remove code for getting/setting Post.excerpt_size as it is not used anywhere
* FIX: Emit web hooks for flags
* FEATURE: Remove 'flag' web hook in favor of 'reviewable' web hook
* FEATURE: Remove 'queued post' web hook in favor of 'reviewable' web hook
* FIX: Do not set a default value for web hooks with no events
In some cases the hostname of a URL can match between Discourse forums (when they host S3 files on the same bucket) while the S3 bucket path does not, e.g. https://testbucket.somesite.com/testpath/some/file/url.png vs https://testbucket.somesite.com/prodpath/some/file/url.png. has_been_uploaded? was returning true for the second URL, even though it may have been uploaded on a different Discourse forum.
This is a very rare case but must be accounted for, because it impacts UrlHelper.is_local, which mistakenly thinks the file has already been downloaded and thus allows the URL to be cooked, when we want to return the full URL to be downloaded using PullHotlinkedImages.
* FIX: randomize file name when created from fixtures
When a temporary file is created from fixtures it should have a unique name.
This prevents collisions when specs are evaluated in parallel.
* FIX: use /tmp/pid folder to keep fixture files
Signed S3 URLs are valid for 15 seconds, so we can safely allow the browser to cache them for 10 seconds. This should help with large numbers of requests when composing a post with many images.
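As an illustration only (the header value and where it is set are assumptions, not the exact change):
```ruby
# Hedged sketch, inside a hypothetical controller action: allow brief private
# caching, well inside the 15-second validity window of the signed URL.
def redirect_to_signed_url(signed_url)
  response.headers["Cache-Control"] = "private, max-age=10"
  redirect_to signed_url
end
```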
PG already handles English stop words. The list in cppjieba is
bigger than the list PG uses, which in turn causes confusion because
words such as "volume" are stripped by the cppjieba stop word list.
We will follow up with another commit here to apply the Chinese
stop words, but for now, to eliminate the confusion, we are
skipping the stop word list when the dictionary in PG is
English.
* DEV: Add framework for filtered plugin registers
Plugins often need to add values to a list, and we need to filter those lists at runtime to ignore values from disabled plugins. This commit provides a re-usable way to do that, which should make it easier to add new registers in future, and also reduce repeated code.
Follow-up commits will migrate existing registers to use this new system
* DEV: Migrate user and group custom field APIs to plugin registry
This gives us a consistent system for checking plugin enabled state, so we are repeating less logic. API changes are backwards compatible
* DEV: Standardize table sorting verbiage
This commit creates a common component that tables can use to make their
headers sortable. This commit also standardizes on using `desc` as the
default and passing in the `asc=true` flag to adjust the sorting
direction.
* Add deprecation warnings
Adds deprecation warnings if using previous params and maintains
backwards compatibility. Set the default sort value for group members to
be asc.
* switch group requests to use common table-header-toggle
* update fixture
* FIX: randomly falling user_spec
When we evaluate `update_last_seen!` we rely on Redis to not run that code too often
https://github.com/discourse/discourse/blob/master/app/models/user.rb#L753
The problem is that not all specs which run `update_last_seen!` clean up after themselves,
for example the specs in this block: https://github.com/discourse/discourse/blob/master/spec/models/user_spec.rb#L901
It can be replicated by running this a few times:
`bundle exec rspec ./spec/models/user_spec.rb -e "should not update the first seen value if it doesn't exist" -e "should have 0 for days_visited"`
We should delete the Redis key after each spec which evaluates `update_last_seen!`
* PERF: Dematerialize topic_reply_count
It's only ever used for trust level promotions that run daily, or compared to 0. We don't need to track it on every post creation.
* UX: Add symbol in TL3 report if topic reply count is capped
* DEV: Drop user_stats.topic_reply_count column
Use a helper method to simplify creating a new register. Previously this would require creating lots of different methods manually, and adding every register to the clear/reset functions
Previously the code was very race-condition prone, leading to
odd failures in production.
It was re-written in raw SQL to avoid conditions where rows
conflict on inserts.
There is no clean way in ActiveRecord to do:
insert, on conflict do nothing, and return the existing id.
This also increases test coverage; we were previously not testing
the code responsible for crawling external sites directly.
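A hedged sketch of the "insert, on conflict do nothing, return existing id" pattern mentioned above, assuming a MiniSql-style `DB.query_single` helper; the table and column names are hypothetical and the real query handles more edge cases:
```ruby
# Illustrative only: insert, do nothing on conflict, and still return an id.
sql = <<~SQL
  WITH ins AS (
    INSERT INTO links (url)
    VALUES (:url)
    ON CONFLICT (url) DO NOTHING
    RETURNING id
  )
  SELECT id FROM ins
  UNION ALL
  SELECT id FROM links WHERE url = :url
  LIMIT 1
SQL

link_id = DB.query_single(sql, url: "https://example.com").first
```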
This commit prevents a 500 error from occurring if someone is trying to
set up their Discourse instance as an SSO provider and they don't pass in
a `return_sso_url` in their payload.
We were getting errors like this in Reviewables in some cases:
```
ActiveRecord::StatementInvalid (PG::AmbiguousColumn: ERROR: column reference "category_id" is ambiguous
LINE 4: ...TRUE) OR (reviewable_by_group_id IN (NULL))) AND (category_i...
```
The problem that was making everything go boom is that plugins can add their own custom filters for Reviewables. If one is doing an INNER JOIN on topics, which has its own category_id column, we would get the above AmbiguousColumn error. The solution here is to just make all references to the reviewable columns in the list_for and viewable_by code prefixed by the table name e.g. reviewables.category_id.
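For example, a hedged sketch of the qualified form (illustrative, not the exact code):
```ruby
# Qualifying the column with its table name keeps the reference unambiguous
# even when a plugin's custom filter joins another table with a category_id.
def filter_by_categories(relation, category_ids)
  relation.where("reviewables.category_id IN (?)", category_ids)
end
```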
* FEATURE: Support for App Shortcuts Menu
This adds a list of shortcuts to an installed Discourse instance.
It can be accessed with a right click or long press on the app icon.
See https://github.com/MicrosoftEdge/MSEdgeExplainers/blob/master/Shortcuts/explainer.md
List of possible follow ups include:
- Making it admin customizable
- Making it user customizable
- Using SVG icons from the site icon sprite
- Picking an accent color for icons
* FIX: Add type to shortcut menu icons
This refactors default_current_user_provider in a few ways:
- Introduce a generic `api_parameter_allowed?` method which checks for whitelisted routes/formats
- Only read the api_key parameter on allowed routes. It is now completely ignored on other routes (previously it would raise a 403)
- Start reading user_api_key parameter on allowed routes
- Refactor tests as end-to-end integration tests
A plugin API for PARAMETER_API_PATTERNS will be added soon
Previously we only changed the sequence on ownership change; this
caused a race condition between tabs where a user could type for a
long time without being warned of an out-of-date draft.
This is a radical change and we should watch it closely.
Code was already in place to track sequence on the client so no
changes are needed there.
For pages that do not specify canonical URL we will default to `https://SITENAME/PATH`.
This ensures that if a URL is crawled on the CDN the search ranking will transfer to the main site.
Additionally we whitelist the `?page` param
* DEV: api documentation updates
- Created a script to convert json responses to rswag
- Documented several api endpoints
- Switched rswag to use header based auth
* Update script, fix some schema mismatches
Google insists on indexing pages so it can figure out if they
can be removed from the index.
see: https://support.google.com/webmasters/answer/6332384?hl=en
This change ensures that we have special behavior for Googlebot
where we allow indexing, but block the actual indexing via
X-Robots-Tag
Expand SiteSetting.allow_index_in_robots_txt so it also adds a
noindex header if set to false.
This makes sure that nothing is indexed even if it somehow reaches
Google.
1. Total 6 attempts per day per user
2. Total of 5 per unique email/login that is not found per hour
3. If an admin blocks an IP that IP can not request a reset
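A hedged sketch of those limits using Discourse's RateLimiter; the key names, variables, and call sites are illustrative, not the actual implementation:
```ruby
# Hedged sketch: 6 attempts per user per day, 5 per unique login per hour.
RateLimiter.new(user, "forgot-password-day", 6, 1.day.to_i).performed!
RateLimiter.new(nil, "forgot-password-login-#{login.downcase}", 5, 1.hour.to_i).performed!
```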
There were two constants here, `INLINE_ONEBOX_LOADING_CSS_CLASS` and
`INLINE_ONEBOX_CSS_CLASS` that were both longer than the strings they
were DRYing up: `inline-onebox-loading` and `inline-onebox`
I normally appreciate constants, but in this case it meant that we had
a lot of JS imports resulting in many more lines of code (and CPU cycles
spent figuring them out.)
It also meant we had an `.erb` file and had to invoke Ruby to create the
JS file, which meant the app was harder to port to Ember CLI.
I removed the constants. It's less DRY but faster and simpler, and
arguably the loss of DRYness is not significant as you can still search
for the `inline-onebox-loading` and `inline-onebox` strings easily if
you are refactoring.
We now show an options gear icon next to the bookmark name.
When expanded we show the "delete bookmark when reminder sent" option. The value of this checkbox is saved in local storage for the user.
If this is ticked, when a reminder is sent for the bookmark the bookmark itself is deleted. This is so people can use the reminder functionality by itself.
Also remove the blue alert reminder section from the "Edit Bookmark" modal as it just added clutter, because the user can already see they had a reminder set.
Adds a default false boolean column `delete_when_reminder_sent` to bookmarks.
Locale files get precompiled after deployment and they contained translations from the `default_locale`. That's especially bad in multisites, because the initial `default_locale` is `en_US`. Sites where the `default_locale` isn't `en_US` could see missing translations. The same thing could happen when users are allowed to choose a different locale.
This change simplifies the logic by not using the `default_locale` in the locale chain. It always falls back to `en` in case of missing translations.
The failover spec is very fragile and tests specific implementation
vs actual behavior
We rely on a different script during the build process to test
failover operates correctly
We were sharing `Discourse` both as an application object and a
namespace which complicated things for Ember CLI. This patch
moves raw templates into `__DISCOURSE_RAW_TEMPLATES` and adds
a couple helper methods to create/remove them.
This introduces new APIs for obtaining optimized thumbnails for topics. There are a few building blocks required for this:
- Introduces new `image_upload_id` columns on the `posts` and `topics` table. This replaces the old `image_url` column, which means that thumbnails are now restricted to uploads. Hotlinked thumbnails are no longer possible. In normal use (with pull_hotlinked_images enabled), this has no noticeable impact
- A migration attempts to match existing urls to upload records. If a match cannot be found then the posts will be queued for rebake
- Optimized thumbnails are generated during post_process_cooked. If thumbnails are missing when serializing a topic list, then a sidekiq job is queued
- Topic lists and topics now include a `thumbnails` key, which includes all the available images:
```
"thumbnails": [
{
"max_width": null,
"max_height": null,
"url": "//example.com/original-image.png",
"width": 1380,
"height": 1840
},
{
"max_width": 1024,
"max_height": 1024,
"url": "//example.com/optimized-image.png",
"width": 768,
"height": 1024
}
]
```
- Themes can request additional thumbnail sizes by using a modifier in their `about.json` file:
```
"modifiers": {
"topic_thumbnail_sizes": [
[200, 200],
[800, 800]
],
...
```
Remember that these are generated asynchronously, so your theme should include logic to fall back to other available thumbnails if your requested size has not yet been generated
- Two new raw plugin outlets are introduced, to improve the customisability of the topic list. `topic-list-before-columns` and `topic-list-before-link`
This ensures that at a minimum you are notified once a day of
repeat edits by the same user.
Long term we may consider winding this down to say 1 hour or
making it configurable.
Due to a refactor in e90f9e5cc4 we stopped notifying on edits if
a user liked a post and then edited.
The like could have happened a long time ago so this gets extra
confusing.
This change makes the suppression more deliberate. We only want
to suppress quote/link/mention notifications if the user already got a reply
notification.
We can expand this suppression if it is not enough.
* When hovering over the bookmark icon for a post, show the name of the bookmark at the end of the tooltip _if_ it has been set.
* Order bookmarks by `updated_at DESC` in the user list and show that instead of created at.
Recently, we added a feature where we send `/muted` to users who muted a specific topic just before `/latest`, so the client knows to ignore those messages - https://github.com/discourse/discourse/pull/9482
The same `/muted` message should be sent when the post is edited.
TL;DR: this commit vastly improves how whitespace is handled when converting from HTML to Markdown.
It also adds support for converting HTML <table> elements to Markdown tables.
The previous 'remove_whitespaces!' method traversed the whole HTML tree and used a heuristic to remove
leading and trailing whitespace whenever it was appropriate (ie. mostly before and after HTML block elements).
It was a good idea, but it was very limited and led to bad conversions when the HTML had leading whitespace on several lines, for example.
One such example can be found [here](https://meta.discourse.org/t/86782).
For various reasons, most of the whitespace in an HTML file is ignored when the page is displayed in a browser.
The rules that browsers follow are the [CSS White Space Processing Rules](https://www.w3.org/TR/css-text-3/#white-space-rules).
They can be quite complicated when you take into account RTL languages and various other tidbits, but they boil down to the following:
- Collapse whitespace down to one space (0x20) inside an inline context (ie. nodes/tags that are displayed on the same line)
- Remove any leading/trailing whitespace inside an inline context
One quick & dirty way of getting this 90% solved would be to do 'HTML.gsub!(/[[:space:]]+/, " ")'.
We would also need to hoist <pre> elements in order not to mess with their whitespace.
Unfortunately, this solution lets some whitespace creep around HTML tags, which leads to more '.strip!' calls than I can bear.
I decided to "emulate" the browser's handling of whitespaces and came up with a solution in 4 parts
1. remove_not_allowed!
The HtmlToMarkdown library is recursively "visiting" all the nodes in the HTML in order to convert them to Markdown.
All the nodes that aren't handled by the library (eg. <script>, <style> or any non-textual HTML tags) are "swallowed".
In order to reduce the number of nodes visited, the method 'remove_not_allowed!' will automatically delete all the nodes
that have no "visitor" (eg. a 'visit_<tag>' method) defined.
2. remove_hidden!
Similar purpose to the previous method (ie. reducing the number of nodes visited): there's no point trying to convert something that is hidden.
The 'remove_hidden!' method removes any node that was hidden via the "hidden" HTML attribute, CSS, or a width or height equal to 0.
3. hoist_line_breaks!
The 'hoist_line_breaks!' method is there to handle <br> tags. I know those tiny <br> don't do much but they can be quite annoying.
The <br> tags are inline elements but they visually work like a block element (ie. they create a new line).
If you have the following HTML "<i>Foo<br>Bar</i>", it ends up visually similar to "<i>Foo</i><br><i>Bar</i>".
The latter is much easier to process than the former, so that's what this method produces.
The "hoist_line_breaks!" method will hoist <br> tags out of inline tags until their parent is a block element.
4. remove_whitespaces!
The "remove_whitespaces!" is where all the whitespace removal is happening. It's broken down into 4 methods as well
- remove_whitespaces!
- is_inline?
- collapse_spaces!
- remove_trailing_space!
The 'remove_whitespaces!' method recursively walks the HTML tree (skipping <pre> tags).
If a node has any children, they will be chunked into groups of inline elements vs block elements.
For each chunk of inline elements, it calls the "collapse_spaces!" and "remove_trailing_space!" methods.
For each chunk of block elements, it calls "remove_whitespaces!" to keep walking the HTML tree recursively.
The "is_inline?" method determines whether a node is part of an inline context.
A node is inline if and only if it's a text node, or it's an inline tag (but not <br>) whose children are all also inline.
The "collapse_spaces!" method will collapse any kind of (white) space into a single space (" ") character, even across tags.
For example, if we have " Foo \n<i> Bar </i>\t42", it will return "Foo <i>Bar </i>42".
Finally, the "remove_trailing_space!" method is there to remove any trailing space that might creep in at the end of the inline chunk.
This solution is not 100% bullet-proof.
It does not support RTL languages at all and has some caveats that I felt were not worth the work to get properly fixed.
FIX: better detection of hidden elements when converting HTML to Markdown
FIX: take into account the 'allowed_href_schemes' site setting when converting HTML <a> to Markdown
FIX: added support for 'mailto:' scheme when converting <a> from HTML to Markdown
FIX: added support for <img> dimensions when converting from HTML to Markdown
FIX: added support for <dl>, <dd> and <dt> when converting from HTML to Markdown
FIX: added support for multilines emphases, strongs and strikes when converting from HTML to Markdown
FIX: added support for <acronym> when converting from HTML to Markdown
DEV: remove unused 'sanitize' gem
Wow, did you just read all that?! Congratz, here's a cookie: 🍪.
We have the `# frozen_string_literal: true` comment on all our
files. This means all string literals are frozen. There is no need
to call #freeze on any literals.
For files with `# frozen_string_literal: true`
```
puts %w{a b}[0].frozen?
=> true
puts "hi".frozen?
=> true
puts "a #{1} b".frozen?
=> true
puts ("a " + "b").frozen?
=> false
puts (-("a " + "b")).frozen?
=> true
```
For more details see: https://samsaffron.com/archive/2018/02/16/reducing-string-duplication-in-ruby
* Rename all instances of bookmarkWithReminder and bookmark_with_reminder to just bookmark
* Delete old bookmark code at the same time
* Add migration to remove the bookmarkWithReminder post menu item if people have it set in site settings
Make sure the topic_user.bookmarked column is set correctly when a user bookmarks/unbookmarks any post in a topic. For example, if you bookmarked a post in the topic that was not the OP, the bookmark icon in the topic list would not be shown. Likewise, when deleting the bookmark for the last bookmarked post in a topic, the bookmark icon in the topic list would not be removed.
Previously this was only set to true when bookmarking the OP/topic, which was not correct -- we want to show the icon on the topic list if any post is bookmarked.
It is also set to false when unbookmarking the last bookmarked post in the topic.
Also in this PR is a migration to correct any out of sync topic_user.bookmarked columns, based on the new logic.
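A hedged sketch of the sync logic (illustrative, not the actual migration or model code):
```ruby
# topic_users.bookmarked should be true whenever the user has bookmarked ANY
# post in the topic, and false once the last such bookmark is removed.
def sync_topic_bookmarked(user, topic)
  bookmarked = Bookmark.exists?(user_id: user.id, post_id: topic.posts.select(:id))
  TopicUser.where(user_id: user.id, topic_id: topic.id).update_all(bookmarked: bookmarked)
end
```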
Previously we relied entirely on levenshtein_distance_spammer_emails site
setting to handle "similar looking" emails.
This commit improves the situation by always preferring to block (and check)
canonical emails.
This means that if
`samevil+test@domain.com` is blocked, the system will also block `samevil@domain.com`,
which means `samevil+2@domain.com` (ad infinitum) will be blocked as well.
This reverts commit 6f9177e2ed.
We decided on a completely different approach to the problem.
Instead we will let blocked emails be treated as canonical.
* When copying the markdown for an image between posts, we were not adding the srcset and data-small-image attributes, which are added by calling optimize_image! in the cooked post processor
* Refactored the code, which was confusing in its previous state (the consider_for_reuse method was super confusing), and fixed the issue
* FEATURE: don't display new/unread notification for muted topics
Currently, even if a user mutes a topic, when a new reply to that topic arrives the user will get the "See 1 new or updated topic" message. After clicking on that link, nothing is visible (because the topic is muted).
To solve that problem, we send a background message to all users who recently muted that topic, telling them an update is coming so they can ignore the next message about that topic.
It's possible to cause a 500 error by putting weird characters in the
input field for updating a user's website on their profile.
Normal invalid input like not including the domain extension is already
handled by the user_profile model validation. This fix ensures a server
error doesn't occur for weird input characters.
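A hedged sketch of the kind of guard that turns such input into a normal validation failure instead of a 500; the actual fix may differ:
```ruby
require "uri"

# Reject values that URI can't even parse before running the usual validation.
def parsable_website?(value)
  URI.parse(value)
  true
rescue URI::InvalidURIError
  false
end

parsable_website?("https://example.com")      # => true
parsable_website?("https://exa mple.com <>")  # => false
```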
If for some reason an update did not go through (for example,
concurrently updating the same topic twice), we were logging something
like:
```
create_errors_json called with unrecognized type: #<Topic
```
This happened because we knew an error occurred but the active record
object had no errors attached.
This patch fixes the issue by attaching a proper error message in the
event that this happens.
The main thrust of this PR is to take all the conditional checks based on the `enable_bookmarks_with_reminders` away and only keep the code from the `true` path, making bookmarks with reminders the core bookmarks feature. There is also a migration to create `Bookmark` records out of `PostAction` bookmarks for a site.
### Summary
* Remove logic based on whether enable_bookmarks_with_reminders is true. This site setting is now obsolete, the old bookmark functionality is being removed. Retain the setting and set the value to `true` in a migration.
* Use the code from the rake task to create a database migration that creates bookmarks from post actions.
* Change the bookmark report to read from the new table.
* Get rid of old endpoints for bookmarks
* Link to the new bookmarks list from the user summary page
Otherwise we would be trying to update the reminder type with a string, which often evaluates to 0 (At Desktop) and causes reminders to come through early.
* DEV: Use `render_json_error` (Adds specs for Admin::GroupsController)
* DEV: Use a specific error on blank category slug (Fixes a `render_json_error` warning)
* DEV: Use a specific error on reviewable claim conflict (Fixes a `render_json_error` warning)
* DEV: Use specific errors in Admin::UsersController (Fixes `render_json_error` warnings)
* FIX: PublishedPages error responses
* FIX: TopicsController error responses (There was an issue of two separate `Topic` instances for the same record. This makes sure there's only one up-to-date instance.)
It previously failed to match URLs with characters other than `[a-zA-z0-9\.\/:-]`. This meant that `PullHotlinkedImages` would sometimes download an external image and then never use it in any posts.
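For illustration only, using the character class quoted above:
```ruby
# The narrow pattern stops at the first character it doesn't cover (here the
# query string), so the full URL in the post was never matched and the
# downloaded copy was never swapped in.
old_pattern = /[a-zA-z0-9\.\/:-]+/
"https://example.com/image.png?v=2"[old_pattern]
# => "https://example.com/image.png"
```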
The latest version of Rails contains compatibility fixes for Ruby 2.7 and some
minor security fixes we would like to have.
It also broke some of the multisite tests.
Rails tries to use the same connection for reading from a replica as writing
to the leader during tests, because, with everything happening in a
transaction, changes to the DB wouldn't otherwise be reflected in the
replica connection.
The difference now is that Rails tries to do this for connections opened
after the test has started which affected rails multisite connections.
The upshot of this is that, as things stand, you are likely to
experience problems if you try to connect to a different multisite DB in
a test when the `current_db` is not 'default'.
- Eliminate superfluous "author wrote" block
- Eliminate block-quote for all posts
- Move participant count and reply count to 1 line
- Prioritize name over username if the forum requests it
- Use fabrication in list controller spec to speed up spec
There is now an explicit "Delete Bookmark" button in the edit modal. A confirmation is shown before deleting.
Along with this, when the bookmarked post icon is clicked the modal is now shown instead of just deleting the bookmark. Also, the "Delete Bookmark" button from the user bookmark list now confirms the action.
Add a `d d` shortcut in the modal to delete the bookmark.
Users can now edit the bookmark name and reminder time from their list of bookmarks.
We use "Custom" for the date and time in the modal because if the user set a reminder for "tomorrow" then edit the reminder "tomorrow", the definition of what "tomorrow" is has changed.
Sometimes the spec that tests ordering groups by user count fails.
My theory is that the cause is Postgres returning rows in arbitrary order when the order value is the same for 2 rows.
In the spec, we have three groups:
`moderator_group` - 0 users
`group` - 1 user
`other_group` - 1 user
And we expect that the controller will return them in ascending order [moderator, group, other_group].
Because `group` and `other_group` contain the same number of users, we are relying on luck.
Therefore, I believe that adding one more user to other_group should make that query reliable.
It was not failing on my local machine, so I am not 100% sure.
Within 24 hours of signing up, new users were losing their
default trust level of 3. With this fix, demotions from
trust level 3 won't happen when the "default trust level"
setting is 3 or 4.
* Count user summary bookmarks from new Bookmark table if bookmarks with reminders enabled
* Update topic user bookmarked column when new topic bookmark changed
* Make in:bookmarks search work with new bookmarks
* Fix batch inserts for bookmark rake task (and thus migration). We were only inserting one bookmark at a time, completely defeating the purpose of batching!
This commit is for a frequently requested task on meta so that only 1
API call is needed instead of 3!
In order to create a user via the api and not have them receive an
activation email you can pass in the `active=true` flag. This prevents
sending an email, but it is only half of the solution and puts the db in
a weird state where it has an active user with an unconfirmed email.
This commit fixes that and ensures that if the `active=true` flag is set
the user's email is also marked as confirmed.
This change only applies to admins using the API.
Related topics on meta:
- https://meta.discourse.org/t/-/68663
- https://meta.discourse.org/t/-/33133
- https://meta.discourse.org/t/-/36133
Trigger an event for plugins to consume when a user session is refreshed.
This allows external auth to be notified about account activity, and be
able to take action such as use oauth refresh tokens to keep oauth
tokens valid.
The new `enforce_canonical_emails` site setting ensures that emails in the
canonical form are unique.
This means that if `s.a.m+1@gmail.com` is registered, `sam@gmail.com` will
not be allowed.
The commit contains a blanket "tag strip" (stripping everything after +);
it also contains special handling of a "dot strip" for googlemail and gmail.
The setting only impacts registrations made after `enforce_canonical_emails` is enabled.
The setting defaults to false so it will not impact any existing installs.
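A hedged sketch of the canonicalization described above (not the exact Discourse implementation):
```ruby
# Strip everything after "+", and also strip dots for gmail/googlemail.
def canonical_email(email)
  name, domain = email.downcase.split("@", 2)
  name = name.split("+", 2).first
  name = name.delete(".") if %w[gmail.com googlemail.com].include?(domain)
  "#{name}@#{domain}"
end

canonical_email("s.a.m+1@gmail.com") # => "sam@gmail.com"
```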
Timezone is guessed by moment.js if unset upon a normal login, but was not when
logging in via an email link. This adds logic to update a guessed
timezone upon email login so timezones don't end up blank.
FEATURE: add after-reviewable-post-user plugin outlet
Add a plugin outlet after reviewable post user
Add a basic user serializer that includes custom fields.
Allows review queue serializer to include custom fields for its users
If the feature is enabled, staff members can construct a URL and publish a
topic for others to browse without the regular Discourse chrome.
This is useful if you want to use Discourse like a CMS and publish
topics as articles, which can then be embedded into other systems.
Previously we were only updating group membership when a user record was
first created in an SSO setting.
This corrects it so we also update it if SSO changes the email
It wasn't intended that people should be able to earn trust level
3 without participating in public topics. When counting topic
views and likes given/received, don't count private topics.
Users noticed that sometimes the avatar from Gravatar is not correctly updated - https://meta.discourse.org/t/updated-image-on-gravatar-not-seeing-it-update-on-site/54357
A potential reason is that even if you update your avatar in Gravatar, the URL stays the same, and if a cache is involved the service still receives the old photo.
For example, in my case, when I click the button to refresh the avatar, a new Upload record is created with `origin` pointing at the new avatar URL and `url` pointing at the old one.
I made some tests in the rails console: adding a random param to the Gravatar URL bypasses the cache and the correct, newest avatar is downloaded.
* DEPRECATION: Remove support for api creds in query params
This commit removes support for api credentials in query params except
for a few whitelisted routes like rss/json feeds and the handle_mail
route.
Several tests were written to validate these changes, but the bulk of the
spec changes are just switching them over to use header based auth so
that they will pass without changing what they were actually testing.
Original commit that notified admins this change was coming was created
over 3 months ago: 2db2003187
* fix tests
* Also allow iCalendar feeds
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
* FIX: guardian always got user but sometimes it is anonymous
```
def initialize(user = nil, request = nil)
  @user = user.presence || AnonymousUser.new
  @request = request
end
```
AnonymousUser defines a `blank?` method:
```
class AnonymousUser
  def blank?
    true
  end
  ...
end
```
so checking `@user.present?` would be correct; however, checking just `@user` is always true
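An illustration of the pitfall (assuming ActiveSupport's `present?`):
```ruby
user = AnonymousUser.new
!!user         # => true  (so `if @user` is always taken)
user.present?  # => false (present? respects blank?)
```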
Previously all topic posters would be added, which could lead to major performance issues. Now, if there are too many posters, only the acting user will be added as a participant.