Also:
* Remove an unused method (#fill_email)
* Replace a method that was used just once (#generate_username) with `SecureRandom.alphanumeric`
* Remove obsolete dev puma `tmp/restart` file logic
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or above the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
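As a rough sketch of how the per-user vs. per-IP dispatch described above could work (the names `rate_limit_bucket` and `TRUST_LEVEL_THRESHOLD` here are hypothetical illustrations, not the actual middleware code):
```
# Hypothetical sketch: pick the rate-limit bucket per request.
TRUST_LEVEL_THRESHOLD =
  ENV.fetch("DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL", "1").to_i

def rate_limit_bucket(request, user)
  if user && user.trust_level >= TRUST_LEVEL_THRESHOLD
    "rate-limit-user:#{user.id}"  # per-user bucket for trusted users
  else
    "rate-limit-ip:#{request.ip}" # per-IP bucket for everyone else
  end
end
```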
Before this commit, Discourse's auth cookie (`_t`) was simply a 32-character
string that Discourse used to look up the current user from the database, and
the cookie contained no additional information about the user. However, we had
to change the cookie content in this commit so we could identify the user from
the cookie without making a database query before the rate limit logic, and
avoid introducing a bottleneck on busy sites.
Besides the 32-character auth token, the cookie now includes the user id, the
trust level and the cookie's generation date, and we encrypt/sign the cookie to
prevent tampering.
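For illustration, here is a minimal sketch of such an encrypted/signed cookie using `ActiveSupport::MessageEncryptor` (the real key derivation and payload format are not shown here and may differ):
```
require "active_support"

# Illustrative only: encrypt a payload carrying the token plus the extra
# fields, so any tampering invalidates the cookie.
key = ActiveSupport::KeyGenerator.new("server secret")
  .generate_key("auth cookie", 32)
encryptor = ActiveSupport::MessageEncryptor.new(key)

cookie_value = encryptor.encrypt_and_sign(
  token: "the-32-character-auth-token",
  user_id: 42,
  trust_level: 2,
  issued_at: Time.now.to_i
)

payload = encryptor.decrypt_and_verify(cookie_value) # raises if tampered with
```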
Internal ticket number: t54739.
When dismissing new topics for the Tracked filter, the dismiss was
limited to 30 topics, which is the default per-page count for TopicQuery.
This happened even if you specified which topic IDs you were
selectively dismissing. This PR fixes that bug, and also moves
the per-page count into a DEFAULT_PER_PAGE_COUNT constant on TopicQuery
so it can be stubbed in tests.
Also moves the unused stub_const method into the spec helpers
for cases like this; it is much better to handle this in one place
with an ensure. In a follow-up PR I will clean up other specs that
do the same thing and make them use stub_const.
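A minimal sketch of what such a helper can look like (this is an assumption about the shape, not the exact implementation):
```
# Temporarily replace a constant for the duration of a block, restoring
# it even if the block raises. Illustrative sketch only.
def stub_const(klass, const, value)
  original = klass.const_get(const)
  klass.send(:remove_const, const)
  klass.const_set(const, value)
  yield
ensure
  klass.send(:remove_const, const)
  klass.const_set(const, original)
end

# usage in a spec:
# stub_const(TopicQuery, :DEFAULT_PER_PAGE_COUNT, 1) { ... }
```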
* Move onebox gem in core library
* Update template file path
* Remove warning for onebox gem caching
* Remove onebox version file
* Remove onebox gem
* Add sanitize gem
* Require onebox library in lazy-yt plugin
* Remove onebox web specific code
This code was used in the standalone onebox Sinatra application
* Merge Discourse specific AllowlistedGenericOnebox engine in core
* Fix onebox engine filenames to match class name casing
* Move onebox specs from gem into core
* DEV: Rename `response` helper to `onebox_response`
Fixes a naming collision.
* Require rails_helper
* Don't use `before/after(:all)`
* Whitespace
* Remove fakeweb
* Remove poor unit tests
* DEV: Re-add fakeweb, plugins are using it
* Move onebox helpers
* Stub Instagram API
* FIX: Follow additional redirect status codes (#476)
Don't throw errors if we encounter 303, 307 or 308 HTTP status codes in responses (see the sketch below)
* Remove an empty file
* DEV: Update the license file
Using the copy from https://choosealicense.com/licenses/gpl-2.0/#
Hopefully this will enable GitHub to show the license UI?
* DEV: Update embedded copyrights
* DEV: Add Onebox copyright notice
* DEV: Add MIT license, convert COPYRIGHT.txt to md
* DEV: Remove an incorrect copyright claim
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: jbrw <jamie@goatforce5.org>
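Regarding the redirect status codes item above: as a hedged sketch, a follow-redirects loop just needs to treat 303, 307 and 308 like the other redirect codes (this is illustrative, not Onebox's actual code):
```
require "net/http"
require "uri"

REDIRECT_CODES = [301, 302, 303, 307, 308] # now includes 303/307/308

# Illustrative fetch that follows a limited number of redirects.
def fetch(url, limit = 5)
  raise "too many redirects" if limit == 0
  response = Net::HTTP.get_response(URI(url))
  if REDIRECT_CODES.include?(response.code.to_i)
    fetch(response["location"], limit - 1)
  else
    response
  end
end
```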
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
This PR allows invitations to be used when the DiscourseConnect SSO is enabled for a site (`enable_discourse_connect`) and local logins are disabled. Previously invites could not be accepted with SSO enabled simply because we did not have the code paths to handle that logic.
The invitation methods that are supported include:
* Inviting people to groups via email address
* Inviting people to topics via email address
* Using invitation links generated by the Invite Users UI in the /my/invited/pending route
The flow works like this:
1. User visits an invite URL
2. The normal invitation validations (redemptions/expiry) happen at that point
3. We store the invite key in a secure session
4. The user clicks "Accept Invitation and Continue" (see below)
5. The user is redirected to /session/sso then to the SSO provider URL then back to /session/sso_login
6. We retrieve the invite based on the invite key in secure session. We revalidate the invitation. We show an error to the user if it is not valid. An additional check here for invites with an email specified is to check that the SSO email matches the invite email
7. If the invite is OK we create the user via the normal SSO methods
8. We redeem the invite and activate the user. We clear the invite key in secure session.
9. If the invite had a topic we redirect the user there, otherwise we redirect to /
Note that we decided that, for SSO-based invites, the `must_approve_users` site setting is ignored, because the invite is a form of pre-approval, and because regular non-staff users cannot send out email invites or generally invite to the forum in this case.
Also deletes some group invite checks as per https://github.com/discourse/discourse/pull/12353
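As a rough sketch of the revalidation in step 6 above (all names here are hypothetical, for illustration only):
```
# Hypothetical sketch of step 6: revalidate the invite stored in the
# secure session during the SSO login callback.
def validate_invite!(secure_session, sso_email)
  invite = Invite.find_by(invite_key: secure_session["invite-key"])

  if invite.nil? || invite.expired? || invite.redeemed?
    raise "invite is no longer valid" # surfaced to the user as an error
  end

  # invites sent to a specific email must match the SSO-provided email
  if invite.email.present? && invite.email != sso_email
    raise "invite email does not match SSO email"
  end

  invite
end
```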
This test failure was caused by Rails calling `.debug` on our FakeLogger, which did not support it, resulting in more errors than the test was expecting.
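The fix amounts to giving the fake logger the missing method; a minimal sketch (assuming a simple array-backed fake, not the exact class):
```
# Minimal array-backed fake logger; adding #debug keeps Rails happy
# when it logs at the debug level.
class FakeLogger
  attr_reader :debugs, :infos, :warnings, :errors

  def initialize
    @debugs, @infos, @warnings, @errors = [], [], [], []
  end

  def debug(message); @debugs << message; end
  def info(message);  @infos << message;  end
  def warn(message);  @warnings << message; end
  def error(message); @errors << message; end
end
```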
Moves the topic timer jobs from being scheduled ahead of time with enqueue_at to a 5-minute scheduled run, like bookmark reminders, in a new job called Jobs::EnqueueTopicTimers. Backwards compatibility is maintained by checking whether an existing topic timer job is already enqueued in Sidekiq for the timer, and running the timer inside the new job only if it is not.
The functionality to close/open a topic if it is in the opposite state, which is used for Open/Close Temporarily, still remains in the after_save block of TopicTimer, with further commentary.
This also removes the ensure_consistency! functionality of topic timers as it is no longer needed; the new job will always pick up the timers because they are no longer stored in Sidekiq's fragile scheduled-job state.
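A hedged sketch of what such a 5-minute scheduled job could look like (the interval is from the description above; the TopicTimer helpers used here are illustrative):
```
# Illustrative sketch of a scheduled job that enqueues due topic timers.
module Jobs
  class EnqueueTopicTimers < ::Jobs::Scheduled
    every 5.minutes

    def execute(_args)
      TopicTimer.pending_timers.find_each do |timer| # hypothetical scope
        # skip timers that still have a legacy job sitting in Sidekiq
        next if timer.enqueued_job_exists? # hypothetical helper
        timer.enqueue_typed_job            # hypothetical helper
      end
    end
  end
end
```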
It used to insert block Oneboxes inside paragraphs, which resulted in
invalid HTML. This required an additional parsing pass to remove empty
paragraphs, and the resulting HTML could still be invalid.
This commit ensures that block Oneboxes are inserted correctly, by
splitting the paragraph containing the link and putting the block
between the two halves. Paragraphs left with nothing but whitespace
are removed.
Follow up to 7f3a30d79f.
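As a rough illustration of the splitting approach using Nokogiri (this sketch is an assumption about the mechanics, not the actual cooked-HTML pipeline):
```
require "nokogiri"

# Sketch: split a paragraph at the oneboxed link and put the block-level
# onebox between the two halves, so no block element sits inside a <p>.
doc = Nokogiri::HTML.fragment("<p>before <a href='https://example.com'>x</a> after</p>")
paragraph = doc.at_css("p")
link = paragraph.at_css("a")

onebox = doc.document.create_element("aside", class: "onebox")
trailing = doc.document.create_element("p")

node = link.next_sibling
while node
  nxt = node.next_sibling
  trailing.add_child(node) # move everything after the link
  node = nxt
end
link.remove

paragraph.add_next_sibling(onebox)
onebox.add_next_sibling(trailing)

# drop paragraphs left with nothing but whitespace
[paragraph, trailing].each do |p|
  p.remove if p.text.strip.empty? && p.element_children.empty?
end
```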
There are issues around displaying images on published pages when secure media is enabled. This PR temporarily makes it appear as if published pages are enabled if secure media is also enabled.
Previously, any errors in those files would, for example, blow up the update process in docker_manager.
Now it prints out an error and proceeds as if there were no compatibility file.
Includes:
* DEV: Extract setup_git_repo
* DEV: Use `Dir.mktmpdir`
* DEV: Default to `main` branch (The latest versions of git already do this, so to avoid problems do this by default)
Extracted commonly used spec helpers into spec/support/uploads_helpers.rb, removed unused stubs and let definitions. Makes it easier to write new S3-related specs without copying and pasting setup steps from other specs.
We have the `# frozen_string_literal: true` comment on all our
files. This means all string literals are frozen. There is no need
to call #freeze on any literals.
For files with `# frozen_string_literal: true`
```
puts %w{a b}[0].frozen?
=> true
puts "hi".frozen?
=> true
puts "a #{1} b".frozen?
=> true
puts ("a " + "b").frozen?
=> false
puts (-("a " + "b")).frozen?
=> true
```
For more details see: https://samsaffron.com/archive/2018/02/16/reducing-string-duplication-in-ruby
Previously we were only updating group membership when a user record was
first created in an SSO setting.
This corrects it so we also update group membership if SSO changes the email.
* Add uploads:sync_s3_acls rake task to ensure the ACLs in S3 are the correct (public-read or private) setting based on upload security
* Improved uploads:disable_secure_media to be more efficient and provide better messages to the user.
* Rename uploads:ensure_correct_acl task to uploads:secure_upload_analyse_and_update as it does more than check the ACL
* Many improvements to uploads:secure_upload_analyse_and_update
* Make sure that upload.access_control_post is unscoped so deleted posts are still fetched, because they still affect the security of the upload.
* Add escape hatch for capture_stdout in the form of RAILS_ENABLE_TEST_STDOUT. If provided, the capture_stdout code will be ignored, so you can see the output if you need it.
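A minimal sketch of that escape hatch (assuming a typical `capture_stdout` helper; the actual helper may differ):
```
require "stringio"

# Capture everything written to $stdout during the block, unless the
# RAILS_ENABLE_TEST_STDOUT escape hatch is set.
def capture_stdout
  return yield if ENV["RAILS_ENABLE_TEST_STDOUT"]

  original = $stdout
  begin
    $stdout = StringIO.new
    yield
    $stdout.string
  ensure
    $stdout = original
  end
end
```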
Basically, say you had already downloaded a certain image from a certain URL
using pull_hotlinked_images and the onebox. The upload would be stored
by its sha1 as an upload record. Whenever you linked to the same URL again
in a post (e.g. in our case an og:image on review.discourse) we would
reuse the original upload record because of the sha1.
However when you turned on secure media this could cause problems as
the first post that uses that upload after secure media is enabled
will set the access control post for the upload to the new post.
Then if the post is deleted every single onebox/link to that same image
URL will fail forever with 403 as the secure-media-uploads URL fails
if the access control post has been deleted.
To fix this, when cooking posts and pulling hotlinked images, we only
allow reusing an original upload by URL if its access control post
matches the current post, and if the original_sha1 is filled in,
meaning it was uploaded AFTER secure media was enabled. Otherwise
we just redownload the media again to be safe, as the URL will always
be new then.
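A hedged sketch of that check (the attribute names follow the description above; the surrounding method is hypothetical):
```
# Illustrative: decide whether an existing upload found by URL can be
# reused for this post when secure media is enabled.
def reusable_upload_for?(upload, post)
  return true unless SiteSetting.secure_media

  upload.original_sha1.present? &&           # uploaded after secure media was enabled
    upload.access_control_post_id == post.id # security is tied to this post
end

# If this returns false, redownload the media instead of reusing the record.
```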
* DEV: Add a fake Mutex for concurrency testing with Fibers
* DEV: Support running in sleep order in concurrency tests
* FIX: A separate FallbackHandler should be used for each redis pair
This commit refactors the FallbackHandler and Connector:
* There were two different ways to determine whether the redis master
was up. There is now one way and it is the responsibility of the
new RedisStatus class.
* A background thread would be created whenever `verify_master` was
called unless the thread already existed. The thread would
periodically check the status of the redis master. However, checking
that a thread is `alive?` is an ineffective way of determining
whether it will continue to check the redis master in the future
since the thread may be in the process of winding down.
Now, this thread is created when the recorded master status goes from
up to down. Since this thread runs the only part of the code that is
able to bring the recorded status up again, we ensure that only one
thread is probing the redis master at a time and that there is always
a thread probing redis master when it is recorded as being down.
* Each time the status of the redis master was checked periodically, it
would spawn a new thread and immediately join on it. I assume this
happened to isolate the check from the current execution, but since
the join rethrows exceptions in the parent thread, this was not
effective.
* The logic for falling back was spread over the FallbackHandler and
the Connector. The connector is now a dumb object that delegates
responsibility for determining the status of redis to the
FallbackHandler.
* Previously, failing to connect to a master redis instance when it was
not recorded as down would raise an exception. Now, this exception is
passed to `Discourse.warn_exception` and the connection is made to
the slave.
This commit introduces the FallbackHandlers singleton:
* It is responsible for holding the set of FallbackHandlers.
* It adds callbacks to the fallback handlers for when a redis master
comes up or goes down. Main redis and message bus redis may exist on
different or the same redis hosts and so these callbacks may all
exist on the same FallbackHandler or on separate ones.
These objects are tested using fake concurrency provided by the
Concurrency module:
* An `around(:each)` hook is used to cause each test to run inside a
Scenario so that the test body, mocking cleanup and `after(:each)`
callbacks are run in a different Fiber.
* Therefore, halting the Execution abruptly (so that the fibers aren't
run to completion) prevents the mocking cleanup and `after(:each)`
callbacks from running. I have tried to prevent this by recovering from
all exceptions during an Execution.
* FIX: Create frozen copies of passed-in config where possible
* FIX: extract start_reset method and remove method used by tests
Co-authored-by: Daniel Waterworth <me@danielwaterworth.com>
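A minimal sketch of a Fiber-friendly fake Mutex of the kind described above (an illustration, not the actual Concurrency module):
```
require "fiber"

# Illustrative fake Mutex for deterministic, single-threaded concurrency
# tests: "blocking" yields control back to the test scheduler instead of
# parking an OS thread. Assumes a scheduler that keeps resuming fibers.
class FakeMutex
  def initialize
    @owner = nil
  end

  def synchronize
    Fiber.yield while @owner # wait until the lock is free
    @owner = Fiber.current
    begin
      yield
    ensure
      @owner = nil
    end
  end
end
```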
We like to stay as close as possible to the latest rubocop because the cops
get better.
This update required some code changes; specifically, the default is now to
avoid explicit returns where an implicit return will do.
This also renames a few rules.
This feature adds the ability to define synonyms for tags, and the ability to merge one tag into another while keeping it as a synonym. For example, tags named "js" and "java-script" can be synonyms of "javascript". When searching and creating topics using synonyms, they will be mapped to the base tag.
Along with this change is a new UI found on each tag's page (for example, `/tags/javascript`) where more information about the tag can be shown. It will list the synonyms, which categories it's restricted to (if any), and which tag groups it belongs to (if tag group names are public on the `/tags` page by enabling the "tags listed by group" setting). Staff users will be able to manage tags in this UI, merge tags, and add/remove synonyms.
Previously people were not consistent about mocking, which left internals in
a fragile state when running subfolder specs.
This introduces a simple helper `set_subfolder` which you can use to set
the subfolder for the spec. It takes care of proper configuration of subfolder
and teardown.
```
# usage
set_subfolder "/my_amazing_subfolder"
```
You should no longer stub base_uri or global_settings.
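For illustration, the helper can be shaped roughly like this (the configuration seam used here, `relative_url_root`, is an assumption; the real helper wires up whatever Discourse actually reads for its base path):
```
# Illustrative spec helper: configure a subfolder for one example and
# guarantee teardown, so specs can't leak state into each other.
module SubfolderTestHelpers
  def set_subfolder(path)
    raise ArgumentError, "path must start with /" unless path.start_with?("/")
    @original_subfolder = Rails.application.config.relative_url_root
    Rails.application.config.relative_url_root = path
  end
end

RSpec.configure do |config|
  config.include SubfolderTestHelpers
  config.after(:each) do
    if defined?(@original_subfolder)
      Rails.application.config.relative_url_root = @original_subfolder
    end
  end
end
```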
This change both speeds up specs (fewer strings to allocate) and helps catch
cases where methods in Discourse are mutating inputs.
Overall we will be migrating everything to use `#frozen_string_literal: true`;
it will take a while, but this is the first and safest move in this direction.
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
Previously we would bypass touching `Topic.updated_at` for whispers and post
recovery / deletions.
This meant that certain types of caching could not be done where we rely on
this information for cache accuracy.
For example, if we know we have zero unread topics as of yesterday and a
whisper is made, we need to bump this date so the cache remains accurate.
This is only half of a larger change but provides the groundwork.
Confirmed none of our serializers leak out Topic.updated_at, so this is a
safe spot for this info.
At the moment edits still do not change this, but that is not relevant for the
unread cache.
This commit also cleans up some specs to use the new `eq_time` matcher for
millisecond-fidelity comparison of times.
Previously `freeze_time` would fudge this, which is not that clean.
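For illustration, such a matcher can be defined roughly like this (an assumption about its shape, not the exact implementation):
```
require "rspec/expectations"

# Illustrative RSpec matcher comparing times to millisecond fidelity.
RSpec::Matchers.define :eq_time do |expected|
  match do |actual|
    (actual.to_f - expected.to_f).abs < 0.001
  end

  failure_message do |actual|
    "expected #{actual.inspect} to be within 1ms of #{expected.inspect}"
  end
end

# usage:
# expect(topic.updated_at).to eq_time(post.created_at)
```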
Fixes two issues:
1. Redirecting to an external origin's path after login did not work
2. User would be erroneously redirected to the external origin after logout
https://meta.discourse.org/t/109755
It is not a setting, and only relevant in specs. The new API is:
```
Jobs.run_later! # jobs will be thrown on the queue
Jobs.run_immediately! # jobs will run right away, avoid the queue
```
Previously if you wanted to have jobs execute in test mode, you'd have
to do `SiteSetting.queue_jobs = false`, because the opposite of queue
is to execute.
I found this very confusing, so I created a test helper called
`run_jobs_synchronously!` which is much more clear about what it does.
We had a missing `formats:` argument on our render partial call that caused
the logs to be spammed when CSS files got 404s.
Due to magic, discourse_public_exceptions.rb was actually returning the
correct 404 because it switched formats when rendering the error.
SiteSettingExtension triggers the message bus, which re-establishes a
DB connection in `SiteSettingExtension#process_message`. That happens
concurrently, and a test that requires a connection to the DB will
fail while the reconnection is happening.
* FEATURE: Added MaxMindDb to resolve IP information.
* FEATURE: Added browser detection based on user agent.
* FEATURE: Added recently used devices in user preferences.
* DEV: Added acceptance test for recently used devices.
* UX: Do not show 'Show more' button if there aren't more tokens.
* DEV: Fix unit tests.
* DEV: Make changes after code review.
* Add more detailed unit tests.
* Improve logging messages.
* Minor coding style fixes.
* DEV: Use DropdownSelectBoxComponent and run Prettier.
* DEV: Fix unit tests.
This updates tests to use the latest Rails 5 practices
and updates ALL dependencies that could be updated.
Performance testing shows that performance has not regressed;
if anything, it is marginally faster now.
FEATURE: clicking envelope takes you to inbox
Suggested messages work somewhat like suggested topics.
- New messages show up first (in either the group inbox or the inbox)
- Then unread messages (in either the group inbox or the inbox)
- Finally "related" messages, which are messages with the same participants as the current PM
This introduces match_html, which converts actual and expected HTML strings into "canonical" HTML using Nokogiri with NOBLANKS and COMPACT, and then does a simple equality comparison.
This eliminates whitespace differences introduced by library changes (e.g. the libxml2 2.9.0 change).
pretty_text_spec.rb has been changed to use match_html where appropriate, and all tests pass under libxml2 2.7.6, 2.8.0, and 2.9.0.
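A minimal sketch of such a helper, using Nokogiri's NOBLANKS and COMPACT parse options as described (the real helper may differ in detail):
```
require "nokogiri"
require "rspec/expectations"

# Canonicalize both HTML strings (dropping blank text nodes) before a
# plain equality comparison, so whitespace-only differences don't fail.
def canonical_html(html)
  Nokogiri::HTML(html) { |config| config.noblanks.compact }.to_html
end

RSpec::Matchers.define :match_html do |expected|
  match do |actual|
    canonical_html(actual) == canonical_html(expected)
  end
end

# usage:
# expect(cooked).to match_html("<p>expected</p>")
```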