# frozen_string_literal: true
#
# A wrapper around redis that namespaces keys with the current site id
#
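# For example (illustrative only): with the current site id "default", setting
# the key "foo" through the wrapper writes the underlying redis key
# "default:foo".
#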
require_dependency 'cache'
require_dependency 'concurrency'
class DiscourseRedis
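  # RedisStatus probes a single master/slave pair: it reports whether the
  # master is up and serving requests, and clears stale client connections
  # off the slave when we fail back to the master.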
  class RedisStatus
    MASTER_ROLE_STATUS = "role:master".freeze
    MASTER_LOADED_STATUS = "loading:0".freeze
    CONNECTION_TYPES = %w{normal pubsub}.each(&:freeze)
    def initialize(master_config, slave_config)
      master_config = master_config.dup.freeze unless master_config.frozen?
      slave_config = slave_config.dup.freeze unless slave_config.frozen?
      @master_config = master_config
      @slave_config = slave_config
    end
    def master_alive?
      master_client = connect(@master_config)
      begin
        info = master_client.call([:info])
      rescue Redis::ConnectionError, Redis::CannotConnectError, RuntimeError => ex
        # A failed DNS lookup can surface from the driver as a bare
        # RuntimeError; any other RuntimeError is re-raised.
        raise ex if ex.class == RuntimeError && ex.message != "Name or service not known"
        warn "Master not alive, error connecting"
        return false
      ensure
        master_client.disconnect
      end
      unless info.include?(MASTER_LOADED_STATUS)
        warn "Master not alive, status is loading"
        return false
      end
      unless info.include?(MASTER_ROLE_STATUS)
        warn "Master not alive, role != master"
        return false
      end
      true
    end
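
    # Illustrative use (not from this file; the config keys are whatever the
    # redis gem accepts):
    #
    #   status = RedisStatus.new(
    #     { host: "127.0.0.1", port: 6379 },
    #     { host: "127.0.0.1", port: 6380 }
    #   )
    #   status.master_alive? # => true once INFO shows "role:master" and "loading:0"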
    def fallback
      warn "Killing connections to slave..."
      slave_client = connect(@slave_config)
      begin
        CONNECTION_TYPES.each do |connection_type|
          slave_client.call([:client, [:kill, 'type', connection_type]])
        end
      rescue Redis::ConnectionError, Redis::CannotConnectError, RuntimeError => ex
        raise ex if ex.class == RuntimeError && ex.message != "Name or service not known"
        warn "Attempted a redis fallback, but connection to slave failed"
      ensure
        slave_client.disconnect
      end
    end
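
    # Killing the slave's "normal" and "pubsub" connections forces clients
    # that failed over to reconnect, at which point they are handed the
    # restored master again.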

    private

    def connect(config)
      config = config.dup
      # Drop the custom connector so the probe dials the given host directly
      # instead of going back through the fallback logic.
      config.delete(:connector)
      ::Redis::Client.new(config)
    end

    def log_prefix
      @log_prefix ||= begin
        master_string = "#{@master_config[:host]}:#{@master_config[:port]}"
        slave_string = "#{@slave_config[:host]}:#{@slave_config[:port]}"
        "RedisStatus master=#{master_string} slave=#{slave_string}"
      end
    end

    def warn(message)
      Rails.logger.warn "#{log_prefix}: #{message}"
    end
  end
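
  # One FallbackHandler per master/slave pair records whether that master is
  # usable. When the recorded status flips from up to down, it starts the
  # single probe loop that can flip the status back up.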
  class FallbackHandler
    def initialize(log_prefix, redis_status, execution)
      @log_prefix = log_prefix
      @redis_status = redis_status
      @mutex = execution.new_mutex
      @execution = execution
      @master = true
      @event_handlers = []
    end
    def add_callbacks(handler)
      @mutex.synchronize do
        @event_handlers << handler
      end
    end
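
    # Handlers are plain objects that respond to #up and #down; for example
    # (illustrative only, including the names):
    #
    #   class MessageBusStatus
    #     def up; Rails.logger.warn "message bus redis master is back"; end
    #     def down; Rails.logger.warn "message bus redis master is down"; end
    #   end
    #
    #   fallback_handler.add_callbacks(MessageBusStatus.new)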
    # Flips the recorded status from up to down and returns true only for the
    # caller that performed the flip, so exactly one probe is started.
    def start_reset
      @mutex.synchronize do
        if @master
          @master = false
          trigger(:down)
          true
        else
          false
        end
      end
    end
    def use_master?
      master = @mutex.synchronize { @master }
      if !master
        false
      elsif safe_master_alive?
        true
      else
        # The recorded status is up but the master is not responding; the
        # caller that wins start_reset spawns the single probe that can bring
        # the status back up.
        if start_reset
          @execution.spawn do
            loop do
              @execution.sleep 5
              info "Checking connection to master"
              if safe_master_alive?
                @mutex.synchronize do
                  @master = true
                  @redis_status.fallback
                  trigger(:up)
                end
                break
              end
            end
          end
        end
        false
      end
    end
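
    # Sketch of how a connector consults this object (illustrative;
    # connect_to is a placeholder, and the real Connector lives further down
    # this file):
    #
    #   config = handler.use_master? ? master_config : slave_config
    #   connect_to(config)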

    private

attr_reader :log_prefix
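# Fan an event (:up or :down) out to every registered callback object,
# logging and skipping any callback that raises, so one bad handler cannot
# prevent the others from running.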
def trigger(event)
@event_handlers.each do |handler|
begin
handler.public_send(event)
rescue Exception => e
Discourse.warn_exception(e, message: "Error running FallbackHandler callback")
end
end
end
def info(message)
Rails.logger.info "#{log_prefix}: #{message}"
end
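# A probe failure is reported and treated as "master down"; the probe itself
# must never raise into the code that scheduled it.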
def safe_master_alive?
begin
@redis_status.master_alive?
rescue Exception => e
Discourse.warn_exception(e, message: "Error running master_alive?")
false
end
end
end
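# While the message bus redis master is down, the keepalive interval is set
# to 0 (presumably to stop keepalive publishes, which would fail against a
# read-only replica); the previous interval is restored once the master is
# back up.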
class MessageBusFallbackCallbacks
def down
@keepalive_interval, MessageBus.keepalive_interval =
MessageBus.keepalive_interval, 0
end
def up
MessageBus.keepalive_interval = @keepalive_interval
end
end
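# Nothing needs to happen on the way down for the main redis pair: readonly
# mode is entered lazily, when a write actually raises READONLY (see
# DiscourseRedis.ignore_readonly below). On the way up we clear the readonly
# flag and ask connected clients to refresh.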
class MainRedisReadOnlyCallbacks
def down
end
def up
Discourse.clear_readonly!
Discourse.request_refresh!
end
end
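# Registry of FallbackHandler instances: one handler per distinct
# master/slave pair, keyed by the master's host and port.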
class FallbackHandlers
include Singleton
def initialize
@mutex = Mutex.new
@fallback_handlers = {}
end
def handler_for(config)
config = config.dup.freeze unless config.frozen?
@mutex.synchronize do
@fallback_handlers[[config[:host], config[:port]]] ||= begin
log_prefix = "FallbackHandler #{config[:host]}:#{config[:port]}"
slave_config = DiscourseRedis.slave_config(config)
redis_status = RedisStatus.new(config, slave_config)
handler =
FallbackHandler.new(
log_prefix,
redis_status,
Concurrency::ThreadedExecution.new
)
if config == GlobalSetting.redis_config
handler.add_callbacks(MainRedisReadOnlyCallbacks.new)
end
if config == GlobalSetting.message_bus_redis_config
handler.add_callbacks(MessageBusFallbackCallbacks.new)
end
handler
end
end
end
def self.handler_for(config)
instance.handler_for(config)
end
end
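# A minimal usage sketch (hypothetical host names): configs that share a
# master host/port resolve to the same handler, so each redis pair is probed
# by at most one background thread.
#
#   config = { host: "redis-main", port: 6379, slave_host: "redis-replica", slave_port: 6379 }
#   h1 = DiscourseRedis::FallbackHandlers.handler_for(config)
#   h2 = DiscourseRedis::FallbackHandlers.handler_for(config.dup)
#   h1.equal?(h2) # => true, keyed on [host, port]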
class Connector < Redis::Client::Connector
def initialize(options)
options = options.dup.freeze unless options.frozen?
super(options)
@slave_options = DiscourseRedis.slave_config(options).freeze
@fallback_handler = DiscourseRedis::FallbackHandlers.handler_for(options)
end
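# Called by the redis client whenever it (re)connects; the fallback handler
# alone decides whether we hand back the master or the slave options.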
def resolve
if @fallback_handler.use_master?
@options
else
@slave_options
end
end
end
def self.raw_connection(config = nil)
config ||= self.config
Redis.new(config)
end
def self.config
GlobalSetting.redis_config
end
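# Build the replica's connection options from a master config, e.g.
# (hypothetical values):
#   DiscourseRedis.slave_config(host: "m", port: 6379, slave_host: "s", slave_port: 6380)
#   # => { host: "s", port: 6380, slave_host: "s", slave_port: 6380 }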
def self.slave_config(options = config)
options.dup.merge(host: options[:slave_host], port: options[:slave_port])
end
def initialize(config = nil, namespace: true)
@config = config || DiscourseRedis.config
@redis = DiscourseRedis.raw_connection(@config.dup)
@namespace = namespace
end
def without_namespace
# Only use this if you want to store and fetch data that's shared between sites
@redis
end
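# Writes issued against a read-only replica raise Redis::CommandError with a
# READONLY message; rather than raising, we record that we saw one and
# return nil. A minimal sketch of what callers can expect:
#
#   DiscourseRedis.ignore_readonly { Discourse.redis.set("key", "value") }
#   # => "OK" normally, nil while failed over to the read-only slave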
def self.ignore_readonly
yield
rescue Redis::CommandError => ex
if ex.message =~ /READONLY/
unless Discourse.recently_readonly? || Rails.env.test?
STDERR.puts "WARN: Redis is in a readonly state. Performed a no-op"
end
Discourse.received_redis_readonly!
nil
else
raise ex
end
end
# Proxy any method we don't explicitly wrap straight through to the
# underlying connection (these calls are not namespaced)
def method_missing(meth, *args, &block)
if @redis.respond_to?(meth)
DiscourseRedis.ignore_readonly { @redis.public_send(meth, *args, &block) }
else
super
end
end
# Proxy key methods through, but prefix the keys with the namespace
[:append, :blpop, :brpop, :brpoplpush, :decr, :decrby, :exists, :expire, :expireat, :get, :getbit, :getrange, :getset,
:hdel, :hexists, :hget, :hgetall, :hincrby, :hincrbyfloat, :hkeys, :hlen, :hmget, :hmset, :hset, :hsetnx, :hvals, :incr,
:incrby, :incrbyfloat, :lindex, :linsert, :llen, :lpop, :lpush, :lpushx, :lrange, :lrem, :lset, :ltrim,
:mapped_hmset, :mapped_hmget, :mapped_mget, :mapped_mset, :mapped_msetnx, :move, :mset,
:msetnx, :persist, :pexpire, :pexpireat, :psetex, :pttl, :rename, :renamenx, :rpop, :rpoplpush, :rpush, :rpushx, :sadd, :scard,
:sdiff, :set, :setbit, :setex, :setnx, :setrange, :sinter, :sismember, :smembers, :sort, :spop, :srandmember, :srem, :strlen,
:sunion, :ttl, :type, :watch, :zadd, :zcard, :zcount, :zincrby, :zrange, :zrangebyscore, :zrank, :zrem, :zremrangebyrank,
:zremrangebyscore, :zrevrange, :zrevrangebyscore, :zrevrank].each do |m|
define_method m do |*args|
args[0] = "#{namespace}:#{args[0]}" if @namespace
DiscourseRedis.ignore_readonly { @redis.public_send(m, *args) }
end
end
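# With namespacing on, callers never see the prefix. For example, with a
# current site of "default" (hypothetical):
#   Discourse.redis.set("foo", 1) # stores "default:foo"
#   Discourse.redis.get("foo")    # => "1"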
def mget(*args)
args.map! { |a| "#{namespace}:#{a}" } if @namespace
DiscourseRedis.ignore_readonly { @redis.mget(*args) }
end
def del(k)
DiscourseRedis.ignore_readonly do
k = "#{namespace}:#{k}" if @namespace
@redis.del k
end
end
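# Namespaced wrapper around SCAN: the match pattern is prefixed on the way
# in and the prefix is stripped from each key on the way out, e.g.
#   Discourse.redis.scan_each(match: "report:*") { |key| puts key }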
def scan_each(options = {}, &block)
DiscourseRedis.ignore_readonly do
match = options[:match].presence || '*'
# Work on a copy so we don't mutate the caller's options hash.
options = options.merge(match: @namespace ? "#{namespace}:#{match}" : match)
if block
@redis.scan_each(options) do |key|
key = remove_namespace(key) if @namespace
block.call(key)
end
else
@redis.scan_each(options).map do |key|
key = remove_namespace(key) if @namespace
key
end
end
end
end
def keys(pattern = nil)
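# NOTE: redis KEYS is O(n) over the entire keyspace; prefer scan_each on
# large databases.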
DiscourseRedis.ignore_readonly do
pattern ||= '*'
pattern = "#{namespace}:#{pattern}" if @namespace
keys = @redis.keys(pattern)
if @namespace
len = namespace.length + 1
keys.map! { |k| k[len..-1] }
end
keys
end
end
def delete_prefixed(prefix)
DiscourseRedis.ignore_readonly do
keys("#{prefix}*").each { |k| Discourse.redis.del(k) }
end
end
def flushdb
DiscourseRedis.ignore_readonly do
keys.each { |k| del(k) }
end
end
def reconnect
@redis._client.reconnect
end
def namespace_key(key)
if @namespace
"#{namespace}:#{key}"
else
key
end
end
def namespace
RailsMultisite::ConnectionManagement.current_db
end
def self.namespace
Rails.logger.warn("DiscourseRedis.namespace is going to be deprecated, do not use it!")
RailsMultisite::ConnectionManagement.current_db
end
def self.new_redis_store
Cache.new
end
private
def remove_namespace(key)
key[(namespace.length + 1)..-1]
end
end