Calling close multiple times on ServerConsumer can result in multiple
notifications being routed around the cluster, causing the cluster
topology info to become skewed. This affects a number of components,
such as message redistribution and metrics, and can eventually cause an
OOM if multiple queues are redistributing at the same time.
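A minimal sketch of an idempotent close guard, assuming an AtomicBoolean flag (illustrative, not the actual ServerConsumerImpl code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class IdempotentConsumer {

   private final AtomicBoolean closed = new AtomicBoolean(false);

   public void close() {
      // Only the first caller wins; later calls return without
      // emitting another cluster notification.
      if (!closed.compareAndSet(false, true)) {
         return;
      }
      sendConsumerClosedNotification();
   }

   private void sendConsumerClosedNotification() {
      // In the broker this would route a management notification around
      // the cluster; here it is just a placeholder.
      System.out.println("CONSUMER_CLOSED notification sent once");
   }
}
```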
After sending pages, the thread holds the storage manager write lock
and sends the synchronization-finished packet to the backup node using
the parent thread pool. At the same time, the thread pool is full
because its threads are waiting for the storage manager read lock to
write a page or journal record, leading to a replication start failure.
Here we use the IO executor to send the replication packet, fixing the
thread pool starvation problem.
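A minimal sketch of the fix, assuming a dedicated IO executor separate from the shared pool; the names below are illustrative, not the broker's actual API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReplicationSender {

   private final ExecutorService ioExecutor = Executors.newSingleThreadExecutor();

   public void finishSynchronization() {
      // Do NOT use the parent/shared pool here: its threads may all be
      // parked waiting for the storage manager read lock.
      ioExecutor.execute(this::sendSynchronizationDonePacket);
   }

   private void sendSynchronizationDonePacket() {
      System.out.println("SYNCHRONIZATION_DONE sent to backup");
   }
}
```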
Quorum voting is used by both the live and the backup to decide what to do if a replication connection is disconnected.
Basically, the server will request each live server in the cluster to vote as to whether it thinks the server it is replicating to or from is still alive.
The time for which the quorum manager will wait for the quorum vote response is now configurable; previously it was hardcoded at 30 seconds.
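A hedged sketch of a configurable vote wait, assuming a latch-based wait and a quorumVoteWaitSeconds setting (illustrative names, not the real QuorumManager internals):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class QuorumVoteWait {

   // previously a hardcoded constant (30); now read from configuration
   private final long quorumVoteWaitSeconds;
   private final CountDownLatch votesIn;

   public QuorumVoteWait(long quorumVoteWaitSeconds, int expectedVotes) {
      this.quorumVoteWaitSeconds = quorumVoteWaitSeconds;
      this.votesIn = new CountDownLatch(expectedVotes);
   }

   public void voteReceived() {
      votesIn.countDown();
   }

   public boolean awaitVotes() throws InterruptedException {
      // returns false if the configured wait elapses before all votes arrive
      return votesIn.await(quorumVoteWaitSeconds, TimeUnit.SECONDS);
   }
}
```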
Setting "default-max-consumers" in an address-setting in broker.xml
has no effect on the "max-consumers" property of queues on matching addresses.
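A minimal sketch of the intended behaviour, with simplified stand-ins for the broker's address-settings and queue configuration types:

```java
public class MaxConsumersResolver {

   public static int resolveMaxConsumers(Integer queueMaxConsumers,
                                         int defaultMaxConsumers) {
      // null means "not set on the queue": fall back to the address-setting
      return queueMaxConsumers != null ? queueMaxConsumers : defaultMaxConsumers;
   }

   public static void main(String[] args) {
      System.out.println(resolveMaxConsumers(null, 10));  // 10 (inherited)
      System.out.println(resolveMaxConsumers(5, 10));     // 5  (explicit)
   }
}
```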
Tests added as part of a previous commit
This closes #2107
1. Add test cases to verify the issue and the fix; the tests also verify the same behavior using the CORE, OPENWIRE and AMQP JMS clients.
2. Update the Core client to check for the queue before creating it, with sharedQueue following the same logic as createQueue (see the sketch after this list).
3. Update ServerSessionPacketHandler to handle packets from old clients, implementing the same fix server side for older clients.
4. Correct the AMQP protocol so the correct error code is returned on a security exception, allowing AMQP JMS to correctly throw JMSSecurityException.
5. Correct the AMQP protocol to check whether the queue exists before creating it.
6. Correct the OpenWire protocol to check whether the address exists before creating it.
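A hedged sketch of the shared check-before-create logic; the queues map and method names are illustrative placeholders for the broker's lookup and creation calls:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueueCreator {

   private final Map<String, Object> queues = new ConcurrentHashMap<>();

   public void createQueueIfAbsent(String name) {
      // computeIfAbsent makes the check-then-create race-free for this map;
      // the broker performs the equivalent check under its own locking
      queues.computeIfAbsent(name, n -> new Object());
   }
}
```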
When database persistence is used without the shared store option,
Artemis chooses InVMNodeManager, which does not provide
the same behaviour as FileLockNodeManager.
Replace guava Preconditions with artemis Preconditions
Replace guava Predicate with java Predicate
Replace guava Ordering with java Comparator
Replace guava Immutable collections with ArrayList/Set, then wrap with unmodifiable views (illustrated below)
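An illustrative before/after of these replacements using JDK equivalents (Objects.requireNonNull stands in here for the artemis Preconditions helper):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Objects;
import java.util.function.Predicate;

public class GuavaToJdk {

   public static void main(String[] args) {
      // guava Preconditions.checkNotNull -> artemis Preconditions
      // (requireNonNull shown as a stand-in)
      String name = Objects.requireNonNull("broker", "name must not be null");

      // guava Predicate -> java.util.function.Predicate
      Predicate<String> nonEmpty = s -> !s.isEmpty();

      // guava Ordering.natural() -> Comparator.naturalOrder()
      Comparator<String> order = Comparator.naturalOrder();

      // guava ImmutableList -> ArrayList wrapped with unmodifiableList
      List<String> items = Collections.unmodifiableList(new ArrayList<>(Arrays.asList(name)));

      System.out.println(nonEmpty.test(name) + " " + order.compare("a", "b") + " " + items);
   }
}
```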
PageCursorProviderImpl is not handling any pending cleanup tasks
on stop, leaving paging enabled because the remaining pages are never
cleaned up.
PagingStoreImpl is now responsible for triggering the flush of pending
tasks on PageCursorProviderImpl before stopping it, and for trying to
execute any remaining tasks on the owned common executor before
shutting it down.
It fixes testTopicsWithNonDurableSubscription.
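A hedged sketch of the stop ordering described above; names are illustrative, not PagingStoreImpl's real fields:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PagingStoreStop {

   private final ExecutorService commonExecutor = Executors.newSingleThreadExecutor();

   public void stop() throws InterruptedException {
      flushPendingCleanupTasks();   // run outstanding page cleanups first
      commonExecutor.shutdown();    // then stop accepting new tasks
      // give remaining queued tasks a chance to finish before giving up
      if (!commonExecutor.awaitTermination(10, TimeUnit.SECONDS)) {
         commonExecutor.shutdownNow();
      }
   }

   private void flushPendingCleanupTasks() {
      // in the broker this asks PageCursorProviderImpl to complete pending
      // page cleanups so paging is not left enabled after stop
   }
}
```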
NotificationActiveMQServerPlugin and LoggingActiveMQServerPlugin are
implementing the deprecated version of
ActiveMQServerPlugin::messageExpired, which is not called by the
new version of the method or by any other part of the code.
This fixes the test org.apache.activemq.artemis.tests.integration.management.NotificationTest#testMessageExpired
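A minimal sketch of the failure mode, with simplified stand-in signatures for ActiveMQServerPlugin:

```java
public interface MessageExpiredPlugin {

   @Deprecated
   default void messageExpired(String messageId) {
      // old overload: the broker no longer calls this, and the new overload
      // does not delegate to it, so overriding only this method is dead code
   }

   default void messageExpired(String messageId, String consumerId) {
      // new overload: this is the one the broker invokes, so plugins such as
      // NotificationActiveMQServerPlugin must override this version
   }
}
```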
It avoids using the system clock for the lock logic by relying on
the DBMS time instead.
It also contains several improvements to the JDBC error handling
and improved observability thanks to debug logs.
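A hedged sketch of reading time from the DBMS instead of the local clock; the query is illustrative and the exact SQL varies by DBMS (e.g. Oracle needs FROM DUAL):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class DbTimeLock {

   // read "now" from the DBMS so all contending nodes share one clock
   public static long dbCurrentTimeMillis(Connection connection) throws SQLException {
      try (PreparedStatement ps = connection.prepareStatement("SELECT CURRENT_TIMESTAMP");
           ResultSet rs = ps.executeQuery()) {
         rs.next();
         Timestamp now = rs.getTimestamp(1);
         return now.getTime();
      }
   }
}
```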
largeMessagesFactory::newBuffer could create a pooled direct ByteBuffer
that would never be released back into the factory pool: using a heap
ByteBuffer performs more internal copies, but makes the buffer simpler
to garbage collect.
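A small sketch of the trade-off, using plain java.nio buffers:

```java
import java.nio.ByteBuffer;

public class BufferChoice {

   public static ByteBuffer newLargeMessageBuffer(int size) {
      // Heap allocation: extra copies on I/O, but safe to drop on the floor,
      // since ordinary GC reclaims it. A pooled direct buffer
      // (ByteBuffer.allocateDirect) would leak pool capacity if never
      // explicitly released back to its pool.
      return ByteBuffer.allocate(size);
   }
}
```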
QuorumFailOverTest.testQuorumVotingLiveNotDead fails
because the quorum vote takes longer to finish than
the test expects.
(The test used to pass until the ARTEMIS-1763 commit)
The previous commit for this feature wasn't using the row count query
ResultSet.
The mechanics have been changed to allow the row count query
to fail, because DROP and CREATE aren't transactional and immediate
in most DBMSs.
It includes a test that stresses these mechanics when used with DBMSs
like DB2 10.5 and Oracle 12c.
Additional checks and logs have been added to trace each step.
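A hedged sketch of the tolerant row-count check; the table name and retry policy are illustrative (the table name is assumed trusted):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TolerantRowCount {

   public static int countRowsWithRetry(Connection connection, String table, int attempts)
         throws InterruptedException {
      for (int i = 0; i < attempts; i++) {
         try (Statement st = connection.createStatement();
              ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM " + table)) {
            rs.next();
            return rs.getInt(1);   // use the ResultSet, unlike the previous commit
         } catch (SQLException maybeTableNotReadyYet) {
            // DROP/CREATE may still be settling: back off and retry
            Thread.sleep(100);
         }
      }
      return -1; // caller treats this as "count unavailable"
   }
}
```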
JdbcNodeManager is configured to use the same network timeout
value as the journal and to validate all the timeout values
relevant to correct HA behaviour.
The JDBC Connection leaks in:
- JDBCFileUtils::getDBFileDriver(DataSource, SQLProvider)
- SharedStoreBackupActivation.FailbackChecker::run on a failed awaitLiveStatus
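A minimal sketch of the leak-fix pattern: acquire the JDBC Connection in try-with-resources so it is closed on every path, including the failure paths listed above:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class NoLeak {

   public static String readDriverName(DataSource dataSource) throws SQLException {
      try (Connection connection = dataSource.getConnection()) {
         // connection is closed even if metadata access throws
         return connection.getMetaData().getDriverName();
      }
   }
}
```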
Expose a method to return the current mappings of groups to consumers
Expose methods to reset (remove) a specific group mapping from groupID to consumer
Expose methods to reset (remove) all group mappings (sketched below)
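A hedged sketch of this management surface; method names are illustrative, not necessarily the exact QueueControl signatures:

```java
import java.util.Map;

public interface GroupManagement {

   // current mapping of group IDs to the consumer currently owning each group
   Map<String, String> getGroups();

   // unpin one group so its next message can pick a new consumer
   void resetGroup(String groupID);

   // unpin every group on the queue
   void resetAllGroups();
}
```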
messageAcknowledged plugin callback methods:
Knowing the consumer that expired or acked a message (if available) is
useful, but right now a message reference only contains a consumer ID,
which by itself is not unique, so the actual consumer needs to be passed.
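A hedged sketch with simplified stand-in types, showing why the consumer object itself (possibly null) must be passed:

```java
public interface AckPlugin {

   final class Consumer {
      final long sessionID;
      final long consumerID;   // unique only within its sessionID
      Consumer(long sessionID, long consumerID) {
         this.sessionID = sessionID;
         this.consumerID = consumerID;
      }
   }

   // consumer may be null (e.g. a management-driven ack or expiry)
   void messageAcknowledged(String messageID, Consumer consumer);
}
```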
When finding out whether a connector belongs to a target node, the
whole parameter map is compared, which is not necessary. Also, the best
place to interpret a connector is the corresponding remoting
connection, so the check is delegated to it (e.g. InVMConnection knows
whether the connector belongs to a target node by checking its
serverID only; the Netty ones only need to match host and port,
understanding that localhost and 127.0.0.1 are the same thing).
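A hedged sketch of delegating the "same target?" check to the connection type that understands its own parameters; the classes below are simplified stand-ins for the InVM and Netty connections:

```java
import java.util.Map;

public interface TargetAware {
   boolean isSameTarget(Map<String, Object> connectorParams);
}

class InVMLike implements TargetAware {
   final int serverID;
   InVMLike(int serverID) { this.serverID = serverID; }

   @Override
   public boolean isSameTarget(Map<String, Object> params) {
      // only the serverID matters for in-VM connectors
      return Integer.valueOf(serverID).equals(params.get("serverId"));
   }
}

class NettyLike implements TargetAware {
   final String host; final int port;
   NettyLike(String host, int port) { this.host = host; this.port = port; }

   private static String canonical(String h) {
      // treat localhost and 127.0.0.1 as the same host
      return "localhost".equals(h) ? "127.0.0.1" : h;
   }

   @Override
   public boolean isSameTarget(Map<String, Object> params) {
      return canonical(host).equals(canonical((String) params.get("host")))
         && Integer.valueOf(port).equals(params.get("port"));
   }
}
```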
The queue metrics were being decremented improperly on iteration over
the cancelled scheduled messages, because the fromMessageReferences flag
was not set to false. Setting the flag to false skips the metrics
update, which is what we want: the scheduled messages were never added
to the message references in the first place, so the metrics don't need
updating.
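A hedged sketch of the flag's effect, with simplified names mirroring the description above:

```java
public class QueueMetricsSketch {

   private long messageCount;

   void removeReference(Object ref, boolean fromMessageReferences) {
      if (fromMessageReferences) {
         messageCount--;   // only decrement for refs actually in the list
      }
      // cancelled scheduled refs pass fromMessageReferences = false,
      // so they never touch the counters
   }
}
```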