Support using group buckets on a queue for better local group scaling
Support disabling message groups on a queue
Support rebalancing groups when a consumer is added.
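A minimal conceptual sketch of the bucketing idea, assuming only what is described above (class and method names are hypothetical, not the broker's implementation): group ids are hashed into a fixed number of buckets, the bucket rather than the individual group is pinned to a consumer, and adding a consumer clears the assignments so they can be rebalanced.

    import java.util.ArrayList;
    import java.util.List;

    // Conceptual sketch only: hash each group id into one of N buckets and pin the
    // bucket (not the individual group) to a consumer, bounding the grouping state.
    public class GroupBucketSketch {

        private final String[] bucketOwners;            // consumer id assigned to each bucket
        private final List<String> consumers = new ArrayList<>();

        public GroupBucketSketch(int groupBuckets) {
            this.bucketOwners = new String[groupBuckets];
        }

        public void addConsumer(String consumerId) {
            consumers.add(consumerId);
            rebalance();                                // re-spread buckets when a consumer joins
        }

        public String consumerFor(String groupId) {
            int bucket = (groupId.hashCode() & Integer.MAX_VALUE) % bucketOwners.length;
            if (bucketOwners[bucket] == null && !consumers.isEmpty()) {
                bucketOwners[bucket] = consumers.get(bucket % consumers.size());
            }
            return bucketOwners[bucket];
        }

        private void rebalance() {
            // "Rebalancing" here simply means forgetting the old assignments so the
            // buckets are redistributed across the current set of consumers.
            for (int i = 0; i < bucketOwners.length; i++) {
                bucketOwners[i] = null;
            }
        }
    }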
This is simply fixing the example under examples/features/standard/divert
Other tests are passing.
No additional tests are needed as the example in this case acts as a test.
Push the isDirectDeliver method from the Netty impl to the Connection interface
Add support to InVMConnection for the isDirectDeliver flag, settable via config and defaulting to false to keep the current default behavior.
Extend DirectDeliverTest to check InVM as well.
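A rough sketch of the shape of this change, using simplified stand-ins rather than the real Artemis classes: isDirectDeliver() lives on the Connection abstraction, and the in-VM implementation exposes it as a flag that would be fed from configuration and defaults to false.

    // Conceptual stand-ins, not the actual Artemis interfaces.
    interface Connection {
        boolean isDirectDeliver();
    }

    class InVMConnection implements Connection {

        private final boolean directDeliver;

        // 'directDeliver' would come from acceptor configuration; defaulting to
        // false preserves the previous in-VM behavior.
        InVMConnection(boolean directDeliver) {
            this.directDeliver = directDeliver;
        }

        @Override
        public boolean isDirectDeliver() {
            return directDeliver;
        }
    }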
Every checkProperties();<usage of this.properties> pattern has been
replaced by an atomic checkProperties().<usage of returned properties>
to help both performance and consistency.
The cleanup is now performed inside CoreTypedProperties, both
for performance reasons (avoiding repeated lock/unlock)
and for consistency, given that the operation is now atomic.
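An illustrative sketch of the pattern change, with hypothetical class and field names (only checkProperties comes from the text): the check returns the properties instance it validated, so the caller reads from exactly the instance that was checked instead of re-reading this.properties.

    // Sketch only. Before: checkProperties(); then a separate read of this.properties,
    // which may no longer be the instance the check saw.
    // After: checkProperties() returns the validated instance and the caller uses it.
    final class MessageSketch {

        private TypedPropertiesSketch properties;

        Object getProperty(String key) {
            return checkProperties().get(key);   // check and read use the same instance
        }

        private synchronized TypedPropertiesSketch checkProperties() {
            if (properties == null) {
                properties = new TypedPropertiesSketch();
            }
            return properties;                   // caller works on the returned instance
        }

        static final class TypedPropertiesSketch {
            Object get(String key) { return null; }   // placeholder lookup
        }
    }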
In Page.write(final PagedMessage message), if the page file is closed
the method returns silently. The caller has no way to know whether the
message was paged to the file or not. It should throw an exception so
that the caller can handle it correctly.
This causes random failures in PagingTest#testExpireLargeMessageOnPaging().
The test shows that when the server stops it closes the page file.
In the meantime a message is expired to the expiry queue, and if
the expiry queue is in paging mode, the message goes to Page.write() and
the call returns without any error. The result is that the message is
removed from the original queue but never added to the expiry queue.
If we throw an exception here the expiration fails and the message
is not removed from the original queue. The next time the broker is
started, the message is reloaded and expired again, so no message is lost.
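A simplified sketch of the intended behavior (the class shape and the exception choice are illustrative, not the real Page code): writing to a closed page file fails loudly so the caller can leave the message on the original queue.

    // Sketch only: fail loudly instead of returning silently on a closed page file.
    class PageSketch {

        private volatile boolean fileOpen;

        void write(Object pagedMessage) {
            if (!fileOpen) {
                // Previously: return;  -- the caller assumed the message was paged.
                throw new IllegalStateException("Page file is closed, cannot page message");
            }
            // ... append the message to the page file ...
        }
    }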
Add consumer priority support
Includes a refactor of consumer iteration in QueueImpl into its own logical class to make this possible (see the sketch below).
Add OpenWire JMS Test - taken from ActiveMQ5
Add Core JMS Test
Add AMQP Test
Add Docs
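A conceptual sketch of priority-aware consumer iteration, assuming only the behavior described above (names are hypothetical, not the QueueImpl refactor): consumers are ordered by priority and delivery round-robins within the highest priority level that currently has a consumer.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Conceptual sketch: lower-priority consumers only receive messages when no
    // higher-priority consumer is available.
    class PrioritizedConsumersSketch {

        static final class Consumer {
            final String name;
            final int priority;
            Consumer(String name, int priority) {
                this.name = name;
                this.priority = priority;
            }
        }

        private final List<Consumer> consumers = new ArrayList<>();
        private int roundRobin;

        void add(Consumer consumer) {
            consumers.add(consumer);
            // Keep highest priority first.
            consumers.sort(Comparator.comparingInt((Consumer c) -> c.priority).reversed());
        }

        Consumer next() {
            if (consumers.isEmpty()) {
                return null;
            }
            int top = consumers.get(0).priority;
            List<Consumer> topLevel = new ArrayList<>();
            for (Consumer c : consumers) {
                if (c.priority == top) {
                    topLevel.add(c);
                }
            }
            // Round-robin within the highest priority level only.
            return topLevel.get(Math.floorMod(roundRobin++, topLevel.size()));
        }
    }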
There's a *slight* semantic change with the behavior of the queue query
and binding query to make them consistent with the address query, namely
that they will return the name of the queue and the name of the address
in every case and the returned names will not use the FQQN syntax but
will be parsed to reflect their actual names in the broker.
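A minimal sketch of the name normalization described above, assuming the usual FQQN form address::queue: a query made with an FQQN reports the parsed address and queue names rather than echoing the FQQN back.

    // Sketch of splitting an FQQN into its address and queue parts.
    final class FqqnSketch {

        static String addressPart(String name) {
            int idx = name.indexOf("::");
            return idx < 0 ? name : name.substring(0, idx);
        }

        static String queuePart(String name) {
            int idx = name.indexOf("::");
            return idx < 0 ? name : name.substring(idx + 2);
        }

        public static void main(String[] args) {
            // A queue query for "myAddress::myQueue" would report address "myAddress"
            // and queue "myQueue".
            System.out.println(addressPart("myAddress::myQueue")); // myAddress
            System.out.println(queuePart("myAddress::myQueue"));   // myQueue
        }
    }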
There were two different but nearly identical implementations of
createQueue(). I consolidated these into a single method. There should
be no semantic differences.
When trying to get the bindings for an address the getBindingsForAddress
method will create a Bindings instance if there are no bindings for the
address. This is unnecessary in most circumstances, so use the
lookupBindingsForAddress method instead and check for null.
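A simplified sketch of the calling pattern (the interfaces are reduced to the two method names mentioned above; everything else is illustrative): callers that only need to inspect existing bindings use lookupBindingsForAddress and handle null, so no empty Bindings instance is allocated.

    // Sketch only: prefer the lookup method plus a null check over the creating getter.
    final class BindingsLookupSketch {

        interface Bindings { }

        interface AddressManager {
            Bindings lookupBindingsForAddress(String address);  // may return null
            Bindings getBindingsForAddress(String address);     // creates an instance if absent
        }

        static boolean hasBindings(AddressManager addressManager, String address) {
            Bindings bindings = addressManager.lookupBindingsForAddress(address);
            return bindings != null;                            // nothing allocated when absent
        }
    }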
MULTICAST messages forwarded by a core bridge will not be routed to any
ANYCAST queues and vice-versa. Diverts have the ability to configure how
routing-type is treated. Core bridges now support this same kind of
functionality. By default the bridge does not alter the routing-type of
forwarded messages to maintain compatibility with existing behavior.
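A conceptual sketch of the routing-type handling (the enum values are illustrative, not the bridge's actual configuration model): by default the forwarded message's routing type is passed through unchanged, while an alternative mode strips it so the target address decides.

    // Sketch only: how a bridge could treat the routing type of a forwarded message.
    enum RoutingTypeHandling { PASS, STRIP }

    final class BridgeForwardSketch {

        static String forwardedRoutingType(String originalRoutingType,
                                           RoutingTypeHandling handling) {
            switch (handling) {
                case STRIP:
                    return null;                 // let the receiving address apply its own routing type
                case PASS:
                default:
                    return originalRoutingType;  // default: leave the message untouched
            }
        }
    }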
A large message's pendingRecordID is not accessed atomically, leading
to races that produce records that cannot be found in the
journal for deletion; this in turn causes an NPE that prevents the
pending tasks on the current OperationContextImpl from being cleaned up.
Adding cleanup of those tasks on error and preventing the race
with proper synchronization will both enforce correct cleanup
when something goes wrong and avoid the NPE.
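A conceptual sketch of one way to make the access atomic (not the actual large-message code): the pending record id is held in an atomic field and consumed with getAndSet, so the thread deleting the journal record and the cleanup path can never observe a half-published or already-consumed id.

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch only: publish and consume the pending record id atomically.
    final class PendingRecordSketch {

        private static final long NO_PENDING_RECORD = -1;

        private final AtomicLong pendingRecordID = new AtomicLong(NO_PENDING_RECORD);

        void setPendingRecordID(long recordID) {
            pendingRecordID.set(recordID);
        }

        // Atomically takes the id (if any), so two threads can never both attempt
        // to delete the same journal record.
        long consumePendingRecordID() {
            return pendingRecordID.getAndSet(NO_PENDING_RECORD);
        }
    }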
In PagingManagerImpl#getPageStore() the operations on the map 'stores'
are not synchronized and it's possible that more than one paging store is
created for one address.
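A conceptual sketch of closing the race (not the actual PagingManagerImpl code): ConcurrentHashMap.computeIfAbsent creates at most one paging store per address even under concurrent getPageStore calls.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Sketch only: an unsynchronized check-then-put lets two callers each create a
    // store for the same address; computeIfAbsent performs the step atomically.
    final class PageStoreRegistrySketch<S> {

        interface StoreFactory<T> {
            T newStore(String address);
        }

        private final ConcurrentMap<String, S> stores = new ConcurrentHashMap<>();
        private final StoreFactory<S> factory;

        PageStoreRegistrySketch(StoreFactory<S> factory) {
            this.factory = factory;
        }

        S getPageStore(String address) {
            return stores.computeIfAbsent(address, factory::newStore);
        }
    }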
PageSubscriptionImpl::ackTx is already performing a counter update
using the message persistent size: the size can be reused on
PagePosition::setPersistentSize, avoiding querying the page cache just
to compute it.
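A small sketch of the optimization, with simplified types (only the method names ackTx and setPersistentSize come from the text): the persistent size already computed for the counter update is handed to the page position directly instead of being recomputed from the page cache.

    // Sketch only: reuse the size computed for the counter update.
    final class AckSizeSketch {

        static void ackTx(PagePositionSketch position, long persistentSize) {
            updateCounter(persistentSize);              // existing counter update
            position.setPersistentSize(persistentSize); // reuse the value, no page-cache lookup
        }

        static void updateCounter(long persistentSize) {
            // ... add persistentSize to the subscription's counters ...
        }

        static final class PagePositionSketch {
            private long persistentSize;
            void setPersistentSize(long persistentSize) {
                this.persistentSize = persistentSize;
            }
        }
    }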
These improvements were also part of this task:
- Routing is now cached as much as possible.
- A new Runnable is no longer allocated for each individual message,
  since we use the Netty executor to perform delivery (see the sketch below)
https://issues.apache.org/jira/browse/ARTEMIS-2205
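A conceptual sketch of the per-message Runnable avoidance described above (names and types are illustrative): messages are queued and a single reusable drain task is scheduled on the executor only when one is not already pending.

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.Executor;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch only: one reusable drain task instead of a Runnable per message.
    final class BatchedDeliverySketch {

        private final Executor executor;                       // e.g. the channel's event loop
        private final Queue<Object> pending = new ConcurrentLinkedQueue<>();
        private final AtomicBoolean drainScheduled = new AtomicBoolean();
        private final Runnable drainTask = this::drain;        // allocated once, reused forever

        BatchedDeliverySketch(Executor executor) {
            this.executor = executor;
        }

        void deliver(Object message) {
            pending.add(message);
            if (drainScheduled.compareAndSet(false, true)) {
                executor.execute(drainTask);                   // no per-message Runnable allocation
            }
        }

        private void drain() {
            drainScheduled.set(false);
            Object message;
            while ((message = pending.poll()) != null) {
                // ... write the message to the connection ...
            }
        }
    }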