Throwing an exception while clearing the bindings when a
cluster-connection is closed short-circuits the clearing (and closing)
process. This commit fixes that by simply logging the failure to clear
and continuing on.
No new tests are added with this commit. It relies on existing tests.
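A minimal sketch of the log-and-continue approach, using hypothetical,
simplified types (the broker's actual members differ):

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical, simplified stand-ins for the broker's internals.
interface Binding {
   String getName();
}

class BindingCleaner {
   private static final Logger logger = Logger.getLogger(BindingCleaner.class.getName());

   void clearBindings(List<Binding> bindings) {
      for (Binding binding : bindings) {
         try {
            clearBinding(binding);
         } catch (Exception e) {
            // An exception here used to short-circuit the whole clear/close
            // sequence; now it is only logged and the loop continues.
            logger.log(Level.WARNING, "Failed to clear binding " + binding.getName() + "; continuing", e);
         }
      }
   }

   private void clearBinding(Binding binding) throws Exception {
      // Placeholder for the real removal logic.
   }
}
```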
PriorityLinkedList has multiple sub-lists. Before this commit,
PriorityLinkedList::setNodeStore would set the same node store across
all of the sub-lists.
When removeWithID was called for an item on list[0], the remove from
list[4] would always succeed first. This operation would work correctly
most of the time, except when the tail and head were being used. Many
NullPointerExceptions would be seen while iterating on the list for
remove operations, and the navigation would be completely broken.
A test was added to PriorityLinkedListTest to make sure the correct
lists were used; however, I was not able to reproduce the NPE condition
in that test.
AccumulatedInPageSoakTest reproduced the exact condition for the NPE
when significant load is used.
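To illustrate the shape of the problem, here is a minimal, hypothetical
sketch (none of these types match the real PriorityLinkedList
internals): when every sub-list shares one node store, an ID lookup can
resolve to a node that physically lives in a different priority level,
so each sub-list needs its own store.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Hypothetical, simplified sketch; the real Artemis types differ.
class PriorityList<E> {
   private final LinkedList<E>[] levels; // one sub-list per priority
   private final Map<Long, E>[] stores;  // one ID store per sub-list

   @SuppressWarnings("unchecked")
   PriorityList(int priorities) {
      levels = new LinkedList[priorities];
      stores = new HashMap[priorities];
      for (int i = 0; i < priorities; i++) {
         levels[i] = new LinkedList<>();
         // The bug amounted to sharing a single store across all levels;
         // a removeWithID on one priority could then match a node that
         // belonged to another.
         stores[i] = new HashMap<>();
      }
   }

   void add(int priority, long id, E element) {
      levels[priority].add(element);
      stores[priority].put(id, element);
   }

   boolean removeWithID(int priority, long id) {
      E element = stores[priority].remove(id);
      return element != null && levels[priority].remove(element);
   }
}
```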
We've traditionally used org.apache.activemq.artemis.utils.Base64 for
Base64 encoding/decoding. This implementation is based on public domain
code from http://iharder.net/base64.
In Java 8, java.util.Base64 was introduced. I assumed we hadn't
switched to this implementation for performance reasons, so I created a
simple JMH-based test to compare the two implementations, and it
appears to me that java.util.Base64 is significantly faster than our
current implementation. Using the JDK's class will simplify our code
and improve performance. Also, it should be 100% backwards compatible
since Base64 encoding/decoding is standardized.
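For reference, a comparison along those lines can be as small as the
following sketch (the Artemis method name is assumed from the
iharder-derived API; absolute numbers will vary by payload size and
JVM):

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public class Base64Benchmark {

   private byte[] input;

   @Setup
   public void setup() {
      input = new byte[1024];
      new Random(42).nextBytes(input);
   }

   @Benchmark
   public String jdkBase64() {
      return java.util.Base64.getEncoder().encodeToString(input);
   }

   @Benchmark
   public String artemisBase64() {
      // encodeBytes is the entry point in the iharder-derived code; the
      // exact name in the Artemis copy is assumed here.
      return org.apache.activemq.artemis.utils.Base64.encodeBytes(input);
   }
}
```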
When an AMQP message is sent over a cluster bridge it is embedded into a
Core message. If the size of the AMQP message is just under the
minLargeMessageSize then the Core message in which the AMQP message is
embedded will become a large message. Then, on the bridge target, when
the embedded AMQP message is extracted from the large Core message it
will not be considered "large." In this situation the file for the large
Core message will leak.
Thanks to Erwin Dondorp for the test. I renamed and refactored it a bit,
but the fundamentals came from Erwin.
For embedded use-cases the Micrometer MeterRegistry may be passed in
from the application and used for other meters unrelated to the broker.
Therefore, the broker should not change the config of the MeterRegistry
(e.g. by adding common tags to all meters) as the config change(s) may
be incompatible with the needs of the embedding application.
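As a hedged illustration using Micrometer's standard API (not the
broker's actual wiring), a config change on a shared registry leaks
into every meter the application registers afterwards:

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class SharedRegistryExample {
   public static void main(String[] args) {
      MeterRegistry registry = new SimpleMeterRegistry();

      // If the broker did this on a registry passed in by the application...
      registry.config().commonTags("broker", "artemis-primary");

      // ...then meters that have nothing to do with the broker get tagged
      // too, which the embedding application may not want.
      registry.counter("app.orders.processed").increment();
      System.out.println(registry.get("app.orders.processed").counter().getId().getTags());
   }
}
```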
Allow the client ID to be configured on normal bridges as well as
cluster-connection bridges. This makes the bridge connection easier to
identify on the target broker.
This is a small usability improvement for management whereby
invocations of some operations no longer require JSON boilerplate. It
impacts the following operations on the ActiveMQServerControl:
- listConnections
- listSessions
- listAddresses
- listQueues
- listConsumers
- listProducers
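As a hedged illustration (the no-argument overload is assumed from the
description above, not quoted from the change), an invocation that
previously needed a JSON filter string plus paging arguments becomes a
plain call:

```java
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;

class ListingExample {
   String listAll(ActiveMQServerControl serverControl) throws Exception {
      // Before: even "list everything" required JSON filter boilerplate
      // plus paging arguments.
      String withBoilerplate = serverControl.listConnections(
         "{\"field\":\"\",\"operation\":\"\",\"value\":\"\"}", 1, 100);
      System.out.println(withBoilerplate);

      // After: an assumed no-argument overload drops the boilerplate.
      return serverControl.listConnections();
   }
}
```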
Send operations should ignore replication, while the ack of the message
should wait for a round trip in replication. That allows us to ack the
message faster and still have consistency with its replica.
Say you use mirroring and journal replication combined. The mirror
target will wait for a round trip on the replica before sends are
completed. It is now possible to skip that round trip with an option
added to Configuration#mirrorReplicaSync.
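In configuration terms this could look like the following sketch (the
setter name is inferred from the property mentioned above and may
differ):

```java
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

class MirrorReplicaSyncExample {
   Configuration configure() {
      Configuration config = new ConfigurationImpl();
      // Inferred setter for Configuration#mirrorReplicaSync: with the sync
      // disabled, sends on the mirror target no longer wait for the
      // replication round trip, while acks still do.
      config.setMirrorReplicaSync(false);
      return config;
   }
}
```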
In some setups, there could be a few hundred thousand queues that are
created due to many consumers that are connecting. However, most of
these are empty and stay empty for the entire day since there aren't
necessarily messages to be sent. The 8K intermediateMessageReferences
instantiates a 64KB buffer (an Object[] of 8,192 references at 8 bytes
each). This means we have a large allocation and live heap that
ultimately remain empty for almost the entire day.
In this commit, we introduce initial-queue-buffer-size, which defaults
to the current value of 8192. It can be set programmatically via
QueueConfiguration#setInitialQueueBufferSize(int).
Note that this must be a positive power of 2.
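For example (a hedged sketch; the fluent setter follows
QueueConfiguration's usual style), a deployment expecting mostly-empty
queues could start small:

```java
import org.apache.activemq.artemis.api.core.QueueConfiguration;

class SmallBufferExample {
   QueueConfiguration configure() {
      // 128 is a positive power of 2; since this only sets the *initial*
      // size, the buffer can still grow for queues that actually fill up,
      // while mostly-empty queues no longer pin a 64KB Object[] each.
      return new QueueConfiguration("my.queue")
         .setInitialQueueBufferSize(128);
   }
}
```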
The test in this commit was distilled down from a much more complex
integration test that rarely reproduced the problem. It is short and
sweet and reproduces the problem every time.
The problem exists in the iterator's `remove()` method where it uses
`index` instead of `i` when calculating a new highest priority.
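Reduced to a hypothetical sketch (names follow the description above,
not the actual file):

```java
class RemoveSketch {
   private java.util.List<Object>[] levels; // one list per priority
   private int highestPriority;
   private int index; // iterator field from the description above

   // After removing the last element at the current level, the scan for
   // the new highest priority must use the loop variable i, not the
   // field index.
   void recalculateHighestPriority() {
      for (int i = levels.length - 1; i >= 0; i--) {
         if (!levels[i].isEmpty()) {
            highestPriority = i; // the bug: highestPriority = index;
            break;
         }
      }
   }
}
```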
When counting messages with provided filters and grouping, use the
getObjectPropertyForFilter API, which translates values from AMQP
message annotations that would otherwise be missed by the filters.
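A hedged sketch of the accessor in use (the surrounding counting code
is paraphrased, not quoted):

```java
import org.apache.activemq.artemis.api.core.Message;
import org.apache.activemq.artemis.api.core.SimpleString;

class GroupCountSketch {
   Object groupKey(Message message, SimpleString groupByProperty) {
      // Read the grouping key through the filter-aware accessor so values
      // carried as AMQP message annotations are translated rather than
      // silently missed by a plain property lookup.
      return message.getObjectPropertyForFilter(groupByProperty);
   }
}
```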
For regular messages it's quite obvious when the message is leaving the queue, but for paged messages it becomes a challenge. We should just ignore the update for paged messages.
This commit fixes 3 distinct issues with bridge configuration encoding:
- The encoding size was calculated incorrectly in three ways:
- Using 0 instead of `DataConstants.SIZE_NULL` when no transformer is
defined resulting in a calculated size discrepancy of -1.
- Using `DataConstants.INT` instead of `DataConstants.SIZE_INT` for
the number of transformer properties resulting in a calculated size
discrepancy of +2 when using a configuration with a transformer.
- Using 0 or `sizeOfNullableInteger` instead of
`DataConstants.SIZE_INT` for the number of static connectors
resulting in a variable calculated size discrepancy of either -4 or
+1 respectively.
- Encoding was using `writeString` instead of `writeNullableString` for
the name of the transformer class resulting in an actual buffer size
discrepancy of -1.
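Condensed into a hedged sketch (DataConstants is the broker's real
constants class; the surrounding encoder code is paraphrased):

```java
import org.apache.activemq.artemis.utils.DataConstants;

class EncodingSizeSketch {
   int transformerSectionSize(String transformerClassName) {
      int size = 0;
      if (transformerClassName == null) {
         // Was 0 instead of SIZE_NULL: calculated size off by -1.
         size += DataConstants.SIZE_NULL;
      }
      // Was DataConstants.INT (a type marker, not a field width):
      // calculated size off by +2 whenever a transformer is configured.
      size += DataConstants.SIZE_INT; // number of transformer properties
      // Was 0 or sizeOfNullableInteger: off by -4 or +1 respectively.
      size += DataConstants.SIZE_INT; // number of static connectors
      return size;
   }
}
```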
Aside from these fixes this commit also adds new tests specifically for
encoding & decoding, including a test to verify known bytes during
encoding.
Lastly, it's worth noting that this *won't fix* bad data that was
already stored to disk by an older broker or bad data that comes over
the wire from an older broker (e.g. if an older primary broker is
paired with a newer backup).