If an exception is thrown in the AMQP send path and there is an
active transaction, we should mark the transaction as rollback-only so the
client will see an error when it tries to commit a transaction
that had a failed send.
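A minimal sketch of the idea, assuming the broker's Transaction interface
and its markAsRollbackOnly method; the send call and exception choice here
are illustrative, not the actual send-path code:

    try {
       sendToAMQPLink(tx, message); // hypothetical send on the AMQP send path
    } catch (Exception e) {
       if (tx != null) {
          // poison the active transaction so a later commit fails visibly
          tx.markAsRollbackOnly(new ActiveMQAMQPInternalErrorException(e.getMessage()));
       }
       throw e;
    }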
Throughout the years, the standard mechanism for storing passwords has evolved.
In the beginning, passwords were stored in plaintext. Developers are now
encouraged to leverage adaptive one-way functions to store passwords. Using a
two-way function by default for storing passwords, without a warning, could give
users a false sense of security.
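For contrast, a small JDK-only sketch of an adaptive one-way function:
PBKDF2 with a per-password salt and a tunable iteration count (the work
factor is what makes it adaptive). The parameter values are illustrative:

    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    static String hashPassword(char[] password) throws Exception {
       byte[] salt = new byte[16];
       new SecureRandom().nextBytes(salt);                 // unique salt per password
       PBEKeySpec spec = new PBEKeySpec(password, salt, 310_000, 256); // tunable cost
       byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
             .generateSecret(spec).getEncoded();
       // store salt and hash together; neither can be reversed to the password
       return Base64.getEncoder().encodeToString(salt) + ":"
             + Base64.getEncoder().encodeToString(hash);
    }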
Adds support for WebSocket compression using the Netty server handler to
enable per-message compression and decompression as a transparent layer of
the Netty pipeline.
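A minimal sketch of that layer, using Netty's stock permessage-deflate
handler; the surrounding pipeline setup is illustrative:

    import io.netty.channel.ChannelPipeline;
    import io.netty.handler.codec.http.HttpObjectAggregator;
    import io.netty.handler.codec.http.HttpServerCodec;
    import io.netty.handler.codec.http.websocketx.extensions.compression.WebSocketServerCompressionHandler;

    void configurePipeline(ChannelPipeline pipeline) {
       pipeline.addLast(new HttpServerCodec());
       pipeline.addLast(new HttpObjectAggregator(65536));
       // negotiates permessage-deflate and (de)compresses frames transparently
       pipeline.addLast(new WebSocketServerCompressionHandler());
       // ... WebSocket upgrade and protocol handlers follow
    }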
This is only double testing. Instead of parameterizing with or without
forceNextValue(MAX_INT), I'm just always forcing it. There is no point in
duplicating the test just for this.
When the Core client attempts to create the initial connection to a
broker with initialConnectAttempts > 1, it adheres to retryInterval
but ignores retryIntervalMultiplier and maxRetryInterval. This
commit fixes that so these parameters are taken into account.
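The intended schedule, sketched (the variable names mirror the configuration
parameters; tryConnect is hypothetical and interrupt handling is omitted):

    long interval = retryInterval;
    boolean connected = false;
    for (int attempt = 0; attempt < initialConnectAttempts && !connected; attempt++) {
       connected = tryConnect();
       if (!connected) {
          Thread.sleep(interval);
          // grow the wait, but never past maxRetryInterval
          interval = Math.min((long) (interval * retryIntervalMultiplier), maxRetryInterval);
       }
    }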
When a bytes property is added to an AMQPMessage and the message is then
re-encoded, the encode will fail unless the byte array is first wrapped in an
AMQP Binary, as required by the defined-type allowances of the
ApplicationProperties section of the specification.
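A minimal sketch with proton-j types, showing the required wrapping:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import org.apache.qpid.proton.amqp.Binary;
    import org.apache.qpid.proton.amqp.messaging.ApplicationProperties;

    byte[] bytes = new byte[] {1, 2, 3};
    Map<String, Object> props = new LinkedHashMap<>();
    props.put("payload-id", new Binary(bytes)); // wrap; never the raw byte[]
    ApplicationProperties section = new ApplicationProperties(props);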
There was already some verification at AMQPMirrorControllerSource::invalidTarget,
however that verification failed on the soak test ReplicatedBothNodesMirrorTest,
and a user I was working with also gave me evidence of this happening.
I'm improving the previous verification with a simplification that works in every case.
When converting a large server message to an outgoing STOMP frame, the converter
allows unsafe concurrent access to the large message internals, which leads
to failures on message delivery as the state falls out of sync among the dispatch
threads.
PriorityLinkedList has multiple sub-lists; before this commit
PriorityLinkedList::setNodeStore would set the same node store across all the lists.
When removeWithID was called for an item on list[0], the remove from list[4] would
always succeed first. This worked correctly most of the time, except when tail and
head were in use: many NullPointerExceptions would be seen while iterating on the
list for remove operations, and the navigation would be completely broken.
A test was added to PriorityLinkedListTest to make sure the correct lists are used,
however I was not able to reproduce the NPE condition in that test.
AccumulatedInPageSoakTest reproduces the exact condition for the NPE when
significant load is used.
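The shape of the fix, sketched under assumed names (levels, storeFactory,
and the setNodeStore signature are illustrative): each priority sub-list
gets its own NodeStore, so removeWithID resolves IDs against the right list:

    import java.util.function.Supplier;

    // before: every sub-list shared one NodeStore, so an ID registered via
    // list[0] could be resolved (and removed) through list[4] as well
    public void setNodeStore(Supplier<NodeStore<E>> storeFactory) {
       for (LinkedListImpl<E> level : levels) {
          level.setNodeStore(storeFactory.get()); // one distinct store per priority
       }
    }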
We've traditionally used org.apache.activemq.artemis.utils.Base64 for
Base64 encoding/decoding. This implementation is based on public domain
code from http://iharder.net/base64.
In Java 8 java.util.Base64 was introduced. I assumed we hadn't switched
to this implementation for performance reasons so I created a simple
JMH-based test to compare the two implementations and it appears to me
that java.util.Base64 is significantly faster than our current
implementation. Using the JDK's class will simplify our code and
improve performance. Also, it should be 100% backwards compatible
since Base64 encoding/decoding is standardized.
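The comparison is easy to reproduce with a JMH sketch along these lines
(the buffer size and the Artemis encodeBytes method name are assumptions):

    import java.util.Random;
    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @State(Scope.Benchmark)
    @BenchmarkMode(Mode.Throughput)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    public class Base64Benchmark {
       byte[] input;

       @Setup
       public void setup() {
          input = new byte[1024];
          new Random(42).nextBytes(input); // fixed seed for repeatability
       }

       @Benchmark
       public String jdk() {
          return java.util.Base64.getEncoder().encodeToString(input);
       }

       @Benchmark
       public String artemis() {
          return org.apache.activemq.artemis.utils.Base64.encodeBytes(input);
       }
    }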
When an AMQP message is sent over a cluster bridge it is embedded into a
Core message. If the size of the AMQP message is barely beneath the
minLargeMessageSize then the Core message in which the AMQP message is
embedded will become a large message. Then, on the bridge target, when the
embedded AMQP message is extracted from the large Core message, it will
not be considered "large." In this situation the file for the large Core
message will leak.
Thanks to Erwin Dondorp for the test. I renamed and refactored it a bit,
but the fundamentals came from Erwin.
Allow the client ID to be configured on normal bridges as well as
cluster-connection bridges. This makes the bridge connection easier to
identify on the target broker.
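Programmatically it might look like the following; BridgeConfiguration's
fluent setters are real, while the client-id setter name is an assumption:

    import org.apache.activemq.artemis.core.config.BridgeConfiguration;

    BridgeConfiguration bridge = new BridgeConfiguration()
       .setName("my-bridge")
       .setQueueName("source.queue")
       .setForwardingAddress("target.address")
       .setClientId("bridge-to-dc2"); // hypothetical setter; the ID shows up on the target broker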
When using the replay functionality, the application of filters to
the replayed messages fails to match against AMQP messages because the
message is not scanned when some message values are accessed.
Ensure that on server restart the original priority value assigned to an
AMQP message is used when dispatching durable messages from the store.
The AMQP Header section is scanned if present and the priority value
is recovered in an efficient manner.
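A simpler (and less efficient) sketch of the same recovery with proton-j,
decoding the stored bytes and reading the Header; the default of 4 comes
from the AMQP specification, and loadFromStore is hypothetical:

    import org.apache.qpid.proton.amqp.messaging.Header;
    import org.apache.qpid.proton.message.Message;

    byte[] encoded = loadFromStore(); // hypothetical: the persisted AMQP bytes
    Message message = Message.Factory.create();
    message.decode(encoded, 0, encoded.length);
    Header header = message.getHeader();
    byte priority = (header != null && header.getPriority() != null)
          ? header.getPriority().byteValue()
          : (byte) 4; // AMQP default priority when no Header/priority is present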
AckManager.flush would hold a lock on the AckManager, creating a possible deadlock with the MirrorTarget:
Thread 1:
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AckManager.addRetry(AckManager.java:393)
- waiting to lock <0x00000007990a13e8> (a org.apache.activemq.artemis.protocol.amqp.connect.mirror.AckManager)
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AckManager.ack(AckManager.java:418)
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget.performAck(AMQPMirrorControllerTarget.java:479)
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget.postAcknowledge(AMQPMirrorControllerTarget.java:461)
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget.actualDelivery(AMQPMirrorControllerTarget.java:318)
at org.apache.activemq.artemis.protocol.amqp.proton.ProtonAbstractReceiver.onMessageComplete(ProtonAbstractReceiver.java:361)
Thread 2:
at jdk.internal.misc.Unsafe.park(java.base@11.0.8/Native Method)
- parking to wait for <0x000000079de0af38> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.8/LockSupport.java:234)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(java.base@11.0.8/AbstractQueuedSynchronizer.java:1079)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(java.base@11.0.8/AbstractQueuedSynchronizer.java:1369)
at java.util.concurrent.CountDownLatch.await(java.base@11.0.8/CountDownLatch.java:278)
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AMQPMirrorControllerTarget.flush(AMQPMirrorControllerTarget.java:230)
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AckManager$$Lambda$601/0x00000008005c3040.accept(Unknown Source)
at java.lang.Iterable.forEach(java.base@11.0.8/Iterable.java:75)
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AckManager.flushMirrorTargets(AckManager.java:184)
- locked <0x00000007990a13e8> (a org.apache.activemq.artemis.protocol.amqp.connect.mirror.AckManager)
at org.apache.activemq.artemis.protocol.amqp.connect.mirror.AckManager.initRetry(AckManager.java:162)
Send operations should ignore replication, while the ack of the message should wait a full round trip in replication.
That will allow us to ack the message faster while still keeping it consistent with its replica.
If a user for some reason force-closes the local mirror connection's SNF
consumer, or the actual connection, but hasn't stopped the broker connection
itself, the connection should recover and rebuild. The fix ensures that if
local connections are closed first, local session resources get freed.
When an address consumer explicitly closes or is closed, we should remove the
address binding for that consumer right away instead of waiting for any
configured auto-delete: the demand is gone and we don't need the binding to
stick around any longer.
Say you use mirroring and journal replication combined.
The target will wait a round trip on the replica before sends are completed.
It is now possible to skip that round trip with an option added to Configuration#mirrorReplicaSync.
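A sketch of the opt-out, assuming a boolean setter matching the field named
above (the actual method name may differ):

    import org.apache.activemq.artemis.core.config.Configuration;
    import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

    Configuration config = new ConfigurationImpl();
    // let mirrored sends complete without waiting for the replica round trip
    config.setMirrorReplicaSync(false);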