ORIG message properties like _AMQ_ORIG_ADDRESS are added to messages
during various broker operations (e.g. diverting a message, expiring a
message, etc.). However, if multiple operations try to set these
properties on the same message (e.g. administratively moving a message
which eventually gets sent to a dead-letter address) then important
details can be lost. This is particularly problematic when using
auto-created dead-letter or expiry resources which use filters based on
_AMQ_ORIG_ADDRESS and can lead to message loss.
This commit simply overwrites the existing ORIG properties rather than
preserving them so that the most recent information is available.
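For illustration only, here is a hedged sketch of an expiry queue filtered on _AMQ_ORIG_ADDRESS; the queue/address names are made up and the QueueConfiguration-based setup is just one way such a queue could be created:

```java
import org.apache.activemq.artemis.api.core.QueueConfiguration;
import org.apache.activemq.artemis.api.core.RoutingType;
import org.apache.activemq.artemis.api.core.client.ClientSession;

public class OrigAddressFilterSketch {
   // An expiry queue that only collects messages originally sent to "app.orders".
   // The filter only works if _AMQ_ORIG_ADDRESS holds the most recent origin.
   static void createExpiryQueue(ClientSession session) throws Exception {
      session.createQueue(new QueueConfiguration("EXP.app.orders")
         .setAddress("ExpiryAddress")
         .setRoutingType(RoutingType.MULTICAST)
         .setFilterString("_AMQ_ORIG_ADDRESS = 'app.orders'")
         .setDurable(true));
      // If an earlier divert's value were preserved instead of overwritten, an expired
      // message from "app.orders" could carry a stale _AMQ_ORIG_ADDRESS and miss this filter.
   }
}
```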
- when sending messages to DLQ or Expiry we now use x-opt legal names
- we now support filtering through annotations if using m. as a prefix (see the hedged sketch after this list)
- enabling hyphenated_props: to allow m. as a prefix
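A hedged sketch of what such a filter might look like from a JMS client; the annotation key and the exact selector grammar are assumptions, not taken from this change:

```java
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Queue;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class AnnotationFilterSketch {
   public static void main(String[] args) {
      try (JMSContext context = new ActiveMQConnectionFactory("tcp://localhost:61616").createContext()) {
         Queue dlq = context.createQueue("DLQ");
         // Assumed selector form: "hyphenated_props:" enables hyphenated identifiers and
         // the "m." prefix refers to a message annotation; the annotation key is illustrative.
         String selector = "hyphenated_props:(m.x-opt-ORIG-ADDRESS = 'app.orders')";
         JMSConsumer consumer = context.createConsumer(dlq, selector);
         // consumer.receive(...) would then only see messages whose annotation matches.
      }
   }
}
```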
DivertBindings are now properly cleaned up when a queue binding that
matches the divert is removed. The correct key is now used to remove
the queue address from the set and the correct address is now used to
remove the remote consumer.
The test fails because the primary server is killed by the crash and the backup server is killed
by tearDown before the ScaleDownHandler can kick in. This commit adds a wait method to allow
the ScaleDownHandler to finish before the test completes.
AmqpExpiredMessageTest expires messages, so the message count will eventually reach 0.
It is therefore invalid to assertEquals(1, queue.getMessageCount()), as the count will eventually be 0.
This should reduce intermittent failures on ClusterConnectionControlTest.
To validate this, run ClusterConnectionControlTest::testNotifications in a loop;
eventually you will see the test taking longer to shut down the executor because the call can block.
This won't be an issue on a real server (production system);
however, in the testsuite or when embedded it can cause issues
if the same instance is stopped and then started.
This is the reason why BackupAuthenticationTest was intermittently failing.
I also adapted the test, since I needed to stop the server and reactivate it in order to change the configuration.
The previous test wasn't acting like a real server.
This commit does the following:
- Deprecates existing overloaded createQueue, createSharedQueue,
createTemporaryQueue, & updateQueue methods for ClientSession,
ServerSession, ActiveMQServer, & ActiveMQServerControl where
applicable.
- Deprecates QueueAttributes, QueueConfig, & CoreQueueConfiguration.
- Deprecates existing overloaded constructors for QueueImpl.
- Implements QueueConfiguration with JavaDoc to be the single,
centralized configuration object for both client-side and broker-side
queue creation including methods to convert to & from JSON for use in
the management API.
- Implements new createQueue, createSharedQueue & updateQueue methods
with JavaDoc for ClientSession, ServerSession, ActiveMQServer, &
ActiveMQServerControl as well as a new constructor for QueueImpl all
using the new QueueConfiguration object.
- Changes all internal broker code to use the new methods.
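As a rough sketch of the new unified API (the queue/address names and the particular setters chosen here are illustrative):

```java
import org.apache.activemq.artemis.api.core.QueueConfiguration;
import org.apache.activemq.artemis.api.core.RoutingType;
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ClientSession;
import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class QueueConfigurationExample {
   public static void main(String[] args) throws Exception {
      // One object describes the queue for both client-side and broker-side creation.
      QueueConfiguration config = new QueueConfiguration("orders")   // queue name (illustrative)
         .setAddress("app.orders")                                   // address the queue is bound to
         .setRoutingType(RoutingType.ANYCAST)
         .setDurable(true)
         .setMaxConsumers(10);

      // The same object converts to/from JSON for use in the management API.
      String json = config.toJSON();
      QueueConfiguration roundTripped = QueueConfiguration.fromJSON(json);

      // Client-side creation through the new ClientSession overload.
      ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616");
      ClientSessionFactory factory = locator.createSessionFactory();
      ClientSession session = factory.createSession();
      session.createQueue(roundTripped);
      session.close();
      factory.close();
      locator.close();
   }
}
```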
Due to the changes in 6b5fff40cb the
config parameter message-expiry-thread-priority is no longer needed. The
code now uses a ScheduledExecutorService and a thread pool rather than
dedicating a thread 100% to the expiry scanner. The pool's size can be
controlled via scheduled-thread-pool-max-size.
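For illustration, a minimal embedded-broker sketch showing where the pool size is now set; the programmatic setter is assumed to correspond to scheduled-thread-pool-max-size:

```java
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class ScheduledPoolExample {
   public static void main(String[] args) throws Exception {
      Configuration config = new ConfigurationImpl()
         .setPersistenceEnabled(false)
         .setSecurityEnabled(false)
         .addAcceptorConfiguration("in-vm", "vm://0")
         // message-expiry-thread-priority is gone; the expiry scanner now runs on the
         // scheduled thread pool, sized here (assumed equivalent of scheduled-thread-pool-max-size).
         .setScheduledThreadPoolMaxSize(8);

      EmbeddedActiveMQ broker = new EmbeddedActiveMQ().setConfiguration(config);
      broker.start();
      // ... use the broker ...
      broker.stop();
   }
}
```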
Use a property on AMQPLargeMessage instead of a ThreadLocal.
The ThreadLocal was causing issues on the journal, as the message may traverse different threads there.
There is no guarantee that the encodeSize is the same in AMQP right after a read,
as the protocol may add additional bytes right after decoding, such as the header, extra properties, etc.
The drain control has to flush immediately,
otherwise the next flow-control event may remove the previous status from Proton.
So this really cannot wait for the next executor run; it has to be done immediately.
Add a Netty socks proxy handler during channel initialisation to allow
Artemis to communicate via a SOCKS proxy. Supports SOCKS versions 4a & 5.
Even if enabled in configuration, the proxy will not be used when the
target host is a loopback address.
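A hedged sketch of how a client might enable this; the URL parameter names below are assumptions about how the feature is exposed, so check TransportConstants for the actual keys:

```java
import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SocksProxyExample {
   public static void main(String[] args) throws Exception {
      // Assumed parameter names for the SOCKS support; not verified against TransportConstants.
      String url = "tcp://broker.example.com:61616"
         + "?proxyEnabled=true"
         + "&proxyHost=socks.example.com"
         + "&proxyPort=1080"
         + "&proxyVersion=SOCKS5";   // SOCKS4a or SOCKS5
      try (Connection connection = new ActiveMQConnectionFactory(url).createConnection()) {
         connection.start();
         // Even with the proxy enabled, loopback targets bypass the proxy.
      }
   }
}
```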
Historically speaking, all message properties starting with AMQ HDR
would not be passed to OpenWire messages. However, that blocked the
properties from management notifications so ARTEMIS-1209 was raised and
the solution there was to pass properties that started with _AMQ *if*
the consumer was connected to the management notification address.
However, in this case messages are diverted to a different address so
this check fails and the properties are removed. My solution is to
check the message itself to see if it has the _AMQ_NotifType property
(which all notification messages do) rather than checking where the
consumer is connected.
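Conceptually the check becomes something like the sketch below (a simplification, not the actual OpenWire converter code):

```java
import org.apache.activemq.artemis.api.core.ICoreMessage;
import org.apache.activemq.artemis.api.core.management.ManagementHelper;

public class NotificationCheckSketch {
   // Pass _AMQ* properties through when the message itself is a notification,
   // instead of looking at the address the consumer is attached to.
   static boolean isNotificationMessage(ICoreMessage message) {
      return message.containsProperty(ManagementHelper.HDR_NOTIFICATION_TYPE); // "_AMQ_NotifType"
   }
}
```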
In case a hardware issue, a firewall, or anything else makes the UDP connection go deaf,
we will now reopen the connection in an attempt to work around the problem.
This also improves locking around the DiscoveryGroup initial connection.
This is a large commit where I am refactoring the large message body out of CoreMessage
so that it is now reused with AMQP.
I also had to fix reference counting to fix how large messages are acked,
and I had to make sure large messages traverse the cluster correctly.
Update netty version to 4.1.43.Final and netty-tcnative version to 2.0.26.Final.
Change restricted-security-client.policy because Netty 4.1.43.Final requires
access to two more files: /etc/os-release and /usr/lib/os-release.
Some OpenWire tests will close a consumer and open it right away.
The removal of the queue is asynchronous, so this kind of behaviour may lead to issues
after we moved the QueueManager to be asynchronous.
There is an optimization in AMQP whereby properties are only parsed on demand.
It happens that after ARTEMIS-2294 (commit 2dd0671698),
every send would request a property on the message, causing the properties to always be parsed upon send,
even when no application properties are used.
This is a surprisingly large change just to fix some log messages, but
the changes were necessary in order to get the relevant data to where it
was being logged. The fact that the data wasn't readily available is
probably why it wasn't logged in the first place.
Add a paged message back to the tail when the QueueIterateAction doesn't handle it, to avoid removing an unhandled paged message. Move the refRemoved calls from the QueueIterateActions to the iterQueue to fix the queue stats.
When AMQP messages are redistributed from one node to
another, the internal properties of the message are not
cleaned up, and this causes a message to be routed
to the same queue more than once, causing duplicated
messages.
If a broker loses its file lock on the journal and doesn't notice (e.g.
network connection failure to an NFS mount) then it can continue to run
after its backup activates resulting in split-brain.
This commit implements periodic journal lock evaluation so that if a live
server loses its lock it will automatically restart itself.
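Conceptually the evaluation looks like the following sketch (hypothetical names; the broker's real implementation goes through its own lock and activation abstractions):

```java
import java.nio.channels.FileLock;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class JournalLockMonitorSketch {
   private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

   // Hypothetical hook points: a lock to re-check and a callback that restarts the server.
   public void start(FileLock journalLock, Runnable restartServer, long periodMillis) {
      scheduler.scheduleAtFixedRate(() -> {
         if (!journalLock.isValid()) {
            // The lock was silently lost (e.g. NFS hiccup); the backup may already be live.
            // Restarting avoids two live brokers sharing the journal (split-brain).
            restartServer.run();
         }
      }, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
   }
}
```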
This commit introduces the ability to configure a downstream connection
for federation. This works by sending information to the remote broker
and that broker will parse the message and create a new upstream back
to the original broker.
A new feature to preserve messages sent to an address for queues that will be
created on the address in the future. This is essentially equivalent to the
"retroactive consumer" feature from 5.x. However, it's implemented in a way
that fits with the address model of Artemis.
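For illustration, assuming the feature is exposed as an address setting (the retroactive message count setting and the setter used below are assumptions on my part, not taken from this commit):

```java
import org.apache.activemq.artemis.core.server.ActiveMQServer;
import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

public class RetroactiveAddressSketch {
   // Keep the last 100 messages sent to "app.events" so a queue created later still receives them.
   // setRetroactiveMessageCount is an assumed setter name for illustration.
   static void configure(ActiveMQServer server) {
      server.getAddressSettingsRepository().addMatch("app.events",
         new AddressSettings().setRetroactiveMessageCount(100));
   }
}
```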
The parameter failoverOnInitialConnection doesn't seem to be used and
no longer makes sense, because the connectors are retried in a loop,
so one can simply add the backup to the initial connection.
In LargeMessageImpl.copy(long) the underlying file needs to be opened
in order to read and copy bytes into the new copied message.
However, there is a chance that another thread can come in and close
the file in the middle, making the copy fail
with a "channel is null" error.
This happens in cases where a large message is sent to a JMS
topic (multicast address). During delivery to multiple
subscribers, one consumer finishes its delivery and closes the
underlying file, while another consumer rolls back
the message and eventually moves it to the DLQ (which calls
the above copy method). So there is a chance of this bug being hit.
The critical analyser triggers a broker shutdown when trying to
removeAllMessages on a huge queue. The iterQueue is split so as
not to hold the lock for too long.
When CoreMessage is doing copyHeadersAndProperties() it doesn't
make a full copy of its properties (a TypedProperties object).
This causes problems when multiple threads/parties are modifying the
properties of messages copied from the same message.
This is particularly bad if the message is a large message
where moveHeadersAndProperties is being used.
It uses RandomAccessFile to allow using heap buffers without the additional
copies and/or leaks of direct buffers performed by the JDK's FileChannel
implementation (see https://bugs.openjdk.java.net/browse/JDK-8147468).
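The difference in its simplest form (plain JDK code, not the Artemis implementation): RandomAccessFile reads straight into a heap byte[], whereas FileChannel.read(ByteBuffer) copies heap buffers through a temporary direct buffer:

```java
import java.io.RandomAccessFile;

public class HeapReadExample {
   // Reads into a plain heap byte[]; no hidden direct-buffer allocation as with
   // FileChannel.read(ByteBuffer.wrap(bytes)) (see JDK-8147468).
   static int readAt(RandomAccessFile file, long position, byte[] target) throws Exception {
      file.seek(position);
      return file.read(target, 0, target.length);
   }
}
```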
When an OpenWire client closes a session, the broker doesn't
clean up its server consumer references even though the core
consumers are closed. This results in a leak when sessions within
a connection are created and closed while the connection stays open.
It would fail with "cannot destroy queue"
because the failure could be asynchronous, introducing a quick race. This is acceptable;
you just need to make sure the async operation finishes before removing the queue.
The fix is to introduce a Wait.assertEquals call.
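The pattern, roughly (Wait is the Artemis test-support utility; the package and the surrounding names here are assumptions):

```java
import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.core.server.ActiveMQServer;
import org.apache.activemq.artemis.tests.util.Wait;   // test-support utility (package assumed)

public class AsyncAssertionSketch {
   static void assertQueueDrained(ActiveMQServer server, SimpleString queueName) throws Exception {
      // Polls until the condition holds instead of racing the asynchronous operation.
      Wait.assertEquals(0, () -> server.locateQueue(queueName).getMessageCount());
   }
}
```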
After a node is scaled down to a target node, the sf queue in the
target node is not deleted.
Normally this is fine, because the queue may be reused when the scaled-down
node is back up.
However in cloud environment many drainer pods can be created and
then shutdown in order to drain the messages to a live node (pod).
Each drainer pod will have a different node-id. Over time the sf
queues in the target broker node grows and those sf queues are
no longer reused.
Although users can use the management API/console to manually delete
them, it would be nice to have an option to automatically delete
those sf queue/address resources after scale down.
This PR adds a boolean configuration parameter called
cleanup-sf-queue to the scale down policy so that if the parameter
is "true" the broker will send a message to the
target broker signalling that the SF queue is no longer
needed and should be deleted.
If the parameter is not defined (default) or is "false"
the scale down won't remove the sf queue.
When converting from AMQP to core and back again, support annotations that
can't be placed into core message properties by storing the bytes
from encoding the types to AMQP encodings and then decoding them again
when converting back into AMQP messages.
Requires an update to proton-j 0.33.2 for an encoding fix.
The core server session tracks details about producers like what
addresses have had messages sent to them, the most recent message ID
sent to each address, and the number of messages sent to each address.
This information is made available to users via the
listProducersInfoAsJSON method on the various management interfaces
(JMX, web console, etc.). However, in situations where a server session
is long lived (e.g. in a pool) and is used to send to many different
addresses (e.g. randomly named temporary JMS queues) this info can
accumulate to a problematic degree. Therefore, we should limit the
amount of producer details saved by the session.
This test was basically broken: it was silently failing, as it was ignoring results, and it took a long time to finish.
As this test is multiplied across many options (Netty, Replicated, JDBC), this was adding considerable extra time
to the testsuite.
Most connection-related properties, like the SSL ones, currently
have to be encoded in the brokerURL. When configuring connections
purely through JNDI bindings, this is not always desirable.
This commit allows all properties included
in TransportConstants.ALLOWABLE_CONNECTOR_KEYS to be listed separately
in the JNDI bindings. These properties are then zipped into any
provided brokerURL. For properties that appear in both places,
the one specified separately in the JNDI bindings takes priority.
This commit should not affect any configuration other than those
configured through JNDIReferenceFactory.
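As a hedged illustration of the intent (the Reference is hand-built here; in practice it typically comes from a container resource definition, and the RefAddr names are assumptions):

```java
import javax.naming.Reference;
import javax.naming.StringRefAddr;

public class JndiConnectorPropsSketch {
   static Reference buildBinding() {
      Reference ref = new Reference(
         "org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory",
         "org.apache.activemq.artemis.jndi.JNDIReferenceFactory", null);
      // brokerURL without the SSL settings encoded in it (RefAddr name assumed)...
      ref.add(new StringRefAddr("brokerURL", "tcp://broker.example.com:61617"));
      // ...and connector keys (from TransportConstants.ALLOWABLE_CONNECTOR_KEYS) listed separately.
      // They get zipped into the brokerURL; separately-listed values take priority on conflict.
      ref.add(new StringRefAddr("sslEnabled", "true"));
      ref.add(new StringRefAddr("trustStorePath", "/path/to/truststore"));
      return ref;
   }
}
```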