Add a Netty SOCKS proxy handler during channel initialisation to allow
Artemis to communicate via a SOCKS proxy. Supports SOCKS versions 4a and 5.
Even if enabled in configuration, the proxy will not be used when the
target host is a loopback address.
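As a rough illustration, here is a minimal Netty channel initializer that adds a SOCKS 5 proxy handler unless the target is a loopback address; the class and parameter names are illustrative, not the actual Artemis wiring:
```java
import java.net.InetSocketAddress;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.proxy.Socks5ProxyHandler;

public class SocksChannelInitializer extends ChannelInitializer<SocketChannel> {

   private final InetSocketAddress proxyAddress;
   private final InetSocketAddress targetAddress;

   public SocksChannelInitializer(InetSocketAddress proxyAddress, InetSocketAddress targetAddress) {
      this.proxyAddress = proxyAddress;
      this.targetAddress = targetAddress;
   }

   @Override
   protected void initChannel(SocketChannel channel) {
      // Skip the proxy when the target host is a loopback address.
      boolean loopback = targetAddress.getAddress() != null && targetAddress.getAddress().isLoopbackAddress();
      if (!loopback) {
         // Netty also provides Socks4ProxyHandler for SOCKS 4a.
         channel.pipeline().addFirst(new Socks5ProxyHandler(proxyAddress));
      }
      // ... remaining protocol handlers are added after the proxy handler
   }
}
```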
If hardware, a firewall or anything else causes the UDP connection to go deaf,
we will now reopen the connection in an attempt to work around the issue.
This also improves locking around the DiscoveryGroup initial connection.
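A minimal sketch of the reopen-on-silence idea, using a plain MulticastSocket as a stand-in for the DiscoveryGroup socket (names and timeout are illustrative):
```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.net.SocketTimeoutException;

public class ReopeningReceiver {

   private static final int SILENCE_TIMEOUT_MILLIS = 30_000;

   public static void receiveLoop(InetAddress group, int port) throws IOException {
      MulticastSocket socket = open(group, port);
      byte[] data = new byte[8192];
      while (true) {
         try {
            DatagramPacket packet = new DatagramPacket(data, data.length);
            socket.receive(packet);            // blocks until a datagram arrives
            // ... handle the discovery broadcast ...
         } catch (SocketTimeoutException e) {
            // Nothing heard for a while: the socket may have gone deaf
            // (firewall state expired, NIC reset, etc.), so reopen it.
            socket.close();
            socket = open(group, port);
         }
      }
   }

   private static MulticastSocket open(InetAddress group, int port) throws IOException {
      MulticastSocket socket = new MulticastSocket(port);
      socket.joinGroup(group);
      socket.setSoTimeout(SILENCE_TIMEOUT_MILLIS); // receive() throws if nothing arrives in time
      return socket;
   }
}
```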
This is a large commit where I am refactoring the large message body out of CoreMessage,
which is now reused with AMQP.
I also had to fix reference counting to correct how large messages are acked,
and to make sure large messages traverse the cluster correctly.
- Avoid some properties decoding by checking whether we need certain properties, such as scheduled delivery (sketched below)
- Avoid creating some unnecessary SimpleString instances
- Remove some intermediate ActiveMQBuffer allocations
- Remove some intermediate UnreleasableByteBuf allocations
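A toy sketch of the lazy/conditional decoding idea behind the first point, with a made-up text encoding standing in for the real wire format:
```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Toy example: properties are kept as an undecoded byte[] and only parsed on first
// access, so messages whose properties are never inspected skip the decode entirely.
public class LazyProperties {

   private final byte[] encoded;        // wire-format properties, kept as-is
   private Map<String, String> decoded; // populated lazily

   public LazyProperties(byte[] encoded) {
      this.encoded = encoded;
   }

   public String get(String key) {
      if (decoded == null) {
         decoded = decode(encoded);     // pay the decoding cost only when needed
      }
      return decoded.get(key);
   }

   private static Map<String, String> decode(byte[] bytes) {
      Map<String, String> map = new HashMap<>();
      // "key=value" pairs separated by '\n' -- a stand-in for the real binary encoding
      for (String pair : new String(bytes, StandardCharsets.UTF_8).split("\n")) {
         int idx = pair.indexOf('=');
         if (idx > 0) {
            map.put(pair.substring(0, idx), pair.substring(idx + 1));
         }
      }
      return map;
   }
}
```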
Second attempt to fix the following compiler warning reported in Travis builds, this time using the correct cast type `Class<?>[]`, which prevents the temporary object allocation caused by var-args handling:
```
/home/travis/build/apache/activemq-artemis/artemis-core-client/src/main/java/org/apache/activemq/artemis/core/protocol/core/impl/wireformat/FederationStreamConnectMessage.java:151: warning: non-varargs call of varargs method with inexact argument type for last parameter;
return (FederationPolicy) Class.forName(clazz).getConstructor(null).newInstance();
cast to Class<?> for a varargs call
cast to Class<?>[] for a non-varargs call and to suppress this warning
```
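For reference, a self-contained sketch of the resulting pattern (the class name is made up; the reflective call mirrors the line quoted in the warning):
```java
// Casting null to Class<?>[] makes this an explicit non-varargs call, so javac
// no longer warns and no temporary array is allocated for the varargs parameter.
public final class NoArgReflection {

   @SuppressWarnings("unchecked")
   static <T> T newInstanceOf(String clazz) throws Exception {
      return (T) Class.forName(clazz).getConstructor((Class<?>[]) null).newInstance();
   }

   public static void main(String[] args) throws Exception {
      Object o = newInstanceOf("java.lang.StringBuilder");
      System.out.println(o.getClass());
   }
}
```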
This fixes the following compiler warning that is reported in Travis builds:
```
/home/travis/build/apache/activemq-artemis/artemis-core-client/src/main/java/org/apache/activemq/artemis/core/protocol/core/impl/wireformat/FederationStreamConnectMessage.java:151: warning: non-varargs call of varargs method with inexact argument type for last parameter;
return (FederationPolicy) Class.forName(clazz).getConstructor(null).newInstance();
cast to Class<?> for a varargs call
cast to Class<?>[] for a non-varargs call and to suppress this warning
```
When AMQP messages are redistributed from one node to
another, the internal property of the message is not
cleaned up, which causes a message to be routed
to the same queue more than once, resulting in duplicate
messages.
This commit introduces the ability to configure a downstream connection
for federation. This works by sending information to the remote broker,
which parses the message and creates a new upstream connection back
to the original broker.
A new feature to preserve messages sent to an address for queues that will be
created on the address in the future. This is essentially equivalent to the
"retroactive consumer" feature from 5.x. However, it's implemented in a way
that fits with the address model of Artemis.
The parameter failoverOnInitialConnection does not seem to be used and
no longer makes sense, because the connectors are retried in a loop,
so the backup can simply be added to the initial connection.
When CoreMessage is doing copyHeadersAndProperties() it doesn't
make a full copy of its properties (a TypedProperties object).
This causes problems when multiple threads/parties modify the
properties of messages copied from the same message.
It is particularly bad if the message is a large message,
where moveHeadersAndProperties is being used.
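A minimal sketch of why sharing the properties object is unsafe, with a plain Map standing in for TypedProperties (not the actual Artemis code):
```java
import java.util.HashMap;
import java.util.Map;

public class CopyDepthExample {

   public static void main(String[] args) {
      Map<String, Object> original = new HashMap<>();
      original.put("routingType", "ANYCAST");

      // Shallow "copy": both messages share the same underlying map, so a
      // change made for one message is visible (and racy) for the other.
      Map<String, Object> shared = original;
      shared.put("scheduledDelivery", 1000L);
      System.out.println(original.containsKey("scheduledDelivery")); // true

      // Full copy: each message owns its own properties instance.
      Map<String, Object> copy = new HashMap<>(original);
      copy.put("onlyOnCopy", true);
      System.out.println(original.containsKey("onlyOnCopy")); // false
   }
}
```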
After a node is scaled down to a target node, the store-and-forward (sf)
queue in the target node is not deleted.
Normally this is fine because it may be reused when the scaled-down
node comes back up.
However, in a cloud environment many drainer pods can be created and
then shut down in order to drain the messages to a live node (pod).
Each drainer pod will have a different node-id. Over time the sf
queues in the target broker node grow, and those sf queues are
no longer reused.
Although users can use the management API/console to manually delete
them, it would be nice to have an option to automatically delete
those sf queue/address resources after scale down.
This PR adds a boolean configuration parameter called
cleanup-sf-queue to the scale-down policy, so that if the parameter
is "true" the broker will send a message to the
target broker signalling that the sf queue is no longer
needed and should be deleted.
If the parameter is not defined (the default) or is "false",
the scale down won't remove the sf queue.