The provider of an SSL key/trust store is different from that store's
type. However, the broker currently doesn't differentiate these and uses
the provider for both. Changing this *may* break existing users who set
the provider, but I don't see any way to avoid that. This is a bug that
needs to be fixed in order to support use-cases like PKCS#11.
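For illustration, JCA treats the store type and the provider as separate
arguments, which is exactly what PKCS#11 setups rely on (the paths,
passwords, and provider name below are just placeholders):

    import java.io.FileInputStream;
    import java.security.KeyStore;

    public class StoreTypeVsProvider {
       public static void main(String[] args) throws Exception {
          // Type only: the JVM picks whichever installed provider supports "PKCS12".
          KeyStore byType = KeyStore.getInstance("PKCS12");
          try (FileInputStream in = new FileInputStream("client.p12")) {   // placeholder path
             byType.load(in, "changeit".toCharArray());                    // placeholder password
          }

          // Type AND provider are distinct: a PKCS#11 store is backed by a provider
          // such as SunPKCS11, and there is no file to load -- the token holds the keys.
          KeyStore byBoth = KeyStore.getInstance("PKCS11", "SunPKCS11-myToken"); // placeholder provider name
          byBoth.load(null, "1234".toCharArray());                               // placeholder PIN
       }
    }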
Change summary:
- Added documentation.
- Consolidated several 2-way SSL test classes into a single
parameterized test class. All these classes were essentially the same
except for a few key test parameters. Consolidating them avoided
having to update the same code in multiple places.
- Expanded tests to include different providers & types.
- Regenerated all SSL artifacts to allow tests to pass with new
constraints.
- Improved logging for when SSL handler initialization fails.
If an application wants to use a special key/truststore for Artemis but
have the remainder of the application use the default Java store, the
org.apache.activemq.ssl.keyStore needs to take precedence over Java's
javax.net.ssl.keyStore. However, the current implementation takes the
first non-null value from
System.getProperty(JAVAX_KEYSTORE_PATH_PROP_NAME),
System.getProperty(ACTIVEMQ_KEYSTORE_PATH_PROP_NAME),
keyStorePath
So if the default Java property is set, no override is possible. Swap
the order of the JAVAX_... and ACTIVEMQ_... property names so that the
ActiveMQ ones come first (as component-specific overrides), the
standard Java ones come second, and finally the local attribute value
(through Stream.of(...).findFirst()).
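A minimal sketch of the intended resolution order (the constant names
mirror the description above and are illustrative only):

    import java.util.Objects;
    import java.util.stream.Stream;

    public class KeyStorePathResolver {
       // Illustrative constants mirroring the property names discussed above.
       static final String ACTIVEMQ_KEYSTORE_PATH_PROP_NAME = "org.apache.activemq.ssl.keyStore";
       static final String JAVAX_KEYSTORE_PATH_PROP_NAME = "javax.net.ssl.keyStore";

       // Component-specific override first, standard Java property second,
       // locally configured attribute last; the first non-null value wins.
       static String resolveKeyStorePath(String keyStorePath) {
          return Stream.of(System.getProperty(ACTIVEMQ_KEYSTORE_PATH_PROP_NAME),
                           System.getProperty(JAVAX_KEYSTORE_PATH_PROP_NAME),
                           keyStorePath)
                       .filter(Objects::nonNull)
                       .findFirst()
                       .orElse(null);
       }
    }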
(In our case the application uses the default Java truststore location
at $JAVA_HOME/lib/security/jssecacerts, supplies only its password via
javax.net.ssl.trustStorePassword, and uses a dedicated truststore for
Artemis. With the current ordering, defining both
org.apache.activemq.ssl.trustStore and
org.apache.activemq.ssl.trustStorePassword makes Artemis use the
dedicated truststore (javax.net.ssl.trustStore is not set because we
rely on the default location, so the second choice
org.apache.activemq.ssl.trustStore applies), but with the Java default
truststore password (the first choice javax.net.ssl.trustStorePassword
wins over the second choice because it is set for the default
truststore). Obviously, this does not work unless both passwords are
identical!)
This commit does the following:
- Deprecates existing overloaded createQueue, createSharedQueue,
createTemporaryQueue, & updateQueue methods for ClientSession,
ServerSession, ActiveMQServer, & ActiveMQServerControl where
applicable.
- Deprecates QueueAttributes, QueueConfig, & CoreQueueConfiguration.
- Deprecates existing overloaded constructors for QueueImpl.
- Implements QueueConfiguration with JavaDoc to be the single,
centralized configuration object for both client-side and broker-side
queue creation including methods to convert to & from JSON for use in
the management API.
- Implements new createQueue, createSharedQueue & updateQueue methods
with JavaDoc for ClientSession, ServerSession, ActiveMQServer, &
ActiveMQServerControl, as well as a new constructor for QueueImpl, all
using the new QueueConfiguration object (see the sketch below).
- Changes all internal broker code to use the new methods.
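For illustration, client-side queue creation with the new object looks
roughly like this (the queue and address names are placeholders, and the
exact fluent setters should be read as a sketch, not a reference):

    import org.apache.activemq.artemis.api.core.QueueConfiguration;
    import org.apache.activemq.artemis.api.core.RoutingType;
    import org.apache.activemq.artemis.api.core.client.ClientSession;

    public class QueueConfigurationExample {
       // Creates a durable ANYCAST queue with the single configuration object
       // instead of one of the deprecated createQueue overloads.
       static void createOrdersQueue(ClientSession session) throws Exception {
          session.createQueue(new QueueConfiguration("orders")   // placeholder queue name
                                 .setAddress("orders.address")   // placeholder address
                                 .setRoutingType(RoutingType.ANYCAST)
                                 .setDurable(true));
       }
    }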
Add a Netty SOCKS proxy handler during channel initialisation to allow
Artemis to communicate via a SOCKS proxy. Supports SOCKS versions 4a & 5.
Even if enabled in configuration, the proxy will not be used when the
target host is a loopback address.
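A rough sketch of the approach using plain Netty classes; the class and
field names are illustrative, and the real implementation wires the
proxy parameters from the connector configuration:

    import java.net.InetAddress;
    import java.net.InetSocketAddress;

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.proxy.Socks5ProxyHandler;

    public class SocksAwareChannelInitializer extends ChannelInitializer<SocketChannel> {
       private final InetSocketAddress proxyAddress;   // e.g. taken from connector configuration
       private final InetSocketAddress target;         // the broker we actually want to reach

       public SocksAwareChannelInitializer(InetSocketAddress proxyAddress, InetSocketAddress target) {
          this.proxyAddress = proxyAddress;
          this.target = target;
       }

       @Override
       protected void initChannel(SocketChannel ch) {
          InetAddress resolved = target.getAddress();
          boolean loopback = resolved != null && resolved.isLoopbackAddress();
          // Only route through the proxy for non-loopback targets, as described above.
          if (!loopback) {
             ch.pipeline().addFirst(new Socks5ProxyHandler(proxyAddress));
          }
          // ... the regular Artemis/Netty handlers are added after this point ...
       }
    }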
This is a large commit where I am refactoring the large message body out
of CoreMessage so that it can be reused with AMQP.
I also had to fix reference counting to correct how large messages are
acked, and to make sure large messages traverse the cluster correctly.
This is a surprisingly large change just to fix some log messages, but
the changes were necessary in order to get the relevant data to where it
was being logged. The fact that the data wasn't readily available is
probably why it wasn't logged in the first place.
This commit introduces the ability to configure a downstream connection
for federation. This works by sending information to the remote broker,
which parses the message and creates a new upstream connection back to
the original broker.
A new feature to preserve messages sent to an address for queues that will be
created on the address in the future. This is essentially equivalent to the
"retroactive consumer" feature from 5.x. However, it's implemented in a way
that fits with the address model of Artemis.
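A hedged sketch of how the feature might be enabled programmatically,
assuming it is exposed as an address-setting with a setter along the
lines of setRetroactiveMessageCount (verify the real knob in the
documentation; the match pattern is a placeholder):

    import org.apache.activemq.artemis.core.server.ActiveMQServer;
    import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

    public class RetroactiveAddressExample {
       // Assumption: the feature is driven by an address-setting controlling how
       // many past messages are retained; the setter name below may not match
       // the real one.
       static void enableRetroactive(ActiveMQServer server) {
          AddressSettings settings = new AddressSettings().setRetroactiveMessageCount(100);
          server.getAddressSettingsRepository().addMatch("orders.#", settings); // placeholder match
       }
    }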
When CoreMessage is doing copyHeadersAndProperties() it doesn't
make a full copy of its properties (a TypedProperties object).
This causes problems when multiple threads/parties modify the
properties of messages copied from the same source message.
It is particularly bad if the message is a large message
where moveHeadersAndProperties is being used.
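As an analogy with plain collections (the real object is
TypedProperties, not a Map), sharing the container versus actually
copying it looks like this:

    import java.util.HashMap;
    import java.util.Map;

    public class PropertiesCopyAnalogy {
       public static void main(String[] args) {
          Map<String, Object> original = new HashMap<>();
          original.put("routing", "EU");

          // Shallow "copy": the second message shares the same container,
          // analogous to copyHeadersAndProperties() not copying TypedProperties.
          Map<String, Object> shared = original;
          shared.put("redelivered", true);
          System.out.println(original.get("redelivered")); // true -- leaked into the original

          // Full copy: each message owns its properties, so concurrent or later
          // modifications no longer interfere with each other.
          Map<String, Object> independent = new HashMap<>(original);
          independent.put("priority", 9);
          System.out.println(original.get("priority"));    // null -- original unaffected
       }
    }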
After a node is scaled down to a target node, the SF (store-and-forward)
queue in the target node is not deleted.
Normally this is fine because the queue may be reused when the
scaled-down node comes back up.
However, in a cloud environment many drainer pods can be created and
then shut down in order to drain the messages to a live node (pod).
Each drainer pod will have a different node-id. Over time the SF
queues in the target broker node grow, and those SF queues are
no longer reused.
Although users can use the management API/console to manually delete
them, it would be nice to have an option to automatically delete
those SF queue/address resources after scale down.
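For reference, the manual route mentioned above goes through the
management API, roughly like this (the queue name is a placeholder for
the scaled-down node's SF queue):

    import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;

    public class ManualSfQueueCleanup {
       // Removes a leftover store-and-forward queue by name through the broker's
       // management interface (e.g. an ActiveMQServerControl proxy obtained over JMX).
       static void deleteSfQueue(ActiveMQServerControl control, String sfQueueName) throws Exception {
          control.destroyQueue(sfQueueName);
       }
    }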
This PR adds a boolean configuration parameter called
cleanup-sf-queue to the scale-down policy. If the parameter
is "true" the broker will send a message to the
target broker signalling that the SF queue is no longer
needed and should be deleted.
If the parameter is not defined (the default) or is "false",
the scale down won't remove the SF queue.