JGroups 3.x hasn't been updated in some time; the last release was
in April 2020, almost 2 years ago. Many protocols have been updated
and added since then, and users want to use them. There is also
growing concern about using older components, triggered mainly by
recently discovered high-profile vulnerabilities in the wider
open-source Java community.
This commit bumps JGroups up to the latest release - 5.2.0.Final.
However, there is a cost associated with upgrading.
The old-style properties configuration is no longer supported. I think
it's unlikely that end-users are leveraging this because it is not
exposed via broker.xml. The JGroups XML configuration has been around
for a long time, is widely adopted, and is still supported. I expect
most (if not all) users are using this. However, a handful of tests
needed to be updated and/or removed to deal with this absence.
Some protocols and/or protocol properties are no longer supported. This
means that users may have to change their JGroups stack configurations
when they upgrade. For example, our own clustered-jgroups example had to
be updated or it wouldn't run properly.
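For reference, a minimal sketch of the XML-based configuration path that
remains supported (assuming a stack file named jgroups-stack.xml and the
JChannel constructor that accepts a config file, as in earlier JGroups
releases):

```java
import org.jgroups.JChannel;

public class JGroupsXmlExample {
    public static void main(String[] args) throws Exception {
        // Load the protocol stack from an XML file (file name is a placeholder).
        JChannel channel = new JChannel("jgroups-stack.xml");
        try {
            channel.connect("broker-cluster"); // hypothetical cluster name
            System.out.println("Connected as " + channel.getAddress());
        } finally {
            channel.close();
        }
    }
}
```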
MQTT 5 is an OASIS standard which debuted in March 2019. It boasts
numerous improvements over its predecessor (i.e. MQTT 3.1.1) which will
benefit users. These improvements are summarized in the specification
at:
https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901293
The specification describes all the behavior necessary for a client or
server to conform. The spec is highlighted with special "normative"
conformance statements which distill the descriptions into concise
terms. The specification provides a helpful summary of all these
statements. See:
https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901292
This commit implements all of the mandatory elements from the
specification and provides tests which are identified using the
corresponding normative conformance statement. All normative
conformance statements either have an explicit test or are noted in
comments with an explanation of why an explicit test doesn't exist. See
org.apache.activemq.artemis.tests.integration.mqtt5 for all those
details.
This commit also includes documentation about how to configure
everything related to the new MQTT 5 features.
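As a rough illustration of two of those features (session expiry and
per-message expiry), here is a sketch using the Eclipse Paho MQTTv5
client, which is not part of this commit; the broker URL and topic name
are placeholders:

```java
import org.eclipse.paho.mqttv5.client.MqttClient;
import org.eclipse.paho.mqttv5.client.MqttConnectionOptions;
import org.eclipse.paho.mqttv5.common.MqttMessage;
import org.eclipse.paho.mqttv5.common.packet.MqttProperties;

public class Mqtt5Example {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "example-client");

        // MQTT 5 session expiry: server keeps session state for 5 minutes.
        MqttConnectionOptions options = new MqttConnectionOptions();
        options.setSessionExpiryInterval(300L);
        client.connect(options);

        // MQTT 5 message expiry: this message is discarded after 60 seconds.
        MqttProperties props = new MqttProperties();
        props.setMessageExpiryInterval(60L);

        MqttMessage message = new MqttMessage("hello".getBytes());
        message.setQos(1);
        message.setProperties(props);

        client.publish("example/topic", message);
        client.disconnect();
    }
}
```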
* Add BindingDTO to allow configuring multiple addresses to listen on
* Start a new ServerConnector for each binding and deploy the corresponding web-applications (see the sketch after this list)
* Update documentation and tests
* Add tests to verify old and new configuration style produce equal results
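A sketch of the underlying Jetty pattern, as mentioned above (hosts and
ports are placeholders, not values from this change):

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class MultiBindingExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // One ServerConnector per configured binding, all on the same Server.
        String[][] bindings = { {"127.0.0.1", "8161"}, {"10.0.0.5", "8161"} };
        for (String[] binding : bindings) {
            ServerConnector connector = new ServerConnector(server);
            connector.setHost(binding[0]);
            connector.setPort(Integer.parseInt(binding[1]));
            server.addConnector(connector);
        }

        server.start();
        server.join();
    }
}
```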
Scenario: avoid paging; if an address is full, chain another broker and
produce to the head, consume from the tail, using producer and consumer
roles to partition connections. When the tail is drained, drop it.
- adds an option to treat an idle consumer as slow
- adds basic support for credit-based address blocking (ARTEMIS-2097)
- adds more visibility into address memory usage and balancer attribute
modifier operations
Adds support for extra configuration options to LDAP login module to
prepare for supporting any future/custom string configuration in LDAP
directory context creation.
Details:
- Changed LDAPLoginModule to pass any string configuration not
recognized by the module itself to the InitialDirContext construction
environment (see the sketch after this list).
- Changed the static LDAPLoginModule configuration key fields to an
enum to be able to loop through the specified keys (e.g. to filter out
the internal LDAPLoginModule configuration keys from the keys passed to
InitialDirContext).
- A few fixes for issues reported by static analysis tools.
- Tested that LDAP authentication with TLS+GSSAPI works against a
recent Windows AD server with Java
OpenJDK11U-jdk_x64_windows_hotspot_11.0.13_8 by setting the property
com.sun.jndi.ldap.tls.cbtype (see ARTEMIS-3140) in JAAS login.conf.
- Moved LDAPLoginModuleTest to the correct package to be able to
access LDAPLoginModule package privates from the test code.
- Added a test to LDAPLoginModuleTest for the task changes.
- Updated documentation to reflect the changes.
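The sketch mentioned above: what the pass-through amounts to in plain
JNDI terms. Keys and values here are examples; com.sun.jndi.ldap.tls.cbtype
is the property referenced in the testing note.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class LdapEnvExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://ad.example.com:636"); // placeholder URL
        env.put(Context.SECURITY_AUTHENTICATION, "GSSAPI");
        // Any option the login module does not recognize is now forwarded as-is:
        env.put("com.sun.jndi.ldap.tls.cbtype", "tls-server-end-point");

        InitialDirContext context = new InitialDirContext(env);
        context.close();
    }
}
```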
Back in version 2.17.0 we began to provide Maven artifacts for Jakarta
Messaging client resources. This commit expands that support in the
following ways:
- Distribute a Jakarta Messaging 3.0 client with the broker (in the
'lib/client' directory alongside the JMS client); a usage sketch
follows this list.
- Update documentation.
- Add example using the Jakarta Messaging client.
- Update Artemis CLI to use core instead of JMS as it was causing
conflicts with the new Jakarta Messaging client.
- Add example to build a Jakarta Messaging version of the JCA RA for
deployment into Jakarta EE 9 application servers.
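The usage sketch mentioned above: sending a message with the Jakarta
Messaging 3.0 client. Note the jakarta.jms.* imports in place of
javax.jms.*; the broker URL and queue name are placeholders, and the
connection factory class is assumed to keep its package name in the
Jakarta artifact.

```java
import jakarta.jms.ConnectionFactory;
import jakarta.jms.JMSContext;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class JakartaClientExample {
    public static void main(String[] args) {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = factory.createContext()) {
            context.createProducer().send(context.createQueue("exampleQueue"), "hello");
        }
    }
}
```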
The provider of an SSL key/trust store is different from that store's
type. However, the broker currently doesn't differentiate these and uses
the provider for both. Changing this *may* break existing users who
are setting the provider, but I don't see any way to avoid
that. This is a bug that needs to be fixed in order to support use-cases
like PKCS#11.
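The distinction in plain JDK terms (the provider name below is just an
example): the type names the keystore format while the provider names
the security implementation backing it, and the two are passed
independently.

```java
import java.security.KeyStore;

public class KeyStoreExample {
    public static void main(String[] args) throws Exception {
        // Type only: any installed provider supporting PKCS12 may be selected.
        KeyStore byType = KeyStore.getInstance("PKCS12");

        // Type and provider specified independently.
        KeyStore byTypeAndProvider = KeyStore.getInstance("PKCS12", "SunJSSE");

        System.out.println(byType.getType() + " via "
              + byTypeAndProvider.getProvider().getName());
    }
}
```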
Change summary:
- Added documentation.
- Consolidated several 2-way SSL test classes into a single
parameterized test class. All these classes were essentially the same
except for a few key test parameters. Consolidating them avoided
having to update the same code in multiple places.
- Expanded tests to include different providers & types.
- Regenerated all SSL artifacts to allow tests to pass with new
constraints.
- Improved logging for when SSL handler initialization fails.
Change summary:
- Remove the existing Xalan-based XPath evaluator since Xalan appears
to be no longer maintained.
- Implement a JAXP XPath evaluator (from the ActiveMQ 5.x code-base).
- Pull in the changes from https://issues.apache.org/jira/browse/AMQ-5333
to enable configurable XML parser features (see the sketch after this
list).
- Add a method to the base Message interface to make it easier to get
the message body as a string. This relieves the filter from having
to deal with message implementation details.
- Update the Qpid JMS client to get the jms.validateSelector parameter.
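The sketch mentioned above: the JAXP pattern of parsing the message body
with configurable (here, hardened) parser features and evaluating the
selector's XPath against the result. The XML body and expression are
illustrative.

```java
import java.io.ByteArrayInputStream;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathExample {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Configurable parser features, e.g. disallowing DTDs entirely.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

        String body = "<order><status>ready</status></order>";
        Document doc = dbf.newDocumentBuilder()
              .parse(new ByteArrayInputStream(body.getBytes()));

        String status = XPathFactory.newInstance().newXPath()
              .evaluate("/order/status/text()", doc);
        System.out.println(status); // prints "ready"
    }
}
```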
- Adding a paragraph about addressing and distinct queue names
- Renaming match on peers, senders and receivers as "address-match"
- Changing qpid dispatch test to use a single listener
- Fixing reconnect attempts message
Both authentication and authorization will hit the underlying security
repository (e.g. files, LDAP, etc.). For example, creating a JMS
connection and a consumer will result in 2 hits with the *same*
authentication request. This can cause unwanted (and unnecessary)
resource utilization, especially in the case of a networked
configuration like LDAP.
There is already a rudimentary cache for authorization, but it is
cleared *totally* every 10 seconds by default (controlled via the
security-invalidation-interval setting), and it must be populated
initially which still results in duplicate auth requests.
This commit optimizes authentication and authorization via the following
changes:
- Replace our home-grown cache with Google Guava's cache. This provides
simple caching with both time-based and size-based LRU eviction (see the
sketch after this list). See more at
https://github.com/google/guava/wiki/CachesExplained. I also considered
Caffeine, but we already have a dependency on Guava and the cache
implementations look to be negligibly different for this use-case.
- Add caching for authentication. Both successful and unsuccessful
authentication attempts will be cached to spare the underlying security
repository as much as possible. Authenticated Subjects will be cached
and re-used whenever possible.
- Authorization will use Subjects cached during authentication. If the
required Subject is not in the cache it will be fetched from the
underlying security repo.
- Caching can be disabled by setting the security-invalidation-interval
to 0.
- Cache sizes are configurable.
- Management operations exist to inspect cache sizes at runtime.
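The sketch mentioned above: the Guava caching pattern with both eviction
strategies. The key type and the numbers are illustrative, not the
broker's actual settings.

```java
import java.util.concurrent.TimeUnit;
import javax.security.auth.Subject;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class AuthCacheExample {
    private final Cache<String, Subject> authenticationCache = CacheBuilder.newBuilder()
          .maximumSize(1_000)                     // size-based LRU eviction
          .expireAfterWrite(10, TimeUnit.SECONDS) // time-based invalidation
          .build();

    public Subject getCachedSubject(String user) {
        return authenticationCache.getIfPresent(user);
    }

    public void cacheSubject(String user, Subject subject) {
        authenticationCache.put(user, subject);
    }
}
```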
Now it is possible to reset queue parameters to their defaults by removing them
from broker.xml and redeploying the configuration.
Originally this PR covered the "filter" parameter only.
ORIG message properties like _AMQ_ORIG_ADDRESS are added to messages
during various broker operations (e.g. diverting a message, expiring a
message, etc.). However, if multiple operations try to set these
properties on the same message (e.g. administratively moving a message
which eventually gets sent to a dead-letter address) then important
details can be lost. This is particularly problematic when using
auto-created dead-letter or expiry resources which use filters based on
_AMQ_ORIG_ADDRESS and can lead to message loss.
This commit simply overwrites the existing ORIG properties rather than
preserving them so that the most recent information is available.
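A rough sketch of the behavioral change (the helper is hypothetical; the
property constant is the real one backing _AMQ_ORIG_ADDRESS):

```java
import org.apache.activemq.artemis.api.core.Message;

public class OrigPropertyExample {
    // Hypothetical helper: previously the property was preserved if already
    // set, so a later hop (e.g. a move followed by dead-lettering) kept stale
    // data; now it is always overwritten with the most recent address.
    static void stampOrigAddress(Message message, String currentAddress) {
        message.putStringProperty(Message.HDR_ORIGINAL_ADDRESS.toString(), currentAddress);
    }
}
```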
- when sending messages to DLQ or Expiry we now use x-opt legal names
- we now support filtering through annotations if using m. as a prefix.
- enabling hyphenated_props: to allow m. as a prefix
Due to the changes in 6b5fff40cb the
config parameter message-expiry-thread-priority is no longer needed. The
code now uses a ScheduledExecutorService and a thread pool rather than
dedicating a thread 100% to the expiry scanner. The pool's size can be
controlled via scheduled-thread-pool-max-size.
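A minimal sketch of the pattern the code moved to (pool size and scan
interval are placeholders):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExpiryScanExample {
    public static void main(String[] args) {
        // Shared pool sized via scheduled-thread-pool-max-size, instead of a
        // dedicated thread whose priority had to be configured separately.
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(5);
        pool.scheduleWithFixedDelay(() -> {
            // scan queues for expired messages
        }, 0, 30, TimeUnit.SECONDS);
    }
}
```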
Remove excluded cipher suites matching the prefix `SSL`, because the
cipher suite names of the IBM Java 8 JVM have the prefix `SSL` while the
`DEFAULT_EXCLUDED_CIPHER_SUITES` of org.eclipse.jetty.util.ssl.SslContextFactory
includes "^SSL_.*$". As a result, all IBM JVM cipher suites were excluded
by SslContextFactory when using the `DEFAULT_EXCLUDED_CIPHER_SUITES`.
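A sketch of the adjustment against Jetty's SslContextFactory (assuming
the intent is to keep the other default exclusions and drop only the
SSL_ pattern):

```java
import java.util.Arrays;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class CipherSuiteExample {
    public static void main(String[] args) {
        SslContextFactory.Server factory = new SslContextFactory.Server();
        // Start from the defaults and drop the "^SSL_.*$" pattern so IBM Java 8
        // cipher suite names (SSL_ rather than TLS_ prefixed) remain usable.
        String[] filtered = Arrays.stream(factory.getExcludeCipherSuites())
              .filter(pattern -> !pattern.startsWith("^SSL_"))
              .toArray(String[]::new);
        factory.setExcludeCipherSuites(filtered);
    }
}
```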
This is a large commit where I am refactoring the large message body out
of CoreMessage so that it can be reused with AMQP.
I also had to fix reference counting to correct how large messages are
acked, and to make sure large messages are transferred correctly within
a cluster.
This commit introduces the ability to configure a downstream connection
for federation. This works by sending information to the remote broker
and that broker will parse the message and create a new upstream back
to the original broker.
Add the config parameter `page-sync-timeout` to allow setting a
customized value, because if the broker is configured to use the ASYNCIO
journal, the page sync timeout otherwise takes the same value as the NIO
journal's default buffer timeout, i.e. 3333333 nanoseconds.
Active Directory servers are unable to handle referrals automatically.
This causes a PartialResultException to be thrown if a referral is
encountered beneath the base search DN, even if the LDAPLoginModule is
set to ignore referrals.
The new option may be set to 'true' to ignore these exceptions, allowing
login to proceed with the query results received before the exception
was encountered.
Note: there are no tests for this change as I could not reproduce the
issue with the ApacheDS test server. The issue is specific to directory
servers that don't support the ManageDsaIT control such as Active
Directory.
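A sketch of the failure mode in plain JNDI terms (URL, base DN, and
filter are placeholders); with the new option enabled, login proceeds
with whatever was enumerated before the exception:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.PartialResultException;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class ReferralExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
        env.put(Context.REFERRAL, "ignore"); // AD may still throw mid-enumeration

        InitialDirContext ctx = new InitialDirContext(env);
        NamingEnumeration<SearchResult> results =
              ctx.search("DC=example,DC=com", "(uid=alice)", new SearchControls());
        try {
            while (results.hasMore()) {
                SearchResult result = results.next();
                // ... use result
            }
        } catch (PartialResultException e) {
            // With the new option set to 'true', proceed with the results
            // collected so far instead of failing the login.
        }
        ctx.close();
    }
}
```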
A new feature to preserve messages sent to an address for queues that will be
created on the address in the future. This is essentially equivalent to the
"retroactive consumer" feature from 5.x. However, it's implemented in a way
that fits with the address model of Artemis.
Improve wildcard support for the key attribute in the roles access
match element and whitelist entry element, allowing prefix match for
the mBean properties.
After a node is scaled down to a target node, the sf queue in the
target node is not deleted.
Normally this is fine because it may be reused when the scaled-down
node comes back up.
However, in a cloud environment many drainer pods can be created and
then shut down in order to drain the messages to a live node (pod).
Each drainer pod will have a different node-id. Over time the sf
queues in the target broker node grow, and those sf queues are
no longer reused.
Although users can use the management API/console to manually delete
them, it would be nice to have an option to automatically delete
those sf queue/address resources after scale down.
This PR adds a boolean configuration parameter called
cleanup-sf-queue to the scale-down policy. If the parameter
is "true" the broker will send a message to the
target broker signalling that the sf queue is no longer
needed and should be deleted.
If the parameter is not defined (the default) or is "false",
scale down won't remove the sf queue.