This is caused by too many entries on the HashMap for ThreadLocals.
Also: I'm reviewing some readlock usage on the StorageManager to simplify things a little bit.
It would be useful to be able to cycle the embedded web server if, for
example, one needed to renew the SSL certificates. To support this
functionality I made a handful of changes, e.g.:
- Refactoring WebServerComponent so that all the necessary
configuration would happen in the start() method.
- Refactoring WebServerComponentTest to re-use code.
MQTT 5 is an OASIS standard which debuted in March 2019. It boasts
numerous improvements over its predecessor (i.e. MQTT 3.1.1) which will
benefit users. These improvements are summarized in the specification
at:
https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901293
The specification describes all the behavior necessary for a client or
server to conform. The spec is highlighted with special "normative"
conformance statements which distill the descriptions into concise
terms. The specification provides a helpful summary of all these
statements. See:
https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901292
This commit implements all of the mandatory elements from the
specification and provides tests which are identified using the
corresponding normative conformance statement. All normative
conformance statements either have an explicit test or are noted in
comments with an explanation of why an explicit test doesn't exist. See
org.apache.activemq.artemis.tests.integration.mqtt5 for all those
details.
This commit also includes documentation about how to configure
everything related to the new MQTT 5 features.
Scenario: avoid paging. If an address is full, chain another broker and produce to the head, consume from the tail, using producer and consumer roles to partition connections. When the tail is drained, drop it.
- adds an option to treat an idle consumer as slow
- adds basic support for credit based address blocking ARTEMIS-2097
- adds some more visibility to address memory usage and balancer attribute modifier operations
When using the OpenSSL provider on the broker the getPeerCertificates()
method does *not* return an X509Certificate[] so we need to convert the
Certificate[] that is returned. This code is inspired by Tomcat's
org.apache.tomcat.util.net.jsse.JSSESupport class.
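A rough sketch of such a conversion (the class name, method name, and error handling below are illustrative, not the actual broker code):

```java
import java.io.ByteArrayInputStream;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public final class CertificateConversionSketch {
   // Convert whatever Certificate[] the provider returns into X509Certificate[],
   // re-parsing each entry through a CertificateFactory when a simple cast won't work.
   public static X509Certificate[] convert(Certificate[] certs) throws Exception {
      if (certs == null) {
         return null;
      }
      X509Certificate[] result = new X509Certificate[certs.length];
      CertificateFactory factory = CertificateFactory.getInstance("X.509");
      for (int i = 0; i < certs.length; i++) {
         if (certs[i] instanceof X509Certificate) {
            result[i] = (X509Certificate) certs[i];
         } else {
            result[i] = (X509Certificate) factory.generateCertificate(
                  new ByteArrayInputStream(certs[i].getEncoded()));
         }
      }
      return result;
   }
}
```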
Casting the result of getPeerCertificates() to X509Certificate[] mirrors
what is done in the ActiveMQ "Classic" code-base.
A few tests which were imported from ActiveMQ "Classic" to verify our
OpenWire implementation were removed as they relied on a "stub"
implementation of javax.net.ssl.SSLSession that never would have worked
across multiple JDKs once javax.security.cert.X509Certificate[] was
removed. Furthermore, the tests appeared to be related to the OpenWire
*client* and not relevant to our broker-side implementation.
I decided on NO-JIRA as this only supports the tests themselves. No need for release notes on this commit:
I changed logging-CI.properties to be the same as logging.properties, with the only exception that file and console are limited to WARN,
while the AssertionLogger would still get INFO, as that's required for certain tests.
Aside from adding audit logging for message acknowledgement this commit
also consolidates the two nearly identical acknowledge method
implementations in o.a.a.a.c.s.i.QueueImpl. This avoids duplicating
code for audit logging, plugin invocation, etc. There is no semantic
change.
Due to the multi-threaded AMQP implementation the ThreadLocal variables
used by the AuditLogger to track the username and remote address don't
work properly. Changes include:
- Passing the audit Subject (set during authentication) and the remote
address explicitly for audit logging on the relevant ServerSession
methods rather than relying on the AuditLogger's ThreadLocal
variables
- Audit logging core session creation *after* successful authentication
so that we have the proper Subject; this is especially important for
the SSL certificate authentication use-case
- Renaming some methods and variables in AuditLogger to more accurately
reflect their intended use
- Adding JavaDoc and refactoring the getCaller methods on AuditLogger
- Refactor audit log testing and add a new test
As a follow-up to #3618/dc7de893747b90b627d729f9f18a758bb4dad9d5 update
checkstyle to the latest version, restore the originally intended
"RightCurly" style, and update all the code to properly adhere to the
style as enforced by the new checkstyle version.
The version of checkstyle we used before the aforementioned commit had
a bug which didn't properly enforce our intended "RightCurly" style
(see https://github.com/checkstyle/checkstyle/issues/6345). That commit
changed the style to accommodate the handful of unintended style
violations. This commit reverts that change for 2 main reasons:
- The style was always intended to use `alone` for both `METHOD_DEF`
and `CTOR_DEF`.
- There are over 1,000 existing uses of the intended style and around
30 violations of this style which were unintentionally allowed.
Reverting the style back to the original and cleaning up the unintended
violations makes the code more consistent and prevents further style
inconsistencies in the future.
There were a handful of other changes related to checkstyle bugs which
allowed unintended style violations. These were related to indentation
levels.
This closes #3619
(with some minor changes from Robbie to fix remaining violations)
* removing the JMS dependency on AMQP module
* fixing destinations usage.
* refactoring to remove some JMS usage and make exceptions a bit better
Jira: https://issues.apache.org/jira/browse/ARTEMIS-3113
Since libaio.close is now async
there might be a situation where more than one close is called during a server.stop().
This should deal with that scenario.
Using a ThreadLocal for the audit user information works in most cases,
but it can fail when dispatching messages to consumers because threads
are taken out of a pool to do the dispatching and those threads may not
be associated with the proper credentials. This commit fixes that
problem with the following changes:
- Passes the Subject explicitly when logging audit info during dispatch (see the sketch after this list)
- Relocates security audit logging from the SecurityManager
implementation(s) to the SecurityStore implementation
- Associates the Subject with the connection properly with the new
security caching
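A minimal sketch of the idea behind the first change, using a hypothetical audit interface (the names below are illustrative, not the broker's actual AuditLogger API):

```java
import javax.security.auth.Subject;

public class DispatchAuditSketch {
   // hypothetical audit callback that takes the credentials explicitly
   interface AuditLog {
      void messageDelivered(Subject subject, String remoteAddress, Object message);
   }

   private final AuditLog auditLog;
   private final Subject connectionSubject; // captured once, at authentication time
   private final String remoteAddress;

   public DispatchAuditSketch(AuditLog auditLog, Subject subject, String remoteAddress) {
      this.auditLog = auditLog;
      this.connectionSubject = subject;
      this.remoteAddress = remoteAddress;
   }

   void deliver(Object message) {
      // The dispatch thread comes from a pool and may not carry the right credentials,
      // so pass the Subject captured on the connection instead of reading a ThreadLocal.
      auditLog.messageDelivered(connectionSubject, remoteAddress, message);
   }
}
```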
- Fixed an issue where I needed to set the connection to null after closing it
- Added more tests on QpidDispatchPeerTest (tests I would have done manually, and reproduced a few issues along the way)
Making the scheduled task more reliable when
using the scheduledComponent.delay() method, and preventing
periodic tasks from being skipped even when they are on the correct timing
This reverts commit dbb3a90fe6.
The org.apache.activemq.artemis.core.server.Queue#getRate method is for
slow-consumer detection and is designed for internal use only.
Furthermore, it's too opaque to be trusted by a remote user as it only
returns the number of messages added to the queue since *the last time
it was called*. The problem here is that the user calling it doesn't
know when it was invoked last. Therefore, they could be getting the
rate of messages added for the last 5 minutes or the last 5
milliseconds. This can lead to inconsistent and misleading results.
There are three main ways for users to track rates of message
production and consumption:
1. Use a metrics plugin. This is the most feature-rich and flexible
way to track broker metrics, although it requires tools (e.g.
Prometheus) to store the metrics and display them (e.g. Grafana).
2. Invoke the getMessageCount() and getMessagesAdded() management
methods and store the returned values along with the time they were
retrieved. A time-series database is a great tool for this job. This is
exactly what tools like Prometheus do. That data can then be used to
create informative graphs, etc. using tools like Grafana. Of course, one
can skip all the tools and just do some simple math to calculate rates
based on the last time the counts were retrieved (see the sketch after this list).
3. Use the broker's message counters. Message counters are the broker's
simple way of providing historical information about the queue. They
provide similar results to the previous solutions, but with less
flexibility since they only track data while the broker is up and
there aren't really any good options for graphing.
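For option 2, a minimal sketch of the "simple math" (the QueueControl proxy and the sampling interval are assumptions for illustration):

```java
// Derive a messages-added-per-second rate from two samples of getMessagesAdded(),
// taken "sampleMillis" apart. QueueControl is the broker's management proxy for a queue.
static double sampleAddRate(org.apache.activemq.artemis.api.core.management.QueueControl queueControl,
                            long sampleMillis) throws Exception {
   long added1 = queueControl.getMessagesAdded();
   long time1 = System.currentTimeMillis();

   Thread.sleep(sampleMillis); // wait one sample interval

   long added2 = queueControl.getMessagesAdded();
   long time2 = System.currentTimeMillis();

   return (added2 - added1) / ((time2 - time1) / 1000.0);
}
```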
Both authentication and authorization will hit the underlying security
repository (e.g. files, LDAP, etc.). For example, creating a JMS
connection and a consumer will result in 2 hits with the *same*
authentication request. This can cause unwanted (and unnecessary)
resource utilization, especially in the case of networked configuration
like LDAP.
There is already a rudimentary cache for authorization, but it is
cleared *totally* every 10 seconds by default (controlled via the
security-invalidation-interval setting), and it must be populated
initially which still results in duplicate auth requests.
This commit optimizes authentication and authorization via the following
changes:
- Replace our home-grown cache with Google Guava's cache. This provides
simple caching with both time-based and size-based LRU eviction. See more
at https://github.com/google/guava/wiki/CachesExplained. I also thought
about using Caffeine, but we already have a dependency on Guava and the
cache implementations look to be negligibly different for this use-case
(see the sketch after this list).
- Add caching for authentication. Both successful and unsuccessful
authentication attempts will be cached to spare the underlying security
repository as much as possible. Authenticated Subjects will be cached
and re-used whenever possible.
- Authorization will use Subjects cached during authentication. If the
required Subject is not in the cache it will be fetched from the
underlying security repo.
- Caching can be disabled by setting the security-invalidation-interval
to 0.
- Cache sizes are configurable.
- Management operations exist to inspect cache sizes at runtime.
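As a rough illustration of the Guava-based caching described in the first bullet (a sketch under assumed names and values, not the broker's actual classes):

```java
import java.util.concurrent.TimeUnit;

import javax.security.auth.Subject;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class SecurityCacheSketch {
   // time-based eviction roughly mirrors the security-invalidation-interval,
   // size-based LRU eviction roughly mirrors the configurable cache size
   private final Cache<String, Subject> authenticationCache = CacheBuilder.newBuilder()
         .maximumSize(1_000)
         .expireAfterWrite(10, TimeUnit.SECONDS)
         .build();

   public void cacheAuthenticatedSubject(String user, Subject subject) {
      authenticationCache.put(user, subject);
   }

   public Subject getCachedSubject(String user) {
      // returns null on a miss, in which case the underlying security repo is consulted
      return authenticationCache.getIfPresent(user);
   }
}
```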
The HumanReadableByteCountTest test no longer fails in environments with locales that define a different number format.
The function now returns values according to the Locale.ROOT locale specification.
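A minimal sketch of the idea (the helper below is illustrative, not the actual utility method):

```java
import java.util.Locale;

public final class ByteCountFormatSketch {
   public static String humanReadableByteCount(long bytes) {
      if (bytes < 1024) {
         return bytes + " B";
      }
      int exp = (int) (Math.log(bytes) / Math.log(1024));
      // Locale.ROOT fixes the decimal separator to '.' regardless of the default locale
      return String.format(Locale.ROOT, "%.1f %ciB", bytes / Math.pow(1024, exp), "KMGTPE".charAt(exp - 1));
   }

   public static void main(String[] args) {
      System.out.println(humanReadableByteCount(1536));    // 1.5 KiB
      System.out.println(humanReadableByteCount(1048576)); // 1.0 MiB
   }
}
```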
- when sending messages to DLQ or Expiry we now use x-opt legal names
- we now support filtering through annotations if using m. as a prefix.
- enabling hyphenated_props: to allow m. as a prefix
This commit does the following:
- Deprecates existing overloaded createQueue, createSharedQueue,
createTemporaryQueue, & updateQueue methods for ClientSession,
ServerSession, ActiveMQServer, & ActiveMQServerControl where
applicable.
- Deprecates QueueAttributes, QueueConfig, & CoreQueueConfiguration.
- Deprecates existing overloaded constructors for QueueImpl.
- Implements QueueConfiguration with JavaDoc to be the single,
centralized configuration object for both client-side and broker-side
queue creation including methods to convert to & from JSON for use in
the management API.
- Implements new createQueue, createSharedQueue & updateQueue methods
with JavaDoc for ClientSession, ServerSession, ActiveMQServer, &
ActiveMQServerControl as well as a new constructor for QueueImpl all
using the new QueueConfiguration object (see the sketch after this list).
- Changes all internal broker code to use the new methods.
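A rough sketch of how the new centralized configuration object is meant to be used (the fluent setter names shown are assumptions based on this description):

```java
import org.apache.activemq.artemis.api.core.QueueConfiguration;
import org.apache.activemq.artemis.api.core.RoutingType;

public class QueueConfigurationSketch {
   public static void main(String[] args) throws Exception {
      // one object describes the queue for both client-side and broker-side creation
      QueueConfiguration config = new QueueConfiguration("orders")
            .setAddress("orders.address")
            .setRoutingType(RoutingType.ANYCAST)
            .setDurable(true);

      // the same object can round-trip through JSON for the management API
      String json = config.toJSON();
      QueueConfiguration copy = QueueConfiguration.fromJSON(json);

      // e.g. server.createQueue(config) or clientSession.createQueue(config)
      System.out.println(copy);
   }
}
```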
KMPNeedle::searchInto has been specialized and copied
to handle ReadableBuffer in order to save polymorphic
calls on it that would make it slower on hot paths.
This is a large commit where I am refactoring the large message body out of CoreMessage,
which is now reused with AMQP.
I also had to fix reference counting to fix how large messages are acked,
and I had to make sure large messages are traversing correctly when in a cluster.
- Avoid some Properties Decoding, checking if we need certain properties like scheduled delivery
- Avoid creating some unnecessary SimpleString instances
- Removed some intermediate ActiveMQBuffer allocation
- Removed some intermediate UnreleasableByteBuf allocation
This is a surprisingly large change just to fix some log messages, but
the changes were necessary in order to get the relevant data to where it
was being logged. The fact that the data wasn't readily available is
probably why it wasn't logged in the first place.