If an application wants to use a special key/truststore for Artemis while
the remainder of the application uses the default Java store, the
org.apache.activemq.ssl.keyStore property needs to take precedence over
Java's javax.net.ssl.keyStore. However, the current implementation takes
the first non-null value from
System.getProperty(JAVAX_KEYSTORE_PATH_PROP_NAME),
System.getProperty(ACTIVEMQ_KEYSTORE_PATH_PROP_NAME),
keyStorePath
So if the default Java property is set, no override is possible. Swap
the order of the JAVAX_... and ACTIVEMQ_... property names so that the
ActiveMQ ones come first (as component-specific overrides), the
standard Java ones come second, and finally the local attribute value
(via Stream.of(...).findFirst()).
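A minimal sketch of the corrected lookup, assuming a hypothetical
resolveKeyStorePath() helper; the literal property names stand in for the
ACTIVEMQ_KEYSTORE_PATH_PROP_NAME and JAVAX_KEYSTORE_PATH_PROP_NAME constants:

```java
import java.util.Objects;
import java.util.stream.Stream;

public class KeyStoreLookup {
    // Component-specific override first, standard Java property second,
    // local attribute value last.
    static String resolveKeyStorePath(String keyStorePath) {
        return Stream.of(
                System.getProperty("org.apache.activemq.ssl.keyStore"),
                System.getProperty("javax.net.ssl.keyStore"),
                keyStorePath)
            .filter(Objects::nonNull)
            .findFirst()
            .orElse(null);
    }
}
```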
(In our case the application uses the default Java truststore location
at $JAVA_HOME/lib/security/jssecacerts, supplies only its password via
javax.net.ssl.trustStorePassword, and then uses a dedicated truststore
for Artemis. Defining both org.apache.activemq.ssl.trustStore and
org.apache.activemq.ssl.trustStorePassword now makes Artemis use the
dedicated truststore (javax.net.ssl.trustStore is not set because we use
the default location, so the second choice,
org.apache.activemq.ssl.trustStore, applies), but with the Java default
truststore password: the first choice, javax.net.ssl.trustStorePassword,
applies instead of the second choice because it is set for the default
truststore. Obviously, this cannot work unless both passwords are
identical!)
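For illustration, the property combination described above (the path and
both passwords are made up):

```java
public class TrustStoreMismatchExample {
    public static void main(String[] args) {
        // Default truststore lives at $JAVA_HOME/lib/security/jssecacerts;
        // only its password is supplied:
        System.setProperty("javax.net.ssl.trustStorePassword", "defaultStorePass");
        // Dedicated truststore for Artemis:
        System.setProperty("org.apache.activemq.ssl.trustStore", "/path/to/artemis-truststore.jks");
        System.setProperty("org.apache.activemq.ssl.trustStorePassword", "artemisStorePass");
        // With the old lookup order, javax.net.ssl.trustStorePassword (first
        // choice) wins over org.apache.activemq.ssl.trustStorePassword, so
        // the dedicated store is opened with the default store's password
        // and fails unless the two passwords happen to match.
    }
}
```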
Replaces direct JDBC connections with a DBCP2 DataSource. Adds
configuration options to use alternative DataSources and to alter their
parameters. While this adds slight overhead, it vastly improves the
management and pooling capabilities of database connections.
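For illustration, a minimal sketch of the kind of pooled DataSource DBCP2
provides; the driver, URL, and pool sizes are examples, not the broker's
actual configuration values:

```java
import javax.sql.DataSource;

import org.apache.commons.dbcp2.BasicDataSource;

public class PooledDataSourceExample {
    static DataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.apache.derby.jdbc.EmbeddedDriver");
        ds.setUrl("jdbc:derby:target/derby/db;create=true");
        ds.setMaxTotal(10); // cap on concurrently borrowed connections
        ds.setMaxIdle(5);   // connections kept ready in the pool
        return ds;
    }
}
```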
This reverts commit dbb3a90fe6.
The org.apache.activemq.artemis.core.server.Queue#getRate method is for
slow-consumer detection and is designed for internal use only.
Furthermore, it's too opaque to be trusted by a remote user as it only
returns the number of messages added to the queue since *the last time
it was called*. The problem here is that the user calling it doesn't
know when it was invoked last. Therefore, they could be getting the
rate of messages added for the last 5 minutes or the last 5
milliseconds. This can lead to inconsistent and misleading results.
There are three main ways for users to track rates of message
production and consumption:
1. Use a metrics plugin. This is the most feature-rich and flexible
way to track broker metrics, although it requires tools (e.g.
Prometheus) to store the metrics and display them (e.g. Grafana).
2. Invoke the getMessageCount() and getMessagesAdded() management
methods and store the returned values along with the time they were
retrieved. A time-series database is a great tool for this job; this is
exactly what tools like Prometheus do. That data can then be used to
create informative graphs, etc. using tools like Grafana. Of course, one
can skip all the tools and just do some simple math to calculate rates
based on the last time the counts were retrieved (see the sketch after
this list).
3. Use the broker's message counters. Message counters are the broker's
simple way of providing historical information about the queue. They
provide similar results to the previous solutions, but with less
flexibility since they only track data while the broker is up, and
there aren't really any good options for graphing.
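A minimal sketch of option 2, assuming a QueueControl management proxy is
already in hand (the sampling helper itself is hypothetical):

```java
import org.apache.activemq.artemis.api.core.management.QueueControl;

public class RateSampler {
    // Sample getMessagesAdded() at two known times and compute the rate
    // yourself; unlike getRate(), the interval is entirely under the
    // caller's control.
    static double sampleAddRate(QueueControl queue, long intervalMillis)
            throws Exception {
        long firstCount = queue.getMessagesAdded();
        long firstTime = System.currentTimeMillis();
        Thread.sleep(intervalMillis);
        long secondCount = queue.getMessagesAdded();
        long secondTime = System.currentTimeMillis();
        return (secondCount - firstCount) * 1000.0 / (secondTime - firstTime);
    }
}
```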
Both authentication and authorization will hit the underlying security
repository (e.g. files, LDAP, etc.). For example, creating a JMS
connection and a consumer will result in 2 hits with the *same*
authentication request. This can cause unwanted (and unnecessary)
resource utilization, especially in the case of networked configurations
like LDAP.
There is already a rudimentary cache for authorization, but it is
cleared *totally* every 10 seconds by default (controlled via the
security-invalidation-interval setting), and it must be populated
initially, which still results in duplicate auth requests.
This commit optimizes authentication and authorization via the following
changes:
- Replace our home-grown cache with Google Guava's cache. This provides
simple caching with both time-based and size-based LRU eviction. See more
at https://github.com/google/guava/wiki/CachesExplained. I also thought
about using Caffeine, but we already have a dependency on Guava and the
cache implementations look to be negligibly different for this use-case.
(A sketch of the cache shape follows this list.)
- Add caching for authentication. Both successful and unsuccessful
authentication attempts will be cached to spare the underlying security
repository as much as possible. Authenticated Subjects will be cached
and re-used whenever possible.
- Authorization will use Subjects cached during authentication. If the
required Subject is not in the cache it will be fetched from the
underlying security repo.
- Caching can be disabled by setting the security-invalidation-interval
to 0.
- Cache sizes are configurable.
- Management operations exist to inspect cache sizes at runtime.
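A minimal sketch of the kind of cache described above, using Guava's
CacheBuilder; the String key, the 10-second expiry, and the 1000-entry cap
are illustrative stand-ins for the configurable
security-invalidation-interval and cache-size settings:

```java
import java.util.concurrent.TimeUnit;

import javax.security.auth.Subject;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class AuthenticationCacheSketch {
    // Time-based eviction mirrors security-invalidation-interval;
    // maximumSize gives size-based LRU eviction.
    private final Cache<String, Subject> authenticationCache =
            CacheBuilder.newBuilder()
                    .expireAfterWrite(10, TimeUnit.SECONDS)
                    .maximumSize(1000)
                    .build();

    Subject getCachedSubject(String user) {
        return authenticationCache.getIfPresent(user);
    }
}
```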
This allows journal appends to happen in bursts during replication by
batching replication responses onto the network at the end of the append
burst.
1 of 2) - Porting of HORNETQ-1575
In a live-backup scenario, when the live is down and the backup becomes
live, clients using HA connection factories can fail over automatically.
However, if a client decides to create a new connection by itself (as in
the Camel JMS case), there is a chance that the new connection points to
the dead live and the connection won't be successful. The reason is that
if the old connection is gone, the backup will not get a chance to
announce itself back to the client, so it fails on the initial connection.
The fix is to let the CF remember the old topology and use it on any
initial connection attempts (see the sketch below).
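A purely illustrative sketch of the idea; Connector and tryConnect() are
hypothetical stand-ins for the factory's real connect logic, and only
TransportConfiguration is a real Artemis class:

```java
import java.util.List;

import org.apache.activemq.artemis.api.core.TransportConfiguration;

public class TopologyFailoverSketch {
    // Stand-in for the factory's real connect logic.
    interface Connector {
        boolean tryConnect(TransportConfiguration target);
    }

    // On a fresh connection attempt, walk the topology remembered from
    // earlier connections instead of only retrying the original live.
    static boolean connect(Connector connector,
                           List<TransportConfiguration> rememberedTopology) {
        for (TransportConfiguration member : rememberedTopology) {
            if (connector.tryConnect(member)) {
                return true; // reached a live node (possibly a promoted backup)
            }
        }
        return false; // every remembered topology member is unreachable
    }
}
```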
Since getDiskStoreUsage on the ActiveMQServerControl converts a double
to a long, the value is always 0 in the management API. It should
return a double instead.
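A sketch of the change, with a made-up calculateDiskStoreUsage() standing
in for the server's internal computation:

```java
public class DiskStoreUsageSketch {
    // Illustrative stand-in returning a usage fraction in [0.0, 1.0].
    private double calculateDiskStoreUsage() {
        return 0.42;
    }

    // Before the fix, the fraction was truncated to a long, so any usage
    // below 100% reported as 0:
    //
    //     public long getDiskStoreUsage() {
    //         return (long) calculateDiskStoreUsage(); // 0.42 -> 0
    //     }

    // After the fix, the fraction is returned as-is:
    public double getDiskStoreUsage() {
        return calculateDiskStoreUsage();
    }
}
```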
Adding this metric required moving the meter registration code from the
AddressInfo class to the ManagementService in order to get clean access
to both the AddressInfo and AddressControl classes.