I decided on NO-JIRA as this only affects the tests themselves. No need for release notes on this commit:
I changed logging-CI.properties to be the same as logging.properties, with the only exception that file and console are limited to WARN,
while the AssertionLogger still gets INFO, as that's required for certain tests.
Aside from adding audit logging for message acknowledgement this commit
also consolidates the two nearly identical acknowledge method
implementations in o.a.a.a.c.s.i.QueueImpl. This avoids duplicating
code for audit logging, plugin invocation, etc. There is no semantic
change.
Due to the multi-threaded AMQP implementation the ThreadLocal variables
used by the AuditLogger to track the username and remote address don't
work properly. Changes include:
- Passing the audit Subject (set during authentication) and the remote
address explicitly for audit logging on the relevant ServerSession
methods rather than relying on the AuditLogger's ThreadLocal
variables
- Audit logging core session creation *after* successful authentication
so that we have the proper Subject; this is especially important for
the SSL certificate authentication use-case
- Renaming some methods and variables in AuditLogger to more accurately
reflect their intended use
- Adding JavaDoc and refactoring the getCaller methods on AuditLogger
- Refactor audit log testing and add a new test
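To illustrate the first change, here is a minimal sketch (hypothetical names, not the actual Artemis API) contrasting a ThreadLocal-based audit context with passing the authenticated Subject and remote address explicitly:

```java
import javax.security.auth.Subject;

public class AuditSketch {

   // Fragile with a multi-threaded protocol implementation: the thread performing
   // the operation may not be the thread that authenticated the user.
   private static final ThreadLocal<Subject> CURRENT_SUBJECT = new ThreadLocal<>();

   static void logViaThreadLocal(String operation) {
      Subject subject = CURRENT_SUBJECT.get(); // may be null or stale on a pooled thread
      System.out.println("AUDIT " + operation + " by " + subject);
   }

   // Robust: the caller supplies the Subject and remote address captured at
   // authentication time, so it no longer matters which thread does the logging.
   static void logWithExplicitContext(String operation, Subject subject, String remoteAddress) {
      System.out.println("AUDIT " + operation + " by " + subject + " from " + remoteAddress);
   }
}
```

With pooled protocol threads the ThreadLocal variant can observe a null or stale Subject, which is the failure mode described above.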
This tests peer integration with qpid-dispatch by using Testcontainers and a Docker image for Artemis.
Also, as I added QpidDispatchTest, I reorganized the brokerConnect tests a bit into a brokerConnect folder.
As a follow-up to #3618/dc7de893747b90b627d729f9f18a758bb4dad9d5 update
checkstyle to the latest version, restoring the originally intended
"RightCurly" style, and updating all the code to properly adhere to the
style as enforced by the new checkstyle version.
The version of checkstyle we used before the aforementioned commit had
a bug which didn't properly enforce our intended "RightCurly" style
(see https://github.com/checkstyle/checkstyle/issues/6345). That commit
changed the style to accommodate the handful of unintended style
violations. This commit reverts that change for 2 main reasons:
- The style was always intended to use `alone` for both `METHOD_DEF`
and `CTOR_DEF`.
- There are over 1,000 existing uses of the intended style and around
30 violations of this style which were unintentionally allowed.
Reverting the style back to the original and cleaning up the unintended
violations makes the code more consistent and prevents further style
inconsistencies in the future.
There were a handful of other changes related to checkstyle bugs which
allowed unintended style violations. These were related to indentation
levels.
This closes #3619
(with some minor changes from Robbie to fix remaining violations)
Updates parent pom and various plugins/deps, tidies up inconsistent versions,
consolidates to the inherited version where possible, and defines properties for
some versions where not. Disables some problematic tests on JDK16+ for now.
Drops DS test dep back 1 version to remove a specific breakage affecting
multiple tests/modules, introduced after its upgrade in commit
9e70b26368.
- It is already entirely disabled in one or more ways depending on which JVM is in use.
- If enabled on any modern JVM it would either fail by default or could never work, as
the related ciphers it requires have been disabled (JDK 8) or entirely removed (JDK 11+)
due to being considered unsuitable for use.
Fixes issues with SaslKrb5LDAPSecurityTest by updating to the latest Apache Directory
release, which required some updates to the test to fix deprecation warnings and an
update to commons.lang to fix issues with the new namespace for StringUtils that will
work on JDK 8+ only.
In 73c4e399d9 a description was added to DiskStoreUsage. It incorrectly describes the diskStoreUsage as a percentage. This commit changes the description to a fraction, which is what the value actually is (and was before the description change). A percentage would be better, since MaxDiskUsage is also specified as a percentage.
The main benefit on ActiveMQTestBase is to avoid thread leakage between tests in this case,
that is, one test affecting the next in a way where the cause is difficult to find.
The provider of an SSL key/trust store is different from that store's
type. However, the broker currently doesn't differentiate these and uses
the provider for both. Changing this *may* potentially break existing
users who are setting the provider, but I don't see any way to avoid
that. This is a bug that needs to be fixed in order to support use-cases
like PKCS#11.
Change summary:
- Added documentation.
- Consolidated several 2-way SSL tests classes into a single
parameterized test class. All these classes were essentially the same
except for a few key test parameters. Consolidating them avoided
having to update the same code in multiple places.
- Expanded tests to include different providers & types.
- Regenerated all SSL artifacts to allow tests to pass with new
constraints.
- Improved logging for when SSL handler initialization fails.
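For context, a brief sketch of the distinction in the plain JDK API (the file name, passwords, and provider name below are placeholders): the store type and the provider are separate arguments to KeyStore.getInstance, which is what makes use-cases like PKCS#11 possible:

```java
import java.io.FileInputStream;
import java.security.KeyStore;

public class KeyStoreSketch {
   public static void main(String[] args) throws Exception {
      // Type only: the JDK picks the first provider that supports PKCS12.
      KeyStore byType = KeyStore.getInstance("PKCS12");
      try (FileInputStream in = new FileInputStream("client.p12")) {
         byType.load(in, "changeit".toCharArray());
      }

      // Type *and* provider: e.g. a PKCS11 store backed by a hardware token, loaded
      // through the named provider (requires a configured SunPKCS11 provider) with no file.
      KeyStore byTypeAndProvider = KeyStore.getInstance("PKCS11", "SunPKCS11");
      byTypeAndProvider.load(null, "pin".toCharArray());
   }
}
```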
Previously, when a session was reattached, all the close/failure listeners
were removed from the old connection and set onto the new connection.
This only worked when at most one session of the old connection was
transferred: when the second session was transferred, the old
connection no longer contained any close/failure listeners, and
therefore the list of close/failure listeners for the new connection
was overwritten by an empty list.
Now, when a session is being transferred, it only transfers the
close/failure listeners that belong to it, which are the session itself
+ the TempQueueCleanerUppers.
Modified a test to check whether the sessions are failure listeners of
the new connection after reattachment.
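A conceptual sketch of the fix (hypothetical types, not the actual Artemis classes): instead of draining the old connection's entire listener list, each transferred session moves only the listeners it owns:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

class ListenerTransferSketch {

   interface CloseListener { }

   static class Connection {
      final List<CloseListener> closeListeners = new ArrayList<>();
   }

   // Old behaviour: the first transferred session takes *all* listeners with it and
   // replaces the new connection's list, leaving nothing for any further sessions.
   static void transferAll(Connection oldConn, Connection newConn) {
      List<CloseListener> removed = new ArrayList<>(oldConn.closeListeners);
      oldConn.closeListeners.clear();
      newConn.closeListeners.clear();
      newConn.closeListeners.addAll(removed);
   }

   // Fixed behaviour: each session moves only the listeners it owns
   // (the session itself plus its TempQueueCleanerUppers).
   static void transferOwned(Connection oldConn, Connection newConn, Predicate<CloseListener> ownedBySession) {
      List<CloseListener> owned = new ArrayList<>();
      for (CloseListener listener : oldConn.closeListeners) {
         if (ownedBySession.test(listener)) {
            owned.add(listener);
         }
      }
      oldConn.closeListeners.removeAll(owned);
      newConn.closeListeners.addAll(owned);
   }
}
```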
- Remove duplicates dependency definition following e7e3c71511.
- Removes deprecated RELEASE version use, consolidates modules on a single paho client version.
- Remove prerequisites entry as per warning, suggested enforcer rule already in place.
Change summary:
- Remove the existing Xalan-based XPath evaluator since Xalan appears
to be no longer maintained.
- Implement a JAXP XPath evaluator (from the ActiveMQ 5.x code-base).
- Pull in the changes from https://issues.apache.org/jira/browse/AMQ-5333
to enable configurable XML parser features.
- Add a method to the base Message interface to make it easier to get
the message body as a string. This relieves the filter from having
to deal with message implementation details.
- Update the Qpid JMS client to get the jms.validateSelector parameter.
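As a rough illustration of the JAXP-based approach (a simplified sketch, not the actual Artemis evaluator), an XPath can be evaluated against the message body string with parser features configured explicitly, along the lines of AMQ-5333:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathSelectorSketch {
   public static void main(String[] args) throws Exception {
      String body = "<order><status>shipped</status></order>"; // message body as a string

      DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
      dbf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
      dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true); // example configurable feature
      Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(body)));

      XPath xpath = XPathFactory.newInstance().newXPath();
      Boolean matches = (Boolean) xpath.evaluate("/order/status = 'shipped'", doc, XPathConstants.BOOLEAN);
      System.out.println("selector matched: " + matches);
   }
}
```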
* removing the JMS dependency on AMQP module
* fixing destinations usage.
* refactoring to remove some JMS usage and make exceptions a bit better
Jira: https://issues.apache.org/jira/browse/ARTEMIS-3113
If an application wants to use a special key/truststore for Artemis but
have the remainder of the application use the default Java store, the
org.apache.activemq.ssl.keyStore needs to take precedence over Java's
javax.net.ssl.keyStore. However, the current implementation takes the
first non-null value from
System.getProperty(JAVAX_KEYSTORE_PATH_PROP_NAME),
System.getProperty(ACTIVEMQ_KEYSTORE_PATH_PROP_NAME),
keyStorePath
So if the default Java property is set, no override is possible. Swap
the order of the JAVAX_... and ACTIVEMQ_... property names so that the
ActiveMQ ones come first (as component-specific overrides), the
standard Java ones come second, and finally the local attribute value
(through Stream.of(...).findFirst()).
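A small sketch of the intended lookup order (property names written out for clarity, the surrounding class is hypothetical):

```java
import java.util.Objects;
import java.util.stream.Stream;

public class KeyStorePathLookupSketch {
   static String resolveKeyStorePath(String configuredKeyStorePath) {
      return Stream.of(
            System.getProperty("org.apache.activemq.ssl.keyStore"), // component-specific override
            System.getProperty("javax.net.ssl.keyStore"),           // standard Java property
            configuredKeyStorePath)                                  // value from the local config attribute
         .filter(Objects::nonNull)
         .findFirst()
         .orElse(null);
   }
}
```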
(In our case the application uses the default Java truststore location
at $JAVA_HOME/lib/security/jssecacerts, and only supplies its password
in javax.net.ssl.trustStorePassword, and then uses a dedicated
truststore for Artemis. Defining both org.apache.activemq.ssl.trustStore
and org.apache.activemq.ssl.trustStorePassword now makes Artemis use the
dedicated truststore (javax.net.ssl.trustStore is not set as we use the
default location, so the second choice
org.apache.activemq.ssl.trustStore applies), but with the Java default
truststore password (first choice javax.net.ssl.trustStorePassword
applies instead of the second choice because it is set for the default
truststore). Obviously, this does not work unless both passwords are
identical!)
The fallback consumer authorization implemented in ARTEMIS-592 needs to
check for an *exact* security-settings match otherwise in certain
configurations a more general and more permissive setting might
be used instead of the intended more specific and more restrictive
setting.
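Conceptually (using a plain map rather than the actual security-settings repository), the difference looks like this:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of why the fallback check must use an exact match: a wildcard-style lookup
// can silently return a broader, more permissive entry than the one intended
// for the specific address.
class SecuritySettingsMatchSketch {
   private final Map<String, String> settings = new HashMap<>();

   SecuritySettingsMatchSketch() {
      settings.put("#", "permissive-roles");        // general, permissive
      settings.put("secure.queue", "strict-roles"); // specific, restrictive
   }

   // Wildcard-style lookup: falls back to "#" when no specific entry exists,
   // which is wrong for the fallback-consumer authorization check.
   String wildcardMatch(String address) {
      return settings.getOrDefault(address, settings.get("#"));
   }

   // Exact lookup: returns null unless a setting exists for exactly this address.
   String exactMatch(String address) {
      return settings.get(address);
   }
}
```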
The merge method in AddressSettings should *not* use any getters. It
should reference the relevant variables directly. Using any getters will
return default values if the underlying value is null. This can cause
problems for hierarchical settings.
Also fixed a few potential NPEs exposed by the test-case.
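A simplified sketch (not the actual AddressSettings code) of why merge() must read fields rather than getters:

```java
// A getter that substitutes a default for null would make merge() treat "not set"
// as "explicitly set to the default" and stop inheriting from the parent setting.
class AddressSettingsSketch {
   private Integer maxDeliveryAttempts; // null means "not set here, inherit"

   int getMaxDeliveryAttempts() {
      return maxDeliveryAttempts != null ? maxDeliveryAttempts : 10; // default applied for callers
   }

   // Correct: only fill in values that are genuinely unset at this level.
   void merge(AddressSettingsSketch parent) {
      if (this.maxDeliveryAttempts == null) {
         this.maxDeliveryAttempts = parent.maxDeliveryAttempts; // field access, not getter
      }
   }
}
```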
Notice the quotes on "re-encode": this is just replacing the set of application
properties, properties, and headers with a new set if the reEncoded flag is set to true
on AMQPLargeMessage.
This is to avoid shutting down the server on a critical failure in case the message is
a few bytes shy of exceeding the max buffer size.
This will prevent the issue.
I am also bringing in a test I used to report https://issues.apache.org/jira/browse/PROTON-2297.
Even though that issue is in Proton, there's no such thing as too many tests, so I am keeping the test.
When deleting a durable scheduled message via the management API the
message would be removed from memory but it wouldn't be removed from
storage so when the broker restarted the message would reappear.
This commit fixes that by acking the message during the delete
operation.
Using a ThreadLocal for the audit user information works in most cases,
but it can fail when dispatching messages to consumers because threads
are taken out of a pool to do the dispatching and those threads may not
be associated with the proper credentials. This commit fixes that
problem with the following changes:
- Passes the Subject explicitly when logging audit info during dispatch
- Relocates security audit logging from the SecurityManager
implementation(s) to the SecurityStore implementation
- Associates the Subject with the connection properly with the new
security caching
- Fixed an issue where I needed to set connection to null after closing it
- Added more tests on QpidDispatchPeerTest (tests I would have done manually, and they reproduced a few issues along the way)
- Adding a paragraph about addressing and distinct queue names
- Renaming match on peers, senders and receivers as "address-match"
- Changing qpid dispatch test to use a single listener
- Fixing the reconnect attempts message
It adds additional required fixes:
- Fixed uncommitted deleted tx records
- Fixed JDBC authorization on test
- Using property-based version for commons-dbcp2
- stopping thread pool after activation to allow JDBC lease locks to release the lock
- centralize JDBC network timeout configuration to avoid repeating it
- adding dbcp2 as the default pooled DataSource to be used
Replaces direct jdbc connections with dbcp2 datasource. Adds
configuration options to use alternative datasources and to alter the
parameters. While adding slight overhead, this vastly improves the
management and pooling capabilities with db connections.
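For reference, a minimal dbcp2 setup looks roughly like this (driver, URL, credentials, and pool settings below are placeholder values, not the broker's actual configuration):

```java
import java.sql.Connection;
import org.apache.commons.dbcp2.BasicDataSource;

public class PooledDataSourceSketch {
   public static void main(String[] args) throws Exception {
      BasicDataSource dataSource = new BasicDataSource();
      dataSource.setDriverClassName("org.apache.derby.jdbc.EmbeddedDriver");
      dataSource.setUrl("jdbc:derby:target/artemis-db;create=true");
      dataSource.setUsername("sa");
      dataSource.setPassword("sa");
      dataSource.setMaxTotal(10);       // pool size limit
      dataSource.setTestOnBorrow(true); // validate connections before handing them out

      try (Connection connection = dataSource.getConnection()) {
         System.out.println("connected: " + !connection.isClosed());
      } finally {
         dataSource.close();
      }
   }
}
```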
This reverts commit dbb3a90fe6.
The org.apache.activemq.artemis.core.server.Queue#getRate method is for
slow-consumer detection and is designed for internal use only.
Furthermore, it's too opaque to be trusted by a remote user as it only
returns the number of messages added to the queue since *the last time
it was called*. The problem here is that the user calling it doesn't
know when it was invoked last. Therefore, they could be getting the
rate of messages added for the last 5 minutes or the last 5
milliseconds. This can lead to inconsistent and misleading results.
There are three main ways for users to track rates of message
production and consumption:
1. Use a metrics plugin. This is the most feature-rich and flexible
way to track broker metrics, although it requires tools (e.g.
Prometheus) to store the metrics and display them (e.g. Grafana).
2. Invoke the getMessageCount() and getMessagesAdded() management
methods and store the returned values along with the time they were
retrieved. A time-series database is a great tool for this job. This is
exactly what tools like Prometheus do. That data can then be used to
create informative graphs, etc. using tools like Grafana. Of course, one
can skip all the tools and just do some simple math to calculate rates
based on the last time the counts were retrieved.
3. Use the broker's message counters. Message counters are the broker's
simple way of providing historical information about the queue. They
provide similar results to the previous solutions, but with less
flexibility since they only track data while the broker is up and
there aren't really any good options for graphing.
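As a sketch of option 2, a rate can be derived from two samples of a counter and their timestamps (the counts below are hard-coded stand-ins for calls to getMessagesAdded()):

```java
public class RateFromCountsSketch {
   public static void main(String[] args) throws Exception {
      long firstCount = 1_000;                     // e.g. getMessagesAdded() at t1
      long firstTime = System.currentTimeMillis();

      Thread.sleep(5_000);                         // sample interval chosen by the caller

      long secondCount = 1_250;                    // e.g. getMessagesAdded() at t2
      long secondTime = System.currentTimeMillis();

      double messagesPerSecond = (secondCount - firstCount) * 1000.0 / (secondTime - firstTime);
      System.out.println("added ~" + messagesPerSecond + " messages/second over the interval");
   }
}
```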
The queue is missing access to the server;
recently changed functionality on the temporary queues namespace needed
the server, and now the unit test has to pass in the reference to fix the test.
In a cluster scenario where non-durable subscribers fail over to the
backup while another live node is forwarding messages to it,
there is a chance that the live node keeps the old remote
binding for the subs, and messages that go to those
old remote bindings will result in "binding not found".
Both authentication and authorization will hit the underlying security
repository (e.g. files, LDAP, etc.). For example, creating a JMS
connection and a consumer will result in 2 hits with the *same*
authentication request. This can cause unwanted (and unnecessary)
resource utilization, especially in the case of networked configuration
like LDAP.
There is already a rudimentary cache for authorization, but it is
cleared *totally* every 10 seconds by default (controlled via the
security-invalidation-interval setting), and it must be populated
initially which still results in duplicate auth requests.
This commit optimizes authentication and authorization via the following
changes:
- Replace our home-grown cache with Google Guava's cache. This provides
simple caching with both time-based and size-based LRU eviction. See more
at https://github.com/google/guava/wiki/CachesExplained. I also thought
about using Caffeine, but we already have a dependency on Guava and the
cache implementations look to be negligibly different for this use-case.
- Add caching for authentication. Both successful and unsuccessful
authentication attempts will be cached to spare the underlying security
repository as much as possible. Authenticated Subjects will be cached
and re-used whenever possible.
- Authorization will use Subjects cached during authentication. If the
required Subject is not in the cache it will be fetched from the
underlying security repo.
- Caching can be disabled by setting the security-invalidation-interval
to 0.
- Cache sizes are configurable.
- Management operations exist to inspect cache sizes at runtime.
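For illustration, the kind of Guava cache described above looks roughly like this (field names and numbers are placeholders, not the broker's actual configuration):

```java
import java.util.concurrent.TimeUnit;
import javax.security.auth.Subject;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class AuthenticationCacheSketch {
   private final Cache<String, Subject> authenticationCache = CacheBuilder.newBuilder()
      .maximumSize(1_000)                              // configurable, size-based LRU bound
      .expireAfterWrite(10_000, TimeUnit.MILLISECONDS) // maps to the invalidation interval
      .build();

   Subject getCachedSubject(String user) {
      return authenticationCache.getIfPresent(user);
   }

   void cacheSubject(String user, Subject subject) {
      authenticationCache.put(user, subject);
   }

   long size() {
      return authenticationCache.size(); // exposed for management inspection
   }
}
```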