In the detectAndReportRenewSlowness() method, logger.error() and
logger.warn() are called three times. In two of the three calls the
arguments correspond to the values tested in the enclosing if()
condition, but the arguments passed to logger.error() and logger.warn()
on lines 139 and 141 are identical, which suggests they were copied
incorrectly.
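A minimal sketch of the suspect pattern; the method shape, messages, and
threshold names are invented for illustration, not taken from the
Artemis source:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical illustration of the flagged copy-paste pattern.
class RenewSlownessExample {
   private static final Logger logger = LoggerFactory.getLogger(RenewSlownessExample.class);

   void detectAndReportRenewSlowness(long renewMillis, long warnMillis, long errorMillis) {
      if (renewMillis > errorMillis) {
         logger.error("Renew took {} ms, over the error threshold of {} ms", renewMillis, errorMillis);
      } else if (renewMillis > warnMillis) {
         // The bug pattern would be to repeat the arguments of the error()
         // call above; the warn() call should log the warn threshold that
         // this branch actually tests.
         logger.warn("Renew took {} ms, over the warn threshold of {} ms", renewMillis, warnMillis);
      }
   }
}
```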
The local variable constructorPos is initialized with the value of
buffer.position() on line 330, but the variable is never used
afterwards, which may indicate an error.
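The pattern in question is a plain dead store; a minimal sketch using
java.nio.ByteBuffer in place of the actual Artemis buffer type, with an
invented decode flow:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the dead-store pattern SVACE flags.
class DeadStoreExample {
   void decode(ByteBuffer buffer) {
      int constructorPos = buffer.position(); // position captured here ...
      buffer.get(); // ... but constructorPos is never read again: either the
                    // variable should be removed, or a later use of it (e.g.
                    // a position reset) was intended and is missing
   }
}
```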
In the first branch (line 274) of the if() statement,
Boolean.parseBoolean() receives
frame.getHeader(Stomp.Headers.Subscribe.NO_LOCAL), matching the header
tested on line 273. In the second branch (line 276),
Boolean.parseBoolean() again receives
frame.getHeader(Stomp.Headers.Subscribe.NO_LOCAL), even though that
branch tests frame.hasHeader(Stomp.Headers.Subscribe.ACTIVEMQ_NO_LOCAL);
presumably it should read the ACTIVEMQ_NO_LOCAL header instead.
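A sketch of the expected shape of the two branches; the header constants
are the ones quoted above, while the surrounding fragment is
illustrative rather than the actual Artemis code:

```java
boolean noLocal = false;
if (frame.hasHeader(Stomp.Headers.Subscribe.NO_LOCAL)) {
   noLocal = Boolean.parseBoolean(frame.getHeader(Stomp.Headers.Subscribe.NO_LOCAL));
} else if (frame.hasHeader(Stomp.Headers.Subscribe.ACTIVEMQ_NO_LOCAL)) {
   // The flagged bug: this branch parsed the NO_LOCAL header even though it
   // tests for ACTIVEMQ_NO_LOCAL; it should read the header it just checked.
   noLocal = Boolean.parseBoolean(frame.getHeader(Stomp.Headers.Subscribe.ACTIVEMQ_NO_LOCAL));
}
```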
Found by Linux Verification Center (portal.linuxtesting.ru) with SVACE.
Author A. Slepykh.
In most use cases a broker instance has a few XPath filters that are
evaluated many times, so reusing a compiled XPath expression improves
XPath filter performance.
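A minimal sketch of the compile-once, evaluate-many idea using the
standard javax.xml.xpath API (not the Artemis implementation itself):

```java
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

// Compile the XPath expression once and reuse it for every message
// evaluated against the same filter.
final class CachedXPathFilter {
   private final XPathExpression compiled;

   CachedXPathFilter(String expression) throws XPathExpressionException {
      XPath xpath = XPathFactory.newInstance().newXPath();
      this.compiled = xpath.compile(expression); // one-time parse/compile cost
   }

   boolean matches(Document doc) throws XPathExpressionException {
      return (Boolean) compiled.evaluate(doc, XPathConstants.BOOLEAN);
   }
}
```

Note that javax.xml.xpath.XPathExpression is not guaranteed to be
thread-safe, so a cached expression needs per-thread instances or
external synchronization.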
This commit does the following:
- Updates HA docs including the chapter on network isolation (i.e.
split brain). The network isolation chapter is now more about
high-level explanation and the HA doc now has all the configuration
parameters.
- Changes references to "pluggable quorum voting" to "pluggable lock
manager." The pluggable functionality really isn't about voting.
Conceptually it is much more like the functionality you'd get from a
distributed lock, so this naming is clearer. Both the docs and the
code have been changed.
- Reorganizes lock manager modules as sub-modules. The API and RI
modules are renamed, but that should be OK based on the
"experimental" tag that's been on this feature up to this point.
- Removes the "experimental" tag from the lock manager.
These changes will not break folks using the standalone broker. However,
they will break folks embedding the broker *if* they are using the
artemis-quorum-ri or artemis-quorum-api modules or the
o.a.a.a.c.c.h.DistributedPrimitiveManagerConfiguration class.
There are no functional changes here. Renaming these modules is more
of a conceptual change to facilitate better documentation and increased
adoption.
Whenever we create a queue with a filter we're instantiating 3 different
`org.apache.activemq.artemis.core.filter.impl.FilterImpl` objects. This
is wasteful and entirely avoidable.
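A hedged sketch of the parse-once idea; FilterImpl.createFilter() is the
real Artemis factory method, while the surrounding flow and helper are
illustrative:

```java
// Create the filter a single time and pass the instance to each layer that
// needs it, instead of re-parsing the same filter string three times.
Filter filter = FilterImpl.createFilter(filterString); // one parse
createQueueWith(filter); // hypothetical: downstream layers reuse the instance
```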
This fix was based on static analysis of the code and inspection of the
XA specification. There is no test associated with it due to the
difficulty of reproducing the failure. This code has been essentially
the same for a decade, and only now have there been any reports of it
actually sending back the wrong XA code.
Redistribution is not meant for internal queues, and this is
particularly true for the Mirrored SNF queue. If an internal queue
happens to have the same name on another server, it should not trigger
redistribution when consumers are removed.
It would be possible to work around this by adding an address-setting
specific to the address with redistribution disabled.
ClusteredMirrorSoakTest was intermittently failing because of this: for
a few seconds, while the mirror connection is still being established,
redistribution could move messages from one node towards another if
both nodes had a queue with the same name.
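A hedged sketch of the guard, not the literal patch;
Queue#isInternalQueue() exists in the Artemis server API, while the
surrounding method is illustrative:

```java
// Skip redistribution entirely for internal queues such as the mirror SNF
// queue; only user queues should be candidates.
private boolean shouldRedistribute(org.apache.activemq.artemis.core.server.Queue queue) {
   if (queue.isInternalQueue()) {
      return false; // internal queues must never trigger redistribution
   }
   return true;
}
```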
Fix intermittent test failures in the pull consumer test by asserting
that the expected number of messages are on the queue before running
the JMS consume cycle that consumes credit and triggers federation
credit to flow.
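A sketch of the kind of assertion involved, assuming the Artemis test
utility org.apache.activemq.artemis.utils.Wait; the queue reference,
count, and helper are illustrative:

```java
// Wait until the full backlog is visible on the queue before consuming, so
// the consume cycle reliably uses up credit and forces federation credit to
// be granted again.
Wait.assertEquals(MESSAGE_COUNT, queue::getMessageCount, 5_000);
runJmsConsumeCycle(); // hypothetical helper standing in for the test body
```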
Tests need to ensure federation links are up before sending to an
address, otherwise the sent message can be discarded before the
federation consumer is there to receive it.
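A sketch of such a readiness check, again assuming the Wait utility;
the exact lookup is illustrative:

```java
// Block until the federation consumer has attached to the local queue before
// producing, so the message is not discarded for lack of a consumer.
Wait.assertTrue(() -> server.locateQueue(queueName).getConsumerCount() == 1, 5_000);
producer.send(message); // safe to send once the federation link is up
```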
When federation is configured in both directions between nodes for an
address, a message can reflect from one node back to the other if max
hops is not set or is set incorrectly, and in some federation
topologies no max hops value can both solve the issue and still result
in a working configuration. This reflection should be prevented at the
federation consumer level for address consumers.
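A hedged sketch of what a consumer-level guard could look like; the
path annotation and helper are hypothetical, not the actual Artemis
implementation:

```java
// Reject messages that have already traversed this node, independent of any
// max-hops setting, so bidirectional federation cannot reflect them back.
boolean accept(org.apache.activemq.artemis.api.core.Message message) {
   java.util.List<String> federationPath = getFederationPath(message); // hypothetical helper
   return !federationPath.contains(localNodeId); // never reflect to the origin node
}
```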