This commit includes the following changes:
- Management operations to get success & failure counts for authn and
authz along with the corresponding audit logging (see the JMX sketch
below).
- Export the aforementioned authn & authz metrics.
- Export metrics for the underlying authn & authz caches including the
ability to enable/disable them.
- Update metrics tests to validate tags in addition to keys and values.
- Update documentation to explain new functionality and clarify
existing metric tags.
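A minimal JMX sketch for reading the new counters; the service URL and
the attribute names (e.g. "AuthenticationSuccessCount") are illustrative
assumptions, not necessarily the exact names introduced here:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class AuthnMetricsClient {
       public static void main(String[] args) throws Exception {
          JMXServiceURL url = new JMXServiceURL(
             "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
          try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
             MBeanServerConnection mbsc = connector.getMBeanServerConnection();
             // Artemis registers the broker MBean using this naming pattern.
             ObjectName broker = new ObjectName(
                "org.apache.activemq.artemis:broker=\"0.0.0.0\"");
             // Hypothetical attribute names for the new counters.
             Object ok = mbsc.getAttribute(broker, "AuthenticationSuccessCount");
             Object ko = mbsc.getAttribute(broker, "AuthenticationFailureCount");
             System.out.println("authn success=" + ok + ", failure=" + ko);
          }
       }
    }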
This commit includes the following improvements:
* Page transactions on the mirror target are now optional.
If you had an interrupted mirror while the target destination was paging, duplicate detection would be ineffective unless you used paged transactions.
* Users can now configure the ack manager retry intervals.
Say you need some time to remove a consumer from a target mirror. The delivering references would prevent acks from happening. You can allow larger retry intervals and more retries by tinkering with the ack manager retry parameters (see the configuration sketch after this list).
* AckManager restarted independently of incoming acks
The AckManager was only restarted when new acks were coming in. If you stopped receiving acks on a target server and restarted that server with pending acks, those acks would never be processed. The AckManager is now restarted as soon as the server is started.
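A broker.properties sketch of these tunables; the property names below
are my best understanding of the mirror/AckManager settings and should
be verified against the configuration docs:

    # retries while the matching reference may still be in memory (assumed name)
    mirrorAckManagerQueueAttempts=5
    # retries once the lookup moves to paged data (assumed name)
    mirrorAckManagerPageAttempts=2
    # delay in milliseconds between retries (assumed name)
    mirrorAckManagerRetryDelay=100
    # page transactions on the mirror target are now optional (assumed name)
    mirrorPageTransaction=false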
This commit does the following:
- Updates HA docs including the chapter on network isolation (i.e.
split brain). The network isolation chapter is now more about
high-level explanation and the HA doc now has all the configuration
parameters.
- Changes references to "pluggable quorum voting" to "pluggable lock
manager." The pluggable functionality really isn't about voting.
Conceptually it's much more like the functionality you'd get from a
distributed lock, so this naming is clearer. Both the docs and the
code have been changed.
- Reorganizes lock manager modules as sub-modules. The API and RI
modules are renamed, but that should be OK based on the
"experimental" tag that's been on this feature up to this point.
- Removes the "experimental" tag from the lock manager.
These changes will not break folks using the standalone broker. However,
they will break folks embedding the broker *if* they are using the
artemis-quorum-ri or artemis-quorum-api modules or the
o.a.a.a.c.c.h.DistributedPrimitiveManagerConfiguration class.
There are no functional changes here. Renaming these modules is more a
conceptual change to facilitate better documentation and increased
adoption.
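For embedders, the dependency change looks roughly like the following;
the new artifactId is inferred from the lock manager naming described
above and should be double-checked against the actual modules:

    <!-- before the rename -->
    <dependency>
       <groupId>org.apache.activemq</groupId>
       <artifactId>artemis-quorum-ri</artifactId>
    </dependency>

    <!-- after the rename (artifactId inferred from the new naming) -->
    <dependency>
       <groupId>org.apache.activemq</groupId>
       <artifactId>artemis-lockmanager-ri</artifactId>
    </dependency>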
This fix was based on static analysis of the code and inspection of the
XA specification. There is no test associated with it due to the
difficulty of reproducing the failure. This code has been
essentially the same for a decade and only now have there been any
reports of it actually sending back the wrong XA code.
Large message support was added to
o.a.a.a.c.s.f.FederatedQueueConsumerImpl#onMessage via cf85d35 for
ARTEMIS-3308. The problem with that change is that when onMessage
returns o.a.a.a.c.c.i.ClientConsumerImpl#callOnMessage will eventually
call o.a.a.a.c.c.i.ClientLargeMessageImpl#discardBody which eventually
ends up in o.a.a.a.c.c.i.LargeMessageControllerImpl#popPacket waiting 30
seconds (i.e. the default readTimeout) for more packets to arrive (which
never do). This happens because the FederatedQueueConsumer short-cuts
the "normal" process by using LargeMessageControllerImpl#take.
This commit fixes that by tracking the number of bytes "taken" and then
consulting that value later when discarding the body, effectively
skipping the 30-second wait.
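A simplified sketch of the idea; the class and names are illustrative,
not the actual LargeMessageControllerImpl code:

    // Track how much of the body was consumed via take() so that
    // discarding the body never waits for packets that won't arrive.
    public class TakenBytesTracker {
       private final long totalSize;   // advertised large message size
       private long bytesTaken;        // bytes consumed via take()

       public TakenBytesTracker(long totalSize) {
          this.totalSize = totalSize;
       }

       public void take(byte[] chunk) {
          bytesTaken += chunk.length;  // the FederatedQueueConsumer path
       }

       public void discardBody() {
          if (bytesTaken >= totalSize) {
             return; // body fully consumed; skip the readTimeout wait
          }
          // otherwise wait (up to readTimeout) for the remaining packets
       }
    }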
This commit:
- Eliminates MQTT session storage on every successful connection.
Instead, data is only written when subscriptions are created or
destroyed.
- Adds a configuration property for the storage timeout (see the sketch below).
- Updates the documentation with relevant information.
- Refactors a few bits of code to eliminate unnecessary variables, etc.
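A sketch of the new timeout on an MQTT acceptor in broker.xml; the
parameter name mqttSessionStatePersistenceTimeout is my assumption and
should be checked against the updated docs:

    <acceptor name="mqtt">tcp://0.0.0.0:1883?protocols=MQTT;mqttSessionStatePersistenceTimeout=5000</acceptor>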
If both scale-down and cluster-connection are using the same JGroups
discovery-group, then when the cluster-connection stops it will close the
underlying org.jgroups.JChannel, and when the scale-down process tries to
use it to find a server it will fail.
This commit ensures that the JGroupsBroadcastEndpoint implementation of
BroadcastEndpoint#openClient initializes the channel if it has been
closed.
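A simplified sketch of the guard; this is illustrative only, and the
cluster name and surrounding structure are assumptions rather than the
actual JGroupsBroadcastEndpoint code:

    import org.jgroups.JChannel;

    // Illustrative guard only; the real implementation differs.
    public class JGroupsEndpointSketch {
       private JChannel channel;

       public synchronized void openClient() throws Exception {
          if (channel == null || !channel.isOpen()) {
             // the cluster-connection may have closed the shared channel;
             // re-initialize it before scale-down uses it for discovery
             channel = new JChannel();
          }
          channel.connect("artemis-discovery"); // hypothetical cluster name
       }
    }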
When we know that a node leaves a cluster cleanly we shouldn't log WARN
messages about it.
Signed-off-by: Emmanuel Hugonnet <ehugonne@redhat.com>
This commit does the following:
- Replaces non-inclusive terms (e.g. master, slave, etc.) in the
source, docs, & configuration.
- Supports previous configuration elements, but logs when old elements
are used.
- Provides migration documentation.
- Updates XSD with new config elements and simplifies by combining some
overlapping complexTypes.
- Removes ambiguous "live" language that's used with regard to high
availability.
- Standardizes use of "primary," "backup," "active," & "passive" as
nomenclature to describe both configuration & runtime state for high
availability (see the example below).
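For example, a replicated broker that used the old <master> element is
now configured with <primary>; the old element is still accepted but
logged, as noted above:

    <!-- before -->
    <ha-policy>
       <replication>
          <master/>
       </replication>
    </ha-policy>

    <!-- after -->
    <ha-policy>
       <replication>
          <primary/>
       </replication>
    </ha-policy>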
Use the prefix "netty.http.header." to define HTTP headers for HTTP
requests sent from the Netty connector, as shown in the sketch below.
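A sketch of a connector using the prefix; the header name and value
here are just examples:

    <connector name="http-connector">tcp://localhost:8080?httpEnabled=true;netty.http.header.X-Forwarded-For=10.0.0.1</connector>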
Issue: https://issues.apache.org/jira/browse/ARTEMIS-4452
Signed-off-by: Emmanuel Hugonnet <ehugonne@redhat.com>
As I worked through implementing a more generic JSON marshaller, I tried using reflection through BeanUtils and other approaches;
however, the end result was always worse, as there were a few caveats that were not easy to handle.
For that reason I went with a declarative approach where I define a metadata object on AddressSettings and AddressSettingsInfo and
reuse the metadata in a few other places, as illustrated in the sketch below.
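An illustrative sketch of the declarative pattern; the types and names
here are assumptions, not the actual Artemis metadata classes. Each
attribute is described once, and the marshaller simply iterates the
descriptions:

    import java.util.List;
    import java.util.function.Function;

    // One entry per attribute: its JSON name plus how to read it.
    record AttributeMeta<T>(String name, Function<T, Object> getter) { }

    class AddressSettingsSketch {
       long maxSizeBytes = 1024;
       String deadLetterAddress = "DLA";

       // metadata defined once, reusable by the JSON marshaller and others
       static final List<AttributeMeta<AddressSettingsSketch>> META = List.of(
          new AttributeMeta<>("maxSizeBytes", s -> s.maxSizeBytes),
          new AttributeMeta<>("deadLetterAddress", s -> s.deadLetterAddress));

       // naive serializer for the sketch: quotes every value as a string
       String toJson() {
          StringBuilder sb = new StringBuilder("{");
          for (AttributeMeta<AddressSettingsSketch> m : META) {
             if (sb.length() > 1) sb.append(',');
             sb.append('"').append(m.name()).append("\":\"")
               .append(m.getter().apply(this)).append('"');
          }
          return sb.append('}').toString();
       }
    }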