Allow the Source address to provide a consumer priority on the address using the
same option syntax as a core consumer, '?consumer-priority=X'. The change parses
any query string appended to an address, uses the address portion as the
actual receiver address, and currently only honors the consumer priority value in
the extracted address query parameters, ignoring any other options found. An
existing consumer priority taken from the link properties takes precedence over the
value placed in the address query options if both are present.
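As a rough, hypothetical sketch of the parsing described above (the class and helper names here are illustrative, not the broker's actual implementation), the portion before '?' becomes the receiver address and only the consumer-priority option is honored:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

// Illustrative only: split a configured source address into the real receiver
// address plus an optional consumer priority, ignoring any other query options.
public final class SourceAddressSketch {
   public static void main(String[] args) {
      String source = "orders?consumer-priority=5&some-other-option=ignored";

      int query = source.indexOf('?');
      String address = query < 0 ? source : source.substring(0, query);
      Integer priority = null;

      if (query >= 0) {
         for (String pair : source.substring(query + 1).split("&")) {
            String[] kv = pair.split("=", 2);
            String key = URLDecoder.decode(kv[0], StandardCharsets.UTF_8);
            if ("consumer-priority".equals(key) && kv.length == 2) {
               priority = Integer.valueOf(URLDecoder.decode(kv[1], StandardCharsets.UTF_8));
            }
            // any other option found in the query string is ignored
         }
      }

      System.out.println("receiver address = " + address + ", consumer priority = " + priority);
   }
}
```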
A race between consumer creation and broker shutdown could lead to a deadlock
when trying to access configuration from the policy manager while the federation
instance is trying to shut down the policy manager.
When the Database option is used, the packets on the ServerMessage may still be in transit.
We should sync the file on the Database before we actually send it; otherwise we would get an assertion error in the send method.
This commit includes the following changes:
- Management operations to get success & failure counts for authn and
authz along with the corresponding audit logging.
- Export the aforementioned authn & authz metrics.
- Export metrics for the underlying authn & authz caches including the
ability to enable/disable them.
- Update metrics tests to validate tags in addition to keys and values.
- Update documentation to explain new functionality and clarify
existing metric tags.
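For illustration, success/failure counts like these are typically exported as tagged counters through Micrometer (which the broker's metrics support is built on); the metric and tag names below are assumptions and may differ from what the broker actually registers:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

// Hypothetical sketch of tagged authn counters; real metric/tag names may differ.
public class AuthMetricsSketch {
   public static void main(String[] args) {
      MeterRegistry registry = new SimpleMeterRegistry();

      Counter authnSuccess = Counter.builder("artemis.authentication.count")
                                    .tag("result", "success")
                                    .register(registry);
      Counter authnFailure = Counter.builder("artemis.authentication.count")
                                    .tag("result", "failure")
                                    .register(registry);

      authnSuccess.increment();
      authnFailure.increment();

      // A test validating tags in addition to keys and values would inspect the meter IDs:
      registry.getMeters().forEach(meter ->
            System.out.println(meter.getId().getName() + " " + meter.getId().getTags()));
   }
}
```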
The original commit (1ee3e884b7) for this
issue wasn't completely correct. This commit fixes the remaining problems so
that both the messageCount and scheduledMessageCount are now accurate when
a scheduled message is removed by its ID.
PagingStore is supposed to send an event to the replica for every file that is closed.
There are a few situations where the sendClose is being missed, and that could generate leaks on the target.
In this commit I'm storing a binding record with the address-settings for the correct size.
This also validates eventual merges of the AddressSettings in the same namespace.
This is a list of improvements done as part of this commit / task:
* Page transactions on the mirror target are now optional.
If the mirror was interrupted while the target destination was paging, duplicate detection would be ineffective unless you used paged transactions.
* Users can now configure the ack manager retry intervals.
Say you need some time to remove a consumer from a target mirror. The delivering references would prevent acks from happening. You can allow longer retry intervals and a higher number of retries by tinkering with the ack manager retry parameters.
* The AckManager is now restarted independently of incoming acks.
The AckManager used to be restarted only when new acks were coming in. If you stopped receiving acks on a target server and restarted that server with pending acks, those acks would never be exercised. The AckManager is now restarted as soon as the server is started.
When creating internal temporary queues for the federation control links and the
events link we should use a structured naming convention to ease configuring
security for the federation user: all internal names fall under a root prefix
which can be used to grant read and write access to the federation user. This
change allows security on the wildcarded address "$ACTIVEMQ_ARTEMIS_FEDERATION.#".
This change also adds some further restrictions to federation resources
and adds support for wildcard matching of '$'-prefixed addresses.
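Purely as an illustration of why a common root prefix helps (the prefix is the one quoted above, but the rest of the name layout here is made up), a single wildcard security match covers every internal federation queue name:

```java
import java.util.UUID;

// Hypothetical naming sketch: the actual structure after the prefix is not defined here.
public class FederationNamingSketch {
   private static final String PREFIX = "$ACTIVEMQ_ARTEMIS_FEDERATION";

   public static void main(String[] args) {
      String federationName = "broker-b-federation";
      String controlQueue = PREFIX + "." + federationName + ".control." + UUID.randomUUID();
      String eventsQueue  = PREFIX + "." + federationName + ".events." + UUID.randomUUID();

      // A security-setting whose match is "$ACTIVEMQ_ARTEMIS_FEDERATION.#" grants the
      // federation user read/write on both of these (and any other) internal names.
      System.out.println(controlQueue);
      System.out.println(eventsQueue);
   }
}
```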
Allow for configuration of the batch size granted to the remote when an
AMQP federation queue receiver is pulling messages only when there is
local capacity to handle them. Some code housekeeping is done here to
make adding future properties a bit simpler and require fewer changes.
Create a new NettyConnector for each connection attempt, configured from the
distinct broker connection URIs, which allows differing TLS configuration
per remote connection configuration.
Correct the XML parser for core federation queue match policy loading
to call the setQueueMatch instead of setAddressMatch when reading the
queue match element.
Redistribution would add data to the record, which in turn would make the record too large to redistribute.
The Redistributor and Bridges should not be removed.
A warning should also be logged to alert users to the situation.
This commit does the following:
- Updates HA docs including the chapter on network isolation (i.e.
split brain). The network isolation chapter is now more about
high-level explanation and the HA doc now has all the configuration
parameters.
- Changes references to "pluggable quorum voting" to "pluggable lock
  manager." The pluggable functionality really isn't about voting.
  Conceptually it's much more like the functionality you'd get from a
  distributed lock, so this naming is clearer. Both the docs and the
  code have been changed.
- Reorganize lock manager modules as sub-modules. The API and RI
modules are renamed, but that should be OK based on the
"experimental" tag that's been on this feature up to this point.
- Remove the "experimental" tag from the lock manager.
These changes will not break folks using the standalone broker. However,
they will break folks embedding the broker *if* they are using the
artemis-quorum-ri or artemis-quorum-api modules or the
o.a.a.a.c.c.h.DistributedPrimitiveManagerConfiguration class.
There are no functional changes here. Renaming these modules is more of a
conceptual change to facilitate better documentation and increased
adoption.
Whenever we create a queue with a filter we're instantiating 3 different
`org.apache.activemq.artemis.core.filter.impl.FilterImpl` objects. This
is wasteful and entirely avoidable.
Fix intermittent test failures in the pull consumer test by asserting that there
are the expected number of messages on the queue before running the JMS consume
cycle to consume credit and trigger federation credit to flow.
Tests need to ensure federation links are up before sending to an address,
otherwise the sent message can get discarded before the federation consumer is there
to receive it.
When federation is configured in both directions between nodes for an address,
a message can reflect from one node back to the other if max hops is not set or not
set correctly, and in some federation topologies no max hops value can solve
the issue while still resulting in a working configuration. This reflection should
be prevented at the federation consumer level for address consumers.
Generate MQTT message IDs from the full allowed range of 1-65535 and skip
currently used values. Do not use an atomic integer for the current ID, because
all accesses and modifications are performed in a synchronized context.
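A minimal sketch of that allocation strategy (illustrative only, not the broker's actual MQTT session code): cycle through 1-65535, skip IDs that are still outstanding, and rely on plain synchronization rather than an atomic:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative ID allocator: wraps around the full 1-65535 range and skips in-use values.
public class MqttIdAllocatorSketch {
   private final Set<Integer> inUse = new HashSet<>();
   private int lastId = 0;

   public synchronized int nextId() {
      for (int i = 0; i < 65535; i++) {
         lastId = lastId % 65535 + 1;   // wraps 65535 -> 1 and never produces 0
         if (inUse.add(lastId)) {
            return lastId;
         }
      }
      throw new IllegalStateException("All 65535 MQTT packet IDs are currently in use");
   }

   public synchronized void release(int id) {
      inUse.remove(id);
   }
}
```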
When queue consumers attach with filters, use those instead of the queue
filter to filter the messages that are federated, to avoid stranding
messages on the local broker. This will result in multiple federation
consumers if the various attached local consumers all use different
filters, but it does keep unwanted messages on the remote so that consumers
there can consume them.
Under some scenarios federation demand tracking is losing track of total demand
for a federated resource leading to teardown of federated links before all local
demand has been removed from the resource. This occurs most often if the attempts
to establish a federation link are refused because the resource hasn't yet been
created and an eventual attach succeeds, but can also occur in combination with
a plugin blocking or not blocking federation link creation in some cases.
Large message support was added to
o.a.a.a.c.s.f.FederatedQueueConsumerImpl#onMessage via cf85d35 for
ARTEMIS-3308. The problem with that change is that when onMessage
returns o.a.a.a.c.c.i.ClientConsumerImpl#callOnMessage will eventually
call o.a.a.a.c.c.i.ClientLargeMessageImpl#discardBody which eventually
ends up in o.a.a.a.c.c.i.LargeMessageControllerImpl#popPacket waiting 30
seconds (i.e. the default readTimeout) for more packets to arrive (which
never do). This happens because the FederatedQueueConsumer short-cuts
the "normal" process by using LargeMessageControllerImpl#take.
This commit fixes that by tracking the number of bytes "taken" and then
looking at that value later when discarding the body, effectively
skipping the 30-second wait.
When an AMQP federation instance attempts to federate an address or queue,
it can fail if the remote address or queue is not present or cannot be
created based on broker policy. A federation link can also be closed if the
federated resource is removed from the remote broker by management, etc.
In those cases the remote broker should note the resources that were
targets of federation and send alerts to the source federation broker to
notify it when these resources become available for federation again, and the
source should attempt again to create federation links if demand still
exists. This allows an AMQP federation instance to heal itself based on
updates from the remote.
No problems were reported on this test.
I needed to validate whether messages were being distributed correctly when either the SNF queue or the final address itself was paged. Rather than throw away the test I decided to keep the validation here.
Currently when an MQTT topic filter contains characters from the
configured wildcard syntax the conversion to/from this syntax breaks.
For example, when using the default wildcard syntax, if an MQTT topic
filter contains a `.` then the conversion from the MQTT wildcard syntax to the
core wildcard syntax and back will result in the `.` being replaced with
a `/.`.
This commit fixes that plus a few other things...
- Implements proper conversions to/from one WildcardConfiguration to
another.
- Refactors the MQTT code which invokes these conversion methods. This
includes simplifying a lot of test code.
- Adds lots of tests for everything.
- Clarifies some variable naming to better distinguish between core and
MQTT.
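For reference, a much-simplified sketch of the default mapping between the two syntaxes; it only swaps the special characters and deliberately ignores the literal-character escaping that this commit actually addresses:

```java
// Simplified illustration: MQTT uses '/' as separator, '+' single level, '#' multi level;
// the default core syntax uses '.', '*' and '#'. Literal '.'/'/' escaping is NOT handled here.
public final class WildcardConversionSketch {
   public static String mqttToCore(String mqttFilter) {
      return mqttFilter.replace('/', '.').replace('+', '*');
   }

   public static String coreToMqtt(String coreAddress) {
      return coreAddress.replace('.', '/').replace('*', '+');
   }

   public static void main(String[] args) {
      System.out.println(mqttToCore("sport/tennis/+"));   // sport.tennis.*
      System.out.println(coreToMqtt("sport.tennis.#"));   // sport/tennis/#
   }
}
```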
- Move ActiveMQTestBase to artemis-test-support.
- Add reduced parent for current artemis-server tests.
- Add a simpler test case parent class unit tests can use.
- Convert some existing checks into a rule for reuse.
- Move various rules/utils to artemis[-unit]-test-support module from where they can be used instead of from artemis-server.
If both scale-down and the cluster-connection are using the same JGroups
discovery-group, then when the cluster-connection stops it will close the
underlying org.jgroups.JChannel, and when the scale-down process tries to
use it to find a server it will fail.
This commit ensures that the JGroupsBroadcastEndpoint implementation of
BroadcastEndpoint#openClient initializes the channel if it has been
closed.