Correct the XML parser for core federation queue match policy loading
so that it calls setQueueMatch instead of setAddressMatch when reading
the queue match element.
Redistribution would add data to the record, which in turn made the record too large to redistribute.
The Redistributor and Bridges should not be removed.
A warning should also be logged to alert users to the situation.
This commit does the following:
- Updates HA docs including the chapter on network isolation (i.e.
split brain). The network isolation chapter is now more about
high-level explanation and the HA doc now has all the configuration
parameters.
- Changes references to "pluggable quorum voting" to "pluggable lock
manager." The pluggable functionality really isn't about voting.
Conceptually it's much more like the functionality you'd get from a
distributed lock, so this naming is clearer. Both the docs and the
code have been changed.
- Reorganize lock manager modules as sub-modules. The API and RI
modules are renamed, but that should be OK based on the
"experimental" tag that's been on this feature up to this point.
- Remove the "experimental" tag from the lock manager.
These changes will not break folks using the standalone broker. However,
they will break folks embedding the broker *if* they are using the
artemis-quorum-ri or artemis-quorum-api modules or the
o.a.a.a.c.c.h.DistributedPrimitiveManagerConfiguration class.
There are no functional changes here. Renaming these modules is more a
conceptual change to facilitate better documentation and increased
adoption.
Whenever we create a queue with a filter we're instantiating 3 different
`org.apache.activemq.artemis.core.filter.impl.FilterImpl` objects. This
is wasteful and entirely avoidable.
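A minimal sketch of the idea, assuming the public FilterImpl.createFilter helper
(the actual queue-creation path differs): parse the filter string once and share
that Filter instance rather than re-creating it for each collaborator.
```
import org.apache.activemq.artemis.core.filter.Filter;
import org.apache.activemq.artemis.core.filter.impl.FilterImpl;

public final class FilterReuseSketch {
   public static void main(String[] args) throws Exception {
      // Parse the selector a single time...
      Filter filter = FilterImpl.createFilter("color = 'red'");
      // ...and hand this same instance to every component that needs it
      // (the queue, its paging subscription, etc.) instead of re-creating it.
      System.out.println(filter.getFilterString());
   }
}
```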
Redistribution is not meant for internal queues; this is particularly true for the Mirrored SNF queue. If an internal queue happens to have the same name on another server, it should not trigger redistribution when consumers are removed.
It would be possible to work around this by adding an address-setting specific to the address with redistribution disabled.
ClusteredMirrorSoakTest was intermittently failing because of this. For a few seconds, while the mirror connection is still being established, connections could move messages from one node to another if both have the same name.
Fix intermittent test failures in the pull consumer test by asserting that
the expected number of messages are on the queue before running the JMS
consume cycle to consume credit and trigger federation credit to flow.
Tests need to ensure federation links are up before sending to an address,
or the sent message can be discarded before the federation consumer is there
to receive it.
When federation is configured in two directions between nodes for an address,
a message can reflect from one node back to the other if max hops is not set
or is set incorrectly. In some federation topologies no max hops value can
both solve the issue and still result in a working configuration, so this
reflection should be prevented at the federation consumer level for address
consumers.
These tests exercise an asynchronous feature, so the check on the log has to
use Wait.assertEquals. I had to make a few changes to the methods to allow
the use of Wait.
Generate MQTT message IDs from the full allowed range of 1-65535 and skip
currently used values. Do not use an atomic integer for the current ID,
because all accesses and modifications are performed in a synchronized
context.
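A small illustrative sketch of that allocation strategy; the class and field
names are assumptions, not the broker's MQTT session code. IDs cycle through
the full 1-65535 range, values still in use are skipped, and a plain int
suffices because access is synchronized.
```
import java.util.HashSet;
import java.util.Set;

// Illustrative only; the broker's MQTT session state class is different.
final class MqttMessageIdSketch {
   private int lastId = 0;                           // plain int: access is synchronized
   private final Set<Integer> usedIds = new HashSet<>();

   synchronized int nextId() {
      for (int attempts = 0; attempts < 65535; attempts++) {
         lastId = lastId == 65535 ? 1 : lastId + 1;  // full allowed range 1-65535, never 0
         if (usedIds.add(lastId)) {                  // skip IDs still in use
            return lastId;
         }
      }
      throw new IllegalStateException("all 65535 MQTT message IDs are in use");
   }

   synchronized void release(int id) {               // call once the ID is no longer in use
      usedIds.remove(id);
   }
}
```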
When Queue consumers attach with filters, use those instead of the Queue
filter to filter the messages that are federated, to avoid stranding
messages on the local broker. This will result in multiple federation
consumers if the attached local consumers all use different filters, but it
keeps unwanted messages on the remote broker so that consumers there can
consume them.
Under some scenarios federation demand tracking loses track of the total
demand for a federated resource, leading to teardown of federated links
before all local demand has been removed from the resource. This occurs most
often when initial attempts to establish a federation link are refused
because the resource hasn't yet been created and a later attach succeeds,
but it can also occur in combination with a plugin blocking (or not
blocking) federation link creation in some cases.
Large message support was added to
o.a.a.a.c.s.f.FederatedQueueConsumerImpl#onMessage via cf85d35 for
ARTEMIS-3308. The problem with that change is that when onMessage
returns o.a.a.a.c.c.i.ClientConsumerImpl#callOnMessage will eventually
call o.a.a.a.c.c.i.ClientLargeMessageImpl#discardBody which eventually
ends up in o.a.a.a.c.c.i.LargeMessageControllerImpl#popPacket waiting 30
seconds (i.e. the default readTimeout) for more packets to arrive (which
never do). This happens because the FederatedQueueConsumer short-cuts
the "normal" process by using LargeMessageControllerImpl#take.
This commit fixes that by tracking the number of bytes "taken" and then
looking at that value later when discarding the body, effectively skipping
the 30-second wait.
When an AMQP federation instance attempts to federate an address or queue
it can fail if the remote address or queue is not present or cannot be
created based on broker policy. A federation link can also be closed if the
federated resource is removed from the remote broker by management, etc.
In those cases the remote broker should note the resources that were
targets of federation and send alerts to the source federation broker to
notify it that these resources have become available for federation, and
the source should attempt to create the federation links again if demand
still exists. This allows an AMQP federation instance to heal itself based
on updates from the remote.
No problems were reported on this test.
I needed to validate whether messages were being distributed correctly when either the SNF or the final address itself was paged. Rather than throw away the test I decided to keep the validation here.
I had to remove the indirect dependency between the maven plugin and jline
to prevent the maven plugin from parsing classes in jline that require
experimental JDK features even when they are not in use.
Currently when an MQTT topic filter contains characters from the
configured wildcard syntax the conversion to/from this syntax breaks.
For example, when using the default wildcard syntax if an MQTT topic
filter contains a . the conversion from the MQTT wildcard syntax to the
core wildcard syntax and back will result in the `.` being replaced with
a `/.`.
This commit fixes that plus a few other things...
- Implements proper conversions to/from one WildcardConfiguration to
another.
- Refactors the MQTT code which invokes these conversion methods. This
includes simplifying a lot of test code.
- Adds lots of tests for everything.
- Clarifies some variable naming to better distinguish between core and
MQTT.
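To make the round-trip property concrete, here is a self-contained sketch
(not the broker's WildcardConfiguration API) of a symmetric character mapping
between the MQTT and core syntaxes; because the mapping is a bijection,
converting a filter and converting it back yields the original string instead
of artifacts like `/.`.
```
// Self-contained illustration only: a symmetric character swap between two
// wildcard syntaxes, so conversion is reversible.
public final class WildcardSyntaxSketch {

   static final char[] MQTT = {'/', '+', '#'};   // delimiter, single-word, any-words
   static final char[] CORE = {'.', '*', '#'};

   static String convert(String filter, char[] from, char[] to) {
      StringBuilder out = new StringBuilder(filter.length());
      for (char c : filter.toCharArray()) {
         out.append(map(c, from, to));
      }
      return out.toString();
   }

   // Swap special characters in both directions so the conversion stays reversible.
   private static char map(char c, char[] from, char[] to) {
      for (int i = 0; i < from.length; i++) {
         if (c == from[i]) {
            return to[i];
         }
         if (c == to[i]) {
            return from[i];
         }
      }
      return c;   // ordinary characters pass through unchanged
   }

   public static void main(String[] args) {
      String mqttFilter = "a.b/+/c";                  // '.' is a literal in MQTT syntax
      String core = convert(mqttFilter, MQTT, CORE);  // -> "a/b.*.c"
      String back = convert(core, CORE, MQTT);        // -> "a.b/+/c" (round trip intact)
      System.out.println(core + " -> " + back);
   }
}
```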
- Move ActiveMQTestBase to artemis-test-support.
- Add reduced parent for current artemis-server tests.
- Add a simpler test case parent class unit tests can use.
- Convert some existing checks into a rule for reuse.
- Move various rules/utils to artemis[-unit]-test-support module from where they can be used instead of from artemis-server.
If both scale-down and cluster-connection are using the same JGroups
discovery-group then when the cluster-connection stops it will close the
underlying org.jgroups.JChannel and when the scale-down process tries to
use it to find a server it will fail.
This commit ensures that the JGroupsBroadcastEndpoint implementation of
BroadcastEndpoint#openClient initializes the channel if it has been
closed.
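A minimal sketch of the intent of the fix, assuming direct use of
org.jgroups.JChannel; the broker actually goes through a channel-manager
wrapper, so the real code differs, and the field names here are assumptions.
```
import org.jgroups.JChannel;

// Sketch only; configuration source and naming are illustrative.
final class JGroupsEndpointSketch {
   private final String configFile;    // e.g. a JGroups XML stack definition
   private final String channelName;   // the discovery-group's channel name
   private JChannel channel;

   JGroupsEndpointSketch(String configFile, String channelName) {
      this.configFile = configFile;
      this.channelName = channelName;
   }

   synchronized void openClient() throws Exception {
      if (channel == null || channel.isClosed()) {
         channel = new JChannel(configFile);   // the old channel was closed by the cluster connection
      }
      if (!channel.isConnected()) {
         channel.connect(channelName);         // (re)join so scale-down discovery can use it
      }
   }
}
```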
PagingStoreImpl.checkReleasedMemory() will kick off
executor.execute(this::memoryReleased) to pull from the onMemoryFreedRunnables
queue asynchronously. If the executor fires the task too late, it can pick up
one of the late trackMemoryChecks runnables and increase its call count,
making the assertion fail. We need to flush the executors to make sure this
doesn't happen.
When we know that a node leaves a cluster cleanly we shouldn't log WARN
messages about it.
Signed-off-by: Emmanuel Hugonnet <ehugonne@redhat.com>
If the broker is embedded into a Jakarta environment then the existing
artemis-openwire-protocol module won't work because it uses javax
classes.
This commit adds a new Jakarta-specific module that can be used to
support OpenWire clients in Jakarta environments (e.g. Spring Boot 3).
Users will simply need to include this version on their classpath to
enable support.
Allows the configuration of AMQP Federation broker connections to be updated
and reloaded. This allows for updating, adding, or removing AMQP federation
broker connections as well as the basic AMQP sender and receiver broker
connections. Changes to AMQP broker connections that are performing mirroring
are checked for and ignored, as reloading those would lead to issues that can
break mirroring.
When initially developed, the expectation was that no more producers would
keep connecting, but in a scenario like this the consumers could actually
give up and things would just accumulate on the server. We should clean
these up upon disconnect.
Mirror acks should be performed atomically with the storage of the source ACK. Both the send of the ack and the recording of the ack should be part of the same transaction (in the transactional case).
We are also adding support on transactions for an afterWired callback so the OperationContext sync can be plugged in properly.
- Async commit
* Async here means the recording of the commit record does not sync on the storage.
This is useful for internal operations where we don't need an immediate sync on the journal storage.
- Wired notification
* I need finer control after the record is wired (to the storage) and before the completions, so I can plug the sync context on the mirror right before the commit is called.
Many MQTT tests are run twice - once using TCP and once using
WebSockets. This is essentially a big waste of time since once the
connection is established to the broker the tests are identical. The
tests should be refactored to run just once and then there can be a
small number of tests specifically for WebSockets.
This should knock several minutes off the test-suite.
This commit does the following:
- Replaces non-inclusive terms (e.g. master, slave, etc.) in the
source, docs, & configuration.
- Supports previous configuration elements, but logs when old elements
are used.
- Provides migration documentation.
- Updates XSD with new config elements and simplifies by combining some
overlapping complexTypes.
- Removes ambiguous "live" language that's used with regard to high
availability.
- Standardizes use of "primary," "backup," "active," & "passive" as
nomenclature to describe both configuration & runtime state for high
availability.
Use the prefix "netty.http.header." to define HTTP headers used for HTTP
requests from the Netty connector.
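A sketch of how such a prefix could be handled, assuming the connector
parameters are available as a simple Map; the real NettyConnector wiring
differs, and the class and method names here are illustrative.
```
import java.util.Map;
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.HttpHeaders;

// Any parameter key starting with "netty.http.header." contributes one header
// to the outgoing HTTP request.
final class HttpHeaderParamsSketch {
   private static final String PREFIX = "netty.http.header.";

   static HttpHeaders toHttpHeaders(Map<String, Object> connectorParams) {
      HttpHeaders headers = new DefaultHttpHeaders();
      for (Map.Entry<String, Object> entry : connectorParams.entrySet()) {
         if (entry.getKey().startsWith(PREFIX)) {
            // strip the prefix, keep the remainder as the header name
            headers.add(entry.getKey().substring(PREFIX.length()), entry.getValue());
         }
      }
      return headers;
   }
}
```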
Issue: https://issues.apache.org/jira/browse/ARTEMIS-4452
Signed-off-by: Emmanuel Hugonnet <ehugonne@redhat.com>
ShutdownOnCriticalIOErrorMoveNextTest was actually my "inspiration" for creating the isolated tests module in the first place, so it is reasonable to move it there as well.
The issue comes down to the test simulating a server failure where the server goes down abruptly, causing the VM to drop/exit. In this test the failure leaves a partial shutdown where server executors are still hanging around, causing a cascading leak of server pools that keep running.
It is therefore completely expected to have these threads "leaking", and hence this test should be moved into the isolated-tests module.
There is no semantic or server change as part of this task; it's just moving a test to a better place.
As I worked through implementing a more generic JSON marshaller, I tried using reflection through BeanUtils and other approaches;
however, the end result was always worse as there were a few caveats that were not easy to accommodate.
For that reason I went with a declarative approach where I define a metadata object on AddressSettings and AddressSettingsInfo and
reuse that metadata in a few other places.
Starting with 2.28.0, the broker doesn't translate the character `/` to
the configured wildcard delimiter (i.e. `.` by default) when creating
subscription queues for MQTT clients.
This commit fixes that regression and restores the proper translation.
Allow for core messages to be tunneled over broker connection links used
for AMQP Federation and for broker mirroring. This eliminates the need to
convert from Core to AMQP and avoids loading core large messages fully into
memory for that conversion.
I was not able to reproduce the actual issue here, but I heavily used this test during debugging.
This will not serve as a reproducer for the ghost consumer issue, but it is a valid test.
The system property `artemis.extra.libs` is a comma-separated list of
directories that contain jar files, e.g.
```
-Dartemis.extra.libs=/usr/local/share/java/lib1,/usr/local/share/java/lib2
```
The environment variable `ARTEMIS_EXTRA_LIBS` is a comma-separated list of
directories that contain jar files and is ignored if the system property
`artemis.extra.libs` is defined, e.g.
```
export ARTEMIS_EXTRA_LIBS=/usr/local/share/java/lib1,/usr/local/share/java/lib2
```
When large messages are produced and a consumer receives an expired message, the credits are not updated. So if the consumer is too slow and an expiry delay has been set, we can end up with no credits left, which prevents the consumer from receiving any more messages.
If the bindings "news.#" and "news.europe.#" are registered, only the first one matches the address "news.europe" while both are supposed to match. These changes are meant to remove this limitation.
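An illustrative matcher (not the broker's binding code) showing the semantics
the change restores: `#` matches zero or more words, so both bindings match
the address.
```
import java.util.Arrays;
import java.util.List;

public final class SimpleWildcardMatch {

   static boolean matches(String pattern, String address) {
      return matches(Arrays.asList(pattern.split("\\.")), Arrays.asList(address.split("\\.")));
   }

   private static boolean matches(List<String> pattern, List<String> address) {
      if (pattern.isEmpty()) {
         return address.isEmpty();
      }
      String head = pattern.get(0);
      if (head.equals("#")) {
         // '#' may swallow zero or more words
         for (int i = 0; i <= address.size(); i++) {
            if (matches(pattern.subList(1, pattern.size()), address.subList(i, address.size()))) {
               return true;
            }
         }
         return false;
      }
      if (address.isEmpty()) {
         return false;
      }
      if (head.equals("*") || head.equals(address.get(0))) {
         return matches(pattern.subList(1, pattern.size()), address.subList(1, address.size()));
      }
      return false;
   }

   public static void main(String[] args) {
      System.out.println(matches("news.#", "news.europe"));          // true
      System.out.println(matches("news.europe.#", "news.europe"));   // true
   }
}
```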
Durable subscription state is part of the MQTT specification which has
not been supported until now. This functionality is implemented via an
internal last-value queue. When an MQTT client creates, updates, or
adds a subscription, a message using the client ID as the last value is
sent to the internal queue. When the broker restarts this data is read
from the queue and populates the in-memory MQTT data-structures.
Therefore subscribers can reconnect and resume their session's
subscriptions without having to manually resubscribe.
MQTT state is now managed centrally per-broker rather than in the
MQTTProtocolManager since there is one instance of MQTTProtocolManager
for each acceptor allowing MQTT connections. Managing state per acceptor
could lead to odd behavior with clients connecting to different acceptors
with the same client ID.
The subscriptions are serialized as raw bytes with a "version" byte for
potential future use, but I intentionally avoided adding complex
scaffolding to support multiple versions. We can add that complexity
later if necessary.
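A hedged sketch of that layout: a single leading version byte followed by the
raw subscription payload. The class name and payload handling are assumptions,
not the broker's actual serialization code.
```
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

final class SubscriptionStateFormatSketch {
   private static final byte VERSION = 1;   // reserved so the format can evolve later

   static byte[] encode(byte[] subscriptionPayload) throws IOException {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream(subscriptionPayload.length + 1);
      try (DataOutputStream out = new DataOutputStream(bytes)) {
         out.writeByte(VERSION);            // version byte first
         out.write(subscriptionPayload);    // then the raw subscription data
      }
      return bytes.toByteArray();
   }
}
```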
Some tests needed to be changed since instantiating an MQTT protocol
manager now creates an internal queue. A handful of tests assume that no
queues will exist other than the ones they create themselves. I updated
the main test super-class so that an MQTT protocol manager is not
automatically instantiated when configuring a broker for in-vm support.
The exception thrown by serverLocator.connect() should be all you need in
such a case, and the caller should then be responsible for taking appropriate
action.
Only report finding a matching log message if all requested entries are present in it, not just the last one provided.
Also fix the updated AssertionLoggerHandler usage within AddressFullLoggingTest, ensuring it is active across the full period in which expected messages can happen and doesn't miss early ones.
The test cannot work on Windows unless I can make the `upgrade` CLI command
respect my choice to upgrade a Linux distribution. This commit therefore adds
a new `--linux` option for the `upgrade` command, and leverages it in the
`upgrade-linux` smoke test.
* The `--cygwin` option has been preserved for backwards compatibility.
* The `IS_CYGWIN` attribute has been renamed to `IS_NIX` to reflect the change.
* The OS "recognition" method (in `InstallAbstract::run`) has been updated to
reflect the need for enforcing *nix behavior, which is now the default if all
other methods fail.
This commit introduces support for configuring a specific Duplicate ID cache size per address in the Artemis server. Previously, there was only a global setting for the ID cache size, but now each address can have its own cache size.
The changes include the addition of a new configuration property id-cache-size in the Artemis server configuration file. This property can now be specified under each address setting in the configuration file, and its value will determine the Duplicate ID cache size for that particular address. If the id-cache-size property is not specified for an address, it will use the global setting.
The test cases have been updated to cover this new functionality, and integration tests have been added to verify that address-specific cache sizes work as expected.
Documentation has been added to address-settings.adoc, configuration-index.adoc, and duplicate-detection.adoc.
ARTEMIS-4375 Implement artemis shell using JLine3 integrated with auto-completion from picocli
This commit involves two JIRAs: one adds PicoCLI and the other uses JLine3 to implement a shell.
I tried to keep these commits separate, but the changes became interdependent, hence the two JIRAs are squashed into this commit.
The test now sets the mirror to sync.
It will block until the first subscription is consumed, kill the servers and restart them,
check all the counters,
and then start another 4 consumers and at the end check all the counters again.
With the mirror now sync, the test is more useful and challenging.
This commit contains the following changes:
- eliminate used, undeclared dependencies
- eliminate unused, declared dependencies
- fix scope for test dependencies
- eliminate org.hamcrest completely as its use involved deprecated code
as well as dependencies from multiple versions
In rare cases a store operation could silently fail or starve, blocking the
related server session and all delivering messages. Those server sessions can
now be closed via a management method that cleans their operation context
before closing them.
If the Security Manager is using Netty, and in particular the same Netty connection,
you could run into a deadlock / starvation.
This is particularly true in the Wildfly case where they reuse the same connection for everything via XNIO.
When resource audit logging is enabled STOMP is completely inoperable
due to an NPE during the protocol handshake. Unfortunately the failure
is completely silent. There are no logs to indicate a problem.
This commit fixes this problem via the following changes:
- Mitigate the original NPE via a check for null
- Move the logic necessary to set the "protocol connection" on the
"transport connection" to a class shared by all implementations.
- Add exception handling to log failures like this in the future.
- Add tests to ensure the audit logging is correct.
Improve CORE client failover by connecting to other live servers when all
reconnect attempts fail, i.e. in a cluster composed of 2 live servers,
when the server to which the CORE client is connected goes down the CORE
client should reconnect its sessions to the other live broker.
Continually read from the compressed byte[] into
the decompressed object.
Add a test to validate that large (>1024 bytes) compressed data can be
deserialized properly.
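A self-contained sketch of that loop using java.util.zip.Inflater (not the
client's exact code): inflate() is called repeatedly until the stream is
finished, so payloads larger than one buffer, e.g. > 1024 bytes, are fully
reconstructed before deserialization. The buffer size and class name are
assumptions.
```
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

final class DecompressSketch {
   static byte[] decompress(byte[] compressed) throws DataFormatException {
      Inflater inflater = new Inflater();
      try {
         inflater.setInput(compressed);
         ByteArrayOutputStream out = new ByteArrayOutputStream(compressed.length * 2);
         byte[] buffer = new byte[1024];
         while (!inflater.finished()) {
            int n = inflater.inflate(buffer);   // continually read into the buffer
            if (n == 0 && inflater.needsInput()) {
               break;                            // defensive: ran out of input early
            }
            out.write(buffer, 0, n);
         }
         return out.toByteArray();
      } finally {
         inflater.end();
      }
   }
}
```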
I am also optionally allowing testing with MySQL.
The CLI maven plugin creates a server and downloads the JDBC jar directly into the ./server/lib folder.
Note this is a test dependency only and it will be used only if mysql is set to true.
Some scrapers, e.g. prometheus, add an "instance" tag. This value may not be the same as
the broker name, which results in these metrics becoming more difficult to match up with
the corresponding broker.