Added a test reproducer and changed Queue::isDurableMessage usages into
Queue::isDurable to allow acks to hit the journal and be
correctly replicated across nodes.
Add ability to configure auto-created queues at the queue level
Add support for configuring message count check
Add test cases
Update docs
Support using group buckets on a queue for better local group scaling
Support disabling message groups on a queue
Support rebalancing groups when a consumer is added.
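For context on the grouping changes above, here is a minimal client-side sketch (assuming a local broker and a placeholder queue named "orders") of how messages join a group via the standard JMSXGroupID property, which is what the bucket and rebalance settings operate on:

```java
import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Sketch only: messages that share a JMSXGroupID are pinned to a single
// consumer; the queue-level bucket/rebalance settings act on these groups.
// Broker-side configuration of the bucket count is separate and not shown.
public class GroupedProducer {
   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = cf.createConnection()) {
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageProducer producer = session.createProducer(session.createQueue("orders"));
         for (int i = 0; i < 10; i++) {
            TextMessage message = session.createTextMessage("order-" + i);
            // All messages for the same customer stay in the same group,
            // so a single consumer processes them in order.
            message.setStringProperty("JMSXGroupID", "customer-42");
            producer.send(message);
         }
      }
   }
}
```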
* Using SpawnedVMSupport (it used to be in the testsuite; moving it to Utils)
* Building the classpath for ./lib, similar to what happens in Bootstrap
* Using Path as much as possible to avoid file-encoding issues
Push the isDirectDeliver method from the Netty implementation up to the Connection interface
Add support to InVMConnection for the isDirectDeliver flag, with the ability to set it via config, defaulting to false to keep the current default behavior.
Extend DirectDeliverTest to check InVM as well.
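A rough sketch of what opting in could look like when building the in-VM acceptor programmatically; the "directDeliver" parameter key is an assumption here, mirroring the existing Netty transport parameter:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.core.remoting.impl.invm.InVMAcceptorFactory;

public final class InVMDirectDeliverExample {

   // Sketch only: build an in-VM acceptor configuration with direct delivery
   // enabled through its transport parameters. The "directDeliver" key is an
   // assumption; per the change above the flag defaults to false for InVM,
   // so existing behaviour is unchanged unless opted in.
   static TransportConfiguration inVmAcceptorWithDirectDeliver() {
      Map<String, Object> params = new HashMap<>();
      params.put("directDeliver", true);
      return new TransportConfiguration(InVMAcceptorFactory.class.getName(), params);
   }
}
```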
Add consumer priority support
Includes a refactor of the consumer iteration in QueueImpl into its own logical class, to make this possible to implement.
Add OpenWire JMS Test - taken from ActiveMQ5
Add Core JMS Test
Add AMQP Test
Add Docs
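As an illustration of the OpenWire side (the test above was ported from ActiveMQ 5), a sketch using the ActiveMQ 5 destination-option convention for consumer priority; broker URL and queue name are placeholders:

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory; // ActiveMQ 5 (OpenWire) client

// Sketch only: the "consumer.priority" destination option is the ActiveMQ 5
// convention; a higher-priority consumer is dispatched to first while it has credit.
public class PriorityConsumerExample {
   public static void main(String[] args) throws Exception {
      Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
      connection.start();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      Queue queue = session.createQueue("orders?consumer.priority=10");
      MessageConsumer highPriority = session.createConsumer(queue);
      // ... consume as usual; a consumer created without the option only receives
      // messages when this one runs out of credit.
      connection.close();
   }
}
```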
When the broker's advisory is disabled (supportAdvisory=false), no
advisory consumer gets created at the broker and the advisory
consumer ID isn't stored.
Legacy OpenWire clients can hold a reference to an advisory consumer
regardless of the broker's settings, and therefore when such a client closes
the advisory consumer the broker has no reference to it.
The broker therefore throws an exception like:
javax.jms.IllegalStateException: Cannot
remove a consumer that had not been registered
If the broker stores the consumer info (even though it doesn't create
the consumer) the exception can be avoided.
There's a *slight* semantic change with the behavior of the queue query
and binding query to make them consistent with the address query, namely
that they will return the name of the queue and the name of the address
in every case, and the returned names will not use the FQQN syntax but
will be parsed to reflect their actual names in the broker.
MULTICAST messages forwarded by a core bridge will not be routed to any
ANYCAST queues and vice-versa. Diverts have the ability to configure how
routing-type is treated. Core bridges now support this same kind of
functionality. By default the bridge does not alter the routing-type of
forwarded messages to maintain compatibility with existing behavior.
The large message pendingRecordID is not accessed atomically, leading
to races where records cannot be found in the journal for deletion:
this causes an NPE that leaves the pending tasks on the current
OperationContextImpl uncleaned.
Adding a cleanup on error of those tasks and preventing the race
with proper synchronization will both enforce correct clean-up
when something bad happens and avoid the NPE.
If a client sends a message to a multicast address, using a qpid-jms
client to receive the message from one of the queues via the fully
qualified queue name will fail with the following error message:
Address xxxx is not configured for queue support
[condition = amqp:illegal-state]
It should be possible to receive the message without any error.
These improvements were also part of this task:
- Routing is now cached as much as possible.
- A new Runnable is avoided for each individual message,
since we use the Netty executor to perform delivery
https://issues.apache.org/jira/browse/ARTEMIS-2205
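A minimal sketch of the FQQN receive scenario described above, using a qpid-jms consumer against a placeholder multicast address "news" and one of its queues:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.qpid.jms.JmsConnectionFactory;

// Sketch only: receive from one specific queue bound to a multicast address
// by using the fully qualified queue name ("address::queue"). Address and
// queue names here are placeholders.
public class FqqnReceiveExample {
   public static void main(String[] args) throws Exception {
      Connection connection = new JmsConnectionFactory("amqp://localhost:5672").createConnection();
      connection.start();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      // "news" is the multicast address, "news.subscriber1" one of its queues.
      MessageConsumer consumer = session.createConsumer(session.createQueue("news::news.subscriber1"));
      Message message = consumer.receive(5000); // should succeed instead of amqp:illegal-state
      connection.close();
   }
}
```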
FakeQueue is not correctly setting the queue on its PageSubscription,
causing the test to fail with NPEs when PageSubscription::getQueue
is used.
When a receiving transaction is committed in a paging situation,
a page that happens to be completed will be deleted in a
transaction operation (PageCursorTx). The other tx operation,
RefsOperation, needs to access the page (in PageCache) to finish
its job. There is a chance that the PageCursorTx removes the
page before RefsOperation runs, which causes the RefsOperation
to fail to find a message in the page.
When a node tries to reconnect to another node in a scale-down cluster,
the reconnect request gets denied by the other node and keeps retrying,
which causes tasks in the ordered executor to accumulate and eventually OOM.
The fix is to change ActiveMQPacketHandler#handleCheckForFailover
to allow the reconnect if the scale-down node is the node itself.
Previously the port was always random. This caused problems with
remote JMX connections that needed to overcome firewalls. As of
this patch it's possible to make the RMI port static and whitelist
it in the firewall settings.
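For illustration, a plain-JMX sketch of why pinning the RMI port helps with firewalls: both the registry and the server port are fixed, so only those two ports need to be whitelisted. The ports below are examples; the broker's connector configuration achieves the same effect declaratively.

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;

import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

// Sketch only: a JMX connector server whose RMI registry port (1099) and
// RMI server port (1098) are both fixed rather than chosen randomly.
public class FixedRmiPortExample {
   public static void main(String[] args) throws Exception {
      LocateRegistry.createRegistry(1099);
      JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi://localhost:1098/jndi/rmi://localhost:1099/jmxrmi");
      MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
      JMXConnectorServer connectorServer =
            JMXConnectorServerFactory.newJMXConnectorServer(url, new HashMap<>(), mbeanServer);
      connectorServer.start();
   }
}
```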
With the AMQP protocol, when some messages are received in a transaction,
calling JMX QueueControl.listDeliveringMessages() returns an empty list
before the transaction is committed.
Add test cases
Add GroupSequence to Message Interface
Implement support for closing/resetting a group in the queue impl
Update Documentation (copy from activemq5)
Change/Fix OpenWireMessageConverter to use a default of 0 if not set, for OpenWire as per the documentation http://activemq.apache.org/activemq-message-properties.html
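A sketch of closing a group from a producer, following the ActiveMQ 5 convention referenced above (a JMSXGroupSeq of -1 closes the group so it can be re-assigned); broker URL, queue and group names are placeholders:

```java
import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Sketch only: send a final message for a group with JMSXGroupSeq = -1,
// which per the ActiveMQ 5 grouping documentation closes the group.
public class CloseGroupExample {
   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = cf.createConnection()) {
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageProducer producer = session.createProducer(session.createQueue("orders"));
         TextMessage last = session.createTextMessage("final message for this group");
         last.setStringProperty("JMSXGroupID", "customer-42");
         last.setIntProperty("JMSXGroupSeq", -1); // closes the group
         producer.send(last);
      }
   }
}
```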
This test can verify an issue fixed by the commit:
7a463f038a (ARTEMIS-2096)
The issue was reported later by users: messages
sent by AMQP clients randomly had a body size
of zero when received by multiple core
consumers.
This was also fixed in commit 48e0fc8f42, but that
was exclusively on the 2.6.x branch.
Compaction now reuses direct ByteBuffers on both
reading and writing, with explicit and deterministic
release, to avoid a high peak of native memory utilisation
after compaction.
Implement custom LVQ key and non-destructive queues in the broker - protocol agnostic
Make the feature configurable via broker.xml, core APIs and ActiveMQServerControl
Add last-value-key test cases
Add non-destructive with lvq test cases
Add non-destructive with expiry-delay test cases
Update documents
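A minimal sketch of the default last-value behaviour from a JMS producer: messages sharing the same _AMQ_LVQ_NAME value replace each other on an LVQ, while the custom last-value-key added above lets the broker key on a different property instead. Queue name and key value are placeholders.

```java
import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Sketch only: the _AMQ_LVQ_NAME property is the default last-value key;
// only the latest message for a given key value survives on the queue.
public class LastValueProducer {
   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = cf.createConnection()) {
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageProducer producer = session.createProducer(session.createQueue("prices"));
         TextMessage tick = session.createTextMessage("101.25");
         tick.setStringProperty("_AMQ_LVQ_NAME", "stock.ORCL"); // last value per key wins
         producer.send(tick);
      }
   }
}
```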
Add new methods to support create/update with the new attributes
Refactor to pass queue attributes through in client-side methods, to reduce further method changes when adding new attributes in future and to avoid methods with endless parameters. (note: in future this should probably be done server side too)
Update existing test cases and fake impls for the new methods/attributes
Further fix around network loss.
If the network is lost (split) and the slave activates, for a period the config it used when initializing in initialisePart2 was stale.
Add/Extend test to ensure an address-setting change made on the live is preserved on the slave (after activation)
Add/Extend test to ensure a security-setting change made on the live is preserved on the backup (after activation)
Major refactoring of the AMQPMessage abstraction to resolve
some issues of message corruption still present in the code and
improve the API handling of message changes and re-encoding.
Improves handling of the decoding of message sections, limiting the
work to only the portions needed and ensuring the state data
is always updated with what has been done. Fixes issues of
corrupt state on copy of a message or other changes in filters.
Given that NettyConnector::createConnection isn't happening on the
channel's event loop, it could race with a channel close event that
would clear the whole channel pipeline, leading to an NPE while
trying to use a configured channel handler of the pipeline.
The waits for expiration are set to the same value that the expiration
interval defaults to in the test support setup, so on some runs there is a
race between waiting for the message to expire and the expiry task.
Once a re-encode of the message is done the buffer is not being marked
as valid and so subsequent checks on the buffer are all assuming the
message data is not valid and re-encoding over and over. This can lead
to poor performance in some cases and corrupted data in others.
Extend test case to reproduce the problem of client-created queues being incorrectly removed on a simple reload of the config.
Add a flag/field to the queues created by configuration/broker.xml so we can correctly filter only queues created/managed by config.
Update listConfiguredQueues to use the new queue flag
Add Tests
Add implementation in line with other updatable queue settings.
Enhance tests to ensure the queue is not destroyed during config change and messages already in the queue are preserved
Revert previous fix
Keep original ConfigChangeTest
Apply new non-destructive fix.
Enhance tests to ensure messages in queues are not lost either on reload while running or when config is changed on restart (e.g. the queue is not destroyed)
Any failure to deploy an address or queue will short-circuit the broker
initialization process preventing any other addresses or queues from
being deployed as well as other critical resources like acceptors, etc.
Ensure the broker looks at local receiver credit when checking for the
credit top-off threshold and then does a proper top-off back to the high
water mark, to sync with how client receivers manage their credit.
The ServerSessionPacketHandler has a close() callback handler which will
delete any pending large messages. However, there is a race where a
large message can be routed, then the close deletes the associated large
message, resulting in data loss.
First, QueueQuery should use the address name for address settings.
The name used for looking up address settings for a queue now uses the
address name if there is a local queue binding.
Second, make sure the credits sent to the server have the correct value.
Update the Qpid JMS and Proton dependencies to the latest versions and sync Netty
with the 4.1.28.Final version used by Qpid JMS to avoid a clash that
breaks a test. Adds an override of the new Proton-J WritableBuffer API that
allows it to use the Netty String encoder when needed instead of the
slower default version.
Update Qpid JMS to v0.36.0
Proton-J to v0.29.0
Netty to 4.1.28.Final
In some cases users who migrate from 1.x to 2.x may still want to keep
the legacy prefixes for their JMS destinations (i.e. "jms.queue.",
"jms.topic.", etc.). This commit adds a boolean on our ConnectionFactory
implementation so that it will use the old prefixes when invoking the
queue/topic creation methods on the Session implementation.
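A sketch of opting into the legacy naming, assuming the flag described above is exposed as the enable1xPrefixes URL parameter on the connection factory; broker URL and queue name are placeholders:

```java
import javax.jms.Connection;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Sketch only: with the assumed enable1xPrefixes flag turned on,
// createQueue("orders") maps to the old "jms.queue.orders" name on the broker.
public class LegacyPrefixExample {
   public static void main(String[] args) throws Exception {
      ActiveMQConnectionFactory cf =
            new ActiveMQConnectionFactory("tcp://localhost:61616?enable1xPrefixes=true");
      try (Connection connection = cf.createConnection()) {
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         Queue queue = session.createQueue("orders"); // addressed using the legacy prefix
         session.createProducer(queue).send(session.createTextMessage("hello"));
      }
   }
}
```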
Fix checkstyle
Avoid duplicated logic
Ability to filter and group
Instantiate SimpleString property key once
Get the property value via getObjectProperty to ensure all specially mapped properties, such as those in AMQPMessage, are returned
Avoid a custom string to represent null; instead rely on Java's "null" representation by using Objects.toString to get the string value of the property used to group by.
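Illustration only (not the broker code): grouping by a property value while letting Java's own "null" string stand in for missing values, as described above, rather than inventing a custom placeholder:

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

// Sketch only: count entries per property value; Objects.toString maps a
// missing (null) property to the string "null".
public class GroupByPropertySketch {

   static Map<String, Long> countByProperty(List<Map<String, Object>> messages, String key) {
      return messages.stream()
            .collect(Collectors.groupingBy(
                  m -> Objects.toString(m.get(key)), // null -> "null"
                  Collectors.counting()));
   }
}
```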
An OpenWire client can use a compound destination name of the form
"a,b,c..." and consume from, or subscribe to, multiple destinations.
Such a compound destination only works for topics when the subscriber
is non-durable. Attempting to create a durable subscription on a
compound address will end up with an error.
The cause is that when creating durable subscriptions on multiple topics/addresses
the broker uses the same name to create the internal queues, which
causes a duplicate-name conflict.
The deliver loop won't give up trying to deliver messages when
back-pressure kicks in (credits and/or TCP) if message grouping is used and
there are many consumers registered: this change will allow the loop
to exit by instructing the logic that the group consumer is the only
consumer to check.
The parameters going into Wait.waitFor were originally wrong, because
`durationMillis: 3, sleepMillis: 100` means you would test the condition
only once. This commit changes the durationMillis from 3 ms to 3 s;
swapping the two numbers (duration 100 ms, sleep 3 ms) would also be reasonable, I think.
Next, Wait.assertEquals is used here instead of Assert.assertTrue.
I saw the test fail only once, and was never able to reproduce it again,
but I think this commit does improve the test and so it is worthwhile.
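A sketch of the before/after pattern, assuming the test-support Wait helpers take (condition, durationMillis, sleepMillis)-style parameters; the exact package and overloads may differ slightly between versions:

```java
import org.apache.activemq.artemis.api.core.management.QueueControl;
import org.apache.activemq.artemis.tests.util.Wait; // test-support helper; package may vary by version

public class WaitPatternSketch {

   static void example(QueueControl queueControl) throws Exception {
      // Before: durationMillis = 3, sleepMillis = 100, so the condition is checked only once.
      Wait.waitFor(() -> queueControl.getMessageCount() == 0, 3, 100);

      // After: poll for up to 3 seconds and assert the expected value, rather than
      // asserting a single snapshot with Assert.assertEquals/assertTrue.
      Wait.assertEquals(0, () -> queueControl.getMessageCount(), 3000);
   }
}
```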
java.lang.AssertionError
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.assertMetrics(QueueControlTest.java:2651)
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.assertMessageMetrics(QueueControlTest.java:2615)
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.testRemoveAllWithPagingMode(QueueControlTest.java:1554)
The occasional assertion error is prevented by using Wait.assertEquals
where Assert.assertEquals was used previously.
java.lang.AssertionError:
Expected :1
Actual :0
[...]
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.testResetMessagesExpired(QueueControlTest.java:2370)
The occasional assertion error is prevented by using Wait.assertEquals
where Assert.assertEquals was used previously.
I did not observe the timing issue on all asserts (only on the first
two), but there is no harm in replacing them all.
java.lang.AssertionError:
Expected :2
Actual :1
The below error is prevented by using Wait.assertEquals
where Assert.assertEquals was used previously.
java.lang.AssertionError:
Expected :2
Actual :1
[...]
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.testListMessagesWithNullFilter(QueueControlTest.java:804)
The below error is prevented by using Wait.assertEquals
where Assert.assertEquals was used previously.
java.lang.AssertionError:
Expected :2
Actual :1
[...]
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.testListMessagesWithEmptyFilter(QueueControlTest.java:827)
This commit adds support for tracking metrics for bridges, both
normal bridges and bridges that are part of a cluster. The two
statistics added in this commit are messages pending acknowledgement
and messages acknowledged, but more can be added later.
The below error is prevented by adding Wait.assertEquals,
where Assert.assertEquals was used previously. Timeout is
set to small increments, since we rarely need to wait more
than 100 ms for the condition to become true.
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.activemq.artemis.tests.integration.client.HeuristicXATest.doRecoverHeuristicCompletedTxWithRestart(HeuristicXATest.java:306)
at org.apache.activemq.artemis.tests.integration.client.HeuristicXATest.testRecoverHeuristicCommitWithRestart(HeuristicXATest.java:251)
This test was initializing libaio with a maxIO of 21K, which would fail on limited servers.
This decreases maxIO so the test requires fewer resources to run.
Anonymous senders (those created without a target address) are not
blocked when max-disk-usage is reached. The cause is that when such
a sender is created on the broker, the broker doesn't check the
disk/memory usage and gives out the credit immediately.
In a live-backup scenario, if the live is restarted and shut down too soon,
the client has a chance to fail on failover because its internal topology
is inconsistent with the final status. The client keeps connecting to the live
that has already shut down, never trying to connect to the backup.
It's a port of HORNETQ-1572.
Tests should always extend ActiveMQTestBase whenever possible.
This is because there are a few rules to avoid thread leakages.
The test was also leaking an executor and I believe it was
not always stopping the servers, which I fixed here.
When "large" messages are converted to / from core in order to be stored
in the large message store the type of the AMQP body section is being
lost and reconstituted incorrectly in some cases. The message needs to
be annotated with the original AMQP type for the body and that used to
manage the conversion back to AMQP from Core.
This reverts commit c3fbd1b9e4.
Based on the discussion on the PR
https://github.com/apache/activemq-artemis/pull/2035 this shouldn't have
been merged. It's importing JMS-specific code into the core broker which
is something we've worked hard to eliminate in recent releases.
Avoids pooling corner cases interacting with the ARTEMIS-1843 + ARTEMIS-1861 improvements.
Also tagging ARTEMIS-1941 to note the test needs to be altered along with it.
Updates max frame size tests to verify behaviour seen with standalone
brokers rather than non-representative test-only conditions, as well
as more closely validate the received messages
Calling close multiple times on ServerConsumer can result in multiple
notifications being routed around the cluster. This causes cluster
topology info to become skewed, which affects a number of components
such as message redistribution and metrics, and can eventually cause OOM
should multiple queues be redistributing at the same time.
Quorum voting is used by both the live and the backup to decide what to do if a replication connection is disconnected.
Basically, the server will request each live server in the cluster to vote as to whether it thinks the server it is replicating to or from is still alive.
You can also configure the time for which the quorum manager will wait for the quorum vote response.
Currently, the value is hardcoded as 30 sec. We should change this 30-second wait to be configurable.
Added an example to demonstrate how to configure and use OpenSSL
Moved/Added the netty-tcnative dependency to artemis-distribution
Changed the artemis-jms-client-all pom to exclude io.netty from relocation
so that the native OpenSSL can be loaded
1. Add test cases to verify the issue and the fix; the tests also check for the same behavior using the CORE, OPENWIRE and AMQP JMS clients.
2. Update the Core client to check whether the queue exists before creating it, and likewise for sharedQueue as per the createQueue logic.
3. Update ServerSessionPacketHandler to handle packets from old clients and implement the same fix server side for older clients.
4. Correct the AMQP protocol so the correct error code is returned on a security exception, so that AMQP JMS can correctly throw JMSSecurityException
5. Correct the AMQP protocol to check whether the queue exists before creating it
6. Correct the OpenWire protocol to check whether the address exists before creating it
If a client-ack-mode consumer receives a message and closes without
acking it, the redelivery of the message won't set the redelivery
flag (JMSRedelivered) because the delivery count is not incremented
when the message is cancelled back to the queue.
Replace guava Preconditions with artemis Preconditions
Replace guava Predicate with java Predicate
Replace guava Ordering with java Comparator
Replace guava Immutable collections with ArrayList/Set, then wrap with unmodifiable
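For illustration, the JDK equivalents used in place of the Guava types named above (plain java.util / java.util.function APIs):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

// Sketch only: JDK replacements for the Guava types listed above.
public class GuavaToJdkSketch {

   static List<String> example(List<String> input) {
      Predicate<String> nonEmpty = s -> !s.isEmpty();                        // replaces guava Predicate
      Comparator<String> byLength = Comparator.comparingInt(String::length); // replaces guava Ordering
      List<String> result = new ArrayList<>();                               // replaces Immutable builder
      for (String s : input) {
         if (nonEmpty.test(s)) {
            result.add(s);
         }
      }
      result.sort(byLength);
      return Collections.unmodifiableList(result);                           // wrap with unmodifiable
   }
}
```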
Configure a value of 128KB for AMQP max frame size by default to improve
overall performance and provide a limit on delivery size before chunking
begins.
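On the client side, Qpid JMS exposes its own frame-size limit through the amqp.maxFrameSize URI option, which can be aligned with the broker's new 128 KiB (131072 byte) default if desired; the URI below is an example:

```java
import javax.jms.Connection;

import org.apache.qpid.jms.JmsConnectionFactory;

// Sketch only: a Qpid JMS connection whose advertised max frame size matches
// the broker's new 128 KiB default.
public class AmqpFrameSizeExample {
   public static void main(String[] args) throws Exception {
      JmsConnectionFactory factory =
            new JmsConnectionFactory("amqp://localhost:5672?amqp.maxFrameSize=131072");
      Connection connection = factory.createConnection();
      connection.start();
      connection.close();
   }
}
```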