FakeQueue does not correctly set the queue on its PageSubscription,
causing the test to fail with NPEs when PageSubscription::getQueue
is used.
When a receiving transaction is committed in a paging situation,
a page may be completed and then deleted by one transaction operation
(PageCursorTx), while the other tx operation, RefsOperation, still
needs to access that page (in the PageCache) to finish its job.
There is a chance that PageCursorTx removes the page before
RefsOperation runs, causing RefsOperation to fail to find a message
in the page.
When a node tries to reconnect to another node in a scale-down cluster,
the reconnect request is denied by the other node and keeps being retried,
which causes tasks in the ordered executor to accumulate and eventually
leads to an OOM.
The fix is to change ActiveMQPacketHandler#handleCheckForFailover
to allow the reconnect if the scale-down node is the node itself.
Previously the port was always random. This caused problems with
remote JMX connections that needed to overcome firewalls. As of
this patch it's possible to make the RMI port static and whitelist
it in the firewall settings.
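For illustration, a minimal sketch of pinning the JMX/RMI ports with plain JDK APIs, assuming placeholder ports 1098/1099; this is only the general technique, not the broker's own configuration mechanism:

    import java.lang.management.ManagementFactory;
    import java.rmi.registry.LocateRegistry;
    import javax.management.MBeanServer;
    import javax.management.remote.JMXConnectorServer;
    import javax.management.remote.JMXConnectorServerFactory;
    import javax.management.remote.JMXServiceURL;

    public class FixedPortJmx {
       public static void main(String[] args) throws Exception {
          // RMI registry on a fixed, whitelistable port.
          LocateRegistry.createRegistry(1099);
          // Pin the RMI server object to a fixed port too, so only
          // 1098 and 1099 need to be opened in the firewall.
          JMXServiceURL url = new JMXServiceURL(
             "service:jmx:rmi://localhost:1098/jndi/rmi://localhost:1099/jmxrmi");
          MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
          JMXConnectorServer server =
             JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
          server.start();
       }
    }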
With the AMQP protocol, when some messages are received in a transaction,
calling JMX QueueControl.listDeliveringMessages() returns an empty list
before the transaction is committed.
Add test cases
Add GroupSequence to Message Interface
Implement support for closing/resetting a group in the queue impl (illustrated in the sketch below)
Update Documentation (copy from activemq5)
Change/Fix OpenWireMessageConverter to use a default of 0 when the group sequence is not set, for OpenWire, as per the documentation: http://activemq.apache.org/activemq-message-properties.html
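A hedged JMS sketch of the grouping properties involved, following the ActiveMQ documentation linked above (session/producer setup omitted; a negative JMSXGroupSeq is the conventional way to close a group):

    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    void sendGrouped(Session session, MessageProducer producer) throws JMSException {
       TextMessage m1 = session.createTextMessage("first");
       m1.setStringProperty("JMSXGroupID", "group-A");
       m1.setIntProperty("JMSXGroupSeq", 1);   // explicit sequence
       producer.send(m1);

       TextMessage m2 = session.createTextMessage("last");
       m2.setStringProperty("JMSXGroupID", "group-A");
       m2.setIntProperty("JMSXGroupSeq", -1);  // close/reset the group
       producer.send(m2);
       // When JMSXGroupSeq is not set at all, the OpenWire converter now
       // defaults it to 0, per the documentation referenced above.
    }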
This test can verify an issue fixed by the commit:
7a463f038a (ARTEMIS-2096)
The issue was reported later by users: messages
sent by AMQP clients randomly had a body size of
zero when received by multiple core consumers.
This was also fixed by commit 48e0fc8f42, but that
was exclusively on the 2.6.x branch.
Compaction now reuses direct ByteBuffers on both
reading and writing, with explicit and deterministic
release, to avoid a high peak of native memory utilisation
after compaction.
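A rough sketch of the reuse-and-release pattern described here, using Netty's PlatformDependent to free the direct buffer deterministically; this only illustrates the idea, not the journal compactor code, and the read/write helpers are hypothetical:

    import java.nio.ByteBuffer;
    import io.netty.util.internal.PlatformDependent;

    // Reuse one direct buffer for the whole compaction pass and free it
    // explicitly instead of waiting for GC to reclaim the native memory.
    void compactWithReusedBuffer() {
       ByteBuffer buffer = ByteBuffer.allocateDirect(1024 * 1024);
       try {
          while (hasMoreRecords()) {        // hypothetical helper
             buffer.clear();
             readRecordInto(buffer);        // hypothetical helper
             buffer.flip();
             writeRecordFrom(buffer);       // hypothetical helper
          }
       } finally {
          PlatformDependent.freeDirectBuffer(buffer); // deterministic release
       }
    }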
Implement custom LVQ Key and Non-Destructive in broker - protocol agnostic
Make feature configurable via broker.xml, core APIs and ActiveMQServerControl
Add last-value-key test cases
Add non-destructive with lvq test cases
Add non-destructive with expiry-delay test cases
Update documents
Add new methods to support create, update with new attributes
Refactor to pass queue attributes through the client-side methods, reducing further method changes when adding new attributes in the future and avoiding methods with endless parameters. (Note: in future this should probably be done server-side too.)
Update existing test cases and fake impls for new methods/attributes
Further fix around network loss.
If the network is lost (split) and the slave activates, the config it uses when it initializes in initialisePart2 is stale for a period.
Add/Extend test to ensure an address-setting change made on the live is preserved on the slave (after activation)
Add/Extend test to ensure a security-setting change made on the live is preserved on the backup (after activation)
Major refactoring of the AMQPMessage abstraction to resolve
some issues of message corruption still present in the code and
improve the API handling of message changes and re-encoding.
Improves handling of decoding of message sections, limiting the
work to only the portions needed and ensuring the state data
is always updated with what has been done. Fixes issues of
corrupt state on copy of the message or other changes in filters.
Given that NettyConnector::createConnection isn't happening on the
channel's event loop, it could race with a channel close event, which
would clear the whole channel pipeline, leading to an NPE while
trying to use a configured channel handler of the pipeline.
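A minimal sketch of the defensive pattern implied here: touch the pipeline only from the channel's event loop and null-check the handler, since a concurrent close may already have cleared the pipeline (the handler name is a placeholder):

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelHandler;

    void configureOnEventLoop(Channel channel) {
       channel.eventLoop().execute(() -> {
          // If the channel was closed concurrently, the pipeline may be empty.
          ChannelHandler handler = channel.pipeline().get("my-handler"); // placeholder name
          if (handler == null || !channel.isActive()) {
             return; // channel already closed; nothing to configure
          }
          // ... safe to use the handler here ...
       });
    }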
The waits for expiration are set to the same value the expiration
interval defaults to in the test support setup, so on some runs there is a
race between waiting for the message to expire and the expiry task.
Once a re-encode of the message is done, the buffer is not marked
as valid, so subsequent checks on the buffer all assume the
message data is not valid and re-encode over and over. This can lead
to poor performance in some cases and corrupted data in others.
Extend test case to reproduce the problem of client-created queues being incorrectly removed on a simple reload of config.
Add a flag/field to the queues created by configuration/broker.xml so we can correctly filter only queues created/managed by config.
Update listConfiguredQueues to use the new queue flag
Add Tests
Add implementation inline with other queue updatable settings.
Enhance tests to ensure the queue is not destroyed during a config change and messages already in the queue are preserved
Revert previous fix
Keep original ConfigChangeTest
Apply new non-destructive fix.
Enhance tests to ensure messages in queues are not lost either on reload while running or when the config is changed on restart (e.g. the queue is not destroyed)
Any failure to deploy an address or queue will short-circuit the broker
initialization process preventing any other addresses or queues from
being deployed as well as other critical resources like acceptors, etc.
Ensure the broker looks at local receiver credit when checking the
credit top-off threshold and then does a proper top-off back to the high
water mark, to sync with how client receivers manage their credit.
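A hedged Proton-J sketch of that top-off rule; the threshold and high water values are placeholders, and this is not the broker's actual flow-control code:

    import org.apache.qpid.proton.engine.Receiver;

    // Only when the receiver's own (local) credit drops to the threshold
    // do we top off, and then we go all the way back to the high water mark.
    void topOffIfNeeded(Receiver receiver, int threshold, int highWaterMark) {
       int current = receiver.getCredit();
       if (current <= threshold) {
          receiver.flow(highWaterMark - current);
       }
    }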
The ServerSessionPacketHandler has a close() callback handler which will
delete any pending large messages. However, there is a race where a
large message can be routed, then the close deletes the associated large
message, resulting in data loss.
First, QueueQuery should use the address name for address settings:
the name used for looking up address settings for a queue now uses the
address name if there is a local queue binding.
Second, make sure the credits sent to the server have the correct value.
Update the Qpid JMS and Proton dependencies to the latest versions and sync Netty
with the 4.1.28.Final version used by Qpid JMS to avoid a clash that
breaks a test. Adds an override of the new Proton-J WritableBuffer API that
allows it to use the Netty String encoder when needed instead of the
slower default version.
Update Qpid JMS to v0.36.0
Proton-J to v0.29.0
Netty to 4.1.28.Final
In some cases users who migrate from 1.x to 2.x may still want to keep
the legacy prefixes for their JMS destinations (i.e. "jms.queue.",
"jms.topic.", etc.). This commit adds a boolean on our ConnectionFactory
implementation so that it will use the old prefixes when invoking the
queue/topic creation methods on the Session implementation.
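As an illustration, a sketch of enabling the flag from client code; I believe the property added here is called enable1xPrefixes, but treat that name (and the URL form) as an assumption rather than a confirmed API:

    import javax.jms.Connection;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    void legacyPrefixExample() throws Exception {
       // Assumption: the boolean added by this commit is exposed as enable1xPrefixes.
       ActiveMQConnectionFactory cf =
          new ActiveMQConnectionFactory("tcp://localhost:61616?enable1xPrefixes=true");
       try (Connection connection = cf.createConnection()) {
          Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
          // With the flag set, this resolves to the 1.x-style "jms.queue.orders" address.
          Queue queue = session.createQueue("orders");
       }
    }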
Fix checkstyle
Avoid duplicated logic
Ability to filter and group
Instantiate SimpleString property key once
Get the property value via getObjectProperty to ensure all specially mapped properties, such as those in AMQPMessage, are returned
Avoid a custom string to represent null; instead rely on Java's "null" representation by using Objects.toString to get the string value of the property being grouped by (see the sketch below).
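A small sketch of the grouping idea under the points above; the property key and collection layout are illustrative only:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Objects;
    import org.apache.activemq.artemis.api.core.Message;
    import org.apache.activemq.artemis.api.core.SimpleString;

    // Instantiate the property key once, then group by its string value,
    // letting Objects.toString render missing values as "null".
    Map<String, List<Message>> groupBy(List<Message> messages, String propertyName) {
       final SimpleString key = SimpleString.toSimpleString(propertyName);
       Map<String, List<Message>> groups = new HashMap<>();
       for (Message message : messages) {
          String group = Objects.toString(message.getObjectProperty(key));
          groups.computeIfAbsent(group, g -> new ArrayList<>()).add(message);
       }
       return groups;
    }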
An OpenWire client can use a compound destination name of the form
"a,b,c..." and consume from, or subscribe to, multiple destinations.
Such a compound destination only works for topics when the subscriber
is non-durable. Attempting to create a durable subscription on a
compound address will end up with an error.
The cause is that when creating durable subscriptions to multiple
topics/addresses, the broker uses the same name to create the internal
queues, which causes a duplicate-name conflict.
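For context, a hedged sketch of how an OpenWire (ActiveMQ 5.x JMS client) consumer uses such a compound destination; the broker URL and names are placeholders, and the durable variant is the case that used to fail:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    void compoundSubscriptions() throws Exception {
       ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
       Connection connection = cf.createConnection();
       connection.setClientID("client-1");
       connection.start();
       Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

       // Compound destination: one consumer over three topics.
       Topic compound = session.createTopic("a,b,c");
       // A non-durable subscription on the compound topic already worked...
       MessageConsumer nonDurable = session.createConsumer(compound);
       // ...while a durable subscription like this used to fail with a
       // duplicate internal queue name before the fix.
       MessageConsumer durable = session.createDurableSubscriber(compound, "sub-1");
    }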
The deliver loop won't give up trying to deliver messages when
back-pressure kicks in (credits and/or TCP) if message grouping is used and
there are many consumers registered: this change allows the loop
to exit by instructing the logic that the group consumer is the only
consumer to check.
Parameters going into Wait.waitFor were originally wrong, because
`durationMillis: 3, sleepMillis: 100` means the condition would be tested
only once. This commit changes durationMillis from 3 ms to 3 s;
swapping the two numbers (duration 100 ms, sleep 3 ms) would also be reasonable.
Next, Wait.assertEquals is used here instead of Assert.assertTrue.
I saw the test fail only once and was never able to reproduce it again,
but I think this commit does improve the test and so it is worthwhile.
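To make the duration/sleep ordering concrete, here is a generic polling-assert helper (a hypothetical stand-in, not the actual Wait utility): with duration 3 ms and sleep 100 ms the condition is sampled only once, while duration 3 s with a short sleep gives it a real chance to become true.

    import java.util.function.LongSupplier;

    // Hypothetical stand-in for Wait.assertEquals: poll until the value
    // matches or the duration elapses, sleeping between samples.
    static void waitAssertEquals(long expected, LongSupplier actual,
                                 long durationMillis, long sleepMillis) throws InterruptedException {
       long deadline = System.currentTimeMillis() + durationMillis;
       while (System.currentTimeMillis() < deadline) {
          if (actual.getAsLong() == expected) {
             return;
          }
          Thread.sleep(sleepMillis);
       }
       org.junit.Assert.assertEquals(expected, actual.getAsLong());
    }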
java.lang.AssertionError
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.assertMetrics(QueueControlTest.java:2651)
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.assertMessageMetrics(QueueControlTest.java:2615)
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.testRemoveAllWithPagingMode(QueueControlTest.java:1554)
The occasional assertion error is prevented by using Wait.assertEquals
where Assert.assertEquals was used previously.
java.lang.AssertionError:
Expected :1
Actual :0
[...]
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.testResetMessagesExpired(QueueControlTest.java:2370)
The occasional assertion error is prevented by using Wait.assertEquals
where Assert.assertEquals was used previously.
I did not observe the timing issue on all asserts (only on the first
two), but there is no harm in replacing them all.
java.lang.AssertionError:
Expected :2
Actual :1
The below error is prevented by using Wait.assertEquals
where Assert.assertEquals was used previously.
java.lang.AssertionError:
Expected :2
Actual :1
[...]
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.testListMessagesWithNullFilter(QueueControlTest.java:804)
The below error is prevented by using Wait.assertEquals
where Assert.assertEquals was used previously.
java.lang.AssertionError:
Expected :2
Actual :1
[...]
at org.apache.activemq.artemis.tests.integration.management.QueueControlTest.testListMessagesWithEmptyFilter(QueueControlTest.java:827)
This commit adds support for tracking metrics for bridges, both
normal bridges and bridges that are part of a cluster. The two
statistics added in this commit are messages pending acknowledgement
and messages acknowledged, but more can be added later.
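A hedged sketch of reading these counters over JMX; the object name layout and the attribute names ("MessagesPendingAcknowledgement", "MessagesAcknowledged") are assumptions based on the description above, not confirmed identifiers:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    void printBridgeMetrics() throws Exception {
       JMXServiceURL url =
          new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
       try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
          MBeanServerConnection mbsc = connector.getMBeanServerConnection();
          // Assumed object name layout for a bridge MBean.
          ObjectName bridge = new ObjectName(
             "org.apache.activemq.artemis:broker=\"broker1\",component=bridges,name=\"my-bridge\"");
          Object pending = mbsc.getAttribute(bridge, "MessagesPendingAcknowledgement");
          Object acked = mbsc.getAttribute(bridge, "MessagesAcknowledged");
          System.out.println("pending=" + pending + " acked=" + acked);
       }
    }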
The below error is prevented by adding Wait.assertEquals
where Assert.assertEquals was used previously. The timeout is
set in small increments, since we rarely need to wait more
than 100 ms for the condition to become true.
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.activemq.artemis.tests.integration.client.HeuristicXATest.doRecoverHeuristicCompletedTxWithRestart(HeuristicXATest.java:306)
at org.apache.activemq.artemis.tests.integration.client.HeuristicXATest.testRecoverHeuristicCommitWithRestart(HeuristicXATest.java:251)
This test was initializing libaio with 21K maxIO, which would fail on limited servers.
This decreases maxIO so the test requires fewer resources to run.
Anonymous senders (those created without a target address) are not
blocked when max-disk-usage is reached. The cause is that when such
a sender is created on the broker, the broker doesn't check the
disk/memory usage and gives out the credit immediately.
In a live-backup scenario, if the live is restarted and shut down too soon,
the client has a chance to fail on failover because its internal topology
is inconsistent with the final status. The client keeps connecting to the live
that has already shut down, never trying to connect to the backup.
This is a port of HORNETQ-1572.
Tests should always extend ActiveMQTestBase whenever possible,
because it has a few rules to avoid thread leakages.
The test was also leaking an executor, and I believe it was
not always stopping the servers, which I fixed here.
When "large" messages are converted to / from core in order to be stored
in the large message store, the type of the AMQP body section is
lost and reconstituted incorrectly in some cases. The message needs to
be annotated with the original AMQP body type, and that annotation used to
manage the conversion from core back to AMQP.
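A rough Proton-J sketch of the annotation idea; the annotation key ("x-opt-original-body-type") and the numeric encoding here are placeholders for illustration, not the values chosen by the actual fix:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.qpid.proton.amqp.Symbol;
    import org.apache.qpid.proton.amqp.messaging.AmqpSequence;
    import org.apache.qpid.proton.amqp.messaging.AmqpValue;
    import org.apache.qpid.proton.amqp.messaging.Data;
    import org.apache.qpid.proton.amqp.messaging.MessageAnnotations;
    import org.apache.qpid.proton.amqp.messaging.Section;
    import org.apache.qpid.proton.message.Message;

    // Record which body section type the original AMQP message carried so the
    // conversion back from core can rebuild the same section type.
    void annotateBodyType(Message message) {
       Section body = message.getBody();
       int type = (body instanceof Data) ? 0
                : (body instanceof AmqpSequence) ? 1
                : (body instanceof AmqpValue) ? 2 : -1;
       Map<Symbol, Object> annotations = new HashMap<>();
       annotations.put(Symbol.valueOf("x-opt-original-body-type"), type); // placeholder key
       message.setMessageAnnotations(new MessageAnnotations(annotations));
    }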