Implements a new feature for the broker whereby it may automatically create and
delete JMS topics which are not explicitly defined through the management API
or file-based configuration. A JMS topic is created in response to a sent
message or a connected subscriber, and it may subsequently be deleted when it
no longer has any subscribers. Auto-creation and auto-deletion can both be
turned on/off via address settings.
https://issues.apache.org/jira/browse/ARTEMIS-524
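A minimal sketch of setting those switches programmatically; the setter names are assumed to mirror the existing JMS-queue variants, and 'server' stands for an ActiveMQServer instance:

    import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

    // Assumed setters, mirroring setAutoCreateJmsQueues/setAutoDeleteJmsQueues.
    AddressSettings settings = new AddressSettings()
          .setAutoCreateJmsTopics(true)   // create the topic on first send/subscribe
          .setAutoDeleteJmsTopics(true);  // drop it once the last subscriber is gone
    server.getAddressSettingsRepository().addMatch("jms.topic.#", settings);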
I am keeping all the debug and trace logging I added while debugging this issue;
for that reason this commit may look larger than expected.
The fix is highlighted by the tests added in org.apache.activemq.artemis.tests.integration.client.PagingTest.
OptimizedAckTest: use the core API instead of the old ActiveMQ
broker API to check the message count.
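A rough sketch of counting messages through the core API; the URL and queue name are illustrative:

    import org.apache.activemq.artemis.api.core.SimpleString;
    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
    import org.apache.activemq.artemis.api.core.client.ClientSession;
    import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
    import org.apache.activemq.artemis.api.core.client.ServerLocator;

    ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616");
    ClientSessionFactory factory = locator.createSessionFactory();
    ClientSession session = factory.createSession();
    ClientSession.QueueQuery query =
          session.queueQuery(SimpleString.toSimpleString("exampleQueue"));
    long count = query.getMessageCount();   // replaces the old broker-API lookup
    session.close();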
JmsQueueTransactionTest#testCloseConsumer: fixed a bug in
delivery when prefetchSize is 0.
InitialReconnectDelayTest: close the connection after the test.
- Added a thread pool executor that combines cached and fixed-size thread pooling
(see the sketch after this list). Like a cached thread pool, it reuses existing
threads and removes idle threads after a timeout; like a fixed-size pool, it limits
the maximum number of threads in the pool, but it queues additional requests instead
of rejecting them.
- Changed existing code to use the new thread pool instead of a fixed-size thread pool in
all places that are configured with a client thread pool size.
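A minimal JDK-only sketch of that behaviour (illustrative, not the actual implementation):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    static ExecutorService newBoundedCachedPool(int maxThreads) {
       ThreadPoolExecutor pool = new ThreadPoolExecutor(
             maxThreads, maxThreads,               // core == max: hard cap on threads
             60L, TimeUnit.SECONDS,                // idle threads die after a timeout
             new LinkedBlockingQueue<Runnable>()); // unbounded queue: never rejects
       pool.allowCoreThreadTimeOut(true);          // reclaim idle core threads too
       return pool;
    }

With corePoolSize equal to maximumPoolSize, threads are created up to the cap and
excess work is queued; enabling core-thread timeout then reclaims idle threads.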
Temp Queue not deleted when connection is closed.
Enable Stomp in the OpenWire tests because some tests use it.
Remove unused code in OpenWire.
Wrong XA error code was returned when the Xid is missing
(ActiveMQXAConnectionFactory.testRollbackXaErrorCode).
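A sketch of the expected mapping, assuming the test checks for XAER_NOTA on an unknown Xid (the transaction lookup is illustrative):

    import javax.transaction.xa.XAException;

    // XAER_NOTA is the spec-defined code for an Xid the resource
    // manager does not know about.
    if (tx == null) {
       throw new XAException(XAException.XAER_NOTA);
    }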
Fixed an SSL-related regression in ActiveMQSslConnectionFactoryTest.
1. Changed public fields in ActiveMQClient to private and added getters.
Exposing the thread pool size fields allowed them to be modified in undesired ways,
so I made these fields private and added corresponding getter methods.
In addition, I renamed the field 'globalThreadMaxPoolSize'
to 'globalThreadPoolSize' to be more consistent with the
'globalScheduledThreadPoolSize' field name.
I also adapted some tests to always call clearThreadPools after
the thread pool size configuration has been changed.
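A sketch of the adjusted usage; setGlobalThreadPoolProperties is assumed to be the existing configuration entry point, and the size is illustrative:

    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;

    ActiveMQClient.setGlobalThreadPoolProperties(
          50, ActiveMQClient.getGlobalScheduledThreadPoolSize());
    ActiveMQClient.clearThreadPools();                    // as the adapted tests do
    int size = ActiveMQClient.getGlobalThreadPoolSize();  // reads 50 via the new getter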
2. Protect against injecting null as thread pools.
ActiveMQClient.injectPools allowed null to be passed as the injected thread pools.
The effect was that internal thread pools were created
but never shut down correctly.
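A sketch of the guard (parameter names illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.ScheduledExecutorService;

    public static void injectPools(ExecutorService globalThreadPool,
                                   ScheduledExecutorService scheduledThreadPool) {
       if (globalThreadPool == null || scheduledThreadPool == null) {
          // reject null instead of silently creating internal pools
          // that are never shut down
          throw new IllegalArgumentException("injected thread pools must not be null");
       }
       // ... store the injected pools ...
    }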
Communication between nodes will fail under certain topologies.
JGroups provides something called JForkChannel that can be used on container systems
and injected into Artemis.
For some reason that channel cannot be reused for more than one channel per VM,
and it can never be closed.
I am keeping the trace logs I used to debug this issue in case anything similar happens again.
https://issues.apache.org/jira/browse/ARTEMIS-484
The file copy after the initial synchronization of large messages was broken.
This commit fixes how the buffer is cleaned up before each read, since
a previously unfinished body read would leave the buffer dirty.
I am also keeping the many traces added while debugging this issue, so they will
be useful if anything like this happens again.
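The cleanup amounts to resetting the shared buffer before every read; a sketch with plain NIO (channel and buffer setup omitted):

    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    static int readChunk(FileChannel channel, ByteBuffer buffer) throws Exception {
       buffer.clear();               // was missing: an unfinished body read
                                     // would otherwise leave the buffer dirty
       int bytesRead = channel.read(buffer);
       buffer.flip();                // expose only the bytes just read
       return bytesRead;
    }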
If a large-message (LM) write packet is received from the live server, assume that
the large message exists and create a local reference.
The old behaviour would reject the packet, which could lead to loss of data on
failover; see the JIRA.
https://issues.apache.org/jira/browse/ARTEMIS-463
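A sketch of the tolerant handling; the collection and helper names are hypothetical:

    // On the backup: create the large message on demand instead of
    // rejecting the replicated write for an unknown ID.
    ReplicatedLargeMessage message = largeMessages.get(messageId);
    if (message == null) {
       message = createLargeMessage(messageId); // hypothetical helper
       largeMessages.put(messageId, message);
    }
    message.addBytes(body);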
This includes some extra refactoring of the protocol head, transferring
responsibility to the broker classes in many cases and removing some duplicated code.
This was a team effort from Clebert Suconic and Howard Gao
There is a potential deadlock scenario if two threads call
sendReplicatePacket. Thread 1 registers a Netty callback which
invokes the readyForWrites() method, which locks the callback list and waits
on the replicationLock. Thread 2 enters sendReplicatePacket, acquires
the replicationLock, and is blocked on the callback-list lock. This fix
ensures that sendReplicatePacket blocks on the ReplicationManager, meaning
no interleaving of the Netty callback thread and the wait on the lock can
happen.
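A hypothetical sketch of the fix: both paths synchronize on the ReplicationManager itself, so the two locks can no longer be taken in opposite orders:

    import java.util.ArrayList;
    import java.util.List;

    public class ReplicationManager {
       private final List<Runnable> writeReadyCallbacks = new ArrayList<>();

       public synchronized void sendReplicatePacket(Object packet) {
          // queue/write the packet under the single monitor
       }

       // Invoked by the Netty callback; it now contends on the same monitor
       // as sendReplicatePacket instead of a separate callback-list lock.
       public synchronized void readyForWrites() {
          for (Runnable callback : writeReadyCallbacks) {
             callback.run();
          }
          writeReadyCallbacks.clear();
       }
    }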
It's possible for the latch used for flow control here to get out of sync. In
other words, multiple count-downs can occur between count-ups, so the latch
always has a count > 0. When this situation arises, every single packet
sent to the replica is delayed by 5 seconds.
The solution is essentially to eliminate the latch completely and use a
condition/wait/notify pattern instead.
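A sketch of that pattern, assuming a simple pending-write counter replaces the latch so extra count-downs cannot accumulate:

    public class WriteCredits {
       private final Object lock = new Object();
       private int pending;

       void onSend() {
          synchronized (lock) {
             pending++;
          }
       }

       void onWriteCompleted() {
          synchronized (lock) {
             pending = Math.max(0, pending - 1); // cannot go negative
             lock.notifyAll();
          }
       }

       void awaitDrained(long timeoutMillis) throws InterruptedException {
          long deadline = System.currentTimeMillis() + timeoutMillis;
          synchronized (lock) {
             while (pending > 0) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                   return; // timed out; caller proceeds as before
                }
                lock.wait(remaining);
             }
          }
       }
    }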
This patch fixes a number of bugs in the JDBC journal implementation,
mainly around how it handled transactions. The XA transaction
tests are now enabled to run against both the file and database stores.
When publishing server connectors to other cluster members, the whole buffer was sent.
This fixes ARTEMIS-362 by extracting only the filled part of the buffer for broadcasting.
Added a test case that checks that the packet size does not exceed 1500 bytes with a single connector.
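A sketch of the extraction with a Netty buffer (capacity and contents illustrative):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;

    ByteBuf buff = Unpooled.buffer(1024);        // capacity is illustrative
    // ... encode the connector info into buff ...
    byte[] data = new byte[buff.writerIndex()];  // only the filled part
    buff.getBytes(0, data);
    // 'data' becomes the datagram payload instead of the whole backing array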