OptimizedAckTest: use the core API instead of the old ActiveMQ
broker API to check the message count.
JmsQueueTransactionTest#testCloseConsumer: fix a bug in
delivery when prefetchSize is 0.
(InitalReconnectDelayTest) close the connection after the test.
- Added a thread pool executor that combines cached and fixed-size thread pooling.
It behaves like a cached thread pool in that it reuses existing threads and removes
idle threads after a timeout; it limits the maximum number of threads in the pool, but
queues additional requests instead of rejecting them (see the sketch after this list).
- Changed existing code to use the new thread pool instead of a fixed-size thread pool in
all places that are configured with a client thread pool size.
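
A minimal sketch of the idea, with assumed names (BoundedCachedExecutor, TaskQueue);
the real implementation may track idle threads differently:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch: grows like a cached pool up to a hard maximum, then queues.
    public class BoundedCachedExecutor extends ThreadPoolExecutor {

       private static class TaskQueue extends LinkedBlockingQueue<Runnable> {
          private BoundedCachedExecutor pool;

          @Override
          public boolean offer(Runnable r) {
             // If every live thread is busy but the pool may still grow,
             // report "full" so the executor spawns a new thread.
             if (pool.busy.get() >= pool.getPoolSize()
                   && pool.getPoolSize() < pool.getMaximumPoolSize()) {
                return false;
             }
             // Otherwise queue the task for an existing (idle) thread.
             return super.offer(r);
          }
       }

       private final AtomicInteger busy = new AtomicInteger();

       public BoundedCachedExecutor(int maxThreads, long keepAlive, TimeUnit unit) {
          super(0, maxThreads, keepAlive, unit, new TaskQueue(),
                // Pool already at its maximum: queue instead of rejecting.
                (r, e) -> ((TaskQueue) e.getQueue()).add(r));
          ((TaskQueue) getQueue()).pool = this;
       }

       @Override
       protected void beforeExecute(Thread t, Runnable r) {
          busy.incrementAndGet();
       }

       @Override
       protected void afterExecute(Runnable r, Throwable t) {
          busy.decrementAndGet();
       }
    }

Idle threads die after the keep-alive timeout (cached behaviour), the pool never
exceeds maxThreads (fixed behaviour), and the rejection handler turns an at-capacity
submission into a queued task instead of a RejectedExecutionException.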
Temp Queue not deleted when connection is closed.
Enable Stomp in the OpenWire tests because some tests use it.
Remove unused code in OpenWire.
Wrong XA error code returned when the xid is missing
(ActiveMQXAConnectionFactory.testRollbackXaErrorCode).
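
For context, a hedged sketch of the expected behaviour, using the XA spec's
XAER_NOTA code for an unknown xid; the bookkeeping below is illustrative,
not the actual broker code:

    import javax.transaction.xa.XAException;
    import javax.transaction.xa.Xid;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    class XidAwareResource {
       // Illustrative bookkeeping of in-flight transactions.
       private final Set<Xid> knownXids = ConcurrentHashMap.newKeySet();

       public void rollback(Xid xid) throws XAException {
          if (!knownXids.contains(xid)) {
             // XA spec: an unknown xid should yield XAER_NOTA
             // ("not a valid XID"), not a generic XAER_RMERR.
             throw new XAException(XAException.XAER_NOTA);
          }
          // ... perform the actual rollback
       }
    }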
Fix an SSL-related regression in ActiveMQSslConnectionFactoryTest.
1. Changed public fields in ActiveMQClient to private and added getters.
Exposing the thread pool size fields allowed them to be modified in undesired ways.
I made these fields private and added corresponding getter methods.
In addition, I renamed the field 'globalThreadMaxPoolSize'
to 'globalThreadPoolSize' to be more consistent with the
'globalScheduledThreadPoolSize' field name.
I also adapted some tests to always call clearThreadPools after
the thread pool size configuration has been changed.
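
A hedged sketch of the resulting shape (names taken from the description above;
the actual ActiveMQClient code may differ):

    public final class ActiveMQClient {
       // Previously public and mutable from anywhere; now private.
       private static int globalThreadPoolSize;          // was 'globalThreadMaxPoolSize'
       private static int globalScheduledThreadPoolSize;

       public static int getGlobalThreadPoolSize() {
          return globalThreadPoolSize;
       }

       public static int getGlobalScheduledThreadPoolSize() {
          return globalScheduledThreadPoolSize;
       }

       // Tests that change the sizes should call clearThreadPools()
       // afterwards so the pools are rebuilt with the new configuration.
       public static void clearThreadPools() { /* ... */ }
    }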
2. Protect against injecting null as thread pools
ActiveMQClient.injectPools allowed null to be passed as the injected thread pools.
The effect was that internal thread pools were created,
but never shut down correctly.
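
Continuing the sketch of ActiveMQClient above, a hedged illustration of the
guard (signature assumed):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.ScheduledExecutorService;

    // Sketch only; the real injectPools signature may differ.
    public static void injectPools(ExecutorService globalThreadPool,
                                   ScheduledExecutorService scheduledThreadPool) {
       if (globalThreadPool == null || scheduledThreadPool == null) {
          throw new IllegalArgumentException("injected thread pools must not be null");
       }
       // ... adopt the injected pools; rejecting null here prevents the
       // client from silently creating internal pools it never shuts down.
    }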
Communication between nodes will fail under certain topologies.
JGroups has something called JForkChannel that could be used on container systems
and injected into Artemis.
For some reason that channel cannot be reused for more than one channel per VM,
and it cannot ever be closed.
I am keeping the trace logs I used to debug this issue in case anything similar happens again.
https://issues.apache.org/jira/browse/ARTEMIS-484
The file copy after the initial synchronization of large messages was broken.
This commit fixes how the buffer is cleared before each read, since
a previously unfinished body read would leave the buffer dirty.
I am also keeping the many traces I added while debugging this issue, so they will
be useful if anything like this happens again.
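
A hedged illustration of the core idea (plain NIO, not the actual Artemis code):
clear the shared buffer before every read so leftovers from an unfinished body
read cannot leak into the copy.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    class LargeMessageCopy {
       static void copy(FileChannel source, FileChannel target) throws IOException {
          ByteBuffer buffer = ByteBuffer.allocate(8 * 1024);
          while (true) {
             buffer.clear();           // reset position/limit before each read
             if (source.read(buffer) == -1) {
                break;                 // end of file
             }
             buffer.flip();            // switch from filling the buffer to draining it
             while (buffer.hasRemaining()) {
                target.write(buffer);
             }
          }
       }
    }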
If a large-message (LM) write packet is received from the live server, assume
that the large message exists and create a local reference.
The old behaviour would reject the packet, which could lead to data loss on
failover; see the JIRA below.
https://issues.apache.org/jira/browse/ARTEMIS-463
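
A hedged sketch of the lookup-or-create idea on the backup (ReplicatedLargeMessage,
createLargeMessage, and addBytes are illustrative names, not the actual Artemis API):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class LargeMessageWriteHandler {
       interface ReplicatedLargeMessage {
          void addBytes(byte[] bytes);
       }

       private final Map<Long, ReplicatedLargeMessage> largeMessages = new ConcurrentHashMap<>();

       void onLargeMessageWrite(long messageId, byte[] body) {
          // Old behaviour: unknown id -> reject the packet (data loss on failover).
          // New behaviour: assume the message exists on the live server and
          // create a local reference for it.
          largeMessages
             .computeIfAbsent(messageId, this::createLargeMessage)
             .addBytes(body);
       }

       private ReplicatedLargeMessage createLargeMessage(long id) {
          return bytes -> { /* append to the local large-message file */ };
       }
    }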
This includes some extra refactoring on the protocol head, transferring responsibility
to the broker classes in many cases and removing some duplicated code.
This was a team effort by Clebert Suconic and Howard Gao.
There is a potential deadlock scenario if two threads call
sendReplicatePacket. Thread 1 registers a Netty callback which
invokes the readyForWrites() method; that method locks the callback list
and then waits on the replicationLock. Thread 2 enters sendReplicatePacket,
acquires the replicationLock, and blocks on the callback list lock. This fix
makes sendReplicatePacket block on the ReplicationManager itself, so the
Netty callback thread and the waiting sender can no longer interleave
their lock acquisition.
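
A hedged illustration of the lock-ordering inversion (simplified names, not the
actual Artemis code):

    class ReplicationLocksSketch {
       private final Object callbackListLock = new Object();
       private final Object replicationLock = new Object();

       // Thread 1 (Netty callback): callbackListLock -> replicationLock
       void readyForWrites() {
          synchronized (callbackListLock) {
             synchronized (replicationLock) {
                // flush pending packets
             }
          }
       }

       // Thread 2 (sender): replicationLock -> callbackListLock.
       // Opposite order: each thread can hold one lock while waiting
       // forever for the other.
       void sendReplicatePacket(Runnable packet) {
          synchronized (replicationLock) {
             synchronized (callbackListLock) {
                // register callback and write
             }
          }
       }
    }

The fix serializes both paths on a single monitor (the ReplicationManager itself),
so the two acquisition orders can never interleave.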