Adjust slow-consumer detection logic to use the number of messages in
the queue and not just the number of messages added since the last
check. This means the getRate() method now returns the rate of messages
which it *could* have dispatched since the last check rather than the
rate at which it received messages. This is a more reliable metric and
ensures the slow-consumer detection logic doesn't flag a consumer as
slow unfairly. The added reliability comes at a performance cost,
however, since getMessageCount() must lock the queue.
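A minimal sketch of the adjusted calculation (class, field, and
parameter names here are hypothetical, not the broker's actual
consumer implementation):

    // Rate of messages the consumer *could* have dispatched: what is
    // still sitting in the queue plus what it handled since last check.
    public class SlowConsumerRateCheck {
       private long handledAtLastCheck = 0;

       public float getRate(long queueMessageCount, long totalHandled,
                            long checkPeriodMillis) {
          long handledSinceLastCheck = totalHandled - handledAtLastCheck;
          handledAtLastCheck = totalHandled;
          // queueMessageCount comes from getMessageCount(), which must
          // lock the queue -- the performance cost mentioned above
          long couldHaveDispatched = queueMessageCount + handledSinceLastCheck;
          return couldHaveDispatched * 1000.0f / checkPeriodMillis;
       }
    }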
As part of my refactoring on AMQP, the broker should not rely on
application properties for any broker-semantic changes on delivery.
I am removing all access to those properties now so we can deal with
this properly post 2.0.0.
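To illustrate the direction (a hedged sketch using the proton-j API;
the annotation key shown is illustrative): broker semantics should be
read from message annotations, never from application properties.

    import java.util.Map;
    import org.apache.qpid.proton.amqp.Symbol;
    import org.apache.qpid.proton.amqp.messaging.MessageAnnotations;
    import org.apache.qpid.proton.message.Message;

    public final class BrokerSemantics {
       // illustrative key; real keys live in the AMQP protocol module
       private static final Symbol ROUTING_TYPE = Symbol.valueOf("x-opt-routing-type");

       public static Object routingTypeOf(Message message) {
          // message.getApplicationProperties() is deliberately not consulted
          MessageAnnotations annotations = message.getMessageAnnotations();
          if (annotations == null || annotations.getValue() == null) {
             return null; // no broker semantics carried on this delivery
          }
          Map<Symbol, Object> values = annotations.getValue();
          return values.get(ROUTING_TYPE);
       }
    }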
Artemis exposes the createQueue() method to management consoles such
as JON. If the queue to be created already exists, an ActiveMQException
is thrown back to the console, which then hits a ClassNotFoundException
when deserializing the exception.
The same issue also affects the AddressControl.sendMessage() method.
The fix is to throw a common Java exception such as
IllegalStateException instead.
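A sketch of the fix, assuming a hypothetical wrapper around the server
API rather than the actual management impl classes:

    import org.apache.activemq.artemis.api.core.SimpleString;
    import org.apache.activemq.artemis.core.server.ActiveMQServer;

    public class QueueCreation {
       private final ActiveMQServer server;

       public QueueCreation(ActiveMQServer server) {
          this.server = server;
       }

       public void createQueue(String address, String name) throws Exception {
          if (server.locateQueue(SimpleString.toSimpleString(name)) != null) {
             // A plain java.lang exception deserializes on any console;
             // an ActiveMQException would raise ClassNotFoundException
             // in a console JVM without the Artemis classes.
             throw new IllegalStateException("Queue already exists: " + name);
          }
          server.createQueue(SimpleString.toSimpleString(address),
                             SimpleString.toSimpleString(name), null, true, false);
       }
    }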
With this change we can send and receive messages in their raw format,
without requiring conversions to Core (a sketch follows the list below):
- MessageImpl and ServerMessage are removed as part of this change
- AMQPMessage and CoreMessage carry the specialized message format for each protocol
- The protocol manager is now responsible for sending the message
- The message provides an encoder for journal and paging
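A rough sketch of the shape this implies; the interface and method
names below are illustrative, not the actual Artemis message API:

    import io.netty.buffer.ByteBuf;

    // Each protocol keeps its own wire format and hands the journal
    // and paging layers an encoder, so no conversion to Core is needed.
    public interface ProtocolMessage {
       // persisted size, so the journal can size its records
       int getEncodeSize();

       // write the raw protocol bytes for journal/paging storage
       void encode(ByteBuf buffer);

       // read the raw protocol bytes back from journal/paging storage
       void decode(ByteBuf buffer);
    }

    // AMQPMessage and CoreMessage would each implement this, keeping
    // the bytes in their native format end to end.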
Due to recent changes, the web component is shut down by the server,
but the shutdown flag is lost, so the web component's cleanup check
method never gets called and the web component's tmp directory is left
behind after the user stops the broker (e.g. with Ctrl-C).
The fix is to add a suitable API that allows passing the flag so the
web component can make sure its tmp directory gets cleaned up properly
before the VM exits.
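A sketch of the API shape (hedged; names may differ from the actual
change):

    // The extra boolean lets the server tell the component whether this
    // stop is a full shutdown, in which case the tmp dir must be removed.
    public interface ServiceComponent {
       void start() throws Exception;

       void stop(boolean isShutdown) throws Exception;
    }

On a normal stop the web component keeps its tmp dir for a restart;
only when isShutdown is true does it delete the dir before the VM
exits.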
In a cluster of replicated live/backup pairs, if a backup crashes and
then its live crashes, the cluster retains the topology information of
the live. When the live server restarts, it checks the cluster to see
if its nodeID is present (which it will be) and consequently activates
as a backup rather than a live. To prevent this situation, an
additional check is necessary to determine whether the server with the
matching nodeID is actually active, which is done by attempting to
make a connection to it.
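A sketch of such a check using the Artemis client API (illustrative,
not the actual activation code):

    import org.apache.activemq.artemis.api.core.TransportConfiguration;
    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
    import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
    import org.apache.activemq.artemis.api.core.client.ServerLocator;

    public final class NodeActiveCheck {
       // true if a server is actually listening on the connector the
       // topology associates with our nodeID; false means the topology
       // entry is stale and we are safe to activate as live.
       public static boolean isNodeActive(TransportConfiguration connector) {
          ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(connector);
          try {
             ClientSessionFactory factory = locator.createSessionFactory();
             factory.close();
             return true;
          } catch (Exception e) {
             return false;
          } finally {
             locator.close();
          }
       }
    }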