This method always throws an exception, so it has been removed and is no
longer used. Call sites in PageFile have been updated to either remove
the usage or throw the error directly.
This fixes an issue on startup of a broker that is configured with
multiple mKahaDB filtered adapters where one is configured with
perDestination=true. Before this fix, a duplicate persistence adapter
could be created because the filter did not check for existing matches.
Patch applied with thanks to Ritesh Adval.
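
A minimal model of the "check for existing matches" guard, assuming a
per-destination registry keyed by destination; the class and field names
below are illustrative, not the actual mKahaDB code:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Function;

    // Hypothetical registry: computeIfAbsent guarantees at most one adapter
    // per destination, even when two threads race on the same key at startup.
    class AdapterRegistry<K, A> {
        private final ConcurrentMap<K, A> adapters = new ConcurrentHashMap<>();

        A adapterFor(K destination, Function<K, A> factory) {
            // look up an existing match first; only create when none exists
            return adapters.computeIfAbsent(destination, factory);
        }
    }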
This commit fixes a bug in KahaDB that caused gaps in sequence ack
tracking for durable subscriptions, which could lead to the appearance
of stuck messages on durable subs when duplicate messages were detected.
The sequence is now correctly rolled back so that no gap is left if the
message is not added to the order index.
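
A sketch of the rollback, assuming (as in KahaDB) that index updates run
under the store's index lock so there is a single writer; the names are
illustrative, not KahaDB's:

    import java.util.Set;

    // Hypothetical single-writer model: assign the next sequence up front
    // and give it back when the message turns out to be a duplicate, so
    // the ack tracking never sees a gap.
    class SequenceAssigner {
        private long nextSequence;

        long add(String messageId, Set<String> orderIndex) {
            long seq = nextSequence++;
            if (!orderIndex.add(messageId)) {
                // duplicate detected: roll the sequence back, leaving no gap
                nextSequence--;
                return -1;
            }
            return seq;
        }
    }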
This adds a check for the case where a duplicate ack is passed to the
store, to make sure that the subscription statistics (if enabled) for a
durable sub do not have their metrics decremented a second time.
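
A minimal illustration of that guard, with hypothetical names: decrement
the per-subscription metric only when the ack actually removes a pending
message, so a duplicate ack finds nothing to remove and leaves the
statistics untouched:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical model, not the actual KahaDB code.
    class DurableSubStats {
        private final Map<String, Set<String>> pendingBySub = new HashMap<>();
        private final Map<String, AtomicLong> countBySub = new HashMap<>();
        private final boolean statsEnabled = true;

        void acknowledge(String subKey, String messageId) {
            Set<String> pending = pendingBySub.get(subKey);
            // a duplicate ack removes nothing, so it cannot reach the decrement
            boolean removed = pending != null && pending.remove(messageId);
            if (removed && statsEnabled) {
                AtomicLong count = countBySub.get(subKey);
                if (count != null) {
                    count.decrementAndGet();
                }
            }
        }
    }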
This commit reduces the memory required in KahaDB for long-running
transactions and transactions with a lot of pending message sends by
clearing the message data from memory when it is no longer needed
instead of keeping it tracked in the pending map.
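
A sketch of the idea, with hypothetical types: once a transactional send
has been written to the journal, the pending map only needs the small
journal location, not a reference to the full message, so the message
data can be garbage collected before commit:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical model of the pending-map change.
    class PendingTracker {
        // a journal location is a few ints, not the whole message body
        record Location(int dataFile, int offset) {}

        private final Map<String, Location> pending = new LinkedHashMap<>();

        void trackSend(String messageId, Location journalLocation) {
            // before: pending.put(messageId, message) kept the message alive
            // after: only the location is retained until commit/rollback
            pending.put(messageId, journalLocation);
        }
    }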
This isolates the validation of the data file length on read and adds
unit tests to verify that we properly fall back to the real file length
when the initial size check fails.
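
A hedged sketch of what such a fallback can look like; the actual KahaDB
check may differ:

    import java.io.File;

    // Hypothetical: trust the recorded length only if it passes a sanity
    // check against the file on disk, otherwise fall back to the real
    // length reported by the file system.
    static long validatedLength(File dataFile, long recordedLength) {
        long realLength = dataFile.length();
        if (recordedLength > 0 && recordedLength <= realLength) {
            return recordedLength;
        }
        // initial size check failed: use the real file length instead
        return realLength;
    }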
This is best practice and prevents unlock from being attempted inside a
finally block when the thread doesn't actually own the lock, which can
happen when the lock attempt itself throws an exception, for example a
call to lockInterruptibly().
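
This is the standard java.util.concurrent idiom: acquire the lock before
entering the try block, so that if the acquisition throws (for example,
lockInterruptibly() is interrupted) the finally block never runs and
unlock() is never called on a lock the thread doesn't own:

    import java.util.concurrent.locks.ReentrantLock;

    class LockUsage {
        private final ReentrantLock lock = new ReentrantLock();

        void doWorkSafely() throws InterruptedException {
            // acquire OUTSIDE the try: if this throws InterruptedException,
            // the finally below is never reached
            lock.lockInterruptibly();
            try {
                doWork(); // critical section
            } finally {
                lock.unlock(); // only runs once the lock is actually held
            }
        }

        // Had lockInterruptibly() been inside the try, an interrupted lock
        // attempt would still reach finally and unlock() would throw
        // IllegalMonitorStateException.

        private void doWork() { }
    }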
A store directory is created by MessageDatabase#getPageFile, which
is called in two cases:
1. KahaDBStore.start() when creating a queue
2. KahaDBStore.size(), which is performed when sending any persistent message
If both methods are called concurrently, it's possible for an IOException
to be thrown from the IOHelper.mkdirs method.
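
The race: File.mkdirs() returns false both on a real failure and when
another thread created the directory first, so a naive "mkdirs or throw"
helper fails under concurrency. A minimal race-tolerant sketch (the real
IOHelper change may differ):

    import java.io.File;
    import java.io.IOException;

    // Re-check isDirectory() after a failed mkdirs(): if another thread
    // won the race and created the directory, that is not an error.
    static void mkdirs(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Failed to create directory: " + dir);
        }
    }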