If, for whatever reason, the response for a packet sent with blocking
semantics is never returned, it's possible that an async response
received in the intervening time will be interpreted as the current
response. This is because ChannelImpl does not verify the correlation
ID set on the response packet when it is received.
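A hypothetical sketch of the missing guard (generic names, not the actual ChannelImpl/Packet API):

```java
// Hypothetical sketch: only accept a response whose correlation ID matches
// the blocking request currently in flight; ignore stray async responses.
public class BlockingResponseHolder {
   private long expectedCorrelationID;
   private Object response;

   public synchronized void expect(long correlationID) {
      this.expectedCorrelationID = correlationID;
      this.response = null;
   }

   public synchronized void onResponse(long correlationID, Object packet) {
      if (correlationID != expectedCorrelationID) {
         return; // stray async response; don't complete the blocking call with it
      }
      response = packet;
      notifyAll();
   }
}
```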
Small change, but suppose there is ever a leak on the journal: removing the paging clause would allow other leaks to be captured as well.
This is currently not an issue and the test should still pass.
No need for the iterations, and a few minor improvements to the test.
This method was taking up to 15 seconds on the CI, and since the same method is reused a couple of times, many minutes were being wasted.
This commit fixes 4 distinct issues with divert configuration encoding:
- The encoding size was calculated incorrectly in three ways:
  - Using 0 instead of `DataConstants.SIZE_NULL` when no transformer is
    defined, resulting in a calculated size discrepancy of -1.
  - Using `DataConstants.INT` instead of `DataConstants.SIZE_INT` for
    the number of transformer properties, resulting in a calculated size
    discrepancy of +2 when using a configuration with a transformer.
  - Using `BufferHelper.sizeOfNullableBoolean` instead of
    `DataConstants.SIZE_BOOLEAN` for the `exclusive` property, resulting
    in a calculated size discrepancy of +1.
- Encoding was using `writeString` instead of `writeNullableString` for
  the name of the transformer class, resulting in an actual buffer size
  discrepancy of -1 (see the sizing sketch after this list).
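A minimal sketch of the corrected sizing and encoding for the fields called out above, assuming a simplified record shape (illustrative only, not the literal divert persistence code; the non-null transformer sizing path is omitted):

```java
import org.apache.activemq.artemis.api.core.ActiveMQBuffer;
import org.apache.activemq.artemis.utils.DataConstants;

public class DivertEncodingSketch {

   // Only the fields called out above; everything else is omitted.
   static int encodeSizeDelta(String transformerClassName) {
      int size = 0;
      if (transformerClassName == null) {
         size += DataConstants.SIZE_NULL;   // was 0: calculated size short by 1
      }
      size += DataConstants.SIZE_INT;       // was DataConstants.INT (a type marker, not a size)
      size += DataConstants.SIZE_BOOLEAN;   // was BufferHelper.sizeOfNullableBoolean (1 byte too many)
      return size;
   }

   static void encodeTransformerName(ActiveMQBuffer buffer, String transformerClassName) {
      // Nullable write, so the bytes written match the calculated size when the name is null.
      buffer.writeNullableString(transformerClassName);   // was writeString: buffer 1 byte short
   }
}
```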
Aside from these fixes, this commit also:
- Updates the divert storage test to force a reload from disk, which
  will also trigger this problem.
- Adds new tests specifically for encoding & decoding, including a test to
  verify known bytes during encoding.
Lastly, it's worth noting that this *won't fix* bad data that was
already stored to disk by an older broker or bad data that comes over
the wire from an older broker (e.g. if an older primary broker is paired
with a newer backup).
Files will leak on the target and messages will not be received after failover.
Messages will also be written to the wrong destinations on the replica. The underlying leak is actually between destinations; the file leak is a consequence of it.
I usually keep the test and the fix in the same commit, but in this case I have been heavily validating the server with and without the fix,
so I will make an exception and keep the fix and test separate.
When a bridge is stopped, it doesn't wait for pending send
acknowledgements to arrive. However, when a bridge is paused, it does
wait. The behavior should be consistent and, more importantly,
configurable. This commit implements these improvements and generally
refactors BridgeImpl to clarify and simplify the code. In total, this
commit includes the following changes:
- Removes the hard-coded 60-second timeout for pending acks when
  pausing the bridge and adds a new config parameter (i.e.
  "pending-ack-timeout"); see the usage sketch after this list.
- Applies the new pending-ack-timeout when the bridge is stopped.
- Updates existing and adds new logging messages for clarity.
- De-duplicates code for sending bridge-related notifications.
- Avoids converting bridge name to/from SimpleString.
- Removes unnecessary comments.
- Renames variables & functions for clarity.
- Replaces the `started`, `stopping`, & `active` booleans with a
single `state` variable which is an enum.
- Adds `final` to a few variables that were functionally final.
- Synchronizes `stop` & `pause` methods to add safety when invoked
concurrently with `handle` (since both deal with `state` and execute
runnables on the ordered executor).
- Reorganizes and removes a few methods for clarity.
- Relocates `connect` method directly into `ConnectRunnable` (mirroring
the structure of the `StopRunnable` and `PauseRunnable`).
- Eliminates unnecessary variables in `ConnectRunnable` and
`ScheduledConnectRunnable`.
- Adds tests to verify the pending ack timeout works as expected with
  both `stop` & `pause`, using both regular and large messages.
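A usage sketch, assuming the new parameter maps to a `setPendingAckTimeout` accessor on `BridgeConfiguration` (the accessor name is an assumption; only the "pending-ack-timeout" parameter name comes from this commit):

```java
import org.apache.activemq.artemis.core.config.BridgeConfiguration;

// Assumed accessor; only the "pending-ack-timeout" parameter name is from this commit.
BridgeConfiguration bridgeConfig = new BridgeConfiguration()
   .setName("my-bridge")
   .setQueueName("source.queue")
   .setForwardingAddress("target.address")
   .setPendingAckTimeout(30_000); // wait up to 30s for pending send acks on stop/pause
```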
Handle any exceptions from the proton transport and set the error on the
transport so it is processed in the event dispatch cycle, by adding handling
of the transport error event.
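A minimal sketch of reacting to that event via proton-j's handler API (illustrative; not the broker's actual handler code):

```java
import org.apache.qpid.proton.amqp.transport.ErrorCondition;
import org.apache.qpid.proton.engine.BaseHandler;
import org.apache.qpid.proton.engine.Event;

public class TransportErrorLogger extends BaseHandler {
   @Override
   public void onTransportError(Event event) {
      // The error condition set on the transport when the exception was caught
      // surfaces here, during the normal event dispatch cycle.
      ErrorCondition condition = event.getTransport().getCondition();
      if (condition != null) {
         System.err.println("Transport error: " + condition.getDescription());
      }
   }
}
```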
This commit uses lambdas or method references wherever possible. There
are still a handful of places that look like they could be changed but
can't, mainly because they use "this", and the meaning of "this"
changes when using a lambda.
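A small, generic illustration of the `this` difference (not code from this commit):

```java
public class Demo {
   Runnable anonymous = new Runnable() {
      @Override
      public void run() {
         // 'this' is the anonymous Runnable instance
         System.out.println(this.getClass().getName());
      }
   };

   Runnable lambda = () -> {
      // 'this' is the enclosing Demo instance, so a direct conversion
      // would change behavior wherever 'this' is used.
      System.out.println(this.getClass().getName());
   };
}
```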
This test spins a concurrent call on getDirectBindings.
Since I added a synchronization point to get the Bindings, the test could
take up to 5 seconds, varying between 500ms and 5 seconds.
Adding Thread.sleep(1) to this loop solved the issue, as it now lets other threads do work,
since I'm starting more executors than cores I have on my box.
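A minimal sketch of the pattern, with the concurrent call abstracted behind a Runnable (the real test drives getDirectBindings):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinWithYield {
   // Spin a concurrent call, sleeping 1ms per iteration so other threads can make
   // progress when more executors are started than there are cores available.
   static void spin(AtomicBoolean running, Runnable call) throws InterruptedException {
      while (running.get()) {
         call.run();      // e.g. the concurrent getDirectBindings invocation
         Thread.sleep(1); // the small yield that removed the 500ms-5s variance
      }
   }
}
```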
Currently, with 500K+ queues, the cleanup step of TempQueueCleanerUpper
requires invoking WildcardAddressManager#getDirectBindings, which is
O(k) in the number of queues.
From method profiling, this can consume up to 95% of our CPU time when
needing to clean up many of these.
Add a new map to keep track of the direct bindings, and add a test
assertion that fails if we don't properly remove entries from it.
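A minimal sketch of the idea with illustrative names (not the actual WildcardAddressManager fields): keep a dedicated map from address to its direct bindings so lookup no longer scans every binding.

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative index of direct (non-wildcard) bindings keyed by address.
public class DirectBindingsIndex<B> {
   private final Map<String, Collection<B>> directBindings = new ConcurrentHashMap<>();

   public void add(String address, B binding) {
      directBindings.computeIfAbsent(address, a -> new CopyOnWriteArrayList<>()).add(binding);
   }

   public void remove(String address, B binding) {
      directBindings.computeIfPresent(address, (a, list) -> {
         list.remove(binding);
         return list.isEmpty() ? null : list; // drop empty entries so the map doesn't leak
      });
   }

   // O(1) lookup instead of the O(k) scan over all bindings.
   public Collection<B> getDirectBindings(String address) {
      return directBindings.getOrDefault(address, List.of());
   }
}
```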
When setting expiration on the AMQPMessage, the AMQP header TTL value
should be read as an unsigned integer and as such should use the longValue
API of UnsignedInteger to get the right value to set expiration.
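A minimal sketch of the point using proton-j's types (illustrative, not the actual AMQPMessage code):

```java
import org.apache.qpid.proton.amqp.UnsignedInteger;
import org.apache.qpid.proton.amqp.messaging.Header;

// TTL is an AMQP uint; intValue() would go negative for values above
// Integer.MAX_VALUE, so longValue() must be used to compute the expiration.
static long expirationFor(Header header, long sendTimeMillis) {
   UnsignedInteger ttl = header.getTtl();
   return ttl == null ? 0 : sendTimeMillis + ttl.longValue();
}
```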
When checking if address federation can be done, the manager needs to look
at the policy-level settings before looking at the federation- or connector-level
settings for AMQP credits.
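A hypothetical precedence sketch (the names and the "unset" convention are assumptions; only the ordering, policy before federation/connector, comes from this commit):

```java
// Hypothetical: resolve AMQP credits, preferring the policy-level setting.
static int effectiveCredits(Integer policyCredits, Integer federationCredits, int connectorDefault) {
   if (policyCredits != null && policyCredits >= 0) {
      return policyCredits;       // policy-level wins when it is set
   }
   if (federationCredits != null && federationCredits >= 0) {
      return federationCredits;   // then federation-level
   }
   return connectorDefault;       // finally the connector-level default
}
```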
No semantic changes in this commit;
I was just exploring the possibility of
issues with paging and large messages combined,
and I decided to keep the test.
This commit does the following:
- deprecate all QueueConfiguration ctors
- add `of` static factory methods for all the deprecated ctors
- replace any uses of the normal ctors with the `of` counterparts
This makes the code more concise and readable.
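A small before/after sketch of the pattern (shown with the simple name-based form; the fluent setters are unchanged):

```java
import org.apache.activemq.artemis.api.core.QueueConfiguration;
import org.apache.activemq.artemis.api.core.RoutingType;

// Before: a now-deprecated constructor
QueueConfiguration legacy = new QueueConfiguration("my.queue");

// After: the equivalent static factory method
QueueConfiguration current = QueueConfiguration.of("my.queue")
   .setAddress("my.address")
   .setRoutingType(RoutingType.ANYCAST);
```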