Currently, with 500K+ queues, the cleanup step of TempQueueCleanerUpper
requires invoking WildcardAddressManager#getDirectBindings, which is
O(k) in the number of queues.
Method profiling shows this can consume up to 95% of CPU time when many
of these temporary queues need to be cleaned up.
Add a new map to keep track of the direct bindings, and add a test
assertion that fails if we don't properly remove entries from it.
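A minimal sketch of the idea, assuming an index keyed by address (the class and field names here are illustrative, not the actual WildcardAddressManager internals):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Illustrative only: keep a per-address index of direct bindings so that a
// lookup is O(1) instead of a scan over all bindings/queues.
class DirectBindingIndex<B> {
   private final Map<String, Set<B>> directBindings = new ConcurrentHashMap<>();

   void add(String address, B binding) {
      directBindings.computeIfAbsent(address, a -> new CopyOnWriteArraySet<>()).add(binding);
   }

   void remove(String address, B binding) {
      directBindings.computeIfPresent(address, (a, set) -> {
         set.remove(binding);
         // drop empty entries so a test assertion can verify nothing is left behind
         return set.isEmpty() ? null : set;
      });
   }

   Set<B> getDirectBindings(String address) {
      return directBindings.getOrDefault(address, Set.of());
   }
}
```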
This commit does the following:
- deprecate all QueueConfiguration ctors
- add `of` static factory methods for all the deprecated ctors
- replace all uses of the deprecated ctors with their `of` counterparts
This makes the code more concise and readable.
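For example (a hedged sketch; the name-only constructor and factory shown here are assumptions about the simplest case):

```java
import org.apache.activemq.artemis.api.core.QueueConfiguration;

public class QueueConfigurationExample {
   public static void main(String[] args) {
      // Before: deprecated constructor
      QueueConfiguration before = new QueueConfiguration("my.queue");

      // After: `of` static factory method
      QueueConfiguration after = QueueConfiguration.of("my.queue");

      System.out.println(before.getName() + " / " + after.getName());
   }
}
```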
This commit does the following:
- deprecate the verbosely named `toSimpleString` static factory
methods
- add `of` static factory methods for all the ctors
- replace all uses of the normal ctors with their `of` counterparts
This makes the code more concise and readable.
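A hedged illustration, assuming this refers to SimpleString (as the `toSimpleString` name suggests); the signatures shown are assumptions:

```java
import org.apache.activemq.artemis.api.core.SimpleString;

public class SimpleStringExample {
   public static void main(String[] args) {
      // Before: verbose static factory and plain constructor
      SimpleString a = SimpleString.toSimpleString("example.address");
      SimpleString b = new SimpleString("example.address");

      // After: concise `of` static factory
      SimpleString c = SimpleString.of("example.address");

      System.out.println(a.equals(b) && b.equals(c));
   }
}
```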
This is fixing PagingTest::testPagingStoreDestroyed(db=derby) (or any other DB).
It could eventually also fail on the journal.
With this race, the cleanup could act on an already-destroyed folder and issue an IOException that stops the server.
To hit it, the server would need to be actively cleaning messages while the queue is being removed.
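A minimal sketch of the kind of guard this implies (class, field, and method names are illustrative, not the actual PagingStore code):

```java
// Illustrative: once the store is destroyed, the cleanup thread must not touch
// the folder that queue removal has already deleted.
class PagingCleanupGuard {
   private final Object lock = new Object();
   private boolean destroyed;

   void destroy() {
      synchronized (lock) {
         destroyed = true; // queue removal marks the store before deleting the folder
      }
   }

   void cleanup(Runnable doCleanup) {
      synchronized (lock) {
         if (destroyed) {
            return; // lost the race: nothing left to clean, avoid the IOException
         }
         doCleanup.run();
      }
   }
}
```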
I'm just adding some context to the code here. I have had to explain these variables a few times to different people,
so I guess it's time to make that little explanation part of the code.
We observed this assertion error in the Netty collection used for acks:
java.lang.AssertionError: null is not a legitimate internal value. Concurrent Modification?
at io.netty.util.collection.IntObjectHashMap.toExternal(IntObjectHashMap.java:103) ~[netty-common-4.1.109.Final.jar:4.1.109.Final]
at io.netty.util.collection.IntObjectHashMap.access$900(IntObjectHashMap.java:37) ~[netty-common-4.1.109.Final.jar:4.1.109.Final]
at io.netty.util.collection.IntObjectHashMap$PrimitiveIterator.value(IntObjectHashMap.java:650) ~[netty-common-4.1.109.Final.jar:4.1.109.Final]
at io.netty.util.collection.IntObjectHashMap$2$1.next(IntObjectHashMap.java:234) ~[netty-common-4.1.109.Final.jar:4.1.109.Final]
This change avoids the explicit cleanup and relies on GC instead.
PagingLeakTest is being added to make sure the explicit cleanup is actually not needed.
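A hedged illustration of the underlying hazard (not the broker's actual ack code): io.netty.util.collection.IntObjectHashMap is not thread-safe, so mutating it from one thread while another iterates can trip the internal-value assertion above.

```java
import io.netty.util.collection.IntObjectHashMap;
import io.netty.util.collection.IntObjectMap;

public class IntObjectHashMapRace {
   public static void main(String[] args) throws InterruptedException {
      IntObjectMap<String> acks = new IntObjectHashMap<>();
      for (int i = 0; i < 100_000; i++) {
         acks.put(i, "ack-" + i);
      }

      // One thread removes entries while the main thread iterates; with
      // assertions enabled (-ea) this can surface
      // "null is not a legitimate internal value. Concurrent Modification?"
      Thread remover = new Thread(() -> {
         for (int i = 0; i < 100_000; i++) {
            acks.remove(i);
         }
      });
      remover.start();

      for (IntObjectMap.PrimitiveEntry<String> entry : acks.entries()) {
         entry.value(); // may observe a slot being cleared concurrently
      }
      remover.join();
   }
}
```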
This commit includes the following changes:
- Management operations to get success & failure counts for authn and
authz along with the corresponding audit logging (see the sketch after
this list).
- Export the aforementioned authn & authz metrics.
- Export metrics for the underlying authn & authz caches including the
ability to enable/disable them.
- Update metrics tests to validate tags in addition to keys and values.
- Update documentation to explain new functionality and clarify
existing metric tags.
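A hedged sketch of what exporting such counters can look like with Micrometer (the metric and tag names below are assumptions for illustration, not necessarily the names the broker exports):

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class SecurityMetricsSketch {
   private final Counter authnSuccess;
   private final Counter authnFailure;

   SecurityMetricsSketch(MeterRegistry registry) {
      // Hypothetical metric/tag names: one counter per outcome, distinguished by a tag.
      authnSuccess = Counter.builder("authentication.count").tag("result", "success").register(registry);
      authnFailure = Counter.builder("authentication.count").tag("result", "failure").register(registry);
   }

   void onAuthentication(boolean succeeded) {
      (succeeded ? authnSuccess : authnFailure).increment();
   }

   public static void main(String[] args) {
      MeterRegistry registry = new SimpleMeterRegistry();
      SecurityMetricsSketch metrics = new SecurityMetricsSketch(registry);
      metrics.onAuthentication(true);
      metrics.onAuthentication(false);
      registry.getMeters().forEach(m -> System.out.println(m.getId()));
   }
}
```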
The original commit (1ee3e884b7) for this
issue wasn't completely correct. This commit fixes the remaining issues so
that both messageCount and scheduledMessageCount are now accurate when
a scheduled message is removed by its ID.
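A minimal sketch of the invariant involved (the fields and methods are illustrative, not the broker's actual queue implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative: removing a scheduled message by ID must decrement both the
// overall message count and the scheduled message count, or the two drift apart.
class ScheduledCounterSketch {
   private final Map<Long, Object> scheduled = new HashMap<>();
   private final AtomicLong messageCount = new AtomicLong();
   private final AtomicLong scheduledMessageCount = new AtomicLong();

   synchronized void schedule(long id, Object message) {
      scheduled.put(id, message);
      messageCount.incrementAndGet();
      scheduledMessageCount.incrementAndGet();
   }

   synchronized boolean removeScheduledById(long id) {
      if (scheduled.remove(id) == null) {
         return false;
      }
      messageCount.decrementAndGet();          // keep both counters...
      scheduledMessageCount.decrementAndGet(); // ...in sync on removal
      return true;
   }
}
```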
PagingStore is supposed to send an event to the replica for every file that is closed.
There are a few situations where the sendClose is missed, and that could generate leaks on the target.
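A hedged sketch of the pattern this implies (the types and method names are illustrative, not the actual PagingStore code):

```java
import java.io.Closeable;
import java.io.IOException;

// Illustrative only: every path that closes a page file should also notify the
// replica, otherwise the target keeps tracking a file that will never be closed.
class PageFileCloser {
   interface Replica {
      void sendClose(String fileName);
   }

   private final Replica replica;

   PageFileCloser(Replica replica) {
      this.replica = replica;
   }

   void close(Closeable file, String fileName) throws IOException {
      try {
         file.close();
      } finally {
         replica.sendClose(fileName); // always sent, even if close() throws
      }
   }
}
```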
In this commit I'm storing a binding record with the address-settings for the correct size.
This also validates eventual merges of the AddressSettings in the same namespace.
This is a list of improvements done as part of this commit / task:
* Page Transactions on the mirror target are now optional.
If a mirror was interrupted while the target destination was paging, duplicate detection would be ineffective unless you used paged transactions.
* Users can now configure the ack manager retry intervals (see the sketch after this list).
Say you need some time to remove a consumer from a target mirror. The delivering references would prevent acks from happening. You can allow bigger retry intervals and more retries by tinkering with the ack manager retry parameters.
* AckManager is now restarted independently of incoming acks.
The AckManager was only restarted when new acks were coming in. If you stopped receiving acks on a target server and restarted that server with pending acks, those acks would never be exercised. The AckManager is now restarted as soon as the server is started.
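A hedged sketch of what a configurable retry schedule amounts to (the parameter names are illustrative, not the broker's actual configuration keys):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Illustrative only: retry a pending ack on a configurable interval up to a
// configurable number of attempts, so a slow consumer removal on the mirror
// target does not cause the ack to be dropped.
class AckRetrySketch {
   private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
   private final long retryDelayMillis;
   private final int maxAttempts;

   AckRetrySketch(long retryDelayMillis, int maxAttempts) {
      this.retryDelayMillis = retryDelayMillis;
      this.maxAttempts = maxAttempts;
   }

   // tryAck returns true once the ack could be applied
   // (e.g. the reference is no longer being delivered).
   void retryAck(BooleanSupplier tryAck) {
      retry(tryAck, 1);
   }

   private void retry(BooleanSupplier tryAck, int attempt) {
      if (tryAck.getAsBoolean() || attempt >= maxAttempts) {
         return;
      }
      scheduler.schedule(() -> retry(tryAck, attempt + 1), retryDelayMillis, TimeUnit.MILLISECONDS);
   }
}
```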