The original commit (1ee3e884b7) for this issue wasn't completely
correct. This commit fixes the remaining problems so that both the
messageCount and the scheduledMessageCount are now accurate when a
scheduled message is removed by its ID.
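As a rough illustration only, with hypothetical names rather than the actual queue code, the invariant restored here is that both counters are adjusted together:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch, not the real QueueImpl: removing a scheduled message
// by its ID must update both counters, otherwise messageCount and
// scheduledMessageCount drift apart.
public class ScheduledCounterSketch {
   private final Map<Long, Object> scheduledMessages = new ConcurrentHashMap<>();
   private final AtomicLong messageCount = new AtomicLong();
   private final AtomicLong scheduledMessageCount = new AtomicLong();

   public void schedule(long id, Object message) {
      scheduledMessages.put(id, message);
      messageCount.incrementAndGet();
      scheduledMessageCount.incrementAndGet();
   }

   public boolean removeScheduledMessage(long id) {
      if (scheduledMessages.remove(id) == null) {
         return false;
      }
      // Both counters must be adjusted together for the stats to stay accurate.
      scheduledMessageCount.decrementAndGet();
      messageCount.decrementAndGet();
      return true;
   }
}
```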
If the writer is closed while flow-controlled write attempts are pending in the
connection executor, an NPE is thrown. This should be avoided, since the closed
state should cause the pending writes to stop. Add the fix and a test that
verifies this behavior.
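A minimal sketch of the guard, assuming hypothetical names rather than the real writer classes:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Rough sketch: once the writer is closed, pending flow-controlled write
// attempts queued on the connection executor are dropped instead of touching
// state that was released on close.
public class GuardedWriterSketch {
   private final Queue<Runnable> pendingWrites = new ArrayDeque<>();
   private boolean closed;

   public synchronized void queueWrite(Runnable write) {
      if (!closed) {
         pendingWrites.add(write);
      }
   }

   public synchronized void close() {
      closed = true;
      pendingWrites.clear();   // released resources must not be used afterwards
   }

   public synchronized void resumeWrites() {
      if (closed) {
         return;               // without this check the resumed writes could hit released state and NPE
      }
      Runnable write;
      while ((write = pendingWrites.poll()) != null) {
         write.run();
      }
   }
}
```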
This commit fixes the deadlock described on ARTEMIS-3622 by moving the
synchronization "up" a level from the MQTTSession to the
MQTTConnectionManager. It also eliminates the synchronization on the
MQTTSessionState in the MQTTConnectionManager because it's no longer
needed. This change should not only eliminate the deadlock, but also
modestly improve performance.
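A minimal sketch of the locking shape after the change, using hypothetical classes rather than the real MQTT ones:

```java
// Sketch of the locking shape only: connect and disconnect are serialized by a
// single lock held at the connection manager level, so there is no longer a
// second lock on the session state that another thread could acquire in the
// opposite order and deadlock against.
public class ConnectionManagerSketch {
   private final Object connectionLock = new Object();

   public void connect(String clientId) {
      synchronized (connectionLock) {
         // look up or create the session state and wire up the session
      }
   }

   public void disconnect(String clientId) {
      synchronized (connectionLock) {
         // tear down the session and clean up its state under the same lock
      }
   }
}
```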
There is no test associated with this commit as I wasn't able to
reproduce the deadlock with any kind of straightforward test. There was
a test linked on the Jira, but it involved intrusive and fragile
scaffolding and wasn't ultimately tenable. That said, I did test this
fix with that test and it was successful. In any case, I think static
analysis should be sufficient here as the changes are pretty
straightforward.
PagingStore is supposed to send an event to the replica for every file that is closed.
There are a few situations where the sendClose is missed, which could cause leaks on the target.
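A minimal sketch of the intent, with hypothetical names:

```java
// Sketch only: the close event must reach the replica for every page file that
// is closed, including error paths, otherwise the target keeps resources for
// files it will never see closed.
public class PageCloseSketch {
   interface ReplicationChannel {
      void sendClose(long pageId);
   }

   private final ReplicationChannel replica;

   public PageCloseSketch(ReplicationChannel replica) {
      this.replica = replica;
   }

   public void closePage(long pageId, AutoCloseable file) throws Exception {
      try {
         file.close();
      } finally {
         replica.sendClose(pageId);   // never skipped, even if close() throws
      }
   }
}
```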
Notice that I'm keeping the old argument here as hidden.
As I keep using this myself, my brain always goes to ./artemis queue stat --sleep,
so I think people would feel more comfortable if the parameter was renamed.
In this commit I'm storing a binding record with the address-settings for the correct size.
This also validates eventual merges of the AddressSettings in the same namespace.
Due to https://github.com/apache/maven-apache-parent/pull/188 the
property `maven.compiler.release` is now being set, which precludes
exporting and using any internal Java classes. Therefore this commit
removes references to `--add-exports` from the build, switches to
reflection, and adds `--add-opens` to the runtime JVM parameters.
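A generic sketch of the reflection pattern this refers to, with illustrative names only:

```java
import java.lang.reflect.Method;

// Instead of compiling directly against a JDK-internal class (which needed
// --add-exports at build time), the internal member is reached via reflection,
// and --add-opens is passed to the runtime JVM so the reflective access is
// permitted.
public class ReflectiveAccessSketch {
   public static Object invokeInternal(Object target, String methodName) throws Exception {
      Method method = target.getClass().getDeclaredMethod(methodName);
      method.setAccessible(true);   // requires --add-opens for the owning module/package
      return method.invoke(target);
   }
}
```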
This is a list of improvements done as part of this commit / task:
* Page Transactions on the mirror target are now optional.
If the mirror was interrupted while the target destination was paging, duplicate detection would be ineffective unless you used page transactions.
* Users can now configure the ack manager retry intervals (a sketch follows this list).
Say you need some time to remove a consumer from a target mirror. Its delivering references would prevent acks from happening. You can allow larger retry intervals and more retries by tinkering with the ack manager retry parameters.
* AckManager is now restarted independently of incoming acks.
The AckManager was only restarted when new acks were coming in. If you stopped receiving acks on a target server and restarted that server with pending acks, those acks would never be retried. The AckManager is now restarted as soon as the server is started.
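A minimal sketch of the retry idea referenced above, using hypothetical parameters rather than the broker's actual configuration keys:

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Illustrative only: a bounded number of ack attempts spaced by a configurable
// interval, so acks blocked by delivering references get another chance once
// the consumer is gone.
public class AckRetrySketch {
   private final ScheduledExecutorService scheduler;
   private final int maxAttempts;       // configurable number of retries
   private final long retryIntervalMs;  // configurable spacing between retries

   public AckRetrySketch(ScheduledExecutorService scheduler, int maxAttempts, long retryIntervalMs) {
      this.scheduler = scheduler;
      this.maxAttempts = maxAttempts;
      this.retryIntervalMs = retryIntervalMs;
   }

   public void retryAck(BooleanSupplier tryAck, int attempt) {
      if (tryAck.getAsBoolean() || attempt >= maxAttempts) {
         return;   // either acked successfully or retries are exhausted
      }
      scheduler.schedule(() -> retryAck(tryAck, attempt + 1), retryIntervalMs, TimeUnit.MILLISECONDS);
   }
}
```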
When creating internal temporary queues for the federation control links and the
events link, we should use a structured naming convention to make it easier to
configure security for the federation user: all internal names fall under a root
prefix that can be used to grant read and write access to the federation user.
This change allows security on the wildcarded address
"$ACTIVEMQ_ARTEMIS_FEDERATION.#". It also adds some further restrictions to
federation resources and adds support for wildcarding '$' prefixed addresses.
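A rough sketch of the naming idea, with a hypothetical helper; only the root prefix comes from this change:

```java
// Because every internal federation name is built under one root prefix, a
// single wildcarded security match such as "$ACTIVEMQ_ARTEMIS_FEDERATION.#"
// can grant the federation user read and write access to all of them.
public class FederationNamingSketch {
   public static final String FEDERATION_PREFIX = "$ACTIVEMQ_ARTEMIS_FEDERATION.";

   public static String controlLinkQueueName(String federationName, String uniqueId) {
      return FEDERATION_PREFIX + federationName + ".control." + uniqueId;
   }

   public static String eventsLinkQueueName(String federationName, String uniqueId) {
      return FEDERATION_PREFIX + federationName + ".events." + uniqueId;
   }

   public static boolean coveredByFederationMatch(String addressName) {
      // equivalent to falling under the "$ACTIVEMQ_ARTEMIS_FEDERATION.#" match
      return addressName.startsWith(FEDERATION_PREFIX);
   }
}
```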
The resume delivery on the AMQP large message writer is using runNow.
When flow control is paused and then resumed, runNow makes the first read happen inline on the thread that is resuming the other deliveries.
It would be better not to run the delivery within the same call, so the change here is simple: use connection.runLater instead of runNow.
I have seen this on a thread dump from a production server.
No semantic issues were encountered, but there was a theory that the Netty thread responsible for resuming could be kept busy with a delivery when it was not supposed to be.
Better to be safe in this case.
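A minimal sketch of the scheduling difference, assuming a plain Executor in place of the connection's executor:

```java
import java.util.concurrent.Executor;

// When flow control is lifted, the resumed large-message read is queued back
// onto the connection executor (the equivalent of connection.runLater) instead
// of running inline on the thread that is resuming the other deliveries.
public class ResumeDeliverySketch {
   private final Executor connectionExecutor;

   public ResumeDeliverySketch(Executor connectionExecutor) {
      this.connectionExecutor = connectionExecutor;
   }

   public void onFlowResumed(Runnable resumeRead) {
      // before: resumeRead.run() would execute inline (the runNow behaviour)
      connectionExecutor.execute(resumeRead);   // after: deferred, like runLater
   }
}
```

Deferring the read keeps the resuming thread free to finish waking up the remaining deliveries.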