max-disk-usage = how much of the disk we can use before the system blocks
global-max-size = how many bytes of memory messages may take before we enter the configured page mode
This also changes the default generated configuration to page mode, as that is more reliable for most systems.
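As a rough illustration of how the two limits differ (a sketch with assumed names, not broker internals): disk usage decides whether to block, while the in-memory total taken by messages decides whether to start paging.

    // Illustrative sketch only: names are assumptions, not broker internals.
    public class ResourceLimits {
        private final int maxDiskUsagePercent;   // max-disk-usage
        private final long globalMaxSizeBytes;   // global-max-size

        public ResourceLimits(int maxDiskUsagePercent, long globalMaxSizeBytes) {
            this.maxDiskUsagePercent = maxDiskUsagePercent;
            this.globalMaxSizeBytes = globalMaxSizeBytes;
        }

        // block producers once the disk is fuller than max-disk-usage allows
        public boolean shouldBlock(int usedDiskPercent) {
            return usedDiskPercent > maxDiskUsagePercent;
        }

        // enter the configured page mode once messages hold more memory than global-max-size
        public boolean shouldPage(long bytesUsedByMessages) {
            return bytesUsedByMessages > globalMaxSizeBytes;
        }
    }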
This avoids an issue where a broker would discover itself, causing unexpected behavior when using
core bridges to forward messages:
* Make the channel manager a singleton, ensuring that only one channel with a given name exists (see the sketch after this list)
* Ensure that messages are marked with NON_LOOPBACK to avoid receiving messages originating from
the broker itself
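For illustration, the singleton-per-name idea looks roughly like this (a sketch with assumed names, not the actual channel manager code):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch of "one channel per name"; not the real channel manager.
    public final class ChannelManager {
        private static final ChannelManager INSTANCE = new ChannelManager();
        private final Map<String, Object> channelsByName = new ConcurrentHashMap<>();

        private ChannelManager() { }

        public static ChannelManager getInstance() {
            return INSTANCE;
        }

        // returns the existing channel for this name, creating it only on first use
        public Object getOrCreateChannel(String name) {
            return channelsByName.computeIfAbsent(name, n -> createChannel(n));
        }

        private Object createChannel(String name) {
            // placeholder for the real channel construction (e.g. a JGroups channel)
            return new Object();
        }
    }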
javax.json is a newer JSR, but it has an ASF-compliant implementation, is pretty close to the original JSON.org API, and will support a standard annotation-based JSON-B solution at some point soon.
Updated the integration tests and removed JSON.org from the license.
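For reference, typical usage of the javax.json API that replaces the JSON.org calls (a small standalone example, not code from this change):

    import java.io.StringReader;
    import javax.json.Json;
    import javax.json.JsonObject;

    public class JsonExample {
        public static void main(String[] args) {
            // build a JSON object with the builder API
            JsonObject obj = Json.createObjectBuilder()
                    .add("name", "queue1")
                    .add("durable", true)
                    .build();

            // parse it back from its string form
            JsonObject parsed = Json.createReader(new StringReader(obj.toString())).readObject();
            System.out.println(parsed.getString("name"));
        }
    }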
Using array() is a bit dangerous, as it is an optional operation that not every
ByteBuffer implementation supports. This new method deals with the various
ByteBuffer implementations appropriately.
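The helper described here has roughly this shape (the actual method name and location in the codebase may differ): fall back to a bulk get when the buffer has no accessible backing array.

    import java.nio.ByteBuffer;

    public final class ByteBufferUtil {
        private ByteBufferUtil() { }

        // Copies the remaining bytes out of any ByteBuffer, whether or not it is
        // backed by an accessible array (heap vs. direct/read-only buffers).
        public static byte[] toBytes(ByteBuffer buffer) {
            if (buffer.hasArray() && buffer.arrayOffset() == 0
                    && buffer.position() == 0
                    && buffer.array().length == buffer.remaining()) {
                // fast path: the backing array already holds exactly the bytes we need
                return buffer.array();
            }
            // general path: bulk-copy from the buffer's current position
            byte[] bytes = new byte[buffer.remaining()];
            buffer.duplicate().get(bytes);
            return bytes;
        }
    }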
Implements a new feature for the broker whereby it may automatically create and
delete JMS topics that are not explicitly defined through the management API
or file-based configuration. A JMS topic is created in response to a sent
message or connected subscriber. The topic may subsequently be deleted when it
no longer has any subscribers. Auto-creation and auto-deletion can both be
turned on/off via an address-setting.
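Conceptually the lifecycle works like the sketch below (an illustration of the behaviour with assumed names, not the broker implementation): a topic comes into existence on first use and is removed once its last subscriber goes away.

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative model of auto-create/auto-delete; not the broker's actual classes.
    public class TopicLifecycle {
        private final Map<String, Set<String>> subscribersByTopic = new ConcurrentHashMap<>();

        // a send implicitly creates the topic
        public void onMessageSent(String topic) {
            subscribersByTopic.computeIfAbsent(topic, t -> ConcurrentHashMap.newKeySet());
        }

        // a new subscriber also implicitly creates the topic
        public void onSubscribe(String topic, String subscriberId) {
            subscribersByTopic.computeIfAbsent(topic, t -> ConcurrentHashMap.newKeySet())
                              .add(subscriberId);
        }

        // once the last subscriber is gone, the topic may be auto-deleted
        public void onUnsubscribe(String topic, String subscriberId) {
            subscribersByTopic.computeIfPresent(topic, (t, subs) -> {
                subs.remove(subscriberId);
                return subs.isEmpty() ? null : subs;  // returning null removes the entry
            });
        }

        public boolean exists(String topic) {
            return subscribersByTopic.containsKey(topic);
        }
    }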
The previous fix was breaking compatibility with older servers.
We need to check the default address if an exception happened during the send (due to flow control or blocking).
Obfuscate the truststore password in TransportConfiguration.toString()
in the same way as the keystore password. The password will no longer be logged in
plain text when a bridge connects.
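The masking amounts to something like the following sketch (illustrative only; parameter names and the real toString() format may differ):

    import java.util.Map;
    import java.util.TreeMap;

    public class MaskedToStringExample {
        public static void main(String[] args) {
            Map<String, Object> params = new TreeMap<>();
            params.put("keyStorePassword", "secret1");
            params.put("trustStorePassword", "secret2");
            params.put("port", 61616);

            StringBuilder sb = new StringBuilder("TransportConfiguration(");
            for (Map.Entry<String, Object> entry : params.entrySet()) {
                // never print password values in plain text
                Object value = entry.getKey().toLowerCase().contains("password")
                        ? "****" : entry.getValue();
                sb.append(entry.getKey()).append('=').append(value).append(' ');
            }
            sb.append(')');
            System.out.println(sb);
        }
    }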
https://issues.apache.org/jira/browse/ARTEMIS-524
I am keeping all the debug and trace logging I added while debugging this issue;
for that reason this commit may look larger than expected.
The fix is highlighted by the tests added to org.apache.activemq.artemis.tests.integration.client.PagingTest
- Added a thread pool executor that combines cached and fixed-size thread pooling.
It behaves like a cached thread pool in that it reuses existing threads and removes
idle threads after a timeout, but it limits the maximum number of threads in the pool and
queues additional requests instead of rejecting them (a sketch follows this list).
- Changed existing code to use the new thread pool instead of a fixed-size thread pool in
all places that are configured with a client thread pool size.
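A minimal sketch of that combination using plain java.util.concurrent (assumed names; the actual Artemis executor differs in detail): a ThreadPoolExecutor whose queue refuses direct offers so the pool grows first, with a rejection handler that then queues the overflow.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Sketch only; the real executor adds bookkeeping around this idea.
    public final class CachedBoundedPool {
        private CachedBoundedPool() { }

        public static ThreadPoolExecutor create(int maxThreads, long keepAliveSeconds) {
            // A queue that refuses direct offers, so the executor first grows
            // the pool up to maxThreads before queueing work.
            LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
                @Override
                public boolean offer(Runnable task) {
                    return false;
                }
            };
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    maxThreads, maxThreads, keepAliveSeconds, TimeUnit.SECONDS, queue);
            pool.allowCoreThreadTimeOut(true); // reclaim idle threads after the timeout
            pool.setRejectedExecutionHandler((task, executor) -> {
                try {
                    // pool is saturated: enqueue the task instead of rejecting it
                    // (put() bypasses the overridden offer() above)
                    executor.getQueue().put(task);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            return pool;
        }
    }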
When NettyConnection.closeSSLAndChannel is called from the EventLoop,
waiting for the SSL handler to close will always take 10 seconds, because
the sslCloseFuture comes from a task that is scheduled on the same
EventLoop. Since the EventLoop is a single-threaded executor, that task
will only be executed after the current task has completed.
Due to the single-threaded nature of the EventLoop, all blocking calls
should be avoided. Therefore, I removed both awaitUninterruptibly calls
when the closing happens within an event loop task. As a side effect,
the annoying server log timeout warnings will go away.
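The shape of the fix, as a simplified sketch (class and variable names are assumptions based on the description above, not the exact patch):

    import java.util.concurrent.TimeUnit;

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelFuture;

    public final class SslCloseSketch {
        private SslCloseSketch() { }

        // Only block on the SSL close future when we are NOT running on the
        // channel's own event loop; a blocking wait issued from the event loop
        // can never complete and just burns the 10 second timeout.
        public static void closeSslAndChannel(Channel channel, ChannelFuture sslCloseFuture) {
            if (!channel.eventLoop().inEventLoop()) {
                // application thread: it is safe to block here
                sslCloseFuture.awaitUninterruptibly(10, TimeUnit.SECONDS);
                channel.close().awaitUninterruptibly(10, TimeUnit.SECONDS);
            } else {
                // event loop thread: close asynchronously, never block
                channel.close();
            }
        }
    }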
Changed the ActiveMQClient interface to expose the global thread pools as
ExecutorService and ScheduledExecutorService interfaces. This is necessary
to allow injecting thread pool implementations that are not based on
ThreadPoolExecutor or ScheduledThreadPoolExecutor.
1. Changed public fields in ActiveMQClient to private and added getters.
Exposing the thread pool size fields allows them to be modified in undesired ways.
I made these fields private and added corresponding getter methods.
In addition, I renamed the field 'globalThreadMaxPoolSize'
to 'globalThreadPoolSize' to be more consistent with the
'globalScheduledThreadPoolSize' field name.
I also adapted some tests to always call clearThreadPools after
the thread pool size configuration has been changed.
2. Protect against injecting null as thread pools.
ActiveMQClient.injectPools allowed null as injected thread pools.
The effect was that internal thread pools were created,
but not shut down correctly.
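Usage then looks roughly like this (injectPools and clearThreadPools are the methods named above; exact signatures assumed):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;

    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;

    public class InjectPoolsExample {
        public static void main(String[] args) {
            // externally managed pools, e.g. provided by a container
            ExecutorService globalPool = Executors.newCachedThreadPool();
            ScheduledExecutorService scheduledPool = Executors.newScheduledThreadPool(5);

            // both arguments must be non-null; null injection is no longer allowed
            ActiveMQClient.injectPools(globalPool, scheduledPool);

            // ... create locators / sessions ...

            // release the client's references; the injected pools stay owned by the caller
            ActiveMQClient.clearThreadPools();
            globalPool.shutdown();
            scheduledPool.shutdown();
        }
    }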
Adapted the code to handle -1 correctly to configure an unbounded thread pool.
In addition, I removed the capability to reconfigure the max pool size
of existing thread pools, because the global thread pool can either be
an unbounded cached pool or a bounded fixed-size pool.
These two kinds of pool also differ in the blocking queue they use and
therefore cannot be converted into each other.
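For reference, the two pool flavours and their different queues, built with plain java.util.concurrent (not Artemis code):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class PoolFlavours {
        // -1 (unbounded): a cached pool backed by a SynchronousQueue; threads
        // are created on demand and reclaimed after 60s idle
        static ExecutorService unbounded() {
            return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                    60L, TimeUnit.SECONDS, new SynchronousQueue<>());
        }

        // a positive size: a fixed pool backed by an unbounded LinkedBlockingQueue
        static ExecutorService bounded(int size) {
            return new ThreadPoolExecutor(size, size,
                    0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        }
    }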
Communication between nodes will fail under certain topologies.
JGroups has something called JForkChannel that could be used on container systems
and injected into Artemis.
For some reason that channel cannot be reused for more than one channel per VM,
and it can never be closed.
I am keeping the trace logs I used to debug this issue in case anything similar to this happens again.
https://issues.apache.org/jira/browse/ARTEMIS-484
The file copy after the initial synchronization of large messages was broken.
In this commit we fix how the buffer is cleared before each read, since
a previously unfinished body read would leave the buffer dirty.
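The essence of the fix, as a simplified sketch (not the actual replication code): reset the buffer before every read so leftovers from an unfinished body read cannot leak into the next chunk.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.ReadableByteChannel;
    import java.nio.channels.WritableByteChannel;

    public final class CopySketch {
        private CopySketch() { }

        // Simplified sketch of copying a large-message file chunk by chunk.
        public static void copy(ReadableByteChannel in, WritableByteChannel out) throws IOException {
            ByteBuffer buffer = ByteBuffer.allocate(8 * 1024);
            while (true) {
                // reset position/limit before every read; without this a previous
                // partial body read leaves stale bytes ("dirty" buffer) behind
                buffer.clear();
                if (in.read(buffer) < 0) {
                    break;
                }
                buffer.flip();
                while (buffer.hasRemaining()) {
                    out.write(buffer);
                }
            }
        }
    }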
I am also keeping many of the trace statements I added to debug this issue, so they will
be useful if anything like this happens again.