If the broker fails to decode any packet from the buffer, it should
treat it as a critical bug and disconnect immediately.
Currently the broker only logs an error message.
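The intended behavior, as a hypothetical sketch ('decoder', 'handler', 'logger' and 'connection' stand in for the real Artemis types; only the shape of the fix is shown):

    try {
       Packet packet = decoder.decode(buffer);
       handler.handlePacket(packet);
    } catch (Throwable t) {
       // A decode failure means the wire stream is corrupt. Logging alone leaves
       // the connection in an undefined state, so fail fast and disconnect.
       logger.error("Failed to decode packet from buffer", t);
       connection.fail(new ActiveMQException(ActiveMQExceptionType.INTERNAL_ERROR, "invalid packet"));
    }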
The following changes are made to support Epoll.
Refactored and renamed SharedNioEventLoopGroup to SharedEventLoopGroup, making it generic so it can be reused for both NIO and Epoll.
Added support and toggles for Epoll in NettyAcceptor and NettyConnector (falling back to NIO if Epoll cannot be loaded).
Removed PartialPooledByteBufAllocator from the code; it caused bad addresses when using the native transport and is no longer needed (see the JIRA discussion).
New Connector Properties:
useEpoll - toggles whether Epoll is used; default true (with graceful fallback to NIO)
remotingThreads - same behaviour as nioRemotingThreads; the previous property is deprecated.
useGlobalWorkerPool - same behaviour as useNioGlobalWorkerPool; the old property is deprecated.
New Acceptor Properties:
useEpoll - toggles whether Epoll is used; default true (with graceful fallback to NIO)
useGlobalWorkerPool - same behaviour as useNioGlobalWorkerPool, but for Epoll.
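For illustration, the new properties can be passed as transport parameters through the core API; a minimal sketch (the parameter names are the ones listed above):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.activemq.artemis.api.core.TransportConfiguration;
    import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;

    public class EpollConnectorExample {

       public static void main(String[] args) {
          Map<String, Object> params = new HashMap<>();
          params.put("useEpoll", true);            // default; falls back to NIO if Epoll cannot load
          params.put("remotingThreads", 8);        // replaces the deprecated nioRemotingThreads
          params.put("useGlobalWorkerPool", true); // replaces the deprecated useNioGlobalWorkerPool

          TransportConfiguration connector =
                new TransportConfiguration(NettyConnectorFactory.class.getName(), params);
          System.out.println(connector);
       }
    }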
This closes #1093
As part of my AMQP refactoring, the broker shouldn't rely on application properties
for any broker-semantic changes on delivery.
I am removing all access to those now, so we can deal with this properly post 2.0.0.
With this we can send and receive messages in their raw format,
without requiring conversion to Core.
- MessageImpl and ServerMessage are removed as part of this
- AMQPMessage and CoreMessage will carry the specialized message format for each protocol
- The protocol manager is now responsible for sending the message
- The message will provide an encoder for the journal and paging (roughly sketched below)
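A rough sketch of the shape this implies; the interface and method names below are illustrative assumptions, not the actual API:

    import io.netty.buffer.ByteBuf;

    // Illustrative only: each protocol keeps its message in native form and
    // supplies its own persistence encoding for the journal and paging.
    interface ProtocolMessage {

       // bytes the journal/page record will need
       int getPersistSize();

       // write the raw protocol bytes, with no conversion to Core
       void persist(ByteBuf targetBuffer);
    }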
There are some operations in ActiveMQServerControl that don't have
@Parameter annotations. That makes clients like JBoss JON fail
to invoke those operations.
Also, in AddressControl there is a typo in sendMessage: the second
parameter's name should be 'type', not 'headers'.
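The fix amounts to annotating every operation argument, roughly as below (signature abridged; the attributes follow the management API's @Parameter annotation):

    import java.util.Map;

    import org.apache.activemq.artemis.api.core.management.Operation;
    import org.apache.activemq.artemis.api.core.management.Parameter;

    public interface AddressControlSketch {

       @Operation(desc = "Sends a message to this address")
       String sendMessage(@Parameter(name = "headers", desc = "the message headers") Map<String, String> headers,
                          @Parameter(name = "type", desc = "the message type") int type, // was misnamed 'headers'
                          @Parameter(name = "body", desc = "the message body") String body) throws Exception;
    }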
When HTTP Upgrade is enabled, update Netty's pipeline only after the
HTTP Upgrade handshake is successful *and* the trailing
EMPTY_LAST_CONTENT is received.
Otherwise, this EMPTY_LAST_CONTENT is handled by
ActiveMQChannelHandler, which only expects ByteBuf messages.
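A minimal sketch of the handler behavior described above (not the actual Artemis class):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.handler.codec.http.HttpResponse;
    import io.netty.handler.codec.http.HttpResponseStatus;
    import io.netty.handler.codec.http.LastHttpContent;

    class HttpUpgradeSketchHandler extends ChannelInboundHandlerAdapter {

       private boolean handshakeComplete;

       @Override
       public void channelRead(ChannelHandlerContext ctx, Object msg) {
          if (msg instanceof HttpResponse) {
             handshakeComplete = HttpResponseStatus.SWITCHING_PROTOCOLS
                   .equals(((HttpResponse) msg).status());
          } else if (msg instanceof LastHttpContent) {
             if (handshakeComplete) {
                // Swallow EMPTY_LAST_CONTENT here rather than letting it reach
                // ActiveMQChannelHandler, and only now hand over the pipeline.
                ctx.pipeline().remove(this);
                // ... install the ActiveMQ channel handlers at this point ...
             }
          } else {
             ctx.fireChannelRead(msg);
          }
       }
    }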
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-963
* Fix the isEquivalent() method to take the activemqServerName
property into account when httpUpgradeEnabled is true. Two ActiveMQ
servers hosted on the same app server may have the same host and port
(corresponding to the Web server's HTTP port); the activemqServerName
property is used to distinguish them.
* Iron out the HTTP upgrade handler so that the latch is always counted
down and the channel context is closed unless the handshake completed
successfully.
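The equivalence check then looks roughly like this (a sketch; field and key names are assumptions):

    import java.util.Map;
    import java.util.Objects;

    // Fields host, port, httpUpgradeEnabled and activemqServerName are assumed
    // to exist on the enclosing transport configuration.
    boolean isEquivalent(Map<String, Object> otherConfig) {
       if (!Objects.equals(host, otherConfig.get("host"))
             || !Objects.equals(port, otherConfig.get("port"))) {
          return false;
       }
       if (httpUpgradeEnabled) {
          // Several brokers can share the app server's single HTTP port,
          // so the server name must also match.
          return Objects.equals(activemqServerName, otherConfig.get("activemqServerName"));
       }
       return true;
    }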
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-931
* Do not offset ports for Netty connector/acceptor with http-upgrade
enabled.
* Pass the name of the ActiveMQ server in the HTTP request that
initiates the upgrade, so that the HTTP endpoint on the app server can
find the correct ActiveMQ broker to handle it.
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-803
max-disk-usage = how much of the disk can be used before the system blocks
global-max-size = how many bytes of memory messages may occupy before we enter the configured page mode
This also changes the default generated configuration to page mode, as that is more reliable for most systems.
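For illustration, both knobs can also be set programmatically; a sketch assuming the Configuration setters are named after the XML elements:

    import org.apache.activemq.artemis.core.config.Configuration;
    import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

    public class ResourceLimitsExample {

       public static void main(String[] args) {
          Configuration config = new ConfigurationImpl();
          config.setMaxDiskUsage(90);                 // block producers above 90% disk usage
          config.setGlobalMaxSize(100 * 1024 * 1024); // page to disk beyond 100 MiB of message memory
       }
    }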
This avoids an issue where a broker would discover itself, causing unexpected behavior when using
core bridges to forward messages:
* Make the channel manager a singleton, ensuring that only one channel with a given name exists
* Ensure that messages are marked with NON_LOOPBACK to avoid receiving messages originating from
the node itself
javax.json is a newer JSR, but it has an ASF-license-compliant implementation, is quite close to the original JSON.org API, and will support a standard annotation-based JSON-B solution at some point soon.
Updated the integration tests and removed JSON.org from the license.
Using array() is a bit dangerous, as it is an optional operation for
ByteBuffer implementations. This new method deals with the various
ByteBuffer implementations appropriately.
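The new method might look roughly like this (a sketch of the technique, not the exact code):

    import java.nio.ByteBuffer;

    // Copy the readable bytes out of any ByteBuffer, whether or not it is
    // backed by an accessible array (direct and read-only buffers are not).
    static byte[] byteBufferToBytes(ByteBuffer buffer) {
       if (buffer.hasArray() && buffer.arrayOffset() == 0
             && buffer.array().length == buffer.remaining()) {
          return buffer.array(); // fast path: the array is exactly the content
       }
       byte[] bytes = new byte[buffer.remaining()];
       buffer.duplicate().get(bytes); // duplicate() leaves the original position untouched
       return bytes;
    }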
Implements a new feature for the broker whereby it may automatically create and
delete JMS topics which are not explicitly defined through the management API
or file-based configuration. A JMS topic is created in response to a sent
message or connected subscriber. The topic may subsequently be deleted when it
no longer has any subscribers. Auto-creation and auto-deletion can both be
turned on/off via address-setting.
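A sketch of enabling both toggles programmatically; the setter names below mirror the auto-create-jms-topics / auto-delete-jms-topics elements but are assumptions:

    import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

    // 'server' is an assumed ActiveMQServer reference.
    AddressSettings settings = new AddressSettings()
          .setAutoCreateJmsTopics(true)  // create the topic on first send or subscribe
          .setAutoDeleteJmsTopics(true); // drop it once the last subscriber is gone
    server.getAddressSettingsRepository().addMatch("jms.topic.#", settings);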
The previous fix was breaking compatibility with older servers.
We need to check the default address if an exception happened during the send (due to flow control or a blocking condition).
Obfuscate the truststore password in TransportConfiguration.toString()
in the same way as the keystore password, so the password is not logged
in plain text when a bridge connects.
https://issues.apache.org/jira/browse/ARTEMIS-524
I am keeping all the debug and trace logging I added while debugging this issue;
for that reason this commit may look larger than expected.
The fix is highlighted by the tests added in org.apache.activemq.artemis.tests.integration.client.PagingTest.
- Added a thread pool executor that combines cached and fixed-size thread pooling
(see the sketch after this list). It behaves like a cached thread pool in that it
reuses existing threads and removes idle threads after a timeout, and it limits the
maximum number of threads in the pool, but it queues additional requests instead of
rejecting them.
- Changed existing code to use the new thread pool instead of a fixed-size thread pool in
all places that are configured with a client thread pool size.
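java.util.concurrent has no ready-made pool with exactly this behavior, so it must be assembled from parts. A minimal sketch of one common way to do it (not the actual Artemis implementation):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public final class CachedBoundedPool {

       public static ThreadPoolExecutor create(int maxThreads, long keepAliveSeconds) {
          // The queue refuses direct offers, which forces the executor to keep
          // spawning threads until maxThreads is reached.
          LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
             @Override
             public boolean offer(Runnable task) {
                return false;
             }
          };
          ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreads, maxThreads, keepAliveSeconds, TimeUnit.SECONDS, queue);
          pool.allowCoreThreadTimeOut(true); // idle threads are culled after the timeout
          // At maxThreads, "rejected" tasks are parked in the queue instead of failing.
          pool.setRejectedExecutionHandler((task, executor) -> {
             try {
                queue.put(task);
             } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
             }
          });
          return pool;
       }
    }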
When NettyConnection.closeSSLAndChannel is called from the EventLoop,
waiting for the SSL handler to close will always take 10 seconds, because
the sslCloseFuture belongs to a task scheduled on that same EventLoop.
Since the EventLoop is a single-threaded executor, that task can only
run after the current task completes.
Due to the single-threaded nature of the EventLoop, all blocking calls
on it should be avoided. Therefore, I removed both awaitUninterruptibly
calls when the closing happens within an event loop task. As a side
effect, the annoying server-log timeout warnings go away.
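The shape of the fix, roughly (a sketch; the actual Artemis method does more):

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelFuture;

    static void closeChannel(Channel channel) {
       boolean inEventLoop = channel.eventLoop().inEventLoop();
       ChannelFuture closeFuture = channel.close();
       if (!inEventLoop) {
          // Blocking is only safe from another thread; on the event loop the
          // close task cannot run until we return, so the wait would just
          // burn the full timeout.
          closeFuture.awaitUninterruptibly(10_000);
       }
    }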
Changed the ActiveMQClient interface to expose the global thread pools
through the ExecutorService and ScheduledExecutorService interfaces. This
is necessary to allow injecting thread pool implementations that are not
based on ThreadPoolExecutor or ScheduledThreadPoolExecutor.
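With the wider interface types, any ExecutorService can be injected; for example (injectPools is the entry point mentioned below, the concrete pools here are stand-ins):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;

    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;

    public class InjectPoolsExample {

       public static void main(String[] args) {
          // A work-stealing pool is an ExecutorService but not a
          // ThreadPoolExecutor, which the old, narrower API required.
          ExecutorService global = Executors.newWorkStealingPool();
          ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(5);
          ActiveMQClient.injectPools(global, scheduled);
       }
    }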
1. Changed public fields in ActiveMQClient to private and added getters.
Exposing the thread pool size fields allowed them to be modified in
undesired ways, so I made them private and added corresponding getter
methods. In addition, I renamed the field 'globalThreadMaxPoolSize'
to 'globalThreadPoolSize' for consistency with the
'globalScheduledThreadPoolSize' field name.
I also adapted some tests to always call clearThreadPools after
the thread pool size configuration has been changed, as shown below.
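In test code that pattern looks like this (the setter name is an assumption; clearThreadPools is the method referenced above):

    // Resize the global pools, then drop the old ones so the new sizes apply.
    ActiveMQClient.setGlobalThreadPoolProperties(30, 5); // assumed setter name
    ActiveMQClient.clearThreadPools();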
2. Protect against injecting null as thread pools.
ActiveMQClient.injectPools allowed null to be passed as the injected
thread pools. The effect was that internal thread pools were created
but not shut down correctly.
Adapted the code to handle -1 correctly as a request for an unbounded
thread pool. In addition, I removed the capability to reconfigure the
max pool size of an existing thread pool, because the global thread pool
can be either an unbounded cached pool or a bounded fixed-size pool.
These two kinds of pool also differ in the blocking queue they use and
therefore cannot be converted into each other.
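The incompatibility mirrors how the JDK itself builds the two kinds of pool; the queue choice differs, so one cannot be morphed into the other:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // -1 => unbounded: a cached pool hands tasks to threads via a SynchronousQueue
    ExecutorService unbounded = Executors.newCachedThreadPool();
    // n > 0 => bounded: a fixed-size pool buffers overflow in a LinkedBlockingQueue
    ExecutorService bounded = Executors.newFixedThreadPool(30);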
Communication between nodes will fail under certain topologies.
JGroups has something called JForkChannel that can be used on container
systems and injected into Artemis.
For some reason that channel cannot be reused for more than one channel
per VM, and it cannot ever be closed.
I am keeping the trace logs I used to debug this issue in case anything similar happens again.
https://issues.apache.org/jira/browse/ARTEMIS-484
The file copy after the initial synchronization of large messages was broken.
This commit fixes how the buffer is cleared before each read, since
a previously unfinished body read would leave the buffer dirty.
I am also keeping the many traces I added while debugging this issue, so they will
be useful if anything like this happens again.
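The essence of the fix (a sketch, not the actual replication code):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    static void copyBody(FileChannel source, FileChannel target, ByteBuffer buffer) throws IOException {
       int read;
       do {
          buffer.clear(); // reset position/limit so an unfinished previous read cannot leave the buffer dirty
          read = source.read(buffer);
          if (read > 0) {
             buffer.flip();
             while (buffer.hasRemaining()) {
                target.write(buffer);
             }
          }
       } while (read > 0);
    }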