Expose User associated with creating Queue on JMX QueueControl (as attribute)
Allow setting the user associated with creating the queue when it is configured in broker.xml (previously this was only possible when the queue was created over the wire).
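For illustration, a minimal JMX read of the new attribute; the ObjectName layout and the attribute name "User" are assumptions based on the description above, so adjust them to your broker:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueUserAttributeExample {
   public static void main(String[] args) throws Exception {
      // Standard JMX-over-RMI URL; host/port depend on your broker setup.
      JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
      try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
         MBeanServerConnection mbs = connector.getMBeanServerConnection();
         // ObjectName pattern for a 2.x QueueControl; broker/address/queue names are examples.
         ObjectName queue = new ObjectName(
            "org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,"
            + "address=\"exampleAddress\",subcomponent=queues,"
            + "routing-type=\"anycast\",queue=\"exampleQueue\"");
         // Read the user associated with creating the queue.
         System.out.println("User = " + mbs.getAttribute(queue, "User"));
      }
   }
}
```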
Revert "ARTEMIS-1545 Adding HornetQ 2.4.7 on the mesh to validate send-acks"
I'm reverting this as the testsuite is broken.
We will bring it back once it is worked out.
This reverts commit 8f5b7a1e73.
This reverts commit 9b982b3e30.
Add the connection ID to the trace logging output
dealing with reading/writing packets. This helps correlate
the packets read/written with an individual TCP connection.
OpenWire clients create consumers on advisory topics to receive
notifications. As a result there are internal queues created
on advisory topics. Those consumers shouldn't be exposed via
the management APIs which are used by the Console.
To fix that, the broker doesn't register any queues from
advisory addresses.
Also refactors the code to remove OpenWire-specific constants
from the AddressInfo class.
The temporary deadlock is avoided by removing 'synchronized' from
the ClientSessionImpl::getCredits method. As the method uses only
the producerCreditManager, only this object is guarded against
parallel access.
When the execution jumps out of the while loop,
it should check whether the blocking-call-timeout has expired or not.
If not, ActiveMQUnBlockedException should be thrown
instead of ActiveMQConnectionTimedOutException.
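A minimal sketch of the intended distinction (names and placement are illustrative; the real logic lives in the client's blocking-call path):

```java
import org.apache.activemq.artemis.api.core.ActiveMQConnectionTimedOutException;
import org.apache.activemq.artemis.api.core.ActiveMQUnBlockedException;

final class BlockingCallCheckSketch {
   // Called after leaving the wait loop without a response.
   static void failAfterWait(long startMillis, long blockingCallTimeout)
         throws ActiveMQConnectionTimedOutException, ActiveMQUnBlockedException {
      long elapsed = System.currentTimeMillis() - startMillis;
      if (elapsed >= blockingCallTimeout) {
         // The blocking-call-timeout really expired.
         throw new ActiveMQConnectionTimedOutException("Timed out waiting for response");
      }
      // Left the loop early (e.g. the connection was failed over):
      // report "unblocked" rather than a misleading timeout.
      throw new ActiveMQUnBlockedException("Unblocked while waiting for response");
   }
}
```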
The change to remove netty-all also removed the classifiers that add the
dependency on the Netty transport that includes the compiled native
library wrapper. Add those classifiers back in.
Added a new trustAll flag which supports trusting any certificate
when testing against a broker. This setting should not
be used in production and is strictly for testing.
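For example, a client connection relying on trustAll (the URL parameter name is taken from the description above; the port is an example):

```java
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class TrustAllExample {
   public static void main(String[] args) throws Exception {
      // Testing only: trustAll=true accepts any certificate, so no
      // truststore needs to be configured on the client.
      ServerLocator locator = ActiveMQClient.createServerLocator(
         "tcp://localhost:61617?sslEnabled=true&trustAll=true");
      ClientSessionFactory factory = locator.createSessionFactory();
      System.out.println("Connected over SSL without a truststore");
      factory.close();
      locator.close();
   }
}
```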
In a cluster, if a node is shut down (or crashes) while a
message is being routed to a remote binding, an internal
property may be added to the message and persisted. The
name of the property is like _AMQ_ROUTE_TOsf.my-cluster*.
If the node starts back up, it will load and reroute this message,
and if it goes to a local consumer, this property won't
get removed and will go to the client.
The fix is to remove this internal property before the message
is sent to any client.
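A sketch of the cleanup described above (the helper and its placement are illustrative; the real fix sits in the broker's delivery path):

```java
import java.util.ArrayList;
import org.apache.activemq.artemis.api.core.Message;
import org.apache.activemq.artemis.api.core.SimpleString;

final class InternalRoutePropertyCleanup {
   static void stripRouteProperties(Message message) {
      // Copy the names first so we don't mutate the set while iterating.
      for (SimpleString name : new ArrayList<>(message.getPropertyNames())) {
         if (name.toString().startsWith("_AMQ_ROUTE_TO")) {
            message.removeProperty(name);
         }
      }
   }
}
```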
Update Transformer to be able to handle initialization via properties (Map<String, String>).
Update Configuration to have a more specific transformer configuration type, and to take properties.
Support backward compatibility.
Add AddHeadersTransformer, which is a main use case and can also act as an example (sketched below).
Update Controls to expose the new property configuration.
Add test cases.
Update examples for the new transformer config style.
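A sketch of a property-initialized transformer in the new style, assuming the Transformer interface gains the init(Map<String, String>) hook described above; the class name is illustrative and it mirrors what AddHeadersTransformer does (stamping configured headers onto each message):

```java
import java.util.Map;
import org.apache.activemq.artemis.api.core.Message;
import org.apache.activemq.artemis.core.server.transformer.Transformer;

public class StampHeadersTransformer implements Transformer {

   private Map<String, String> headers;

   @Override
   public void init(Map<String, String> properties) {
      // Properties come straight from the transformer configuration.
      this.headers = properties;
   }

   @Override
   public Message transform(Message message) {
      headers.forEach(message::putStringProperty);
      return message;
   }
}
```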
- It is now possible to disable the TimedBuffer.
- This increases the default libaio maxAIO to 4k.
- The auto-tuning on the journal will use asynchronous writes to simulate what would happen on faster disks.
- If you set datasync=false on the CLI, the system will suggest mapped and disable the buffer timeout.
This closes #1436.
This commit supersedes #1436 since it now disables the timed buffer through the CLI.
Priority is stored as a byte in the ICoreMessage's map.
It is then stored in an int when converted to JMS (as the JMSPriority
header is an Integer).
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-1448
Add the adaptTransportConfiguration() method to the
ClientProtocolManagerFactory so that transport configurations used by
the ClientProtocolManager have an opportunity to adapt their transport
configuration.
This allows the HornetQClientProtocolManagerFactory to adapt the
transport configuration received from a remote HornetQ broker, replacing the
HornetQ-based NettyConnectorFactory with the Artemis-based one.
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-1431
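A simplified sketch of the hook (the interface here is illustrative; see ClientProtocolManagerFactory in the codebase for the real signature):

```java
import org.apache.activemq.artemis.api.core.TransportConfiguration;

// A protocol manager factory gets a chance to rewrite the transport
// configuration it receives, e.g. swapping the HornetQ
// NettyConnectorFactory class name for the Artemis-based one.
public interface AdaptingProtocolManagerFactorySketch {
   default TransportConfiguration adaptTransportConfiguration(TransportConfiguration tc) {
      return tc; // default: pass the configuration through unchanged
   }
}
```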
Delegate to the JDK SaslServer. Allow acceptor configuration of the supported mechanisms via saslMechanisms=<a,b>,
and allow the login config scope for krb5 to be configured via saslLoginConfigScope=x.
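For example, an embedded acceptor using the two new parameters (the mechanism list and login.config scope name are illustrative values):

```java
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

public class SaslAcceptorSketch {
   public static void main(String[] args) throws Exception {
      Configuration config = new ConfigurationImpl()
         .setPersistenceEnabled(false)
         .setSecurityEnabled(true);
      // saslMechanisms restricts the advertised mechanisms;
      // saslLoginConfigScope picks the login.config entry used for krb5.
      config.addAcceptorConfiguration("amqp",
         "tcp://0.0.0.0:5672?protocols=AMQP"
         + "&saslMechanisms=GSSAPI,ANONYMOUS"
         + "&saslLoginConfigScope=amqp-sasl-gssapi");
   }
}
```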
When calling a consumer to receive a message with a timeout
(receive(long timeout)), if the consumer's buffer is empty, it sends
a 'forced delivery' and then waits forever, expecting the server to
send back a 'forced delivery' message if the queue is empty.
If the connection is disconnected because the arriving 'forced
delivery' message is corrupted, this 'forced delivery' message
never gets to the consumer. After the session is reconnected,
the consumer never knows that and stays waiting.
To fix that we can send a 'forced delivery' to the server right
after the session is reconnected.
This replaces an executor on ServerSessionPacketHandler
with an actor.
This is to avoid creating a new Runnable per packet received.
Instead of creating a new Runnable, this uses a single static Runnable,
and the packet is sent as a message, which is handled by a listener.
Look at ServerSessionPacketHandler on this commit for more information on how it works.
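A minimal, self-contained sketch of the actor idea (the implementation in the commit is more involved): messages go onto a queue and one reusable task drains them through a single listener, so no per-packet Runnable is allocated.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

public class ActorSketch<T> {
   private final Queue<T> mailbox = new ConcurrentLinkedQueue<>();
   private final AtomicBoolean scheduled = new AtomicBoolean();
   private final Executor executor;
   private final Consumer<T> listener;
   private final Runnable drain = this::drain; // the single reusable task

   public ActorSketch(Executor executor, Consumer<T> listener) {
      this.executor = executor;
      this.listener = listener;
   }

   public void act(T message) {
      mailbox.add(message);
      if (scheduled.compareAndSet(false, true)) {
         executor.execute(drain);
      }
   }

   private void drain() {
      T message;
      while ((message = mailbox.poll()) != null) {
         listener.accept(message);
      }
      scheduled.set(false);
      // Re-check: a message may have arrived between the last poll and unset.
      if (!mailbox.isEmpty() && scheduled.compareAndSet(false, true)) {
         executor.execute(drain);
      }
   }
}
```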
Add Krb5SslLoginModule, which will populate a userPrincipal that can be mapped to roles independently.
Generalised the callback handlers to take a connection and pull certs or the peer principal based on the
callback. This bubbled up into an API change in the security store and security manager.
Core client with Netty connector and acceptor doing Kerberos:
jaas.doAs around the SSLEngine init such that the SSL handshake can do Kerberos ticket
generation and validation.
The Kerberos-authenticated user is then validated with the security manager before
being populated into the message userId.
The feature is enabled with the kerb5Config property. When lowercase, it is the
principal. With a leading uppercase char, it is the login.config entry to use.
The default id-cache-size is 20000 and the default
confirmation-window-size is 1MB. It turns out the 1MB
window is too small for the default id-cache-size.
To fix it we adjust the confirmation-window-size to 10MB. Also,
a test is added to guarantee this rule won't be broken if either
default value is changed in the future.
It fixes compatibility issues with JMS Core clients using the old address model, allowing the client to query JMS temporary queues too.
You would eventually see this issue when using older clients:
AMQ119019: Queue already exists
When a broken packet arrives at the client side it causes a decoding error.
Currently Artemis doesn't handle it properly. It should catch such
errors, disconnect the underlying connection, and log a proper
warning message.
Use ActiveMQDestination for subscription naming, fixing and aligning queue naming in the process.
The change is behind a configuration toggle so as to avoid causing any breaking changes for users not expecting them.
Added a wait-for-activation option to shared-store master HA policies.
This option is enabled by default to ensure unchanged server startup behavior.
If this option is enabled, ActiveMQServer.start() with a shared-store master server will not return
before the server has been activated.
If this option is disabled, start() will return after a background activation thread has been started.
The caller can use waitForActivation() to wait until the server is activated, or just check the current activation status.
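A sketch of the non-blocking flow from the caller's side (the timeout value is an example):

```java
import java.util.concurrent.TimeUnit;
import org.apache.activemq.artemis.core.server.ActiveMQServer;

public class BackgroundStartSketch {
   // Assumes wait-for-activation is disabled on the shared-store master policy.
   static void startInBackground(ActiveMQServer server) throws Exception {
      server.start(); // returns once the background activation thread is running
      // Block only when a live server is actually needed:
      if (!server.waitForActivation(30, TimeUnit.SECONDS)) {
         System.err.println("Not activated yet, state: " + server.getState());
      }
   }
}
```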
Wrap the host added to the HTTP request headers with IPV6Util.encloseHost
to ensure that load balancers that read the header will have a valid
IPv6 address.
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-1043
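For example (the bracketed output is what the header wrapping is meant to produce for IPv6 literals):

```java
import org.apache.activemq.artemis.utils.IPV6Util;

public class EncloseHostExample {
   public static void main(String[] args) {
      System.out.println(IPV6Util.encloseHost("::1"));       // expected: [::1]
      System.out.println(IPV6Util.encloseHost("localhost")); // unchanged: localhost
   }
}
```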
Use a `long` array for hourly counters instead of an `int` array.
Prevents overflow when the number of new messages (a `long`) is added.
Fixes one of the "Implicit narrowing conversion in compound assignment"
alerts on https://lgtm.com/projects/g/apache/activemq-artemis/alerts.
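The alert exists because Java compound assignment narrows silently; a self-contained illustration:

```java
public class NarrowingExample {
   public static void main(String[] args) {
      long newMessagesAdded = 3_000_000_000L; // exceeds Integer.MAX_VALUE

      int[] hourlyInts = new int[24];
      hourlyInts[0] += newMessagesAdded;      // compiles, silently truncates to int

      long[] hourlyCounters = new long[24];
      hourlyCounters[0] += newMessagesAdded;  // safe: no narrowing

      System.out.println(hourlyInts[0] + " vs " + hourlyCounters[0]);
   }
}
```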
When populate-validated-user = true, AMQP messages can cause exceptions.
This feature isn't particularly applicable to AMQP so this commit
eliminates the exception and leaves the AMQP messages untouched
even if populate-validated-user = true. In other words,
populate-validated-user + AMQP is not supported.
If the broker fails to decode any packet from the buffer, it should
treat it as a critical bug and disconnect immediately.
Currently the broker only logs an error message.
The following changes are made to support Epoll:
Refactored SharedNioEventLoopGroup into the renamed SharedEventLoopGroup to be generic (so we can reuse it for both NIO and Epoll).
Add support and toggles for Epoll in NettyAcceptor and NettyConnector (falling back to NIO if Epoll cannot be loaded).
Removal from the code of PartialPooledByteBufAllocator, which caused a bad address when doing native and is no longer needed - see the JIRA discussion.
New connector properties (example below):
useEpoll - toggles whether to use Epoll or not, default true (but we fall back to NIO gracefully)
remotingThreads - same behaviour as nioRemotingThreads. The previous property is deprecated.
useGlobalWorkerPool - same behaviour as useNioGlobalWorkerPool. The old property is deprecated.
New acceptor properties:
useEpoll - toggles whether to use Epoll or not, default true (but we fall back to NIO gracefully)
useGlobalWorkerPool - same behaviour as useNioGlobalWorkerPool but for Epoll.
This closes #1093
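For example, a connector URL using the new properties (values are illustrative):

```java
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class EpollConnectorExample {
   public static void main(String[] args) throws Exception {
      // useEpoll already defaults to true; it is set explicitly here for clarity.
      // On a non-Linux host (or if the native lib fails to load) the connector
      // falls back to NIO.
      ServerLocator locator = ActiveMQClient.createServerLocator(
         "tcp://localhost:61616?useEpoll=true&remotingThreads=4");
      locator.close();
   }
}
```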
As part of my refactoring on AMQP, the broker shouldn't rely on application properties
for any broker semantic changes on delivery.
I am removing any access to those now, so we can properly deal with this post 2.0.0.
With this we can send and receive messages in their raw format,
without requiring conversions to Core.
- MessageImpl and ServerMessage are removed as part of this
- AMQPMessage and CoreMessage will have the specialized message format for each protocol
- The protocol manager is now responsible for sending the message
- The message will provide an encoder for journal and paging
There are some operations in ActiveMQServerControl that don't have
@Parameter annotations. That makes clients like JBoss JON fail
to invoke those operations.
Also, in AddressControl there is a typo in sendMessage: the second
parameter's name should be 'type', not 'headers'.
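A sketch of the annotation style the missing operations need (the interface and operation here are illustrative, not the actual control):

```java
import java.util.Map;
import org.apache.activemq.artemis.api.core.management.Operation;
import org.apache.activemq.artemis.api.core.management.Parameter;

public interface ExampleControlSketch {
   // Without @Parameter, generic JMX clients such as JBoss JON cannot
   // name the arguments and may fail to invoke the operation.
   @Operation(desc = "Send a message to the address")
   String sendMessage(@Parameter(name = "headers", desc = "message headers") Map<String, String> headers,
                      @Parameter(name = "type", desc = "message type") int type,
                      @Parameter(name = "body", desc = "message body") String body) throws Exception;
}
```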
When HTTP Upgrade is enabled, update Netty's pipeline only after the
HTTP Upgrade handshake is successful *and* the trailing
EMPTY_LAST_CONTENT is received.
Otherwise, this EMPTY_LAST_CONTENT is handled by
ActiveMQChannelHandler, which is only expected to handle ByteBufs.
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-963
* Fix the isEquivalent() method to take into account the activemqServerName
property when httpUpgradeEnabled is true. Two ActiveMQ servers hosted on
the same app server may have the same host and port (corresponding to
the Web server HTTP port). The activemqServerName property is used to
distinguish them.
* Iron out the HTTP upgrade handler so that the latch is always counted down
and the channel context is closed unless the handshake completed
successfully.
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-931
* Do not offset ports for Netty connector/acceptor with http-upgrade
enabled.
* Pass the name of the ActiveMQ server to the HTTP request to initiate
the Upgrade so that the HTTP endpoint on the app server can find the
correct ActiveMQ broker that must handle the upgrade.
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-803
max-disk-usage = how much of the disk we can use before the system blocks
global-max-size = how many bytes we can take from memory for messages before we enter the configured page mode
This will also change the default created configuration into page mode, as that's more reliable for systems.
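The programmatic equivalents on the Configuration API (values are examples):

```java
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

public class LimitsSketch {
   public static void main(String[] args) {
      ConfigurationImpl config = new ConfigurationImpl();
      config.setMaxDiskUsage(90);                 // block once the disk is 90% full
      config.setGlobalMaxSize(100 * 1024 * 1024); // start paging after 100 MiB of messages
   }
}
```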
This avoids an issue where a broker would discover itself, causing unexpected behavior when using
core bridges to forward messages:
* Make the channel manager a singleton, ensuring that only one channel with a given name exists
* Ensure that messages are marked with NON_LOOPBACK to avoid receiving messages originating from
the broker itself