If message senders and receivers use different
wireformat.tightEncodingEnabled options, the broker will run into marshalling
problems. This is because when OpenWire messages are converted to
core messages, and later those core messages are converted back to OpenWire
messages, the broker uses a marshaller that comes with the connection
used to carry the messages.
For example, if a producer sends a message with "wireformat
.tightEncodingEnabled=false" and a receiver tries to receive it
with the same option set to 'true', it will never get the message, because the
broker will fail when it uses a "tight encoding" marshaller to
decode a 'loose encoded' message.
To fix the problem, we always use 'tight encoding' for internal
message converters.
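A minimal sketch of the mismatch, assuming a broker on localhost:61616, a queue named exampleQueue, and the ActiveMQ 5.x OpenWire JMS client on the classpath (URIs and names here are illustrative, not taken from the tests):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TightEncodingMismatchSketch {
        public static void main(String[] args) throws Exception {
            // Producer connection negotiates loose encoding.
            ConnectionFactory looseCf = new ActiveMQConnectionFactory(
                    "tcp://localhost:61616?wireFormat.tightEncodingEnabled=false");
            // Consumer connection negotiates tight encoding.
            ConnectionFactory tightCf = new ActiveMQConnectionFactory(
                    "tcp://localhost:61616?wireFormat.tightEncodingEnabled=true");

            Connection producerConn = looseCf.createConnection();
            Connection consumerConn = tightCf.createConnection();
            consumerConn.start();

            Session ps = producerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Session cs = consumerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            ps.createProducer(ps.createQueue("exampleQueue")).send(ps.createTextMessage("hello"));

            // Before the fix this receive could fail, because the internal converters
            // applied the marshaller tied to the other connection's encoding setting.
            Message received = cs.createConsumer(cs.createQueue("exampleQueue")).receive(5000);
            System.out.println("received: " + received);

            producerConn.close();
            consumerConn.close();
        }
    }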
By default, every OpenWire connection will create a queue
under the multicast address ActiveMQ.Advisory.TempQueue.
If an OpenWire client creates temporary queues, these queues
will fill up with messages for as long as the associated
OpenWire connection is alive, and the messages never get
consumed from the queues.
The reason is that advisory messages don't require
acknowledgement, so the messages stay in the queue.
Added an integration test to prove the issue and assert the fix.
Fix PersistentQueueBindingEncoding to return the value, not false.
Fix some method argument names to align with the class interface argument names.
Added a test case for cross-protocol JMSDeliveryMode, proving the issue and asserting the fix.
Added a fix to AmqpCoreConverter to ensure durability (JMSDeliveryMode) is retained.
A similar issue was spotted with JMSPriority as with JMSDeliveryMode, so it is fixed at the same time.
Added an extra test case for JMSPriority.
Added a fix for JMSPriority.
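A hedged sketch of the cross-protocol check, assuming a broker with AMQP on 5672 and core on 61616, the Qpid JMS and Artemis core JMS clients on the classpath, and a queue named exampleQueue (all illustrative):

    import javax.jms.*;

    public class CrossProtocolDurabilitySketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory amqpCf =
                    new org.apache.qpid.jms.JmsConnectionFactory("amqp://localhost:5672");
            ConnectionFactory coreCf =
                    new org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory("tcp://localhost:61616");

            Connection amqp = amqpCf.createConnection();
            Connection core = coreCf.createConnection();
            core.start();

            Session ps = amqp.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Session cs = core.createSession(false, Session.AUTO_ACKNOWLEDGE);

            MessageProducer producer = ps.createProducer(ps.createQueue("exampleQueue"));
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.setPriority(7);
            producer.send(ps.createTextMessage("hello"));

            Message received = cs.createConsumer(cs.createQueue("exampleQueue")).receive(5000);
            // Before the fix, conversion through AmqpCoreConverter could lose these,
            // leaving the defaults (NON_PERSISTENT delivery mode, priority 4).
            System.out.println("delivery mode: " + received.getJMSDeliveryMode());
            System.out.println("priority: " + received.getJMSPriority());

            amqp.close();
            core.close();
        }
    }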
Add support for updating queue configuration via reload at runtime, using the existing updateQueue method.
Add/extend unit test cases to cover reloading of queue configuration.
Instead of waiting to flush an executor,
I have added a method isFlushed() which simply reflects the
state of the OrderedExecutor.
In case another executor is provided (for tests), it delegates
to the normal executor flush.
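A minimal sketch of the idea, not the actual Artemis class: an executor wrapper that tracks pending tasks so callers can poll isFlushed() instead of blocking on a flush (names here are illustrative):

    import java.util.concurrent.Executor;
    import java.util.concurrent.atomic.AtomicInteger;

    // Counts in-flight tasks; isFlushed() is a cheap state check rather than a blocking flush.
    public final class FlushAwareExecutor implements Executor {
        private final Executor delegate;
        private final AtomicInteger pending = new AtomicInteger();

        public FlushAwareExecutor(Executor delegate) {
            this.delegate = delegate;
        }

        @Override
        public void execute(Runnable task) {
            pending.incrementAndGet();
            delegate.execute(() -> {
                try {
                    task.run();
                } finally {
                    pending.decrementAndGet();
                }
            });
        }

        // True when every submitted task has finished running.
        public boolean isFlushed() {
            return pending.get() == 0;
        }
    }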
Delegate to the JDK SaslServer. Allow acceptor configuration of the supported mechanisms via saslMechanisms=<a,b>,
and allow the login config scope for krb5 to be configured via saslLoginConfigScope=x.
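A hedged sketch of setting those acceptor parameters on an embedded broker; only the parameter names come from this change, the mechanism list and scope value are illustrative:

    import org.apache.activemq.artemis.core.config.Configuration;
    import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

    public class SaslAcceptorConfigSketch {
        public static void main(String[] args) throws Exception {
            Configuration config = new ConfigurationImpl();
            // Restrict the acceptor to the listed SASL mechanisms and point krb5 at a
            // specific login.config scope (example values, not defaults).
            config.addAcceptorConfiguration("amqp",
                    "tcp://0.0.0.0:5672?protocols=AMQP;saslMechanisms=GSSAPI,PLAIN;saslLoginConfigScope=amqp-sasl-gssapi");
        }
    }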
This replaces the executor on ServerSessionPacketHandler
with an actor.
This is to avoid creating a new Runnable per packet received.
Instead of creating a new Runnable, a single static runnable is used,
and the packet is sent as a message which is handled by a listener.
Look at ServerSessionPacketHandler in this commit for more information on how it works.
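A minimal sketch of that actor idea under assumed names (PacketActor, mailbox, and listener are illustrative, not the project's classes): packets are queued as plain messages and drained by one reusable runnable:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.Executor;
    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.function.Consumer;

    final class PacketActor<T> {
        private final Queue<T> mailbox = new ConcurrentLinkedQueue<>();
        private final AtomicBoolean scheduled = new AtomicBoolean();
        private final Executor executor;
        private final Consumer<T> listener;
        private final Runnable drainTask = this::drain;   // one reusable runnable, not one per packet

        PacketActor(Executor executor, Consumer<T> listener) {
            this.executor = executor;
            this.listener = listener;
        }

        // Called for every incoming packet; only schedules the drain task if it is not already running.
        void act(T packet) {
            mailbox.add(packet);
            if (scheduled.compareAndSet(false, true)) {
                executor.execute(drainTask);
            }
        }

        private void drain() {
            T packet;
            while ((packet = mailbox.poll()) != null) {
                listener.accept(packet);
            }
            scheduled.set(false);
            // A packet may have arrived between the last poll and the flag reset; reschedule if so.
            if (!mailbox.isEmpty() && scheduled.compareAndSet(false, true)) {
                executor.execute(drainTask);
            }
        }
    }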
Add Krb5SslLoginModule that will populate a userPrincipal which can be mapped to roles independently.
Generalised callback handlers to take a connection and pull certificates or the peer principal based on
the callback. This bubbled up into an API change in the security store and security manager.
If replication blocked anything on the journal,
the processing from clients would be blocked
and nothing would work.
As part of this fix I am using an executor on ServerSessionPacketHandler,
which will also scale better as the reader from Netty is freed immediately.
Core client with Netty connector and acceptor doing Kerberos:
JAAS doAs around SSLEngine init such that the SSL handshake can do Kerberos ticket
generation and validation.
The Kerberos-authenticated user is then validated with the security manager before
being populated into the message userId.
The feature is enabled with the kerb5Config property. When lowercase, it is the
principal; with a leading uppercase character, it is the login.config entry to use.
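A hedged sketch of the doAs wrapping; the login.config entry name "KrbSslClient" and the broker host/port are purely illustrative:

    import java.security.PrivilegedExceptionAction;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLEngine;
    import javax.security.auth.Subject;
    import javax.security.auth.login.LoginContext;

    public class KerberosSslEngineSketch {
        static SSLEngine createEngine() throws Exception {
            // Log in via JAAS so the Subject carries the Kerberos credentials.
            LoginContext login = new LoginContext("KrbSslClient");
            login.login();
            Subject subject = login.getSubject();

            // Initialise the SSLEngine inside doAs so the handshake can obtain and
            // validate Kerberos tickets on behalf of the authenticated subject.
            return Subject.doAs(subject, (PrivilegedExceptionAction<SSLEngine>) () -> {
                SSLEngine engine = SSLContext.getDefault().createSSLEngine("broker.example.com", 5500);
                engine.setUseClientMode(true);
                engine.beginHandshake();
                return engine;
            });
        }
    }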
The MAPPED journal refactoring includes:
- simplified lifecycle and logic (e.g. fixed file size with a single mmap memory region)
- support for the TimedBuffer to coalesce msyncs (via the Decorator pattern)
- TLAB pooling of direct ByteBuffers, like the NIO journal
- removal of old benchmarks and benchmark dependencies
When a large message is replicated to the backup, a pendingID is generated
when the large message is finished. This pendingID is generated by a
BatchingIDGenerator at the backup.
It is possible that a pendingID generated at the backup is a duplicate
of an ID generated at the live server.
This can cause a problem when a large message with a messageID that is
the same as another large message's pendingID is replicated and stored
in the backup's journal, and then a deleteRecord for the pendingID
is appended. If the backup becomes live and loads the journal, it will
drop the large message add record because there is a deleteRecord of
the same ID (even though it is the pendingID of another message).
As a result the client expecting it will never get this large message.
So in summary, the root cause is that the pendingIDs for large
messages are generated at the backup while the backup is not live.
The solution is that instead of the backup generating
the pendingID, they are all generated in advance
at the live server and replicated to the backup wherever needed.
The ID generator at the backup only works when the backup becomes live
(when it is properly initialized from the journal).
This method name would clash with ServiceComponent.
As the real meaning of this method is just to fail over,
I've renamed the method to avoid the clash with my next commit.
(I've done this in a separate commit as you may need to redo this
commit from scratch in other branches instead of dealing with lots of clashes on cherry-pick.)
Before sending of messages to server 0 begins, the test
should wait until the consumer is registered at RemoteQueueBindingImpl
on server 0. Otherwise some messages may not be rebalanced
to server 1.
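A generic sketch of that wait (the lookup helper and queue name below are hypothetical, not the project's test utilities):

    // Poll until server 0 sees the remote consumer before any message is sent.
    long deadline = System.currentTimeMillis() + 10_000;
    while (remoteConsumerCount(server0, "exampleQueue") == 0) {   // hypothetical lookup on the binding
        if (System.currentTimeMillis() > deadline) {
            throw new AssertionError("consumer never registered on server 0");
        }
        Thread.sleep(50);
    }
    // only now start producing to server 0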
Add extra configuration to address-settings to be able to
control / enable address and queue deletion by pattern,
rather than via a global toggle.
Add support in the reload logic to remove addresses
and/or queues if the address matches an address setting
where deletion is enabled.
Use ActiveMQDestination for subscription naming, fixing and aligning queue naming in the process.
The change is behind a configuration toggle to avoid causing any breaking changes for users not expecting it.
Adds headers AMQ_SCHEDULED_DELAY and AMQ_SCHEDULED_TIME to STOMP
protocol handling to allow for delayed and scheduled delivery of a
message. The AMQ_SCHEDULED_DELAY header brings forward the same option
from the 5.x broker, and the AMQ_SCHEDULED_TIME header adds a fixed
delivery time alternative to match that of AMQP and others.
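A hedged sketch of using the delay header over raw STOMP, assuming a STOMP acceptor on localhost:61613 and a destination named exampleQueue (a real client would also wait for the CONNECTED frame before sending):

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class StompScheduledSendSketch {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 61613)) {
                OutputStream out = socket.getOutputStream();
                // \0 is the NUL byte that terminates a STOMP frame.
                String connect = "CONNECT\naccept-version:1.2\nhost:localhost\n\n\0";
                String send = "SEND\n"
                        + "destination:exampleQueue\n"
                        + "AMQ_SCHEDULED_DELAY:60000\n"   // deliver roughly 60 seconds from now
                        // AMQ_SCHEDULED_TIME:<epoch-millis> could be used instead for an absolute time
                        + "\n"
                        + "hello, later\0";
                out.write(connect.getBytes(StandardCharsets.UTF_8));
                out.write(send.getBytes(StandardCharsets.UTF_8));
                out.flush();
            }
        }
    }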
Add a test case to prove the issue, and then ensure it works post fix.
Apply changes to the createQueueName logic to better handle the global case and fix the behaviour.
Create queues so names match the behaviour of the core client.
move classes and methods to their correct location to avoid cyclic dependencies between packages and classes.
ARTEMIS-904 Remove cyclic dependencies from artemis-cli
Added a wait-for-activation option to shared-store master HA policies.
This option is enabled by default to ensure unchanged server startup behavior.
If this option is enabled, ActiveMQServer.start() with a shared-store master server will not return
before the server has been activated.
If this option is disabled, start() will return after a background activation thread has been started.
The caller can use waitForActivation() to wait until the server is activated, or just check the current activation status.
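A hedged sketch of the non-blocking path, with wait-for-activation assumed disabled on the shared-store master policy; the embedded-server setup and the exact waitForActivation signature are assumptions:

    import java.util.concurrent.TimeUnit;
    import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
    import org.apache.activemq.artemis.core.server.ActiveMQServer;
    import org.apache.activemq.artemis.core.server.ActiveMQServers;

    public class NonBlockingStartSketch {
        public static void main(String[] args) throws Exception {
            ActiveMQServer server = ActiveMQServers.newActiveMQServer(new ConfigurationImpl());
            server.start();                                  // returns once the activation thread is started
            server.waitForActivation(30, TimeUnit.SECONDS);  // optionally block until the server is active
            System.out.println("server state: " + server.getState());
        }
    }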