PageCountPendingImpl was increasing the encode size without using its full allocation.
This caused issues on replication, as the encoded size is also used to determine the size of the packets.
However, the packets did not receive the full allocated data, causing missing packets on replication
and test failures.
This commit fixes the issue.
Adding new metrics for tracking message counts and sizes on a Queue.
This includes tracking metrics for pending, delivering and scheduled
messages. The paging store also tracks message size now.
Support exclusive consumers
Allow default address-level settings for exclusive consumers
Allow queue-level setting in broker.xml
Add the ability to set queue settings via Core JMS using the address string, similar to ActiveMQ 5.x
Allow the Core JMS client to define an exclusive consumer using address parameters (see the sketch below)
Add tests
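A rough sketch of the intended broker.xml usage; the default-exclusive-queue setting name and the queue-level attribute are assumptions here, so check the documentation generated by this change for the exact element names:

    <address-settings>
       <!-- assumed: make any matching queue exclusive by default -->
       <address-setting match="orders.#">
          <default-exclusive-queue>true</default-exclusive-queue>
       </address-setting>
    </address-settings>

    <addresses>
       <address name="orders.priority">
          <anycast>
             <!-- assumed: queue-level override -->
             <queue name="orders.priority" exclusive="true"/>
          </anycast>
       </address>
    </addresses>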
Make sure that if a bridge disconnects and there is no record in the topology that it uses the original bridge connector to reconnect.
Originally the live broker that disconnected was left in the topology; this broke quorum voting because when the vote happened all brokers, when asked, thought the target broker was still alive.
The fix for this was to remove the target live broker from the topology. Since the bridge reconnect logic relied on this in a non-HA environment to reconnect, this stopped working.
The fix now uses the original target connector (or backup) to reconnect in the case where the broker was actually removed from the cluster.
https://issues.apache.org/jira/browse/ARTEMIS-1654
When using the tool to import more than one large message
from an XML exported file, this utility class will create some
tmp files, one for each large message. However it only deletes
one of the tmp files; all the rest of the tmp files won't get
cleaned up.
HornetQClientProtocolManager is used to connect to HornetQ servers.
During reconnect, it sends a CheckFailoverMessage packet to the
server as part of reconnection. This packet is not supported by
the HornetQ server (existing releases), so it breaks backward
compatibility.
Also fixed a failover issue where a HornetQ NettyConnector's
ConnectorFactory is serialized to clients that cannot
instantiate it because of a ClassNotFoundException.
ActiveMQTestBase has been enhanced to expose the database storage configuration and to add specific JDBC HA configuration properties.
JdbcLeaseLockTest and NettyFailoverTests have been changed in order to make use of the JDBC configuration provided by ActiveMQTestBase.
JdbcNodeManager has been made restartable to allow failover tests to reuse it after a failover.
Test consistency between live and backup, especially on
a slow live.
The test uses MessagePersister::encode to simulate a slow IO condition.
After the live server starts, we send 5 messages with a delay (default 500ms),
then start the backup, wait until it is replicated, then send more messages
without delay. If all messages are sent successfully, the backup should
have the same messages as the live server. We assert the message count only.
A flag needs to be set when auto-creating an address so that the address
can be removed later if auto-delete is configured when creating a
subscription with MQTT.
We provide a feature to mask passwords in the configuration files.
However, passwords in the bootstrap.xml (when the console is
secured with HTTPS) cannot be masked. This enhancement has
been opened to allow passwords in the bootstrap.xml to be masked
using the built-in masking feature provided by the broker.
Also the LDAPLoginModule configuration (in login.config) has a
connection password attribute that also needs this mask support.
In addition the ENC() syntax is supported for password masking
to replace the old 'mask-password' flag.
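A rough sketch of how the masked values would be used once this lands; the CLI mask command and the attribute names are assumptions based on the existing masking feature:

Generate the masked value:

    ./artemis mask mySecretPassword

In bootstrap.xml (console secured with HTTPS), wrap the result in ENC():

    <web bind="https://localhost:8443" path="web"
         keyStorePath="${artemis.instance}/etc/keystore.jks"
         keyStorePassword="ENC(3a34fd21b82bf2a822fa49a8d8fa115d)">
       <app url="console" war="console.war"/>
    </web>

In login.config, the LDAPLoginModule connection password takes the same form:

    connectionPassword="ENC(3a34fd21b82bf2a822fa49a8d8fa115d)"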
Change all uses of Set<RoutingType> to EnumSet<RoutingType>.
Deprecate any old exposed interfaces but keep them for backward compatibility.
AddressInfo avoids an iterator on the getRoutingType hot path; likewise it can be avoided where a single RoutingType is passed in.
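For illustration only (this mirrors the real RoutingType enum rather than importing it), the idiom being switched to looks like this:

    import java.util.Arrays;
    import java.util.EnumSet;
    import java.util.HashSet;
    import java.util.Set;

    public class RoutingTypeSetExample {
       // stand-in for org.apache.activemq.artemis.api.core.RoutingType
       enum RoutingType { ANYCAST, MULTICAST }

       public static void main(String[] args) {
          // before: a general-purpose HashSet that hashes each enum constant
          Set<RoutingType> generic = new HashSet<>(Arrays.asList(RoutingType.ANYCAST));

          // after: EnumSet is backed by a bit mask, so contains() and iteration
          // on hot paths such as getRoutingTypes() stay cheap and allocation-free
          EnumSet<RoutingType> compact = EnumSet.of(RoutingType.ANYCAST, RoutingType.MULTICAST);

          System.out.println(generic + " " + compact);
       }
    }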
When an address is removed from the address manager, its linked addresses
also need to be removed if there are no more bindings for the address.
Also adds a null check on the bindings of linked addresses when a new
binding is added.
* Move byte util code into ByteUtil
* Re-use the new equals method in SimpleString
* Apply same pools/interners to client decode
* Create String to SimpleString pools/interners for property access via String keys (producer and consumer benefits)
* Lazy init the pools within the get methods of CoreMessageObjectPools to get the specific pool, to avoid having this scattered everywhere.
* Reduce SimpleString creation in conversion to/from core message methods with the JMS wrapper.
* Reduce SimpleString creation in conversion to/from Core in OpenWire, AMQP, MQTT.
Replace GenericSQLProvider and other implementations with a single
PropertySQLProvider that uses properties to define SQL queries.
SQL queries are loaded from the journal-sql.properties file.
Queries specific to a DB dialect can be specified by adding a suffix to
the key of the generic property.
For example, the generic property to create a file Table is:
create-file-table = CREATE TABLE %s (ID BIGINT AUTO_INCREMENT, ...)
This property can be customized for Derby by using the
create-file-table.derby property:
create-file-table.derby=CREATE TABLE %s (ID BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),...
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-1590
The test is using the wrong indices for the destinations it uses, so they
don't match the ones created in the test support class. Because the
code is now using the default routing type, the test fails when it tries
to send a message on a JMS Queue when the auto-created address defaults
to the multicast routing type.
Expose the user associated with creating a queue on the JMX QueueControl (as an attribute).
Allow setting the user to associate with queue creation when configured in broker.xml (previously the user could only be set if the queue was created over the wire).
These tests used to have wrong names, so they weren't executed by
Surefire during a `mvn test` run.
After enablement, the following tests are now failing:
* org.apache.activemq.artemis.tests.integration.cluster.ha.HAAutomaticBackupSharedStoreTest
* org.apache.activemq.artemis.tests.integration.ra.OutgoingConnectionNoJTATest
* org.apache.activemq.artemis.tests.unit.core.server.group.impl.SystemPropertyOverrideTest.testSystemPropertyOverride
When an OpenWire client uses the compression option to send messages
(jms.useCompression=true), the OpenWire client fails to receive them.
The reason is in OpenwireMessageConverter.toAMQMessage():
1. message.setContent() should be called after setting properties
(it will cause the compressed content to be decompressed before delivering to clients)
2. message.onSend() should not be called here (it should be used
by producers; if used here it changes the internal flags of the
message and causes receive to fail).
Revert "ARTEMIS-1545 Adding HornetQ 2.4.7 on the mesh to validate send-acks"
I'm reverting this as the testsuite is broken.
We will bring it back once it is worked out.
This reverts commit 8f5b7a1e73.
This reverts commit 9b982b3e30.
Unsubscribing from a topic in a clustered environment left open references to the core consumer. This patch properly closes the consumer, which results in correct removal of the consumer reference on a remote queue.
OpenWire clients create consumers on advisory topics to receive
notifications. As a result there are internal queues created
on advisory topics. Those consumers shouldn't be exposed via
the management APIs, which are used by the console.
To fix that, the broker doesn't register any queues from
advisory addresses.
Also refactors code to remove OpenWire-specific constants
from the AddressInfo class.
I'm doing an overall improvement on large message support for AMQP.
However this commit is just about a bug in the converter.
It will be moot after all the changes I'm making, but I would rather keep this separate
as a way to cherry-pick on previous versions eventually.
This test starts 2 servers and sends messages to
a queue until it enters paging state. Then
it changes the address max-size to -1, restarts
the 2 servers, and consumes all the messages.
It verifies that even if the max-size has changed,
all the paged messages will be depaged and consumed.
No stuck messages after restarting.
The test is there to guard against a case where messages
won't be depaged on server restart after the max-size
is changed to -1. This issue has been fixed in
master along with the fix for ARTEMIS-581, particularly
the changes to the method PagingStoreImpl.getMaxSize().
Server.stop is currently waiting for completions on sessions just because of test cases.
With the recent changes made to the executors this is not needed any longer.
Instead of flushing, we just need to make sure there are no more calls into
page executors as we stop the PageManager.
This will avoid any possible starvation or deadlocks here.
The MappedSequentialFile relies on the assumption that writers
won't exceed the maximum capacity of the file, leaving the JVM to crash otherwise.
This commit adds proper bounds checking on write operations (and position changes too)
in order to provide recoverable effects if such a scenario should occur.
In addition, minor fixes are provided to the Mapped and NIO SequentialFile::fill behaviour
to match the original contract.
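This is not the Artemis implementation, just a minimal sketch of the kind of check involved, assuming a fixed-capacity MappedByteBuffer:

    import java.nio.BufferOverflowException;
    import java.nio.MappedByteBuffer;

    final class BoundedMappedWriter {
       private final MappedByteBuffer mapped; // fixed capacity, mapped when the file is created

       BoundedMappedWriter(MappedByteBuffer mapped) {
          this.mapped = mapped;
       }

       void write(byte[] bytes, int offset, int length) {
          // throw a recoverable exception instead of letting a write run past
          // the mapped region and take down the JVM
          if (mapped.position() + length > mapped.capacity()) {
             throw new BufferOverflowException();
          }
          mapped.put(bytes, offset, length);
       }

       void position(int newPosition) {
          if (newPosition < 0 || newPosition > mapped.capacity()) {
             throw new IllegalArgumentException("position out of bounds: " + newPosition);
          }
          mapped.position(newPosition);
       }
    }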
- Added Wait.assert methods, which make it easier to assert on future conditions
- Moved Wait to artemis-junit; we are now using that module in the testsuite
Added a new trustAll flag which will support trusting any client
keystore when doing testing against a broker. This setting should not
be used in production and is strictly for testing.
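A hedged client-side sketch; trustAll is the flag added here, while the URL, port and remaining parameters are illustrative:

    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
    import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
    import org.apache.activemq.artemis.api.core.client.ServerLocator;

    public class TrustAllExample {
       public static void main(String[] args) throws Exception {
          // Testing only: accept whatever certificate the broker presents,
          // so no truststore has to be configured on the client side.
          ServerLocator locator = ActiveMQClient.createServerLocator(
                "tcp://localhost:61616?sslEnabled=true&trustAll=true");
          ClientSessionFactory factory = locator.createSessionFactory();
          factory.close();
          locator.close();
       }
    }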
Extend test cases in MessageTypesTest to cover Core to AMQP combinations for all JMSTypeTests
Fix ServerJMSBytesMessage to correctly return bodyLength
Remove unused/dead code
An OpenWire consumer is listed twice under the "consumers" tab.
First it correctly shows the requested queue consumer.
Second it shows a consumer from the multicast queue ActiveMQ.Advisory.
The second one is internal and should be hidden.
In a cluster, if a node is shut down (or crashes) while a
message is being routed to a remote binding, an internal
property may be added to the message and persisted. The
name of the property is like _AMQ_ROUTE_TOsf.my-cluster*.
If the node starts back up, it will load and reroute this message,
and if it goes to a local consumer, this property won't
get removed and will go to the client.
The fix is to remove this internal property before the message
is sent to any client.
Update Transformer to be able to handle initialization via properties (Map<String, String>).
Update Configuration to have a more specific transformer configuration type, and to take properties.
Support backward compatibility.
Add AddHeadersTransformer, which is a main use case and can also act as an example.
Update Controls to expose the new property configuration.
Add test cases.
Update examples for the new transformer config style (see the sketch below).
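A sketch of what the property-based style might look like on a divert; the package name and the header key/value pairs are assumptions for illustration:

    <diverts>
       <divert name="add-headers-divert">
          <address>orders</address>
          <forwarding-address>orders.audited</forwarding-address>
          <transformer>
             <class-name>org.apache.activemq.artemis.core.server.transformer.AddHeadersTransformer</class-name>
             <!-- passed to the transformer's init(Map<String, String>) -->
             <property key="audited" value="true"/>
             <property key="region" value="emea"/>
          </transformer>
       </divert>
    </diverts>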
- It is now possible to disable the TimedBuffer
- This increases the default libaio maxAIO to 4k
- The auto-tuning on the journal will use asynchronous writes to simulate what would happen on faster disks
- If you set datasync=false on the CLI, the system will suggest mapped and disable the buffer timeout
This closes #1436
This commit supersedes #1436 since it's now disabling the timed buffer through the CLI
Allows for JMS selectors on JMSCorrelationID as well as JMSXGroupID
and JMSXUserID along with some fixes to avoid an NPE case and fixes
to the conversion of AMQP MessageID and CorrelationID values when
doing cross protocol mappings. Adds new tests to cover more cases
of using the JMS selector with Qpid JMS and the AMQP test client.
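For example, a plain JMS consumer using such a selector (connection setup omitted, values illustrative):

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class CorrelationIdSelectorExample {
       static MessageConsumer createConsumer(Connection connection, Queue queue) throws Exception {
          Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
          // only receive the replies carrying the correlation id we are waiting for;
          // the same selector syntax now also works for JMSXGroupID and JMSXUserID
          return session.createConsumer(queue, "JMSCorrelationID = 'order-42'");
       }
    }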
If message senders and receivers use different
wireformat.tightEncodingEnabled options, the broker will hit a marshalling
problem. This is because when OpenWire messages are converted to
core messages, and later these core messages are converted back to OpenWire
messages, the broker uses a marshaller that comes with the connection
used to carry the messages.
For example, if a producer sends a message using the option "wireformat
.tightEncodingEnabled=false" and a receiver tries to receive it
using 'true' for the same option, it'll never get it because the
broker will fail to use a "tight encoding" marshaller to
decode a 'loose encoded' message.
To fix the problem, we always use 'tight encoding' for internal
message converters.
By default, every OpenWire connection will create a queue
under the multicast address ActiveMQ.Advisory.TempQueue.
If an OpenWire client creates temporary queues, these queues
will fill up with messages for as long as the associated
OpenWire connection is alive. It appears these messages
do not get consumed from the queues.
The reason is that advisory messages don't require
acknowledgement, so the messages stay in the queue.
Added an integration test to prove the issue and assert the fix.
Fix PersistentQueueBindingEncoding to return the value, not false.
Fix some method arg names to align with the class interface arg names.
Added a test case for cross-protocol JMSDeliveryMode, proving the issue and asserting the fix.
Added a fix to AmqpCoreConverter to ensure durability (JMSDeliveryMode) is retained.
A similar issue was spotted with JMSPriority as with JMSDeliveryMode, so it is fixed at the same time.
Added an extra test case for JMSPriority.
Added a fix for JMSPriority.
Add support to update Queue config via reload using existing updateQueue method at runtime.
Add/extend unit test cases to include testing reload of queue config.
Instead of waiting to flush an executor,
I have added a method isFlushed() which will just translate to the
state of the OrderedExecutor.
In the case another executor is provided (for tests), there's a delegate
into normal executors.
Delegate to the JDK SaslServer. Allow acceptor configuration of the supported mechanisms: saslMechanisms=<a,b>,
and allow the login config scope for krb5 to be configured via saslLoginConfigScope=x.
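A hedged acceptor sketch using the two parameters named above; the mechanism list and scope name are illustrative:

    <acceptors>
       <!-- offer only these SASL mechanisms and point krb5 at a named login.config scope -->
       <acceptor name="amqp">tcp://0.0.0.0:5672?protocols=AMQP;saslMechanisms=GSSAPI,PLAIN;saslLoginConfigScope=amqp-sasl-gssapi</acceptor>
    </acceptors>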
This is replacing an executor on ServerSessionPacketHandler
with an Actor.
This is to avoid creating a new Runnable per packet received.
Instead of creating a new Runnable, this will use a single static runnable,
and the packet will be sent as a message, which will be handled by a listener.
Look at ServerSessionPacketHandler in this commit for more information on how it works.
Add krb5sslloginmodule that will populate a userPrincipal that can be mapped to roles independently.
Generalised callback handlers to take a connection and pull certs or the peer principal based on the
callback. This bubbled up into an API change in the security store and security manager.
If replication blocked anything on the journal,
the processing from clients would be blocked
and nothing would work.
As part of this fix I am using an executor on ServerSessionPacketHandler,
which will also scale better as the reader from Netty would be fed immediately.
Core client with Netty connector and acceptor doing Kerberos:
jaas.doAs around SSLEngine init such that the SSL handshake can do Kerberos ticket
generation and validation.
The Kerberos-authenticated user is then validated with the security manager before
being populated into the message userId.
The feature is enabled with the kerb5Config property. When lowercase it is the
principal. With a leading uppercase char it is the login.config entry to use.
The MAPPED journal refactoring includes:
- simplified lifecycle and logic (e.g. fixed file size with a single mmap memory region)
- support for the TimedBuffer to coalesce msyncs (via the Decorator pattern)
- TLAB pooling of direct ByteBuffers like the NIO journal
- removal of old benchmarks and benchmark dependencies
When a large message is replicated to the backup, a pendingID is generated
when the large message is finished. This pendingID is generated by a
BatchingIDGenerator at the backup.
It is possible that a pendingID generated at the backup may be a duplicate
of an ID generated at the live server.
This can cause a problem when a large message with a messageID that is
the same as another large message's pendingID is replicated and stored
in the backup's journal, and then a deleteRecord for the pendingID
is appended. If the backup becomes live and loads the journal, it will
drop the large message add record because there is a deleteRecord of
the same ID (even though it is a pendingID of another message).
As a result the expecting client will never get this large message.
So in summary, the root cause is that the pendingIDs for large
messages are generated at the backup while the backup is not alive.
The solution is that instead of the backup generating
the pendingID, we make them all be generated in advance
at the live server and have them replicated to the backup wherever needed.
The ID generator at the backup only works when the backup becomes live
(when it is properly initialized from the journal).
This method name would clash with ServiceComponent.
As the real meaning of this method is just to failover,
I've renamed the method to avoid the clash with my next commit.
(I've done this as a separate commit as you may need to redo this
commit from scratch in other branches instead of getting lots of clashes on cherry-pick.)
Before the sending of messages to server 0 begins, the test
should wait until the consumer is registered at RemoteQueueBindingImpl
on server 0. Otherwise some messages may not be rebalanced
to server 1.
Add extra configuration to address-settings to be able to
control / enable address/queue deletion by pattern,
rather than a global toggle.
Add support in the reload logic to remove addresses
and/or queues if the address matches an address setting
where it is enabled (see the sketch below).
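A sketch of the intended address-settings usage, assuming the setting names config-delete-queues and config-delete-addresses with a FORCE value:

    <address-settings>
       <address-setting match="jobs.#">
          <!-- assumed: FORCE lets a reload drop matching queues/addresses that were
               removed from broker.xml, instead of the previous global behaviour -->
          <config-delete-queues>FORCE</config-delete-queues>
          <config-delete-addresses>FORCE</config-delete-addresses>
       </address-setting>
    </address-settings>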
Use ActiveMQDestination for subscription naming, fixing and aligning queue naming in the process.
The change is behind a configuration toggle so as to avoid causing any breaking changes for users not expecting it.
Adds headers AMQ_SCHEDULED_DELAY and AMQ_SCHEDULED_TIME to STOMP
protocol handling to allow for delayed and scheduled time of a
message. The AMQ_SCHEDULED_DELAY brings forward the same option
from the 5.x broker and the AMQ_SCHEDULED_TIME option adds a fixed
time of delivery alternative to match that of AMQP and others.
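For example, a STOMP SEND frame delaying delivery by one minute could look like this (destination and body are illustrative, ^@ is the NUL frame terminator):

    SEND
    destination:/queue/orders
    AMQ_SCHEDULED_DELAY:60000

    handle this in one minute^@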
Add a test case to prove the issue, and then ensure it works post-fix.
Apply changes to the logic of createQueueName to handle the global case better and fix the behaviour.
Create queues so the names are the same as the behaviour with the core client.
move classes and methods to their correct location to avoid cyclic dependencies between packages and classes.
ARTEMIS-904 Remove cyclic dependencies from artemis-cli
Added a wait-for-activation option to shared-store master HA policies.
This option is enabled by default to ensure unchanged server startup behavior.
If this option is enabled, ActiveMQServer.start() with a shared-store master server will not return
before the server has been activated.
If this option is disabled, start() will return after a background activation thread has been started.
The caller can use waitForActivation() to wait until the server is activated, or just check the current activation status.
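A minimal broker.xml sketch of the new option, based on the description above:

    <ha-policy>
       <shared-store>
          <master>
             <!-- default is true, keeping the existing blocking start() behaviour;
                  false lets start() return while activation runs on a background thread -->
             <wait-for-activation>false</wait-for-activation>
          </master>
       </shared-store>
    </ha-policy>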
Adds a test that validates that messages that are either lacking a
header or are set to be non-durable are not persisted and are not
recovered on broker restart.
Adding a new ActiveMQServerPlugin interface to support adding custom
behavior to the broker at certain events such as connection or session
creation.
https://issues.apache.org/jira/browse/ARTEMIS-898
When creating some AMQP resources (senders, receivers, etc) the broker
can return an error of 'failed' instead of the security error that is
expected in these cases. In the case of a receiver being created and
a security error happening the broker fails to send back a response
causing the client to hang waiting for an attach response.
Refactor the AMQP test suite grouping tests into more logical unit
tests and adding additional coverage in many areas. Adds some negative
validation tests to cover features that were only partially tested.
Brings in tests from ActiveMQ 5.x that were not yet ported to Artemis
to increase coverage and test scenarios previously seen to have issues
in the 5.x broker.
Improve tests that were failing sporadically due to not waiting for
broker stats to be updated after async calls were made.
Instead of going directly into backup mode within the shared-store
live activation, we just change the HA-policy to slave and return
to the caller - ActiveMQServerImpl.internalStart().
The caller will then handle the backup activation as usual
in a separate thread, such that EmbeddedJMS.start() can return.
Also added a related integration test.
sendMessage() may throw an ActiveMQException that causes a CNFE
at the management client. Also it should check whether the headers
in the message are null (to prevent an NPE).
The broker should support fully qualified queue names (FQQN)
as well as bare queue names. This means when clients access
a queue they have two equivalent ways to do so: one way
is by queue name and the other is by FQQN (i.e. address::qname).
Currently only receiving is supported.
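For example, a Core JMS consumer can already address one specific queue bound to an address via the FQQN form (broker URL and names illustrative):

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class FqqnReceiveExample {
       public static void main(String[] args) throws Exception {
          try (Connection connection =
                   new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection()) {
             Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
             // address::qname selects the news.europe.sport queue on the news.europe address
             Queue queue = session.createQueue("news.europe::news.europe.sport");
             MessageConsumer consumer = session.createConsumer(queue);
             connection.start();
             System.out.println(consumer.receive(5000));
          }
       }
    }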
On link attach we currently default our SenderSettleMode to MIXED which,
while legal, doesn't truly reflect what the client asked for. We instead
now update the link to reflect the mode requested by the client.
Also add some tests to ensure that we always return the
ReceiverSettleMode as FIRST since we don't support SECOND.
Adds some new AMQP protocol handling tests brought forward from
ActiveMQ 5.x as well as cleaning up some of the existing test
code to make adding some other tests easier.
This is fixing an issue introduced in 4b47461f03 (ARTEMIS-822).
The transactions were being looked up without the readLock, and some of the controls for the read and write lock
were broken after this.
Removing JmsTopicWildcardSendReceiveTest::testReceiveWildcardTopicMatchDoubleWildcard
According to bisect, this test was broken at 21b64b3e4f
and it contradicts the commit done at 21b64b3e4f.
So, this is being removed.
Add tests for this management operation with both core and AMQP encoded
messages. Also fix a few problems with the implementation like not
checking the passed-in headers for null and not counting messages
properly.
Fix the getUserID and getTimestamp methods in AMQPMessage to read and
return the correct values. Adds some tests to cover these cases and
cleans up some others.
Ensure that the header value for priority is read and returned in a form
that is scaled such that it won't cause an IndexOutOfBoundsException
from the QueueImpl priority array. Adds some additional testing for
message priority support.
When populate-validated-user = true AMQP messages can cause exceptions.
This feature isn't particularly applicable to AMQP so this commit
eliminates the exception and leaves the AMQP messages untouched
even if populate-validated-user = true. In other words,
populate-validated-user + AMQP is not supported.
If the broker fails to decode any packets from the buffer, it should
treat it as a critical bug and disconnect immediately.
Currently the broker only logs an error message.
When I added flow control, some tests that were using reflection started to fail.
Also as a precaution I'm using <= on the flow control low credit check