Add test cases
Add GroupSequence to Message Interface
Implement Support closing/reset group in queue impl
Update Documentation (copy from activemq5)
Change/Fix OpenWireMessageConverter to use a default of 0 if not set, for OpenWire, as per the documentation: http://activemq.apache.org/activemq-message-properties.html
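A minimal sketch of the defaulting rule above (a hypothetical helper over a plain property map, not the actual OpenWireMessageConverter code): an unset group sequence is read as 0.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch only: illustrates "default to 0 if not set" for the group sequence,
    // using a plain property map instead of the real OpenWire/Artemis message types.
    final class GroupSequenceDefault {
        static int groupSequence(Map<String, Object> properties) {
            Object value = properties.get("JMSXGroupSeq"); // standard JMS grouping property
            return value instanceof Integer ? (Integer) value : 0; // not set -> default to 0
        }

        public static void main(String[] args) {
            Map<String, Object> withSeq = new HashMap<>();
            withSeq.put("JMSXGroupSeq", 7);
            System.out.println(groupSequence(new HashMap<>())); // 0 - property absent
            System.out.println(groupSequence(withSeq));         // 7 - property present
        }
    }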
Implement custom LVQ Key and Non-Destructive in broker - protocol agnostic
Make the feature configurable via broker.xml, core APIs and ActiveMQServerControl
Add last-value-key test cases
Add non-destructive with lvq test cases
Add non-destructive with expiry-delay test cases
Update documents
Add new methods to support create, update with new attributes
Refactor to pass through queue attributes in client-side methods to reduce further method changes when adding new attributes in future and avoid methods with endless parameters. (note: in future this should probably be done server-side too)
Update existing test cases and fake impls for new methods/attributes
Add Concurrency Test to expose concurrency errors seen in logs.
Add fix to TypedProperties to ensure thread safety
Add forEach and forEachKey to provide a thread-safe way of iterating through keys and values without needing to duplicate the collection.
Add getMapNames method to remove code duplication and to ensure thread safety
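A rough sketch of the iteration idea (an illustrative stand-in, not the real TypedProperties class): forEach/forEachKey iterate under the same lock that guards mutation, so callers never need to copy the key set.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.BiConsumer;
    import java.util.function.Consumer;

    // Illustrative properties holder; all access is guarded by the same intrinsic
    // lock, so iteration is safe without duplicating the map.
    final class SafeProperties {
        private final Map<String, Object> map = new HashMap<>();

        synchronized void put(String key, Object value) {
            map.put(key, value);
        }

        synchronized void forEach(BiConsumer<String, Object> action) {
            map.forEach(action); // iterates while holding the lock, no copy needed
        }

        synchronized void forEachKey(Consumer<String> action) {
            map.keySet().forEach(action);
        }

        synchronized Set<String> getMapNames() {
            return new HashSet<>(map.keySet()); // single place that takes the defensive copy
        }

        public static void main(String[] args) {
            SafeProperties props = new SafeProperties();
            props.put("JMSXGroupID", "group-1");
            props.forEach((key, value) -> System.out.println(key + "=" + value));
        }
    }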
Further fix around network loss.
If the network is lost (split) and the slave activates, the config it uses when it initializes in initialisePart2 is stale for a period.
Add/Extend test to ensure address-setting change made on live preserved on slave (after activation)
Add/Extend test to ensure security-setting change made on live preserved on backup (after activation)
Extend test case to reproduce problem of client created queues being incorrectly removed on simple reload of config.
Add a flag/field to the queues created by configuration/broker.xml so we can correctly filter only queues created/managed by config.
Update listConfiguredQueues to use the new queue flag
Add Tests
Add implementation inline with other queue updatable settings.
Enhance tests to ensure queue is not destroyed during config change and messages in queue already are preserved
Revert previous fix
Keep original ConfigChangeTest
Apply new non-destructive fix.
Enhance tests to ensure messages in queues are not lost either on reload when running or when config is changed on restart (e.g. queue is not destroyed)
Any failure to deploy an address or queue will short-circuit the broker
initialization process preventing any other addresses or queues from
being deployed as well as other critical resources like acceptors, etc.
The ServerSessionPacketHandler has a close() callback handler which will
delete any pending large messages. However, there is a race where a
large message can be routed, then the close deletes the associated large
message, resulting in data loss.
First, QueueQuery should use address name for address settings
The name used for looking up address settings for a queue now uses the
address name if there is a local queue binding
Second, make sure the credits sent to the server are the correct value
Fix checkstyle
Avoid duplicated logic
Ability to filter and group
Instantiate SimpleString property key once
Get the property value via getObjectProperty to ensure all specially mapped properties, such as those in AMQPMessage, are returned
Avoid a custom string to represent null; instead rely on Java's representation "null" by using Objects.toString to get the string value of the property used to group by.
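For example (using a plain map in place of the real message property accessors), Objects.toString yields the JDK's own "null" text for a missing property, so no custom sentinel is needed when grouping:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Objects;

    public class GroupKeyExample {
        public static void main(String[] args) {
            Map<String, Object> properties = new HashMap<>(); // stands in for getObjectProperty lookups
            properties.put("color", "red");

            String present = Objects.toString(properties.get("color")); // "red"
            String missing = Objects.toString(properties.get("size"));  // "null" - Java's own representation

            System.out.println(present);
            System.out.println(missing);
        }
    }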
Separate the plugin interface by area, all extending a base interface.
Update code to check and call only plugins implementing specific interfaces.
The existing interface extends all the new interfaces for backwards compatibility, or for those who want simplicity and don't care about performance.
This commit adds support for tracking metrics for bridges for both
normal bridges and bridges that are part of a cluster. The two
statistics added in this commit are messages pending acknowledgement
and messages acknowledged but more can be added later.
Anonymous senders (those created without a target address) are not
blocked when max-disk-usage is reached. The cause is that when such
a sender is created on the broker, the broker doesn't check the
disk/memory usage and gives out the credit immediately.
Fix so that it only updates if not null, to avoid the user being unset from existing APIs.
This is similar to other values, which only change when not null.
This reverts commit c3fbd1b9e4.
Based on the discussion on the PR
https://github.com/apache/activemq-artemis/pull/2035 this shouldn't have
been merged. It's importing JMS-specific code into the core broker which
is something we've worked hard to eliminate in recent releases.
- Split protocols into individual chapters
- Reorganize summary to flow more logically
- Fill in missing parameters in configuration index
- Normalize spaces for ordered and unordered lists
- Re-wrap lots of text for readability
- Fix incorrect XML snippets
- Normalize table formatting
- Improve internal links with anchors
- Update content to reflect new address model
- Resized architecture images to avoid excessive white-space
- Update some JavaDoc
- Update some schema elements
- Disambiguate AIO & ASYNCIO where necessary
- Use URIs instead of Objects in code examples
Authentication failures are currently only logged for CORE clients.
This change puts the logging in a central location which all protocols
use for authentication so that authentication failures are logged for
all protocols.
Calling close multiple times on ServerConsumer can result in multiple
notifications being routed around the cluster. This causes cluster
topology info to become skewed, which affects a number of components
such as message redistribution and metrics, and can eventually cause OOM
should multiple queues be redistributing at the same time.
After sending pages, the thread holds the storage manager write lock
and sends the synchronization-finished packet (using the parent thread pool)
to the backup node. At the same time, the thread pool is full because its
threads are waiting for the storage manager read lock to write the page or
journal, leading to a replication start failure. Here we use the IO executor
to send the replication packet to fix the thread pool starvation problem.
Quorum voting is used by both the live and the backup to decide what to do if a replication connection is disconnected.
Basically, the server will request each live server in the cluster to vote as to whether it thinks the server it is replicating to or from is still alive.
You can also configure the time for which the quorum manager will wait for the quorum vote response.
Currently, the value is hardcoded as 30 sec. We should change this 30-second wait to be configurable.
When setting "default-max-consumers" in addressing setting in broker.xml
It has no effect on the "max-consumers" property on matching address queues.
Tests added as part of a previous commit
This closes #2107
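A sketch of the intended fallback (hypothetical class names, not the broker's actual queue-creation path): a queue created without an explicit max-consumers should pick up default-max-consumers from the matching address-setting.

    // Hypothetical types for illustration only; the real fix lives in the broker's
    // queue-creation code, not in classes with these names.
    final class AddressSettingsSketch {
        final int defaultMaxConsumers;
        AddressSettingsSketch(int defaultMaxConsumers) {
            this.defaultMaxConsumers = defaultMaxConsumers;
        }
    }

    final class QueueCreationSketch {
        // If the queue config did not set max-consumers, fall back to the address-setting.
        static int resolveMaxConsumers(Integer maxConsumersFromQueueConfig, AddressSettingsSketch settings) {
            return maxConsumersFromQueueConfig != null
                    ? maxConsumersFromQueueConfig
                    : settings.defaultMaxConsumers;
        }

        public static void main(String[] args) {
            AddressSettingsSketch settings = new AddressSettingsSketch(10); // default-max-consumers=10
            System.out.println(resolveMaxConsumers(null, settings)); // 10 - inherited from address-setting
            System.out.println(resolveMaxConsumers(3, settings));    // 3  - explicit value wins
        }
    }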
1. Add test cases to verify the issue and fix; the tests also verify the same behavior using CORE, OPENWIRE and AMQP JMS clients.
2. Update Core Client to check for the queue before creating, and sharedQueue as per createQueue logic.
3. Update ServerSessionPacketHandler to handle packets from old clients, implementing the same fix server-side for older clients.
4. Correct AMQP protocol so the correct error code is returned on a security exception so that AMQP JMS can correctly throw JMSSecurityException
5. Correct AMQP protocol to check for queue exists before create
6. Correct OpenWire protocol to check for address exists before create
When database persistence is used without the shared store option,
Artemis chooses to use InVMNodeManager, which does not provide
the same behaviour as FileLockNodeManager.
Replace guava Preconditions with artemis Preconditions
Replace guava Predicate with java Predicate
Replace guava Ordering with java Comparator
Replace guava Immutable collections with ArrayList/Set and then wrap with unmodifiable
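For instance, the JDK equivalents used in place of the Guava types (plain JDK, illustrative values):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;
    import java.util.function.Predicate;

    public class GuavaReplacements {
        public static void main(String[] args) {
            // java.util.function.Predicate instead of com.google.common.base.Predicate
            Predicate<String> nonEmpty = s -> !s.isEmpty();
            System.out.println(nonEmpty.test("queue1")); // true

            // java.util.Comparator instead of Guava Ordering
            Comparator<String> byLength = Comparator.comparingInt(String::length);

            // ArrayList wrapped with Collections.unmodifiableList instead of ImmutableList
            List<String> names = new ArrayList<>(List.of("a1", "b22", "c333"));
            names.sort(byLength);
            List<String> readOnly = Collections.unmodifiableList(names);
            System.out.println(readOnly);
        }
    }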
PageCursorProviderImpl is not handling any pending cleanup tasks
on stop, leaving paging enabled due to the remaining pages to be
cleared up.
PagingStoreImpl is responsible to trigger the flushing of pending
tasks on PageCursorProviderImpl before stopping it and to try to
execute any remaining tasks on the owned common executor, before
shutting it down.
It fixes testTopicsWithNonDurableSubscription.
NotificationActiveMQServerPlugin and LoggingActiveMQServerPlugin are
implementing the deprecated version of
ActiveMQServerPlugin::messageExpired, which is not called by the
new version of the method or any other part of the code.
This fixes the test org.apache.activemq.artemis.tests.integration.management.NotificationTest#testMessageExpired
It avoids using the system clock to perform the lock logic
by using the DBMS time.
It contains several improvements to the JDBC error handling
and improved observability thanks to debug logs.
largeMessagesFactory::newBuffer could create a pooled direct ByteBuffer
that will not be released into the factory pool: using a heap ByteBuffer
will perform more internal copies, but will make it simpler to be garbage
collected.
QuorumFailOverTest.testQuorumVotingLiveNotDead fails
because the quorum vote takes longer to finish than
the test expects.
(The test used to pass until commit ARTEMIS-1763)
The previous commit about this feature wasn't using the row count query
ResultSet.
The mechanics have been changed to allow the row count query
to fail, because DROP and CREATE aren't transactional and immediate
in most DBMSs.
It includes a test that stresses these mechanics when used with DBMSs like
DB2 10.5 and Oracle 12c.
Additional checks and logs have been added to trace each step.
JdbcNodeManager is configured to use the same network timeout
value as the journal and to validate all the timeout values
related to correct HA behaviour.
The JDBC Connection leaks on:
- JDBCFileUtils::getDBFileDriver(DataSource, SQLProvider)
- SharedStoreBackupActivation.FailbackChecker::run on a failed awaitLiveStatus
Expose method to return current mappings of groups to consumers
Expose methods to reset (remove) specific group mapping from groupID to Consumer
Expose methods to reset (remove) all group mappings
messageAcknowledged plugin callback methods
Knowing the consumer that expired or acked a message (if available) is
useful, and right now a message reference only contains a consumer ID,
which by itself is not unique, so the actual consumer needs to be passed
When finding out if a connector belongs to a target node, it compares
the whole parameter map, which is not necessary. Also, the best place to
understand the connector is to delegate to the corresponding
remoting connection, which understands it (e.g. InVMConnection knows
whether the connector belongs to a target node by checking its
serverID only; the Netty ones only need to match host and port,
understanding that localhost and 127.0.0.1 are the same thing).
The queue metrics were being decremented improperly on iteration
over the cancelled scheduled messages because the flag for fromMessageReferences was not
set to false. Setting the flag to false skips the metrics update,
which is what we want, as the scheduled messages were never added to the
message references in the first place, so the metrics don't need updating.
When creating a temp destination with auto-create-address set to false, the
broker throws an error and refuses to create it. This doesn't conform to the
normal use-case (like the AMQP dynamic flag) where the temp destination should
be allowed even if auto-create-address is false.
There was logic to validate whether member is null,
which seemed a bit weird considering the else branch would throw an NPE.
Fixing it proactively based on Coverity scan findings.
The cluster connection bridge has a TopologyListener and connects to a new node
each time it receives a nodeUp() event. It needs a check here to make
sure that the cluster bridge only connects to its target node and its backups.
This issue shows up when you run the LiveToLiveFailoverTest.testConsumerTransacted
test.
Also in this commit: an improvement to BackupSyncJournalTest so that it runs more
stably.
It includes:
- Message References: no longer uses boxed primitives and AtomicInteger
- Node: intrusive nodes no longer need a reference field holding itself
- RefCountMessage: no longer uses AtomicInteger, but AtomicIntegerFieldUpdater
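A small sketch of the AtomicIntegerFieldUpdater pattern mentioned for RefCountMessage (an illustrative class, not the broker's): one static updater over a plain volatile int avoids allocating an AtomicInteger per message.

    import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

    // Illustrative ref-counted object: the updater is a single static instance shared
    // by all instances, so each one only carries a primitive volatile int field.
    final class RefCounted {
        private static final AtomicIntegerFieldUpdater<RefCounted> REF_COUNT =
                AtomicIntegerFieldUpdater.newUpdater(RefCounted.class, "refCount");

        private volatile int refCount; // no boxed AtomicInteger per instance

        int retain() {
            return REF_COUNT.incrementAndGet(this);
        }

        int release() {
            int count = REF_COUNT.decrementAndGet(this);
            if (count == 0) {
                // free resources here
            }
            return count;
        }

        public static void main(String[] args) {
            RefCounted message = new RefCounted();
            message.retain();
            System.out.println(message.release()); // 0
        }
    }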
It allows a user to customize the max allowed distance between system and DB time,
improving HA reliability by shutting down the broker when the misalignment
exceeds the configured limit.
In some environments it is not allowed to create a schema
by the application itself. With this change the AbstractJDBCDriver
now tests if an existing table is empty and executes further
statements in the same way as if the table does not exist.
It forces to use InVMNodeManager when no HA option is selected with JDBC persistence and includes the checks that the only valid JDBC HA options are SHARED_STORE_MASTER and SHARED_STORE_SLAVE.
The JDBC Lock Acquisition Timeout is no longer exposed to any user configuration and defaulted to infinite to match the behaviour of the journal (file-based) one.
When creating a durable topic subscription using the Artemis 1.x JMS
client library, the client sends a QueueQuery to the server to see if
the durable subscription queue already exists. The broker then performs
some transformation of the queue addresses to suit the 1.x naming
scheme. However, if the queue does not already exist the transform is
attempted on a null string, causing an NPE. To fix this we simply check
that the result returned has isExists=true.
Add test case to stop and restart the server after config reload and check state; this re-creates the network health check issue where config changes are lost when the network health check de-activates the server and then re-activates it.
Add fix to update the held configuration that's used when initialisation steps during start are done.
Free the hash sets used to hold page positions for acks and removed refs.
The two sets are cleared, but they still hold a big backing array.
It is safe to replace the old ones with empty sets.
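A tiny illustration of the point (plain JDK): clear() keeps the set's enlarged backing array alive, while swapping in a fresh set lets the old array be collected.

    import java.util.HashSet;
    import java.util.Set;

    public class FreeSetExample {
        private Set<Long> acks = new HashSet<>();

        void fill(int n) {
            for (long i = 0; i < n; i++) {
                acks.add(i); // grows the backing array
            }
        }

        void resetByClear() {
            acks.clear(); // still holds the large backing array
        }

        void resetByReplace() {
            acks = new HashSet<>(); // old set (and its big array) becomes garbage
        }

        public static void main(String[] args) {
            FreeSetExample example = new FreeSetExample();
            example.fill(1_000_000);
            example.resetByReplace();
            System.out.println(example.acks.size()); // 0, with a small backing array again
        }
    }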
Logging for the "fast-tests" profile used for PR builds could be reduced
significantly. This would save time as well as prevent log truncation
(Travis CI only supports logs up to 4MB).
Revert #1875
This reverts commit 5ad45369ce.
The storage manager is broken now as the AddressManager change here is trying to insert a record on the journal before startup.
- LargeServerMessageImpl.finalize is eventually causing deadlocks
- CoreMessage needs to check properties before decoding
- PagingTest tweaks
- ServerLocatorImpl can deadlock eventually, avoiding a lock and using actors
- ActiveMQServerImpl.finalize is also evil and can cause deadlocks on the testsuite
- MqttClusterRemoteSubscribeTest now needs to set up the Address in the setup
The PageCountPendingImpl was increasing the encode size without using its full allocation.
This was causing issues on replication as the encode is also used to determine the size of the packets.
However, the packets were not receiving the full allocated data, causing missing packets on the replication
and test failures.
This fixes the issue.
This is good when you are a customer and an Artemis engineer (e.g. me) asks for your journal print-data but you can't provide it because that would expose your users' data. If you do artemis data print --safe, it will only expose the journal structure without exposing users' data and eliminate any liability between the engineer and users.
Transactions may initialize a PagedReference without a valid message yet
during load of prepared transactions.
Caching has to be lazy on this case and it should load on demand.
Adding new metrics for tracking message counts and sizes on a Queue.
This includes tracking metrics for pending, delivering and scheduled
messages. The paging store also tracks message size now.
Cache `messageID`, `transactionID` and `isLargeMessage`
in PagedReference, so that on acknowledge we do not have to
get the PagedMessage, which may have been GCed and cause re-reading the entire page.
There was a small bug in the previous commit: the beforeMessageRoute
callback was being executed too early, so the RoutingContext wasn't
being filled in.
Support exclusive consumer
Allow default address level settings for exclusive consumer
Allow queue level setting in broker.xml
Add the ability to set queue settings via Core JMS using address. Similar to ActiveMQ 5.X
Allow for Core JMS client to define exclusive consumer using address parameters
Add tests
Make sure that if a bridge disconnects and there is no record in the topology, it uses the original bridge connector to reconnect.
Originally the live broker that disconnected was left in the Topology; this broke quorum voting because when the vote happened, all brokers, when asked, thought the target broker was still alive.
The fix for this was to remove the target live broker from the Topology. Since the bridge reconnect logic relied on this in a non-HA environment to reconnect, this stopped working.
The fix now uses the original target connector (or backup) to reconnect in the case where the broker was actually removed from the cluster.
https://issues.apache.org/jira/browse/ARTEMIS-1654
ActiveMQTestBase has been enhanced to expose the Database storage configuration and by adding specific JDBC HA configuration properties.
JdbcLeaseLockTest and NettyFailoverTests have been changed in order to make use of the JDBC configuration provided by ActiveMQTestBase.
JdbcNodeManager has been made restartable to allow failover tests to reuse it after a failover.
When the live starts replication, it must make sure there are
no pending writes in the message & bindings journals, or we may
lose journal records during initial replication.
So we need to flush the append executor after acquiring the StorageManager's
write lock, before the Journal's write lock.
Also we set a 10-second timeout when flushing, the same as
Journal::flushExecutor. If we fail to flush within 10 seconds,
we abort replication and the backup will try again later.
Use OrderedExecutorFactory::flushExecutor to flush the executor
We provide a feature to mask passwords in the configuration files.
However, passwords in the bootstrap.xml (when the console is
secured with HTTPS) cannot be masked. This enhancement has
been opened to allow passwords in the bootstrap.xml to be masked
using the built-in masking feature provided by the broker.
Also the LDAPLoginModule configuration (in login.config) has a
connection password attribute that also needs this mask support.
In addition the ENC() syntax is supported for password masking
to replace the old 'mask-password' flag.
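A rough sketch of how the ENC() wrapper described above can be recognized (plain string handling only; the broker's actual password codec classes are not shown here):

    // Sketch: detect the ENC(...) wrapper and hand the inner value to whatever
    // decoding routine the masking feature provides.
    public class EncSyntaxSketch {
        static boolean isMasked(String value) {
            return value != null && value.startsWith("ENC(") && value.endsWith(")");
        }

        static String unwrap(String value) {
            return value.substring("ENC(".length(), value.length() - 1);
        }

        public static void main(String[] args) {
            String configured = "ENC(3a67f2b1)"; // illustrative value as it might appear in bootstrap.xml or login.config
            if (isMasked(configured)) {
                String masked = unwrap(configured);
                System.out.println("masked payload: " + masked); // would be passed to the password codec
            }
        }
    }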
Change all uses from Set<RoutingType> to EnumSet<RoutingType>.
Deprecate any old exposed interfaces but keep them for backwards compatibility.
AddressInfo avoids an iterator on the getRoutingType hot path; likewise it can be avoided where a single RoutingType is passed in.
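For example, with the broker's two routing types (shown here as a local enum to stay self-contained), EnumSet keeps the routing types as a bit set with no boxing:

    import java.util.EnumSet;

    public class RoutingTypeEnumSetExample {
        // Local stand-in for org.apache.activemq.artemis.api.core.RoutingType
        enum RoutingType { ANYCAST, MULTICAST }

        public static void main(String[] args) {
            EnumSet<RoutingType> routingTypes = EnumSet.of(RoutingType.ANYCAST); // bit-set backed, no boxing

            // Cheap membership checks instead of iterating a generic Set
            System.out.println(routingTypes.contains(RoutingType.MULTICAST)); // false

            routingTypes.add(RoutingType.MULTICAST);
            System.out.println(routingTypes); // [ANYCAST, MULTICAST]
        }
    }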
When an address is removed from the address manager its linked addresses
also need to be removed if there are no more bindings for the address.
Also adding a null check on bindings of linked addresses when a new
binding is added
* Move byte util code into ByteUtil
* Re-use the new equals method in SimpleString
* Apply same pools/interners to client decode
* Create String to SimpleString pools/interners for property access via String keys (producer and consumer benefits)
* Lazy init the pools within the get methods of CoreMessageObjectPools to get the specific pool, to avoid having this scattered everywhere.
* Reduce SimpleString creation in conversion to/from core message methods with the JMS wrapper.
* Reduce SimpleString creation in conversion to/from Core in OpenWire, AMQP, MQTT.
Replace GenericSQLProvider and other implementation by a single
PropertySQLProvider that uses properties to define SQL queries.
SQL queries are loaded from the journal-sql.properties file.
Queries specific to a DB dialect can be specified by adding a suffix to
the key of the generic property.
For example, the generic property to create a file Table is:
create-file-table = CREATE TABLE %s (ID BIGINT AUTO_INCREMENT, ...)
This property can be customized for Derby by using the
create-file-table.derby property:
create-file-table.derby=CREATE TABLE %s (ID BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),...
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-1590
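A minimal sketch of the suffix-based lookup described above (plain java.util.Properties; the real provider also handles parameterization of the %s table name):

    import java.util.Properties;

    public class DialectPropertyLookup {
        // Look up "<key>.<dialect>" first, then fall back to the generic "<key>".
        static String sql(Properties queries, String key, String dialect) {
            String specific = queries.getProperty(key + "." + dialect);
            return specific != null ? specific : queries.getProperty(key);
        }

        public static void main(String[] args) {
            Properties queries = new Properties(); // would normally be loaded from journal-sql.properties
            queries.setProperty("create-file-table", "CREATE TABLE %s (ID BIGINT AUTO_INCREMENT)");
            queries.setProperty("create-file-table.derby",
                    "CREATE TABLE %s (ID BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY)");

            System.out.println(sql(queries, "create-file-table", "derby")); // Derby-specific query
            System.out.println(sql(queries, "create-file-table", "mysql")); // falls back to generic query
        }
    }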
FileNameKey was holding a reference to PropertiesLoader.this due to its inner class definition, causing RemotingConnectionImpl to leak through a long chain of dependencies rooted in a PropertiesLoader subclass property, i.e. PropertiesLoaderModule::callbackHandler.
FileNameKey is turned into a static inner class to break this hidden dependency.
Expose User associated with creating Queue on JMX QueueControl (as attribute)
Allow setting of the user to associate with creating the queue when configured in broker.xml (previously it was only possible to set the user if the queue was created over the wire)
In order to make the JDBC Node Manager more resilient, the following has been implemented:
- recovery with a fixed number of retries during the NodeId setup, with an unrecoverable failure otherwise
- an unrecoverable failure on exceptions while renewing a lease lock
In addition, more log information has been added in different parts of these critical processes to help diagnosis.
Revert "ARTEMIS-1545 Adding HornetQ 2.4.7 on the mesh to validate send-acks"
I'm reverting this as the testsuite is broken..
We will send it back once worked out.
This reverts commit 8f5b7a1e73.
This reverts commit 9b982b3e30.
OpenWire clients create consumers on advisory topics to receive
notifications. As a result there are internal queues created
on advisory topics. Those consumers shouldn't be exposed via
the management APIs which are used by the Console.
To fix that, the broker doesn't register any queues from
advisory addresses.
Also refactors the code to remove OpenWire-specific constants
from the AddressInfo class.
I'm doing an overall improvement on large message support for AMQP
However this commit is just about a Bug on the converter.
It will be moot after all the changes I'm making, but I would rather keep this separate
as a way to cherry-pick on previous versions eventually.
This test starts 2 servers and sends messages to
a queue until it enters paging state. Then
it changes the address max-size to -1, restarts
the 2 servers again and consumes all the messages.
It verifies that even if the max-size has changed
all the paged messages will be depaged and consumed.
No stuck messages after restarting.
The test is there to guard against a case where messages
won't be depaged on server restart after the max-size
is changed to -1. This issue has been fixed in
master along with the fix for ARTEMIS-581, particularly
the changes to the method PagingStoreImpl.getMaxSize().
Server.stop is currently waiting for completions on Sessions just because of test cases.
With the recent changes made to the Executors this is not needed any longer.
Instead of flushing we just need to make sure there are no more calls into
page executors as we stop the PageManager.
This will avoid any possible starvations or deadlocks here.
It fixes the NPE on server start due to:
- missing SqlProviderFactory
- missing executor factory/scheduled pool (ie using exclusive scheduled pools)
It fixes the WARNINGS due to wrong slowness detection while renewing JdbcLeaseLock.
An OpenWire consumer is listed twice under the "consumers" tab.
First it correctly shows the requested queue consumer.
Second it shows a consumer from the multicast queue ActiveMQ.Advisory.
The second one is internal and should be hidden.
Update Transformer to be able to handle initiation via properties (Map<String, String>)
Update Configuration to have a more specific transformer configuration type, and to take properties.
Support back compatibility.
Add AddHeadersTransformer which is a main use case, and can act as example also.
Update Controls to expose the new property configuration
Add test cases
Update examples for new transformer config style
- it is now possible to disable the TimedBuffer
- this is increasing the default on libaio maxAIO to 4k
- The Auto Tuning on the journal will use asynchronous writes to simulate what would happen on faster disks
- If you set datasync=false on the CLI, the system will suggest mapped and disable the buffer timeout
This closes #1436
This commit supersedes #1436 since it's now disabling the timed buffer through the CLI
Add the adaptTransportConfiguration() method to the
ClientProtocolManagerFactory so that transport configurations used by
the ClientProtocolManager have an opportunity to adapt their transport
configuration.
This allows the HornetQClientProtocolManagerFactory to adapt the
transport configuration received from a remote HornetQ broker to replace the
HornetQ-based NettyConnectorFactory with the Artemis-based one.
JIRA: https://issues.apache.org/jira/browse/ARTEMIS-1431
By default, every OpenWire connection will create a queue
under the multicast address ActiveMQ.Advisory.TempQueue.
If an OpenWire client is creating temporary queues, these queues
will fill up with messages for as long as the associated
OpenWire connection is alive. It appears these messages
do not get consumed from the queues.
The reason behind this is that advisory messages don't require
acknowledgement, so the messages stay in the queue.
Added an integration test to prove the issue and assert the fix.
Fix PersistentQueueBindingEncoding to return the value, not false.
Fix some method arg names to align with the class interface arg names
With NFSv4 it is now necessary to lock/unlock the byte of the server
lock file where the state information is written so that the
information is then flushed to the other clients looking at the file.
The regular expressions for wildcard matching now properly respects the last
delimiter in a pattern and will not match a pattern missing the
delimiter by mistake
The timeout logic is changed to use System::nanoTime, which is less sensitive to OS clock changes.
The volatile set on CriticalMeasure is changed to a cheaper lazySet.
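A short illustration of a System.nanoTime-based timeout check (plain JDK), which keeps working even if the wall clock jumps:

    import java.util.concurrent.TimeUnit;

    public class NanoTimeTimeout {
        public static void main(String[] args) throws InterruptedException {
            long timeoutNanos = TimeUnit.MILLISECONDS.toNanos(50);
            long start = System.nanoTime(); // monotonic, unaffected by OS clock changes

            Thread.sleep(20);

            // Compare elapsed time by subtraction; never compare nanoTime values directly.
            boolean expired = System.nanoTime() - start > timeoutNanos;
            System.out.println("expired: " + expired); // false, only ~20ms elapsed
        }
    }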
An OpenWire connection creates an internal session used to track
transaction status; it doesn't have a session callback. When
the connection is closed, the core session should check if the
callback is null to avoid an NPE.
Add support to update Queue config via reload using existing updateQueue method at runtime.
Add/extend unit test cases to include testing reload of queue config.
Add a new error in the message bundle to include the queue
Update the security check to support taking an optional queue
Update code that is operating on queues to pass the queue name during the check so the queue name can be in the error log if there is a security issue.
There is a leak of replication tokens at the moment when a backup is
shut down or killed and the ReplicationManager is stopped. If there
are some tasks (holding replication tokens) in the executor, these
tokens are simply ignored and replicationDone method isn't called on
them. Because of this, some tasks in OperationContextImpl cannot be
finished.
Instead of waiting to flush an executor,
I have added a method isFlushed() which will just translate to the
state on the OrderedExecutor.
In the case another executor is provided (for tests) there's a delegate
to normal executors.
Delegate to the JDK SaslServer. Allow acceptor configuration of supported mechanisms; saslMechanisms=<a,b>
and allow the login config scope for krb5 to be configured via saslLoginConfigScope=x
On completion of drain, the response is not flushed and the
client can wait a few seconds before another broker task
flushes the work. Flush the connection after updating the
link as being drained. Also perform the work with the
connection lock held to prevent concurrent update of proton
state.
This is replacing an executor on ServerSessionPacketHandler
with an actor.
This is to avoid creating a new runnable per packet received.
Instead of creating a new Runnable, this will use a single static runnable
and the packet will be sent as a message, which will be treated by a listener.
Look at ServerSessionPacketHandler in this commit for more information on how it works.
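A stripped-down sketch of the actor idea described above (not the broker's actual Actor class): packets go onto a queue and a single reusable task drains it, instead of allocating one Runnable per packet.

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.function.Consumer;

    // Illustrative actor: one Runnable instance is reused for every delivery.
    final class MiniActor<T> {
        private final Queue<T> mailbox = new ConcurrentLinkedQueue<>();
        private final AtomicBoolean scheduled = new AtomicBoolean();
        private final ExecutorService executor;
        private final Consumer<T> listener;
        private final Runnable drainTask = this::drain; // single runnable, no per-packet allocation

        MiniActor(ExecutorService executor, Consumer<T> listener) {
            this.executor = executor;
            this.listener = listener;
        }

        void act(T message) {
            mailbox.add(message);
            if (scheduled.compareAndSet(false, true)) {
                executor.execute(drainTask);
            }
        }

        private void drain() {
            T message;
            while ((message = mailbox.poll()) != null) {
                listener.accept(message);
            }
            scheduled.set(false);
            // Re-check in case a message arrived between the last poll() and the flag reset.
            if (!mailbox.isEmpty() && scheduled.compareAndSet(false, true)) {
                executor.execute(drainTask);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            MiniActor<String> actor = new MiniActor<>(pool, packet -> System.out.println("handled " + packet));
            actor.act("SESS_SEND");
            actor.act("SESS_COMMIT");
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.SECONDS);
        }
    }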
Add krb5sslloginmodule that will populate userPrincipal, which can be mapped to roles independently
Generalised callback handlers to take a connection and pull certs or peerprincipal based on the
callback. This bubbled up into an API change in the security store and security manager
If replication blocked anything on the journal,
the processing from clients would be blocked
and nothing would work.
As part of this fix I am using an executor on ServerSessionPacketHandler,
which will also scale better as the reader from Netty would be freed immediately.
Core client with Netty connector and acceptor doing Kerberos:
jaas.doAs around the SSLEngine init such that the SSL handshake can do Kerberos ticket
generation and validation.
The kerberos authenticated user is then validated with the security manager before
being populated into the message userId.
The feature is enabled with the kerb5Config property. When lowercase it is the
principal. With a leading uppercase char it is the login.config entry to use.
The MAPPED journal refactoring includes:
- simplified lifecycle and logic (e.g. fixed file size with a single mmap memory region)
- support for the TimedBuffer to coalesce msyncs (via the Decorator pattern)
- TLAB pooling of direct ByteBuffers like the NIO journal
- removal of old benchmarks and benchmark dependencies
The default id-cache-size is 20000 and the default
confirmation-window-size is 1MB. It turns out the 1MB
size is too small for id-cache-size.
To fix it we adjust the confirmation-window-size to 10MB. Also
a test is added to guarantee this rule won't be broken when this
default value is changed to any new value.
When a large message is replicated to backup, a pendingID is generated
when the large message is finished. This pendingID is generated by a
BatchingIDGenerator at backup.
It is possible that a pendingID generated at backup may be a duplicate
to an ID generated at live server.
This can cause a problem when a large message with a messageID that is
the same as another largemessage's pendingID is replicated and stored
in the backup's journal, and then a deleteRecord for the pendingID
is appended. If backup becomes live and loads the journal, it will
drop the large message add record because there is a deleteRecord of
the same ID (even though it is a pendingID of another message).
As a result the expecting client will never get this large message.
So in summary, the root cause is that the pendingIDs for large
messages are generated at the backup while the backup is not alive.
The solution to this is that instead of the backup generating
the pendingID, we make them all be generated in advance
at the live server and let them be replicated to the backup wherever needed.
The ID generator at the backup only works when the backup becomes live
(when it is properly initialized from the journal).
It fixes compatibility issues with JMS Core clients using the old address model, allowing the client to query JMS temporary queues too.
You would eventually see this issue when using older clients:
AMQ119019: Queue already exists
This method name would clash with ServiceComponent.
As the real meaning of this method is just to fail over,
I've renamed the method to avoid the clash with my next commit.
(I've done this on a separate commit as you may need to redo this
commit from scratch again in other branches instead of dealing with lots of clashes on cherry-pick)
When a large message is being diverted, a new copy of the original
message is created and replicated (if there is a backup) to the backup.
In LargeServerMessageImpl.copy(long) it reuses a byte array to copy
the message body. It is possible that one block of data is read into
the byte array before the previous read has been replicated,
causing the replicated bytes to be corrupted.
If we make a copy of the byte array before replication, the corruption
of data will be avoided.
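A minimal illustration of the fix's idea (plain JDK, with a placeholder for the replication call): hand replication its own copy of the buffer so the next read cannot overwrite bytes that are still in flight.

    import java.util.Arrays;

    public class CopyBeforeReplicate {
        // Placeholder for the replication call, which may complete asynchronously.
        static void replicate(byte[] block) {
            System.out.println("replicating " + block.length + " bytes");
        }

        public static void main(String[] args) {
            byte[] reusableBuffer = new byte[4];

            // First "read" fills the shared buffer.
            byte[] firstBlock = {1, 2, 3, 4};
            System.arraycopy(firstBlock, 0, reusableBuffer, 0, firstBlock.length);
            replicate(Arrays.copyOf(reusableBuffer, firstBlock.length)); // copy: safe from later reuse

            // Second "read" reuses the same buffer; the earlier copy is unaffected.
            byte[] secondBlock = {9, 9, 9, 9};
            System.arraycopy(secondBlock, 0, reusableBuffer, 0, secondBlock.length);
            replicate(Arrays.copyOf(reusableBuffer, secondBlock.length));
        }
    }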
Add extra configuration to address-settings to be able to
control / enable address/queue deletion by pattern,
rather than a global toggle.
Add support in the reload logic to remove address
and/or queues if the address matches an address setting,
where it is enabled.
Use ActiveMQDestination for subscription naming, fixing and aligning queue naming in the process.
The change is behind a configuration toggle so as to avoid causing any breaking changes for users not expecting it.
In a cluster if a node is shut down (or crashed) when a
message is being routed to a remote binding, an internal
property may be added to the message and persisted. The
name of the property is like _AMQ_ROUTE_TOsf.my-cluster*.
If the node starts back up, it will load and reroute this message
and if it goes to a local consumer, this property won't
get removed and goes to the client.
The fix is to remove this internal property before it
is sent to any client.
remove custom repo
update groupid to match artifact in maven central.
bump version also to that now deployed to maven central.
bump checkstyle version to 7.7 to make compatible.
updated checkstyle.xml to ignore existing issues which are prolific
and which are now flagged in the latest version, as some bugs in previous versions meant they weren't detected, e.g. https://github.com/checkstyle/checkstyle/issues/3320
fixing some violations which are not too prolific.
This commit has 2 changes for backwards compatibility with older
clients:
1) A "bindings query" will now detect if the client is both JMS and
"old" (i.e. pre-2.0) and will prefix the returned queue names with the
old prefix (i.e. "jms.queue."). This will allow the old client to
properly detect whether or not a queue exists in its auto-creation
logic.
2) When messages are dispatched to a consumer there is logic to detect
if the consumer is both JMS and "old" and will prefix the "address"
on the message with "jms.queue." or "jms.topic." as appropriate
(if it's not already prefixed).
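A tiny sketch of the prefixing rule in point 2 (plain string logic; detection of "old JMS" clients is not shown):

    public class OldClientPrefixExample {
        static String prefixForOldJmsClient(String address, boolean isQueue) {
            String prefix = isQueue ? "jms.queue." : "jms.topic.";
            return address.startsWith(prefix) ? address : prefix + address; // only prefix if not already prefixed
        }

        public static void main(String[] args) {
            System.out.println(prefixForOldJmsClient("orders", true));            // jms.queue.orders
            System.out.println(prefixForOldJmsClient("jms.queue.orders", true));  // unchanged
            System.out.println(prefixForOldJmsClient("prices", false));           // jms.topic.prices
        }
    }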
Building on the ARTEMIS-905 JCTools ConcurrentMap replacement first proposed but currently parked by @franz1981, replace the collections with primitive-key concurrent collections to avoid autoboxing.
The goal of this is to reduce/remove autoboxing on the hot path.
We are just adding jctools to the broker (it should not be in client dependencies).
Likewise, targeting specific use cases with specific implementations rather than a blanket replace-all.
Using collections from BookKeeper reduces outside-TLAB allocation on resizing compared to JCTools, which occurs frequently in testing.
The empty folder artemis-server/artemis-load-generator seems to be a
git submodule link. However, there is no submodule configuration
file (i.e. .gitmodules) in this repository.
This caused the 'git submodule' command to fail with the following
error message:
fatal: no submodule mapping found in .gitmodules for path 'artemis-server/artemis-load-generator'
Added a wait-for-activation option to shared-store master HA policies.
This option is enabled by default to ensure unchanged server startup behavior.
If this option is enabled, ActiveMQServer.start() with a shared-store master server will not return
before the server has been activated.
If this option is disabled, start() will return after a background activation thread has been started.
The caller can use waitForActivation() to wait until server is activated, or just check the current activation status.
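A rough sketch of the two start modes described above (hypothetical class, not the broker's; the real option lives in the shared-store master HA policy): with the option enabled, start() blocks until activation; otherwise it returns immediately and the caller may wait separately.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    // Illustrative server: activation runs on a background thread; start() either waits
    // for it (default, matching the old behaviour) or returns right away.
    final class SketchServer {
        private final boolean waitForActivation;
        private final CountDownLatch activated = new CountDownLatch(1);

        SketchServer(boolean waitForActivation) {
            this.waitForActivation = waitForActivation;
        }

        void start() throws InterruptedException {
            Thread activation = new Thread(() -> {
                // ... acquire shared-store lock, load journal, etc. ...
                activated.countDown();
            });
            activation.start();
            if (waitForActivation) {
                activated.await(); // unchanged startup behaviour
            }
        }

        boolean waitForActivation(long timeout, TimeUnit unit) throws InterruptedException {
            return activated.await(timeout, unit);
        }

        public static void main(String[] args) throws InterruptedException {
            SketchServer server = new SketchServer(false); // option disabled: start() returns early
            server.start();
            System.out.println("activated: " + server.waitForActivation(5, TimeUnit.SECONDS));
        }
    }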
Adding a new ActiveMQServerPlugin interface to support adding custom
behavior to the broker at certain events such as connection or session
creation.
https://issues.apache.org/jira/browse/ARTEMIS-898
Use `long` array for hourly counters instead of `int` array.
Prevents overflow when the number of new messages (a `long`) is added.
Fixes one of the "Implicit narrowing conversion in compound assignment"
alerts on https://lgtm.com/projects/g/apache/activemq-artemis/alerts.
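A short demonstration of the overflow being avoided (plain JDK): adding a long message count into an int slot narrows silently, while a long slot keeps the full value.

    public class CounterWidthExample {
        public static void main(String[] args) {
            long newMessages = 3_000_000_000L; // a long count of new messages

            int[] hourlyInt = new int[24];
            hourlyInt[0] += newMessages; // implicit narrowing: wraps to a negative value
            System.out.println(hourlyInt[0]);

            long[] hourlyLong = new long[24];
            hourlyLong[0] += newMessages; // no narrowing, value preserved
            System.out.println(hourlyLong[0]); // 3000000000
        }
    }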
Have `AddressControlImpl::getMessageCount` use and return a `long`.
Prevents potential overflow from use of an `int` count variable.
Fixes one of the "Implicit narrowing conversion in compound assignment"
alerts at https://lgtm.com/projects/g/apache/activemq-artemis/alerts.
Instead of going directly into backup mode within the shared-store
live activation, we just change the HA-policy to slave and return
to the caller - ActiveMQServerImpl.internalStart().
The caller will then handle the backup activation as usual
in a separate thread, such that EmbeddedJMS.start() can return.
Also added a related integration test.
sendMessage() may throw an ActiveMQException that causes a CNFE
at the management client. Also it should check if the headers
in the message are null (to prevent an NPE).
The ActiveMQJAASSecurityManager class uses LoginContext to validate
users and roles. LoginContext loads LoginModule classes defined in
the configuration (login.config) using current thread's context
classloader.
Normally this wouldn't be a problem but when a caller thread comes
from JMX (for example a client calls QueueControl.sendMessage() via
JMX) the caller thread has a different context class loader.
This will cause the LoginContext to fail to load the LoginModule
class (e.g. PropertiesLoginModule) and the validation will fail
even if correct credentials are supplied.
The broker should support fully qualified queue names (FQQN)
as well as bare queue names. This means when clients access
a queue they have two equivalent ways to do so. One way
is by queue name and the other is by FQQN (i.e. address::qname).
Currently only receiving is supported.
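A small sketch of the address::qname form mentioned above (string handling only; the client and broker APIs that consume FQQNs are not shown):

    public class FqqnExample {
        private static final String SEPARATOR = "::";

        static String toFqqn(String address, String queue) {
            return address + SEPARATOR + queue;
        }

        static String addressOf(String fqqn) {
            return fqqn.substring(0, fqqn.indexOf(SEPARATOR));
        }

        static String queueOf(String fqqn) {
            return fqqn.substring(fqqn.indexOf(SEPARATOR) + SEPARATOR.length());
        }

        public static void main(String[] args) {
            String fqqn = toFqqn("orders", "orders.eu"); // "orders::orders.eu"
            System.out.println(fqqn);
            System.out.println(addressOf(fqqn)); // "orders"
            System.out.println(queueOf(fqqn));   // "orders.eu"
        }
    }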
AIOFileLockManager doesn't work on NFS-mounted shared store directories.
Since the GFS2 bug https://bugzilla.redhat.com/show_bug.cgi?id=678585
was fixed at the end of 2011, the class AIOFileLockManager is no longer needed and I have removed it.
Add tests for this management operation with both core and AMQP encoded
messages. Also fix a few problems with the implementation like not
checking the passed-in headers for null and not counting messages
properly.
When a replica attempts to connect to a live server using a group-name
and there are > 1 servers on the network using that group there is a
chance it will fail because it doesn't keep track of all of the
topology data it receives. This fix ensures that all the topology data
from the cluster is tracked until it is used and fails, at which point it
is discarded.
When populate-validated-user = true AMQP messages can cause exceptions.
This feature isn't particularly applicable to AMQP so this commit
eliminates the exception and leaves the AMQP messages untouched
even if populate-validated-user = true. In other words,
populate-validated-user + AMQP is not supported.
If broker fails to decode any packets from buffer, it should
treat it as a critical bug and disconnect immediately.
Currently broker only logs an error message.
The following changes are made to support Epoll.
Refactored SharedNioEventLoopGroup into the renamed SharedEventLoopGroup to be generic (so we can re-use it for both NIO and Epoll)
Add support and toggles for Epoll in NettyAcceptor and NettyConnector (with fallback to NIO if Epoll cannot be loaded)
Removal of PartialPooledByteBufAllocator from the code; it caused a bad address when doing native IO and is no longer needed - see the JIRA discussion
New Connector Properties:
useEpoll - toggles whether to use Epoll or not, default true (but we fall back to NIO gracefully)
remotingThreads = same behaviour as nioRemotingThreads. The previous property is deprecated.
useGlobalWorkerPool = same behaviour as useNioGlobalWorkerPool. The old property is deprecated.
New Acceptor Properties:
useEpoll - toggles whether to use Epoll or not, default true (but we fall back to NIO gracefully)
useGlobalWorkerPool = same behaviour as useNioGlobalWorkerPool but for Epoll.
This closes #1093
Adjust slow-consumer detection logic to use the number of messages in
the queue and not just the number of messages added since the last
check. This means the getRate() method now returns the rate of messages
which it *could* have dispatched since the last check rather than the
rate at which it received messages. This is a more reliable metric to
ensure the slow-consumer detection logic doesn't flag a consumer as
slow unfairly. Although the reliability will come at a performance cost
since getMessageCount() must lock the queue.
As part of my refactoring on AMQP, the broker shouldn't rely on Application properties
for any broker semantic changes on delivery.
I am removing any access to those now, so we can properly deal with this post 2.0.0.
Artemis exposes the createQueue() method to management consoles like Jon.
If the queue to be created already exists, it throws an ActiveMQException
back to the console, which will get a ClassNotFoundException when
deserializing the exception.
The same issue also applies to the AddressControl.sendMessage() method.
To fix this, it should throw a common java exception like IllegalStateException.
With this we can send and receive messages in their raw format,
without requiring conversions to Core.
- MessageImpl and ServerMessage are removed as part of this
- AMQPMessage and CoreMessage will have the specialized message format for each protocol
- The protocol manager is now responsible to send the message
- The message will provide an encoder for journal and paging
Due to recent changes, the web component is shut down by the
server, but the shutdown flag is lost, so the web component's
cleanup check method does not get called and the web's tmp
dir is left there after the user stops the broker (control-c).
The fix is to add a suitable API to allow passing of the
flag so the web component can make sure its tmp dir gets
cleaned up properly before exiting the VM.
In a cluster of replicated live/backup pairs if a backup crashes and
then its live crashes the cluster will retain the topology information
of the live such that when the live server restarts it will check the
cluster to see if its nodeID is present (which it will be) and then it
will activate as a backup rather than a live. To prevent this situation
an additional check is necessary to see if the server with the matching
nodeID is actually active or not which is done by attempting to make a
connection to it.