Extend test case to reproduce the problem of client-created queues being incorrectly removed on a simple reload of the config.
Add a flag/field to the queues created by configuration/broker.xml so we can correctly filter only queues created/managed by config.
Update listConfiguredQueues to use the new queue flag (a sketch of the flag-based filter follows below)
Add Tests
Add implementation in line with other queue-updatable settings.
Enhance tests to ensure the queue is not destroyed during a config change and messages already in the queue are preserved
Revert previous fix
Keep original ConfigChangeTest
Apply new non-destructive fix.
Enhance tests to ensure messages in queues are not lost either on reload while running or when the config is changed on restart (i.e. the queue is not destroyed)
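For illustration only, a minimal sketch of the flag-based filtering described in the commits above; the class, field and method names are hypothetical, not the actual broker code.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: mark queues created via broker.xml and filter on that flag.
class ConfiguredQueueFilterSketch {

   static class QueueInfo {
      final String name;
      // set only for queues created/managed via broker.xml configuration
      final boolean configurationManaged;

      QueueInfo(String name, boolean configurationManaged) {
         this.name = name;
         this.configurationManaged = configurationManaged;
      }
   }

   // Only configuration-managed queues are candidates for removal on a config
   // reload, so client-created queues survive the reload untouched.
   static List<QueueInfo> listConfiguredQueues(List<QueueInfo> allQueues) {
      return allQueues.stream()
                      .filter(q -> q.configurationManaged)
                      .collect(Collectors.toList());
   }
}
```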
Any failure to deploy an address or queue will short-circuit the broker
initialization process preventing any other addresses or queues from
being deployed as well as other critical resources like acceptors, etc.
The ServerSessionPacketHandler has a close() callback handler which will
delete any pending large messages. However, there is a race where a
large message can be routed, and then the close deletes the associated large
message, resulting in data loss.
First, QueueQuery should use address name for address settings
The name used for looking up address settings for a queue now uses the
address name if there is a local queue binding
Second, make sure the credits sent to the server are the correct value
Fix checkstyle
Avoid duplicated logic
Ability to filter and group
Instantiate SimpleString property key once
Get the property value via getObjectProperty to ensure all specially mapped properties, such as those in AMQPMessage, are returned
Avoid a custom string to represent null; instead rely on Java's "null" representation by using Objects.toString to get the string value of the property used to group by.
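A minimal sketch of the two points above. The property name and the lookup function are illustrative stand-ins (in the real code the key would be a SimpleString and the lookup the broker's Message API), not the actual implementation.

```java
import java.util.Objects;
import java.util.function.Function;

class GroupByKeySketch {
   // the property key is instantiated once and reused (hypothetical name)
   private static final String GROUP_BY_PROPERTY = "myGroupProperty";

   // getObjectProperty stands in for a lookup that resolves specially
   // mapped properties (e.g. on AMQPMessage) as well as plain ones
   static String groupKey(Function<String, Object> getObjectProperty) {
      Object value = getObjectProperty.apply(GROUP_BY_PROPERTY);
      // Objects.toString returns the literal "null" for a null value,
      // so no custom sentinel string is needed
      return Objects.toString(value);
   }
}
```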
Separate the plugin interface by area, all extending a base interface.
Update code to check for and call only plugins implementing the specific interfaces.
The existing interface extends all the new interfaces for backward compatibility, and for those who want simplicity and don't care about performance.
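A minimal sketch of the pattern described above, with illustrative interface names only: fine-grained plugin interfaces extend a common base, a combined interface is kept for compatibility, and the broker only invokes plugins that implement the specific interface it needs.

```java
interface ActiveMQPluginBaseSketch { }

interface ConnectionPluginSketch extends ActiveMQPluginBaseSketch {
   void afterCreateConnection();
}

interface MessagePluginSketch extends ActiveMQPluginBaseSketch {
   void beforeSend();
}

// Combined interface kept for backward compatibility / simplicity.
interface CombinedPluginSketch extends ConnectionPluginSketch, MessagePluginSketch { }

class PluginDispatcherSketch {
   void callConnectionPlugins(java.util.List<ActiveMQPluginBaseSketch> plugins) {
      for (ActiveMQPluginBaseSketch plugin : plugins) {
         // only plugins implementing the specific interface are called,
         // avoiding needless invocations of no-op callbacks
         if (plugin instanceof ConnectionPluginSketch) {
            ((ConnectionPluginSketch) plugin).afterCreateConnection();
         }
      }
   }
}
```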
This commit adds support for tracking metrics for bridges for both
normal bridges and bridges that are part of a cluster. The two
statistics added in this commit are messages pending acknowledgement
and messages acknowledged but more can be added later.
Anonymous senders (those created without a target address) are not
blocked when max-disk-usage is reached. The cause is that when such
a sender is created on the broker, the broker doesn't check the
disk/memory usage and gives out the credit immediately.
Fix so that it only updates if not null, to avoid the user being unset by existing APIs.
This is similar to other values, which only change when not null.
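A minimal sketch of the null-guard described above; the field and method names are illustrative, not the actual API.

```java
class NullGuardedUpdateSketch {
   private String user;

   void update(String newUser) {
      // only overwrite when a value was actually supplied, so an existing
      // user is not unset by callers that pass null
      if (newUser != null) {
         this.user = newUser;
      }
   }

   String getUser() {
      return user;
   }
}
```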
This reverts commit c3fbd1b9e4.
Based on the discussion on the PR
https://github.com/apache/activemq-artemis/pull/2035, this shouldn't have
been merged. It's importing JMS-specific code into the core broker, which
is something we've worked hard to eliminate in recent releases.
- Split protocols into individual chapters
- Reorganize summary to flow more logically
- Fill in missing parameters in configuration index
- Normalize spaces for ordered and unordered lists
- Re-wrap lots of text for readability
- Fix incorrect XML snippets
- Normalize table formatting
- Improve internal links with anchors
- Update content to reflect new address model
- Resize architecture images to avoid excessive white-space
- Update some JavaDoc
- Update some schema elements
- Disambiguate AIO & ASYNCIO where necessary
- Use URIs instead of Objects in code examples
Authentication failures are currently only logged for CORE clients.
This change puts the logging in a central location which all protocols
use for authentication so that authentication failures are logged for
all protocols.
Calling close multiple times on ServerConsumer can result in multiple
notifications being routed around the cluster. This causes cluster
topology info to become skewed, which affects a number of components
such as message redistribution and metrics, and can eventually cause OOM
should multiple queues be redistributing at the same time.
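A minimal sketch, not the actual ServerConsumer code, of making close() idempotent so repeated calls cannot emit duplicate cluster notifications.

```java
import java.util.concurrent.atomic.AtomicBoolean;

class ConsumerCloseSketch {
   private final AtomicBoolean closed = new AtomicBoolean(false);

   void close() {
      // only the first caller wins; subsequent calls return without
      // routing another consumer-closed notification around the cluster
      if (!closed.compareAndSet(false, true)) {
         return;
      }
      sendConsumerClosedNotification();
   }

   private void sendConsumerClosedNotification() {
      // placeholder for the notification routed to the cluster topology
   }
}
```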
After sending pages, the thread holds the storage manager write lock
and sends the synchronization-finished packet (using the parent thread pool)
to the backup node. At the same time, the thread pool is full because its
threads are waiting for the storage manager read lock to write the page or
journal, leading to a replication start failure. Here we use the IO executor
to send the replication packet to fix the thread pool starvation problem.
Quorum voting is used by both the live and the backup to decide what to do if a replication connection is disconnected.
Basically, the server will request each live server in the cluster to vote as to whether it thinks the server it is replicating to or from is still alive.
You can also configure the time for which the quorum manager will wait for the quorum vote response.
Currently, the value is hardcoded as 30 sec. We should change this 30-second wait to be configurable.
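For illustration, a minimal sketch of what making the wait configurable could look like; the field, constructor and default handling here are assumptions, not the broker's actual quorum code.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class QuorumVoteWaitSketch {
   private final long quorumVoteWaitSeconds;

   QuorumVoteWaitSketch(long configuredWaitSeconds) {
      // keep the old hardcoded behaviour (30 seconds) as the default
      // when nothing is configured
      this.quorumVoteWaitSeconds = configuredWaitSeconds > 0 ? configuredWaitSeconds : 30;
   }

   // wait for the vote responses up to the configured timeout
   boolean awaitVoteResults(CountDownLatch votesIn) throws InterruptedException {
      return votesIn.await(quorumVoteWaitSeconds, TimeUnit.SECONDS);
   }
}
```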
When setting "default-max-consumers" in an address setting in broker.xml,
it has no effect on the "max-consumers" property of queues on matching addresses.
Tests added as part of a previous commit
This closes #2107
1. Add test case to verify the issue and the fix; the tests also check for the same behavior using CORE, OpenWire and AMQP JMS clients.
2. Update the Core client to check for the queue before creating it, applying to sharedQueue the same check as the createQueue logic (see the sketch below).
3. Update ServerSessionPacketHandler to handle packets from old clients, implementing the same fix server-side for older clients.
4. Correct the AMQP protocol so the correct error code is returned on a security exception, so that AMQP JMS can correctly throw JMSSecurityException.
5. Correct the AMQP protocol to check whether the queue exists before creating it.
6. Correct the OpenWire protocol to check whether the address exists before creating it.
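A minimal sketch of the "check before create" behaviour applied in points 2, 5 and 6 above; the queueExists/createQueue calls are illustrative stand-ins for the protocol-specific operations, not the real client API.

```java
class CheckBeforeCreateSketch {

   // hypothetical session abstraction over the protocol-specific client
   interface SessionLike {
      boolean queueExists(String queueName);
      void createQueue(String address, String queueName);
   }

   static void ensureQueue(SessionLike session, String address, String queueName) {
      // only attempt creation when the queue is missing, so an existing queue
      // (possibly created by another client) does not trigger an error
      if (!session.queueExists(queueName)) {
         session.createQueue(address, queueName);
      }
   }
}
```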
When database persistence is used and no shared store option is selected,
Artemis chooses to use InVMNodeManager, which does not provide
the same behaviour as FileLockNodeManager.
Replace guava Preconditions with artemis Preconditions
Replace guava Predicate with java Predicate
Replace guava Ordering with java Comparator
Replace guava Immutable collections with ArrayList/HashSet and then wrap with unmodifiable wrappers
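A minimal sketch of the replacements in the list above, illustrative only: java.util.function.Predicate instead of guava Predicate, Comparator instead of guava Ordering, and an unmodifiable wrapper instead of a guava Immutable collection.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

class GuavaReplacementSketch {

   static List<String> nonEmptySorted(List<String> input) {
      // java Predicate replaces guava Predicate
      Predicate<String> nonEmpty = s -> s != null && !s.isEmpty();
      // java Comparator replaces guava Ordering
      Comparator<String> naturalOrder = Comparator.naturalOrder();

      List<String> result = new ArrayList<>();
      for (String s : input) {
         if (nonEmpty.test(s)) {
            result.add(s);
         }
      }
      result.sort(naturalOrder);
      // wrap with unmodifiable instead of copying into an ImmutableList
      return Collections.unmodifiableList(result);
   }
}
```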
PageCursorProviderImpl is not handling any pending cleanup tasks
on stop, leaving paging enabled due to the remaining pages still to be
cleaned up.
PagingStoreImpl is now responsible for triggering the flush of pending
tasks on PageCursorProviderImpl before stopping it, and for trying to
execute any remaining tasks on the owned common executor before
shutting it down.
It fixes testTopicsWithNonDurableSubscription.
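A minimal sketch of the shutdown ordering described above, not the actual PagingStoreImpl code: flush the pending cleanup work first, then drain the owned executor before shutting it down.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class PagingStopSketch {
   private final ExecutorService executor = Executors.newSingleThreadExecutor();

   void stop(Runnable pendingCleanup) throws InterruptedException {
      // trigger any pending cleanup before the cursor provider is stopped
      executor.execute(pendingCleanup);
      // stop accepting new tasks and let the remaining ones run
      executor.shutdown();
      if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
         // last resort: cancel whatever is still queued
         executor.shutdownNow();
      }
   }
}
```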
NotificationActiveMQServerPlugin and LoggingActiveMQServerPlugin are
implementing the deprecated version of
ActiveMQServerPlugin::messageExpired, which is not called by the
new version of the method or any other part of the code.
This fixes the test org.apache.activemq.artemis.tests.integration.management.NotificationTest#testMessageExpired
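A minimal sketch, with illustrative names and simplified parameter types, of the usual way to keep a deprecated plugin callback working: the new overload delegates to the old one by default, so plugins that only implement the deprecated signature are still invoked.

```java
interface ServerPluginSketch {

   @Deprecated
   default void messageExpired(String messageId, String reason) {
      // old callback: no-op by default
   }

   // newer overload that also carries the consumer; by default it delegates
   // to the deprecated method so older plugin implementations still run
   default void messageExpired(String messageId, String reason, String consumerId) {
      messageExpired(messageId, reason);
   }
}
```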
It avoids using the system clock for the lock logic
by using the DBMS time instead.
It contains several improvements to the JDBC error handling
and improved observability thanks to debug logs.
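A minimal sketch of reading the database's own clock instead of the local system clock; the query shown is illustrative and varies per DBMS (Oracle, for example, needs a FROM DUAL clause).

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class DbTimeSketch {

   // returns the DBMS's notion of "now" in epoch millis
   long currentDbTimeMillis(Connection connection) throws SQLException {
      try (Statement statement = connection.createStatement();
           ResultSet rs = statement.executeQuery("SELECT CURRENT_TIMESTAMP")) {
         rs.next();
         return rs.getTimestamp(1).getTime();
      }
   }
}
```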
largeMessagesFactory::newBuffer could create a pooled direct ByteBuffer
that will not be released into the factory pool: using a heap ByteBuffer
performs more internal copies, but makes the buffer simpler to be garbage
collected.
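A minimal sketch of the trade-off described above, illustrative only: a heap buffer may copy more internally but is reclaimed by the GC, whereas a pooled direct buffer must be explicitly released back to its pool or it leaks.

```java
import java.nio.ByteBuffer;

class LargeMessageBufferSketch {

   ByteBuffer newBuffer(int size) {
      // heap allocation: simpler lifecycle, no pool release required
      return ByteBuffer.allocate(size);
      // versus ByteBuffer.allocateDirect(size) or a pooled direct buffer,
      // which would need to be returned to the pool to avoid leaking it
   }
}
```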
QuorumFailOverTest.testQuorumVotingLiveNotDead fails
because the quorum vote takes longer to finish than
the test expects.
(The test used to pass until the commit for ARTEMIS-1763.)
The previous commit about this feature wasn't using the row count query
ResultSet.
The mechanics have been changed to allow the row count query
to fail, because DROP and CREATE aren't transactional and immediate
in most DBMSs.
It includes a test that stresses these mechanics when used with DBMSs like
DB2 10.5 and Oracle 12c.
Additional checks and logs have been added to trace each step.
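A minimal sketch of tolerating a failed row-count query, as described above; the table name and the "-1 means unknown" convention are illustrative. DROP/CREATE are not transactional on every DBMS, so the count may fail and is treated as unknown instead of aborting.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class RowCountSketch {

   // returns -1 when the count cannot be determined
   long countRows(Connection connection, String tableName) {
      try (Statement statement = connection.createStatement();
           ResultSet rs = statement.executeQuery("SELECT COUNT(*) FROM " + tableName)) {
         return rs.next() ? rs.getLong(1) : -1;
      } catch (SQLException e) {
         // log and continue; the caller retries or proceeds without the count
         return -1;
      }
   }
}
```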
JdbcNodeManager is configured to use the same network timeout
value as the journal and to validate all the timeout values
required for correct HA behaviour.
The JDBC Connection leaks on:
- JDBCFileUtils::getDBFileDriver(DataSource, SQLProvider)
- SharedStoreBackupActivation.FailbackChecker::run on a failed awaitLiveStatus
Expose method to return current mappings of groups to consumers
Expose methods to reset (remove) specific group mapping from groupID to Consumer
Expose methods to reset (remove) all group mappings
messageAcknowledged plugin callback methods
Knowing the consumer that expired or acked a message (if available) is
useful, and right now a message reference only contains a consumer ID,
which by itself is not unique, so the actual consumer needs to be passed
When finding out if a connector belongs to a target node, it compares
the whole parameter map, which is not necessary. Also, the best place to
understand the connector is to delegate to the corresponding
remoting connection, which understands it (e.g. INVMConnection knows
whether the connector belongs to a target node by checking its
serverID only; the Netty ones only need to match host and port,
understanding that localhost and 127.0.0.1 are the same thing).
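A minimal sketch of delegating "does this connector point at my node?" to the transport-specific connection, as described above; the interface, class names and parameter keys are illustrative, not the broker's actual types.

```java
import java.util.Map;

interface RemotingConnectionSketch {
   boolean isSameTarget(Map<String, Object> connectorParams);
}

class InVMConnectionSketch implements RemotingConnectionSketch {
   private final int serverID;

   InVMConnectionSketch(int serverID) {
      this.serverID = serverID;
   }

   @Override
   public boolean isSameTarget(Map<String, Object> connectorParams) {
      // in-VM only needs to compare the serverID parameter
      Object id = connectorParams.get("serverId");
      return id != null && serverID == Integer.parseInt(id.toString());
   }
}

class NettyConnectionSketch implements RemotingConnectionSketch {
   private final String host;
   private final int port;

   NettyConnectionSketch(String host, int port) {
      this.host = host;
      this.port = port;
   }

   @Override
   public boolean isSameTarget(Map<String, Object> connectorParams) {
      // Netty only needs host and port; localhost and 127.0.0.1 are equivalent
      String targetHost = normalize(String.valueOf(connectorParams.get("host")));
      int targetPort = Integer.parseInt(String.valueOf(connectorParams.getOrDefault("port", "61616")));
      return normalize(host).equals(targetHost) && port == targetPort;
   }

   private static String normalize(String h) {
      return "localhost".equalsIgnoreCase(h) ? "127.0.0.1" : h;
   }
}
```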
The queue metrics were being decremented improperly on iteration
over the cancelled scheduled messages because the fromMessageReferences flag was not
set to false. Setting the flag to false skips the metrics update,
which is what we want, as the scheduled messages were never added to the
message references in the first place, so the metrics don't need updating.
When creating a temp destination with auto-create-address set to false, the
broker throws an error and refuses to create it. This doesn't conform to the
normal use-case (like the AMQP dynamic flag) where the temp destination should
be allowed even if auto-create-address is false.
There was logic to validate whether member is null,
which seemed a bit weird considering the else branch would throw an NPE.
Fixing it proactively based on Coverity scan findings.
The cluster connection bridge has a TopologyListener and connects to a new node
each time it receives a nodeUp() event. A check is needed here to make
sure that the cluster bridge only connects to its target node and its backups
(a sketch of the check follows below).
This issue shows up when you run LiveToLiveFailoverTest.testConsumerTransacted.
This commit also improves BackupSyncJournalTest so that it runs more
stably.
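A minimal sketch of the guard described above, with illustrative names: the cluster bridge ignores nodeUp() events for any node other than its own target (or that target's backup, identified here simply by sharing the target's node ID).

```java
class BridgeTopologyListenerSketch {
   private final String targetNodeID;

   BridgeTopologyListenerSketch(String targetNodeID) {
      this.targetNodeID = targetNodeID;
   }

   void nodeUp(String nodeID) {
      // not our target node: do not reconnect the bridge
      if (!targetNodeID.equals(nodeID)) {
         return;
      }
      reconnect();
   }

   private void reconnect() {
      // placeholder for connecting to the (possibly failed-over) target
   }
}
```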
It includes:
- Message References: no longer uses boxed primitives and AtomicInteger
- Node: intrusive nodes no longer need a reference field holding itself
- RefCountMessage: no longer uses AtomicInteger, but AtomicIntegerFieldUpdater
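A minimal sketch of the RefCountMessage change in the last item above: a volatile int plus a static AtomicIntegerFieldUpdater replaces a per-instance AtomicInteger, saving one object allocation per message reference. Class and method names are illustrative.

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

class RefCountSketch {
   // one shared updater instead of one AtomicInteger per instance
   private static final AtomicIntegerFieldUpdater<RefCountSketch> REF_COUNT_UPDATER =
      AtomicIntegerFieldUpdater.newUpdater(RefCountSketch.class, "refCount");

   private volatile int refCount;

   int incrementRefCount() {
      return REF_COUNT_UPDATER.incrementAndGet(this);
   }

   int decrementRefCount() {
      return REF_COUNT_UPDATER.decrementAndGet(this);
   }
}
```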
It allows a user to customize the max allowed distance between system and DB time,
improving HA reliability by shutting down the broker when the misalignment
exceeds the configured limit.
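A minimal sketch of the misalignment check described above; the threshold handling and names are illustrative, not the broker's actual configuration API.

```java
class ClockSkewCheckSketch {
   private final long maxAllowedMillis;

   ClockSkewCheckSketch(long maxAllowedMillis) {
      this.maxAllowedMillis = maxAllowedMillis;
   }

   // returns true when the broker should shut down to protect HA correctness
   boolean exceedsLimit(long systemTimeMillis, long dbTimeMillis) {
      return Math.abs(systemTimeMillis - dbTimeMillis) > maxAllowedMillis;
   }
}
```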
In some environments it is not allowed to create a schema
by the application itself. With this change the AbstractJDBCDriver
now tests if an existing table is empty and executes further
statements in the same way as if the table does not exist.
It forces the use of InVMNodeManager when no HA option is selected with JDBC persistence, and includes checks that the only valid JDBC HA options are SHARED_STORE_MASTER and SHARED_STORE_SLAVE.
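A minimal sketch of the selection and validation described above; the enum values mirror the commit text, but the decision logic shown is illustrative, not the actual activation code.

```java
class NodeManagerSelectionSketch {

   enum HAPolicy { NONE, SHARED_STORE_MASTER, SHARED_STORE_SLAVE, REPLICATION }

   static String chooseNodeManager(boolean jdbcPersistence, HAPolicy policy) {
      if (!jdbcPersistence) {
         // file-based persistence keeps using the file lock based manager
         return "FileLockNodeManager";
      }
      switch (policy) {
         case NONE:
            // no HA selected with JDBC persistence: force the in-VM manager
            return "InVMNodeManager";
         case SHARED_STORE_MASTER:
         case SHARED_STORE_SLAVE:
            // the only valid JDBC HA options
            return "JdbcNodeManager";
         default:
            throw new IllegalStateException("Unsupported HA option with JDBC persistence: " + policy);
      }
   }
}
```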