After a node is scaled down to a target node, the SF (store-and-forward) queue on the target node is not deleted.
Normally this is fine because the queue may be reused when the scaled-down node comes back up.
However, in a cloud environment many drainer pods can be created and then shut down in order to drain the messages to a live node (pod).
Each drainer pod has a different node ID, so over time the SF queues on the target broker node accumulate and are never reused.
Although users can delete them manually via the management API/console, it would be nice to have an option to automatically delete those SF queue/address resources after scale-down.
This PR adds a boolean configuration parameter called cleanup-sf-queue to the scale-down policy. If the parameter is "true", the broker sends a message to the target broker signalling that the SF queue is no longer needed and should be deleted. If the parameter is not defined (the default) or is "false", scale-down leaves the SF queue in place.
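A minimal broker.xml sketch of how this might look on a live-only scale-down policy (the connector name is hypothetical; cleanup-sf-queue is the parameter introduced here):

    <ha-policy>
       <live-only>
          <scale-down>
             <!-- delete the SF queue on the target broker once draining completes -->
             <cleanup-sf-queue>true</cleanup-sf-queue>
             <connectors>
                <connector-ref>target-connector</connector-ref>
             </connectors>
          </scale-down>
       </live-only>
    </ha-policy>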
The audit log allows users to log important actions, such as those performed via management APIs or clients (queue management, sending messages, etc.).
The log records who (the user, if any) is doing what (e.g. deleting a queue), with arguments (if any) and timestamps.
By default the audit log is disabled; it can easily be turned on through configuration.
Add ability to configure auto-created queues at the queue level
Add support for configuring the message count check
Add test cases
Update docs
Support using group buckets on a queue for better local group scaling
Support disabling message groups on a queue
Support rebalancing groups when a consumer is added
Add consumer priority support
Includes a refactor of consumer iteration in QueueImpl into its own logical class, needed to implement the above.
(A broker.xml sketch of the grouping options follows the list of additions below.)
Add OpenWire JMS Test - taken from ActiveMQ5
Add Core JMS Test
Add AMQP Test
Add Docs
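A broker.xml sketch of the grouping options above; the attribute names mirror the feature names and the address/queue names are hypothetical:

    <addresses>
       <address name="orders">
          <anycast>
             <!-- group-buckets hashes groups into a fixed number of buckets for
                  better local scaling; group-rebalance reassigns groups when a
                  consumer is added -->
             <queue name="orders" group-rebalance="true" group-buckets="1024"/>
          </anycast>
       </address>
    </addresses>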
MULTICAST messages forwarded by a core bridge will not be routed to any
ANYCAST queues and vice-versa. Diverts have the ability to configure how
routing-type is treated. Core bridges now support this same kind of
functionality. By default the bridge does not alter the routing-type of
forwarded messages to maintain compatibility with existing behavior.
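A hedged broker.xml sketch, assuming the bridge accepts the same routing-type values as diverts (e.g. PASS to leave the routing-type untouched, which matches the compatibility default described above); names are illustrative:

    <bridges>
       <bridge name="my-bridge">
          <queue-name>source-queue</queue-name>
          <forwarding-address>target-address</forwarding-address>
          <!-- PASS: forward messages with their routing-type unchanged -->
          <routing-type>PASS</routing-type>
          <static-connectors>
             <connector-ref>remote-connector</connector-ref>
          </static-connectors>
       </bridge>
    </bridges>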
Previously the port was always random. This caused problems with
remote JMX connections that needed to overcome firewalls. As of
this patch it's possible to make the RMI port static and whitelist
it in the firewall settings.
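A sketch of how the static port might be pinned in management.xml; the attribute names here are assumptions, and the port values are examples of what to whitelist in the firewall:

    <management-context xmlns="http://activemq.apache.org/schema">
       <!-- fix both the JMX connector port and the RMI registry port -->
       <connector connector-port="1099" rmi-registry-port="1098"/>
    </management-context>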
Add test cases
Add GroupSequence to the Message interface
Implement support for closing/resetting groups in QueueImpl
Update documentation (copied from ActiveMQ5)
Change/fix OpenWireMessageConverter to use a default of 0 if not set, for OpenWire, as per the documentation: http://activemq.apache.org/activemq-message-properties.html
Implement custom LVQ key and non-destructive queues in the broker - protocol agnostic
Make the features configurable via broker.xml, core APIs and ActiveMQServerControl
Add last-value-key test cases
Add non-destructive with LVQ test cases
Add non-destructive with expiry-delay test cases
Update documents
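A broker.xml sketch combining the two features; the address, queue, and property names are hypothetical:

    <address name="prices">
       <multicast>
          <!-- keep only the latest message per value of the "ticker" property,
               and leave messages on the queue after delivery -->
          <queue name="prices" last-value-key="ticker" non-destructive="true"/>
       </multicast>
    </address>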
Add new methods to support create and update with the new attributes
Refactor to pass queue attributes through in the client-side methods, to reduce further method changes when adding new attributes in the future and to avoid methods with endless parameters. (Note: in the future this should probably be done server-side too.)
Update existing test cases and fake impls for the new methods/attributes
- Split protocols into individual chapters
- Reorganize summary to flow more logically
- Fill in missing parameters in configuration index
- Normalize spaces for ordered and unordered lists
- Re-wrap lots of text for readability
- Fix incorrect XML snippets
- Normalize table formatting
- Improve internal links with anchors
- Update content to reflect new address model
- Resize architecture images to avoid excessive white-space
- Update some JavaDoc
- Update some schema elements
- Disambiguate AIO & ASYNCIO where necessary
- Use URIs instead of Objects in code examples
Quorum voting is used by both the live and the backup to decide what to do if a replication connection is disconnected.
Basically, the server requests each live server in the cluster to vote on whether it thinks the server it is replicating to or from is still alive.
You can also configure how long the quorum manager will wait for the quorum vote response.
Currently the value is hardcoded as 30 seconds; this change makes that 30-second wait configurable.
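A sketch of how the wait might be configured on a replicated master, assuming a quorum-vote-wait element taking seconds:

    <ha-policy>
       <replication>
          <master>
             <!-- wait up to 12 seconds (instead of the old hardcoded 30)
                  for quorum vote responses -->
             <quorum-vote-wait>12</quorum-vote-wait>
          </master>
       </replication>
    </ha-policy>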
This option was removed in JRE 10 and its presence causes an error on startup.
Artemis still does not compile on JDK 9 and up, so what I tried was to compile on JDK 1.8 and then start Artemis on 1.8 and 10.
That now works for me.
It avoids using the system clock to perform the lock logic by using the DBMS time instead.
It contains several improvements to the JDBC error handling and improved observability thanks to debug logs.
JdbcNodeManager is configured to use the same network timeout value as the journal and to validate all the timeout values related to correct HA behaviour.
It allows a user to customize the maximum allowed distance between the system and DB time, improving HA reliability by shutting down the broker when the misalignment exceeds the configured limit.
The JDBC lock acquisition timeout is no longer exposed to any user configuration and defaults to infinite, matching the behaviour of the file-based journal.
Exclusive queue documents added to detail the behaviour and how to configure it.
Also update documents for last-value queues to cover additional ways to configure them.
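For reference, a minimal broker.xml sketch of the exclusive queue behaviour those documents describe, assuming the exclusive queue attribute; names are illustrative:

    <address name="jobs">
       <anycast>
          <!-- all messages are dispatched to one active consumer at a time -->
          <queue name="jobs" exclusive="true"/>
       </anycast>
    </address>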
We provide a feature to mask passwords in the configuration files. However, passwords in bootstrap.xml (when the console is secured with HTTPS) cannot be masked. This enhancement has been opened to allow passwords in bootstrap.xml to be masked using the built-in masking feature provided by the broker. The LDAPLoginModule configuration (in login.config) has a connection password attribute that also needs this mask support. In addition, the ENC() syntax is supported for password masking, replacing the old 'mask-password' flag.
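A bootstrap.xml sketch of the ENC() syntax; the keystore path is a placeholder and the encrypted value is whatever the broker's masking tool produces:

    <web bind="https://localhost:8443" path="web"
         keyStorePath="/path/to/keystore.jks"
         keyStorePassword="ENC(5493dd76567ee5ec269d11823973462f)">
       <app url="console" war="console.war"/>
    </web>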
Apply a fix so that JNDI works when used via a Tomcat resource.
Replace the original extract of JNDIStorable taken from Qpid with ActiveMQ5's, which fits this issue better (the primary use case is users migrating from 5.x).
Refactored ActiveMQConnectionFactory to externalise it and turn it into a reference via StringRefAddr's instead of a custom RefAddr, which isn't standard.
Refactored ActiveMQDestinations similarly.
Refactored ActiveMQDestination to remove the redundant, duplicated name field and ensured the getters still behave the same.
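A sketch of the Tomcat resource use case this fixes (context.xml); the factory class name is an assumption based on the ActiveMQ 5 style JNDI support:

    <Resource name="jms/ConnectionFactory" auth="Container"
              type="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory"
              factory="org.apache.activemq.artemis.jndi.JNDIReferenceFactory"
              brokerURL="tcp://localhost:61616"/>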
OpenWire clients create consumers on advisory topics to receive notifications; as a result, internal queues are created on advisory topics. Those consumers shouldn't be exposed via the management APIs used by the console. To fix that, the broker no longer registers any queues from advisory addresses. This also refactors the code to remove OpenWire-specific constants from the AddressInfo class.
Added a new trustAll flag which will support trusting any client
keystore when doing testing against a broker. This setting should not
be used in production and is strictly for testing.
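A broker.xml sketch of how the flag might be used on an outgoing SSL connector in a test environment; the host and connector name are placeholders:

    <connectors>
       <!-- trustAll=true accepts any certificate; test use only -->
       <connector name="ssl-test">tcp://remote-broker:61616?sslEnabled=true;trustAll=true</connector>
    </connectors>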
Update Transformer to be able to handle initialization via properties (Map<String, String>)
Update Configuration to have a more specific transformer configuration type that takes properties.
Support backward compatibility.
Add AddHeadersTransformer, which is a main use case and can also act as an example.
Update Controls to expose the new property configuration
Add test cases
Update examples for the new transformer config style
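A broker.xml sketch of the property-based style on a divert, using AddHeadersTransformer; the divert/address names and the property key/value are illustrative:

    <divert name="add-headers">
       <address>source-address</address>
       <forwarding-address>target-address</forwarding-address>
       <transformer>
          <class-name>org.apache.activemq.artemis.core.server.transformer.AddHeadersTransformer</class-name>
          <!-- properties are passed to the transformer's init as a map -->
          <property key="header1" value="value1"/>
       </transformer>
    </divert>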
- it is now possible to disable the TimedBuffer
- this increases the default libaio maxAIO to 4k
- the auto-tuning on the journal will use asynchronous writes to simulate what would happen on faster disks
- if you set datasync=false on the CLI, the system will suggest mapped and disable the buffer timeout
This closes #1436.
This commit supersedes #1436, since it now disables the timed buffer through the CLI.
* add Jolokia to the initial list of ways to access the management API
* remove mentions of core queues, etc. (pre-addressing change)
* activemq.notifications can be consumed with any client
* activemq.management can be accessed only with Core or Core JMS clients
* *.management.*Control are interfaces, not classes
* remove ObjectNames from the summary of management methods. IMO that made the short paragraphs hard to read, and one can intuitively find them in jconsole, or use the Builder class to construct them in code.
* move the note about the empty filter to the queue management section
* add a code snippet for creating an ObjectName with ObjectNameBuilder
* mention the web console and bin/artemis for interactive management
* fix a few typos
Delegate to the JDK SaslServer. Allow acceptor configuration of the supported mechanisms via saslMechanisms=<a,b>, and allow the login config scope for krb5 to be configured via saslLoginConfigScope=x.
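For example, an AMQP acceptor in broker.xml restricted to GSSAPI with a custom login config scope (the port and scope name are illustrative):

    <acceptor name="amqp">tcp://0.0.0.0:5672?protocols=AMQP;saslMechanisms=GSSAPI;saslLoginConfigScope=amqp-sasl-gssapi</acceptor>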
Add extra configuration to address-settings to be able to control/enable address and queue deletion by pattern, rather than via a global toggle.
Add support in the reload logic to remove addresses and/or queues if the address matches an address setting where deletion is enabled.
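A broker.xml sketch, assuming the per-pattern settings are named config-delete-queues and config-delete-addresses with a FORCE value to allow removal on reload; the match pattern is illustrative:

    <address-settings>
       <address-setting match="jobs.#">
          <!-- allow config reload to remove queues/addresses under this pattern -->
          <config-delete-queues>FORCE</config-delete-queues>
          <config-delete-addresses>FORCE</config-delete-addresses>
       </address-setting>
    </address-settings>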
Add topic and queue cache maps in Session.
Add configuration to control whether the cache is used, defaulting to false, which keeps the existing behaviour as the default.
Added a wait-for-activation option to shared-store master HA policies.
This option is enabled by default to keep server startup behavior unchanged.
If the option is enabled, ActiveMQServer.start() with a shared-store master server will not return before the server has been activated.
If the option is disabled, start() returns after a background activation thread has been started.
The caller can use waitForActivation() to wait until the server is activated, or simply check the current activation status.
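A broker.xml sketch disabling the option so that start() returns immediately; the element name follows the option name above:

    <ha-policy>
       <shared-store>
          <master>
             <!-- return from start() before activation; use waitForActivation()
                  or the activation status to observe progress -->
             <wait-for-activation>false</wait-for-activation>
          </master>
       </shared-store>
    </ha-policy>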
The broker should support fully qualified queue names (FQQN) as well as bare queue names. This means that when clients access a queue they have two equivalent ways to do so: by queue name, or by FQQN (i.e. address::qname). Currently only receiving is supported.
Adjust the slow-consumer detection logic to use the number of messages in the queue and not just the number of messages added since the last check. This means the getRate() method now returns the rate of messages which it *could* have dispatched since the last check rather than the rate at which it received messages. This metric is more reliable and ensures the slow-consumer detection logic doesn't flag a consumer as slow unfairly, although the reliability comes at a performance cost, since getMessageCount() must lock the queue.