From a6a4d1bed5005737a04554e0df550fc7a4a8f52d Mon Sep 17 00:00:00 2001 From: Andy Taylor Date: Wed, 21 Jan 2015 18:27:19 +0000 Subject: [PATCH] documentation updates --- docs/user-manual/en/diverts.md | 4 +- docs/user-manual/en/embedding-activemq.md | 3 +- docs/user-manual/en/filter-expressions.md | 6 +- docs/user-manual/en/flow-control.md | 2 +- docs/user-manual/en/ha.md | 244 +++++++++++++++--- .../user-manual/en/intercepting-operations.md | 2 +- docs/user-manual/en/jms-bridge.md | 138 +--------- 7 files changed, 221 insertions(+), 178 deletions(-) diff --git a/docs/user-manual/en/diverts.md b/docs/user-manual/en/diverts.md index eddfba3375..98c0a59867 100644 --- a/docs/user-manual/en/diverts.md +++ b/docs/user-manual/en/diverts.md @@ -37,8 +37,8 @@ geographically distributed servers, creating your global messaging mesh. Diverts are defined as xml in the `activemq-configuration.xml` file. There can be zero or more diverts in the file. -Please see ? for a full working example showing you how to configure and -use diverts. +Please see the examples for a full working example showing you how to +configure and use diverts. Let's take a look at some divert examples: diff --git a/docs/user-manual/en/embedding-activemq.md b/docs/user-manual/en/embedding-activemq.md index 67e1a98f5d..0d12f5bec3 100644 --- a/docs/user-manual/en/embedding-activemq.md +++ b/docs/user-manual/en/embedding-activemq.md @@ -35,6 +35,7 @@ import org.apache.activemq.core.server.embedded.EmbeddedActiveMQ; ... EmbeddedActiveMQ embedded = new EmbeddedActiveMQ(); + embedded.start(); ClientSessionFactory nettyFactory = ActiveMQClient.createClientSessionFactory( @@ -182,7 +183,7 @@ jmsServer.setJmsConfiguration(jmsConfig); jmsServer.start(); ``` -Please see ? for an example which shows how to setup and run ActiveMQ +Please see the examples for an example which shows how to setup and run ActiveMQ embedded with JMS. 
## Dependency Frameworks diff --git a/docs/user-manual/en/filter-expressions.md b/docs/user-manual/en/filter-expressions.md index 28d4d4d66e..332658430e 100644 --- a/docs/user-manual/en/filter-expressions.md +++ b/docs/user-manual/en/filter-expressions.md @@ -10,13 +10,13 @@ please the JMS javadoc for Filter expressions are used in several places in ActiveMQ -- Predefined Queues. When pre-defining a queue, either in - `activemq-configuration.xml` or `activemq-jms.xml` a filter +- Predefined Queues. When pre-defining a queue, in + `activemq-configuration.xml` in either the core or jms configuration a filter expression can be defined for a queue. Only messages that match the filter expression will enter the queue. - Core bridges can be defined with an optional filter expression, only - matching messages will be bridged (see [Core Bridges]9core-bridges.md)). + matching messages will be bridged (see [Core Bridges](core-bridges.md)). - Diverts can be defined with an optional filter expression, only matching messages will be diverted (see [Diverts](diverts.md)). diff --git a/docs/user-manual/en/flow-control.md b/docs/user-manual/en/flow-control.md index bf826bbde9..46ce823d3b 100644 --- a/docs/user-manual/en/flow-control.md +++ b/docs/user-manual/en/flow-control.md @@ -109,7 +109,7 @@ If the connection factory is directly instantiated, the consumer window size is specified by `ActiveMQConnectionFactory.setConsumerWindowSize()` method. -Please see ? for an example which shows how to configure ActiveMQ to +Please see the examples for an example which shows how to configure ActiveMQ to prevent consumer buffering when dealing with slow consumers. ## Rate limited flow control diff --git a/docs/user-manual/en/ha.md b/docs/user-manual/en/ha.md index e9d2742f47..69b830ab12 100644 --- a/docs/user-manual/en/ha.md +++ b/docs/user-manual/en/ha.md @@ -101,7 +101,6 @@ same data directories, all data synchronization is done over the network. 
Therefore all (persistent) data received by the live server will be duplicated to the backup. -c Notice that upon start-up the backup server will first need to synchronize all existing data from the live server before becoming @@ -229,22 +228,92 @@ The backup server must be similarly configured but as a `slave` The following table lists all the `ha-policy` configuration elements for HA strategy Replication for `master`: - name Description - ------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - `check-for-live-server` Whether to check the cluster for a (live) server using our own server ID when starting up. This option is only necessary for performing 'fail-back' on replicating servers. - `cluster-name` Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured then the connector configuration of the cluster configuration with this name will be used when connecting to the cluster to discover if a live server is already running, see `check-for-live-server`. If unset then the default cluster connections configuration is used (the first one configured) - `group-name` If set, backup servers will only pair with live servers with matching group-name + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameDescription
`check-for-live-server`Whether to check the cluster for a (live) server using our own server ID + when starting up. This option is only necessary for performing 'fail-back' + on replicating servers.
`cluster-name`Name of the cluster configuration to use for replication. This setting is + only necessary if you configure multiple cluster connections. If configured then + the connector configuration of the cluster configuration with this name will be + used when connecting to the cluster to discover if a live server is already running, + see `check-for-live-server`. If unset then the default cluster connections configuration + is used (the first one configured).
`group-name`If set, backup servers will only pair with live servers with matching group-name.
The following table lists all the `ha-policy` configuration elements for HA strategy Replication for `slave`: - name Description - -------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - `cluster-name` Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured then the connector configuration of the cluster configuration with this name will be used when connecting to the cluster to discover if a live server is already running, see `check-for-live-server`. If unset then the default cluster connections configuration is used (the first one configured) - `group-name` If set, backup servers will only pair with live servers with matching group-name - `max-saved-replicated-journals-size` This specifies how many times a replicated backup server can restart after moving its files on start. Once there are this number of backup journal files the server will stop permanently after if fails back. - `allow-failback` Whether a server will automatically stop when a another places a request to take over its place. The use case is when the backup has failed over - `failback-delay` delay to wait before fail-back occurs on (failed over live's) restart + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameDescription
`cluster-name`Name of the cluster configuration to use for replication. + This setting is only necessary if you configure multiple cluster + connections. If configured then the connector configuration of + the cluster configuration with this name will be used when + connecting to the cluster to discover if a live server is already + running, see `check-for-live-server`. If unset then the default + cluster connections configuration is used (the first one configured).
`group-name`If set, backup servers will only pair with live servers with matching group-name.
`max-saved-replicated-journals-size`This specifies how many times a replicated backup server + can restart after moving its files on start. Once this number of + backup journal files has been reached the server will stop + permanently after it fails back.
`allow-failback`Whether a server will automatically stop when another server places a + request to take over its place. The use case is when the backup has + failed over.
`failback-delay`The delay to wait before fail-back occurs when the (failed-over) live server restarts.
### Shared Store @@ -410,19 +479,79 @@ automatically by setting the following property in the The following table lists all the `ha-policy` configuration elements for HA strategy shared store for `master`: - name Description - ------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - `failback-delay` If a backup server is detected as being live, via the lock file, then the live server will wait announce itself as a backup and wait this amount of time (in ms) before starting as a live - `failover-on-server-shutdown` If set to true then when this server is stopped normally the backup will become live assuming failover. If false then the backup server will remain passive. Note that if false you want failover to occur the you can use the the management API as explained at [Management](management.md) + + + + + + + + + + + + + + + + + + + + + +
NameDescription
`failback-delay`If a backup server is detected as being live, + via the lock file, then the live server will announce + itself as a backup and wait this amount of time (in ms) + before starting as live.
`failover-on-server-shutdown`If set to true then when this server is stopped + normally the backup will become live, assuming failover. + If false then the backup server will remain passive. + Note that if this is false and you want failover to occur, + you can use the management API as explained at [Management](management.md).
The following table lists all the `ha-policy` configuration elements for HA strategy Shared Store for `slave`: - name Description - ------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - `failover-on-server-shutdown` In the case of a backup that has become live. then when set to true then when this server is stopped normally the backup will become liveassuming failover. If false then the backup server will remain passive. Note that if false you want failover to occur the you can use the the management API as explained at [Management](management.md) - `allow-failback` Whether a server will automatically stop when a another places a request to take over its place. The use case is when the backup has failed over. - `failback-delay` After failover and the slave has become live, this is set on the new live server. When starting If a backup server is detected as being live, via the lock file, then the live server will wait announce itself as a backup and wait this amount of time (in ms) before starting as a live, however this is unlikely since this backup has just stopped anyway. It is also used as the delay after failback before this backup will restart (if `allow-failback` is set to true. + + + + + + + + + + + + + + + + + + + + + + + + + +
NameDescription
`failover-on-server-shutdown`In the case of a backup that has become live: if set + to true then when this server is stopped normally the + backup will become live, assuming failover. If false then + the backup server will remain passive. Note that if this is + false and you want failover to occur, you can use the + management API as explained at [Management](management.md).
`allow-failback`Whether a server will automatically stop when another server + places a request to take over its place. The use case is + when the backup has failed over.
`failback-delay`After failover, when the slave has become live, this is + set on the new live server. When starting, if a backup server + is detected as being live, via the lock file, then the live + server will announce itself as a backup and wait this + amount of time (in ms) before starting as live; however + this is unlikely since this backup has just stopped anyway. + It is also used as the delay after failback before this backup + will restart (if `allow-failback` is set to true).
#### Colocated Backup Servers @@ -500,15 +629,42 @@ server will notify the target server of which directories to use. If replication is configured then directories will be inherited from the creating server but have the new backups name appended. -The following table lists all the `ha-policy` configuration elements: +The following table lists all the `ha-policy` configuration elements for colocated policy: - name Description - --------------------------------- --------------------------------------------------------------------------------------- - `request-backup` If true then the server will request a backup on another node - `backup-request-retries` How many times the live server will try to request a backup, -1 means for ever. - `backup-request-retry-interval` How long to wait for retries between attempts to request a backup server. - `max-backups` Whether or not this live server will accept backup requests from other live servers. - `backup-port-offset` The offset to use for the Connectors and Acceptors when creating a new backup server. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameDescription
`request-backup`If true then the server will request a backup on another node
`backup-request-retries`How many times the live server will try to request a backup, -1 means forever.
`backup-request-retry-interval`How long to wait for retries between attempts to request a backup server.
`max-backups`How many backups a live server can create
`backup-port-offset`The offset to use for the Connectors and Acceptors when creating a new backup server.
### Scaling Down @@ -846,12 +1002,30 @@ happened by either the `failedOver` flag passed in on the error code on the `javax.jms.JMSException` which will be one of the following: - error code Description - ------------ --------------------------------------------------------------------------- - FAILOVER Failover has occurred and we have successfully reattached or reconnected. - DISCONNECT No failover has occurred and we are disconnected. +JMSException error codes - : JMSException error codes + + + + + + + + + + + + + + + + + + + + + +
Error codeDescription
FAILOVERFailover has occurred and we have successfully reattached or reconnected.
DISCONNECTNo failover has occurred and we are disconnected.
### Application-Level Failover diff --git a/docs/user-manual/en/intercepting-operations.md b/docs/user-manual/en/intercepting-operations.md index 889129940e..8b1707047a 100644 --- a/docs/user-manual/en/intercepting-operations.md +++ b/docs/user-manual/en/intercepting-operations.md @@ -77,5 +77,5 @@ and invoked. ## Example -See ? for an example which shows how to use interceptors to add +See the examples for an example which shows how to use interceptors to add properties to a message on the server. diff --git a/docs/user-manual/en/jms-bridge.md b/docs/user-manual/en/jms-bridge.md index 062d097d41..421d3a48eb 100644 --- a/docs/user-manual/en/jms-bridge.md +++ b/docs/user-manual/en/jms-bridge.md @@ -46,141 +46,9 @@ JBoss Application Server and the following example shows an example of a beans file that bridges 2 destinations which are actually on the same server. - - - - - ActiveMQServer - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 5000 - - 10 - - ONCE_AND_ONLY_ONCE - - 1 - - -1 - - - - - - true - - - - - org.apache.activemq:service=JMSBridge - - - - - - - - - - - - - /ConnectionFactory - - - - - - - - - - /ConnectionFactory - - - - - - - - - - /queue/source - - - - - - - - - - /queue/target - - - - - - - - - - java.naming.factory.initial - org.jnp.interfaces.NamingContextFactory - - - java.naming.provider.url - jnp://localhost:1099 - - - java.naming.factory.url.pkgs - org.jboss.naming:org.jnp.interfaces" - - - jnp.timeout - 5000 - - - jnp.sotimeout - 5000 - - - - - - - - - +The JMS Bridge is a simple POJO so can be deployed with most frameworks, +simply instantiate the `org.apache.activemq.api.jms.bridge.impl.JMSBridgeImpl` +class and set the appropriate parameters. ## JMS Bridge Parameters
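For example, the bridge that the removed beans file configured could instead be wired up in a Spring beans file along the following lines. This is a sketch only: the bean ids, the `sourceCFF`/`targetCFF` and destination-factory bean names, and the constructor-argument order are assumptions that should be checked against the `JMSBridgeImpl` javadoc; the parameter values (retry interval 5000 ms, 10 retries, `ONCE_AND_ONLY_ONCE`, batch size 1, no batch time limit) mirror the removed jboss-beans example.

```xml
<bean id="jmsBridge" class="org.apache.activemq.api.jms.bridge.impl.JMSBridgeImpl">
  <!-- source/target ConnectionFactoryFactory and DestinationFactory beans
       would be defined elsewhere in this file; their names here are illustrative -->
  <constructor-arg ref="sourceCFF"/>
  <constructor-arg ref="targetCFF"/>
  <constructor-arg ref="sourceDestinationFactory"/>
  <constructor-arg ref="targetDestinationFactory"/>
  <constructor-arg><null/></constructor-arg>    <!-- source username -->
  <constructor-arg><null/></constructor-arg>    <!-- source password -->
  <constructor-arg><null/></constructor-arg>    <!-- target username -->
  <constructor-arg><null/></constructor-arg>    <!-- target password -->
  <constructor-arg><null/></constructor-arg>    <!-- message selector -->
  <constructor-arg value="5000"/>               <!-- failure retry interval (ms) -->
  <constructor-arg value="10"/>                 <!-- max retries, -1 = retry forever -->
  <constructor-arg value="ONCE_AND_ONLY_ONCE"/> <!-- quality-of-service mode -->
  <constructor-arg value="1"/>                  <!-- max batch size -->
  <constructor-arg value="-1"/>                 <!-- max batch time (ms), -1 = no limit -->
  <constructor-arg><null/></constructor-arg>    <!-- durable subscription name -->
  <constructor-arg><null/></constructor-arg>    <!-- client ID -->
  <constructor-arg value="true"/>               <!-- add messageID in header -->
</bean>
```

Since the bridge is a plain POJO, the same wiring can be done in code by instantiating `JMSBridgeImpl` with equivalent parameters and calling `start()` on it (and `stop()` on shutdown).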