documentation updates

This commit is contained in:
Andy Taylor 2015-01-21 18:27:19 +00:00
parent a2a5b35bb7
commit a6a4d1bed5
7 changed files with 221 additions and 178 deletions


@ -37,8 +37,8 @@ geographically distributed servers, creating your global messaging mesh.
Diverts are defined as xml in the `activemq-configuration.xml` file.
There can be zero or more diverts in the file.
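As a sketch, a divert definition might look like the following (the divert name and addresses here are illustrative, not taken from the shipped examples):

```xml
<!-- Illustrative divert: copies messages arriving at one address
     on to a forwarding address. Names below are hypothetical. -->
<divert name="prices-divert">
   <address>jms.topic.priceUpdates</address>
   <forwarding-address>jms.queue.priceForwarding</forwarding-address>
   <exclusive>true</exclusive>
</divert>
```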
Please see ? for a full working example showing you how to configure and
use diverts.
Please see the examples for a full working example showing you how to
configure and use diverts.
Let's take a look at some divert examples:


@ -35,6 +35,7 @@ import org.apache.activemq.core.server.embedded.EmbeddedActiveMQ;
...
EmbeddedActiveMQ embedded = new EmbeddedActiveMQ();
embedded.start();
ClientSessionFactory nettyFactory = ActiveMQClient.createClientSessionFactory(
@ -182,7 +183,7 @@ jmsServer.setJmsConfiguration(jmsConfig);
jmsServer.start();
```
Please see ? for an example which shows how to setup and run ActiveMQ
Please see the examples for an example which shows how to setup and run ActiveMQ
embedded with JMS.
## Dependency Frameworks


@ -10,13 +10,13 @@ please see the JMS javadoc for
Filter expressions are used in several places in ActiveMQ:
- Predefined Queues. When pre-defining a queue, either in
`activemq-configuration.xml` or `activemq-jms.xml` a filter
- Predefined Queues. When pre-defining a queue, in
`activemq-configuration.xml` in either the core or jms configuration a filter
expression can be defined for a queue. Only messages that match the
filter expression will enter the queue.
- Core bridges can be defined with an optional filter expression, only
matching messages will be bridged (see [Core Bridges]9core-bridges.md)).
matching messages will be bridged (see [Core Bridges](core-bridges.md)).
- Diverts can be defined with an optional filter expression, only
matching messages will be diverted (see [Diverts](diverts.md)).
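For instance, a pre-defined core queue with a filter expression might be sketched as follows (the queue name and selector are hypothetical):

```xml
<!-- Hypothetical pre-defined queue in activemq-configuration.xml:
     only messages whose `importance` property equals 'HIGH' enter it. -->
<queues>
   <queue name="jms.queue.importantQueue">
      <address>jms.queue.importantQueue</address>
      <filter string="importance = 'HIGH'"/>
   </queue>
</queues>
```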


@ -109,7 +109,7 @@ If the connection factory is directly instantiated, the consumer window
size is specified by `ActiveMQConnectionFactory.setConsumerWindowSize()`
method.
Please see ? for an example which shows how to configure ActiveMQ to
Please see the examples for an example which shows how to configure ActiveMQ to
prevent consumer buffering when dealing with slow consumers.
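As a sketch, setting `consumer-window-size` to 0 on a JMS connection factory disables client-side buffering entirely; the connector and entry names below are assumptions for illustration:

```xml
<!-- Hypothetical connection factory entry: a consumer-window-size
     of 0 disables client-side message buffering for slow consumers. -->
<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty-connector"/>
   </connectors>
   <entries>
      <entry name="ConnectionFactory"/>
   </entries>
   <consumer-window-size>0</consumer-window-size>
</connection-factory>
```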
## Rate limited flow control


@ -101,7 +101,6 @@ same data directories, all data synchronization is done over the
network. Therefore all (persistent) data received by the live server
will be duplicated to the backup.
Notice that upon start-up the backup server will first need to
synchronize all existing data from the live server before becoming
@ -229,22 +228,92 @@ The backup server must be similarly configured but as a `slave`
The following table lists all the `ha-policy` configuration elements for
HA strategy Replication for `master`:
name Description
------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
`check-for-live-server` Whether to check the cluster for a (live) server using our own server ID when starting up. This option is only necessary for performing 'fail-back' on replicating servers.
`cluster-name` Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured then the connector configuration of the cluster configuration with this name will be used when connecting to the cluster to discover if a live server is already running, see `check-for-live-server`. If unset then the default cluster connections configuration is used (the first one configured)
`group-name` If set, backup servers will only pair with live servers with matching group-name
<table summary="HA Replication Master Policy" border="1">
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>`check-for-live-server`</td>
<td>Whether to check the cluster for a (live) server using our own server ID
when starting up. This option is only necessary for performing 'fail-back'
on replicating servers.</td>
</tr>
<tr>
<td>`cluster-name`</td>
<td>Name of the cluster configuration to use for replication. This setting is
only necessary if you configure multiple cluster connections. If configured then
the connector configuration of the cluster configuration with this name will be
used when connecting to the cluster to discover if a live server is already running,
see `check-for-live-server`. If unset then the default cluster connections configuration
is used (the first one configured).</td>
</tr>
<tr>
<td>`group-name`</td>
<td>If set, backup servers will only pair with live servers with matching group-name.</td>
</tr>
</tbody>
</table>
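Putting these elements together, a `master` replication policy might be sketched as follows (the values are examples only, not required settings):

```xml
<!-- Illustrative master replication policy; values are examples only. -->
<ha-policy>
   <replication>
      <master>
         <check-for-live-server>true</check-for-live-server>
         <group-name>purple</group-name>
      </master>
   </replication>
</ha-policy>
```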
The following table lists all the `ha-policy` configuration elements for
HA strategy Replication for `slave`:
name Description
-------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
`cluster-name` Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured then the connector configuration of the cluster configuration with this name will be used when connecting to the cluster to discover if a live server is already running, see `check-for-live-server`. If unset then the default cluster connections configuration is used (the first one configured)
`group-name` If set, backup servers will only pair with live servers with matching group-name
`max-saved-replicated-journals-size` This specifies how many times a replicated backup server can restart after moving its files on start. Once there are this number of backup journal files the server will stop permanently after if fails back.
`allow-failback` Whether a server will automatically stop when a another places a request to take over its place. The use case is when the backup has failed over
`failback-delay` delay to wait before fail-back occurs on (failed over live's) restart
<table summary="HA Replication Slave Policy" border="1">
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>`cluster-name`</td>
<td>Name of the cluster configuration to use for replication.
This setting is only necessary if you configure multiple cluster
connections. If configured then the connector configuration of
the cluster configuration with this name will be used when
connecting to the cluster to discover if a live server is already
running, see `check-for-live-server`. If unset then the default
cluster connections configuration is used (the first one configured).</td>
</tr>
<tr>
<td>`group-name`</td>
<td>If set, backup servers will only pair with live servers with matching group-name.</td>
</tr>
<tr>
<td>`max-saved-replicated-journals-size`</td>
<td>This specifies how many times a replicated backup server
can restart after moving its files on start. Once there are
this many backup journal files the server will stop permanently
after it fails back.</td>
</tr>
<tr>
<td>`allow-failback`</td>
<td>Whether a server will automatically stop when another server
makes a request to take over its place. The use case is when the
backup has failed over.</td>
</tr>
<tr>
<td>`failback-delay`</td>
<td>The delay to wait before fail-back occurs on the (failed-over) live server's restart.</td>
</tr>
</tbody>
</table>
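Correspondingly, a `slave` replication policy might be sketched as follows (the values are examples only):

```xml
<!-- Illustrative slave replication policy; values are examples only. -->
<ha-policy>
   <replication>
      <slave>
         <group-name>purple</group-name>
         <allow-failback>true</allow-failback>
         <max-saved-replicated-journals-size>2</max-saved-replicated-journals-size>
         <failback-delay>5000</failback-delay>
      </slave>
   </replication>
</ha-policy>
```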
### Shared Store
@ -410,19 +479,79 @@ automatically by setting the following property in the
The following table lists all the `ha-policy` configuration elements for
HA strategy shared store for `master`:
name Description
------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
`failback-delay` If a backup server is detected as being live, via the lock file, then the live server will wait announce itself as a backup and wait this amount of time (in ms) before starting as a live
`failover-on-server-shutdown` If set to true then when this server is stopped normally the backup will become live assuming failover. If false then the backup server will remain passive. Note that if false you want failover to occur the you can use the the management API as explained at [Management](management.md)
<table summary="HA Shared Store Master Policy" border="1">
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>`failback-delay`</td>
<td>If a backup server is detected as being live,
via the lock file, then the live server will
announce itself as a backup and wait this amount
of time (in ms) before starting as live.</td>
</tr>
<tr>
<td>`failover-on-server-shutdown`</td>
<td>If set to true then when this server is stopped
normally the backup will become live, assuming failover.
If false then the backup server will remain passive.
Note that if this is false and you want failover to occur
then you can use the management API as explained at [Management](management.md).</td>
</tr>
</tbody>
</table>
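As a sketch, a shared-store `master` policy might look like this (values are examples only):

```xml
<!-- Illustrative shared-store master policy; values are examples only. -->
<ha-policy>
   <shared-store>
      <master>
         <failback-delay>5000</failback-delay>
         <failover-on-server-shutdown>true</failover-on-server-shutdown>
      </master>
   </shared-store>
</ha-policy>
```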
The following table lists all the `ha-policy` configuration elements for
HA strategy Shared Store for `slave`:
name Description
------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
`failover-on-server-shutdown` In the case of a backup that has become live. then when set to true then when this server is stopped normally the backup will become liveassuming failover. If false then the backup server will remain passive. Note that if false you want failover to occur the you can use the the management API as explained at [Management](management.md)
`allow-failback` Whether a server will automatically stop when a another places a request to take over its place. The use case is when the backup has failed over.
`failback-delay` After failover and the slave has become live, this is set on the new live server. When starting If a backup server is detected as being live, via the lock file, then the live server will wait announce itself as a backup and wait this amount of time (in ms) before starting as a live, however this is unlikely since this backup has just stopped anyway. It is also used as the delay after failback before this backup will restart (if `allow-failback` is set to true.
<table summary="HA Shared Store Slave Policy" border="1">
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>`failover-on-server-shutdown`</td>
<td>In the case of a backup that has become live, if set
to true then when this server is stopped normally the
backup will become live, assuming failover. If false then
the backup server will remain passive. Note that if this is
false and you want failover to occur then you can use the
management API as explained at [Management](management.md).</td>
</tr>
<tr>
<td>`allow-failback`</td>
<td>Whether a server will automatically stop when another
server makes a request to take over its place. The use case
is when the backup has failed over.</td>
</tr>
<tr>
<td>`failback-delay`</td>
<td>After failover, when the slave has become live, this is
set on the new live server. On start-up, if a backup server
is detected as being live via the lock file, then the live
server will announce itself as a backup and wait this
amount of time (in ms) before starting as live; however
this is unlikely since this backup has just stopped anyway.
It is also used as the delay after fail-back before this backup
will restart (if `allow-failback` is set to true).</td>
</tr>
</tbody>
</table>
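Correspondingly, a shared-store `slave` policy might be sketched as follows (values are examples only):

```xml
<!-- Illustrative shared-store slave policy; values are examples only. -->
<ha-policy>
   <shared-store>
      <slave>
         <allow-failback>true</allow-failback>
         <failback-delay>5000</failback-delay>
         <failover-on-server-shutdown>true</failover-on-server-shutdown>
      </slave>
   </shared-store>
</ha-policy>
```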
#### Colocated Backup Servers
@ -500,15 +629,42 @@ server will notify the target server of which directories to use. If
replication is configured then directories will be inherited from the
creating server but have the new backups name appended.
The following table lists all the `ha-policy` configuration elements:
The following table lists all the `ha-policy` configuration elements for colocated policy:
name Description
--------------------------------- ---------------------------------------------------------------------------------------
`request-backup` If true then the server will request a backup on another node
`backup-request-retries` How many times the live server will try to request a backup, -1 means for ever.
`backup-request-retry-interval` How long to wait for retries between attempts to request a backup server.
`max-backups` Whether or not this live server will accept backup requests from other live servers.
`backup-port-offset` The offset to use for the Connectors and Acceptors when creating a new backup server.
<table summary="HA Replication Colocation Policy" border="1">
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>`request-backup`</td>
<td>If true then the server will request a backup on another node</td>
</tr>
<tr>
<td>`backup-request-retries`</td>
<td>How many times the live server will try to request a backup; -1 means forever.</td>
</tr>
<tr>
<td>`backup-request-retry-interval`</td>
<td>How long to wait for retries between attempts to request a backup server.</td>
</tr>
<tr>
<td>`max-backups`</td>
<td>How many backups a live server can create</td>
</tr>
<tr>
<td>`backup-port-offset`</td>
<td>The offset to use for the Connectors and Acceptors when creating a new backup server.</td>
</tr>
</tbody>
</table>
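Putting the colocated elements together, a sketch of a colocated replication policy might look like this (values are examples only):

```xml
<!-- Illustrative colocated policy; values are examples only. -->
<ha-policy>
   <replication>
      <colocated>
         <request-backup>true</request-backup>
         <backup-request-retries>-1</backup-request-retries>
         <backup-request-retry-interval>5000</backup-request-retry-interval>
         <max-backups>1</max-backups>
         <backup-port-offset>100</backup-port-offset>
      </colocated>
   </replication>
</ha-policy>
```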
### Scaling Down
@ -846,12 +1002,30 @@ happened by either the `failedOver` flag passed in on the
error code on the `javax.jms.JMSException` which will be one of the
following:
error code Description
------------ ---------------------------------------------------------------------------
FAILOVER Failover has occurred and we have successfully reattached or reconnected.
DISCONNECT No failover has occurred and we are disconnected.
JMSException error codes
<table summary="JMSException error codes" border="1">
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Error code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>FAILOVER</td>
<td>Failover has occurred and we have successfully reattached or reconnected.</td>
</tr>
<tr>
<td>DISCONNECT</td>
<td>No failover has occurred and we are disconnected.</td>
</tr>
</tbody>
</table>
### Application-Level Failover


@ -77,5 +77,5 @@ and invoked.
## Example
See ? for an example which shows how to use interceptors to add
See the examples for an example which shows how to use interceptors to add
properties to a message on the server.
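As a sketch, a server-side interceptor is registered in `activemq-configuration.xml`; the element name is assumed from this codebase's configuration schema, and the class name below is hypothetical:

```xml
<!-- Hypothetical interceptor registration: MyInterceptor implements
     the Interceptor interface and must be on the server's classpath. -->
<remoting-incoming-interceptors>
   <class-name>org.myorg.interceptors.MyInterceptor</class-name>
</remoting-incoming-interceptors>
```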


@ -46,141 +46,9 @@ JBoss Application Server and the following shows an example of a
beans file that bridges two destinations which are actually on the same
server.
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="JMSBridge" class="org.apache.activemq.api.jms.bridge.impl.JMSBridgeImpl">
<!-- ActiveMQ must be started before the bridge -->
<depends>ActiveMQServer</depends>
<constructor>
<!-- Source ConnectionFactory Factory -->
<parameter>
<inject bean="SourceCFF"/>
</parameter>
<!-- Target ConnectionFactory Factory -->
<parameter>
<inject bean="TargetCFF"/>
</parameter>
<!-- Source DestinationFactory -->
<parameter>
<inject bean="SourceDestinationFactory"/>
</parameter>
<!-- Target DestinationFactory -->
<parameter>
<inject bean="TargetDestinationFactory"/>
</parameter>
<!-- Source User Name (no username here) -->
<parameter><null /></parameter>
<!-- Source Password (no password here)-->
<parameter><null /></parameter>
<!-- Target User Name (no username here)-->
<parameter><null /></parameter>
<!-- Target Password (no password here)-->
<parameter><null /></parameter>
<!-- Selector -->
<parameter><null /></parameter>
<!-- Failure Retry Interval (in ms) -->
<parameter>5000</parameter>
<!-- Max Retries -->
<parameter>10</parameter>
<!-- Quality Of Service -->
<parameter>ONCE_AND_ONLY_ONCE</parameter>
<!-- Max Batch Size -->
<parameter>1</parameter>
<!-- Max Batch Time (-1 means infinite) -->
<parameter>-1</parameter>
<!-- Subscription name (no subscription name here)-->
<parameter><null /></parameter>
<!-- Client ID (no client ID here)-->
<parameter><null /></parameter>
<!-- Add MessageID In Header -->
<parameter>true</parameter>
<!-- register the JMS Bridge in the AS MBeanServer -->
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>org.apache.activemq:service=JMSBridge</parameter>
</constructor>
<property name="transactionManager">
<inject bean="RealTransactionManager"/>
</property>
</bean>
<!-- SourceCFF describes the ConnectionFactory used to connect to the source destination -->
<bean name="SourceCFF"
class="org.apache.activemq.api.jms.bridge.impl.JNDIConnectionFactoryFactory">
<constructor>
<parameter>
<inject bean="JNDI" />
</parameter>
<parameter>/ConnectionFactory</parameter>
</constructor>
</bean>
<!-- TargetCFF describes the ConnectionFactory used to connect to the target destination -->
<bean name="TargetCFF"
class="org.apache.activemq.api.jms.bridge.impl.JNDIConnectionFactoryFactory">
<constructor>
<parameter>
<inject bean="JNDI" />
</parameter>
<parameter>/ConnectionFactory</parameter>
</constructor>
</bean>
<!-- SourceDestinationFactory describes the Destination used as the source -->
<bean name="SourceDestinationFactory" class="org.apache.activemq.api.jms.bridge.impl.JNDIDestinationFactory">
<constructor>
<parameter>
<inject bean="JNDI" />
</parameter>
<parameter>/queue/source</parameter>
</constructor>
</bean>
<!-- TargetDestinationFactory describes the Destination used as the target -->
<bean name="TargetDestinationFactory" class="org.apache.activemq.api.jms.bridge.impl.JNDIDestinationFactory">
<constructor>
<parameter>
<inject bean="JNDI" />
</parameter>
<parameter>/queue/target</parameter>
</constructor>
</bean>
<!-- JNDI is a Hashtable containing the JNDI properties required -->
<!-- to connect to the source and target JMS resources -->
<bean name="JNDI" class="java.util.Hashtable">
<constructor class="java.util.Map">
<map class="java.util.Hashtable" keyClass="String"
valueClass="String">
<entry>
<key>java.naming.factory.initial</key>
<value>org.jnp.interfaces.NamingContextFactory</value>
</entry>
<entry>
<key>java.naming.provider.url</key>
<value>jnp://localhost:1099</value>
</entry>
<entry>
<key>java.naming.factory.url.pkgs</key>
<value>org.jboss.naming:org.jnp.interfaces</value>
</entry>
<entry>
<key>jnp.timeout</key>
<value>5000</value>
</entry>
<entry>
<key>jnp.sotimeout</key>
<value>5000</value>
</entry>
</map>
</constructor>
</bean>
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="org.jboss.mx.util.MBeanServerLocator" factoryMethod="locateJBoss"/>
</bean>
</deployment>
The JMS Bridge is a simple POJO, so it can be deployed with most frameworks:
simply instantiate the `org.apache.activemq.api.jms.bridge.impl.JMSBridgeImpl`
class and set the appropriate parameters.
## JMS Bridge Parameters