<?xml version="1.0" encoding="UTF-8"?>
<!--
~ Copyright 2009 Red Hat, Inc.
~ Red Hat licenses this file to you under the Apache License, version
~ 2.0 (the "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~ http://www.apache.org/licenses/LICENSE-2.0
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
~ implied. See the License for the specific language governing
~ permissions and limitations under the License.
-->
<chapter id="clusters">
<title>HornetQ and EAP Cluster Configuration</title>
<section>
<title>Configuring Failover</title>
<para>
This chapter explains how to configure HornetQ live/backup groups within EAP. This version of HornetQ
only supports shared store for backup nodes, so a shared store is assumed throughout the rest of this chapter.
</para>
<para>There are 2 main ways to configure HornetQ servers to have a backup server:</para>
<itemizedlist>
<listitem>
<para>Colocated. This is when an EAP instance has both a live and backup(s) running.</para>
</listitem>
<listitem>
<para>Dedicated. This is when an EAP instance has either a live or backup running but never both.</para>
</listitem>
</itemizedlist>
<section>
<title>Colocated Live and Backup in a Symmetrical Cluster</title>
<para>
The colocated symmetrical topology is the most widely used topology. Here an EAP instance has
a live node running plus 1 or more backup nodes, and each backup node belongs to a live node on another EAP
instance. In a simple cluster of 2
EAP instances this means that each EAP instance has a live server and 1 backup server, as shown in the
following diagram.
</para>
<para>
<graphic fileref="images/simple-colocated.jpg" align="center" format="jpg" scale="30"/>
</para>
<para>
Here the continuous lines show the cluster before failover and the dotted lines show its state after
failover has occurred. To start with, the 2 live servers form a cluster, with each live server
connected to its local applications (via JCA); remote clients are also connected to the live servers. After
failover the backup connects to the still available live server (which happens to be in the same VM) and takes
over as a live server in the cluster. Any remote clients also fail over.
</para>
<para>
One thing to mention is that, depending on which consumers, producers, MDBs and so on are available, messages
will be distributed between the nodes to make sure that all clients are satisfied from a JMS perspective. That is,
if a producer is sending messages to a queue on a backup server that has no consumers, the messages will be
distributed to a live node elsewhere.
</para>
<para>
The following diagram is slightly more complex but shows the same configuration with 3 servers. Note that the
cluster connections have been removed to make the configuration clearer, but in reality all live servers will
form a cluster.
</para>
<para>
<graphic fileref="images/simple-colocated2.jpg" align="center" format="jpg" scale="30"/>
</para>
<para>
With more than 2 servers it is up to the user how many backups are configured per live server; you can have
as many backups as required, but usually 1 will suffice. In a 3 node topology you may have each EAP instance
configured with 2 backups, in a 4 node topology with 3 backups, and so on. The following diagram demonstrates this.
</para>
<para>
<graphic fileref="images/simple-colocated3.jpg" align="center" format="jpg" scale="30"/>
</para>
<section>
<title>Configuration</title>
<section>
<title>Live Server Configuration</title>
<para>
Let's start with the configuration of the live server, using the EAP 'all' configuration as
our starting point. Since this version only supports shared store for failover, we need to configure
this in the
<literal>hornetq-configuration.xml</literal>
file like so:
</para>
<programlisting>
&lt;shared-store>true&lt;/shared-store>
</programlisting>
<para>
Obviously this means that the location of the journal files and so on will have to be somewhere
this live server's backup can access. You may change the live server's configuration in
<literal>hornetq-configuration.xml</literal>
to something like:
</para>
<programlisting>
&lt;large-messages-directory>/media/shared/data/large-messages&lt;/large-messages-directory>
&lt;bindings-directory>/media/shared/data/bindings&lt;/bindings-directory>
&lt;journal-directory>/media/shared/data/journal&lt;/journal-directory>
&lt;paging-directory>/media/shared/data/paging&lt;/paging-directory>
</programlisting>
<para>
How these paths are configured will of course depend on your network settings or file system.
</para>
<para>
Now we need to configure how remote JMS clients will behave if the server is shut down in a normal
fashion. By default, clients will not fail over if the live server is shut down normally. Depending on
their connection factory settings, they will either fail or try to reconnect to the live server.
</para>
<para>If you want clients to fail over on a normal server shutdown then you must set the
<literal>failover-on-shutdown</literal>
flag to true in the
<literal>hornetq-configuration.xml</literal>
file like so:
</para>
<programlisting>
&lt;failover-on-shutdown>true&lt;/failover-on-shutdown>
</programlisting>
<para>Don't worry if you have this set to false (which is the default) but still want failover to occur:
simply kill the server process directly, or call
<literal>forceFailover</literal>
on the core server object via JMX or the admin console.
</para>
<para>We also need to configure the connection factories used by the client to be HA. This is done by adding
certain attributes to the connection factories in <literal>hornetq-jms.xml</literal>. Let's look at an
example:
</para>
<programlisting>
&lt;connection-factory name="NettyConnectionFactory">
   &lt;xa>true&lt;/xa>
   &lt;connectors>
      &lt;connector-ref connector-name="netty"/>
   &lt;/connectors>
   &lt;entries>
      &lt;entry name="/ConnectionFactory"/>
      &lt;entry name="/XAConnectionFactory"/>
   &lt;/entries>
   &lt;ha>true&lt;/ha>
   &lt;!-- Pause 1 second between connect attempts -->
   &lt;retry-interval>1000&lt;/retry-interval>
   &lt;!-- Multiply subsequent reconnect pauses by this multiplier. This can be used to
        implement an exponential back-off. For our purposes we just set to 1.0 so each reconnect
        pause is the same length -->
   &lt;retry-interval-multiplier>1.0&lt;/retry-interval-multiplier>
   &lt;!-- Try reconnecting an unlimited number of times (-1 means "unlimited") -->
   &lt;reconnect-attempts>-1&lt;/reconnect-attempts>
&lt;/connection-factory>
</programlisting>
<para>We have added the following attributes to the connection factory used by the client:</para>
<itemizedlist>
<listitem>
<para>
<literal>ha</literal>
- This tells the client that it supports HA and must always be true for failover
to occur
</para>
</listitem>
<listitem>
<para>
<literal>retry-interval</literal>
- this is how long the client will wait after each unsuccessful
reconnect to the server
</para>
</listitem>
<listitem>
<para>
<literal>retry-interval-multiplier</literal>
- is used to configure an exponential back-off for reconnect attempts (see the sketch
after this list)
</para>
</listitem>
<listitem>
<para>
<literal>reconnect-attempts</literal>
- how many reconnect attempts a client should make before failing;
-1 means unlimited.
</para>
</listitem>
</itemizedlist>
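<para>
    For example, if you wanted an exponential back-off rather than a fixed pause, a minimal sketch of the
    relevant connection factory entries might look like the following. This is an illustration only and
    assumes a <literal>max-retry-interval</literal> element is available to cap the pause between attempts;
    check the HornetQ user manual for the exact elements supported by your version.
</para>
<programlisting>
&lt;!-- start with a 1 second pause and double it after every failed attempt... -->
&lt;retry-interval>1000&lt;/retry-interval>
&lt;retry-interval-multiplier>2.0&lt;/retry-interval-multiplier>
&lt;!-- ...but never pause more than 30 seconds between attempts -->
&lt;max-retry-interval>30000&lt;/max-retry-interval>
&lt;!-- keep trying until the client reconnects -->
&lt;reconnect-attempts>-1&lt;/reconnect-attempts>
</programlisting>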
</section>
<section>
<title>Backup Server Configuration</title>
<para>
Now let's look at how to create and configure a backup server on the same EAP instance. This backup runs
on the same EAP instance as the live server from the previous section but is configured as the backup
for a live server running on a different EAP instance.
</para>
<para>
The first thing to mention is that the backup only needs a <literal>hornetq-jboss-beans.xml</literal>
and a <literal>hornetq-configuration.xml</literal> configuration file. This is because any JMS components
are created from the Journal when the backup server becomes live.
</para>
<para>
Firstly we need to define a new HornetQ server that EAP will deploy. We do this by creating a new
<literal>hornetq-jboss-beans.xml</literal>
configuration. We will place this under a new directory,
<literal>hornetq-backup1</literal>,
which will need to be created in the
<literal>deploy</literal>
directory, although in reality it doesn't matter where it is put. This will look like:
</para>
<programlisting>
&lt;?xml version="1.0" encoding="UTF-8"?>
&lt;deployment xmlns="urn:jboss:bean-deployer:2.0">

   &lt;!-- The core configuration -->
   &lt;bean name="BackupConfiguration" class="org.apache.activemq.core.config.impl.FileConfiguration">
      &lt;property name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup1/hornetq-configuration.xml&lt;/property>
   &lt;/bean>

   &lt;!-- The core server -->
   &lt;bean name="BackupHornetQServer" class="org.apache.activemq.core.server.impl.HornetQServerImpl">
      &lt;constructor>
         &lt;parameter>
            &lt;inject bean="BackupConfiguration"/>
         &lt;/parameter>
         &lt;parameter>
            &lt;inject bean="MBeanServer"/>
         &lt;/parameter>
         &lt;parameter>
            &lt;inject bean="HornetQSecurityManager"/>
         &lt;/parameter>
      &lt;/constructor>
      &lt;start ignored="true"/>
      &lt;stop ignored="true"/>
   &lt;/bean>

   &lt;!-- The JMS server -->
   &lt;bean name="BackupJMSServerManager" class="org.apache.activemq.jms.server.impl.JMSServerManagerImpl">
      &lt;constructor>
         &lt;parameter>
            &lt;inject bean="BackupHornetQServer"/>
         &lt;/parameter>
      &lt;/constructor>
   &lt;/bean>

&lt;/deployment>
</programlisting>
<para>
The first thing to notice is the BackupConfiguration bean. This is configured to pick up the configuration
for the server, which we will place in the same directory.
</para>
<para>
After that we just configure a new HornetQ Server and JMS server.
</para>
<note>
<para>
Notice that the names of the beans have been changed from those of the live server's configuration.
This is so there is no clash. Obviously, if you add more backup servers you will need to rename those
as well: backup1, backup2 etc.
</para>
</note>
<para>
Now let's create the server configuration in
<literal>hornetq-configuration.xml</literal>,
place it in the same directory,
<literal>deploy/hornetq-backup1</literal>,
and configure it like so:
</para>
<programlisting>
&lt;configuration xmlns="urn:hornetq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">

   &lt;jmx-domain>org.apache.activemq.backup1&lt;/jmx-domain>

   &lt;clustered>true&lt;/clustered>
   &lt;backup>true&lt;/backup>
   &lt;shared-store>true&lt;/shared-store>
   &lt;allow-failback>true&lt;/allow-failback>

   &lt;bindings-directory>/media/shared/data/hornetq-backup/bindings&lt;/bindings-directory>
   &lt;journal-directory>/media/shared/data/hornetq-backup/journal&lt;/journal-directory>
   &lt;journal-min-files>10&lt;/journal-min-files>
   &lt;large-messages-directory>/media/shared/data/hornetq-backup/largemessages&lt;/large-messages-directory>
   &lt;paging-directory>/media/shared/data/hornetq-backup/paging&lt;/paging-directory>

   &lt;connectors>
      &lt;connector name="netty-connector">
         &lt;factory-class>org.apache.activemq.core.remoting.impl.netty.NettyConnectorFactory&lt;/factory-class>
         &lt;param key="host" value="${jboss.bind.address:localhost}"/>
         &lt;param key="port" value="${hornetq.remoting.backup.netty.port:5446}"/>
      &lt;/connector>
      &lt;connector name="in-vm">
         &lt;factory-class>org.apache.activemq.core.remoting.impl.invm.InVMConnectorFactory&lt;/factory-class>
         &lt;param key="server-id" value="${hornetq.server-id:0}"/>
      &lt;/connector>
   &lt;/connectors>

   &lt;acceptors>
      &lt;acceptor name="netty">
         &lt;factory-class>org.apache.activemq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
         &lt;param key="host" value="${jboss.bind.address:localhost}"/>
         &lt;param key="port" value="${hornetq.remoting.backup.netty.port:5446}"/>
      &lt;/acceptor>
   &lt;/acceptors>

   &lt;broadcast-groups>
      &lt;broadcast-group name="bg-group1">
         &lt;group-address>231.7.7.7&lt;/group-address>
         &lt;group-port>9876&lt;/group-port>
         &lt;broadcast-period>1000&lt;/broadcast-period>
         &lt;connector-ref>netty-connector&lt;/connector-ref>
      &lt;/broadcast-group>
   &lt;/broadcast-groups>

   &lt;discovery-groups>
      &lt;discovery-group name="dg-group1">
         &lt;group-address>231.7.7.7&lt;/group-address>
         &lt;group-port>9876&lt;/group-port>
         &lt;refresh-timeout>60000&lt;/refresh-timeout>
      &lt;/discovery-group>
   &lt;/discovery-groups>

   &lt;cluster-connections>
      &lt;cluster-connection name="my-cluster">
         &lt;address>jms&lt;/address>
         &lt;connector-ref>netty-connector&lt;/connector-ref>
         &lt;discovery-group-ref discovery-group-name="dg-group1"/>
         &lt;!-- max-hops defines how messages are redistributed, the default is 1 meaning only distribute to
              directly connected nodes, to disable set to 0 -->
         &lt;!--&lt;max-hops>0&lt;/max-hops>-->
      &lt;/cluster-connection>
   &lt;/cluster-connections>

   &lt;security-settings>
      &lt;security-setting match="#">
         &lt;permission type="createNonDurableQueue" roles="guest"/>
         &lt;permission type="deleteNonDurableQueue" roles="guest"/>
         &lt;permission type="consume" roles="guest"/>
         &lt;permission type="send" roles="guest"/>
      &lt;/security-setting>
   &lt;/security-settings>

   &lt;address-settings>
      &lt;!-- default for catch all -->
      &lt;address-setting match="#">
         &lt;dead-letter-address>jms.queue.DLQ&lt;/dead-letter-address>
         &lt;expiry-address>jms.queue.ExpiryQueue&lt;/expiry-address>
         &lt;redelivery-delay>0&lt;/redelivery-delay>
         &lt;max-size-bytes>10485760&lt;/max-size-bytes>
         &lt;message-counter-history-day-limit>10&lt;/message-counter-history-day-limit>
         &lt;address-full-policy>BLOCK&lt;/address-full-policy>
      &lt;/address-setting>
   &lt;/address-settings>

&lt;/configuration>
</programlisting>
<para>
One thing you can see is that we have added a
<literal>jmx-domain</literal>
attribute. This is used when
registering objects, such as the HornetQ server and JMS server, with JMX; we change it from the default
<literal>org.apache.activemq</literal>
to avoid naming clashes with the live server.
</para>
<para>
The first important part of the configuration is to make sure that this server starts as a backup
server, not a live server, via the
<literal>backup</literal>
attribute.
</para>
<para>
After that we have the same cluster configuration as the live server, that is,
<literal>clustered</literal>
is true and
<literal>shared-store</literal>
is true. However, you can see we have added a new configuration element,
<literal>allow-failback</literal>. When this is set to true, this backup server will automatically stop
and fall back into backup mode once failover has occurred and the original live server has become
available again. If false, the user will have to stop the server manually.
</para>
<para>
Next we can see the configuration for the journal location. As in the live configuration, this must point
to the same directories as those used by this backup's live server.
</para>
<para>
Now we see the connectors configuration; the connectors defined here are needed for the following:
</para>
<itemizedlist>
<listitem>
<para>
<literal>netty-connector</literal>.
This is the connector used by clients to connect to this backup server once it has become live.
</para>
</listitem>
</itemizedlist>
<para>After that you will see the acceptors defined. This is the acceptor to which clients will reconnect.
</para>
<para>
The broadcast group, discovery group and cluster connection configurations are as per normal; details of these
can be found in the HornetQ user manual.
</para>
<note>
<para>Notice the commented-out <literal>max-hops</literal> in the cluster connection; set this to 0 if
you want to disable server-side load balancing.</para>
</note>
<para>
When the backup becomes live it will not be servicing any JEE components on this EAP instance. Instead, any
existing messages will be redistributed around the cluster and new messages will be forwarded to and from the
backup to service any remote clients it has (if it has any).
</para>
</section>
<section>
<title>Configuring multiple backups</title>
<para>
So far we have assumed that there are only 2 nodes, where each node has a backup for the other
node. However, you may want to configure a server to have multiple backup nodes. For example, you may want
3 nodes where each node has 2 backups, one for each of the other 2 live servers. For this you would simply
copy the backup configuration and make sure you do the following:
</para>
<itemizedlist>
<listitem>
<para>
Make sure that you give all the beans in the <literal>hornetq-jboss-beans.xml</literal> configuration
file a unique name, i.e. <literal>BackupConfiguration2</literal>, <literal>BackupHornetQServer2</literal>
and so on, as sketched below.
</para>
</listitem>
</itemizedlist>
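<para>
    For instance, a second backup deployed from a hypothetical <literal>deploy/hornetq-backup2</literal>
    directory might be declared as follows. This is only a sketch based on the earlier beans file; the
    directory and bean names are illustrative.
</para>
<programlisting>
&lt;!-- a second backup: note the unique bean names and configuration directory -->
&lt;bean name="BackupConfiguration2" class="org.apache.activemq.core.config.impl.FileConfiguration">
   &lt;property name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup2/hornetq-configuration.xml&lt;/property>
&lt;/bean>

&lt;bean name="BackupHornetQServer2" class="org.apache.activemq.core.server.impl.HornetQServerImpl">
   &lt;constructor>
      &lt;parameter>
         &lt;inject bean="BackupConfiguration2"/>
      &lt;/parameter>
      &lt;parameter>
         &lt;inject bean="MBeanServer"/>
      &lt;/parameter>
      &lt;parameter>
         &lt;inject bean="HornetQSecurityManager"/>
      &lt;/parameter>
   &lt;/constructor>
   &lt;start ignored="true"/>
   &lt;stop ignored="true"/>
&lt;/bean>

&lt;bean name="BackupJMSServerManager2" class="org.apache.activemq.jms.server.impl.JMSServerManagerImpl">
   &lt;constructor>
      &lt;parameter>
         &lt;inject bean="BackupHornetQServer2"/>
      &lt;/parameter>
   &lt;/constructor>
&lt;/bean>
</programlisting>
<para>
    The second backup's <literal>hornetq-configuration.xml</literal> would likewise need its own
    <literal>jmx-domain</literal> (for example <literal>org.apache.activemq.backup2</literal>), its own
    acceptor port, and journal directories shared with the live server it is backing up.
</para>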
</section>
<section>
<title>Running the shipped example</title>
<para>
EAP ships with an example configuration for this topology. Look under <literal>extras/hornetq/resources/examples/symmetric-cluster-with-backups-colocated</literal>
and follow the readme.
</para>
</section>
</section>
</section>
<section>
<title>Dedicated Live and Backup in a Symmetrical Cluster</title>
<para>
In reality the configuration for this is exactly the same as for the backup server in the previous section; the only
difference is that a backup will reside on an EAP instance of its own rather than being colocated with another live server.
Of course this means that the EAP instance is passive and not used until the backup becomes live, so it is only
really useful for pure JMS applications.
</para>
<para>The following diagram shows a possible configuration for this:</para>
<para>
<graphic fileref="images/simple-dedicated.jpg" align="center" format="jpg" scale="30"/>
</para>
<para>
Here you can see how this works with remote JMS clients. Once failover occurs, the HornetQ backup server
running within another EAP instance takes over as live.
</para>
<para>
This is fine for applications that are pure JMS and have no Java EE components such as MDBs. If you are using
such components then there are 2 ways that this can be done. The first is shown in the following diagram:
</para>
<para>
<graphic fileref="images/simple-dedicated-jca.jpg" align="center" format="jpg" scale="30"/>
</para>
<para>
Because there is no live HornetQ server running by default in the EAP instance running the backup server, it
makes no sense to host any applications in it. However, you can host applications on the server running the live
HornetQ server. If a live HornetQ server fails then remote JMS clients will fail over as previously
explained, but what happens to any messages meant for or sent from JEE components? When the backup becomes
live, messages will be distributed to and from the backup server over HornetQ cluster connections and handled
appropriately.
</para>
<para>
The second way to do this is to have both the live and backup servers remote from the EAP instance, as shown in the
following diagram.
</para>
<para>
<graphic fileref="images/simple-dedicated-jca-remote.jpg" align="center" format="jpg" scale="30"/>
</para>
<para>
Here you can see that all the applications (via JCA) will be serviced by a HornetQ server running in its own EAP instance.
</para>
<section>
<title>Configuration of dedicated Live and backup</title>
<para>
The live server configuration is exactly the same as in the previous example. The only difference of course
is that there is no backup in the EAP instance.
</para>
<para>
For the backup server the <literal>hornetq-configuration.xml</literal> is unchanged; however, since there is
no live server on this instance, we need to make sure that the <literal>hornetq-jboss-beans.xml</literal> instantiates all
the beans needed. For this, simply use the same configuration as on the live server, changing only the
location of the <literal>hornetq-configuration.xml</literal> parameter for the <literal>Configuration</literal>
bean, for example as sketched below.
</para>
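<para>
    A minimal sketch of that change, assuming the backup's configuration lives in a
    <literal>deploy/hornetq-backup</literal> directory (the directory name is illustrative):
</para>
<programlisting>
&lt;bean name="Configuration" class="org.apache.activemq.core.config.impl.FileConfiguration">
   &lt;!-- point the existing Configuration bean at the backup's configuration file -->
   &lt;property name="configurationUrl">${jboss.server.home.url}/deploy/hornetq-backup/hornetq-configuration.xml&lt;/property>
&lt;/bean>
</programlisting>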
<para>
As before there will be no <literal>hornetq-jms.xml</literal> or <literal>jms-ds.xml</literal> configuration.
</para>
<para>
If you want both HornetQ servers to be on their own dedicated servers, remote from the applications
as in the last diagram, then simply edit <literal>jms-ds.xml</literal> and change the relevant lines to the following:
</para>
<programlisting>
&lt;config-property name="ConnectorClassName" type="java.lang.String">org.apache.activemq.core.remoting.impl.netty.NettyConnectorFactory&lt;/config-property>
&lt;config-property name="ConnectionParameters" type="java.lang.String">host=127.0.0.1;port=5446&lt;/config-property>
</programlisting>
<para>
This will change the outbound JCA connector. To configure the inbound connector used by MDBs, edit the
<literal>ra.xml</literal> config file and change the following parameters:
</para>
<programlisting>
&lt;config-property>
   &lt;description>The transport type&lt;/description>
   &lt;config-property-name>ConnectorClassName&lt;/config-property-name>
   &lt;config-property-type>java.lang.String&lt;/config-property-type>
   &lt;config-property-value>org.apache.activemq.core.remoting.impl.netty.NettyConnectorFactory&lt;/config-property-value>
&lt;/config-property>
&lt;config-property>
   &lt;description>The transport configuration. These values must be in the form of key=val;key=val;&lt;/description>
   &lt;config-property-name>ConnectionParameters&lt;/config-property-name>
   &lt;config-property-type>java.lang.String&lt;/config-property-type>
   &lt;config-property-value>host=127.0.0.1;port=5446&lt;/config-property-value>
&lt;/config-property>
</programlisting>
<para>
In both cases the host and port should match your live server. If you are using discovery, then instead set the
appropriate parameters for <literal>DiscoveryAddress</literal> and <literal>DiscoveryPort</literal> to match
your configured broadcast groups, as sketched below.
</para>
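<para>
    For example, to match the broadcast group shown earlier (group address 231.7.7.7, port 9876), the
    discovery entries in <literal>ra.xml</literal> might look something like the following sketch; check the
    property names and types against the <literal>ra.xml</literal> shipped with your version.
</para>
<programlisting>
&lt;config-property>
   &lt;description>The discovery group address&lt;/description>
   &lt;config-property-name>DiscoveryAddress&lt;/config-property-name>
   &lt;config-property-type>java.lang.String&lt;/config-property-type>
   &lt;config-property-value>231.7.7.7&lt;/config-property-value>
&lt;/config-property>
&lt;config-property>
   &lt;description>The discovery group port&lt;/description>
   &lt;config-property-name>DiscoveryPort&lt;/config-property-name>
   &lt;config-property-type>java.lang.Integer&lt;/config-property-type>
   &lt;config-property-value>9876&lt;/config-property-value>
&lt;/config-property>
</programlisting>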
</section>
<section>
<title>Running the shipped example</title>
<para>
EAP ships with an example configuration for this topology. Look under
<literal>extras/hornetq/resources/examples/cluster-with-dedicated-backup</literal>
and follow the readme.
</para>
</section>
</section>
</section>
</chapter>