This example demonstrates how to colocate live and backup servers in the same VM by configuring a colocated HA policy. Colocated means that a live server can create and maintain backup servers on behalf of other requesting live servers. In this example we create a colocated shared-store server that will scale down: on failover the backup does not become live, but instead scales down its journal to the live server colocated with it in the same VM.
The example starts 2 live servers, each of which requests the other to create a backup.
The first live server is then killed; the backup in the second server recovers the journal and recreates its state in the live server it shares a VM with.
The following shows how to configure the backup. The slave is configured with <scale-down/>, which means that on failover the backup server will not fully start; instead it will just recover the journal and write it to its parent live server.
<ha-policy>
   <shared-store>
      <colocated>
         <backup-port-offset>100</backup-port-offset>
         <backup-request-retries>-1</backup-request-retries>
         <backup-request-retry-interval>2000</backup-request-retry-interval>
         <max-backups>1</max-backups>
         <request-backup>true</request-backup>
         <master/>
         <slave>
            <scale-down/>
         </slave>
      </colocated>
   </shared-store>
</ha-policy>
Notice that we don't need to specify a scale-down connector: the server will pick the most appropriate connector from the list of available connectors, which in this case is the first InVM connector. A sketch of how an explicit connector could be configured is shown below.
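If you did want to pin the scale-down target to a specific connector, the <scale-down> element can reference one explicitly. The following is only a minimal sketch; the connector name server-connector is an assumption for illustration and is not part of this example:

<slave>
   <scale-down>
      <!-- hypothetical connector name; it must match a <connector> defined elsewhere in the configuration -->
      <connectors>
         <connector-ref>server-connector</connector-ref>
      </connectors>
   </scale-down>
</slave>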
One other thing to notice is that the cluster connection has its reconnect attempts set to 5, so that it disconnects instead of endlessly trying to reconnect to a backup that no longer exists. A rough sketch of that setting follows.
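For reference, the relevant cluster-connection setting looks roughly like the sketch below; the connection and connector names shown are assumptions for illustration, not taken from this example's configuration:

<cluster-connection name="my-cluster">
   <connector-ref>netty-connector</connector-ref>
   <!-- give up after 5 attempts rather than retrying forever against a backup that no longer exists -->
   <reconnect-attempts>5</reconnect-attempts>
   <static-connectors>
      <connector-ref>other-server-connector</connector-ref>
   </static-connectors>
</cluster-connection>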
To run the example, simply type mvn verify -Pexample from this directory.
// Get an initial context for looking up JNDI on each server
initialContext1 = getContext(1);
initialContext = getContext(0);

// Look up the JMS queue and a connection factory for each server
Queue queue = (Queue) initialContext.lookup("/queue/exampleQueue");
ConnectionFactory connectionFactory = (ConnectionFactory) initialContext.lookup("/ConnectionFactory");
ConnectionFactory connectionFactory1 = (ConnectionFactory) initialContext1.lookup("/ConnectionFactory");
// Create a connection, a client-acknowledge session and a producer against each server
connection = connectionFactory.createConnection();
connection1 = connectionFactory1.createConnection();

Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
Session session1 = connection1.createSession(false, Session.CLIENT_ACKNOWLEDGE);

MessageProducer producer = session.createProducer(queue);
MessageProducer producer1 = session1.createProducer(queue);
// Send numMessages text messages to the queue on each server
for (int i = 0; i < numMessages; i++)
{
   TextMessage message = session.createTextMessage("This is text message " + i);
   producer.send(message);
   System.out.println("Sent message: " + message.getText());

   message = session1.createTextMessage("This is another text message " + i);
   producer1.send(message);
   System.out.println("Sent message: " + message.getText());
}
// Kill server 0; its journal is scaled down to the live server colocated with its backup (server 1)
killServer(0);

// Start the remaining connection and consume all the messages (both servers' worth) from server 1
connection1.start();
MessageConsumer consumer = session1.createConsumer(queue);
TextMessage message0 = null;
for (int i = 0; i < numMessages * 2; i++)
{
   message0 = (TextMessage) consumer.receive(5000);
   System.out.println("Got message: " + message0.getText());
}

// Acknowledge the last message, which acknowledges all messages received on this session
message0.acknowledge();
Remember to close your resources after use, in a finally block. Closing a JMS connection will automatically close all of its sessions, consumers, producers and browser objects.
finally
{
   if (connection != null)
   {
      connection.close();
   }
   if (initialContext != null)
   {
      initialContext.close();
   }
   if (connection1 != null)
   {
      connection1.close();
   }
   if (initialContext1 != null)
   {
      initialContext1.close();
   }
}