<?xml version="1.0" encoding="UTF-8"?>
<!-- ============================================================================= -->
<!-- Copyright © 2009 Red Hat, Inc. and others. -->
<!-- -->
<!-- The text of and illustrations in this document are licensed by Red Hat under -->
<!-- a Creative Commons Attribution-ShareAlike 3.0 Unported license ("CC-BY-SA"). -->
<!-- -->
<!-- An explanation of CC-BY-SA is available at -->
<!-- -->
<!-- http://creativecommons.org/licenses/by-sa/3.0/. -->
<!-- -->
<!-- In accordance with CC-BY-SA, if you distribute this document or an adaptation -->
<!-- of it, you must provide the URL for the original version. -->
<!-- -->
<!-- Red Hat, as the licensor of this document, waives the right to enforce, -->
<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent -->
<!-- permitted by applicable law. -->
<!-- ============================================================================= -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
%BOOK_ENTITIES;
]>
<chapter id="persistence">
<title>Persistence</title>
<para>In this chapter we will describe how persistence works with HornetQ and how to configure
it.</para>
<para>HornetQ ships with a high-performance journal. Since HornetQ handles its own persistence,
rather than relying on a database or other third-party persistence engine, it is highly
optimised for the specific messaging use cases.</para>
<para>A HornetQ journal is an <emphasis>append-only</emphasis> journal. It consists of a set of
files on disk. Each file is pre-created to a fixed size and initially filled with padding.
As operations are performed on the server, e.g. add message, update message, delete message,
records are appended to the journal. When one journal file is full we move to the next
one.</para>
<para>Because records are only appended, i.e. added to the end of the journal, we minimise disk
head movement, i.e. we minimise random access operations, which are typically the slowest
operations on a disk.</para>
<para>Making the file size configurable means that an optimal size can be chosen, i.e. making
each file fit on a disk cylinder. Modern disk topologies are complex and we are not in
control of which cylinder(s) the file is mapped onto, so this is not an exact science. But
by minimising the number of disk cylinders the file is using, we can minimise the amount of
disk head movement, since an entire disk cylinder is accessible simply by the disk rotating
- the head does not have to move.</para>
<para>As delete records are added to the journal, HornetQ uses a sophisticated file garbage
collection algorithm which can determine whether a particular journal file is needed any
more - i.e. whether all its data has been deleted in the same or other files. If so, the
file can be reclaimed and re-used. </para>
<para>HornetQ also has a compaction algorithm which removes dead space from the journal and
compacts the data so it takes up fewer files on disk.</para>
<para>The journal also fully supports transactional operation if required, supporting both local
and XA transactions.</para>
<para>The majority of the journal is written in Java; however, we abstract out the interaction
with the actual file system to allow different pluggable implementations. HornetQ ships with
two implementations:</para>
<itemizedlist>
<listitem>
<para>Java <ulink url="http://en.wikipedia.org/wiki/New_I/O">NIO</ulink>.</para>
<para>The first implementation uses standard Java NIO to interface with the file system.
This provides extremely good performance and runs on any platform where there's a
Java 6+ runtime.</para>
</listitem>
<listitem id="aio-journal">
<para>Linux Asynchronous IO</para>
<para>The second implementation uses a thin native code wrapper to talk to the Linux
asynchronous IO library (AIO). With AIO, HornetQ will be called back when the data
has made it to disk, allowing us to avoid explicit syncs altogether and simply send
back confirmation of completion when AIO informs us that the data has been
persisted.</para>
<para>Using AIO will typically provide even better performance than using Java
NIO.</para>
<para>The AIO journal is only available when running Linux kernel 2.6 or later and after
having installed libaio (if it's not already installed). For instructions on how to
install libaio please see <xref linkend="installing-aio"/>.</para>
<para>Also, please note that AIO will only work with the following file systems: ext2,
ext3, ext4, jfs, xfs. With other file systems, e.g. NFS, it may appear to work, but
it will fall back to a slower synchronous behaviour. Don't put the journal on an NFS
share!</para>
<para>For more information on libaio please see <xref linkend="libaio"/>.</para>
<para>libaio is part of the kernel project.</para>
</listitem>
</itemizedlist>
<para>The standard HornetQ core server uses two instances of the journal:</para>
<itemizedlist id="persistence.journallist">
<listitem>
<para>Bindings journal.</para>
<para>This journal is used to store bindings-related data. That includes the set of
queues that are deployed on the server and their attributes. It also stores data
such as id sequence counters. </para>
<para>The bindings journal is always a NIO journal as it is typically low throughput
compared to the message journal.</para>
<para>The files on this journal are prefixed as <literal>hornetq-bindings</literal>.
Each file has a <literal>bindings</literal> extension. File size is <literal
>1048576</literal> bytes, and the files are located in the bindings folder.</para>
</listitem>
<listitem>
<para>JMS journal.</para>
<para>This journal instance stores all JMS-related data. This is basically any JMS
queues, topics and connection factories and any JNDI bindings for these
resources.</para>
<para>Any JMS Resources created via the management API will be persisted to this
journal. Any resources configured via configuration files will not. The JMS Journal
will only be created if JMS is being used.</para>
<para>The files on this journal are prefixed as <literal>hornetq-jms</literal>. Each
file has a <literal>jms</literal> extension. File size is <literal
>1048576</literal> bytes, and the files are located in the bindings folder.</para>
</listitem>
<listitem>
<para>Message journal.</para>
<para>This journal instance stores all message-related data, including the messages
themselves and also duplicate-id caches.</para>
<para>By default HornetQ will try and use an AIO journal. If AIO is not available, e.g.
the platform is not Linux with the correct kernel version or AIO has not been
installed, then it will automatically fall back to using Java NIO, which is available
on any Java platform.</para>
<para>The files on this journal are prefixed as <literal>hornetq-data</literal>. Each
file has a <literal>hq</literal> extension. File size is by default <literal
>10485760</literal> bytes (configurable), and the files are located in the journal
folder.</para>
</listitem>
</itemizedlist>
<para>For large messages, HornetQ persists them outside the message journal. This is discussed
in <xref linkend="large-messages"/>.</para>
<para>HornetQ can also be configured to page messages to disk in low memory situations. This is
discussed in <xref linkend="paging"/>.</para>
<para>If no persistence is required at all, HornetQ can also be configured not to persist any
data at all to storage as discussed in <xref linkend="persistence.enabled"/>.</para>
<section id="configuring.bindings.journal">
<title>Configuring the bindings journal</title>
<para>The bindings journal is configured using the following attributes in <literal
>hornetq-configuration.xml</literal>; an illustrative fragment follows the list.</para>
<itemizedlist>
<listitem>
<para><literal>bindings-directory</literal></para>
<para>This is the directory in which the bindings journal lives. The default value
is <literal>data/bindings</literal>.</para>
</listitem>
<listitem>
<para><literal>create-bindings-dir</literal></para>
<para>If this is set to <literal>true</literal> then the bindings directory will be
automatically created at the location specified in <literal
>bindings-directory</literal> if it does not already exist. The default
value is <literal>true</literal>.</para>
</listitem>
</itemizedlist>
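<para>For illustration only, a minimal sketch of the corresponding fragment of <literal
>hornetq-configuration.xml</literal>, using the default values described above (the
directory path shown is just an example):</para>
<programlisting>&lt;!-- fragment of hornetq-configuration.xml (illustrative values) -->
&lt;bindings-directory>data/bindings&lt;/bindings-directory>
&lt;create-bindings-dir>true&lt;/create-bindings-dir></programlisting>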
</section>
<section id="configuring.bindings.jms">
<title>Configuring the JMS journal</title>
<para>The JMS journal shares its configuration with the bindings journal.</para>
</section>
<section id="configuring.message.journal">
<title>Configuring the message journal</title>
<para>The message journal is configured using the following attributes in <literal
>hornetq-configuration.xml</literal>; an illustrative fragment follows the list.</para>
<itemizedlist>
<listitem id="configuring.message.journal.journal-directory">
<para><literal>journal-directory</literal></para>
<para>This is the directory in which the message journal lives. The default value is
<literal>data/journal</literal>.</para>
<para>For the best performance, we recommend the journal is located on its own
physical volume in order to minimise disk head movement. If the journal is on a
volume which is shared with other processes which might be writing other files
(e.g. bindings journal, database, or transaction coordinator) then the disk head
may well be moving rapidly between these files as it writes them, thus
drastically reducing performance.</para>
<para>When the message journal is stored on a SAN we recommend each journal instance
that is stored on the SAN is given its own LUN (logical unit).</para>
</listitem>
<listitem id="configuring.message.journal.create-journal-dir">
<para><literal>create-journal-dir</literal></para>
<para>If this is set to <literal>true</literal> then the journal directory will be
automatically created at the location specified in <literal
>journal-directory</literal> if it does not already exist. The default value
is <literal>true</literal>.</para>
</listitem>
<listitem id="configuring.message.journal.journal-type">
<para><literal>journal-type</literal></para>
<para>Valid values are <literal>NIO</literal> or <literal>ASYNCIO</literal>.</para>
<para>Choosing <literal>NIO</literal> chooses the Java NIO journal. Choosing
<literal>ASYNCIO</literal> chooses the Linux asynchronous IO journal. If you
choose <literal>ASYNCIO</literal> but are not running Linux or you do not have
libaio installed then HornetQ will detect this and automatically fall back to
using <literal>NIO</literal>.</para>
</listitem>
<listitem id="configuring.message.journal.journal-sync-transactional">
<para><literal>journal-sync-transactional</literal></para>
<para>If this is set to true then HornetQ will make sure all transaction data is
flushed to disk on transaction boundaries (commit, prepare and rollback). The
default value is <literal>true</literal>.</para>
</listitem>
<listitem id="configuring.message.journal.journal-sync-non-transactional">
<para><literal>journal-sync-non-transactional</literal></para>
<para>If this is set to true then HornetQ will make sure non-transactional message
data (sends and acknowledgements) is flushed to disk each time. The default
value for this is <literal>true</literal>.</para>
</listitem>
<listitem id="configuring.message.journal.journal-file-size">
<para><literal>journal-file-size</literal></para>
<para>The size of each journal file in bytes. The default value for this is <literal
>10485760</literal> bytes (10MiB).</para>
</listitem>
<listitem id="configuring.message.journal.journal-min-files">
<para><literal>journal-min-files</literal></para>
<para>The minimum number of files the journal will maintain. When HornetQ starts and
there is no initial message data, HornetQ will pre-create <literal
>journal-min-files</literal> number of files.</para>
<para>Creating journal files and filling them with padding is a fairly expensive
operation and we want to minimise doing this at run-time as files get filled. By
pre-creating files, as one is filled the journal can immediately resume with the
next one without pausing to create it.</para>
<para>Depending on how much data you expect your queues to contain at steady state
you should tune this number of files to match that total amount of data.</para>
</listitem>
<listitem id="configuring.message.journal.journal-max-io">
<para><literal>journal-max-io</literal></para>
<para>Write requests are queued up before being submitted to the system for
execution. This parameter controls the maximum number of write requests that can
be in the IO queue at any one time. If the queue becomes full then writes will
block until space is freed up. </para>
<para>When using NIO, this value should always be equal to <literal>1</literal>.</para>
<para>When using AIO, the default is <literal>500</literal>. The system maintains
different defaults for this parameter depending on whether the journal type is
NIO or AIO (1 for NIO, 500 for AIO).</para>
<para>There is a limit: the total max AIO can't be higher than what is configured
at the OS level (<literal>/proc/sys/fs/aio-max-nr</literal>), usually 65536.</para>
</listitem>
<listitem id="configuring.message.journal.journal-buffer-timeout">
<para><literal>journal-buffer-timeout</literal></para>
<para>Instead of flushing on every write that requires a flush, we maintain an
internal buffer, and flush the entire buffer either when it is full, or when a
timeout expires, whichever is sooner. This is used for both NIO and AIO and
allows the system to scale better with many concurrent writes that require
flushing.</para>
<para>This parameter controls the timeout at which the buffer will be flushed if it
hasn't filled already. AIO can typically cope with a higher flush rate than NIO,
so the system maintains different defaults for both NIO and AIO (the default for NIO
is 3333333 nanoseconds - i.e. 300 times per second; the default for AIO is 500000
nanoseconds - i.e. 2000 times per second).</para>
<note>
<para>By increasing the timeout, you may be able to increase system throughput
at the expense of latency; the default parameters are chosen to give a
reasonable balance between throughput and latency.</para>
</note>
</listitem>
<listitem id="configuring.message.journal.journal-buffer-size">
<para><literal>journal-buffer-size</literal></para>
<para>The size of the timed buffer on AIO. The default value is <literal
>490KiB</literal>.</para>
</listitem>
<listitem id="configuring.message.journal.journal-compact-min-files">
<para><literal>journal-compact-min-files</literal></para>
<para>The minimum number of files before we can consider compacting the journal. The
compacting algorithm won't start until you have at least <literal
>journal-compact-min-files</literal> data files on the journal.</para>
<para>The default for this parameter is <literal>10</literal>.</para>
</listitem>
<listitem id="configuring.message.journal.journal-compact-percentage">
<para><literal>journal-compact-percentage</literal></para>
<para>The threshold to start compacting. When less than this percentage of the journal is
considered live data, we start compacting. Note also that compacting won't kick
in until you have at least <literal>journal-compact-min-files</literal> data
files on the journal.</para>
<para>The default for this parameter is <literal>30</literal>.</para>
</listitem>
</itemizedlist>
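<para>The fragment below is a sketch, not a complete configuration: it sets the message
journal attributes described above, using the documented defaults where the text states
them and illustrative values elsewhere (marked in the comments).</para>
<programlisting>&lt;!-- fragment of hornetq-configuration.xml (illustrative values) -->
&lt;journal-directory>data/journal&lt;/journal-directory>
&lt;journal-type>ASYNCIO&lt;/journal-type>
&lt;journal-sync-transactional>true&lt;/journal-sync-transactional>
&lt;journal-sync-non-transactional>true&lt;/journal-sync-non-transactional>
&lt;journal-file-size>10485760&lt;/journal-file-size>        &lt;!-- 10MiB, the default -->
&lt;journal-min-files>10&lt;/journal-min-files>              &lt;!-- illustrative value -->
&lt;journal-max-io>500&lt;/journal-max-io>                   &lt;!-- AIO default -->
&lt;journal-buffer-timeout>500000&lt;/journal-buffer-timeout> &lt;!-- AIO default, nanoseconds -->
&lt;journal-compact-min-files>10&lt;/journal-compact-min-files>
&lt;journal-compact-percentage>30&lt;/journal-compact-percentage></programlisting>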
</section>
<section id="disk-write-cache">
<title>An important note on disabling disk write cache</title>
<warning>
<para>Most disks contain hardware write caches. A write cache can increase the apparent
performance of the disk because writes just go into the cache and are then lazily
written to the disk later. </para>
<para>This happens irrespective of whether you have executed an fsync() from the
operating system or correctly synced data from inside a Java program!</para>
<para>By default many systems ship with disk write cache enabled. This means that even
after syncing from the operating system there is no guarantee the data has actually
made it to disk, so if a failure occurs, critical data can be lost.</para>
<para>Some more expensive disks have non-volatile or battery-backed write caches which
won't necessarily lose data in the event of failure, but you need to test them!</para>
<para>If your disk does not have an expensive non-volatile or battery-backed cache and
it's not part of some kind of redundant array (e.g. RAID), and you value your data
integrity, you need to make sure the disk write cache is disabled.</para>
<para>Be aware that disabling the disk write cache can give you a nasty shock
performance-wise. If you've been used to using disks with write cache enabled in their
default setting, unaware that your data integrity could be compromised, then disabling
it will give you an idea of how fast your disk can perform when acting really
reliably.</para>
<para>On Linux you can inspect and/or change your disk's write cache settings using the
tools <literal>hdparm</literal> (for IDE disks) or <literal>sdparm</literal> or
<literal>sginfo</literal> (for SCSI/SATA disks), for example:</para>
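<para>As a hedged illustration only (the device name <literal>/dev/sda</literal> is an
assumption - check your own device names and the tool's man page before changing
anything), inspecting and disabling the write cache with <literal>hdparm</literal>
might look like this:</para>
<programlisting># show the drive's current write-caching setting
hdparm -W /dev/sda

# disable the drive's write cache
hdparm -W 0 /dev/sda</programlisting>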
<para>On Windows you can check / change the setting by right-clicking on the disk and
clicking Properties.</para>
</warning>
</section>
<section id="installing-aio">
<title>Installing AIO</title>
<para>The Java NIO journal gives great performance, but if you are running HornetQ using
Linux kernel 2.6 or later, we highly recommend you use the <literal>AIO</literal>
journal for the very best persistence performance.</para>
<para>It's not possible to use the AIO journal under other operating systems or earlier
versions of the Linux kernel.</para>
<para>If you are running Linux kernel 2.6 or later and don't already have <literal
>libaio</literal> installed, you can easily install it using the following
steps:</para>
<para>Using yum (e.g. on Fedora or Red Hat Enterprise Linux):
<programlisting>yum install libaio</programlisting></para>
<para>Using apt-get (e.g. on an Ubuntu or Debian system):
<programlisting>apt-get install libaio</programlisting></para>
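<para>As a quick sanity check (this step is our suggestion, not something HornetQ
requires), you can verify that the library is visible to the dynamic linker before
starting the server:</para>
<programlisting>ldconfig -p | grep libaio</programlisting>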
</section>
<section id="persistence.enabled">
<title>Configuring HornetQ for Zero Persistence</title>
<para>In some situations, zero persistence is required for a messaging system.
Configuring HornetQ to perform zero persistence is straightforward. Simply set the
parameter <literal>persistence-enabled</literal> in <literal
>hornetq-configuration.xml</literal> to <literal>false</literal>. </para>
<para>Please note that if you set this parameter to false, then <emphasis>zero</emphasis>
persistence will occur. That means no bindings data, message data, large message data,
duplicate id caches or paging data will be persisted.</para>
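<para>As a minimal sketch, the relevant line in <literal>hornetq-configuration.xml</literal>
is simply:</para>
<programlisting>&lt;persistence-enabled>false&lt;/persistence-enabled></programlisting>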
</section>
<section id="persistence.importexport">
<title>Import/Export the Journal Data</title>
<para>You may want to inspect the existing records on each of the journals used by
HornetQ, and you can use the export/import tool for that purpose. The export and import
tools are classes located in hornetq-core.jar. You can export the journal as a text file
by using this command (a worked example follows the parameter list below):</para>
<para><literal>java -cp hornetq-core.jar org.apache.activemq6.core.journal.impl.ExportJournal
&lt;JournalDirectory> &lt;JournalPrefix> &lt;FileExtension> &lt;FileSize>
&lt;FileOutput></literal></para>
<para>To import the file back as binary data on the journal (note that you also require
netty.jar):</para>
<para><literal>java -cp hornetq-core.jar:netty.jar org.apache.activemq6.core.journal.impl.ImportJournal
&lt;JournalDirectory> &lt;JournalPrefix> &lt;FileExtension> &lt;FileSize>
&lt;FileInput></literal></para>
<itemizedlist>
<listitem>
<para>JournalDirectory: Use the configured folder for your selected journal. Example:
./hornetq/data/journal</para>
</listitem>
<listitem>
<para>JournalPrefix: Use the prefix for your selected journal, as discussed
<link linkend="persistence.journallist">here</link></para>
</listitem>
<listitem>
<para>FileExtension: Use the extension for your selected journal, as discussed
<link linkend="persistence.journallist">here</link></para>
</listitem>
<listitem>
<para>FileSize: Use the size for your selected journal, as discussed <link
linkend="persistence.journallist">here</link></para>
</listitem>
<listitem>
<para>FileOutput: text file that will contain the exported data</para>
</listitem>
</itemizedlist>
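<para>Putting the parameters together, exporting the default message journal might look
like the following (the directory and the output file name
<literal>journal-export.txt</literal> are illustrative assumptions - substitute the values
from your own configuration):</para>
<programlisting>java -cp hornetq-core.jar org.apache.activemq6.core.journal.impl.ExportJournal \
./hornetq/data/journal hornetq-data hq 10485760 journal-export.txt</programlisting>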
</section>
</chapter>