Clebert Suconic 2017-03-16 10:24:27 -04:00
commit 4936291aec
3 changed files with 41 additions and 3 deletions


@@ -74,13 +74,13 @@ Name | Description
[journal-compact-percentage](persistence.md) | The percentage of live data on which we consider compacting the journal. Default=30
[journal-directory](persistence.md) | the directory to store the journal files in. Default=data/journal
[journal-file-size](persistence.md) | the size (in bytes) of each journal file. Default=10485760 (10 MB)
[journal-max-io](persistence.md#configuring.message.journal.journal-max-io) | the maximum number of write requests that can be in the AIO queue at any one time. Default is 500 for AIO and 1 for NIO.
[journal-max-io](persistence.md#configuring.message.journal.journal-max-io) | the maximum number of write requests that can be in the AIO queue at any one time. Default is 500 for AIO and 1 for NIO, ignored for MAPPED.
[journal-min-files](persistence.md#configuring.message.journal.journal-min-files) | how many journal files to pre-create. Default=2
[journal-pool-files](persistence.md#configuring.message.journal.journal-pool-files) | The upper threshold of the journal file pool; -1 (default) means no limit. The system will create as many files as needed; however, when reclaiming files it will shrink back to `journal-pool-files`
[journal-sync-non-transactional](persistence.md) | if true wait for non transaction data to be synced to the journal before returning response to client. Default=true
[journal-sync-transactional](persistence.md) | if true wait for transaction data to be synchronized to the journal before returning response to client. Default=true
[journal-type](persistence.md) | the type of journal to use. Default=ASYNCIO
[journal-datasync](persistence.md) | It will use fsync on journal operations. Default=true.
[journal-datasync](persistence.md) | whether to use msync/fsync on journal operations. Default=true.
[large-messages-directory](large-messages.md "Configuring the server") | the directory to store large messages. Default=data/largemessages
[management-address](management.md "Configuring Core Management") | the name of the management address to send management messages to. It is prefixed with "jms.queue" so that JMS clients can send messages to it. Default=jms.queue.activemq.management
[management-notification-address](management.md "Configuring The Core Management Notification Address") | the name of the address that consumers bind to receive management notifications. Default=activemq.notifications
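
For illustration only, here is a minimal sketch of how a few of the journal parameters above might appear in `broker.xml`; the `<core>` wrapper and `urn:activemq:core` namespace are assumed rather than taken from this table, and the values shown are the documented defaults:

```xml
<core xmlns="urn:activemq:core">
   <!-- where the journal files are stored -->
   <journal-directory>data/journal</journal-directory>
   <!-- 10 MB per journal file, 2 files pre-created, unbounded file pool -->
   <journal-file-size>10485760</journal-file-size>
   <journal-min-files>2</journal-min-files>
   <journal-pool-files>-1</journal-pool-files>
   <!-- compact once live data drops to 30% -->
   <journal-compact-percentage>30</journal-compact-percentage>
</core>
```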


@@ -47,6 +47,11 @@ performance.
- If you're running AIO you might be able to get some better
performance by increasing `journal-max-io`. DO NOT change this
parameter if you are running NIO.
- If you are 100% sure you don't need power failure durability guarantees,
  disable `journal-datasync` and use the `NIO` or `MAPPED` journal:
  you'll get a huge performance boost on writes while keeping
  process failure durability guarantees (see the `broker.xml` sketch below).
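
As a rough sketch of that advice (assuming the usual `broker.xml` layout with an `urn:activemq:core` `<core>` element, which is not shown in this section):

```xml
<core xmlns="urn:activemq:core">
   <!-- memory mapped journal: very fast writes through the OS page cache -->
   <journal-type>MAPPED</journal-type>
   <!-- skip msync/fsync: data survives a broker crash, but not a power loss -->
   <journal-datasync>false</journal-datasync>
</core>
```
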
## Tuning JMS


@@ -74,6 +74,22 @@ implementations. Apache ActiveMQ Artemis ships with two implementations:
For more information on libaio please see [lib AIO](libaio.md).
libaio is part of the kernel project.
- [Memory mapped](https://en.wikipedia.org/wiki/Memory-mapped_file).
The third implementation uses a file-backed [READ_WRITE](https://docs.oracle.com/javase/6/docs/api/java/nio/channels/FileChannel.MapMode.html#READ_WRITE)
memory mapping against the OS page cache to interface with the file system.
This provides extremely good performance (especially when only process failure durability is required),
almost zero-copy operations (the mapped buffer effectively *is* the kernel page cache) and zero garbage
(from the Java heap perspective), and it runs on any platform with a Java 1.4+ runtime.
Under power failure durability requirements it will perform at least on par with the NIO journal, with the only
exception of Linux with a kernel version of 2.6 or earlier, where the [*msync*](https://docs.oracle.com/javase/6/docs/api/java/nio/MappedByteBuffer.html#force()) implementation needed to ensure
durable writes was different from (and slower than) the [*fsync*](https://docs.oracle.com/javase/6/docs/api/java/nio/channels/FileChannel.html#force(boolean)) used in the case of the NIO journal.
It benefits from the configuration of OS [huge pages](https://en.wikipedia.org/wiki/Page_(computer_memory)#Huge_pages),
in particular when a large number of journal files is used and they are sized as a multiple of the OS page size
in bytes (see the sizing sketch below).
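
As a sketch of that sizing advice (the 4 KiB base page and 2 MiB huge page sizes are typical Linux values and an assumption here, not something mandated by the broker), the journal file size can be chosen as an exact multiple of the page size:

```xml
<core xmlns="urn:activemq:core">
   <journal-type>MAPPED</journal-type>
   <!-- 8388608 bytes = 2048 x 4 KiB pages = 4 x 2 MiB huge pages -->
   <journal-file-size>8388608</journal-file-size>
   <journal-min-files>2</journal-min-files>
</core>
```
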
The standard Apache ActiveMQ Artemis core server uses two instances of the journal:
@@ -180,12 +196,13 @@ The message journal is configured using the following attributes in
- `journal-type`
Valid values are `NIO` or `ASYNCIO`.
Valid values are `NIO`, `ASYNCIO` or `MAPPED`.
Choosing `NIO` chooses the Java NIO journal. Choosing `ASYNCIO` chooses
the Linux asynchronous IO journal. If you choose `ASYNCIO` but are not
running Linux or you do not have libaio installed then Apache ActiveMQ Artemis will
detect this and automatically fall back to using `NIO`.
Choosing `MAPPED` chooses the Java Memory Mapped journal.
- `journal-sync-transactional`
@@ -302,6 +319,22 @@ The message journal is configured using the following attributes in
- `journal-datasync` (default: true)
When set to `false` this disables the use of *fdatasync*/*msync* on journal writes.
When enabled (the default) it ensures full power failure durability; when disabled,
journal writes retain only process failure durability (guaranteed by the OS).
Disabling it is particularly effective for the `NIO` and `MAPPED` journals, which rely on
*fsync*/*msync* to force write changes to disk. The default, fully durable setup is sketched below.
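
A minimal sketch of that default, fully durable configuration; the `<core>` wrapper and `urn:activemq:core` namespace are assumed here for illustration:

```xml
<core xmlns="urn:activemq:core">
   <journal-type>ASYNCIO</journal-type>
   <!-- default: force journal writes to disk for power failure durability -->
   <journal-datasync>true</journal-datasync>
   <!-- default: wait for transactional data to reach the journal before replying -->
   <journal-sync-transactional>true</journal-sync-transactional>
</core>
```
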
### An important note on disabling `journal-datasync`.
> Any modern OS guarantees that on process failures (i.e. crash) all the uncommitted changes
> to the page cache will be flushed to the file system, maintaining coherence between
> subsequent operations against the same pages and ensuring that no data will be lost.
> The predictability of the timing of such flushes, when *journal-datasync* is disabled,
> depends on the OS configuration, but without compromising (or relaxing) the process
> failure durability semantics described above.
> Relying on the OS page cache sacrifices power failure protection, while increasing the
> effectiveness of journal operations, which can exploit the read caching and write
> combining features provided by the OS kernel's page cache subsystem.
### An important note on disabling disk write cache.