[[index-modules-translog]]
== Translog

Changes to Lucene are only persisted to disk during a Lucene commit,
which is a relatively heavy operation and so cannot be performed after every
index or delete operation. Changes that happen after one commit and before
another will be lost in the event of process exit or hardware failure.

To prevent this data loss, each shard has a _transaction log_ or write ahead
log associated with it. Any index or delete operation is written to the
translog after being processed by the internal Lucene index.

In the event of a crash, recent transactions can be replayed from the
transaction log when the shard recovers.

An Elasticsearch flush is the process of performing a Lucene commit and
starting a new translog. It is done automatically in the background in order
to make sure the transaction log doesn't grow too large, which would make
replaying its operations take a considerable amount of time during recovery.
It is also exposed through an API, though it is rarely necessary to perform
one manually.
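
If a manual flush is ever required, it can be requested through the flush
API. The index name `my_index` below is just a placeholder:

[source,js]
--------------------------------------------------
POST /my_index/_flush
--------------------------------------------------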

[float]
=== Flush settings

The following <<indices-update-settings,dynamically updatable>> settings
control how often the in-memory buffer is flushed to disk:

`index.translog.flush_threshold_size`::

Once the translog hits this size, a flush will happen. Defaults to `512mb`.

`index.translog.flush_threshold_ops`::

After how many operations to flush. Defaults to `unlimited`.

`index.translog.flush_threshold_period`::

How long to wait before triggering a flush regardless of translog size. Defaults to `30m`.

`index.translog.interval`::

How often to check if a flush is needed, randomized between the interval value
and 2x the interval value. Defaults to `5s`.
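
These thresholds are normal index settings and can be changed on a live
index with the update index settings API. The snippet below is illustrative
only: the index name `my_index` and the values shown are placeholders, not
recommendations:

[source,js]
--------------------------------------------------
PUT /my_index/_settings
{
  "index.translog.flush_threshold_size": "256mb",
  "index.translog.flush_threshold_period": "10m"
}
--------------------------------------------------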

[float]
=== Translog settings

The translog itself is only persisted to disk when it is ++fsync++ed. Until
then, data recently written to the translog may only exist in the file system
cache and could potentially be lost in the event of hardware failure.

The following <<indices-update-settings,dynamically updatable>> settings
control the behaviour of the transaction log:

`index.translog.sync_interval`::

How often the translog is ++fsync++ed to disk. Defaults to `5s`. Can be set to
`0` to sync after each operation.

`index.translog.fs.type`::

Either a `buffered` translog (default) which buffers 64kB in memory before
writing to disk, or a `simple` translog which writes every entry to disk
immediately. Whichever is used, these writes are only ++fsync++ed according
to the `sync_interval`. The `buffered` translog is written to disk when it
reaches 64kB in size, or whenever a `sync` is triggered by the `sync_interval`.
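
As an illustrative sketch, the fsync interval could be relaxed on a
placeholder index `my_index` when losing a few extra seconds of translog data
in a worst-case failure is an acceptable trade-off for cheaper writes:

[source,js]
--------------------------------------------------
PUT /my_index/_settings
{
  "index.translog.sync_interval": "10s"
}
--------------------------------------------------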

.Why don't we `fsync` the translog after every write?
******************************************************

The disk is the slowest part of any server. An `fsync` ensures that data in
the file system buffer has been physically written to disk, but this
persistence comes with a performance cost.

However, the translog is not the only persistence mechanism in Elasticsearch.
Any index or update request is first written to the primary shard, then
forwarded in parallel to any replica shards. The primary waits for the action
to be completed on the replicas before returning success to the client.

If the node holding the primary shard dies for some reason, its transaction
log could be missing the last 5 seconds of data. However, that data should
already be available on a replica shard on a different node. Of course, if
the whole data centre loses power at the same time, then it is possible that
you could lose the last 5 seconds (or `sync_interval`) of data.

We are constantly monitoring the performance implications of better default
translog sync semantics, so the default might change as time passes and
hardware, virtualization, and other aspects improve.
******************************************************