[DOCS] Editorial ILM cleanup (#57565) (#57776)

* [DOCS] Editorial cleanup

* Moved example of applying a template to multiple indices.

* Combine existing indices topics

* Fixed test

* Add skip rollover file.

* Revert rename.

* Update include.

* Revert rename

* Apply suggestions from code review

Co-authored-by: Adam Locke <adam.locke@elastic.co>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>

* Apply suggestions from code review

* Fixed callout

* Update docs/reference/ilm/ilm-with-existing-indices.asciidoc

Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>

* Update docs/reference/ilm/ilm-with-existing-indices.asciidoc

Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>

* Apply suggestions from code review

* Restored policy to template example.

* Fixed JSON parse error

Co-authored-by: Adam Locke <adam.locke@elastic.co>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>

Co-authored-by: Adam Locke <adam.locke@elastic.co>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
This commit is contained in:
debadair 2020-06-05 18:55:51 -07:00 committed by GitHub
parent 26d2b4f871
commit 100d2bd063
19 changed files with 510 additions and 798 deletions

View File

@ -6,13 +6,10 @@ Phases allowed: cold.
<<frozen-indices, Freezes>> an index to minimize its memory footprint.
[IMPORTANT]
================================
Freezing an index closes the index and reopens it within the same API call.
This means that for a short time no primaries are allocated.
The cluster will go red until the primaries are allocated.
This limitation might be removed in the future.
================================
IMPORTANT: Freezing an index closes the index and reopens it within the same API call.
This means that for a short time no primaries are allocated.
The cluster will go red until the primaries are allocated.
This limitation might be removed in the future.
[[ilm-freeze-options]]
==== Options

View File

@ -10,12 +10,12 @@ IMPORTANT: If the rollover action is used on a <<ccr-put-follow,follower index>>
policy execution waits until the leader index rolls over (or is
<<skipping-rollover, otherwise marked complete>>),
then converts the follower index into a regular index with the
<<ilm-unfollow-action,the Unfollow action>>.
<<ilm-unfollow-action, Unfollow action>>.
For a managed index to be rolled over:
* The index name must match the pattern '^.*-\\d+$', for example, `my_index-000001`.
* `index.lifecycle.rollover_alias` must be configured as the alias to roll over.
* The `index.lifecycle.rollover_alias` must be configured as the alias to roll over.
* The index must be the <<indices-rollover-is-write-index, write index>> for the alias.
For example, if `my_index-000001` has the alias `my_data`,

View File

@ -25,9 +25,10 @@ Retrieves the current {ilm} ({ilm-init}) status.
[[ilm-get-status-desc]]
==== {api-description-title}
Returns the status of the {ilm-init} plugin. The `operation_mode` field in the
response shows one of three states: `STARTED`, `STOPPING`,
or `STOPPED`. You can change the status of the {ilm-init} plugin with the
[[ilm-operating-modes]]
Returns the status of the {ilm-init} plugin.
The `operation_mode` in the response shows one of three states: `STARTED`, `STOPPING`, or `STOPPED`.
You can start or stop {ilm-init} with the
<<ilm-start,start {ilm-init}>> and <<ilm-stop,stop {ilm-init}>> APIs.
[[ilm-get-status-query-params]]

View File

@ -1,20 +1,20 @@
[role="xpack"]
[testenv="basic"]
[[index-lifecycle-error-handling]]
== Index lifecycle error handling
== Resolve lifecycle policy execution errors
During Index Lifecycle Management's execution of the policy for an index, it's
possible for a step to encounter an error during its execution. When this
happens, {ilm-init} will move the management state into an "error" step. This halts
further execution of the policy and gives an administrator the chance to address
any issues with the policy, index, or cluster.
When {ilm-init} executes a lifecycle policy, it's possible for errors to occur
while performing the necessary index operations for a step.
When this happens, {ilm-init} moves the index to an `ERROR` step.
If {ilm-init} cannot resolve the error automatically, execution is halted
until you resolve the underlying issues with the policy, index, or cluster.
An example will be helpful in illustrating this, imagine the following policy
has been created by a user:
For example, you might have a `shrink-index` policy that shrinks an index to four shards once it
is at least five days old:
[source,console]
--------------------------------------------------
PUT _ilm/policy/shrink-the-index
PUT _ilm/policy/shrink-index
{
"policy": {
"phases": {
@ -32,11 +32,8 @@ PUT _ilm/policy/shrink-the-index
--------------------------------------------------
// TEST
This policy waits until the index is at least 5 days old, and then shrinks
the index to 4 shards.
Now imagine that a user creates a new index "myindex" with two primary shards,
telling it to use the policy they have created:
There is nothing that prevents you from applying the `shrink-index` policy to a new
index that has only two shards:
[source,console]
--------------------------------------------------
@ -44,17 +41,18 @@ PUT /myindex
{
"settings": {
"index.number_of_shards": 2,
"index.lifecycle.name": "shrink-the-index"
"index.lifecycle.name": "shrink-index"
}
}
--------------------------------------------------
// TEST[continued]
After five days have passed, {ilm-init} will attempt to shrink this index from 2
shards to 4, which is invalid since the shrink action cannot increase the
number of shards. When this occurs, {ilm-init} will move this
index to the "error" step. Once an index is in this step, information about the
reason for the error can be retrieved from the <<ilm-explain-lifecycle,{ilm-init} Explain API>>:
After five days, {ilm-init} attempts to shrink `myindex` from two shards to four shards.
Because the shrink action cannot _increase_ the number of shards, this operation fails
and {ilm-init} moves `myindex` to the `ERROR` step.
You can use the <<ilm-explain-lifecycle,{ilm-init} Explain API>> to get information about
what went wrong:
[source,console]
--------------------------------------------------
@ -70,24 +68,24 @@ Which returns the following information:
"indices" : {
"myindex" : {
"index" : "myindex",
"managed" : true, <1>
"policy" : "shrink-the-index", <2>
"managed" : true,
"policy" : "shrink-index", <1>
"lifecycle_date_millis" : 1541717265865,
"age": "5.1d", <3>
"phase" : "warm", <4>
"age": "5.1d", <2>
"phase" : "warm", <3>
"phase_time_millis" : 1541717272601,
"action" : "shrink", <5>
"action" : "shrink", <4>
"action_time_millis" : 1541717272601,
"step" : "ERROR", <6>
"step" : "ERROR", <5>
"step_time_millis" : 1541717272688,
"failed_step" : "shrink", <7>
"failed_step" : "shrink", <6>
"step_info" : {
"type" : "illegal_argument_exception", <8>
"reason" : "the number of target shards [4] must be less that the number of source shards [2]" <9>
"type" : "illegal_argument_exception", <7>
"reason" : "the number of target shards [4] must be less that the number of source shards [2]"
},
"phase_execution" : {
"policy" : "shrink-the-index",
"phase_definition" : { <10>
"policy" : "shrink-index",
"phase_definition" : { <8>
"min_age" : "5d",
"actions" : {
"shrink" : {
@ -104,25 +102,20 @@ Which returns the following information:
--------------------------------------------------
// TESTRESPONSE[skip:no way to know if we will get this response immediately]
<1> this index is managed by {ilm-init}
<2> the policy in question, in this case, "shrink-the-index"
<3> the current age for the index
<4> what phase the index is currently in
<5> what action the index is currently on
<6> what step the index is currently on, in this case, because there is an error, the index is in the "ERROR" step
<7> the name of the step that failed to execute, in this case "shrink"
<8> the error class that occurred during this step
<9> the error message that occurred during the execution failure
<10> the definition of the phase (in this case, the "warm" phase) that the index is currently on
<1> The policy being used to manage the index: `shrink-index`
<2> The index age: 5.1 days
<3> The phase the index is currently in: `warm`
<4> The current action: `shrink`
<5> The step the index is currently in: `ERROR`
<6> The step that failed to execute: `shrink`
<7> The type of error and a description of that error.
<8> The definition of the current phase from the `shrink-index` policy
The index here has been moved to the error step because the shrink definition in
the policy is using an incorrect number of shards. So rectifying that in the
policy entails updating the existing policy to use one instead of four for
the targeted number of shards.
To resolve this, you could update the policy to shrink the index to a single shard after 5 days:
[source,console]
--------------------------------------------------
PUT _ilm/policy/shrink-the-index
PUT _ilm/policy/shrink-index
{
"policy": {
"phases": {
@ -141,11 +134,10 @@ PUT _ilm/policy/shrink-the-index
// TEST[continued]
[discrete]
=== Retrying failed index lifecycle management steps
=== Retrying failed lifecycle policy steps
Once the underlying issue that caused an index to move to the error step has
been corrected, index lifecycle management must be told to retry the step to see
if it can progress further. This is accomplished by invoking the retry API
Once you fix the problem that put an index in the `ERROR` step,
you might need to explicitly tell {ilm-init} to retry the step:
[source,console]
--------------------------------------------------
@ -153,7 +145,5 @@ POST /myindex/_ilm/retry
--------------------------------------------------
// TEST[skip:we can't be sure the index is ready to be retried at this point]
Once this has been issued, index lifecycle management will asynchronously pick up
on the step that is in a failed state, attempting to re-run it. The
<<ilm-explain-lifecycle,{ilm-init} Explain API>> can again be used to monitor the status of
re-running the step.
{ilm-init} subsequently attempts to re-run the step that failed.
You can use the <<ilm-explain-lifecycle,{ilm-init} Explain API>> to monitor the progress.
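For example, a minimal follow-up check on the index from this example (an illustrative call; once the retry is underway, the `step` field shows the step being re-run):

[source,console]
--------------------------------------------------
GET myindex/_ilm/explain
--------------------------------------------------
// TEST[skip:illustrative example only]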

View File

@ -11,25 +11,25 @@ and reduce the number of replicas.
[[ilm-delete-action]]<<ilm-delete,Delete>>::
Permanently remove the index.
[[ilm-forcemerge-action]]<<ilm-forcemerge,Force Merge>>::
[[ilm-forcemerge-action]]<<ilm-forcemerge,Force merge>>::
Reduce the number of index segments and purge deleted documents.
Makes the index read-only.
[[ilm-freeze-action]]<<ilm-freeze,Freeze>>::
Freeze the index to minimize its memory footprint.
[[ilm-readonly-action]]<<ilm-readonly,Read-Only>>::
[[ilm-readonly-action]]<<ilm-readonly,Read only>>::
Block write operations to the index.
[[ilm-rollover-action]]<<ilm-rollover,Rollover>>::
Remove the index as the write index for the rollover alias and
start indexing to a new index.
[[ilm-searchable-snapshot-action]]<<ilm-searchable-snapshot, Searchable Snapshot>>::
[[ilm-searchable-snapshot-action]]<<ilm-searchable-snapshot, Searchable snapshot>>::
Take a snapshot of the managed index in the configured repository
and mount it as a searchable snapshot.
[[ilm-set-priority-action]]<<ilm-set-priority,Set Priority>>::
[[ilm-set-priority-action]]<<ilm-set-priority,Set priority>>::
Lower the priority of an index as it moves through the lifecycle
to ensure that hot indices are recovered first.
@ -40,7 +40,7 @@ Reduce the number of primary shards by shrinking the index into a new index.
Convert a follower index to a regular index.
Performed automatically before a rollover, shrink, or searchable snapshot action.
[[ilm-wait-for-snapshot-action]]<<ilm-wait-for-snapshot,Wait For Snapshot>>::
[[ilm-wait-for-snapshot-action]]<<ilm-wait-for-snapshot,Wait for snapshot>>::
Ensure that a snapshot exists before deleting the index.
include::actions/ilm-allocate.asciidoc[]

View File

@ -3,31 +3,26 @@
[[index-lifecycle-and-snapshots]]
== Restore a managed index
When restoring a snapshot that contains indices managed by Index Lifecycle
Management, the lifecycle will automatically continue to execute after the
snapshot is restored. Notably, the `min_age` is relative to the original
creation or rollover of the index, rather than when the index was restored. For
example, a monthly index that is restored partway through its lifecycle after an
accidental deletion will continue through its lifecycle as expected: The
index will be shrunk, reallocated to different nodes, or deleted on the same
schedule whether or not it has been restored from a snapshot.
When you restore a snapshot that contains managed indices,
{ilm-init} automatically resumes executing the restored indices' policies.
A restored index's `min_age` is relative to when it was originally created or rolled over,
not its restoration time.
Policy actions are performed on the same schedule whether or not
an index has been restored from a snapshot.
If you restore an index that was accidentally deleted halfway through its month-long lifecycle,
it proceeds normally through the last two weeks of its lifecycle.
However, there may be cases where you need to restore an index from a snapshot,
but do not want it to automatically continue through its lifecycle, particularly
if the index would rapidly progress through lifecycle phases due to its age. For
example, you may wish to add or update documents in an index before it is marked
read only or shrunk, or prevent an index from automatically being deleted.
In some cases, you might want to restore a managed index and
prevent {ilm-init} from immediately executing its policy.
For example, if you are restoring an older snapshot you might want to
prevent it from rapidly progressing through all of its lifecycle phases.
You might want to add or update documents before it's marked read-only or shrunk,
or prevent the index from being immediately deleted.
To stop lifecycle policy execution on an index restored from a snapshot, before
restoring the snapshot, <<start-stop-ilm,lifecycle policy execution can be
paused>> to allow the policy to be removed.
To prevent {ilm-init} from executing a restored index's policy:
For example, the following workflow can be used in the above situation to
prevent the execution of the lifecycle policy for an index:
1. Pause execution of all lifecycle policies using the <<ilm-stop,Stop {ilm-init} API>>
1. Temporarily <<ilm-stop,stop {ilm-init}>>. This pauses execution of _all_ {ilm-init} policies.
2. Restore the snapshot.
3. Perform whatever operations you wish before resuming lifecycle execution, or
remove the lifecycle policy from the index using the
<<ilm-remove-policy,Remove Policy from Index API>>
4. Resume execution of lifecycle policies using the <<ilm-start,Start {ilm-init} API>>
3. <<ilm-remove-policy,Remove the policy>> from the index or perform whatever actions you need to
before {ilm-init} resumes policy execution.
4. <<ilm-start,Restart {ilm-init}>> to resume policy execution.
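A minimal console sketch of this workflow, assuming the snapshot restore in step 2 has already been performed and that `restored-index` is a hypothetical name for the restored index:

[source,console]
--------------------------------------------------
POST _ilm/stop

POST restored-index/_ilm/remove

POST _ilm/start
--------------------------------------------------
// TEST[skip:illustrative example only]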

View File

@ -8,12 +8,12 @@
{ilm-init} defines four index lifecycle _phases_:
* Hot: The index is actively being updated and queried.
* Warm: The index is no longer being updated but is still being queried.
* Cold: The index is no longer being updated and is seldom queried. The
* **Hot**: The index is actively being updated and queried.
* **Warm**: The index is no longer being updated but is still being queried.
* **Cold**: The index is no longer being updated and is seldom queried. The
information still needs to be searchable, but it's okay if those queries are
slower.
* Delete: The index is no longer needed and can safely be removed.
* **Delete**: The index is no longer needed and can safely be removed.
An index's _lifecycle policy_ specifies which phases
are applicable, what actions are performed in each phase,

View File

@ -11,22 +11,22 @@ You can create and apply {ilm-cap} ({ilm-init}) policies to automatically manage
according to your performance, resiliency, and retention requirements.
Index lifecycle policies can trigger actions such as:
* **Rollover** -
* **Rollover**:
include::../glossary.asciidoc[tag=rollover-def-short]
* **Shrink** -
* **Shrink**:
include::../glossary.asciidoc[tag=shrink-def-short]
* **Force merge** -
* **Force merge**:
include::../glossary.asciidoc[tag=force-merge-def-short]
* **Freeze** -
* **Freeze**:
include::../glossary.asciidoc[tag=freeze-def-short]
* **Delete** - Permanently remove an index, including all of its data and metadata.
* **Delete**: Permanently remove an index, including all of its data and metadata.
{ilm-init} makes it easier to manage indices in hot-warm-cold architectures,
which are common when you're working with time-series data such as logs and metrics.
You can specify:
* The maximum size, number of documents, or age at which you want to roll over to a new index.
* The maximum shard size, number of documents, or age at which you want to roll over to a new index.
* The point at which the index is no longer being updated and the number of
primary shards can be reduced.
* When to force a merge to permanently remove documents marked for deletion.
@ -52,4 +52,4 @@ Although it might be possible to create and apply policies in a mixed-version cl
there is no guarantee they will work as intended.
Attempting to use a policy that contains actions that aren't
supported on all nodes in a cluster will cause errors.
===========================
===========================

View File

@ -0,0 +1,34 @@
[[skipping-rollover]]
== Skip rollover
When `index.lifecycle.indexing_complete` is set to `true`,
{ilm-init} won't perform the rollover action on an index,
even if it otherwise meets the rollover criteria.
It's set automatically by {ilm-init} when the rollover action completes successfully.
You can set it manually to skip rollover if you need to make an exception
to your normal lifecycle policy and update the alias to force a rollover,
but want {ilm-init} to continue to manage the index.
If you use the rollover API, it is not necessary to configure this setting manually.
If an index's lifecycle policy is removed, this setting is also removed.
IMPORTANT: When `index.lifecycle.indexing_complete` is `true`,
{ilm-init} verifies that the index is no longer the write index
for the alias specified by `index.lifecycle.rollover_alias`.
If the index is still the write index or the rollover alias is not set,
the index is moved to the <<index-lifecycle-error-handling,`ERROR` step>>.
For example, if you need to change the name of new indices in a series while retaining
previously-indexed data in accordance with your configured policy, you can:
. Create a template for the new index pattern that uses the same policy.
. Bootstrap the initial index.
. Change the write index for the alias to the bootstrapped index
using the <<indices-aliases, index aliases>> API.
. Set `index.lifecycle.indexing_complete` to `true` on the old index to indicate
that it does not need to be rolled over.
{ilm-init} continues to manage the old index in accordance with your existing policy.
New indices are named according to the new template and
managed according to the same policy without interruption.
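For example, a hedged sketch of steps 3 and 4, assuming `my_data` is the rollover alias, `new_index-000001` is the bootstrapped index for the new series, and `my_index-000099` is the old write index (all names are illustrative):

[source,console]
--------------------------------------------------
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "new_index-000001",
        "alias": "my_data",
        "is_write_index": true
      }
    },
    {
      "add": {
        "index": "my_index-000099",
        "alias": "my_data",
        "is_write_index": false
      }
    }
  ]
}

PUT my_index-000099/_settings
{
  "index.lifecycle.indexing_complete": true
}
--------------------------------------------------
// TEST[skip:illustrative example only]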

View File

@ -34,7 +34,6 @@ You can modify the default policies through
{kibana-ref}/example-using-index-lifecycle-policy.html[{kib} Management]
or the {ilm-init} APIs.
[discrete]
[[ilm-gs-create-policy]]
=== Create a lifecycle policy

View File

@ -3,23 +3,99 @@
[[ilm-with-existing-indices]]
== Manage existing indices
NOTE: If migrating from Curator, ensure you are running Curator version 5.7 or greater
so that Curator will ignore ILM managed indices.
If you've been using Curator or some other mechanism to manage periodic indices,
you have a couple options when migrating to {ilm-init}:
While it is recommended to use {ilm-init} to manage the index lifecycle from
start to finish, it may be useful to use {ilm-init} with existing indices,
for example, when migrating from daily indices to rollover-based indices.
Such use cases are fully supported, but there are some configuration differences
from when {ilm-init} can manage the complete index lifecycle.
* Set up your index templates to use an {ilm-init} policy to manage your new indices.
Once {ilm-init} is managing your current write index, you can apply an appropriate policy to your old indices.
This section describes strategies to leverage {ilm-init} for existing periodic
indices when migrating to fully {ilm-init}-managed indices, which can be done in
a few different ways, each providing different tradeoffs. As an example, we'll
walk through a use case of a very simple logging index with just a field for the
log message and a timestamp.
* Reindex into an {ilm-init}-managed index.
First, we need to create a template for these indices:
NOTE: Starting in Curator version 5.7, Curator ignores {ilm-init} managed indices.
[discrete]
[[ilm-existing-indices-apply]]
=== Apply policies to existing time series indices
The simplest way to transition to managing your periodic indices with {ilm-init} is
to <<apply-policy-template, configure an index template>> to apply a lifecycle policy to new indices.
Once the index you are writing to is being managed by {ilm-init},
you can <<apply-policy-multiple, manually apply a policy>> to your older indices.
Define a separate policy for your older indices that omits the rollover action.
Rollover is used to manage where new data goes, so it isn't applicable.
Keep in mind that policies applied to existing indices compare the `min_age` for each phase to
the original creation date of the index, and might proceed through multiple phases immediately.
If your policy performs resource-intensive operations like force merge,
you don't want to have a lot of indices performing those operations all at once
when you switch over to {ilm-init}.
You can specify different `min_age` values in the policy you use for existing indices,
or set <<index-lifecycle-origination-date, `index.lifecycle.origination_date`>>
to control how the index age is calculated.
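For example, a hedged call that tells {ilm-init} to treat one of the daily indices from this scenario as if it originated when its data was written (the index name and epoch-milliseconds timestamp are illustrative):

[source,console]
--------------------------------------------------
PUT mylogs-pre-ilm-2019.06.24/_settings
{
  "index": {
    "lifecycle": {
      "origination_date": 1561372440000
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative example only]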
Once all pre-{ilm-init} indices have been aged out and removed,
you can delete the policy you used to manage them.
NOTE: If you are using {beats} or {ls}, enabling {ilm-init} in version 7.0 and onward
sets up {ilm-init} to manage new indices automatically.
If you are using {beats} through {ls},
you might need to change your {ls} output configuration and invoke the {beats} setup
to use {ilm-init} for new data.
[discrete]
[[ilm-existing-indices-reindex]]
=== Reindex into a managed index
An alternative to <<ilm-with-existing-periodic-indices,applying policies to existing indices>> is to
reindex your data into an {ilm-init}-managed index.
You might want to do this if creating periodic indices with very small amounts of data
has led to excessive shard counts, or if continually indexing into the same index has led to large shards
and performance issues.
First, you need to set up the new {ilm-init}-managed index:
. Update your index template to include the necessary {ilm-init} settings.
. Bootstrap an initial index as the write index, as shown in the sketch after this list.
. Stop writing to the old indices and index new documents using the alias that points to the bootstrapped index.
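A hedged sketch of the bootstrap step, assuming a hypothetical `ilm-mylogs-*` index pattern and a `mylogs` rollover alias:

[source,console]
--------------------------------------------------
PUT ilm-mylogs-000001
{
  "aliases": {
    "mylogs": {
      "is_write_index": true
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative example only]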
To reindex into the managed index:
. Pause indexing new documents if you do not want to mix new and old data in the {ilm-init}-managed index.
Mixing old and new data in one index is safe,
but a combined index needs to be retained until you are ready to delete the new data.
. Reduce the {ilm-init} poll interval to ensure that the index doesn't
grow too large while waiting for the rollover check.
By default, {ilm-init} checks rollover conditions every 10 minutes.
+
--
[source,console]
-----------------------
PUT _cluster/settings
{
"transient": {
"indices.lifecycle.poll_interval": "1m" <1>
}
}
-----------------------
// TEST[skip:don't want to overwrite this setting for other tests]
<1> Check once a minute to see if {ilm-init} actions such as rollover need to be performed.
--
. Reindex your data using the <<docs-reindex,reindex API>>.
If you want to partition the data in the order in which it was originally indexed,
you can run separate reindex requests.
+
--
IMPORTANT: Documents retain their original IDs. If you don't use automatically generated document IDs,
and are reindexing from multiple source indices, you might need to do additional processing to
ensure that document IDs don't conflict. One way to do this is to use a
<<reindex-scripts,script>> in the reindex call to append the original index name
to the document ID.
//////////////////////////
[source,console]
-----------------------
PUT _template/mylogs_template
@ -27,260 +103,6 @@ PUT _template/mylogs_template
"index_patterns": [
"mylogs-*"
],
"settings": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"mappings": {
"properties": {
"message": {
"type": "text"
},
"@timestamp": {
"type": "date"
}
}
}
}
-----------------------
And we'll ingest a few documents to create a few daily indices:
[source,console]
-----------------------
POST mylogs-pre-ilm-2019.06.24/_doc
{
"@timestamp": "2019-06-24T10:34:00",
"message": "this is one log message"
}
-----------------------
// TEST[continued]
[source,console]
-----------------------
POST mylogs-pre-ilm-2019.06.25/_doc
{
"@timestamp": "2019-06-25T17:42:00",
"message": "this is another log message"
}
-----------------------
// TEST[continued]
//////////////////////////
[source,console]
--------------------------------------------------
DELETE _template/mylogs_template
--------------------------------------------------
// TEST[continued]
//////////////////////////
Now that we have these indices, we'll look at a few different ways of migrating
these indices to {ilm-init}.
[[ilm-with-existing-periodic-indices]]
=== Managing existing periodic indices with {ilm-init}
NOTE: The examples in this section assume daily indices as set up in
<<ilm-with-existing-indices,the previous section>>.
The simplest way to manage existing indices while transitioning to fully
{ilm-init}-managed indices is to allow all new indices to be fully managed by
{ilm-init} before attaching {ilm-init} policies to existing indices. To do this,
all new documents should be directed to {ilm-init}-managed indices - if you are
using Beats or Logstash data shippers, upgrading all of those shippers to
version 7.0.0 or higher will take care of that part for you. If you are not
using Beats or Logstash, you may need to set up {ilm-init} for new indices yourself as
demonstrated in the <<getting-started-index-lifecycle-management,{ilm-init} tutorial>>.
NOTE: If you are using Beats through Logstash, you may need to change your
Logstash output configuration and invoke the Beats setup to use {ilm-init} for new
data.
Once all new documents are being written to fully {ilm-init}-managed indices, it
is easy to add an {ilm-init} policy to existing indices. However, there are two
things to keep in mind when doing this, and a trick that makes those two things
much easier to handle.
The two biggest things to keep in mind are:
1. Existing periodic indices shouldn't use policies with rollover, because
rollover is used to manage where new data goes. Since existing indices should no
longer be receiving new documents, there is no point to using rollover for them.
2. {ilm-init} policies attached to existing indices will compare the `min_age`
for each phase to the original creation date of the index, and so might proceed
through multiple phases immediately.
The first one is the most important, because it makes it difficult to use the
same policy for new and existing periodic indices. But that's easy to solve
with one simple trick: Create a second policy for existing indices, in addition
to the one for new indices. {ilm-init} policies are cheap to create, so don't be
afraid to have more than one. Modifying a policy designed for new indices to be
used on existing indices is generally very simple: just remove the `rollover`
action.
For example, if you created a policy for your new indices with each phase
like so:
[source,console]
-----------------------
PUT _ilm/policy/mylogs_policy
{
"policy": {
"phases": {
"hot": {
"actions": {
"rollover": {
"max_size": "25GB"
}
}
},
"warm": {
"min_age": "1d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"cold": {
"min_age": "7d",
"actions": {
"freeze": {}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
-----------------------
// TEST[continued]
You can create a policy for pre-existing indices by removing the `rollover`
action, and in this case, the `hot` phase is now empty so we can remove that
too:
[source,console]
-----------------------
PUT _ilm/policy/mylogs_policy_existing
{
"policy": {
"phases": {
"warm": {
"min_age": "1d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"cold": {
"min_age": "7d",
"actions": {
"freeze": {}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
-----------------------
// TEST[continued]
Creating a separate policy for existing indices will also allow using different
`min_age` values. You may want to use higher values to prevent many indices from
running through the policy at once, which may be important if your policy
includes potentially resource-intensive operations like force merge.
You can configure the lifecycle for many indices at once by using wildcards in
the index name when calling the <<indices-update-settings,Update Settings API>>
to set the policy name, but be careful that you don't include any indices that
you don't want to change the policy for:
[source,console]
-----------------------
PUT mylogs-pre-ilm*/_settings <1>
{
"index": {
"lifecycle": {
"name": "mylogs_policy_existing"
}
}
}
-----------------------
// TEST[continued]
<1> This pattern will match all indices with names that start with
`mylogs-pre-ilm`
Once all pre-{ilm-init} indices have aged out and been deleted, the policy for
older periodic indices can be deleted.
[[ilm-reindexing-into-rollover]]
=== Reindexing via {ilm-init}
NOTE: The examples in this section assume daily indices as set up in
<<ilm-with-existing-indices,the previous section>>.
In some cases, it may be useful to reindex data into {ilm-init}-managed indices.
This is more complex than simply attaching policies to existing indices as
described in <<ilm-with-existing-periodic-indices,the previous section>>, and
requires pausing indexing during the reindexing process. However, this technique
may be useful in cases where periodic indices were created with very small
amounts of data leading to excessive shard counts, or for indices which grow
steadily over time, but have not been broken up into time-series indices leading
to shards which are much too large, situations that cause significant
performance problems.
Before getting started with reindexing data, the new index structure should be
set up. For this section, we'll be using the same setup described in
<<ilm-with-existing-indices,{ilm-init} with existing indices>>.
First, we'll set up a policy with rollover, and can include any additional
phases required. For simplicity, we'll just use rollover:
[source,console]
-----------------------
PUT _ilm/policy/mylogs_condensed_policy
{
"policy": {
"phases": {
"hot": {
"actions": {
"rollover": {
"max_age": "7d",
"max_size": "50G"
}
}
}
}
}
}
-----------------------
// TEST[continued]
And now we'll update the index template for our indices to include the relevant
{ilm-init} settings:
[source,console]
-----------------------
PUT _template/mylogs_template
{
"index_patterns": [
"ilm-mylogs-*" <1>
],
"settings": {
"number_of_shards": 1,
"number_of_replicas": 1,
@ -303,93 +125,41 @@ PUT _template/mylogs_template
}
}
-----------------------
// TEST[continued]
<1> The new index pattern has a prefix compared to the old one, this will
make it easier to reindex later
<2> The name of the policy we defined above
<3> The name of the alias we'll use to write to and query
And create the first index with the alias specified in the `rollover_alias`
setting in the index template:
[source,console]
-----------------------
PUT ilm-mylogs-000001
POST mylogs-pre-ilm-2019.06.24/_doc
{
"aliases": {
"mylogs": {
"is_write_index": true
}
}
"@timestamp": "2019-06-24T10:34:00",
"message": "this is one log message"
}
-----------------------
// TEST[continued]
//////////////////////////
[source,console]
-----------------------
POST mylogs-pre-ilm-2019.06.25/_doc
{
"@timestamp": "2019-06-25T17:42:00",
"message": "this is another log message"
}
-----------------------
// TEST[continued]
[source,console]
--------------------------------------------------
DELETE /_template/mylogs_template
DELETE _template/mylogs_template
--------------------------------------------------
// TEST[continued]
//////////////////////////
All new documents should be indexed via the `mylogs` alias at this point. Adding
new data to the old indices during the reindexing process can cause data to be
added to the old indices, but not be reindexed into the new indices.
NOTE: If you do not want to mix new data and old data in the new {ilm-init}-managed
indices, indexing of new data should be paused entirely while the reindex
completes. Mixing old and new data within one index is safe, but keep in mind
that the indices with mixed data should be retained in their entirety until you
are ready to delete both the old and new data.
By default, {ilm-init} only checks rollover conditions every 10 minutes. Under
normal indexing load, this usually works well, but during reindexing, indices
can grow very, very quickly. We'll need to set the poll interval to something
shorter to ensure that the new indices don't grow too large while waiting for
the rollover check:
[source,console]
-----------------------
PUT _cluster/settings
{
"transient": {
"indices.lifecycle.poll_interval": "1m" <1>
}
}
-----------------------
// TEST[skip:don't want to overwrite this setting for other tests]
<1> This tells {ilm-init} to check for rollover conditions every minute
We're now ready to reindex our data using the <<docs-reindex,reindex API>>. If
you have a timestamp or date field in your documents, as in this example, it may
be useful to specify that the documents should be sorted by that field - this
will mean that all documents in `ilm-mylogs-000001` come before all documents in
`ilm-mylogs-000002`, and so on. However, if this is not a requirement, omitting
the sort will allow the data to be reindexed more quickly.
NOTE: Sorting in reindex is deprecated, see
<<docs-reindex-api-request-body,reindex request body>>. Instead use timestamp
ranges to partition data in separate reindex runs.
IMPORTANT: If your data uses document IDs generated by means other than
Elasticsearch's automatic ID generation, you may need to do additional
processing to ensure that the document IDs don't conflict during the reindex, as
documents will retain their original IDs. One way to do this is to use a
<<reindex-scripts,script>> in the reindex call to append the original index name
to the document ID.
[source,console]
-----------------------
POST _reindex
{
"source": {
"index": "mylogs-*", <1>
"sort": { "@timestamp": "desc" }
"index": "mylogs-*" <1>
},
"dest": {
"index": "mylogs", <2>
@ -399,20 +169,17 @@ POST _reindex
-----------------------
// TEST[continued]
<1> This index pattern matches our existing indices. Using the prefix for
<1> Matches your existing indices. Using the prefix for
the new indices makes using this index pattern much easier.
<2> The alias set up above
<3> This option will cause the reindex to abort if it encounters multiple
documents with the same ID. This is optional, but recommended to prevent
accidentally overwriting documents if two documents from different indices
have the same ID.
Once this completes, indexing new data can be resumed, as long as all new
documents are indexed into the alias used above. All data, existing and new, can
be queried using that alias as well. We should also be sure to set the
{ilm-init} poll interval back to its default value, because keeping it set too
low can cause unnecessary load on the current master node:
<2> The alias that points to your bootstrapped index.
<3> Halts reindexing if multiple documents have the same ID.
This is recommended to prevent accidentally overwriting documents
if documents in different source indices have the same ID.
--
. When reindexing is complete, set the {ilm-init} poll interval back to its default value to
prevent unnecessary load on the master node:
+
[source,console]
-----------------------
PUT _cluster/settings
@ -425,7 +192,9 @@ PUT _cluster/settings
-----------------------
// TEST[skip:don't want to overwrite this setting for other tests]
All of the reindexed data should now be accessible via the alias set up above,
in this case `mylogs`. Once you have verified that all the data has been
reindexed and is available in the new indices, the existing indices can be
safely removed.
. Resume indexing new data using the same alias.
+
Querying using this alias will now search your new data and all of the reindexed data.
. Once you have verified that all of the reindexed data is available in the new managed indices,
you can safely remove the old indices.
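+
For example, if the old daily indices all match the `mylogs-pre-ilm-*` pattern used in this scenario, a single wildcard delete removes them (a hedged sketch; subject to your cluster's wildcard-deletion settings):
+
[source,console]
--------------------------------------------------
DELETE mylogs-pre-ilm-*
--------------------------------------------------
// TEST[skip:illustrative example only]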

View File

@ -28,8 +28,8 @@ use <<getting-started-snapshot-lifecycle-management,snapshot lifecycle policies>
* <<set-up-lifecycle-policy>>
* <<index-lifecycle-error-handling>>
* <<start-stop-ilm>>
* <<ilm-configure-rollover>>
* <<ilm-with-existing-indices>>
* <<skipping-rollover>>
* <<index-lifecycle-and-snapshots>>
--
@ -45,10 +45,10 @@ include::set-up-lifecycle-policy.asciidoc[]
include::error-handling.asciidoc[]
include::start-stop-ilm.asciidoc[]
include::using-policies-rollover.asciidoc[]
include::start-stop.asciidoc[]
include::ilm-with-existing-indices.asciidoc[]
include::ilm-skip-rollover.asciidoc[]
include::ilm-and-snapshots.asciidoc[]

View File

@ -1,23 +1,26 @@
[role="xpack"]
[testenv="basic"]
[[set-up-lifecycle-policy]]
== Configure lifecycle policy [[ilm-policy-definition]]
== Configure a lifecycle policy [[ilm-policy-definition]]
For {ilm-init} to manage an index, a valid policy
must be specified in the `index.lifecycle.name` index setting.
To configure a lifecycle policy for rolling indices,
you create the policy and add it to the index template.
To configure a lifecycle policy for <<index-rollover, rolling indices>>,
you create the policy and add it to the <<indices-templates, index template>>.
To use a policy to manage an index that doesn't roll over,
you can specify the policy directly when you create it.
you can specify a lifecycle policy when you create it.
{ilm-init} policies are stored in the global cluster state and can be included in snapshots
by setting `include_global_state` to `true` when you <<snapshots-take-snapshot, take the snapshot>>.
When the snapshot is restored, all of the policies in the global state are restored and any local policies with the same names are overwritten.
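For example, a hedged snapshot request that includes the global state, and therefore the {ilm-init} policies, assuming a repository named `my_repository` is already registered:

[source,console]
--------------------------------------------------
PUT _snapshot/my_repository/my_snapshot?wait_for_completion=true
{
  "indices": "*",
  "include_global_state": true
}
--------------------------------------------------
// TEST[skip:illustrative example only]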
IMPORTANT: When you enable {ilm} for {beats} or the {ls} {es} output plugin,
the necessary policies and configuration changes are applied automatically.
You can modify the default policies, but you do not need to explicitly configure a policy or
bootstrap an initial index.
[discrete]
[[ilm-create-policy]]
=== Create lifecycle policy
@ -55,11 +58,6 @@ PUT _ilm/policy/my_policy
<1> Roll over the index when it reaches 25GB in size
<2> Delete the index 30 days after rollover
NOTE: {ilm-init} policies are stored in the global cluster state and can be included in snapshots by
setting `include_global_state` to `true` when you <<snapshots-take-snapshot, take the snapshot>>.
Restoring {ilm-init} policies from a snapshot is all-or-nothing.
The entire global state, including all policies, is overwritten when you restore the snapshot.
[discrete]
[[apply-policy-template]]
=== Apply lifecycle policy with an index template
@ -135,7 +133,7 @@ index exceeds 25GB.
[[apply-policy-manually]]
=== Apply lifecycle policy manually
When you create an index directly, you can apply a lifecycle policy
When you create an index, you can apply a lifecycle policy
by specifying the `index.lifecycle.name` setting.
This causes {ilm-init} to immediately start managing the index.
@ -154,3 +152,80 @@ PUT test-index
IMPORTANT: Do not manually apply a policy that uses the rollover action.
Policies that use rollover must be applied by the <<apply-policy-template, index template>>.
Otherwise, the policy is not carried forward when the rollover action creates a new index.
[discrete]
[[apply-policy-multiple]]
==== Apply a policy to multiple indices
You can apply the same policy to multiple indices by using wildcards in the index name
when you call the <<indices-update-settings,update settings>> API.
WARNING: Be careful that you don't inadvertently match indices that you don't want to modify.
//////////////////////////
[source,console]
-----------------------
PUT _template/mylogs_template
{
"index_patterns": [
"mylogs-*"
],
"settings": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"mappings": {
"properties": {
"message": {
"type": "text"
},
"@timestamp": {
"type": "date"
}
}
}
}
-----------------------
[source,console]
-----------------------
POST mylogs-pre-ilm-2019.06.24/_doc
{
"@timestamp": "2019-06-24T10:34:00",
"message": "this is one log message"
}
-----------------------
// TEST[continued]
[source,console]
-----------------------
POST mylogs-pre-ilm-2019.06.25/_doc
{
"@timestamp": "2019-06-25T17:42:00",
"message": "this is another log message"
}
-----------------------
// TEST[continued]
[source,console]
--------------------------------------------------
DELETE _template/mylogs_template
--------------------------------------------------
// TEST[continued]
//////////////////////////
[source,console]
-----------------------
PUT mylogs-pre-ilm*/_settings <1>
{
"index": {
"lifecycle": {
"name": "mylogs_policy_existing"
}
}
}
-----------------------
// TEST[continued]
<1> Updates all indices with names that start with `mylogs-pre-ilm`

View File

@ -1,162 +0,0 @@
[role="xpack"]
[testenv="basic"]
[[start-stop-ilm]]
== Start and stop {ilm}
All indices that are managed by {ilm-init} will continue to execute
their policies. There may be times when this is not desired on certain
indices, or maybe even all the indices in a cluster. For example,
maybe there are scheduled maintenance windows when cluster topology
changes are desired that may impact running {ilm-init} actions. For this reason,
{ilm-init} has two ways to disable operations.
When stopping {ilm-init}, snapshot lifecycle management operations are also stopped,
this means that no scheduled snapshots are created (currently ongoing snapshots
are unaffected).
Normally, {ilm-init} will be running by default.
To see the current operating status of {ilm-init}, use the <<ilm-get-status,Get Status API>>
to see the current state of {ilm-init}.
////
[source,console]
--------------------------------------------------
PUT _ilm/policy/my_policy
{
"policy": {
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
PUT my_index
{
"settings": {
"index.lifecycle.name": "my_policy"
}
}
--------------------------------------------------
////
[source,console]
--------------------------------------------------
GET _ilm/status
--------------------------------------------------
If the request does not encounter errors, you receive the following result:
[source,console-result]
--------------------------------------------------
{
"operation_mode": "RUNNING"
}
--------------------------------------------------
The operating modes of {ilm-init}:
[[ilm-operating-modes]]
.{ilm-init} Operating Modes
[options="header"]
|===
|Name |Description
|RUNNING |Normal operation where all policies are executed as normal
|STOPPING|{ilm-init} has received a request to stop but is still processing some policies
|STOPPED |This represents a state where no policies are executed
|===
[discrete]
=== Stopping {ilm-init}
The {ilm-init} service can be paused such that no further steps will be executed
using the <<ilm-stop,Stop API>>.
[source,console]
--------------------------------------------------
POST _ilm/stop
--------------------------------------------------
// TEST[continued]
When stopped, all further policy actions will be halted. This will
be reflected in the Status API
////
[source,console]
--------------------------------------------------
GET _ilm/status
--------------------------------------------------
// TEST[continued]
////
[source,console-result]
--------------------------------------------------
{
"operation_mode": "STOPPING"
}
--------------------------------------------------
// TESTRESPONSE[s/"STOPPING"/$body.operation_mode/]
The {ilm-init} service will then, asynchronously, run all policies to a point
where it is safe to stop. After {ilm-init} verifies that it is safe, it will
move to the `STOPPED` mode.
////
[source,console]
--------------------------------------------------
PUT trigger_ilm_cs_action
GET _ilm/status
--------------------------------------------------
// TEST[continued]
////
[source,console-result]
--------------------------------------------------
{
"operation_mode": "STOPPED"
}
--------------------------------------------------
// TESTRESPONSE[s/"STOPPED"/$body.operation_mode/]
[discrete]
=== Starting {ilm-init}
To start {ilm-init} and continue executing policies, use the <<ilm-start, Start API>>.
[source,console]
--------------------------------------------------
POST _ilm/start
--------------------------------------------------
// TEST[continued]
////
[source,console]
--------------------------------------------------
GET _ilm/status
--------------------------------------------------
// TEST[continued]
////
The Start API will send a request to the {ilm-init} service to immediately begin
normal operations.
[source,console-result]
--------------------------------------------------
{
"operation_mode": "RUNNING"
}
--------------------------------------------------

View File

@ -0,0 +1,140 @@
[role="xpack"]
[testenv="basic"]
[[start-stop-ilm]]
== Start and stop {ilm}
By default, the {ilm-init} service is in the `RUNNING` state and manages
all indices that have lifecycle policies.
You can stop {ilm} to suspend management operations for all indices.
For example, you might stop {ilm} when performing scheduled maintenance or making
changes to the cluster that could impact the execution of {ilm-init} actions.
IMPORTANT: When you stop {ilm-init}, <<snapshot-lifecycle-management, {slm-init}>>
operations are also suspended.
No snapshots will be taken as scheduled until you restart {ilm-init}.
In-progress snapshots are not affected.
[discrete]
[[get-ilm-status]]
=== Get {ilm-init} status
To see the current status of the {ilm-init} service, use the <<ilm-get-status,Get Status API>>:
////
[source,console]
--------------------------------------------------
PUT _ilm/policy/my_policy
{
"policy": {
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
PUT my_index
{
"settings": {
"index.lifecycle.name": "my_policy"
}
}
--------------------------------------------------
////
[source,console]
--------------------------------------------------
GET _ilm/status
--------------------------------------------------
Under normal operation, the response shows {ilm-init} is `RUNNING`:
[source,console-result]
--------------------------------------------------
{
"operation_mode": "RUNNING"
}
--------------------------------------------------
[discrete]
[[stop-ilm]]
=== Stop {ilm-init}
To stop the {ilm-init} service and pause execution of all lifecycle policies,
use the <<ilm-stop,Stop API>>:
[source,console]
--------------------------------------------------
POST _ilm/stop
--------------------------------------------------
// TEST[continued]
The {ilm-init} service runs all policies to a point where it is safe to stop.
While the {ilm-init} service is shutting down,
the status API shows {ilm-init} is in the `STOPPING` mode:
////
[source,console]
--------------------------------------------------
GET _ilm/status
--------------------------------------------------
// TEST[continued]
////
[source,console-result]
--------------------------------------------------
{
"operation_mode": "STOPPING"
}
--------------------------------------------------
// TESTRESPONSE[s/"STOPPING"/$body.operation_mode/]
Once all policies are at a safe stopping point, {ilm-init} moves into the `STOPPED` mode:
////
[source,console]
--------------------------------------------------
PUT trigger_ilm_cs_action
GET _ilm/status
--------------------------------------------------
// TEST[continued]
////
[source,console-result]
--------------------------------------------------
{
"operation_mode": "STOPPED"
}
--------------------------------------------------
// TESTRESPONSE[s/"STOPPED"/$body.operation_mode/]
[discrete]
=== Start {ilm-init}
To restart {ilm-init} and resume executing policies, use the <<ilm-start, Start API>>.
This puts the {ilm-init} service in the `RUNNING` state and
{ilm-init} begins executing policies from where it left off.
[source,console]
--------------------------------------------------
POST _ilm/start
--------------------------------------------------
// TEST[continued]

View File

@ -1,146 +0,0 @@
[role="xpack"]
[testenv="basic"]
[[ilm-configure-rollover]]
== Configure rollover
You control when the rollover action is triggered by specifying one or more
rollover criteria:
* Maximum size (the combined size of all primary shards in the index)
* Maximum document count
* Maximum age
The rollover is performed once any of the criteria are met.
Because the criteria are checked periodically, the index might grow
slightly beyond the specified threshold.
To control how often the criteria are checked,
specify the `indices.lifecycle.poll_interval` cluster setting.
IMPORTANT: New indices created via rollover will not automatically inherit the
policy used by the old index, and will not use any policy by default. Therefore,
it is highly recommended to apply the policy via
<<apply-policy-template,index template>>, including a Rollover alias
setting, for your indices which specifies the policy you wish to use for each
new index.
The following request defines a policy with a rollover action that triggers
when the index size reaches 25GB. The old index is subsequently deleted after
30 days.
NOTE: Once an index rolls over, {ilm} uses the timestamp of the rollover
operation rather than the index creation time to evaluate when to move the
index to the next phase. For indices that have rolled over, the `min_age`
criteria specified for a phase is relative to the rollover time for indices. In
this example, that means the index will be deleted 30 days after rollover, not
30 days from when the index was created.
[source,console]
--------------------------------------------------
PUT /_ilm/policy/my_policy
{
"policy": {
"phases": {
"hot": {
"actions": {
"rollover": {
"max_size": "25GB"
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
--------------------------------------------------
To use an {ilm} policy, you need to specify it in the index template used to
create the indices. For example, the following template associates `my_policy`
with indices created from the template `my_template`.
[source,console]
-----------------------
PUT _template/my_template
{
"index_patterns": ["test-*"], <1>
"settings": {
"number_of_shards": 1,
"number_of_replicas": 1,
"index.lifecycle.name": "my_policy", <2>
"index.lifecycle.rollover_alias": "test-alias" <3>
}
}
-----------------------
<1> Template applies to all indices with the prefix test-
<2> Associates my_policy with all indices created with this template
<3> Rolls over the write alias test when the rollover action is triggered
//////////////////////////
[source,console]
--------------------------------------------------
DELETE /_template/my_template
--------------------------------------------------
// TEST[continued]
//////////////////////////
To be able to start using the policy for these `test-*` indexes we need to
bootstrap the process by creating the first index.
[source,console]
-----------------------
PUT test-000001 <1>
{
"aliases": {
"test-alias":{
"is_write_index": true <2>
}
}
}
-----------------------
<1> Creates the index called test-000001. The rollover action increments the
suffix number for each subsequent index.
<2> Designates this index as the write index for this alias.
When the rollover is performed, the newly-created index is set as the write
index for the rolled over alias. Documents sent to the alias are indexed into
the new index, enabling indexing to continue uninterrupted.
[[skipping-rollover]]
=== Skipping Rollover
The `index.lifecycle.indexing_complete` setting indicates to {ilm-init} whether this
index has already been rolled over. If it is set to `true`, that indicates that
this index has already been rolled over and does not need to be rolled over
again. Therefore, {ilm} will skip any Rollover Action configured in the
associated lifecycle policy for this index. This is useful if you need to make
an exception to your normal Lifecycle Policy and switching the alias to a
different index by hand, but do not want to remove the index from {ilm}
completely.
This setting is set to `true` automatically by {ilm-init} upon the successful
completion of a Rollover Action. However, it will be removed if
<<ilm-remove-policy,the policy is removed>> from the index.
IMPORTANT: If `index.lifecycle.indexing_complete` is set to `true` on an index,
it will not be rolled over by {ilm}, but {ilm} will verify that this index is no
longer the write index for the alias specified by
`index.lifecycle.rollover_alias`. If that setting is missing, or if the index is
still the write index for that alias, this index will be moved to the
<<index-lifecycle-error-handling,error step>>.
For example, if you wish to change the name of new indices while retaining
previous data in accordance with your configured policy, you can create the
template for the new index name pattern and the first index with the new name
manually, change the write index of the alias using the <<indices-aliases, Index
Aliases API>>, and set `index.lifecycle.indexing_complete` to `true` on the old
index to indicate that it does not need to be rolled over. This way, {ilm} will
continue to manage the old index in accordance with its existing policy, as well
as the new one, with no interruption.

View File

@ -815,7 +815,7 @@ See {painless}/painless-lang-spec.html[Painless language specification].
[role="exclude",id="using-policies-rollover"]
=== Using policies to manage index rollover
See <<ilm-configure-rollover>>.
See <<getting-started-index-lifecycle-management>>.
[role="exclude",id="_applying_a_policy_to_our_index"]
=== Applying a policy to our index
@ -876,6 +876,17 @@ We have stopped adding new customers to our {esms}.
If you are interested in similar capabilities, contact
https://support.elastic.co[Elastic Support] to discuss available options.
[role="exclude",id="ilm-with-existing-periodic-indices"]
=== Manage existing periodic indices with {ilm-init}
See <<ilm-existing-indices-apply>>.
[role="exclude",id="ilm-reindexing-into-rollover"]
=== Reindexing via {ilm-init}
See <<ilm-existing-indices-reindex>>.
////
[role="exclude",id="search-request-body"]
=== Request body search

View File

@ -29,29 +29,38 @@ How often {ilm} checks for indices that meet policy criteria. Defaults to `10m`.
These index-level {ilm-init} settings are typically configured through index
templates. For more information, see <<ilm-gs-create-policy>>.
`index.lifecycle.indexing_complete`::
(<<indices-update-settings,Dynamic>>, boolean)
Indicates whether or not the index has been rolled over.
Automatically set to `true` when {ilm-init} completes the rollover action.
You can explicitly set it to `true` to <<skipping-rollover, skip rollover>>.
Defaults to `false`.
`index.lifecycle.name`::
(<<indices-update-settings, Dynamic>>, string)
The name of the policy to use to manage the index.
[[index-lifecycle-origination-date]]
`index.lifecycle.origination_date`::
(<<indices-update-settings,Dynamic>>, long)
If specified, this is the timestamp used to calculate the index age for its phase transitions.
Use this setting if you create a new index that contains old data and
want to use the original creation date to calculate the index age.
Specified as a Unix epoch value.
`index.lifecycle.parse_origination_date`::
(<<indices-update-settings,Dynamic>>, boolean)
Set to `true` to parse the origination date from the index name.
This origination date is used to calculate the index age for its phase transitions.
The index name must match the pattern `^.*-{date_format}-\\d+`,
where the `date_format` is `yyyy.MM.dd` and the trailing digits are optional.
An index that was rolled over would normally match the full format,
for example `logs-2016.10.31-000002`.
If the index name doesn't match the pattern, index creation fails.
`index.lifecycle.rollover_alias`::
(<<indices-update-settings,Dynamic>>, string)
The index alias to update when the index rolls over. Specify when using a
policy that contains a rollover action. When the index rolls over, the alias is
updated to reflect that the index is no longer the write index. For more
information about rollover, see <<using-policies-rollover>>.
`index.lifecycle.parse_origination_date`::
(<<indices-update-settings,Dynamic>>, boolean)
When configured to `true` the origination date will be parsed from the index
name. The index format must match the pattern `^.*-{date_format}-\\d+`, where
the `date_format` is `yyyy.MM.dd` and the trailing digits are optional (an
index that was rolled over would normally match the full format eg.
`logs-2016.10.31-000002`). If the index name doesn't match the pattern
the index creation will fail.
`index.lifecycle.origination_date`::
(<<indices-update-settings,Dynamic>>, long)
The timestamp that will be used to calculate the index age for its phase
transitions. This allows the users to create an index containing old data and
use the original creation date of the old data to calculate the index age.
Must be a long (Unix epoch) value.
information about rolling indices, see <<index-rollover, Rollover>>.

View File

@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[slm-retention]]
== Snapshot lifecycle management retention
=== Snapshot retention
You can include a retention policy in an {slm-init} policy to automatically delete old snapshots.
Retention runs as a cluster-level task and is not associated with a particular policy's schedule.
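For example, a hedged policy sketch that snapshots all indices nightly and keeps each snapshot for 30 days, always retaining at least five and at most fifty (the repository name and schedule are illustrative):

[source,console]
--------------------------------------------------
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["*"]
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
--------------------------------------------------
// TEST[skip:illustrative example only]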