[[indices-shrink-index]]
=== Shrink index API
++++
<titleabbrev>Shrink index</titleabbrev>
++++
Shrinks an existing index into a new index with fewer primary shards.
[source,console]
----
POST /twitter/_shrink/shrunk-twitter-index
----
// TEST[s/^/PUT twitter\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/]
[[shrink-index-api-request]]
==== {api-request-title}
`POST /<index>/_shrink/<target-index>`
`PUT /<index>/_shrink/<target-index>`
[[shrink-index-api-prereqs]]
==== {api-prereq-title}
Before you can shrink an index:
* The index must be read-only.
* All primary shards for the index must reside on the same node.
* The index must have a `green` <<cluster-health,health status>>.
To make shard allocation easier, we recommend you also remove the index's
replica shards. You can later re-add replica shards as part of the shrink
operation.
You can use the following <<indices-update-settings,update index settings API>>
request to remove an index's replica shards, relocate the index's remaining
shards to the same node, and make the index read-only.
[source,console]
--------------------------------------------------
PUT /my_source_index/_settings
{
  "settings": {
    "index.number_of_replicas": 0, <1>
    "index.routing.allocation.require._name": "shrink_node_name", <2>
    "index.blocks.write": true <3>
  }
}
--------------------------------------------------
// TEST[s/^/PUT my_source_index\n{"settings":{"index.number_of_shards":2}}\n/]
<1> Removes replica shards for the index.
<2> Relocates the index's shards to the `shrink_node_name` node.
See <<shard-allocation-filtering>>.
<3> Prevents write operations to this index. Metadata changes, such as deleting
the index, are still allowed.
It can take a while to relocate the source index. Progress can be tracked
with the <<cat-recovery,`_cat recovery` API>>, or the <<cluster-health,
`cluster health` API>> can be used to wait until all shards have relocated
with the `wait_for_no_relocating_shards` parameter.
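For example, assuming the `my_source_index` name used in the example above,
requests along the following lines report relocation progress or block until
relocation has finished:

[source,console]
--------------------------------------------------
GET _cat/recovery/my_source_index?v

GET _cluster/health?wait_for_no_relocating_shards=true&timeout=60s
--------------------------------------------------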
[[shrink-index-api-desc]]
==== {api-description-title}
The shrink index API allows you to shrink an existing index into a new index
with fewer primary shards. The requested number of primary shards in the target index
must be a factor of the number of shards in the source index. For example, an index with
`8` primary shards can be shrunk into `4`, `2` or `1` primary shards, or an index
with `15` primary shards can be shrunk into `5`, `3` or `1`. If the number
of shards in the index is a prime number it can only be shrunk into a single
primary shard. Before shrinking, a (primary or replica) copy of every shard
in the index must be present on the same node.
The current write index on a data stream cannot be shrunk. In order to shrink
the current write index, the data stream must first be
<<rollover-data-stream-ex,rolled over>> so that a new write index is created
and then the previous write index can be shrunk.
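For example, assuming a data stream named `my-data-stream` (a placeholder, not
an index defined elsewhere on this page), the rollover step is a single request:

[source,console]
--------------------------------------------------
POST /my-data-stream/_rollover
--------------------------------------------------

After the rollover, the previous write index is a regular backing index and can
be shrunk as described in the rest of this page.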
[[how-shrink-works]]
===== How shrinking works
A shrink operation:
. Creates a new target index with the same definition as the source
index, but with a smaller number of primary shards.
. Hard-links segments from the source index into the target index. (If
the file system doesn't support hard-linking, then all segments are copied
into the new index, which is a much more time-consuming process. Also, if
using multiple data paths, shards on different data paths require a full copy
of segment files if they are not on the same disk, since hard links don't work
across disks.)
. Recovers the target index as though it were a closed index which
had just been re-opened.
[[_shrinking_an_index]]
===== Shrink an index
To shrink `my_source_index` into a new index called `my_target_index`, issue
the following request:
[source,console]
--------------------------------------------------
POST /my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.routing.allocation.require._name": null, <1>
    "index.blocks.write": null <2>
  }
}
--------------------------------------------------
// TEST[continued]
<1> Clear the allocation requirement copied from the source index.
<2> Clear the index write block copied from the source index.
The above request returns immediately once the target index has been added to
the cluster state -- it doesn't wait for the shrink operation to start.
[IMPORTANT]
=====================================
Indices can only be shrunk if they satisfy the following requirements:
* The target index must not exist.
* The source index must have more primary shards than the target index.
* The number of primary shards in the target index must be a factor of the
  number of primary shards in the source index.
* The index must not contain more than `2,147,483,519` documents in total
  across all shards that will be shrunk into a single shard on the target index,
  as this is the maximum number of docs that can fit into a single shard.
* The node handling the shrink process must have sufficient free disk space to
accommodate a second copy of the existing index.
=====================================
The `_shrink` API is similar to the <<indices-create-index, `create index` API>>
and accepts `settings` and `aliases` parameters for the target index:
[source,console]
--------------------------------------------------
POST /my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.number_of_replicas": 1,
    "index.number_of_shards": 1, <1>
    "index.codec": "best_compression" <2>
  },
  "aliases": {
    "my_search_indices": {}
  }
}
--------------------------------------------------
// TEST[s/^/PUT my_source_index\n{"settings": {"index.number_of_shards":5,"index.blocks.write": true}}\n/]
<1> The number of shards in the target index. This must be a factor of the
number of shards in the source index.
<2> Best compression will only take effect when new writes are made to the
index, such as when <<indices-forcemerge,force-merging>> the shard to a single
segment, as shown in the sketch below.
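As a sketch of that follow-up step, assuming the `my_target_index` name from
the request above, a force merge down to a single segment rewrites the target
index's data with the new codec:

[source,console]
--------------------------------------------------
POST /my_target_index/_forcemerge?max_num_segments=1
--------------------------------------------------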
NOTE: Mappings may not be specified in the `_shrink` request.
[[monitor-shrink]]
===== Monitor the shrink process
The shrink process can be monitored with the <<cat-recovery,`_cat recovery`
API>>, or the <<cluster-health, `cluster health` API>> can be used to wait
until all primary shards have been allocated by setting the `wait_for_status`
parameter to `yellow`.
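For example, assuming the `my_target_index` name used earlier on this page, the
following request blocks until the target index reaches `yellow` health, that
is, until all of its primary shards are allocated:

[source,console]
--------------------------------------------------
GET /_cluster/health/my_target_index?wait_for_status=yellow&timeout=30s
--------------------------------------------------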
The `_shrink` API returns as soon as the target index has been added to the
cluster state, before any shards have been allocated. At this point, all
shards are in the state `unassigned`. If, for any reason, the target index
can't be allocated on the shrink node, its primary shard will remain
`unassigned` until it can be allocated on that node.
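If the primary stays `unassigned`, the cluster allocation explain API can help
diagnose why. A minimal sketch, again assuming the `my_target_index` name from
the examples above:

[source,console]
--------------------------------------------------
GET _cluster/allocation/explain
{
  "index": "my_target_index",
  "shard": 0,
  "primary": true
}
--------------------------------------------------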
Once the primary shard is allocated, it moves to state `initializing`, and the
shrink process begins. When the shrink operation completes, the shard will
become `active`. At that point, Elasticsearch will try to allocate any
replicas and may decide to relocate the primary shard to another node.
[[shrink-wait-active-shards]]
===== Wait for active shards
Because the shrink operation creates a new index to shrink the shards to,
the <<create-index-wait-for-active-shards,wait for active shards>> setting
on index creation applies to the shrink index action as well.
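As a sketch, this means the `wait_for_active_shards` query parameter can be set
on the shrink request itself, for example to wait for the primary and one
replica of each target shard before the call returns (reusing the index names
from the examples above):

[source,console]
--------------------------------------------------
POST /my_source_index/_shrink/my_target_index?wait_for_active_shards=2
{
  "settings": {
    "index.number_of_replicas": 1
  }
}
--------------------------------------------------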
[[shrink-index-api-path-params]]
==== {api-path-parms-title}
`<index>`::
(Required, string)
Name of the source index to shrink.
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index]
[[shrink-index-api-query-params]]
==== {api-query-parms-title}
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards]
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
[[shrink-index-api-request-body]]
==== {api-request-body-title}
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index-aliases]
include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=target-index-settings]