[[misc-cluster]]
=== Miscellaneous cluster settings

[[cluster-read-only]]
==== Metadata

An entire cluster may be set to read-only with the following _dynamic_ settings:

`cluster.blocks.read_only`::
Make the whole cluster read only (indices do not accept write
operations), and prevent any modification of the cluster metadata, such
as creating or deleting indices.

`cluster.blocks.read_only_allow_delete`::
Identical to `cluster.blocks.read_only`, but allows deleting indices to
free up resources.

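For example, the read-only block could be applied (and removed again by
setting the value back to `false`) with a request like this one, using the
cluster settings API:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.blocks.read_only": true
  }
}
-------------------------------
// CONSOLE
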
WARNING: Don't rely on this setting to prevent changes to your cluster. Any
user with access to the <<cluster-update-settings,cluster-update-settings>>
API can make the cluster read-write again.

[[cluster-shard-limit]]
==== Cluster Shard Limit

There is a soft limit on the number of shards in a cluster, based on the number
of nodes in the cluster. This is intended to prevent operations which may
unintentionally destabilize the cluster.

IMPORTANT: This limit is intended as a safety net, not a sizing recommendation. The
exact number of shards your cluster can safely support depends on your hardware
configuration and workload, but should remain well below this limit in almost
all cases, as the default limit is set quite high.

If an operation, such as creating a new index, restoring a snapshot of an index,
or opening a closed index, would lead to the number of shards in the cluster
going over this limit, the operation will fail with an error indicating the
shard limit.

If the cluster is already over the limit, due to changes in node membership or
setting changes, all operations that create or open indices will fail until
either the limit is increased as described below, or some indices are
<<indices-open-close,closed>> or <<indices-delete-index,deleted>> to bring the
number of shards below the limit.

Replicas count towards this limit, but closed indices do not. An index with 5
primary shards and 2 replicas will be counted as 15 shards. Any closed index
is counted as 0, no matter how many shards and replicas it contains.

The limit defaults to 1,000 shards per data node, and can be dynamically
adjusted using the following property:

`cluster.max_shards_per_node`::
Controls the number of shards allowed in the cluster per data node.

For example, a 3-node cluster with the default setting would allow 3,000 shards
total, across all open indices. If the above setting is changed to 500, then
the cluster would allow 1,500 shards total.

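For instance, the limit could be lowered at runtime with a request along these
lines, using the cluster settings API:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 500
  }
}
-------------------------------
// CONSOLE
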
NOTE: If there are no data nodes in the cluster, the limit will not be enforced.
This allows the creation of indices during cluster creation if dedicated master
nodes are set up before data nodes.

[[user-defined-data]]
==== User Defined Cluster Metadata

User-defined metadata can be stored and retrieved using the Cluster Settings API.
This can be used to store arbitrary, infrequently-changing data about the cluster
without the need to create an index to store it. This data may be stored using
any key prefixed with `cluster.metadata.`. For example, to store the email
address of the administrator of a cluster under the key `cluster.metadata.administrator`,
issue this request:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.metadata.administrator": "sysadmin@example.com"
  }
}
-------------------------------
// CONSOLE

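Since the value is stored as an ordinary persistent cluster setting, it can be
read back through the get-settings API, for example:

[source,js]
-------------------------------
GET /_cluster/settings
-------------------------------
// CONSOLE
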
IMPORTANT: User-defined cluster metadata is not intended to store sensitive or
confidential information. Any information stored in user-defined cluster
metadata will be viewable by anyone with access to the
<<cluster-get-settings,Cluster Get Settings>> API, and is recorded in the
{es} logs.

[[cluster-max-tombstones]]
==== Index Tombstones

The cluster state maintains index tombstones to explicitly denote indices that
have been deleted. The number of tombstones maintained in the cluster state is
controlled by the following property, which cannot be updated dynamically:

`cluster.indices.tombstones.size`::
Index tombstones prevent nodes that are not part of the cluster when a delete
occurs from joining the cluster and reimporting the index as though the delete
was never issued. To keep the cluster state from growing huge, we only keep the
last `cluster.indices.tombstones.size` deletes, which defaults to 500. You can
increase it if you expect nodes to be absent from the cluster and miss more
than 500 deletes. We think that is rare, thus the default. Tombstones don't take
up much space, but we also think that a number like 50,000 is probably too big.

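Because the setting is not dynamic, it has to be set in the node configuration
before startup. A minimal sketch of the corresponding `elasticsearch.yml` entry,
assuming you want to keep the last 1,000 deletes:

[source,yaml]
-------------------------------
cluster.indices.tombstones.size: 1000
-------------------------------
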
[[cluster-logger]]
==== Logger

The settings which control logging can be updated dynamically with the
`logger.` prefix. For instance, to increase the logging level of the
`indices.recovery` module to `DEBUG`, issue this request:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
-------------------------------
// CONSOLE

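The override can later be removed by resetting the setting to `null`, which
restores the default log level, for example:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": null
  }
}
-------------------------------
// CONSOLE
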
[[persistent-tasks-allocation]]
==== Persistent Tasks Allocations

Plugins can create a kind of task called a persistent task. Those tasks are
usually long-lived and are stored in the cluster state, allowing the
tasks to be revived after a full cluster restart.

Every time a persistent task is created, the master node takes care of
assigning the task to a node of the cluster, and the assigned node will then
pick up the task and execute it locally. The process of assigning persistent
tasks to nodes is controlled by the following property, which can be updated
dynamically:

`cluster.persistent_tasks.allocation.enable`::
+
--
Enable or disable allocation for persistent tasks:

* `all` - (default) Allows persistent tasks to be assigned to nodes
* `none` - No allocations are allowed for any type of persistent task

This setting does not affect the persistent tasks that are already being executed.
Only newly created persistent tasks, or tasks that must be reassigned (for
example, after a node has left the cluster), are affected by this setting.
--
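
For example, assignment of new persistent tasks could be paused with a request
like this one, using the cluster settings API:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
-------------------------------
// CONSOLE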