[[cluster]]
= Cluster APIs

[partintro]
--
["float",id="cluster-nodes"]
== Node specification

Some cluster-level APIs may operate on a subset of the nodes, which can be
specified with _node filters_. For example, the <<tasks,Task Management>>,
<<cluster-nodes-stats,Nodes Stats>>, and <<cluster-nodes-info,Nodes Info>> APIs
can all report results from a filtered set of nodes rather than from all nodes.

_Node filters_ are written as a comma-separated list of individual filters,
each of which adds or removes nodes from the chosen subset. Each filter can be
one of the following:

* `_all`, to add all nodes to the subset.
* `_local`, to add the local node to the subset.
* `_master`, to add the currently-elected master node to the subset.
* a node id or name, to add this node to the subset.
* an IP address or hostname, to add all matching nodes to the subset.
* a pattern, using `*` wildcards, which adds all nodes to the subset
whose name, address or hostname matches the pattern.
* `master:true`, `data:true`, `ingest:true` or `coordinating_only:true`, which
respectively add to the subset all master-eligible nodes, all data nodes,
all ingest nodes, and all coordinating-only nodes.
* `master:false`, `data:false`, `ingest:false` or `coordinating_only:false`,
which respectively remove from the subset all master-eligible nodes, all data
nodes, all ingest nodes, and all coordinating-only nodes.
* a pair of patterns, using `*` wildcards, of the form `attrname:attrvalue`,
which adds to the subset all nodes with a custom node attribute whose name
and value match the respective patterns. Custom node attributes are
configured by setting properties in the configuration file of the form
`node.attr.attrname: attrvalue`.
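
For example, to tag a node with a custom attribute named `rack` whose value is
`2` (both the attribute name and the value here are purely illustrative), the
node's configuration file would contain:

[source,yaml]
--------------------------------------------------
node.attr.rack: 2
--------------------------------------------------

Such a node then matches attribute filters like `rack:2` or `ra*:2*`, as shown
in the examples at the end of this section.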

NOTE: node filters run in the order in which they are given, which is important
if using filters that remove nodes from the set. For example
`_all,master:false` means all the nodes except the master-eligible ones, but
`master:false,_all` means the same as `_all` because the `_all` filter runs
after the `master:false` filter.

NOTE: if no filters are given, the default is to select all nodes. However, if
any filters are given then they run starting with an empty chosen subset. This
means that filters such as `master:false` which remove nodes from the chosen
subset are only useful if they come after some other filters. When used on its
own, `master:false` selects no nodes.
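
To make these two notes concrete, here is a minimal sketch showing how the
choice and order of filters changes which nodes are selected:

[source,js]
--------------------------------------------------
# Start from all nodes, then remove the master-eligible ones
GET /_nodes/_all,master:false

# Removes nothing from the initially-empty subset and then adds every node,
# so this is equivalent to `_all`
GET /_nodes/master:false,_all

# A removing filter on its own selects no nodes at all
GET /_nodes/master:false
--------------------------------------------------
// CONSOLE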

Here are some further examples of the use of node filters with the
<<cluster-nodes-info,Nodes Info>> APIs.

[source,js]
--------------------------------------------------
# If no filters are given, the default is to select all nodes
GET /_nodes
# Explicitly select all nodes
GET /_nodes/_all
# Select just the local node
GET /_nodes/_local
# Select the elected master node
GET /_nodes/_master
# Select nodes by name, which can include wildcards
GET /_nodes/node_name_goes_here
GET /_nodes/node_name_goes_*
# Select nodes by address, which can include wildcards
GET /_nodes/10.0.0.3,10.0.0.4
GET /_nodes/10.0.0.*
# Select nodes by role
GET /_nodes/_all,master:false
GET /_nodes/data:true,ingest:true
GET /_nodes/coordinating_only:true
# Select nodes by custom attribute (e.g. with something like `node.attr.rack: 2` in the configuration file)
GET /_nodes/rack:2
GET /_nodes/ra*:2
GET /_nodes/ra*:2*
--------------------------------------------------
// CONSOLE
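
The node filters shown above are not limited to the Nodes Info API. As a
sketch, the same `data:true` filter could restrict a
<<cluster-nodes-stats,Nodes Stats>> request to the data nodes:

[source,js]
--------------------------------------------------
# Return node statistics for the data nodes only
GET /_nodes/data:true/stats
--------------------------------------------------
// CONSOLE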
--
include::cluster/health.asciidoc[]
include::cluster/state.asciidoc[]
include::cluster/stats.asciidoc[]
include::cluster/pending.asciidoc[]
include::cluster/reroute.asciidoc[]
include::cluster/update-settings.asciidoc[]
include::cluster/get-settings.asciidoc[]
include::cluster/nodes-stats.asciidoc[]
include::cluster/nodes-info.asciidoc[]
include::cluster/nodes-usage.asciidoc[]
include::cluster/remote-info.asciidoc[]
include::cluster/tasks.asciidoc[]
include::cluster/nodes-hot-threads.asciidoc[]
include::cluster/allocation-explain.asciidoc[]