[[cluster]]
= Cluster APIs

[partintro]
--
["float",id="cluster-nodes"]
== Node specification

Most cluster-level APIs let you specify which nodes to execute on (for
example, getting the node stats for a node). Nodes can be identified in
the APIs by their internal node id, node name, address, custom
attributes, or just the `_local` node receiving the request. For
example, here are some sample executions of nodes info:

[source,js]
--------------------------------------------------
# Local
curl localhost:9200/_nodes/_local
# Address
curl localhost:9200/_nodes/10.0.0.3,10.0.0.4
curl localhost:9200/_nodes/10.0.0.*
# Names
curl localhost:9200/_nodes/node_name_goes_here
curl localhost:9200/_nodes/node_name_goes_*
# Attributes (set something like node.rack: 2 in the config)
curl localhost:9200/_nodes/rack:2
curl localhost:9200/_nodes/ra*:2
curl localhost:9200/_nodes/ra*:2*
--------------------------------------------------
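The specifiers above can also be combined in a single request as a
comma-separated list, and `_all` addresses every node in the cluster. A
brief sketch, reusing the sample node names from the examples above:

[source,js]
--------------------------------------------------
# All nodes
curl localhost:9200/_nodes/_all
# Combine filters: the local node plus any node whose name matches
curl localhost:9200/_nodes/_local,node_name_goes_*
--------------------------------------------------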
--

include::cluster/health.asciidoc[]

include::cluster/state.asciidoc[]

include::cluster/stats.asciidoc[]

include::cluster/pending.asciidoc[]

include::cluster/reroute.asciidoc[]

include::cluster/update-settings.asciidoc[]

include::cluster/nodes-stats.asciidoc[]

include::cluster/nodes-info.asciidoc[]

include::cluster/nodes-task.asciidoc[]

include::cluster/nodes-hot-threads.asciidoc[]

include::cluster/allocation-explain.asciidoc[]