[[cat-shards]]
=== cat shards API
++++
<titleabbrev>cat shards</titleabbrev>
++++
The `shards` command is the detailed view of which nodes contain which shards.
It tells you whether each shard is a primary or replica, how many documents it
holds, the bytes it takes on disk, and the node where it is located.
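
For example, a minimal request like the following lists every shard in the
cluster. It assumes the `twitter` example index used elsewhere in these docs;
the `v` query parameter, described below, adds a header row:

[source,console]
---------------------------------------------------------------------------
GET _cat/shards?v
---------------------------------------------------------------------------
// TEST[setup:twitter]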
[[cat-shards-api-request]]
==== {api-request-title}
`GET /_cat/shards/<index>`
[[cat-shards-path-params]]
==== {api-path-parms-title}
include::{docdir}/rest-api/common-parms.asciidoc[tag=index]
[[cat-shards-query-params]]
==== {api-query-parms-title}
include::{docdir}/rest-api/common-parms.asciidoc[tag=bytes]
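
For example, a request like the following displays byte values, such as the
`store` column, in kilobytes instead of the human-readable default. This is
only an illustration; `bytes` accepts any of the standard byte size units:

[source,console]
---------------------------------------------------------------------------
GET _cat/shards?bytes=kb
---------------------------------------------------------------------------
// TEST[setup:twitter]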
include::{docdir}/rest-api/common-parms.asciidoc[tag=http-format]
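
For example, the following request asks for the response as JSON rather than
the default plain-text table, which can be easier to consume programmatically:

[source,console]
---------------------------------------------------------------------------
GET _cat/shards?format=json
---------------------------------------------------------------------------
// TEST[setup:twitter]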
include::{docdir}/rest-api/common-parms.asciidoc[tag=cat-h]
+
--
If you do not specify which columns to include, the API returns the default
columns in the order listed below. If you explicitly specify one or more
columns, it returns only the specified columns. See the example request that
follows this list.
Valid columns are:

`index`, `i`, `idx`::
(Default) Name of the index, such as `twitter`.

`shard`, `s`, `sh`::
(Default) Name of the shard.

`prirep`, `p`, `pr`, `primaryOrReplica`::
(Default) Shard type. Returned values are `primary` or `replica`.

`state`, `st`::
(Default) State of the shard. Returned values are:
+
* `INITIALIZING`: The shard is recovering from a peer shard or gateway.
* `RELOCATING`: The shard is relocating.
* `STARTED`: The shard has started.
* `UNASSIGNED`: The shard is not assigned to any node.

`docs`, `d`, `dc`::
(Default) Number of documents in the shard, such as `25`.

`store`, `sto`::
(Default) Disk space used by the shard, such as `5kb`.

`ip`::
(Default) IP address of the node, such as `127.0.1.1`.

`id`::
(Default) ID of the node, such as `k0zy`.

`node`, `n`::
(Default) Node name, such as `I8hydUG`.

`completion.size`, `cs`, `completionSize`::
Size of completion, such as `0b`.

`fielddata.memory_size`, `fm`, `fielddataMemory`::
Used fielddata cache memory, such as `0b`.

`fielddata.evictions`, `fe`, `fielddataEvictions`::
Fielddata cache evictions, such as `0`.

`flush.total`, `ft`, `flushTotal`::
Number of flushes, such as `1`.

`flush.total_time`, `ftt`, `flushTotalTime`::
Time spent in flush, such as `1`.

`get.current`, `gc`, `getCurrent`::
Number of current get operations, such as `0`.

`get.time`, `gti`, `getTime`::
Time spent in get, such as `14ms`.

`get.total`, `gto`, `getTotal`::
Number of get operations, such as `2`.

`get.exists_time`, `geti`, `getExistsTime`::
Time spent in successful gets, such as `14ms`.

`get.exists_total`, `geto`, `getExistsTotal`::
Number of successful get operations, such as `2`.

`get.missing_time`, `gmti`, `getMissingTime`::
Time spent in failed gets, such as `0s`.

`get.missing_total`, `gmto`, `getMissingTotal`::
Number of failed get operations, such as `1`.

`indexing.delete_current`, `idc`, `indexingDeleteCurrent`::
Number of current deletion operations, such as `0`.

`indexing.delete_time`, `idti`, `indexingDeleteTime`::
Time spent in deletions, such as `2ms`.

`indexing.delete_total`, `idto`, `indexingDeleteTotal`::
Number of deletion operations, such as `2`.

`indexing.index_current`, `iic`, `indexingIndexCurrent`::
Number of current indexing operations, such as `0`.

`indexing.index_time`, `iiti`, `indexingIndexTime`::
Time spent in indexing, such as `134ms`.

`indexing.index_total`, `iito`, `indexingIndexTotal`::
Number of indexing operations, such as `1`.

`indexing.index_failed`, `iif`, `indexingIndexFailed`::
Number of failed indexing operations, such as `0`.

`merges.current`, `mc`, `mergesCurrent`::
Number of current merge operations, such as `0`.

`merges.current_docs`, `mcd`, `mergesCurrentDocs`::
Number of current merging documents, such as `0`.

`merges.current_size`, `mcs`, `mergesCurrentSize`::
Size of current merges, such as `0b`.

`merges.total`, `mt`, `mergesTotal`::
Number of completed merge operations, such as `0`.

`merges.total_docs`, `mtd`, `mergesTotalDocs`::
Number of merged documents, such as `0`.

`merges.total_size`, `mts`, `mergesTotalSize`::
Size of completed merges, such as `0b`.

`merges.total_time`, `mtt`, `mergesTotalTime`::
Time spent merging documents, such as `0s`.

`query_cache.memory_size`, `qcm`, `queryCacheMemory`::
Used query cache memory, such as `0b`.

`query_cache.evictions`, `qce`, `queryCacheEvictions`::
Query cache evictions, such as `0`.

`recoverysource.type`, `rs`::
Type of recovery source.

`refresh.total`, `rto`, `refreshTotal`::
Number of refreshes, such as `16`.

`refresh.time`, `rti`, `refreshTime`::
Time spent in refreshes, such as `91ms`.

`search.fetch_current`, `sfc`, `searchFetchCurrent`::
Current fetch phase operations, such as `0`.

`search.fetch_time`, `sfti`, `searchFetchTime`::
Time spent in fetch phase, such as `37ms`.

`search.fetch_total`, `sfto`, `searchFetchTotal`::
Number of fetch operations, such as `7`.

`search.open_contexts`, `so`, `searchOpenContexts`::
Open search contexts, such as `0`.

`search.query_current`, `sqc`, `searchQueryCurrent`::
Current query phase operations, such as `0`.

`search.query_time`, `sqti`, `searchQueryTime`::
Time spent in query phase, such as `43ms`.

`search.query_total`, `sqto`, `searchQueryTotal`::
Number of query operations, such as `9`.

`search.scroll_current`, `scc`, `searchScrollCurrent`::
Open scroll contexts, such as `2`.

`search.scroll_time`, `scti`, `searchScrollTime`::
Time scroll contexts held open, such as `2m`.

`search.scroll_total`, `scto`, `searchScrollTotal`::
Completed scroll contexts, such as `1`.

`segments.count`, `sc`, `segmentsCount`::
Number of segments, such as `4`.

`segments.memory`, `sm`, `segmentsMemory`::
Memory used by segments, such as `1.4kb`.

`segments.index_writer_memory`, `siwm`, `segmentsIndexWriterMemory`::
Memory used by the index writer, such as `18mb`.

`segments.version_map_memory`, `svmm`, `segmentsVersionMapMemory`::
Memory used by the version map, such as `1.0kb`.

`segments.fixed_bitset_memory`, `sfbm`, `fixedBitsetMemory`::
Memory used by fixed bit sets for nested object field types and type filters
for types referred to in <<parent-join,`join`>> fields, such as `1.0kb`.

`seq_no.global_checkpoint`, `sqg`, `globalCheckpoint`::
Global checkpoint.

`seq_no.local_checkpoint`, `sql`, `localCheckpoint`::
Local checkpoint.

`seq_no.max`, `sqm`, `maxSeqNo`::
Maximum sequence number.

`suggest.current`, `suc`, `suggestCurrent`::
Number of current suggest operations, such as `0`.

`suggest.time`, `suti`, `suggestTime`::
Time spent in suggest, such as `0`.

`suggest.total`, `suto`, `suggestTotal`::
Number of suggest operations, such as `0`.

`sync_id`::
Sync ID of the shard.

`unassigned.at`, `ua`::
Time at which the shard became unassigned in
https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
Time (UTC)].

`unassigned.details`, `ud`::
Details about why the shard became unassigned.

`unassigned.for`, `uf`::
Time at which the shard was requested to be unassigned in
https://en.wikipedia.org/wiki/List_of_UTC_time_offsets[Coordinated Universal
Time (UTC)].

[[reason-unassigned]]
`unassigned.reason`, `ur`::
Reason the shard is unassigned. Returned values are:
+
* `ALLOCATION_FAILED`: Unassigned as a result of a failed allocation of the shard.
* `CLUSTER_RECOVERED`: Unassigned as a result of a full cluster recovery.
* `DANGLING_INDEX_IMPORTED`: Unassigned as a result of importing a dangling index.
* `EXISTING_INDEX_RESTORED`: Unassigned as a result of restoring into a closed index.
* `INDEX_CREATED`: Unassigned as a result of an API creation of an index.
* `INDEX_REOPENED`: Unassigned as a result of opening a closed index.
* `NEW_INDEX_RESTORED`: Unassigned as a result of restoring into a new index.
* `NODE_LEFT`: Unassigned as a result of the node hosting it leaving the cluster.
* `REALLOCATED_REPLICA`: A better replica location is identified and causes the existing replica allocation to be cancelled.
* `REINITIALIZED`: When a shard moves from started back to initializing, for example, with shadow replicas.
* `REPLICA_ADDED`: Unassigned as a result of the explicit addition of a replica.
* `REROUTE_CANCELLED`: Unassigned as a result of an explicit cancel reroute command.
--
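
As an illustration of column selection, the following request combines `h`
with the `s` (sort) parameter documented below to return only the listed
columns, sorted by shard size in descending order:

[source,console]
---------------------------------------------------------------------------
GET _cat/shards?v&h=index,shard,prirep,state,docs,store&s=store:desc
---------------------------------------------------------------------------
// TEST[setup:twitter]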
include::{docdir}/rest-api/common-parms.asciidoc[tag=help]
include::{docdir}/rest-api/common-parms.asciidoc[tag=local]
include::{docdir}/rest-api/common-parms.asciidoc[tag=master-timeout]
include::{docdir}/rest-api/common-parms.asciidoc[tag=cat-s]
include::{docdir}/rest-api/common-parms.asciidoc[tag=cat-v]
[[cat-shards-api-example]]
==== {api-examples-title}
[[cat-shards-api-example-single]]
===== Example with a single index
[source,console]
---------------------------------------------------------------------------
GET _cat/shards
---------------------------------------------------------------------------
// TEST[setup:twitter]
The API returns the following response:
[source,txt]
---------------------------------------------------------------------------
twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA
---------------------------------------------------------------------------
// TESTRESPONSE[s/3014/\\d+/]
// TESTRESPONSE[s/31.1mb/\\d+(\.\\d+)?[kmg]?b/]
// TESTRESPONSE[s/192.168.56.10/.*/]
// TESTRESPONSE[s/H5dfFeA/node-0/ non_json]
[[cat-shards-api-example-wildcard]]
===== Example with an index wildcard pattern
If your cluster has many shards, you can use a wildcard pattern in the
`<index>` path parameter to limit the API request.
The following request returns information for any indices beginning with
`twitt`.
[source,console]
---------------------------------------------------------------------------
GET _cat/shards/twitt*
---------------------------------------------------------------------------
// TEST[setup:twitter]
The API returns the following response:
[source,txt]
---------------------------------------------------------------------------
twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA
---------------------------------------------------------------------------
// TESTRESPONSE[s/3014/\\d+/]
// TESTRESPONSE[s/31.1mb/\\d+(\.\\d+)?[kmg]?b/]
// TESTRESPONSE[s/192.168.56.10/.*/]
// TESTRESPONSE[s/H5dfFeA/node-0/ non_json]
[[relocation]]
===== Example with a relocating shard
[source,console]
---------------------------------------------------------------------------
GET _cat/shards
---------------------------------------------------------------------------
// TEST[skip:for now, relocation cannot be recreated]
The API returns the following response:
[source,txt]
---------------------------------------------------------------------------
twitter 0 p RELOCATING 3014 31.1mb 192.168.56.10 H5dfFeA -> 192.168.56.30 bGG90GE
---------------------------------------------------------------------------
// TESTRESPONSE[non_json]
The `RELOCATING` value in the `state` column indicates that the index shard is
relocating.
[[states]]
===== Example with shard states
Before a shard is available for use, it goes through an `INITIALIZING` state.
You can use the cat shards API to see which shards are initializing.
[source,console]
---------------------------------------------------------------------------
GET _cat/shards
---------------------------------------------------------------------------
// TEST[skip:there is no guarantee to test for shards in initializing state]
The API returns the following response:
[source,txt]
---------------------------------------------------------------------------
twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA
twitter 0 r INITIALIZING 0 14.3mb 192.168.56.30 bGG90GE
---------------------------------------------------------------------------
// TESTRESPONSE[non_json]
===== Example with reasons for unassigned shards
The following request returns the `unassigned.reason` column, which indicates
why a shard is unassigned.
[source,console]
---------------------------------------------------------------------------
GET _cat/shards?h=index,shard,prirep,state,unassigned.reason
---------------------------------------------------------------------------
// TEST[skip:for now]
The API returns the following response:
[source,txt]
---------------------------------------------------------------------------
twitter 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA
twitter 0 r STARTED 3014 31.1mb 192.168.56.30 bGG90GE
twitter 0 r STARTED 3014 31.1mb 192.168.56.20 I8hydUG
twitter 0 r UNASSIGNED ALLOCATION_FAILED
---------------------------------------------------------------------------
// TESTRESPONSE[non_json]