[[cat-indices]]
=== cat indices

The `indices` command provides a cross-section of each index. This
information *spans nodes*. For example:

[source,js]
--------------------------------------------------
GET /_cat/indices/twi*?v&s=index
--------------------------------------------------
// CONSOLE
// TEST[setup:huge_twitter]
// TEST[s/^/PUT twitter2\n{"settings": {"number_of_replicas": 0}}\n/]

Might respond with:

[source,txt]
--------------------------------------------------
health status index    uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   twitter  u8FNjxh8Rfy_awN11oDKYQ   1   1       1200            0     88.1kb         88.1kb
green  open   twitter2 nYFWZEO7TUiOjLQXBaYJpA   1   0          0            0       260b           260b
--------------------------------------------------
// TESTRESPONSE[s/\d+(\.\d+)?[tgmk]?b/\\d+(\\.\\d+)?[tgmk]?b/]
// TESTRESPONSE[s/u8FNjxh8Rfy_awN11oDKYQ|nYFWZEO7TUiOjLQXBaYJpA/.+/ non_json]

We can quickly tell how many shards make up an index, the number of
docs, deleted docs, primary store size, and total store size (all shards including replicas).
All these exposed metrics come directly from Lucene APIs.

*Notes:*

1. As the document and deleted document counts shown here are at the Lucene level,
they include all the hidden documents (e.g. from nested documents) as well.

2. To get the actual count of documents at the Elasticsearch level, the recommended way
is to use either the <<cat-count>> or the <<search-count>>.
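
For instance, a quick way to see the Elasticsearch-level count is the
<<cat-count>> API (a minimal sketch; it assumes the `twitter` index
from the examples on this page exists):

[source,js]
--------------------------------------------------
GET /_cat/count/twitter?v
--------------------------------------------------
// CONSOLE
// TEST[continued]
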
[float]
[[pri-flag]]
==== Primaries

By default, the index stats are shown for all of an index's
shards, including replicas. A `pri` flag can be supplied to view
the relevant stats in the context of only the primary shards.
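
For example, to show the stats for the `twitter` index restricted to
its primary shards (a minimal sketch; the same `pri` flag appears in
the merge-count example below):

[source,js]
--------------------------------------------------
GET /_cat/indices/twitter?pri&v
--------------------------------------------------
// CONSOLE
// TEST[continued]
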
[float]
[[examples]]
==== Examples

Which indices are yellow?

[source,js]
--------------------------------------------------
GET /_cat/indices?v&health=yellow
--------------------------------------------------
// CONSOLE
// TEST[continued]

Which looks like:

[source,txt]
--------------------------------------------------
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   twitter u8FNjxh8Rfy_awN11oDKYQ   1   1       1200            0     88.1kb         88.1kb
--------------------------------------------------
// TESTRESPONSE[s/\d+(\.\d+)?[tgmk]?b/\\d+(\\.\\d+)?[tgmk]?b/]
// TESTRESPONSE[s/u8FNjxh8Rfy_awN11oDKYQ/.+/ non_json]

Which index has the largest number of documents?

[source,js]
--------------------------------------------------
GET /_cat/indices?v&s=docs.count:desc
--------------------------------------------------
// CONSOLE
// TEST[continued]

Which looks like:

[source,txt]
--------------------------------------------------
health status index    uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   twitter  u8FNjxh8Rfy_awN11oDKYQ   1   1       1200            0     88.1kb         88.1kb
green  open   twitter2 nYFWZEO7TUiOjLQXBaYJpA   1   0          0            0       260b           260b
--------------------------------------------------
// TESTRESPONSE[s/\d+(\.\d+)?[tgmk]?b/\\d+(\\.\\d+)?[tgmk]?b/]
// TESTRESPONSE[s/u8FNjxh8Rfy_awN11oDKYQ|nYFWZEO7TUiOjLQXBaYJpA/.+/ non_json]

How many merge operations have the shards for the `twitter` index completed?

[source,js]
--------------------------------------------------
GET /_cat/indices/twitter?pri&v&h=health,index,pri,rep,docs.count,mt
--------------------------------------------------
// CONSOLE
// TEST[continued]

Might look like:

[source,txt]
--------------------------------------------------
health index   pri rep docs.count mt pri.mt
yellow twitter   1   1       1200 16     16
--------------------------------------------------
// TESTRESPONSE[s/16/\\d+/ non_json]

How much memory is used per index?

[source,js]
--------------------------------------------------
GET /_cat/indices?v&h=i,tm&s=tm:desc
--------------------------------------------------
// CONSOLE
// TEST[continued]

Might look like:

[source,txt]
--------------------------------------------------
i        tm
twitter  8.1gb
twitter2 30.5kb
--------------------------------------------------
// TESTRESPONSE[s/\d+(\.\d+)?[tgmk]?b/\\d+(\\.\\d+)?[tgmk]?b/]
// TESTRESPONSE[non_json]