CONSOLEify some _cat docs

`/_cat/count`, `/_cat/fielddata`, and `/_cat/health`.

Three more files down, 141 to go.

Relates to #18160
Nik Everett 2016-10-07 16:28:49 -04:00
parent 1bf11dc09a
commit 06049283a0
4 changed files with 146 additions and 45 deletions

docs/build.gradle

@@ -93,9 +93,6 @@ buildRestTests.expectedUnconvertedCandidates = [
   'reference/analysis/tokenfilters/stop-tokenfilter.asciidoc',
   'reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc',
   'reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc',
-  'reference/cat/count.asciidoc',
-  'reference/cat/fielddata.asciidoc',
-  'reference/cat/health.asciidoc',
   'reference/cat/indices.asciidoc',
   'reference/cat/master.asciidoc',
   'reference/cat/nodeattrs.asciidoc',
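The list above acts as an allowlist: files that still contain legacy, untested snippets are enumerated there, and deleting an entry (as this commit does for the three converted files) makes the docs build start enforcing conversion for that file. A toy sketch of that kind of check — illustrative only, not the real `buildRestTests` logic; `check` and its arguments are hypothetical:

```python
# Hypothetical enforcement: a file may contain unconverted snippets only while
# it is still listed on the allowlist.
expected_unconverted = {
    'reference/cat/indices.asciidoc',
    'reference/cat/master.asciidoc',
}

def check(path, has_unconverted_snippets):
    """Pass when the file is allowlisted or fully converted; fail otherwise."""
    return path in expected_unconverted or not has_unconverted_snippets

# count.asciidoc was removed from the list, so it must now be fully converted:
assert check('reference/cat/count.asciidoc', has_unconverted_snippets=False)
assert not check('reference/cat/count.asciidoc', has_unconverted_snippets=True)
```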

docs/reference/cat/count.asciidoc

@@ -4,17 +4,38 @@
 `count` provides quick access to the document count of the entire
 cluster, or individual indices.
 
-[source,sh]
+[source,js]
 --------------------------------------------------
-% curl 192.168.56.10:9200/_cat/indices
-green wiki1 3 0 10000 331 168.5mb 168.5mb
-green wiki2 3 0   428   0     8mb     8mb
-% curl 192.168.56.10:9200/_cat/count
-1384314124582 19:42:04 10428
-% curl 192.168.56.10:9200/_cat/count/wiki2
-1384314139815 19:42:19 428
+GET /_cat/count?v
 --------------------------------------------------
+// CONSOLE
+// TEST[setup:big_twitter]
+// TEST[s/^/POST test\/test\?refresh\n{"test": "test"}\n/]
+
+Looks like:
+
+[source,js]
+--------------------------------------------------
+epoch      timestamp count
+1475868259 15:24:19  121
+--------------------------------------------------
+// TESTRESPONSE[s/1475868259 15:24:19/\\d+ \\d+:\\d+:\\d+/ _cat]
+
+Or for a single index:
+
+[source,js]
+--------------------------------------------------
+GET /_cat/count/twitter?v
+--------------------------------------------------
+// CONSOLE
+// TEST[continued]
+
+[source,js]
+--------------------------------------------------
+epoch      timestamp count
+1475868259 15:24:20  120
+--------------------------------------------------
+// TESTRESPONSE[s/1475868259 15:24:20/\\d+ \\d+:\\d+:\\d+/ _cat]
 
 NOTE: The document count indicates the number of live documents and does not include deleted documents which have not yet been cleaned up by the merge process.
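The `// TESTRESPONSE[s/…/…/]` directives in the converted snippets turn the literal epoch and timestamp into regexes so the docs test passes on any run. A rough sketch of that substitution in Python — the code below is illustrative, not the real docs test harness:

```python
import re

# The literal response body from the count example:
example = "1475868259 15:24:19 121"

# Applying the substitution s/1475868259 15:24:19/\d+ \d+:\d+:\d+/ turns the
# literal into a pattern that matches any epoch and wall-clock time:
pattern = re.sub("1475868259 15:24:19", r"\\d+ \\d+:\\d+:\\d+", example)
assert pattern == r"\d+ \d+:\d+:\d+ 121"

# A response captured on a different run still matches the rewritten pattern:
assert re.fullmatch(pattern, "1509998123 20:15:23 121")
# ...while a different document count does not:
assert re.fullmatch(pattern, "1509998123 20:15:23 99") is None
```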

docs/reference/cat/fielddata.asciidoc

@@ -1,39 +1,98 @@
 [[cat-fielddata]]
 == cat fielddata
 
 `fielddata` shows how much heap memory is currently being used by fielddata
 on every data node in the cluster.
 
-[source,sh]
+////
+Hidden setup snippet to build an index with fielddata so our results are real:
+
+[source,js]
 --------------------------------------------------
-% curl '192.168.56.10:9200/_cat/fielddata?v'
-id                     host    ip            node    field   size
-bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE body    159.8kb
-bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE text    225.7kb
-H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA body    159.8kb
-H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA text    275.3kb
-I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG body    109.2kb
-I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG text    175.3kb
+PUT test
+{
+  "mappings": {
+    "test": {
+      "properties": {
+        "body": {
+          "type": "text",
+          "fielddata": true
+        },
+        "soul": {
+          "type": "text",
+          "fielddata": true
+        }
+      }
+    }
+  }
+}
+
+POST test/test?refresh
+{
+  "body": "some words so there is a little field data",
+  "soul": "some more words"
+}
+
+# Perform a search to load the field data
+POST test/_search?sort=body,soul
 --------------------------------------------------
+// CONSOLE
+////
+
+[source,js]
+--------------------------------------------------
+GET /_cat/fielddata?v
+--------------------------------------------------
+// CONSOLE
+// TEST[continued]
+
+Looks like:
+
+[source,js]
+--------------------------------------------------
+id                     host      ip        node    field size
+Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body  544b
+Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in soul  480b
+--------------------------------------------------
+// TESTRESPONSE[s/544b|480b/\\d+(\\.\\d+)?[tgmk]?b/]
+// TESTRESPONSE[s/Nqk-6in[^ ]*/.+/ s/soul|body/\\w+/ _cat]
 
 Fields can be specified either as a query parameter, or in the URL path:
 
-[source,sh]
+[source,js]
 --------------------------------------------------
-% curl '192.168.56.10:9200/_cat/fielddata?v&fields=body'
-id                     host    ip            node    field size
-bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE body  159.8kb
-H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA body  159.8kb
-I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG body  109.2kb
-% curl '192.168.56.10:9200/_cat/fielddata/body,text?v'
-id                     host    ip            node    field size
-bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE body  159.8kb
-bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE text  225.7kb
-H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA body  159.8kb
-H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA text  275.3kb
-I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG body  109.2kb
-I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG text  175.3kb
+GET /_cat/fielddata?v&fields=body
 --------------------------------------------------
+// CONSOLE
+// TEST[continued]
+
+Which looks like:
+
+[source,js]
+--------------------------------------------------
+id                     host      ip        node    field size
+Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body  544b
+--------------------------------------------------
+// TESTRESPONSE[s/544b|480b/\\d+(\\.\\d+)?[tgmk]?b/]
+// TESTRESPONSE[s/Nqk-6in[^ ]*/.+/ _cat]
+
+And it accepts a comma delimited list:
+
+[source,js]
+--------------------------------------------------
+GET /_cat/fielddata/body,soul?v
+--------------------------------------------------
+// CONSOLE
+// TEST[continued]
+
+Which produces the same output as the first snippet:
+
+[source,js]
+--------------------------------------------------
+id                     host      ip        node    field size
+Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body  544b
+Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in soul  480b
+--------------------------------------------------
+// TESTRESPONSE[s/544b|480b/\\d+(\\.\\d+)?[tgmk]?b/]
+// TESTRESPONSE[s/Nqk-6in[^ ]*/.+/ s/soul|body/\\w+/ _cat]
 
 The output shows the individual fielddata for the `body` and `text` fields, one row per field per node.
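Because `?v` prints a header row and the `_cat` columns are whitespace-delimited, the output is easy to post-process. A minimal Python sketch using the sample rows from the fielddata example (purely illustrative):

```python
# Sample `GET /_cat/fielddata?v` output, taken from the example above:
raw = """\
id                     host      ip        node    field size
Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body  544b
Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in soul  480b"""

# The ?v header row names the columns, so each data line zips into a dict:
header, *lines = raw.splitlines()
rows = [dict(zip(header.split(), line.split())) for line in lines]

sizes = {row["field"]: row["size"] for row in rows}
assert sizes == {"body": "544b", "soul": "480b"}
```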

docs/reference/cat/health.asciidoc

@@ -2,17 +2,39 @@
 == cat health
 
 `health` is a terse, one-line representation of the same information
-from `/_cluster/health`. It has one option `ts` to disable the
-timestamping.
+from `/_cluster/health`.
 
-[source,sh]
+[source,js]
 --------------------------------------------------
-% curl localhost:9200/_cat/health
-1384308967 18:16:07 foo green 3 3 3 3 0 0 0
-% curl 'localhost:9200/_cat/health?v&ts=0'
-cluster status nodeTotal nodeData shards pri relo init unassign tasks
-foo     green          3        3      3   3    0    0        0     0
+GET /_cat/health?v
 --------------------------------------------------
+// CONSOLE
+// TEST[s/^/PUT twitter\n{"settings":{"number_of_replicas": 0}}\n/]
+
+[source,js]
+--------------------------------------------------
+epoch      timestamp cluster        status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
+1475871424 16:17:04  docs_integTest green           1         1      5   5    0    0        0             0                  -                100.0%
+--------------------------------------------------
+// TESTRESPONSE[s/1475871424 16:17:04/\\d+ \\d+:\\d+:\\d+/ s/elasticsearch/[^ ]+/ _cat]
+
+It has one option `ts` to disable the timestamping:
+
+[source,js]
+--------------------------------------------------
+GET /_cat/health?v&ts=0
+--------------------------------------------------
+// CONSOLE
+// TEST[s/^/PUT twitter\n{"settings":{"number_of_replicas": 0}}\n/]
+
+which looks like:
+
+[source,js]
+--------------------------------------------------
+cluster       status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
+elasticsearch green           1         1      5   5    0    0        0             0                  -                100.0%
+--------------------------------------------------
+// TESTRESPONSE[s/elasticsearch/[^ ]+/ _cat]
 
 A common use of this command is to verify the health is consistent
 across nodes:
@@ -27,6 +49,7 @@ across nodes:
 [3] 20:20:52 [SUCCESS] es2.vm
 1384309218 18:20:18 foo green 3 3 3 3 0 0 0 0
 --------------------------------------------------
+// NOTCONSOLE
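Since every node should report the same cluster state, the per-node consistency check described above boils down to comparing the health lines with their timestamps stripped. A quick Python sketch over pssh-style output like the example (illustrative only):

```python
# One health line collected from each of three nodes; only the epoch and
# timestamp columns are allowed to differ between nodes.
responses = [
    "1384309218 18:20:18 foo green 3 3 3 3 0 0 0 0",
    "1384309219 18:20:19 foo green 3 3 3 3 0 0 0 0",
    "1384309218 18:20:18 foo green 3 3 3 3 0 0 0 0",
]

# Drop the first two columns (epoch, timestamp) before comparing:
stripped = {" ".join(line.split()[2:]) for line in responses}
assert len(stripped) == 1  # all nodes agree on cluster name, status, counts
assert stripped == {"foo green 3 3 3 3 0 0 0 0"}
```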
A less obvious use is to track recovery of a large cluster over A less obvious use is to track recovery of a large cluster over
time. With enough shards, starting a cluster, or even recovering after time. With enough shards, starting a cluster, or even recovering after
@@ -42,6 +65,7 @@ to track its progress is by using this command in a delayed loop:
 1384309806 18:30:06 foo green 3 3 1832 916 4 0 0
 ^C
 --------------------------------------------------
+// NOTCONSOLE
In this scenario, we can tell that recovery took roughly four minutes. In this scenario, we can tell that recovery took roughly four minutes.
If this were going on for hours, we would be able to watch the If this were going on for hours, we would be able to watch the