Convert more docs to // CONSOLE
Converts docs for `_cat/segments`, `_cat/plugins` and `_cat/repositories` from `curl` to `// CONSOLE` so they are tested as part of the build and are cleaner to use in Console. They should still work fine with `curl` via the `COPY AS CURL` link. Also swaps the `source` type of the responses from `js` to `txt` because that is more correct. The syntax highlighter doesn't care either way; it inspects the text to figure out the language, so `_cat` responses look a little funny regardless. Relates to #18160
This commit is contained in:
parent 1bc08ff1e5
commit 44c3b04bef
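
The same pattern is applied in each converted file: the request becomes a `// CONSOLE` snippet and the sample response becomes a plain `txt` block, with a `// TESTRESPONSE[...]` line where the build should verify the output. Roughly (a sketch assembled from the hunks below; `_cat/example` and its columns are placeholders, not a real endpoint):

[source,js]
--------------------------------------------------
GET /_cat/example?v
--------------------------------------------------
// CONSOLE

Might respond with:

[source,txt]
--------------------------------------------------
column1 column2
value1  value2
--------------------------------------------------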
@@ -93,10 +93,7 @@ buildRestTests.expectedUnconvertedCandidates = [
   'reference/analysis/tokenfilters/stop-tokenfilter.asciidoc',
   'reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc',
   'reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc',
-  'reference/cat/plugins.asciidoc',
   'reference/cat/recovery.asciidoc',
-  'reference/cat/repositories.asciidoc',
-  'reference/cat/segments.asciidoc',
   'reference/cat/shards.asciidoc',
   'reference/cat/snapshots.asciidoc',
   'reference/cat/templates.asciidoc',
@@ -41,7 +41,7 @@ GET /_cat/aliases?v
 
 Might respond with:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 alias index filter routing.index routing.search
 alias1 test1 - - -
@@ -13,7 +13,7 @@ GET /_cat/allocation?v
 
 Might respond with:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
 5 260b 47.3gb 43.4gb 100.7gb 46 127.0.0.1 127.0.0.1 CSUXak2
@@ -14,7 +14,7 @@ GET /_cat/count?v
 
 Looks like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 epoch timestamp count
 1475868259 15:24:19 121
@@ -30,7 +30,7 @@ GET /_cat/count/twitter?v
 // CONSOLE
 // TEST[continued]
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 epoch timestamp count
 1475868259 15:24:20 120
@@ -47,7 +47,7 @@ GET /_cat/fielddata?v
 
 Looks like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 id host ip node field size
 Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body 544b
@@ -67,7 +67,7 @@ GET /_cat/fielddata?v&fields=body
 
 Which looks like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 id host ip node field size
 Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body 544b
@@ -86,7 +86,7 @@ GET /_cat/fielddata/body,soul?v
 
 Which produces the same output as the first snippet:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 id host ip node field size
 Nqk-6inXQq-OxUfOUI8jNQ 127.0.0.1 127.0.0.1 Nqk-6in body 544b
@@ -11,7 +11,7 @@ GET /_cat/health?v
 // CONSOLE
 // TEST[s/^/PUT twitter\n{"settings":{"number_of_replicas": 0}}\n/]
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
 1475871424 16:17:04 elasticsearch green 1 1 5 5 0 0 0 0 - 100.0%
@@ -29,7 +29,7 @@ GET /_cat/health?v&ts=0
 
 which looks like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
 elasticsearch green 1 1 5 5 0 0 0 0 - 100.0%
@@ -14,7 +14,7 @@ GET /_cat/indices/twi*?v&s=index
 
 Might respond with:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
 yellow open twitter u8FNjxh8Rfy_awN11oDKYQ 1 1 1200 0 88.1kb 88.1kb
@@ -51,7 +51,7 @@ GET /_cat/indices?v&health=yellow
 
 Which looks like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
 yellow open twitter u8FNjxh8Rfy_awN11oDKYQ 1 1 1200 0 88.1kb 88.1kb
@@ -71,7 +71,7 @@ GET /_cat/indices?v&s=store.size:desc
 
 Which looks like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
 yellow open twitter u8FNjxh8Rfy_awN11oDKYQ 1 1 1200 0 88.1kb 88.1kb
@@ -12,7 +12,7 @@ GET /_cat/master?v
 
 might respond:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 id host ip node
 YzWoH_2BT-6UjVGDyPdqYg 127.0.0.1 127.0.0.1 YzWoH_2
@@ -12,7 +12,7 @@ GET /_cat/nodeattrs?v
 
 Could look like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 node host ip attr value
 EK_AsJb 127.0.0.1 127.0.0.1 testattr test
@@ -11,7 +11,7 @@ GET /_cat/nodes?v
 
 Might look like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
 127.0.0.1 65 99 42 3.07 mdi * mJw06l1
@@ -13,7 +13,7 @@ GET /_cat/pending_tasks?v
 
 Might look like:
 
-[source,js]
+[source,txt]
 --------------------------------------------------
 insertOrder timeInQueue priority source
 1685 855ms HIGH update-mapping [foo][t]
@@ -3,12 +3,36 @@
 
 The `plugins` command provides a view per node of running plugins. This information *spans nodes*.
 
-[source,sh]
+[source,js]
 ------------------------------------------------------------------------------
-% curl 'localhost:9200/_cat/plugins?v'
-name component version description
-I8hydUG discovery-gce 5.0.0 The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism.
-I8hydUG lang-javascript 5.0.0 The JavaScript language plugin allows to have javascript as the language of scripts to execute.
+GET /_cat/plugins?v&s=component&h=name,component,version,description
 ------------------------------------------------------------------------------
+// CONSOLE
+
+Might look like:
+
+["source","txt",subs="attributes,callouts"]
+------------------------------------------------------------------------------
+name component version description
+U7321H6 analysis-icu {version} The ICU Analysis plugin integrates Lucene ICU module into elasticsearch, adding ICU relates analysis components.
+U7321H6 analysis-kuromoji {version} The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch.
+U7321H6 analysis-phonetic {version} The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch.
+U7321H6 analysis-smartcn {version} Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch.
+U7321H6 analysis-stempel {version} The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch.
+U7321H6 discovery-azure-classic {version} The Azure Classic Discovery plugin allows to use Azure Classic API for the unicast discovery mechanism
+U7321H6 discovery-ec2 {version} The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism.
+U7321H6 discovery-file {version} Discovery file plugin enables unicast discovery from hosts stored in a file.
+U7321H6 discovery-gce {version} The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism.
+U7321H6 ingest-attachment {version} Ingest processor that uses Apache Tika to extract contents
+U7321H6 ingest-geoip {version} Ingest processor that uses looksup geo data based on ip adresses using the Maxmind geo database
+U7321H6 ingest-user-agent {version} Ingest processor that extracts information from a user agent
+U7321H6 jvm-example {version} Demonstrates all the pluggable Java entry points in Elasticsearch
+U7321H6 lang-javascript {version} The JavaScript language plugin allows to have javascript as the language of scripts to execute.
+U7321H6 lang-python {version} The Python language plugin allows to have python as the language of scripts to execute.
+U7321H6 mapper-murmur3 {version} The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index.
+U7321H6 mapper-size {version} The Mapper Size plugin allows document to record their uncompressed size at index time.
+U7321H6 store-smb {version} The Store SMB plugin adds support for SMB stores.
+-------------------------------------------------------------------------------
+// TESTRESPONSE[s/([.()])/\\$1/ s/U7321H6/.+/ _cat]
 
 We can tell quickly how many plugins per node we have and which versions.
@@ -1,14 +1,24 @@
 [[cat-repositories]]
 == cat repositories
 
-The `repositories` command shows the snapshot repositories registered in the cluster.
+The `repositories` command shows the snapshot repositories registered in the
+cluster. For example:
 
-[source,sh]
+[source,js]
 --------------------------------------------------
+GET /_cat/repositories?v
+--------------------------------------------------
+// CONSOLE
+// TEST[s/^/PUT \/_snapshot\/repo1\n{"type": "fs", "settings": {"location": "repo\/1"}}\n/]
+
+might looks like:
+
+[source,txt]
+--------------------------------------------------
-% curl 'localhost:9200/_cat/repositories?v'
 id type
 repo1 fs
 repo2 s3
 --------------------------------------------------
+// TESTRESPONSE[s/\nrepo2 s3// _cat]
 
 We can quickly see which repositories are registered and their type.
@@ -3,24 +3,24 @@
 
 The `segments` command provides low level information about the segments
 in the shards of an index. It provides information similar to the
-link:indices-segments.html[_segments] endpoint.
+link:indices-segments.html[_segments] endpoint. For example:
 
-[source,sh]
+[source,js]
 --------------------------------------------------
-% curl 'http://localhost:9200/_cat/segments?v'
-index shard prirep ip segment generation docs.count [...]
-test 4 p 192.168.2.105 _0 0 1
-test1 2 p 192.168.2.105 _0 0 1
-test1 3 p 192.168.2.105 _2 2 1
+GET /_cat/segments?v
 --------------------------------------------------
+// CONSOLE
+// TEST[s/^/PUT \/test\/test\/1?refresh\n{"test":"test"}\nPUT \/test1\/test\/1?refresh\n{"test":"test"}\n/]
 
-[source,sh]
+might look like:
+
+["source","txt",subs="attributes,callouts"]
 --------------------------------------------------
-[...] docs.deleted size size.memory committed searchable version compound
-0 2.9kb 7818 false true 4.10.2 true
-0 2.9kb 7818 false true 4.10.2 true
-0 2.9kb 7818 false true 4.10.2 true
+index shard prirep ip segment generation docs.count docs.deleted size size.memory committed searchable version compound
+test 3 p 127.0.0.1 _0 0 1 0 3kb 2042 false true {lucene_version} true
+test1 3 p 127.0.0.1 _0 0 1 0 3kb 2042 false true {lucene_version} true
 --------------------------------------------------
+// TESTRESPONSE[s/3kb/\\d+(\\.\\d+)?[mk]?b/ s/2042/\\d+/ _cat]
 
 The output shows information about index names and shard numbers in the first
 two columns.
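
As the commit message notes, the converted snippets still work from the command line: the docs' `COPY AS CURL` link renders a `// CONSOLE` request such as `GET /_cat/segments?v` as a plain `curl` call, roughly like the following (a sketch; the exact host and flags emitted by the link may differ):

[source,sh]
--------------------------------------------------
curl -XGET "localhost:9200/_cat/segments?v"
--------------------------------------------------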