[Docs] Convert more doc snippets (#26359)

This commit converts some remaining doc snippets so that they are now
testable.
This commit is contained in:
Tanguy Leroux 2017-08-28 11:23:09 +02:00 committed by GitHub
parent f842ff1ae1
commit f95dec797d
8 changed files with 357 additions and 105 deletions

View File

@ -93,7 +93,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
*
* `sh` snippets that contain `curl` almost always should be marked
* with `// CONSOLE`. In the exceptionally rare cases where they are
* not communicating with Elasticsearch, like the xamples in the ec2
* not communicating with Elasticsearch, like the examples in the ec2
* and gce discovery plugins, the snippets should be marked
* `// NOTCONSOLE`. */
return snippet.language == 'js' || snippet.curl

View File

@ -32,10 +32,7 @@ buildRestTests.expectedUnconvertedCandidates = [
'reference/aggregations/matrix/stats-aggregation.asciidoc',
'reference/aggregations/metrics/tophits-aggregation.asciidoc',
'reference/cluster/allocation-explain.asciidoc',
'reference/cluster/nodes-info.asciidoc',
'reference/cluster/pending.asciidoc',
'reference/cluster/state.asciidoc',
'reference/cluster/stats.asciidoc',
'reference/cluster/tasks.asciidoc',
'reference/docs/delete-by-query.asciidoc',
'reference/docs/reindex.asciidoc',
@ -43,9 +40,6 @@ buildRestTests.expectedUnconvertedCandidates = [
'reference/index-modules/similarity.asciidoc',
'reference/index-modules/store.asciidoc',
'reference/index-modules/translog.asciidoc',
'reference/indices/recovery.asciidoc',
'reference/indices/segments.asciidoc',
'reference/indices/shard-stores.asciidoc',
'reference/search/profile.asciidoc',
]

View File

@ -6,9 +6,10 @@ the cluster nodes information.
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_nodes'
curl -XGET 'http://localhost:9200/_nodes/nodeId1,nodeId2'
GET /_nodes
GET /_nodes/nodeId1,nodeId2
--------------------------------------------------
// CONSOLE
The first command retrieves information of all the nodes in the cluster.
The second command selectively retrieves nodes information of only
@ -52,14 +53,22 @@ It also allows to get only information on `settings`, `os`, `process`, `jvm`,
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_nodes/process'
curl -XGET 'http://localhost:9200/_nodes/_all/process'
curl -XGET 'http://localhost:9200/_nodes/nodeId1,nodeId2/jvm,process'
# same as above
curl -XGET 'http://localhost:9200/_nodes/nodeId1,nodeId2/info/jvm,process'
# return just process
GET /_nodes/process
curl -XGET 'http://localhost:9200/_nodes/nodeId1,nodeId2/_all
# same as above
GET /_nodes/_all/process
# return just jvm and process of only nodeId1 and nodeId2
GET /_nodes/nodeId1,nodeId2/jvm,process
# same as above
GET /_nodes/nodeId1,nodeId2/info/jvm,process
# return all the information of only nodeId1 and nodeId2
GET /_nodes/nodeId1,nodeId2/_all
--------------------------------------------------
// CONSOLE
The `_all` flag can be set to return all the information - or you can simply omit it.
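For example, these two calls are equivalent (a sketch following the snippets above; `nodeId1` and `nodeId2` are placeholder node ids):

[source,js]
--------------------------------------------------
# return all the information of only nodeId1 and nodeId2
GET /_nodes/nodeId1,nodeId2/_all
# same as above, with the _all flag omitted
GET /_nodes/nodeId1,nodeId2
--------------------------------------------------
// CONSOLE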
@ -110,23 +119,36 @@ the current running process:
[[plugins-info]]
==== Plugins information
`plugins` - if set, the result will contain details about the installed plugins
per node:
`plugins` - if set, the result will contain details about the installed plugins and modules per node:
* `name`: plugin name
* `version`: version of Elasticsearch the plugin was built for
* `description`: short description of the plugin's purpose
* `classname`: fully-qualified class name of the plugin's entry point
* `has_native_controller`: whether or not the plugin has a native controller process
[source,js]
--------------------------------------------------
GET /_nodes/plugins
--------------------------------------------------
// CONSOLE
// TEST[setup:node]
The result will look similar to:
[source,js]
--------------------------------------------------
{
"_nodes": ...
"cluster_name": "elasticsearch",
"nodes": {
"O70_wBv6S9aPPcAKdSUBtw": {
"USpTGYaBSIKbgSUJR2Z9lg": {
"name": "node-0",
"transport_address": "192.168.17:9300",
"host": "node-0.elastic.co",
"ip": "192.168.17",
"version": "{version}",
"build_hash": "587409e",
"roles": [
"master",
"data",
"ingest"
],
"attributes": {},
"plugins": [
{
"name": "analysis-icu",
@ -149,11 +171,41 @@ The result will look similar to:
"classname": "org.elasticsearch.ingest.useragent.IngestUserAgentPlugin",
"has_native_controller": false
}
],
"modules": [
{
"name": "lang-painless",
"version": "{version}",
"description": "An easy, safe and fast scripting language for Elasticsearch",
"classname": "org.elasticsearch.painless.PainlessPlugin",
"has_native_controller": false
}
]
}
}
}
--------------------------------------------------
// TESTRESPONSE[s/"_nodes": \.\.\./"_nodes": $body.$_path,/]
// TESTRESPONSE[s/"elasticsearch"/$body.cluster_name/]
// TESTRESPONSE[s/"USpTGYaBSIKbgSUJR2Z9lg"/\$node_name/]
// TESTRESPONSE[s/"name": "node-0"/"name": $body.$_path/]
// TESTRESPONSE[s/"transport_address": "192.168.17:9300"/"transport_address": $body.$_path/]
// TESTRESPONSE[s/"host": "node-0.elastic.co"/"host": $body.$_path/]
// TESTRESPONSE[s/"ip": "192.168.17"/"ip": $body.$_path/]
// TESTRESPONSE[s/"build_hash": "587409e"/"build_hash": $body.$_path/]
// TESTRESPONSE[s/"roles": \[[^\]]*\]/"roles": $body.$_path/]
// TESTRESPONSE[s/"attributes": \{[^\}]*\}/"attributes": $body.$_path/]
// TESTRESPONSE[s/"plugins": \[[^\]]*\]/"plugins": $body.$_path/]
// TESTRESPONSE[s/"modules": \[[^\]]*\]/"modules": $body.$_path/]
The following information is available for each plugin and module:
* `name`: plugin name
* `version`: version of Elasticsearch the plugin was built for
* `description`: short description of the plugin's purpose
* `classname`: fully-qualified class name of the plugin's entry point
* `has_native_controller`: whether or not the plugin has a native controller process
[float]
[[ingest-info]]
@ -162,16 +214,30 @@ The result will look similar to:
`ingest` - if set, the result will contain details about the available
processors per node:
* `type`: the processor type
[source,js]
--------------------------------------------------
GET /_nodes/ingest
--------------------------------------------------
// CONSOLE
// TEST[setup:node]
The result will look similar to:
[source,js]
--------------------------------------------------
{
"_nodes": ...
"cluster_name": "elasticsearch",
"nodes": {
"O70_wBv6S9aPPcAKdSUBtw": {
"USpTGYaBSIKbgSUJR2Z9lg": {
"name": "node-0",
"transport_address": "192.168.17:9300",
"host": "node-0.elastic.co",
"ip": "192.168.17",
"version": "{version}",
"build_hash": "587409e",
"roles": [],
"attributes": {},
"ingest": {
"processors": [
{
@ -221,4 +287,19 @@ The result will look similar to:
}
}
}
--------------------------------------------------
--------------------------------------------------
// TESTRESPONSE[s/"_nodes": \.\.\./"_nodes": $body.$_path,/]
// TESTRESPONSE[s/"elasticsearch"/$body.cluster_name/]
// TESTRESPONSE[s/"USpTGYaBSIKbgSUJR2Z9lg"/\$node_name/]
// TESTRESPONSE[s/"name": "node-0"/"name": $body.$_path/]
// TESTRESPONSE[s/"transport_address": "192.168.17:9300"/"transport_address": $body.$_path/]
// TESTRESPONSE[s/"host": "node-0.elastic.co"/"host": $body.$_path/]
// TESTRESPONSE[s/"ip": "192.168.17"/"ip": $body.$_path/]
// TESTRESPONSE[s/"build_hash": "587409e"/"build_hash": $body.$_path/]
// TESTRESPONSE[s/"roles": \[[^\]]*\]/"roles": $body.$_path/]
// TESTRESPONSE[s/"attributes": \{[^\}]*\}/"attributes": $body.$_path/]
// TESTRESPONSE[s/"processors": \[[^\]]*\]/"processors": $body.$_path/]
The following information is available for each ingest processor:
* `type`: the processor type

View File

@ -6,8 +6,9 @@ the whole cluster.
[source,js]
--------------------------------------------------
$ curl -XGET 'http://localhost:9200/_cluster/state'
GET /_cluster/state
--------------------------------------------------
// CONSOLE
The response provides the cluster name, the total compressed size
of the cluster state (its size when serialized for transmission over
@ -27,8 +28,9 @@ it is possible to filter the cluster state response specifying the parts in the
[source,js]
--------------------------------------------------
$ curl -XGET 'http://localhost:9200/_cluster/state/{metrics}/{indices}'
GET /_cluster/state/{metrics}/{indices}
--------------------------------------------------
// CONSOLE
`metrics` can be a comma-separated list of
@ -50,17 +52,27 @@ $ curl -XGET 'http://localhost:9200/_cluster/state/{metrics}/{indices}'
`blocks`::
Shows the `blocks` part of the response
A couple of example calls:
The following example returns only `metadata` and `routing_table` data for the `foo` and `bar` indices:
[source,js]
--------------------------------------------------
# return only metadata and routing_table data for specified indices
$ curl -XGET 'http://localhost:9200/_cluster/state/metadata,routing_table/foo,bar'
# return everything for these two indices
$ curl -XGET 'http://localhost:9200/_cluster/state/_all/foo,bar'
# Return only blocks data
$ curl -XGET 'http://localhost:9200/_cluster/state/blocks'
GET /_cluster/state/metadata,routing_table/foo,bar
--------------------------------------------------
// CONSOLE
The next example returns everything for the `foo` and `bar` indices:
[source,js]
--------------------------------------------------
GET /_cluster/state/_all/foo,bar
--------------------------------------------------
// CONSOLE
And this example returns only `blocks` data:
[source,js]
--------------------------------------------------
GET /_cluster/state/blocks
--------------------------------------------------
// CONSOLE

View File

@ -8,21 +8,28 @@ versions, memory usage, cpu and installed plugins).
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_cluster/stats?human&pretty'
GET /_cluster/stats?human&pretty
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
Will return, for example:
["source","js",subs="attributes,callouts"]
--------------------------------------------------
{
"timestamp": 1459427693515,
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name": "elasticsearch",
"timestamp": 1459427693515,
"status": "green",
"indices": {
"count": 2,
"count": 1,
"shards": {
"total": 10,
"primaries": 10,
"total": 5,
"primaries": 5,
"replication": 0,
"index": {
"shards": {
@ -48,9 +55,7 @@ Will return, for example:
},
"store": {
"size": "16.2kb",
"size_in_bytes": 16684,
"throttle_time": "0s",
"throttle_time_in_millis": 0
"size_in_bytes": 16684
},
"fielddata": {
"memory_size": "0b",
@ -83,6 +88,8 @@ Will return, for example:
"term_vectors_memory_in_bytes": 0,
"norms_memory": "384b",
"norms_memory_in_bytes": 384,
"points_memory" : "0b",
"points_memory_in_bytes" : 0,
"doc_values_memory": "744b",
"doc_values_memory_in_bytes": 744,
"index_writer_memory": "0b",
@ -91,10 +98,8 @@ Will return, for example:
"version_map_memory_in_bytes": 0,
"fixed_bit_set": "0b",
"fixed_bit_set_memory_in_bytes": 0,
"max_unsafe_auto_id_timestamp" : -9223372036854775808,
"file_sizes": {}
},
"percolator": {
"num_queries": 0
}
},
"nodes": {
@ -188,8 +193,22 @@ Will return, for example:
"classname": "org.elasticsearch.ingest.useragent.IngestUserAgentPlugin",
"has_native_controller": false
}
]
],
"network_types" : {
"transport_types" : {
"netty4" : 1
},
"http_types" : {
"netty4" : 1
}
}
}
}
--------------------------------------------------
// TESTRESPONSE[s/"plugins": \[[^\]]*\]/"plugins": $body.$_path/]
// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/]
// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/]
////
The TESTRESPONSE substitutions above replace all the field values with the expected ones in the test,
because we don't really care about the field values but we do want to check the field names.
////
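The substitution patterns above are sed-like regexes applied to the expected response body. Conceptually, a pattern such as `s/: (\-)?[0-9]+/: $body.$_path/` masks every numeric field value so that only the field names are asserted. A rough JavaScript sketch of that masking step (illustrative only, not the actual build harness code):

```javascript
// Rough sketch of what a TESTRESPONSE substitution like
//   s/: (\-)?[0-9]+/: $body.$_path/
// does to an expected response body: every numeric field value is
// replaced by a placeholder that the test harness resolves at runtime.
function maskNumericValues(expectedBody) {
  // Use a replacement function so the literal "$body.$_path" text
  // cannot be misread as a regex replacement pattern.
  return expectedBody.replace(/: -?[0-9]+/g, () => ': $body.$_path');
}

const snippet = '"total" : 1, "failed" : 0, "size": "16.2kb"';
console.log(maskNumericValues(snippet));
```

Quoted string values like `"16.2kb"` are untouched here; the separate `s/: "[^"]*"/…/` substitution handles those.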

View File

@ -15,12 +15,60 @@ GET index1,index2/_recovery?human
To see cluster-wide recovery status simply leave out the index names.
//////////////////////////
Here we create a repository and snapshot index1 in
order to restore it right afterwards, and print out the
indices recovery result.
[source,js]
--------------------------------------------------
# create the index
PUT index1
{"settings": {"index.number_of_shards": 1}}
# create the repository
PUT /_snapshot/my_repository
{"type": "fs","settings": {"location": "recovery_asciidoc" }}
# snapshot the index
PUT /_snapshot/my_repository/snap_1?wait_for_completion=true
# delete the index
DELETE index1
# and restore the snapshot
POST /_snapshot/my_repository/snap_1/_restore?wait_for_completion=true
--------------------------------------------------
// CONSOLE
[source,js]
--------------------------------------------------
{
"snapshot": {
"snapshot": "snap_1",
"indices": [
"index1"
],
"shards": {
"total": 1,
"failed": 0,
"successful": 1
}
}
}
--------------------------------------------------
// TESTRESPONSE
//////////////////////////
[source,js]
--------------------------------------------------
GET /_recovery?human
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT index1\n{"settings": {"index.number_of_shards": 1}}\n/]
// TEST[continued]
Response:
[source,js]
@ -34,16 +82,20 @@ Response:
"primary" : true,
"start_time" : "2014-02-24T12:15:59.716",
"start_time_in_millis": 1393244159716,
"stop_time" : "0s",
"stop_time_in_millis" : 0,
"total_time" : "2.9m",
"total_time_in_millis" : 175576,
"source" : {
"repository" : "my_repository",
"snapshot" : "my_snapshot",
"index" : "index1"
"index" : "index1",
"version" : "{version}"
},
"target" : {
"id" : "ryqJ5lO5S4-lSFbGntkEkg",
"hostname" : "my.fqdn",
"host" : "my.fqdn",
"transport_address" : "my.fqdn",
"ip" : "10.0.1.7",
"name" : "my_es_node"
},
@ -64,7 +116,11 @@ Response:
"percent" : "94.5%"
},
"total_time" : "0s",
"total_time_in_millis" : 0
"total_time_in_millis" : 0,
"source_throttle_time" : "0s",
"source_throttle_time_in_millis" : 0,
"target_throttle_time" : "0s",
"target_throttle_time_in_millis" : 0
},
"translog" : {
"recovered" : 0,
@ -74,7 +130,7 @@ Response:
"total_time" : "0s",
"total_time_in_millis" : 0,
},
"start" : {
"verify_index" : {
"check_index_time" : "0s",
"check_index_time_in_millis" : 0,
"total_time" : "0s",
@ -84,7 +140,12 @@ Response:
}
}
--------------------------------------------------
// We should really assert that this is up to date but that is hard!
// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/]
// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/]
////
The TESTRESPONSE substitutions above replace all the field values with the expected ones in the test,
because we don't really care about the field values but we do want to check the field names.
////
The above response shows a single index recovering a single shard. In this case, the source of the recovery is a snapshot repository
and the target of the recovery is the node with name "my_es_node".
@ -97,6 +158,8 @@ In some cases a higher level of detail may be preferable. Setting "detailed=true
--------------------------------------------------
GET _recovery?human&detailed=true
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT index1\n{"settings": {"index.number_of_shards": 1}}\n/]
Response:
@ -117,13 +180,15 @@ Response:
"total_time_in_millis" : 2115,
"source" : {
"id" : "RGMdRc-yQWWKIBM4DGvwqQ",
"hostname" : "my.fqdn",
"host" : "my.fqdn",
"transport_address" : "my.fqdn",
"ip" : "10.0.1.7",
"name" : "my_es_node"
},
"target" : {
"id" : "RGMdRc-yQWWKIBM4DGvwqQ",
"hostname" : "my.fqdn",
"host" : "my.fqdn",
"transport_address" : "my.fqdn",
"ip" : "10.0.1.7",
"name" : "my_es_node"
},
@ -154,20 +219,27 @@ Response:
"name" : "segments_2",
"length" : 251,
"recovered" : 251
},
...
}
]
},
"total_time" : "2ms",
"total_time_in_millis" : 2
"total_time_in_millis" : 2,
"source_throttle_time" : "0s",
"source_throttle_time_in_millis" : 0,
"target_throttle_time" : "0s",
"target_throttle_time_in_millis" : 0
},
"translog" : {
"recovered" : 71,
"total" : 0,
"percent" : "100.0%",
"total_on_start" : 0,
"total_time" : "2.0s",
"total_time_in_millis" : 2025
},
"start" : {
"verify_index" : {
"check_index_time" : 0,
"check_index_time_in_millis" : 0,
"total_time" : "88ms",
"total_time_in_millis" : 88
}
@ -175,7 +247,15 @@ Response:
}
}
--------------------------------------------------
// We should really assert that this is up to date but that is hard!
// TESTRESPONSE[s/"source" : \{[^}]*\}/"source" : $body.$_path/]
// TESTRESPONSE[s/"details" : \[[^\]]*\]//]
// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/]
// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/]
////
The TESTRESPONSE substitutions above replace all the field values with the expected ones in the test,
because we don't really care about the field values but we do want to check the field names.
They also remove the "details" part, which is important in this doc but really hard to test.
////
This response shows a detailed listing (truncated for brevity) of the actual files recovered and their sizes.

View File

@ -6,36 +6,78 @@ is built with. It can be used to provide more information on the
state of a shard and an index, possibly optimization information, data
"wasted" on deletes, and so on.
Endpoints include segments for a specific index, several indices, or
all:
Endpoints include segments for a specific index:
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/test/_segments'
curl -XGET 'http://localhost:9200/test1,test2/_segments'
curl -XGET 'http://localhost:9200/_segments'
GET /test/_segments
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test\n{"settings":{"number_of_shards":1, "number_of_replicas": 0}}\nPOST test\/test\?refresh\n{"test": "test"}\n/]
// TESTSETUP
For several indices:
[source,js]
--------------------------------------------------
GET /test1,test2/_segments
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test1\nPUT test2\n/]
Or for all indices:
[source,js]
--------------------------------------------------
GET /_segments
--------------------------------------------------
// CONSOLE
Response:
[source,js]
--------------------------------------------------
{
...
"_3": {
"generation": 3,
"num_docs": 1121,
"deleted_docs": 53,
"size_in_bytes": 228288,
"memory_in_bytes": 3211,
"committed": true,
"search": true,
"version": "4.6",
"compound": true
}
...
"_shards": ...
"indices": {
"test": {
"shards": {
"0": [
{
"routing": {
"state": "STARTED",
"primary": true,
"node": "zDC_RorJQCao9xf9pg3Fvw"
},
"num_committed_segments": 0,
"num_search_segments": 1,
"segments": {
"_0": {
"generation": 0,
"num_docs": 1,
"deleted_docs": 0,
"size_in_bytes": 3800,
"memory_in_bytes": 1410,
"committed": false,
"search": true,
"version": "7.0.0",
"compound": true,
"attributes": {
}
}
}
}
]
}
}
}
}
--------------------------------------------------
// TESTRESPONSE[s/"_shards": \.\.\./"_shards": $body._shards,/]
// TESTRESPONSE[s/"node": "zDC_RorJQCao9xf9pg3Fvw"/"node": $body.$_path/]
// TESTRESPONSE[s/"attributes": \{[^}]*\}/"attributes": $body.$_path/]
// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/]
// TESTRESPONSE[s/7\.0\.0/$body.$_path/]
_0:: The key of the JSON document is the name of the segment. This name
is used to generate file names: all files starting with this
@ -74,6 +116,8 @@ compound:: Whether the segment is stored in a compound file. When true, this
means that Lucene merged all files from the segment in a single
one in order to save file descriptors.
attributes:: Contains information about whether high compression was enabled
[float]
=== Verbose mode
@ -83,8 +127,9 @@ NOTE: The format of the additional detail information is labelled as experimenta
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/test/_segments?verbose=true'
GET /test/_segments?verbose=true
--------------------------------------------------
// CONSOLE
Response:
@ -92,7 +137,7 @@ Response:
--------------------------------------------------
{
...
"_3": {
"_0": {
...
"ram_tree": [
{
@ -114,3 +159,5 @@ Response:
...
}
--------------------------------------------------
// NOTCONSOLE
// Response is too verbose to be fully shown in documentation, so we just show the relevant bit and don't test the response.

View File

@ -17,10 +17,17 @@ indices, or all:
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/test/_shard_stores'
curl -XGET 'http://localhost:9200/test1,test2/_shard_stores'
curl -XGET 'http://localhost:9200/_shard_stores'
# return information of only index test
GET /test/_shard_stores
# return information of only test1 and test2 indices
GET /test1,test2/_shard_stores
# return information of all indices
GET /_shard_stores
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT test\nPUT test1\nPUT test2\n/]
The scope of shards for which to list store information can be changed through the
`status` param. It defaults to 'yellow' and 'red'. 'yellow' lists store information of
@ -30,8 +37,11 @@ Use 'green' to list store information for shards with all assigned copies.
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_shard_stores?status=green'
GET /_shard_stores?status=green
--------------------------------------------------
// CONSOLE
// TEST[setup:node]
// TEST[s/^/PUT my-index\n{"settings":{"number_of_shards":1, "number_of_replicas": 0}}\nPOST my-index\/test\?refresh\n{"test": "test"}\n/]
Response:
@ -40,27 +50,36 @@ The shard stores information is grouped by indices and shard ids.
[source,js]
--------------------------------------------------
{
...
"0": { <1>
"stores": [ <2>
{
"sPa3OgxLSYGvQ4oPs-Tajw": { <3>
"name": "node_t0",
"transport_address": "local[1]",
"attributes": {
"mode": "local"
"indices": {
"my-index": {
"shards": {
"0": { <1>
"stores": [ <2>
{
"sPa3OgxLSYGvQ4oPs-Tajw": { <3>
"name": "node_t0",
"ephemeral_id" : "9NlXRFGCT1m8tkvYCMK-8A",
"transport_address": "local[1]",
"attributes": {}
},
"allocation_id": "2iNySv_OQVePRX-yaRH_lQ", <4>
"allocation" : "primary|replica|unused" <5>
"store_exception": ... <6>
}
},
"allocation_id": "2iNySv_OQVePRX-yaRH_lQ", <4>
"allocation" : "primary" | "replica" | "unused", <5>
"store_exception": ... <6>
},
...
]
},
...
]
}
}
}
}
}
--------------------------------------------------
// TESTRESPONSE[s/"store_exception": \.\.\.//]
// TESTRESPONSE[s/"sPa3OgxLSYGvQ4oPs-Tajw"/\$node_name/]
// TESTRESPONSE[s/: "[^"]*"/: $body.$_path/]
// TESTRESPONSE[s/"attributes": \{[^}]*\}/"attributes": $body.$_path/]
<1> The key is the corresponding shard id for the store information
<2> A list of store information for all copies of the shard
<3> The node information that hosts a copy of the store, the key