[[cat]]
== cat APIs

["float",id="intro"]
|
2019-07-19 14:35:36 -04:00
|
|
|
=== Introduction

JSON is great... for computers. Even if it's pretty-printed, trying
to find relationships in the data is tedious. Human eyes, especially
when looking at a terminal, need compact and aligned text. The cat APIs
aim to meet this need.

[IMPORTANT]
====
cat APIs are only intended for human consumption using the
{kibana-ref}/console-kibana.html[Kibana console] or command line. They are _not_
intended for use by applications. For application consumption, we recommend
using a corresponding JSON API.
====

All the cat commands accept a query string parameter `help` to see all
the headers and info they provide, and the `/_cat` command alone lists all
the available commands.
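
For example, the following request returns that list. The exact set of
endpoints you see depends on your version and the plugins you have installed:

[source,console]
--------------------------------------------------
GET /_cat
--------------------------------------------------

The response is a plain-text list of paths such as `/_cat/allocation`,
`/_cat/health`, and `/_cat/indices`.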

[discrete]
[[common-parameters]]
=== Common parameters

[discrete]
[[verbose]]
==== Verbose

Each of the commands accepts a query string parameter `v` to turn on
verbose output. For example:

[source,console]
--------------------------------------------------
GET /_cat/master?v
--------------------------------------------------

Might respond with:

[source,txt]
--------------------------------------------------
id                     host      ip        node
u_n93zwxThWHi1PDBJAGAg 127.0.0.1 127.0.0.1 u_n93zw
--------------------------------------------------
// TESTRESPONSE[s/u_n93zw(xThWHi1PDBJAGAg)?/.+/ non_json]

[discrete]
[[help]]
==== Help

Each of the commands accepts a query string parameter `help` which will
output its available columns. For example:

[source,console]
--------------------------------------------------
GET /_cat/master?help
--------------------------------------------------

Might respond with:

[source,txt]
--------------------------------------------------
id   |   | node id
host | h | host name
ip   |   | ip address
node | n | node name
--------------------------------------------------
// TESTRESPONSE[s/[|]/[|]/ non_json]

NOTE: `help` is not supported if any optional URL parameter is used.
For example, `GET _cat/shards/my-index-000001?help` or `GET _cat/indices/my-index-*?help`
results in an error. Use `GET _cat/shards?help` or `GET _cat/indices?help`
instead.

[discrete]
[[headers]]
==== Headers

Each of the commands accepts a query string parameter `h` which forces
only those columns to appear. For example:

[source,console]
--------------------------------------------------
GET /_cat/nodes?h=ip,port,heapPercent,name
--------------------------------------------------

Responds with:

[source,txt]
--------------------------------------------------
127.0.0.1 9300 27 sLBaIGK
--------------------------------------------------
// TESTRESPONSE[s/9300 27 sLBaIGK/\\d+ \\d+ .+/ non_json]

You can also request multiple columns using simple wildcards like
`/_cat/thread_pool?h=ip,queue*` to get all headers (or aliases) starting
with `queue`.
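
For instance, the following request asks for the `ip` column plus every
column whose name or alias starts with `queue`:

[source,console]
--------------------------------------------------
GET /_cat/thread_pool?h=ip,queue*
--------------------------------------------------

On a typical node this expands to columns such as `queue` and `queue_size`.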

[discrete]
[[numeric-formats]]
==== Numeric formats

Many commands provide a few types of numeric output, either a byte, size,
or time value. By default, these types are human-formatted, for example
`3.5mb` instead of `3763212`. The human values are not sortable numerically,
so in order to operate on these values where order is important, you can
change the format.

Say you want to find the largest index in your cluster (storage used
by all the shards, not number of documents). The `/_cat/indices` API
is ideal. You only need to add three things to the API request:

. The `bytes` query string parameter with a value of `b` to get byte-level resolution.
. The `s` (sort) parameter with a value of `store.size:desc` to sort the output
by shard storage in descending order.
. The `v` (verbose) parameter to include column headings in the response.

[source,console]
--------------------------------------------------
GET /_cat/indices?bytes=b&s=store.size:desc&v
--------------------------------------------------
// TEST[setup:my_index_huge]
// TEST[s/^/PUT my-index-000002\n{"settings": {"number_of_replicas": 0}}\n/]

The API returns the following response:

[source,txt]
--------------------------------------------------
health status index           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   my-index-000001 u8FNjxh8Rfy_awN11oDKYQ   1   1       1200            0      72171          72171
green  open   my-index-000002 nYFWZEO7TUiOjLQXBaYJpA   1   0          0            0        230            230
--------------------------------------------------
// TESTRESPONSE[s/72171|230/\\d+/]
// TESTRESPONSE[s/u8FNjxh8Rfy_awN11oDKYQ|nYFWZEO7TUiOjLQXBaYJpA/.+/ non_json]
// TESTRESPONSE[skip:"AwaitsFix https://github.com/elastic/elasticsearch/issues/51619"]

If you want to change the <<time-units,time units>>, use the `time` parameter.

If you want to change the <<size-units,size units>>, use the `size` parameter.

If you want to change the <<byte-units,byte units>>, use the `bytes` parameter.
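
For example, a request like the following asks the recovery API to report
times in seconds and sizes in kilobytes instead of the default human-readable
values (the columns returned depend on the command and your cluster state):

[source,console]
--------------------------------------------------
GET /_cat/recovery?v&time=s&bytes=kb
--------------------------------------------------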

[discrete]
==== Response as text, json, smile, yaml or cbor

[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/indices?format=json&pretty'
[
  {
    "pri.store.size": "650b",
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "pri": "5",
    "rep": "1",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "650b"
  }
]
--------------------------------------------------
// NOTCONSOLE

Currently supported formats (for the `?format=` parameter):
- text (default)
- json
- smile
- yaml
- cbor
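
For example, a YAML version of the same listing could be requested with
something like:

[source,sh]
--------------------------------------------------
% curl 'localhost:9200/_cat/indices?format=yaml'
--------------------------------------------------
// NOTCONSOLE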

Alternatively you can set the "Accept" HTTP header to the appropriate media format.
All formats above are supported; the GET parameter takes precedence over the header.
For example:

[source,sh]
--------------------------------------------------
% curl '192.168.56.10:9200/_cat/indices?pretty' -H "Accept: application/json"
[
  {
    "pri.store.size": "650b",
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "pri": "5",
    "rep": "1",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "650b"
  }
]
--------------------------------------------------
// NOTCONSOLE

[discrete]
[[sort]]
==== Sort

Each of the commands accepts a query string parameter `s` which sorts the table by
the columns specified as the parameter value. Columns are specified either by name or by
alias, and are provided as a comma-separated string. By default, sorting is done in
ascending order. Appending `:desc` to a column will invert the ordering for
that column. `:asc` is also accepted but exhibits the same behavior as the default sort order.

For example, with a sort string `s=column1,column2:desc,column3`, the table will be
sorted in ascending order by column1, in descending order by column2, and in ascending
order by column3.

[source,console]
--------------------------------------------------
GET _cat/templates?v&s=order:desc,index_patterns
--------------------------------------------------

returns:

[source,txt]
--------------------------------------------------
name                  index_patterns order version
pizza_pepperoni       [*pepperoni*]  2
sushi_california_roll [*avocado*]    1     1
pizza_hawaiian        [*pineapples*] 1
--------------------------------------------------

include::cat/alias.asciidoc[]

include::cat/allocation.asciidoc[]

include::cat/anomaly-detectors.asciidoc[]

include::cat/count.asciidoc[]

include::cat/dataframeanalytics.asciidoc[]

include::cat/datafeeds.asciidoc[]

include::cat/fielddata.asciidoc[]

include::cat/health.asciidoc[]

include::cat/indices.asciidoc[]

include::cat/master.asciidoc[]

include::cat/nodeattrs.asciidoc[]

include::cat/nodes.asciidoc[]

include::cat/pending_tasks.asciidoc[]

include::cat/plugins.asciidoc[]

include::cat/recovery.asciidoc[]

include::cat/repositories.asciidoc[]

include::cat/shards.asciidoc[]

include::cat/segments.asciidoc[]

include::cat/snapshots.asciidoc[]

include::cat/tasks.asciidoc[]

include::cat/templates.asciidoc[]

include::cat/thread_pool.asciidoc[]

include::cat/trainedmodel.asciidoc[]

include::cat/transforms.asciidoc[]