[[cat-thread-pool]]
== cat thread pool

The `thread_pool` command shows cluster-wide thread pool statistics per node. By default, the
active, queue, and rejected statistics are returned for all thread pools.

[source,js]
--------------------------------------------------
GET /_cat/thread_pool
--------------------------------------------------
// CONSOLE

Which looks like:

[source,txt]
--------------------------------------------------
node-0 bulk 0 0 0
node-0 fetch_shard_started 0 0 0
node-0 fetch_shard_store 0 0 0
node-0 flush 0 0 0
node-0 force_merge 0 0 0
node-0 generic 0 0 0
node-0 get 0 0 0
node-0 index 0 0 0
node-0 listener 0 0 0
node-0 management 1 0 0
node-0 refresh 0 0 0
node-0 search 0 0 0
node-0 snapshot 0 0 0
node-0 warmer 0 0 0
--------------------------------------------------
// TESTRESPONSE[s/\d+/\\d+/ _cat]

The first column is the node name

[source,txt]
--------------------------------------------------
node_name
node-0
--------------------------------------------------

The second column is the thread pool name

[source,txt]
--------------------------------------------------
name
bulk
fetch_shard_started
fetch_shard_store
flush
force_merge
generic
get
index
listener
management
refresh
search
snapshot
warmer
--------------------------------------------------

The next three columns show the active, queue, and rejected statistics for each thread pool

[source,txt]
--------------------------------------------------
active queue rejected
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
1 0 0
0 0 0
0 0 0
0 0 0
0 0 0
--------------------------------------------------

The cat thread pool API accepts a `thread_pool_patterns` URL parameter for specifying a
comma-separated list of regular expressions to match thread pool names.

[source,js]
--------------------------------------------------
GET /_cat/thread_pool/generic?v&h=id,name,active,rejected,completed
--------------------------------------------------
// CONSOLE

Which looks like:

[source,txt]
--------------------------------------------------
id name active rejected completed
0EWUhXeBQtaVGlexUeVwMg generic 0 0 70
--------------------------------------------------
// TESTRESPONSE[s/0EWUhXeBQtaVGlexUeVwMg/[\\w-]+/ s/\d+/\\d+/ _cat]
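
To illustrate the comma-separated list, a request along the following lines limits the output
to the `generic` and `search` pools; the pool names and column selection here are only one
possible example.

[source,js]
--------------------------------------------------
GET /_cat/thread_pool/generic,search?v&h=node_name,name,active,queue,rejected
--------------------------------------------------
// CONSOLE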

Here the node ID and thread pool name are displayed, along with the active, rejected, and
completed task statistics.

All <<modules-threadpool,built-in thread pools>> and custom thread pools are available.

[float]
=== Thread Pool Fields

For each thread pool, you can request details about it by using the field names
in the table below.

[cols="<,<,<",options="header"]
|=======================================================================
|Field Name |Alias |Description
|`type` |`t` |The current type of thread pool (`fixed` or `scaling`)
|`active` |`a` |The number of active threads in the current thread pool
|`size` |`s` |The number of threads in the current thread pool
|`queue` |`q` |The number of tasks in the queue for the current thread pool
|`queue_size` |`qs` |The maximum number of tasks permitted in the queue for the current thread pool
|`rejected` |`r` |The number of tasks rejected by the thread pool executor
|`largest` |`l` |The highest number of active threads in the current thread pool
|`completed` |`c` |The number of tasks completed by the thread pool executor
|`min` |`mi` |The configured minimum number of active threads allowed in the current thread pool
|`max` |`ma` |The configured maximum number of active threads allowed in the current thread pool
|`keep_alive` |`k` |The configured keep alive time for threads
|=======================================================================
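
For example, a request such as the following asks for a selection of these fields for the
`generic` pool; the chosen pool and the exact field list are illustrative only.

[source,js]
--------------------------------------------------
GET /_cat/thread_pool/generic?v&h=name,type,active,size,queue,queue_size,largest,min,max,keep_alive
--------------------------------------------------
// CONSOLE
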
[float]
=== Other Fields

In addition to details about each thread pool, it is also convenient to get an
understanding of where those thread pools reside. As such, you can request
other details like the `ip` of the responding node(s).

[cols="<,<,<",options="header"]
|=======================================================================
|Field Name |Alias |Description
|`node_id` |`id` |The unique node ID
|`ephemeral_id`|`eid` |The ephemeral node ID
|`pid` |`p` |The process ID of the running node
|`host` |`h` |The hostname for the current node
|`ip` |`i` |The IP address for the current node
|`port` |`po` |The bound transport port for the current node
|=======================================================================
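
For example, a request like the following combines node details with per-pool statistics; the
exact column selection is illustrative only.

[source,js]
--------------------------------------------------
GET /_cat/thread_pool?v&h=ip,port,name,active,queue,rejected,completed
--------------------------------------------------
// CONSOLE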