[[modules-threadpool]]
=== Thread pools

A node uses several thread pools to manage memory consumption.
Queues associated with many of the thread pools enable pending requests
to be held instead of discarded.

There are several thread pools, but the important ones include:

`generic`::
For generic operations (for example, background node discovery).
Thread pool type is `scaling`.

[[search-threadpool]]
`search`::
For count/search/suggest operations. Thread pool type is
`fixed_auto_queue_size` with a size of `int((`<<node.processors,
`# of allocated processors`>>`pass:[ * ]3) / 2) + 1`, and initial queue_size of
`1000`.

[[search-throttled]]`search_throttled`::
For count/search/suggest/get operations on `search_throttled` indices.
Thread pool type is `fixed_auto_queue_size` with a size of `1`, and initial
queue_size of `100`.

`get`::
For get operations. Thread pool type is `fixed`
with a size of <<node.processors, `# of allocated processors`>>,
queue_size of `1000`.

`analyze`::
For analyze requests. Thread pool type is `fixed` with a size of `1`, queue
size of `16`.

`write`::
For single-document index/delete/update and bulk requests. Thread pool type
is `fixed` with a size of <<node.processors, `# of allocated processors`>>,
queue_size of `10000`. The maximum size for this pool is
`pass:[1 + ]`<<node.processors, `# of allocated processors`>>.

`snapshot`::
For snapshot/restore operations. Thread pool type is `scaling` with a
keep-alive of `5m` and a max of `min(5, (`<<node.processors,
`# of allocated processors`>>`) / 2)`.

`warmer`::
For segment warm-up operations. Thread pool type is `scaling` with a
keep-alive of `5m` and a max of `min(5, (`<<node.processors,
`# of allocated processors`>>`) / 2)`.

`refresh`::
For refresh operations. Thread pool type is `scaling` with a
keep-alive of `5m` and a max of `min(10, (`<<node.processors,
`# of allocated processors`>>`) / 2)`.

`listener`::
Mainly for Java clients executing actions when the threaded listener is set to
`true`. Thread pool type is `scaling` with a default max of
`min(10, (`<<node.processors, `# of allocated processors`>>`) / 2)`.

`fetch_shard_started`::
For listing shard states.
Thread pool type is `scaling` with keep-alive of `5m` and a default maximum
size of `pass:[2 * ]`<<node.processors, `# of allocated processors`>>.

`fetch_shard_store`::
For listing shard stores.
Thread pool type is `scaling` with keep-alive of `5m` and a default maximum
size of `pass:[2 * ]`<<node.processors, `# of allocated processors`>>.

`flush`::
For <<indices-flush,flush>>, <<indices-synced-flush-api,synced flush>>, and
<<index-modules-translog, translog>> `fsync` operations. Thread pool type is
`scaling` with a keep-alive of `5m` and a default maximum size of `min(5, (`
<<node.processors, `# of allocated processors`>>`) / 2)`.

`force_merge`::
For <<indices-forcemerge,force merge>> operations.
Thread pool type is `fixed` with a size of `1` and an unbounded queue size.

`management`::
For cluster management.
Thread pool type is `scaling` with a keep-alive of `5m` and a default
maximum size of `5`.

`system_read`::
For read operations on system indices.
Thread pool type is `fixed` with a default maximum size of
`min(5, (`<<node.processors, `# of allocated processors`>>`) / 2)`.

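As a concrete illustration of the formulas above, on a hypothetical node with 8
allocated processors the defaults work out to a `search` pool of
`int((8 * 3) / 2) + 1 = 13` threads, a `write` pool of `8` threads (with a
maximum size of `1 + 8 = 9`), and a `refresh` pool with a maximum of
`min(10, 8 / 2) = 4` threads.
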
Changing a specific thread pool can be done by setting its type-specific
parameters; for example, changing the number of threads in the `write` thread
pool:

[source,yaml]
--------------------------------------------------
thread_pool:
    write:
        size: 30
--------------------------------------------------
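
Thread pool settings are static, node-level settings; they are normally
configured in each node's `elasticsearch.yml` file.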

[[thread-pool-types]]
==== Thread pool types

The following are the types of thread pools and their respective parameters:

[[fixed-thread-pool]]
===== `fixed`

The `fixed` thread pool holds a fixed number of threads to handle
requests, with a queue (optionally bounded) for pending requests that
have no threads to service them.

The `size` parameter controls the number of threads.

The `queue_size` parameter controls the size of the queue of pending
requests that have no threads to execute them. By default, it is set to
`-1`, which means it is unbounded. When a request arrives and the queue is
full, the request is aborted.

[source,yaml]
--------------------------------------------------
thread_pool:
    write:
        size: 30
        queue_size: 1000
--------------------------------------------------
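
With this example configuration, the `write` pool runs 30 threads and queues up
to 1000 pending requests; requests that arrive while the queue is full are
aborted.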

[[fixed-auto-queue-size]]
===== `fixed_auto_queue_size`

experimental[]

deprecated[7.7.0,The experimental `fixed_auto_queue_size` thread pool type is
deprecated and will be removed in 8.0.]

The `fixed_auto_queue_size` thread pool holds a fixed number of threads to
handle requests, with a bounded queue for pending requests that have no threads
to service them. It is similar to the `fixed` thread pool, but the `queue_size`
automatically adjusts according to calculations based on
https://en.wikipedia.org/wiki/Little%27s_law[Little's Law]. These calculations
will potentially adjust the `queue_size` up or down by 50 every time
`auto_queue_frame_size` operations have been completed.

The `size` parameter controls the number of threads.

The `queue_size` parameter controls the initial size of the queue of pending
requests that have no threads to execute them.

The `min_queue_size` setting controls the minimum size the `queue_size` can be
adjusted to.

The `max_queue_size` setting controls the maximum size the `queue_size` can be
adjusted to.

The `auto_queue_frame_size` setting controls the number of operations during
which measurement is taken before the queue is adjusted. It should be large
enough that a single operation cannot unduly bias the calculation.

The `target_response_time` is a time value setting that indicates the targeted
average response time for tasks in the thread pool queue. If tasks are routinely
above this time, the thread pool queue is adjusted down so that tasks are
rejected.

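Little's Law relates the long-run average number of tasks in a system (`L`) to
their average arrival rate (`λ`) and the average time each task spends in the
system (`W`): `L = λ * W`. As a rough illustration only (not the exact internal
calculation), if a pool completes roughly 500 tasks per second and the
`target_response_time` is `1s`, a queue of about 500 entries is the largest
that can still meet that target, so the `queue_size` is nudged toward such a
value in steps of 50, bounded by `min_queue_size` and `max_queue_size`.
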
[source,yaml]
--------------------------------------------------
thread_pool:
    search:
        size: 30
        queue_size: 500
        min_queue_size: 10
        max_queue_size: 1000
        auto_queue_frame_size: 2000
        target_response_time: 1s
--------------------------------------------------
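
With this example configuration, the `search` pool runs 30 threads and starts
with a 500-entry queue; after every 2000 completed operations the queue is
re-evaluated and kept between 10 and 1000 entries, targeting an average
response time of `1s`.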

[[scaling-thread-pool]]
===== `scaling`

The `scaling` thread pool holds a dynamic number of threads. This
number is proportional to the workload and varies between the value of
the `core` and `max` parameters.

The `keep_alive` parameter determines how long an idle thread is kept in the
thread pool.

[source,yaml]
--------------------------------------------------
thread_pool:
    warmer:
        core: 1
        max: 8
        keep_alive: 2m
--------------------------------------------------
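
With this example configuration, the `warmer` pool keeps at least one thread,
grows to at most eight threads under load, and removes threads that have been
idle for two minutes.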

[[node.processors]]
==== Allocated processors setting

The number of processors is automatically detected, and the thread pool settings
are automatically set based on it. In some cases it can be useful to override
the number of detected processors. This can be done by explicitly setting the
`node.processors` setting.

[source,yaml]
--------------------------------------------------
node.processors: 2
--------------------------------------------------

There are a few use-cases for explicitly overriding the `node.processors`
setting:

. If you are running multiple instances of {es} on the same host but want
{es} to size its thread pools as if it only has a fraction of the CPU, you
should override the `node.processors` setting to the desired fraction. For
example, if you're running two instances of {es} on a 16-core machine, set
`node.processors` to 8. Note that this is an expert-level use case and there's
a lot more involved than just setting the `node.processors` setting, as there
are other considerations like changing the number of garbage collector threads,
pinning processes to cores, and so on.
. Sometimes the number of processors is wrongly detected. In such cases,
explicitly setting the `node.processors` setting will work around the issue.

To check the number of processors detected, use the nodes info
API with the `os` flag.
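
For example, a request along the following lines returns the `os` section of
each node's info, which includes the number of processors that {es} detected
and was allocated (the exact response fields can vary by version):

[source,console]
--------------------------------------------------
GET /_nodes/os
--------------------------------------------------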