[[modules-node]]
=== Node

Any time that you start an instance of {es}, you are starting a _node_. A
collection of connected nodes is called a <<modules-cluster,cluster>>. If you
are running a single node of {es}, then you have a cluster of one node.

Every node in the cluster can handle <<modules-http,HTTP>> and
<<modules-transport,Transport>> traffic by default. The transport layer is used
exclusively for communication between nodes; the HTTP layer is used by REST
clients.
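
The two layers listen on separate ports, which you can adjust in
`elasticsearch.yml`. A minimal sketch (the values shown match the usual
defaults, so setting them explicitly is optional):

[source,yaml]
----
http.port: 9200      # port for REST clients
transport.port: 9300 # port for node-to-node communication
----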

[[modules-node-description]]
// tag::modules-node-description-tag[]
All nodes know about all the other nodes in the cluster and can forward client
requests to the appropriate node.

By default, a node is all of the following types: master-eligible, data, ingest,
and (if available) machine learning. All data nodes are also transform nodes.
// end::modules-node-description-tag[]

TIP: As the cluster grows and in particular if you have large {ml} jobs or
{ctransforms}, consider separating dedicated master-eligible nodes from
dedicated data nodes, {ml} nodes, and {transform} nodes.

[[node-roles]]
==== Node roles

You can define the roles of a node by setting `node.roles`. If you don't
configure this setting, then the node has the following roles by default:

* `master`
* `data`
* `data_content`
* `data_hot`
* `data_warm`
* `data_cold`
* `ingest`
* `ml`
* `remote_cluster_client`

NOTE: If you set `node.roles`, the node is assigned only the roles you specify.
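
For example, the following sketch (an assumed topology, adjust the list to your
own needs) restricts a node to the `master` and `data` roles only, so it no
longer acts as an ingest or {ml} node:

[source,yaml]
----
node.roles: [ master, data ]
----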

<<master-node,Master-eligible node>>::

A node that has the `master` role (default), which makes it eligible to be
<<modules-discovery,elected as the _master_ node>>, which controls the cluster.

<<data-node,Data node>>::

A node that has the `data` role (default). Data nodes hold data and perform data
related operations such as CRUD, search, and aggregations. A node with the
`data` role can fill any of the specialised data node roles.

<<node-ingest-node,Ingest node>>::

A node that has the `ingest` role (default). Ingest nodes are able to apply an
<<pipeline,ingest pipeline>> to a document in order to transform and enrich the
document before indexing. With a heavy ingest load, it makes sense to use
dedicated ingest nodes and to exclude the `ingest` role from nodes that have
the `master` or `data` roles.

<<remote-node,Remote-eligible node>>::

A node that has the `remote_cluster_client` role (default), which makes it
eligible to act as a remote client. By default, any node in the cluster can act
as a cross-cluster client and connect to remote clusters.

<<ml-node,Machine learning node>>::

A node that has `xpack.ml.enabled` and the `ml` role, which is the default
behavior in the {es} {default-dist}. If you want to use {ml-features}, there
must be at least one {ml} node in your cluster. For more information about
{ml-features}, see {ml-docs}/index.html[Machine learning in the {stack}].
+
IMPORTANT: If you use the {oss-dist}, do not add the `ml` role. Otherwise, the
node fails to start.

<<transform-node,{transform-cap} node>>::

A node that has the `transform` role. If you want to use {transforms}, there
must be at least one {transform} node in your cluster. For more information, see
<<transform-settings>> and <<transforms>>.

[NOTE]
[[coordinating-node]]
.Coordinating node
===============================================

Requests like search requests or bulk-indexing requests may involve data held
on different data nodes. A search request, for example, is executed in two
phases which are coordinated by the node which receives the client request --
the _coordinating node_.

In the _scatter_ phase, the coordinating node forwards the request to the data
nodes which hold the data. Each data node executes the request locally and
returns its results to the coordinating node. In the _gather_ phase, the
coordinating node reduces each data node's results into a single global
result set.

Every node is implicitly a coordinating node, and this behavior cannot be
disabled. A node that has an explicitly empty list of roles in `node.roles`
will act only as a coordinating node. As a result, such a node needs to have
enough memory and CPU to deal with the gather phase.

===============================================
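
To see which roles each node in a running cluster has, you can query the cat
nodes API. A quick sketch (the host and port are assumptions, adjust them for
your deployment):

[source,sh]
----
# List every node with its roles and whether it is the elected master
curl -X GET "localhost:9200/_cat/nodes?v&h=name,node.role,master"
----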

[[master-node]]
==== Master-eligible node

The master node is responsible for lightweight cluster-wide actions such as
creating or deleting an index, tracking which nodes are part of the cluster,
and deciding which shards to allocate to which nodes. It is important for
cluster health to have a stable master node.

Any master-eligible node that is not a <<voting-only-node,voting-only node>> may
be elected to become the master node by the <<modules-discovery,master election
process>>.

IMPORTANT: Master nodes must have access to the `data/` directory (just like
`data` nodes) as this is where the cluster state is persisted between node
restarts.

[[dedicated-master-node]]
===== Dedicated master-eligible node

It is important for the health of the cluster that the elected master node has
the resources it needs to fulfill its responsibilities. If the elected master
node is overloaded with other tasks then the cluster may not operate well. In
particular, indexing and searching your data can be very resource-intensive, so
in large or high-throughput clusters it is a good idea to avoid using the
master-eligible nodes for tasks such as indexing and searching. You can do this
by configuring three of your nodes to be dedicated master-eligible nodes.
Dedicated master-eligible nodes only have the `master` role, allowing them to
focus on managing the cluster. While master nodes can also behave as
<<coordinating-node,coordinating nodes>> and route search and indexing requests
from clients to data nodes, it is better _not_ to use dedicated master nodes for
this purpose.

To create a dedicated master-eligible node, set:

[source,yaml]
-------------------
node.roles: [ master ]
-------------------

[[voting-only-node]]
===== Voting-only master-eligible node

A voting-only master-eligible node is a node that participates in
<<modules-discovery,master elections>> but which will not act as the cluster's
elected master node. In particular, a voting-only node can serve as a tiebreaker
in elections.

It may seem confusing to use the term "master-eligible" to describe a
voting-only node since such a node is not actually eligible to become the master
at all. This terminology is an unfortunate consequence of history:
master-eligible nodes are those nodes that participate in elections and perform
certain tasks during cluster state publications, and voting-only nodes have the
same responsibilities even if they can never become the elected master.

To configure a master-eligible node as a voting-only node, include `master` and
`voting_only` in the list of roles. For example, to create a voting-only data
node:

[source,yaml]
-------------------
node.roles: [ data, master, voting_only ]
-------------------

IMPORTANT: The `voting_only` role requires the {default-dist} of {es} and is not
supported in the {oss-dist}. If you use the {oss-dist} and add the `voting_only`
role then the node will fail to start. Also note that only nodes with the
`master` role can be marked as having the `voting_only` role.

High availability (HA) clusters require at least three master-eligible nodes, at
least two of which are not voting-only nodes. Such a cluster will be able to
elect a master node even if one of the nodes fails.

Since voting-only nodes never act as the cluster's elected master, they may
require less heap and a less powerful CPU than the true master nodes. However,
all master-eligible nodes, including voting-only nodes, require reasonably fast
persistent storage and a reliable and low-latency network connection to the
rest of the cluster, since they are on the critical path for
<<cluster-state-publishing,publishing cluster state updates>>.

Voting-only master-eligible nodes may also fill other roles in your cluster.
For instance, a node may be both a data node and a voting-only master-eligible
node. A _dedicated_ voting-only master-eligible node is a voting-only
master-eligible node that fills no other roles in the cluster. To create a
dedicated voting-only master-eligible node in the {default-dist}, set:

[source,yaml]
-------------------
node.roles: [ master, voting_only ]
-------------------

[[data-node]]
==== Data node

Data nodes hold the shards that contain the documents you have indexed. Data
nodes handle data related operations like CRUD, search, and aggregations.
These operations are I/O-, memory-, and CPU-intensive. It is important to
monitor these resources and to add more data nodes if they are overloaded.

The main benefit of having dedicated data nodes is the separation of the master
and data roles.

To create a dedicated data node, set:

[source,yaml]
----
node.roles: [ data ]
----

In a multi-tier deployment architecture, you use specialised data roles to
assign data nodes to specific tiers: `data_content`, `data_hot`, `data_warm`,
or `data_cold`. A node can belong to multiple tiers, but a node that has one of
the specialised data roles cannot have the generic `data` role.
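
For example, the following sketch (an assumed topology, adjust to your own
tiers) assigns a node to both the hot and content tiers:

[source,yaml]
----
node.roles: [ data_hot, data_content ]
----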

[[data-content-node]]
==== [x-pack]#Content data node#

Content data nodes accommodate user-created content. They enable operations like
CRUD, search, and aggregations.

To create a dedicated content node, set:

[source,yaml]
----
node.roles: [ data_content ]
----

[[data-hot-node]]
==== [x-pack]#Hot data node#

Hot data nodes store time series data as it enters {es}. The hot tier must be
fast for both reads and writes, and requires more hardware resources (such as
SSD drives).

To create a dedicated hot node, set:

[source,yaml]
----
node.roles: [ data_hot ]
----

[[data-warm-node]]
==== [x-pack]#Warm data node#

Warm data nodes store indices that are no longer being regularly updated, but
are still being queried. Query volume is usually lower than it was while the
index was in the hot tier. Less performant hardware can usually be used for
nodes in this tier.

To create a dedicated warm node, set:

[source,yaml]
----
node.roles: [ data_warm ]
----

[[data-cold-node]]
==== [x-pack]#Cold data node#

Cold data nodes store read-only indices that are accessed less frequently. This
tier uses less performant hardware and may leverage snapshot-backed indices to
minimize the resources required.

To create a dedicated cold node, set:

[source,yaml]
----
node.roles: [ data_cold ]
----

[[node-ingest-node]]
==== Ingest node

Ingest nodes can execute pre-processing pipelines, composed of one or more
ingest processors. Depending on the type of operations performed by the ingest
processors and the required resources, it may make sense to have dedicated
ingest nodes that will only perform this specific task.

To create a dedicated ingest node, set:

[source,yaml]
----
node.roles: [ ingest ]
----

[[node-ingest-node-setting]]
// tag::node-ingest-tag[]
`node.ingest` {ess-icon}::
Determines whether a node is an ingest node. <<ingest,Ingest nodes>> can apply
an ingest pipeline to transform and enrich a document before indexing. Default:
`true`.
// end::node-ingest-tag[]
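
For example, a sketch that prevents a node from acting as an ingest node using
this legacy setting (on versions that support `node.roles`, omitting `ingest`
from the roles list is the preferred approach instead):

[source,yaml]
----
node.ingest: false
----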

[[coordinating-only-node]]
==== Coordinating only node

If you take away the ability to handle master duties, to hold data, and to
pre-process documents, then you are left with a _coordinating_ node that
can only route requests, handle the search reduce phase, and distribute bulk
indexing. Essentially, coordinating only nodes behave as smart load balancers.

Coordinating only nodes can benefit large clusters by offloading the
coordinating node role from data and master-eligible nodes. They join the
cluster and receive the full <<cluster-state,cluster state>>, like every other
node, and they use the cluster state to route requests directly to the
appropriate place(s).

WARNING: Adding too many coordinating only nodes to a cluster can increase the
burden on the entire cluster because the elected master node must await
acknowledgement of cluster state updates from every node! The benefit of
coordinating only nodes should not be overstated -- data nodes can happily
serve the same purpose.

To create a dedicated coordinating node, set:

[source,yaml]
----
node.roles: [ ]
----

[[remote-node]]
==== Remote-eligible node

By default, any node in a cluster can act as a cross-cluster client and connect
to <<modules-remote-clusters,remote clusters>>. Once connected, you can search
remote clusters using <<modules-cross-cluster-search,{ccs}>>. You can also sync
data between clusters using <<xpack-ccr,{ccr}>>.

To create a dedicated remote-eligible node, set:

[source,yaml]
----
node.roles: [ remote_cluster_client ]
----

[[ml-node]]
==== [xpack]#Machine learning node#

The {ml-features} provide {ml} nodes, which run jobs and handle {ml} API
requests. If `xpack.ml.enabled` is set to `true` and the node does not have the
`ml` role, the node can service API requests but it cannot run jobs.

If you want to use {ml-features} in your cluster, you must enable {ml}
(set `xpack.ml.enabled` to `true`) on all master-eligible nodes. If you want to
use {ml-features} in clients (including {kib}), it must also be enabled on all
coordinating nodes. If you have the {oss-dist}, do not use these settings.

For more information about these settings, see <<ml-settings>>.

To create a dedicated {ml} node in the {default-dist}, set:

[source,yaml]
----
node.roles: [ ml ]
xpack.ml.enabled: true <1>
----
<1> The `xpack.ml.enabled` setting is enabled by default.

[[transform-node]]
==== [xpack]#{transform-cap} node#

{transform-cap} nodes run {transforms} and handle {transform} API requests. If
you have the {oss-dist}, do not use these settings. For more information, see
<<transform-settings>>.

To create a dedicated {transform} node in the {default-dist}, set:

[source,yaml]
----
node.roles: [ transform ]
----

[[change-node-role]]
==== Changing the role of a node

Each data node maintains the following data on disk:

* the shard data for every shard allocated to that node,
* the index metadata corresponding with every shard allocated to that node, and
* the cluster-wide metadata, such as settings and index templates.

Similarly, each master-eligible node maintains the following data on disk:

* the index metadata for every index in the cluster, and
* the cluster-wide metadata, such as settings and index templates.

Each node checks the contents of its data path at startup. If it discovers
unexpected data then it will refuse to start. This is to avoid importing
unwanted <<modules-gateway-dangling-indices,dangling indices>> which can lead
to a red cluster health. To be more precise, nodes without the `data` role will
refuse to start if they find any shard data on disk at startup, and nodes with
neither the `master` nor the `data` role will refuse to start if they have any
index metadata on disk at startup.

It is possible to change the roles of a node by adjusting its
`elasticsearch.yml` file and restarting it. This is known as _repurposing_ a
node. In order to satisfy the checks for unexpected data described above, you
must perform some extra steps to prepare a node for repurposing when starting
the node without the `data` or `master` roles.

* If you want to repurpose a data node by removing the `data` role then you
should first use an <<allocation-filtering,allocation filter>> to safely
migrate all the shard data onto other nodes in the cluster, as shown in the
sketch after this list.

* If you want to repurpose a node to have neither the `data` nor `master` roles
then it is simplest to start a brand-new node with an empty data path and the
desired roles. You may find it safest to use an
<<allocation-filtering,allocation filter>> to migrate the shard data elsewhere
in the cluster first.
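
The following sketch drains a node by name using a cluster-level allocation
filter (`node-to-repurpose` is a placeholder, substitute your node's name; the
host and port are assumptions):

[source,sh]
----
# Exclude the node from shard allocation so its shards migrate elsewhere
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.routing.allocation.exclude._name": "node-to-repurpose"}}'
----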

If it is not possible to follow these extra steps then you may be able to use
the <<node-tool-repurpose,`elasticsearch-node repurpose`>> tool to delete any
excess data that prevents a node from starting.
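
A minimal sketch of that workflow, run after updating `node.roles` in
`elasticsearch.yml` and while the node is shut down:

[source,sh]
----
# With the node stopped, delete data that its new roles no longer allow
./bin/elasticsearch-node repurpose
----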

[discrete]
=== Node data path settings

[[data-path]]
==== `path.data`

Every data and master-eligible node requires access to a data directory where
shards and index and cluster metadata will be stored. The `path.data` setting
defaults to `$ES_HOME/data` but can be configured in the `elasticsearch.yml`
config file as an absolute path or a path relative to `$ES_HOME`, as follows:

[source,yaml]
----
path.data: /var/elasticsearch/data
----

Like all node settings, it can also be specified on the command line as:

[source,sh]
----
./bin/elasticsearch -Epath.data=/var/elasticsearch/data
----

TIP: When using the `.zip` or `.tar.gz` distributions, the `path.data` setting
should be configured to locate the data directory outside the {es} home
directory, so that the home directory can be deleted without deleting your data!
The RPM and Debian distributions do this for you already.

[discrete]
[[max-local-storage-nodes]]
=== `node.max_local_storage_nodes`

The <<data-path,data path>> can be shared by multiple nodes, even by nodes from
different clusters. However, it is recommended to run only one node of {es}
using the same data path. This setting is deprecated in 7.x and will be removed
in version 8.0.

By default, {es} is configured to prevent more than one node from sharing the
same data path. To allow for more than one node (e.g., on your development
machine), use the setting `node.max_local_storage_nodes` and set this to a
positive integer larger than one.
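
For example, a sketch that allows up to two nodes to share the same data path
on a development machine:

[source,yaml]
----
node.max_local_storage_nodes: 2
----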

WARNING: Never run different node types (i.e. master, data) from the same data
directory. This can lead to unexpected data loss.

[discrete]
[[other-node-settings]]
=== Other node settings

More node settings can be found in <<settings>> and <<important-settings>>,
including:

* <<cluster-name,`cluster.name`>>
* <<node-name,`node.name`>>
* <<modules-network,network settings>>