[[modules-node]]
== Node

Any time that you start an instance of Elasticsearch, you are starting a
_node_. A collection of connected nodes is called a
<<modules-cluster,cluster>>. If you are running a single node of Elasticsearch,
then you have a cluster of one node.
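
To see the nodes in your cluster, you can, for instance, query the
<<cat-nodes,`_cat/nodes`>> API (a quick check, assuming a running cluster):

[source,js]
----------------------------
GET /_cat/nodes?v
----------------------------
// CONSOLE
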

Every node in the cluster can handle <<modules-http,HTTP>> and
<<modules-transport,Transport>> traffic by default. The transport layer
is used exclusively for communication between nodes and the
{javaclient}/transport-client.html[Java `TransportClient`]; the HTTP layer is
used only by external REST clients.
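
Both layers listen on configurable ports. A minimal sketch, assuming the
usual defaults of `9200` for the HTTP layer and `9300` for the transport
layer:

[source,yaml]
-------------------
http.port: 9200           # port used by external REST clients
transport.tcp.port: 9300  # port used for node-to-node communication
-------------------
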

All nodes know about all the other nodes in the cluster and can forward client
requests to the appropriate node. Besides that, each node serves one or more
purposes:

<<master-node,Master-eligible node>>::

A node that has `node.master` set to `true` (default), which makes it eligible
to be <<modules-discovery-zen,elected as the _master_ node>>, which controls
the cluster.

<<data-node,Data node>>::

A node that has `node.data` set to `true` (default). Data nodes hold data and
perform data-related operations such as CRUD, search, and aggregations.

<<ingest,Ingest node>>::

A node that has `node.ingest` set to `true` (default). Ingest nodes are able
to apply an <<pipeline,ingest pipeline>> to a document in order to transform
and enrich the document before indexing. With a heavy ingest load, it makes
sense to use dedicated ingest nodes and to mark the master and data nodes as
`node.ingest: false`.

[NOTE]
[[coordinating-node]]
.Coordinating node
===============================================

Requests like search requests or bulk-indexing requests may involve data held
on different data nodes. A search request, for example, is executed in two
phases which are coordinated by the node which receives the client request --
the _coordinating node_.

In the _scatter_ phase, the coordinating node forwards the request to the data
nodes which hold the data. Each data node executes the request locally and
returns its results to the coordinating node. In the _gather_ phase, the
coordinating node reduces each data node's results into a single global
resultset.

Every node is implicitly a coordinating node, and this role cannot be
disabled. This means that a node that has all three of `node.master`,
`node.data`, and `node.ingest` set to `false` will act only as a coordinating
node. As a result, such a node needs to have enough memory and CPU in order
to deal with the gather phase.

===============================================

[float]
[[master-node]]
=== Master Eligible Node

The master node is responsible for lightweight cluster-wide actions such as
creating or deleting an index, tracking which nodes are part of the cluster,
and deciding which shards to allocate to which nodes. It is important for
cluster health to have a stable master node.

Any master-eligible node (all nodes by default) may be elected to become the
master node by the <<modules-discovery-zen,master election process>>.

IMPORTANT: Master nodes must have access to the `data/` directory (just like
`data` nodes) as this is where the cluster state is persisted between node restarts.

Indexing and searching your data is CPU-, memory-, and I/O-intensive work
which can put pressure on a node's resources. To ensure that your master
node is stable and not under pressure, it is a good idea in a bigger
cluster to split the roles between dedicated master-eligible nodes and
dedicated data nodes.

While master nodes can also behave as <<coordinating-node,coordinating nodes>>
and route search and indexing requests from clients to data nodes, it is
better _not_ to use dedicated master nodes for this purpose. It is important
for the stability of the cluster that master-eligible nodes do as little work
as possible.

To create a dedicated master-eligible node, set:

[source,yaml]
-------------------
node.master: true <1>
node.data: false <2>
node.ingest: false <3>
cluster.remote.connect: false <4>
-------------------
<1> The `node.master` role is enabled by default.
<2> Disable the `node.data` role (enabled by default).
<3> Disable the `node.ingest` role (enabled by default).
<4> Disable cross-cluster search (enabled by default).

ifdef::include-xpack[]
NOTE: These settings apply only when {xpack} is not installed. To create a
dedicated master-eligible node when {xpack} is installed, see <<modules-node-xpack,{xpack} node settings>>.
endif::include-xpack[]

[float]
[[split-brain]]
==== Avoiding split brain with `minimum_master_nodes`

To prevent data loss, it is vital to configure the
`discovery.zen.minimum_master_nodes` setting (which defaults to `1`) so that
each master-eligible node knows the _minimum number of master-eligible nodes_
that must be visible in order to form a cluster.

To explain, imagine that you have a cluster consisting of two master-eligible
nodes. A network failure breaks communication between these two nodes. Each
node sees one master-eligible node... itself. With `minimum_master_nodes` set
to the default of `1`, this is sufficient to form a cluster. Each node elects
itself as the new master (thinking that the other master-eligible node has
died) and the result is two clusters, or a _split brain_. These two nodes
will never rejoin until one node is restarted. Any data that has been written
to the restarted node will be lost.

Now imagine that you have a cluster with three master-eligible nodes, and
`minimum_master_nodes` set to `2`. If a network split separates one node from
the other two nodes, the side with one node cannot see enough master-eligible
nodes and will realise that it cannot elect itself as master. The side with
two nodes will elect a new master (if needed) and continue functioning
correctly. As soon as the network split is resolved, the single node will
rejoin the cluster and start serving requests again.

This setting should be set to a _quorum_ of master-eligible nodes:

    (master_eligible_nodes / 2) + 1

In other words, if there are three master-eligible nodes, then minimum master
nodes should be set to `(3 / 2) + 1` or `2`:

[source,yaml]
----------------------------
discovery.zen.minimum_master_nodes: 2 <1>
----------------------------
<1> Defaults to `1`.

To be able to remain available when one of the master-eligible nodes fails,
clusters should have at least three master-eligible nodes, with
`minimum_master_nodes` set accordingly. A <<rolling-upgrades,rolling upgrade>>,
performed without any downtime, also requires at least three master-eligible
nodes to avoid the possibility of data loss if a network split occurs while the
upgrade is in progress.

This setting can also be changed dynamically on a live cluster with the
<<cluster-update-settings,cluster update settings API>>:

[source,js]
----------------------------
PUT _cluster/settings
{
  "transient": {
    "discovery.zen.minimum_master_nodes": 2
  }
}
----------------------------
// CONSOLE
// TEST[catch:/cannot set discovery.zen.minimum_master_nodes to more than the current master nodes/]

TIP: An advantage of splitting the master and data roles between dedicated
nodes is that you can have just three master-eligible nodes and set
`minimum_master_nodes` to `2`. You never have to change this setting, no
matter how many dedicated data nodes you add to the cluster.

[float]
[[data-node]]
=== Data Node

Data nodes hold the shards that contain the documents you have indexed. Data
nodes handle data-related operations like CRUD, search, and aggregations.
These operations are I/O-, memory-, and CPU-intensive. It is important to
monitor these resources and to add more data nodes if they are overloaded.

The main benefit of having dedicated data nodes is the separation of the
master and data roles.

To create a dedicated data node, set:

[source,yaml]
-------------------
node.master: false <1>
node.data: true <2>
node.ingest: false <3>
cluster.remote.connect: false <4>
-------------------
<1> Disable the `node.master` role (enabled by default).
<2> The `node.data` role is enabled by default.
<3> Disable the `node.ingest` role (enabled by default).
<4> Disable cross-cluster search (enabled by default).

ifdef::include-xpack[]
NOTE: These settings apply only when {xpack} is not installed. To create a
dedicated data node when {xpack} is installed, see <<modules-node-xpack,{xpack} node settings>>.
endif::include-xpack[]

[float]
[[node-ingest-node]]
=== Ingest Node

Ingest nodes can execute pre-processing pipelines, composed of one or more
ingest processors. Depending on the type of operations performed by the ingest
processors and the required resources, it may make sense to have dedicated
ingest nodes that will only perform this specific task.
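
For example, a pipeline with a single `set` processor might look like this
(the pipeline id `my-pipeline` is purely illustrative):

[source,js]
----------------------------
PUT _ingest/pipeline/my-pipeline
{
  "description": "Adds a field to each document before indexing",
  "processors": [
    {
      "set": {
        "field": "ingested",
        "value": true
      }
    }
  ]
}
----------------------------
// CONSOLE
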

To create a dedicated ingest node, set:

[source,yaml]
-------------------
node.master: false <1>
node.data: false <2>
node.ingest: true <3>
cluster.remote.connect: false <4>
-------------------
<1> Disable the `node.master` role (enabled by default).
<2> Disable the `node.data` role (enabled by default).
<3> The `node.ingest` role is enabled by default.
<4> Disable cross-cluster search (enabled by default).

ifdef::include-xpack[]
NOTE: These settings apply only when {xpack} is not installed. To create a
dedicated ingest node when {xpack} is installed, see <<modules-node-xpack,{xpack} node settings>>.
endif::include-xpack[]

[float]
[[coordinating-only-node]]
=== Coordinating only node

If you take away the ability to handle master duties, to hold data, and to
pre-process documents, then you are left with a _coordinating_ node that
can only route requests, handle the search reduce phase, and distribute bulk
indexing. Essentially, coordinating only nodes behave as smart load balancers.

Coordinating only nodes can benefit large clusters by offloading the
coordinating node role from data and master-eligible nodes. They join the
cluster and receive the full <<cluster-state,cluster state>>, like every other
node, and they use the cluster state to route requests directly to the
appropriate place(s).

WARNING: Adding too many coordinating only nodes to a cluster can increase the
burden on the entire cluster because the elected master node must await
acknowledgement of cluster state updates from every node! The benefit of
coordinating only nodes should not be overstated -- data nodes can happily
serve the same purpose.

To create a dedicated coordinating node, set:

[source,yaml]
-------------------
node.master: false <1>
node.data: false <2>
node.ingest: false <3>
cluster.remote.connect: false <4>
-------------------
<1> Disable the `node.master` role (enabled by default).
<2> Disable the `node.data` role (enabled by default).
<3> Disable the `node.ingest` role (enabled by default).
<4> Disable cross-cluster search (enabled by default).

ifdef::include-xpack[]
NOTE: These settings apply only when {xpack} is not installed. To create a
dedicated coordinating node when {xpack} is installed, see <<modules-node-xpack,{xpack} node settings>>.
endif::include-xpack[]

[float]
== Node data path settings

[float]
[[data-path]]
=== `path.data`

Every data and master-eligible node requires access to a data directory where
shards and index and cluster metadata will be stored. The `path.data` defaults
to `$ES_HOME/data` but can be configured in the `elasticsearch.yml` config
file as an absolute path or a path relative to `$ES_HOME` as follows:

[source,yaml]
-----------------------
path.data: /var/elasticsearch/data
-----------------------

Like all node settings, it can also be specified on the command line as:

[source,sh]
-----------------------
./bin/elasticsearch -Epath.data=/var/elasticsearch/data
-----------------------

TIP: When using the `.zip` or `.tar.gz` distributions, the `path.data` setting
should be configured to locate the data directory outside the Elasticsearch
home directory, so that the home directory can be deleted without deleting
your data! The RPM and Debian distributions do this for you already.

[float]
[[max-local-storage-nodes]]
=== `node.max_local_storage_nodes`

The <<data-path,data path>> can be shared by multiple nodes, even by nodes from different
clusters. This is very useful for testing failover and different configurations on your development
machine. In production, however, it is recommended to run only one node of Elasticsearch per server.

By default, Elasticsearch is configured to prevent more than one node from sharing the same data
path. To allow for more than one node (e.g., on your development machine), use the setting
`node.max_local_storage_nodes` and set this to a positive integer larger than one.
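
For example, to allow up to two nodes to share the same data path on a
development machine:

[source,yaml]
-------------------
node.max_local_storage_nodes: 2
-------------------
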

WARNING: Never run different node types (i.e. master, data) from the same data directory. This can
lead to unexpected data loss.

[float]
== Other node settings

More node settings can be found in <<modules,Modules>>. Of particular note are
the <<cluster.name,`cluster.name`>>, the <<node.name,`node.name`>> and the
<<modules-network,network settings>>.

ifdef::include-xpack[]
include::ml-node.asciidoc[]
endif::include-xpack[]