[role="xpack"]
[testenv="basic"]
[[es-monitoring-collectors]]
== Collectors

[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.

If you have previously configured legacy collection methods, you should migrate
to using {metricbeat} collection methods. Use either {metricbeat} collection or
legacy collection methods; do not use both.

Learn more about <<configuring-metricbeat>>.
=========================

Collectors, as their name implies, collect things. Each collector runs once for
each collection interval to obtain data from the public APIs in {es} and {xpack}
that it chooses to monitor. When the data collection is finished, the data is
handed in bulk to the <<es-monitoring-exporters,exporters>> to be sent to the
monitoring clusters. Regardless of the number of exporters, each collector only
runs once per collection interval.

There is only one collector per data type gathered. In other words, for any
monitoring document that is created, it comes from a single collector rather
than being merged from multiple collectors. The {es} {monitor-features}
currently have a few collectors because the goal is to minimize overlap between
them for optimal performance.

Each collector can create zero or more monitoring documents. For example,
the `index_stats` collector collects all index statistics at the same time to
avoid many unnecessary calls.
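
For example, the statistics that the `index_stats` collector gathers in one pass
correspond to what the index statistics API returns in a single request. The
following is an illustrative sketch only; the collector stores its own
monitoring documents rather than this raw response:

[source,console]
----
GET /_stats
----
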
[options="header"]
|=======================
| Collector | Data Types | Description
| Cluster Stats | `cluster_stats`
| Gathers details about the cluster state, including parts of the actual cluster
state (for example `GET /_cluster/state`) and statistics about it (for example,
`GET /_cluster/stats`). This produces a single document type. In versions prior
to X-Pack 5.5, this was actually three separate collectors that resulted in
three separate types: `cluster_stats`, `cluster_state`, and `cluster_info`. In
5.5 and later, all three are combined into `cluster_stats`. This only runs on
the _elected_ master node and the data collected (`cluster_stats`) largely
controls the UI. When this data is not present, it indicates either a
misconfiguration on the elected master node, timeouts related to the collection
of the data, or issues with storing the data. Only a single document is produced
per collection.
| Index Stats | `indices_stats`, `index_stats`
| Gathers details about the indices in the cluster, both in summary and
individually. This creates many documents that represent parts of the index
statistics output (for example, `GET /_stats`). This information only needs to
be collected once, so it is collected on the _elected_ master node. The most
common failure for this collector relates to an extreme number of indices -- and
therefore time to gather them -- resulting in timeouts. One summary
`indices_stats` document is produced per collection and one `index_stats`
document is produced per index, per collection.
| Index Recovery | `index_recovery`
| Gathers details about index recovery in the cluster. Index recovery represents
the assignment of _shards_ at the cluster level. If an index is not recovered,
it is not usable. This also corresponds to shard restoration via snapshots. This
information only needs to be collected once, so it is collected on the _elected_
master node. The most common failure for this collector relates to an extreme
number of shards -- and therefore time to gather them -- resulting in timeouts.
This creates a single document that contains all recoveries by default, which
can be quite large, but it gives the most accurate picture of recovery in the
production cluster.
| Shards | `shards`
| Gathers details about all _allocated_ shards for all indices, particularly
including what node the shard is allocated to. This information only needs to be
collected once, so it is collected on the _elected_ master node. The collector
uses the local cluster state to get the routing table without any network
timeout issues, unlike most other collectors. Each shard is represented by a
separate monitoring document.
| Jobs | `job_stats`
| Gathers details about all machine learning job statistics (for example, `GET
/_ml/anomaly_detectors/_stats`). This information only needs to be collected
once, so it is collected on the _elected_ master node. However, for the master
node to be able to perform the collection, the master node must have
`xpack.ml.enabled` set to true (default) and a license level that supports {ml}.
| Node Stats | `node_stats`
| Gathers details about the running node, such as memory utilization and CPU
usage (for example, `GET /_nodes/_local/stats`). This runs on _every_ node with
{monitor-features} enabled. A common failure is a timeout of the node stats
request when a node has a very large number of segment files; the collector then
spends so long waiting for the file system statistics to be calculated that the
request times out. A single `node_stats` document is created per collection.
This document is collected per node to help discover issues where nodes can
communicate with each other but not with the monitoring cluster (for example,
intermittent network issues or memory pressure).
|=======================
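
Several of the data types in the preceding table correspond to public APIs that
you can also call directly, as noted in the descriptions above. As a quick
reference, the calls mentioned in the table look like this (an illustration
only; the collectors persist their own document formats rather than these raw
responses):

[source,console]
----
GET /_cluster/state
GET /_cluster/stats
GET /_ml/anomaly_detectors/_stats
GET /_nodes/_local/stats
----
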
The {es} {monitor-features} use a single-threaded scheduler to run the
collection of {es} monitoring data by all of the appropriate collectors on each
node. This scheduler is managed locally by each node and its interval is
controlled by the `xpack.monitoring.collection.interval` setting, which
defaults to 10 seconds (`10s`), at either the node or cluster level.
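
With legacy collection, for example, the interval can be adjusted cluster-wide
through the cluster settings API. The following is a sketch only; `30s` is an
arbitrary illustrative value:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.interval": "30s"
  }
}
----
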
Fundamentally, each collector works on the same principle. Per collection
interval, each collector is checked to see whether it should run and then the
appropriate collectors run. The failure of an individual collector does not
impact any other collector.

Once collection has completed, all of the monitoring data is passed to the
exporters to route the monitoring data to the monitoring clusters.

If gaps exist in the monitoring charts in {kib}, it is typically because either
a collector failed or the monitoring cluster did not receive the data (for
example, it was being restarted). In the event that a collector fails, a logged
error should exist on the node that attempted to perform the collection.
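
When you investigate such gaps, it can also help to confirm what the collection
settings on the production cluster actually are. For example, a quick check with
the cluster settings API (`include_defaults=true` also returns settings that
were never set explicitly):

[source,console]
----
GET _cluster/settings?include_defaults=true
----

Look for the `xpack.monitoring.collection.enabled` and
`xpack.monitoring.collection.interval` values in the response.
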
NOTE: Collection is currently done serially, rather than in parallel, to avoid
extra overhead on the elected master node. The downside to this approach
is that collectors might observe a different version of the cluster state
within the same collection period. In practice, this does not make a
significant difference and running the collectors in parallel would not
prevent such a possibility.

For more information about the configuration options for the collectors, see
<<monitoring-collection-settings>>.

[discrete]
[[es-monitoring-stack]]
==== Collecting data from across the Elastic Stack

{es} {monitor-features} also receive monitoring data from other parts of the
Elastic Stack. In this way, they serve as an unscheduled monitoring data
collector for the stack.

By default, data collection is disabled. {es} monitoring data is not
collected and all monitoring data from other sources such as {kib}, Beats, and
Logstash is ignored. You must set `xpack.monitoring.collection.enabled` to `true`
to enable the collection of monitoring data. See <<monitoring-settings>>.
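
For example, one way to enable collection is through the cluster settings API:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
----
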
Once data is received, it is forwarded to the exporters
to be routed to the monitoring cluster like all monitoring data.

WARNING: Because this stack-level "collector" lives outside of the collection
interval of {es} {monitor-features}, it is not impacted by the
`xpack.monitoring.collection.interval` setting. Therefore, data is passed to the
exporters whenever it is received. This behavior can result in indices for {kib},
Logstash, or Beats being created somewhat unexpectedly.

While the monitoring data is collected and processed, some production cluster
metadata is added to incoming documents. This metadata enables {kib} to link the
monitoring data to the appropriate cluster. If this linkage is unimportant to
the infrastructure that you're monitoring, it might be simpler to configure
Logstash and Beats to report monitoring data directly to the monitoring cluster.
This scenario also prevents the production cluster from adding extra overhead
related to monitoring data, which can be very useful when there are a large
number of Logstash nodes or Beats.

For more information about typical monitoring architectures, see
<<how-monitoring-works>>.