Recommend Metricbeat for 7.x (#49758)

* Recommend Metricbeat for 7.x

* Fix typo in link to configuring-metricbeat

* [DOCS] Fixes build error and some terminology

* Add to local exporter page per review feedback
This commit is contained in:
cachedout 2019-12-02 21:31:47 +00:00 committed by GitHub
parent f1fd41cb53
commit c4cc90be1c
8 changed files with 278 additions and 212 deletions

View File

@ -3,48 +3,58 @@
[[collecting-monitoring-data]]
== Collecting monitoring data
[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.
If you have previously configured internal collection, you should migrate to
using {metricbeat} collection. Use either {metricbeat} collection or
internal collection; do not use both.
Learn more about <<configuring-metricbeat>>.
=========================
If you enable the Elastic {monitor-features} in your cluster, you can
optionally collect metrics about {es}. By default, monitoring is enabled but
data collection is disabled.
This method involves sending the metrics to the monitoring cluster by using
exporters. For the recommended method, see <<configuring-metricbeat>>.
NOTE: If you want to collect monitoring data from sources such as Beats and {ls}
and route it to a monitoring cluster, you must follow this method. You cannot
use {metricbeat} to ship the monitoring data for those products yet.
Advanced monitoring settings enable you to control how frequently data is
collected, configure timeouts, and set the retention period for locally-stored
monitoring indices. You can also adjust how monitoring data is displayed.
To learn about monitoring in general, see <<monitor-elasticsearch-cluster>>.
. Configure your cluster to collect monitoring data:
.. Verify that the `xpack.monitoring.enabled` setting is `true`, which is its
default value, on each node in the cluster. For more information, see
<<monitoring-settings>>.
.. Verify that the `xpack.monitoring.elasticsearch.collection.enabled` setting
is `true`, which is its default value, on each node in the cluster.
+
--
NOTE: You can specify this setting in either the `elasticsearch.yml` on each
node or across the cluster as a dynamic cluster setting. If {es}
{security-features} are enabled, you must have `monitor` cluster privileges to
view the cluster settings and `manage` cluster privileges to change them.
For more information, see <<monitoring-settings>> and <<cluster-update-settings>>.
--
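+
--
If you need to set it explicitly, a minimal sketch for `elasticsearch.yml` (illustrative only, because `true` is already the default):
[source,yaml]
----------------------------------
xpack.monitoring.elasticsearch.collection.enabled: true
----------------------------------
--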
.. Set the `xpack.monitoring.collection.enabled` setting to `true` on each
node in the cluster. By default, it is disabled (`false`).
+
--
NOTE: You can specify this setting in either the `elasticsearch.yml` on each
node or across the cluster as a dynamic cluster setting. If {es}
{security-features} are enabled, you must have `monitor` cluster privileges to
view the cluster settings and `manage` cluster privileges to change them.
For example, use the following APIs to review and change this setting:
@ -61,21 +71,21 @@ PUT _cluster/settings
}
----------------------------------
Alternatively, you can enable this setting in {kib}. In the side navigation,
click *Monitoring*. If data collection is disabled, you are prompted to turn it
on.
For more
information, see <<monitoring-settings>> and <<cluster-update-settings>>.
--
.. Optional: Specify which indices you want to monitor.
+
--
By default, the monitoring agent collects data from all {es} indices.
To collect data from particular indices, configure the
`xpack.monitoring.collection.indices` setting. You can specify multiple indices
as a comma-separated list or use an index pattern to match multiple indices. For
example:
[source,yaml]
@ -84,36 +94,36 @@ xpack.monitoring.collection.indices: logstash-*, index1, test2
----------------------------------
You can prepend `-` to explicitly exclude index names or
patterns. For example, to include all indices that start with `test` except
`test3`, you could specify `test*,-test3`. To include system indices such as
`.security` and `.kibana`, add `.*` to the list of included names.
For example, `.*,test*,-test3`.
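As a minimal sketch, that combined include/exclude pattern would look like this in `elasticsearch.yml`:
[source,yaml]
----------------------------------
xpack.monitoring.collection.indices: .*,test*,-test3
----------------------------------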
--
.. Optional: Specify how often to collect monitoring data. The default value for
the `xpack.monitoring.collection.interval` setting is 10 seconds. See
<<monitoring-settings>>.
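+
--
For example, to collect data every 30 seconds instead of every 10, you could set the interval in `elasticsearch.yml`; the `30s` value here is purely illustrative:
[source,yaml]
----------------------------------
xpack.monitoring.collection.interval: 30s
----------------------------------
--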
. Identify where to store monitoring data.
+
--
By default, the data is stored on the same cluster by using a
<<local-exporter,`local` exporter>>. Alternatively, you can use an <<http-exporter,`http` exporter>> to send data to
a separate _monitoring cluster_.
IMPORTANT: The {es} {monitor-features} use ingest pipelines; therefore, the
cluster that stores the monitoring data must have at least one
<<ingest,ingest node>>.
For more information about typical monitoring architectures,
see <<how-monitoring-works>>.
--
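+
--
If a node in your monitoring cluster does not have the ingest role, a minimal sketch of enabling it in that node's `elasticsearch.yml` (the role is enabled by default, so this is usually unnecessary):
[source,yaml]
----------------------------------
node.ingest: true
----------------------------------
--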
. If you choose to use an `http` exporter:
.. On the cluster that you want to monitor (often called the _production cluster_),
configure each node to send metrics to your monitoring cluster. Configure an
HTTP exporter in the `xpack.monitoring.exporters` settings in the
`elasticsearch.yml` file. For example:
+
--
@ -122,19 +132,19 @@ HTTP exporter in the `xpack.monitoring.exporters` settings in the
xpack.monitoring.exporters:
id1:
type: http
host: ["http://es-mon-1:9200", "http://es-mon2:9200"]
host: ["http://es-mon-1:9200", "http://es-mon2:9200"]
--------------------------------------------------
--
.. If the Elastic {security-features} are enabled on the monitoring cluster, you
must provide appropriate credentials when data is shipped to the monitoring cluster:
... Create a user on the monitoring cluster that has the
<<built-in-roles,`remote_monitoring_agent` built-in role>>.
Alternatively, use the
<<built-in-users,`remote_monitoring_user` built-in user>>.
... Add the user ID and password settings to the HTTP exporter settings in the
`elasticsearch.yml` file on each node. +
+
--
@ -145,19 +155,19 @@ For example:
xpack.monitoring.exporters:
id1:
type: http
host: ["http://es-mon-1:9200", "http://es-mon2:9200"]
auth.username: remote_monitoring_user
host: ["http://es-mon-1:9200", "http://es-mon2:9200"]
auth.username: remote_monitoring_user
auth.password: YOUR_PASSWORD
--------------------------------------------------
--
.. If you configured the monitoring cluster to use
<<configuring-tls,encrypted communications>>, you must use the HTTPS protocol in
the `host` setting. You must also specify the trusted CA certificates that will
be used to verify the identity of the nodes in the monitoring cluster.
*** To add a CA certificate to an {es} node's trusted certificates, you can
specify the location of the PEM encoded certificate with the
`certificate_authorities` setting. For example:
+
--
@ -166,7 +176,7 @@ specify the location of the PEM encoded certificate with the
xpack.monitoring.exporters:
id1:
type: http
host: ["https://es-mon1:9200", "https://es-mon2:9200"]
host: ["https://es-mon1:9200", "https://es-mon2:9200"]
auth:
username: remote_monitoring_user
password: YOUR_PASSWORD
@ -194,12 +204,12 @@ xpack.monitoring.exporters:
--------------------------------------------------
--
. Configure your cluster to route monitoring data from sources such as {kib},
Beats, and {ls} to the monitoring cluster. For information about configuring
each product to collect and send monitoring data, see <<monitor-elasticsearch-cluster>>.
. If you updated settings in the `elasticsearch.yml` files on your production
cluster, restart {es}. See <<stopping-elasticsearch>> and <<starting-elasticsearch>>.
+
--
TIP: You may want to temporarily {ref}/modules-cluster.html[disable shard
@ -208,7 +218,7 @@ reallocation during the install process.
--
. Optional:
<<config-monitoring-indices,Configure the indices that store the monitoring data>>.
. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}].

View File

@ -3,6 +3,18 @@
[[es-monitoring-collectors]]
== Collectors
[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.
If you have previously configured internal collection, you should migrate to
using {metricbeat} collection. Use either {metricbeat} collection or
internal collection; do not use both.
Learn more about <<configuring-metricbeat>>.
=========================
Collectors, as their name implies, collect things. Each collector runs once for
each collection interval to obtain data from the public APIs in {es} and {xpack}
that it chooses to monitor. When the data collection is finished, the data is
@ -12,9 +24,9 @@ runs once per collection interval.
There is only one collector per data type gathered. In other words, for any
monitoring document that is created, it comes from a single collector rather
than being merged from multiple collectors. The {es} {monitor-features}
currently have a few collectors because the goal is to minimize overlap between
them for optimal performance.
Each collector can create zero or more monitoring documents. For example,
the `index_stats` collector collects all index statistics at the same time to
@ -70,7 +82,7 @@ node to be able to perform the collection, the master node must have
| Node Stats | `node_stats`
| Gathers details about the running node, such as memory utilization and CPU
usage (for example, `GET /_nodes/_local/stats`). This runs on _every_ node with
{monitor-features} enabled. One common failure results in the timeout of the node
stats request due to too many segment files. As a result, the collector spends
too much time waiting for the file system stats to be calculated until it
finally times out. A single `node_stats` document is created per collection.
@ -79,19 +91,19 @@ with each other, but not with the monitoring cluster (for example, intermittent
network issues or memory pressure).
|=======================
The {es} {monitor-features} use a single threaded scheduler to run the
collection of {es} monitoring data by all of the appropriate collectors on each
node. This scheduler is managed locally by each node and its interval is
controlled by specifying the `xpack.monitoring.collection.interval`, which
defaults to 10 seconds (`10s`), at either the node or cluster level.
Fundamentally, each collector works on the same principle. Per collection
interval, each collector is checked to see whether it should run and then the
appropriate collectors run. The failure of an individual collector does not
impact any other collector.
Once collection has completed, all of the monitoring data is passed to the
exporters to route the monitoring data to the monitoring clusters.
If gaps exist in the monitoring charts in {kib}, it is typically because either
a collector failed or the monitoring cluster did not receive the data (for
@ -112,7 +124,7 @@ For more information about the configuration options for the collectors, see
[[es-monitoring-stack]]
==== Collecting data from across the Elastic Stack
{es} {monitor-features} also receive monitoring data from other parts of the
Elastic Stack. In this way, it serves as an unscheduled monitoring data
collector for the stack.
@ -125,7 +137,7 @@ Once data is received, it is forwarded to the exporters
to be routed to the monitoring cluster like all monitoring data.
WARNING: Because this stack-level "collector" lives outside of the collection
interval of the {es} {monitor-features}, it is not impacted by the
`xpack.monitoring.collection.interval` setting. Therefore, data is passed to the
exporters whenever it is received. This behavior can result in indices for {kib},
Logstash, or Beats being created somewhat unexpectedly.

View File

@ -3,6 +3,18 @@
[[es-monitoring-exporters]]
== Exporters
[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.
If you have previously configured internal collection, you should migrate to
using {metricbeat} collection. Use either {metricbeat} collection or
internal collection; do not use both.
Learn more about <<configuring-metricbeat>>.
=========================
The purpose of exporters is to take data collected from any Elastic Stack
source and route it to the monitoring cluster. It is possible to configure
more than one exporter, but the general and default setup is to use a single
@ -11,13 +23,13 @@ exporter.
There are two types of exporters in {es}:
`local`::
The default exporter used by {es} {monitor-features}. This exporter routes data
back into the _same_ cluster. See <<local-exporter>>.
`http`::
The preferred exporter, which you can use to route data into any supported
{es} cluster accessible via HTTP. Production environments should always use a
separate monitoring cluster. See <<http-exporter>>.
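As a minimal sketch (the exporter names are arbitrary and `es-mon-1` is a placeholder host), both types could be declared side by side in `elasticsearch.yml`:
[source,yaml]
----------------------------------
xpack.monitoring.exporters:
  my_local:
    type: local
  my_remote:
    type: http
    host: ["http://es-mon-1:9200"]
----------------------------------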
Both exporters serve the same purpose: to set up the monitoring cluster and route
monitoring data. However, they perform these tasks in very different ways. Even
@ -34,43 +46,43 @@ IMPORTANT: It is critical that all nodes share the same setup. Otherwise,
monitoring data might be routed in different ways or to different places.
When the exporters route monitoring data into the monitoring cluster, they use
`_bulk` indexing for optimal performance. All monitoring data is forwarded in
bulk to all enabled exporters on the same node. From there, the exporters
serialize the monitoring data and send a bulk request to the monitoring cluster.
There is no queuing--in memory or persisted to disk--so any failure during the
export results in the loss of that batch of monitoring data. This design limits
the impact on {es} and the assumption is that the next pass will succeed.
Routing monitoring data involves indexing it into the appropriate monitoring
indices. Once the data is indexed, it exists in a monitoring index that, by
default, is named with a daily index pattern. For {es} monitoring data, this is
an index that matches `.monitoring-es-6-*`. From there, the data lives inside
the monitoring cluster and must be curated or cleaned up as necessary. If you do
not curate the monitoring data, it eventually fills up the nodes and the cluster
might fail due to lack of disk space.
TIP: It is strongly recommended that you manage the curation of indices, and
particularly the monitoring indices. To do so, you can take advantage of the
<<local-exporter-cleaner,cleaner service>> or
{curator-ref-current}/index.html[Elastic Curator].
//TO-DO: Add information about index lifecycle management https://github.com/elastic/x-pack-elasticsearch/issues/2814
There is also a disk watermark (known as the flood stage
watermark), which protects clusters from running out of disk space. When this
feature is triggered, it makes all indices (including monitoring indices)
read-only until the issue is fixed and a user manually makes the index writeable
again. While an active monitoring index is read-only, it will naturally fail to
write (index) new data and will continuously log errors that indicate the write
failure. For more information, see
{ref}/disk-allocator.html[Disk-based Shard Allocation].
[float]
[[es-monitoring-default-exporter]]
=== Default exporters
If a node or cluster does not explicitly define an exporter, the following
default exporter is used:
[source,yaml]
---------------------------------------------------
@ -117,7 +129,7 @@ information, see <<http-exporter-settings>>.
WARNING: Some users create their own templates that match _all_ index patterns,
which therefore impact the monitoring indices that get created. It is critical
that you do not disable `_source` storage for the monitoring indices. If you do,
the {kib} {monitor-features} do not work and you cannot visualize monitoring data
for your cluster.
The following table lists the ingest pipelines that are required before an
@ -127,7 +139,7 @@ exporter can route monitoring data:
|=======================
| Pipeline | Purpose
| `xpack_monitoring_2` | Upgrades X-Pack monitoring data coming from X-Pack
5.0 - 5.4 to be compatible with the format used in 5.5 {monitor-features}.
| `xpack_monitoring_6` | A placeholder pipeline that is empty.
|=======================
@ -153,8 +165,9 @@ whether this update has occurred is by checking for the presence of indices
matching `.monitoring-es-6-*` (or more concretely the existence of the
new pipeline). Versions prior to 5.5 used `.monitoring-es-2-*`.
Each resource that is created by an exporter has a `version` field,
which is used to determine whether the resource should be replaced. The `version`
field value represents the latest version of {monitor-features} that changed the
resource. If a resource is edited by someone or something external to the
{monitor-features}, those changes are lost the next time an automatic update
occurs.

View File

@ -10,18 +10,12 @@ Each {es} node, {ls} node, {kib} instance, and Beat is considered unique in the
cluster based on its persistent UUID, which is written to the
<<path-settings,`path.data`>> directory when the node or instance starts.
Monitoring documents are just ordinary JSON documents built by monitoring each
{stack} component at a specified collection interval. If you want to alter the
templates for these indices, see <<config-monitoring-indices>>.
{metricbeat} is used to collect monitoring data and to ship it directly to the
monitoring cluster.
To learn how to collect monitoring data, see:
@ -32,7 +26,7 @@ To learn how to collect monitoring data, see:
* Monitoring Beats:
** {auditbeat-ref}/monitoring.html[{auditbeat}]
** {filebeat-ref}/monitoring.html[{filebeat}]
** {functionbeat-ref}/monitoring.html[{functionbeat}]
** {heartbeat-ref}/monitoring.html[{heartbeat}]
** {metricbeat-ref}/monitoring.html[{metricbeat}]
** {packetbeat-ref}/monitoring.html[{packetbeat}]

View File

@ -3,16 +3,29 @@
[[http-exporter]]
=== HTTP exporters
[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.
If you have previously configured internal collection, you should migrate to
using {metricbeat} collection. Use either {metricbeat} collection or
internal collection; do not use both.
Learn more about <<configuring-metricbeat>>.
=========================
The `http` exporter is the preferred exporter in the {es} {monitor-features}
because it enables the use of a separate monitoring cluster. As a secondary
benefit, it avoids using a production cluster node as a coordinating node for
indexing monitoring data because all requests are HTTP requests to the
monitoring cluster.
The `http` exporter uses the low-level {es} REST Client, which enables it to
send its data to any {es} cluster it can access through the network. Its requests
make use of the <<common-options-response-filtering,`filter_path`>> parameter to
reduce bandwidth whenever possible, which helps to ensure that communications
between the production and monitoring clusters are as lightweight as possible.
The `http` exporter supports a number of settings that control how it
communicates over HTTP to remote clusters. In most cases, it is not
@ -43,13 +56,13 @@ xpack.monitoring.exporters:
----------------------------------
<1> A `local` exporter defined explicitly whose arbitrary name is `my_local`.
<2> An `http` exporter defined whose arbitrary name is `my_remote`. This name
uniquely defines the exporter but is otherwise unused.
<3> `host` is a required setting for `http` exporters. It must specify the HTTP
port rather than the transport port. The default port value is `9200`.
<4> User authentication for those using {stack} {security-features} or some other
form of user authentication protecting the cluster.
<5> See <<http-exporter-settings>> for all TLS/SSL settings. If not supplied,
the default node-level TLS/SSL settings are used.
<6> Optional base path to prefix any outgoing request with in order to
work with proxies.
@ -57,13 +70,13 @@ the default node-level TLS/SSL settings are used.
The array-based key/value format sends one header per value.
<8> A mechanism for changing the date suffix used by default.
NOTE: The `http` exporter accepts an array of `hosts` and it will round robin
through the list. It is a good idea to take advantage of that feature when the
monitoring cluster contains more than one node.
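For instance, a sketch of an exporter that round robins across two monitoring nodes, reusing the placeholder hosts from the other examples in these docs:
[source,yaml]
----------------------------------
xpack.monitoring.exporters:
  my_remote:
    type: http
    host: ["http://es-mon-1:9200", "http://es-mon-2:9200"]
----------------------------------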
Unlike the `local` exporter, _every_ node that uses the `http` exporter attempts
to check and create the resources that it needs. The `http` exporter avoids
re-checking the resources unless something triggers it to perform the checks
again. These triggers include:
* The production cluster's node restarts.
@ -73,21 +86,21 @@ again. These triggers include:
The easiest way to trigger a check is to disable, then re-enable the exporter.
WARNING: This resource management behavior can create a hole for users that
delete monitoring resources. Since the `http` exporter does not re-check its
resources unless one of the triggers occurs, this can result in malformed index
mappings.
Unlike the `local` exporter, the `http` exporter is inherently routing requests
outside of the cluster. This situation means that the exporter must provide a
username and password when the monitoring cluster requires one (or other
appropriate security configurations, such as TLS/SSL settings).
IMPORTANT: When discussing security relative to the `http` exporter, it is
critical to remember that all users are managed on the monitoring cluster. This
is particularly important to remember when you move from development
environments to production environments, where you often have dedicated
monitoring clusters.
For more information about the configuration options for the `http` exporter,
see <<http-exporter-settings>>.

View File

@ -6,14 +6,14 @@
[partintro]
--
The {stack} {monitor-features} provide a way to keep a pulse on the health and
performance of your {es} cluster.
* <<monitoring-overview>>
* <<how-monitoring-works>>
* <<monitoring-production>>
* <<configuring-metricbeat>>
* <<configuring-filebeat>>
* <<collecting-monitoring-data>>
* <<config-monitoring-indices>>
* <<es-monitoring-collectors>>
* <<es-monitoring-exporters>>

View File

@ -3,10 +3,22 @@
[[local-exporter]]
=== Local exporters
[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.
If you have previously configured internal collection, you should migrate to
using {metricbeat} collection. Use either {metricbeat} collection or
internal collection; do not use both.
Learn more about <<configuring-metricbeat>>.
=========================
The `local` exporter is the default exporter in {monitoring}. It routes data
back into the same (local) cluster. In other words, it uses the production
cluster as the monitoring cluster. For example:
[source,yaml]
---------------------------------------------------
xpack.monitoring.exporters.my_local_exporter: <1>
@ -16,57 +28,57 @@ xpack.monitoring.exporters.my_local_exporter: <1>
This exporter exists to provide a convenient option when hardware is simply not
available. It is also a way for developers to get an idea of what their actions
do for pre-production clusters when they do not have the time or resources to
provide a separate monitoring cluster. However, this exporter has disadvantages
that impact the local cluster:
* All indexing impacts the local cluster and the nodes that hold the monitoring
indices' shards.
* Most collectors run on the elected master node. Therefore most indexing occurs
with the elected master node as the coordinating node, which is a bad practice.
* Any usage of {monitoring} for {kib} uses the local cluster's resources for
searches and aggregations, which means that they might not be available for
non-monitoring tasks.
* If the local cluster goes down, the monitoring cluster has inherently gone
down with it (and vice versa), which generally defeats the purpose of monitoring.
For the `local` exporter, all setup occurs only on the elected master node. This
means that if you do not see any monitoring templates or ingest pipelines, the
elected master node is having issues or it is not configured in the same way.
Unlike the `http` exporter, the `local` exporter has the advantage of accessing
the monitoring cluster's up-to-date cluster state. It can therefore always check
that the templates and ingest pipelines exist without a performance penalty. If
the elected master node encounters errors while trying to create the monitoring
resources, it logs errors, ignores that collection, and tries again after the
next collection.
The elected master node is the only node to set up resources for the `local`
exporter. Therefore all other nodes wait for the resources to be set up before
indexing any monitoring data from their own collectors. Each of these nodes logs
a message indicating that they are waiting for the resources to be set up.
One benefit of the `local` exporter is that it lives within the cluster and
therefore no extra configuration is required when the cluster is secured with
{stack} {security-features}. All operations, including indexing operations, that
occur from a `local` exporter make use of the internal transport mechanisms
within {es}. This behavior enables the exporter to be used without providing any
user credentials when {security-features} are enabled.
For more information about the configuration options for the `local` exporter,
see <<local-exporter-settings>>.
[[local-exporter-cleaner]]
==== Cleaner service
One feature of the `local` exporter, which is not present in the `http` exporter,
is a cleaner service. The cleaner service runs once per day at 01:00 AM UTC on
the elected master node.
The role of the cleaner service is to clean, or curate, the monitoring indices
that are older than a configurable amount of time (the default is `7d`). This
cleaner exists as part of the `local` exporter as a safety mechanism. The `http`
exporter does not make use of it because it could enable a single misconfigured
node to prematurely curate data from other production clusters that share the
same monitoring cluster.
In a dedicated monitoring cluster, the cleaning service can be used without
@ -78,6 +90,6 @@ xpack.monitoring.collection.enabled: false <1>
xpack.monitoring.history.duration: 3d <2>
---------------------------------------------------
<1> Disable the collection of data on the monitoring cluster.
<2> Lower the default history duration from `7d` to `3d`. The minimum value is
`1d`. This setting can be modified only when using a Gold or higher level
license. For the Basic license level, it uses the default of 7 days.

View File

@ -8,13 +8,25 @@ not. For example, you can use {metricbeat} to ship monitoring data about {kib},
{es}, {ls}, and Beats to the monitoring cluster.
//If you are sending your data to the {esms-init}, see <<esms>>.
[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.
If you have previously configured internal collection, you should migrate to
using {metricbeat} collection. Use either {metricbeat} collection or
internal collection; do not use both.
Learn more about <<configuring-metricbeat>>.
=========================
If you have at least a gold license, using a dedicated monitoring cluster also
enables you to monitor multiple clusters from a central location.
To store monitoring data in a separate cluster:
. Set up the {es} cluster you want to use as the monitoring cluster.
For example, you might set up a two-host cluster with the nodes `es-mon-1` and
`es-mon-2`.
+
--
@ -27,10 +39,10 @@ cluster; it does not need to be a dedicated ingest node.
===============================
--
.. (Optional) Verify that the collection of monitoring data is disabled on the
monitoring cluster. By default, the `xpack.monitoring.collection.enabled` setting
is `false`.
+
--
For example, you can use the following APIs to review and change this setting:
@ -48,28 +60,28 @@ PUT _cluster/settings
// TEST[skip:security errs]
--
.. If the {es} {security-features} are enabled on the monitoring cluster, create
users that can send and retrieve monitoring data.
+
--
NOTE: If you plan to use {kib} to view monitoring data, username and password
credentials must be valid on both the {kib} server and the monitoring cluster.
--
*** If you plan to use {metricbeat} to collect data about {es} or {kib},
create a user that has the `remote_monitoring_collector` built-in role and a
user that has the `remote_monitoring_agent`
<<built-in-roles-remote-monitoring-agent,built-in role>>. Alternatively, use the
`remote_monitoring_user` <<built-in-users,built-in user>>.
*** If you plan to use HTTP exporters to route data through your production
cluster, create a user that has the `remote_monitoring_agent`
<<built-in-roles-remote-monitoring-agent,built-in role>>.
+
--
For example, the
following request creates a `remote_monitor` user that has the
`remote_monitoring_agent` role:
[source,console]
@ -83,11 +95,11 @@ POST /_security/user/remote_monitor
---------------------------------------------------------------
// TEST[skip:needs-gold+-license]
Alternatively, use the `remote_monitoring_user` <<built-in-users,built-in user>>.
--
. Configure your production cluster to collect data and send it to the
monitoring cluster.
** <<configuring-metricbeat,Use {metricbeat}>>.
@ -97,13 +109,13 @@ monitoring cluster.
{logstash-ref}/configuring-logstash.html[Configure {ls} to collect data and send it to the monitoring cluster].
. (Optional) Configure the Beats to collect data and send it to the monitoring
cluster.
** {auditbeat-ref}/monitoring.html[Auditbeat]
** {filebeat-ref}/monitoring.html[Filebeat]
** {heartbeat-ref}/monitoring.html[Heartbeat]
** {metricbeat-ref}/monitoring.html[Metricbeat]
** {packetbeat-ref}/monitoring.html[Packetbeat]
** {winlogbeat-ref}/monitoring.html[Winlogbeat]
. (Optional) Configure {kib} to collect data and send it to the monitoring cluster:
@ -111,13 +123,13 @@ cluster.
** {kibana-ref}/monitoring-kibana.html[Use HTTP exporters].
. (Optional) Create a dedicated {kib} instance for monitoring, rather than using
a single {kib} instance to access both your production cluster and monitoring
cluster.
.. (Optional) Disable the collection of monitoring data in this {kib} instance.
Set the `xpack.monitoring.kibana.collection.enabled` setting to `false` in the
`kibana.yml` file. For more information about this setting, see
{kibana-ref}/monitoring-settings-kb.html[Monitoring settings in {kib}].
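+
--
A minimal sketch of that setting in `kibana.yml`:
[source,yaml]
----------------------------------
xpack.monitoring.kibana.collection.enabled: false
----------------------------------
--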
. {kibana-ref}/monitoring-data.html[Configure {kib} to retrieve and display the monitoring data].