[DOCS] Add X-Pack monitoring details for Elasticsearch (elastic/x-pack-elasticsearch#4328)
Original commit: elastic/x-pack-elasticsearch@0d160df9b6
Parent: 0ac8b78986
Commit: fa44406cea

@@ -1,6 +1,6 @@
[role="xpack"]
[[es-monitoring-collectors]]
== Collectors

Collectors, as their name implies, collect things. Each collector runs once for
each collection interval to obtain data from the public APIs in {es} and {xpack}

@@ -85,17 +85,19 @@ discover issues with nodes communicating with each other, but not with the
monitoring cluster (for example, intermittent network issues or memory pressure).
|=======================

{monitoring} uses a single-threaded scheduler to run the collection of {es}
monitoring data by all of the appropriate collectors on each node. This
scheduler is managed locally by each node and its interval is controlled by
specifying the `xpack.monitoring.collection.interval` setting, which defaults to
10 seconds (`10s`), at either the node or cluster level.
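
For example, a minimal `elasticsearch.yml` sketch that changes the interval at
the node level (the `30s` value is only illustrative):

[source,yaml]
----------------------------------
xpack.monitoring.collection.interval: 30s
----------------------------------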

Fundamentally, each collector works on the same principle. Per collection
interval, each collector is checked to see whether it should run and then the
appropriate collectors run. The failure of an individual collector does not
impact any other collector.

Once collection has completed, all of the monitoring data is passed to the
exporters to route the monitoring data to the monitoring clusters.

The collection interval can be configured dynamically and you can also disable
data collection. This can be very useful when you are using a separate
monitoring cluster to automatically take advantage of the cleaner service.

If gaps exist in the monitoring charts in {kib}, it is typically because either
a collector failed or the monitoring cluster did not receive the data (for

@@ -112,8 +114,9 @@ NOTE: Collection is currently done serially, rather than in parallel, to avoid
For more information about the configuration options for the collectors, see
<<monitoring-collection-settings>>.

[float]
[[es-monitoring-stack]]
=== Collecting data from across the Elastic Stack

{monitoring} in {es} also receives monitoring data from other parts of the
Elastic Stack. In this way, it serves as an unscheduled monitoring data

@@ -12,4 +12,4 @@ indices. You can also adjust how monitoring data is displayed. For more
information, see <<es-monitoring>>.

include::indices.asciidoc[]
include::{xes-repo-dir}/settings/monitoring-settings.asciidoc[]

@@ -1,6 +1,6 @@
[role="xpack"]
[[es-monitoring-exporters]]
== Exporters

The purpose of exporters is to take data collected from any Elastic Stack
source and route it to the monitoring cluster. It is possible to configure

@@ -11,13 +11,12 @@ There are two types of exporters in {es}:

`local`::
The default exporter used by {monitoring} for {es}. This exporter routes data
back into the _same_ cluster. See <<local-exporter>>.

`http`::
The preferred exporter, which you can use to route data into any supported
{es} cluster accessible via HTTP. Production environments should always use a
separate monitoring cluster. See <<http-exporter>>.
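
For example, a minimal `elasticsearch.yml` sketch that defines one exporter of
each type (the exporter names and the host value are only illustrative):

[source,yaml]
----------------------------------
xpack.monitoring.exporters:
  my_local:
    type: local
  my_remote:
    type: http
    host: [ "10.1.2.3:9200" ]
----------------------------------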

Both exporters serve the same purpose: to set up the monitoring cluster and route
monitoring data. However, they perform these tasks in very different ways. Even

@@ -34,13 +33,49 @@ IMPORTANT: It is critical that all nodes share the same setup. Otherwise,
monitoring data might be routed in different ways or to different places.

When the exporters route monitoring data into the monitoring cluster, they use
`_bulk` indexing for optimal performance. All monitoring data is forwarded in
bulk to all enabled exporters on the same node. From there, the exporters
serialize the monitoring data and send a bulk request to the monitoring cluster.
There is no queuing--in memory or persisted to disk--so any failure during the
export results in the loss of that batch of monitoring data. This design limits
the impact on {es} and the assumption is that the next pass will succeed.

Routing monitoring data involves indexing it into the appropriate monitoring
indices. Once the data is indexed, it exists in a monitoring index that, by
default, is named with a daily index pattern. For {es} monitoring data, this is
an index that matches `.monitoring-es-6-*`. From there, the data lives inside
the monitoring cluster and must be curated or cleaned up as necessary. If you do
not curate the monitoring data, it eventually fills up the nodes and the cluster
might fail due to lack of disk space.

TIP: It is strongly recommended that you manage the curation of indices,
particularly the monitoring indices. To do so, you can take advantage of the
<<local-exporter-cleaner,cleaner service>> or
{curator-ref-current}/index.html[Elastic Curator].
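
For example, a minimal sketch that shortens the retention period enforced by
the <<local-exporter-cleaner,cleaner service>> (the `3d` value is only
illustrative):

[source,yaml]
----------------------------------
xpack.monitoring.history.duration: 3d
----------------------------------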

//TO-DO: Add information about index lifecycle management https://github.com/elastic/x-pack-elasticsearch/issues/2814

When using cluster alerts, {watcher} creates daily `.watcher_history*` indices.
These are not managed by {monitoring} and they are not curated automatically. It
is therefore critical that you curate these indices to avoid an undesirable and
unexpected increase in the number of shards and indices and, eventually, the
amount of disk usage. If you are using a `local` exporter, you can set the
`xpack.watcher.history.cleaner_service.enabled` setting to `true` and curate the
`.watcher_history*` indices by using the
<<local-exporter-cleaner,cleaner service>>. See <<general-notification-settings>>.
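
For example, a minimal `elasticsearch.yml` sketch that enables this curation
when a `local` exporter is in use:

[source,yaml]
----------------------------------
xpack.watcher.history.cleaner_service.enabled: true
----------------------------------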

There is also a disk watermark (known as the flood stage watermark), which
protects clusters from running out of disk space. When this feature is
triggered, it makes all indices (including monitoring indices) read-only until
the issue is fixed and a user manually makes the index writeable again. While an
active monitoring index is read-only, it will naturally fail to write (index)
new data and will continuously log errors that indicate the write failure. For
more information, see {ref}/disk-allocator.html[Disk-based Shard Allocation].

[float]
[[es-monitoring-default-exporter]]
=== Default exporters

If a node or cluster does not explicitly define an {monitoring} exporter, the
following default exporter is used:

@@ -58,8 +93,9 @@ If another exporter is already defined, the default exporter is _not_ created.
When you define a new exporter, if the default exporter exists, it is
automatically removed.

[float]
[[es-monitoring-templates]]
=== Exporter templates and ingest pipelines

Before exporters can route monitoring data, they must set up certain {es}
resources. These resources include templates and ingest pipelines. The

@@ -124,3 +160,12 @@ subsequent data on the monitoring cluster. The easiest way to determine
whether this update has occurred is by checking for the presence of indices
matching `.monitoring-es-6-*` (or more concretely the existence of the
new pipeline). Versions prior to 5.5 used `.monitoring-es-2-*`.

Each resource that is created by an {monitoring} exporter has a `version` field,
which is used to determine whether the resource should be replaced. The `version`
field value represents the latest version of {monitoring} that changed the
resource. If a resource is edited by someone or something external to
{monitoring}, those changes are lost the next time an automatic update occurs.

include::local-export.asciidoc[]
include::http-export.asciidoc[]

@@ -1,18 +1,22 @@
[role="xpack"]
[[http-exporter]]
=== HTTP Exporters

When you configure an exporter in `elasticsearch.yml`, the default `local`
exporter is disabled. The `http` exporter is the preferred exporter in
{monitoring} because it enables the use of a separate monitoring cluster. As a
secondary benefit, it avoids using a production cluster node as a coordinating
node for indexing monitoring data because all requests are HTTP requests to the
monitoring cluster.

The `http` exporter uses the low-level {es} REST Client, which enables it to
send its data to any {es} cluster it can access through the network. Its requests
make use of the <<common-options-response-filtering,`filter_path`>> parameter to
reduce bandwidth whenever possible, which helps to ensure that communications
between the production and monitoring clusters are as lightweight as possible.

The `http` exporter supports a number of settings that control how it
communicates over HTTP to remote clusters. In most cases, it is not
necessary to explicitly configure these settings. For detailed
descriptions, see <<monitoring-settings>>.

[source,yaml]
----------------------------------
@@ -21,7 +25,7 @@ xpack.monitoring.exporters:
    type: local
  my_remote: <2>
    type: http
    host: [ "10.1.2.3:9200", ... ] <3>
    auth: <4>
      username: my_username
      password: changeme
@@ -38,15 +42,75 @@ xpack.monitoring.exporters:

----------------------------------
<1> A `local` exporter defined explicitly whose arbitrary name is `my_local`.
<2> An `http` exporter defined whose arbitrary name is `my_remote`. This name
uniquely defines the exporter but is otherwise unused.
<3> `host` is a required setting for `http` exporters. It must specify the HTTP
port rather than the transport port. The default port value is `9200`.
<4> User authentication for those using {security} or some other
form of user authentication protecting the cluster.
<5> See <<http-exporter-settings>> for all TLS/SSL settings. If not supplied,
the default node-level TLS/SSL settings are used.
<6> Optional base path to prefix any outgoing request with in order to
work with proxies.
<7> Arbitrary key/value pairs to define as headers to send with every request.
The array-based key/value format sends one header per value.
<8> A mechanism for changing the date suffix used by default.

NOTE: The `http` exporter accepts an array of `hosts` and round-robins through
the list. It is a good idea to take advantage of that feature when the
monitoring cluster contains more than one node, as shown in the sketch below.
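
For example, a minimal sketch with two monitoring nodes (the exporter name and
the host addresses are only illustrative):

[source,yaml]
----------------------------------
xpack.monitoring.exporters:
  my_remote:
    type: http
    host: [ "10.1.2.3:9200", "10.1.2.4:9200" ]
----------------------------------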

Unlike the `local` exporter, _every_ node that uses the `http` exporter attempts
to check and create the resources that it needs. The `http` exporter avoids
re-checking the resources unless something triggers it to perform the checks
again. These triggers include:

* The production cluster's node restarts.
* A connection failure to the monitoring cluster.
* The license on the production cluster changes.
* The `http` exporter is dynamically updated (and it is therefore replaced).

The easiest way to trigger a check is to disable, then re-enable the exporter.

WARNING: This resource management behavior can cause problems for users who
delete monitoring resources. Because the `http` exporter does not re-check its
resources unless one of the triggers occurs, deleted resources are not recreated
promptly, which can result in malformed index mappings.

Unlike the `local` exporter, the `http` exporter inherently routes requests
outside of the cluster. This means that the exporter must provide a username and
password when the monitoring cluster requires one (or other appropriate security
configurations, such as TLS/SSL settings).

IMPORTANT: When discussing security relative to the `http` exporter, it is
critical to remember that all users are managed on the monitoring cluster. This
is particularly important to remember when you move from development
environments to production environments, where you often have dedicated
monitoring clusters.

For more information about the configuration options for the `http` exporter,
see <<http-exporter-settings>>.

[float]
[[http-exporter-dns]]
==== Using DNS Hosts in HTTP Exporters

{monitoring} runs inside the JVM security manager. When the security manager is
enabled, the JVM caches DNS lookups indefinitely (for example, the mapping of a
DNS hostname to an IP address). For this reason, if you are in an environment
where the DNS response might change from time to time (for example, when talking
to any load-balanced cloud provider), you are strongly discouraged from using
DNS hostnames.

Alternatively, you can set the JVM security property `networkaddress.cache.ttl`,
which accepts values in seconds. This property must be set for the JVM of any
node that uses {monitoring} for {es} and relies on DNS names whose IP addresses
can change. If you do not apply this setting, the connection consistently fails
after the IP address changes.

IMPORTANT: JVM security properties are different from system properties. They
cannot be set at startup via `-D` system property settings; instead, they must
be set in code before the security manager has been set up _or_, more
appropriately, in the `$JAVA_HOME/lib/security/java.security` file.
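
For example, a sketch of the relevant line in `java.security`, assuming that a
60-second cache is acceptable in your environment (the value is only
illustrative):

[source,properties]
----------------------------------
networkaddress.cache.ttl=60
----------------------------------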

Restarting the node (and therefore the JVM) results in its cache being flushed.

@@ -11,22 +11,35 @@ Each {es} node is considered unique based on its persistent UUID, which is
written on first start to its <<path-settings,`path.data`>> directory, which
defaults to `./data`.

All settings associated with {monitoring} in {es} must be set in either the
`elasticsearch.yml` file for each node or, where possible, in the dynamic
cluster settings. For more information, see <<configuring-monitoring>>.

[[es-monitoring-overview]]
{es} is also at the core of {monitoring} across the Elastic Stack. In all cases,
{monitoring} documents are just ordinary JSON documents built by monitoring each
Elastic Stack component at some collection interval, then indexing those
documents into the monitoring cluster. Each component in the stack is
responsible for monitoring itself and then forwarding those documents to {es}
for both routing and indexing (storage).

The routing and indexing processes in {es} are handled by what are called
<<es-monitoring-collectors,collectors>> and
<<es-monitoring-exporters,exporters>>. In the past, collectors and exporters
were considered to be part of a monitoring "agent", but that term is generally
not used anymore.

You can view monitoring data from {kib} where it’s easy to spot issues at a
glance or delve into the system behavior over time to diagnose operational
issues. In addition to the built-in status warnings, you can also set up custom
alerts based on the data in the monitoring indices.

For an introduction to monitoring your Elastic Stack, including Beats, Logstash,
and {kib}, see {xpack-ref}/xpack-monitoring.html[Monitoring the Elastic Stack].

--

include::collectors.asciidoc[]
include::exporters.asciidoc[]
include::pause-export.asciidoc[]

@@ -0,0 +1,82 @@
[role="xpack"]
[[local-exporter]]
=== Local Exporters

The `local` exporter is the default exporter in {monitoring}. It routes data
back into the same (local) cluster. In other words, it uses the production
cluster as the monitoring cluster. For example:

[source,yaml]
---------------------------------------------------
xpack.monitoring.exporters.my_local_exporter: <1>
  type: local
---------------------------------------------------
<1> The exporter name uniquely defines the exporter, but it is otherwise unused.

This exporter exists to provide a convenient option when hardware is simply not
available. It also gives developers a way to see the effect of their actions on
pre-production clusters when they do not have the time or resources to provide a
separate monitoring cluster. However, this exporter has disadvantages that
impact the local cluster:

* All indexing impacts the local cluster and the nodes that hold the monitoring
indices' shards.
* Most collectors run on the elected master node. Therefore most indexing occurs
with the elected master node as the coordinating node, which is a bad practice.
* Any usage of {monitoring} for {kib} uses the local cluster's resources for
searches and aggregations, which means that they might not be available for
non-monitoring tasks.
* If the local cluster goes down, the monitoring cluster has inherently gone
down with it (and vice versa), which generally defeats the purpose of monitoring.

For the `local` exporter, all setup occurs only on the elected master node. This
means that if you do not see any monitoring templates or ingest pipelines, the
elected master node is having issues or it is not configured in the same way.
Unlike the `http` exporter, the `local` exporter has the advantage of accessing
the monitoring cluster's up-to-date cluster state. It can therefore always check
that the templates and ingest pipelines exist without a performance penalty. If
the elected master node encounters errors while trying to create the monitoring
resources, it logs errors, ignores that collection, and tries again after the
next collection.

The elected master node is the only node to set up resources for the `local`
exporter. Therefore all other nodes wait for the resources to be set up before
indexing any monitoring data from their own collectors. Each of these nodes logs
a message indicating that they are waiting for the resources to be set up.

One benefit of the `local` exporter is that it lives within the cluster and
therefore no extra configuration is required when the cluster is secured with
{security}. All operations, including indexing operations, that occur from a
`local` exporter make use of the internal transport mechanisms within {es}. This
behavior enables the exporter to be used without providing any user credentials
when {security} is enabled.

For more information about the configuration options for the `local` exporter,
see <<local-exporter-settings>>.

[[local-exporter-cleaner]]
==== Cleaner Service

One feature of the `local` exporter, which is not present in the `http` exporter,
is a cleaner service. The cleaner service runs once per day at 01:00 AM UTC on
the elected master node.

The role of the cleaner service is to clean, or curate, the monitoring indices
that are older than a configurable amount of time (the default is `7d`). This
cleaner exists as part of the `local` exporter as a safety mechanism. The `http`
exporter does not make use of it because it could enable a single misconfigured
node to prematurely curate data from other production clusters that share the
same monitoring cluster.

In a dedicated monitoring cluster, the cleaner service can be used without
having to also monitor the monitoring cluster. For example:

[source,yaml]
---------------------------------------------------
xpack.monitoring.collection.enabled: false <1>
xpack.monitoring.history.duration: 3d <2>
---------------------------------------------------
<1> Disable the collection of data on the monitoring cluster.
<2> Lower the default history duration from `7d` to `3d`. The minimum value is
`1d`. This setting can be modified only when using a Gold or higher level
license. For the Basic license level, it uses the default of 7 days.

@@ -1,26 +0,0 @@
[role="xpack"]
[[es-monitoring-overview]]
== Monitoring Overview
++++
<titleabbrev>Overview</titleabbrev>
++++

{es} is at the core of the {monitoring}. In all cases, {monitoring} documents
are just ordinary JSON documents built by monitoring each Elastic Stack
component at some collection interval (`10s` by default), then indexing those
documents into the monitoring cluster. Each component in the stack is
responsible for monitoring itself and then forwarding those documents to {es}
for both routing and indexing (storage).

For {es}, this process is handled by what are called
<<es-monitoring-collectors,collectors>> and
<<es-monitoring-exporters,exporters>>. In the past, collectors and exporters
were considered to be part of a monitoring "agent", but that term is generally
not used anymore.

All settings associated with {monitoring} in {es} must be set in either the
`elasticsearch.yml` file for each node or, where possible, in the dynamic
cluster settings. For more information, see <<monitoring-settings>>.

include::collectors.asciidoc[]
include::exporters.asciidoc[]

@@ -0,0 +1,35 @@
[role="xpack"]
[[pause-export]]
== Pausing Data Collection

To stop generating {monitoring} data in {es}, disable data collection:

[source,yaml]
---------------------------------------------------
xpack.monitoring.collection.enabled: false
---------------------------------------------------

When this setting is `false`, {es} monitoring data is not collected and all
monitoring data from other sources such as {kib}, Beats, and Logstash is ignored.

You can update this setting by using the
{ref}/cluster-update-settings.html[Cluster Update Settings API].

If you want to separately disable a specific exporter, you can specify the
`enabled` setting (which defaults to `true`) per exporter. For example:

[source,yaml]
---------------------------------------------------
xpack.monitoring.exporters.my_http_exporter:
  type: http
  host: ["10.1.2.3:9200", "10.1.2.4:9200"]
  enabled: false <1>
---------------------------------------------------
<1> Disable the named exporter. If the name does not match an existing exporter,
this setting creates a completely new exporter that is ignored. This value can
be set dynamically by using cluster settings.

NOTE: Defining a disabled exporter prevents the default exporter from being
created.

To restart data collection, re-enable these settings.

@@ -1,18 +0,0 @@
[role="xpack"]
[[stats-export]]
== Collecting Data from Particular Indices

By default, the monitoring agent collects data from all {es} indices.
To collect data from particular indices, configure the
`xpack.monitoring.collection.indices` setting in `elasticsearch.yml`.
You can specify multiple indices as a comma-separated list or
use an index pattern to match multiple indices:

[source,yaml]
----------------------------------
xpack.monitoring.collection.indices: logstash-*, index1, test2
----------------------------------

You can prepend `+` or `-` to explicitly include or exclude index
names or patterns. For example, to include all indices that
start with `test` except `test3`, you could specify `+test*,-test3`.