
[role="xpack"]
[testenv="basic"]
[[configuring-filebeat]]
== Collecting {es} log data with {filebeat}
[subs="attributes"]
++++
<titleabbrev>Collecting log data with {filebeat}</titleabbrev>
++++

You can use {filebeat} to monitor the {es} log files, collect log events, and
ship them to the monitoring cluster. Your recent logs are visible on the
*Monitoring* page in {kib}.

//NOTE: The tagged regions are re-used in the Stack Overview.

. Verify that {es} is running and that the monitoring cluster is ready to
receive data from {filebeat}.
+
--
TIP: In production environments, we strongly recommend using a separate cluster
(referred to as the _monitoring cluster_) to store the data. Using a separate
monitoring cluster prevents production cluster outages from impacting your
ability to access your monitoring data. It also prevents monitoring activities
from impacting the performance of your production cluster. See
<<monitoring-production>>.
--
. Enable the collection of monitoring data on your cluster.
+
--
include::configuring-metricbeat.asciidoc[tag=enable-collection]

For more information, see <<monitoring-settings>> and <<cluster-update-settings>>.
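
As a quick check from the command line, a request along these lines enables
collection; this sketch assumes the production cluster is reachable on
`localhost:9200` without {security-features} enabled (otherwise pass
credentials with `-u`):

[source,sh]
----------------------------------
# Turn on monitoring collection; the setting is dynamic and survives restarts.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
'
----------------------------------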
--
. Identify which logs you want to monitor.
+
--
The {filebeat} {es} module can handle
{stack-ov}/audit-log-output.html[audit logs],
{ref}/logging.html#deprecation-logging[deprecation logs],
{ref}/gc-logging.html[gc logs], {ref}/logging.html[server logs], and
{ref}/index-modules-slowlog.html[slow logs].

For more information about the location of your {es} logs, see the
{ref}/path-settings.html[path.logs] setting.

IMPORTANT: If there are both structured (`*.json`) and unstructured (plain text)
versions of the logs, you must use the structured logs. Otherwise, they might
not appear in the appropriate context in {kib}.
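
For example, to see which structured logs are present on a node, list the log
directory; this sketch assumes a package installation with the default
`path.logs` of `/var/log/elasticsearch`:

[source,sh]
----------------------------------
# Structured logs end in .json; point the module at these files.
ls /var/log/elasticsearch/*.json
----------------------------------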
--
. {filebeat-ref}/filebeat-installation.html[Install {filebeat}] on the {es}
nodes that contain logs that you want to monitor.
. Identify where to send the log data.
+
--
// tag::output-elasticsearch[]
For example, specify {es} output information for your monitoring cluster in
the {filebeat} configuration file (`filebeat.yml`):

[source,yaml]
----------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"] <1>

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
----------------------------------
<1> In this example, the data is stored on a monitoring cluster with nodes
`es-mon-1` and `es-mon-2`.

If you configured the monitoring cluster to use encrypted communications, you
must access it via HTTPS. For example, use a `hosts` setting like
`https://es-mon-1:9200`.

IMPORTANT: The {es} {monitor-features} use ingest pipelines; therefore, the
cluster that stores the monitoring data must have at least one
<<ingest,ingest node>>.

If {es} {security-features} are enabled on the monitoring cluster, you must
provide a valid user ID and password so that {filebeat} can send metrics
successfully.

For more information about these configuration options, see
{filebeat-ref}/elasticsearch-output.html[Configure the {es} output].
// end::output-elasticsearch[]
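
To confirm that the monitoring cluster has an ingest node, you can inspect the
node roles; a sketch, assuming the monitoring cluster from the example above
and no {security-features} enabled:

[source,sh]
----------------------------------
# The node.role column includes "i" for nodes that have the ingest role.
curl -X GET "http://es-mon-1:9200/_cat/nodes?v&h=name,node.role"
----------------------------------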
--
. Optional: Identify where to visualize the data.
+
--
// tag::setup-kibana[]
{filebeat} provides example {kib} dashboards, visualizations and searches. To
load the dashboards into the appropriate {kib} instance, specify the
`setup.kibana` information in the {filebeat} configuration file
(`filebeat.yml`) on each node:

[source,yaml]
----------------------------------
setup.kibana:
  host: "localhost:5601"
  #username: "my_kibana_user"
  #password: "YOUR_PASSWORD"
----------------------------------

TIP: In production environments, we strongly recommend using a dedicated {kib}
instance for your monitoring cluster.

If {security-features} are enabled, you must provide a valid user ID and
password so that {filebeat} can connect to {kib}:

.. Create a user on the monitoring cluster that has the
{stack-ov}/built-in-roles.html[`kibana_user` built-in role] or equivalent
privileges.

.. Add the `username` and `password` settings to the {es} output information in
the {filebeat} configuration file. The example shows a hard-coded password, but
you should store sensitive values in the
{filebeat-ref}/keystore.html[secrets keystore].

See {filebeat-ref}/setup-kibana-endpoint.html[Configure the {kib} endpoint].
// end::setup-kibana[]
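
For example, one way to avoid a hard-coded password is to store it in the
{filebeat} secrets keystore and reference the key from `filebeat.yml` instead;
a sketch, assuming the key name `ES_PWD` (any name works):

[source,sh]
----------------------------------
# Create the keystore once, then add the password under a key of your choosing;
# the "add" command prompts for the value. Reference it in filebeat.yml as
#   password: "${ES_PWD}"
filebeat keystore create
filebeat keystore add ES_PWD
----------------------------------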
--
. Enable the {es} module and set up the initial {filebeat} environment on each
node.
+
--
// tag::enable-es-module[]
For example:

["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
filebeat modules enable elasticsearch
filebeat setup -e
----------------------------------------------------------------------

For more information, see
{filebeat-ref}/filebeat-module-elasticsearch.html[{es} module].
// end::enable-es-module[]
--
. Configure the {es} module in {filebeat} on each node.
+
--
// tag::configure-es-module[]
If the logs that you want to monitor aren't in the default location, set the
appropriate path variables in the `modules.d/elasticsearch.yml` file. See
{filebeat-ref}/filebeat-module-elasticsearch.html#configuring-elasticsearch-module[Configure the {es} module].

IMPORTANT: If there are JSON logs, configure the `var.paths` settings to point
to them instead of the plain text logs.
// end::configure-es-module[]
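
For example, a sketch of `modules.d/elasticsearch.yml` that points two of the
filesets at the structured logs; it assumes the default package log directory
and 7.x-style `*_server.json` and slow log file names, so adjust the globs to
match your deployment:

[source,yaml]
----------------------------------
- module: elasticsearch
  # Main server log, in its structured (JSON) form.
  server:
    enabled: true
    var.paths:
      - /var/log/elasticsearch/*_server.json
  # Search and indexing slow logs, also in their structured form.
  slowlog:
    enabled: true
    var.paths:
      - /var/log/elasticsearch/*_index_search_slowlog.json
      - /var/log/elasticsearch/*_index_indexing_slowlog.json
----------------------------------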
--
. {filebeat-ref}/filebeat-starting.html[Start {filebeat}] on each node.
+
--
NOTE: Depending on how you've installed {filebeat}, you might see errors related
to file ownership or permissions when you try to run {filebeat} modules. See
{beats-ref}/config-file-permissions.html[Config file ownership and permissions].
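
For example, on a systemd-based DEB or RPM installation, something like the
following starts {filebeat} as a service; for an archive installation, run
`./filebeat -e` from the install directory instead:

[source,sh]
----------------------------------
# Start the service now and have it start again after a reboot.
sudo systemctl start filebeat
sudo systemctl enable filebeat
----------------------------------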
--
. Check whether the appropriate indices exist on the monitoring cluster.
+
--
For example, use the {ref}/cat-indices.html[cat indices] command to verify
that there are new `filebeat-*` indices.

TIP: If you want to use the *Monitoring* UI in {kib}, there must also be
`.monitoring-*` indices. Those indices are generated when you collect metrics
about {stack} products. For example, see <<configuring-metricbeat>>.
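
For example, a quick check from the command line, assuming the monitoring
cluster from the earlier example and no {security-features} enabled (otherwise
pass credentials with `-u`):

[source,sh]
----------------------------------
# filebeat-* indices confirm that log data is arriving;
# .monitoring-* indices confirm that metrics collection is working too.
curl -X GET "http://es-mon-1:9200/_cat/indices/filebeat-*,.monitoring-*?v"
----------------------------------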
--
. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}].