[[settings]]
== Configuring Elasticsearch

Elasticsearch ships with good defaults and requires very little configuration.
Most settings can be changed on a running cluster using the
<<cluster-update-settings>> API.

The configuration files should contain settings which are node-specific (such
as `node.name` and paths), or settings which a node requires in order to be
able to join a cluster, such as `cluster.name` and `network.host`.

[float]
=== Config file location

Elasticsearch has two configuration files:

* `elasticsearch.yml` for configuring Elasticsearch, and
* `log4j2.properties` for configuring Elasticsearch logging.

These files are located in the config directory, whose location defaults to
`$ES_HOME/config/`. The Debian and RPM packages set the config directory
location to `/etc/elasticsearch/`.
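
As a quick check, you can list the config directory to see both files. This is
a minimal sketch assuming an archive (tar.gz or zip) installation, with
`$ES_HOME` standing in for the installation directory; other files may also be
present depending on the version and packaging:

[source,sh]
-------------------------------
# List the files in the default config directory of an archive install
ls "$ES_HOME"/config/
# elasticsearch.yml  log4j2.properties  ...
-------------------------------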

The location of the config directory can be changed with the `path.conf`
setting, as follows:

[source,sh]
-------------------------------
./bin/elasticsearch -Epath.conf=/path/to/my/config/
-------------------------------

[float]
=== Config file format

The configuration format is http://www.yaml.org/[YAML]. Here is an
example of changing the path of the data and logs directories:

[source,yaml]
--------------------------------------------------
path:
    data: /var/lib/elasticsearch
    logs: /var/log/elasticsearch
--------------------------------------------------

Settings can also be flattened as follows:

[source,yaml]
--------------------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
--------------------------------------------------

[float]
=== Environment variable substitution

Environment variables referenced with the `${...}` notation within the
configuration file will be replaced with the value of the environment
variable, for instance:

[source,yaml]
--------------------------------------------------
node.name:    ${HOSTNAME}
network.host: ${ES_NETWORK_HOST}
--------------------------------------------------
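
With the configuration above, the value can be supplied through the environment
before starting Elasticsearch. A minimal sketch, where the address is an
arbitrary example value:

[source,sh]
-------------------------------
# Hypothetical address; ES_NETWORK_HOST is the variable referenced above
export ES_NETWORK_HOST=192.168.0.1
./bin/elasticsearch
-------------------------------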

[float]
=== Prompting for settings

For settings that you do not wish to store in the configuration file, you can
use the value `${prompt.text}` or `${prompt.secret}` and start Elasticsearch
in the foreground. `${prompt.secret}` has echoing disabled so that the value
entered will not be shown in your terminal; `${prompt.text}` will allow you to
see the value as you type it in. For example:

[source,yaml]
--------------------------------------------------
node:
  name: ${prompt.text}
--------------------------------------------------

When starting Elasticsearch, you will be prompted to enter the actual value
like so:

[source,sh]
--------------------------------------------------
Enter value for [node.name]:
--------------------------------------------------

NOTE: Elasticsearch will not start if `${prompt.text}` or `${prompt.secret}`
is used in the settings and the process is run as a service or in the background.

[float]
=== Setting default settings

New default settings may be specified on the command line using the
`default.` prefix. This will specify a value that will be used by
default unless another value is specified in the config file.

For instance, if Elasticsearch is started as follows:

[source,sh]
---------------------------
./bin/elasticsearch -Edefault.node.name=My_Node
---------------------------

the value for `node.name` will be `My_Node`, unless it is overwritten on the
command line with `es.node.name` or in the config file with `node.name`.
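
As a sketch of that precedence, suppose the config file also sets `node.name`
explicitly (the value below is hypothetical); the explicit setting is used
instead of the `default.` value:

[source,yaml]
--------------------------------------------------
# elasticsearch.yml (hypothetical value): this explicit setting is used
# even if -Edefault.node.name=My_Node is passed on the command line
node.name: node-from-config
--------------------------------------------------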

[float]
[[logging]]
== Logging configuration

Elasticsearch uses http://logging.apache.org/log4j/2.x/[Log4j 2] for
logging. Log4j 2 can be configured using the `log4j2.properties`
file. Elasticsearch exposes three properties, `${sys:es.logs.base_path}`,
`${sys:es.logs.cluster_name}`, and `${sys:es.logs.node_name}` (if the node name
is explicitly set via `node.name`) that can be referenced in the configuration
file to determine the location of the log files. The property
`${sys:es.logs.base_path}` will resolve to the log directory,
`${sys:es.logs.cluster_name}` will resolve to the cluster name (used as the
prefix of log filenames in the default configuration), and
`${sys:es.logs.node_name}` will resolve to the node name (if the node name is
explicitly set).

For example, if your log directory (`path.logs`) is `/var/log/elasticsearch` and
your cluster is named `production`, then `${sys:es.logs.base_path}` will resolve
to `/var/log/elasticsearch` and
`${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log`
will resolve to `/var/log/elasticsearch/production.log`.

[source,properties]
--------------------------------------------------
appender.rolling.type = RollingFile <1>
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log <2>
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %.10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log <3>
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <4>
appender.rolling.policies.time.interval = 1 <5>
appender.rolling.policies.time.modulate = true <6>
--------------------------------------------------

<1> Configure the `RollingFile` appender
<2> Log to `/var/log/elasticsearch/production.log`
<3> Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd.log`
<4> Using a time-based roll policy
<5> Roll logs on a daily basis
<6> Align rolls on the day boundary (as opposed to rolling every twenty-four
    hours)

If you append `.gz` or `.zip` to `appender.rolling.filePattern`, then the logs
will be compressed as they are rolled.
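
For example, appending `.gz` to the `filePattern` from the configuration above
produces compressed rolled logs; only this one line changes:

[source,properties]
--------------------------------------------------
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log.gz
--------------------------------------------------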

If you want to retain log files for a specified period of time, you can use a
rollover strategy with a delete action.

[source,properties]
--------------------------------------------------
appender.rolling.strategy.type = DefaultRolloverStrategy <1>
appender.rolling.strategy.action.type = Delete <2>
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path} <3>
appender.rolling.strategy.action.condition.type = IfLastModified <4>
appender.rolling.strategy.action.condition.age = 7D <5>
appender.rolling.strategy.action.PathConditions.type = IfFileName <6>
appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}-* <7>
--------------------------------------------------

<1> Configure the `DefaultRolloverStrategy`
<2> Configure the `Delete` action for handling rollovers
<3> The base path to the Elasticsearch logs
<4> The condition to apply when handling rollovers
<5> Retain logs for seven days
<6> Only delete files older than seven days if they match the specified glob
<7> Delete files from the base path matching the glob
    `${sys:es.logs.cluster_name}-*`; this is the glob that log files are rolled
    to, and it is needed so that only the rolled Elasticsearch logs are deleted,
    not the deprecation and slow logs as well.

Multiple configuration files can be loaded (in which case they will get merged)
as long as they are named `log4j2.properties` and have the Elasticsearch config
directory as an ancestor; this is useful for plugins that expose additional
loggers. The logger section contains the Java packages and their corresponding
log level. The appender section contains the destinations for the logs.
Extensive information on how to customize logging and all the supported
appenders can be found in the
http://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j
documentation].
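
As an illustration of what a logger entry looks like, the following sketch sets
the level for a single Java package; the `transport` logger name, package, and
level are examples only:

[source,properties]
--------------------------------------------------
# Example only: raise the log level for one Java package
logger.transport.name = org.elasticsearch.transport
logger.transport.level = debug
--------------------------------------------------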

[float]
[[deprecation-logging]]
=== Deprecation logging

In addition to regular logging, Elasticsearch allows you to enable logging
of deprecated actions. For example, this allows you to determine early if
you will need to migrate certain functionality in the future. By default,
deprecation logging is enabled at the WARN level, the level at which all
deprecation log messages will be emitted.

[source,properties]
--------------------------------------------------
logger.deprecation.level = warn
--------------------------------------------------

This will create a daily rolling deprecation log file in your log directory.
Check this file regularly, especially when you intend to upgrade to a new
major version.

The default logging configuration sets the roll policy for the deprecation
logs to roll and compress after 1 GB, and to preserve a maximum of five log
files (four rolled logs, and the active log).
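
The shipped `config/log4j2.properties` contains the authoritative defaults.
Purely as a sketch of what such a size-based roll policy looks like in Log4j 2
properties syntax, the `deprecation_rolling` appender name and the values below
are assumptions for illustration, not a copy of the shipped file:

[source,properties]
--------------------------------------------------
# Assumed appender name and values, for illustration only
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
--------------------------------------------------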

You can disable deprecation logging in the `config/log4j2.properties` file by
setting the deprecation log level to `error`.
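
Concretely, that means changing the deprecation logger entry shown above to:

[source,properties]
--------------------------------------------------
logger.deprecation.level = error
--------------------------------------------------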