NOTE: {xpack}/index.html[X-Pack] is preinstalled in this image.
Please take a few minutes to familiarize yourself with {xpack}/security-getting-started.html[X-Pack Security] and how to change default passwords. The default password for the `elastic` user is `changeme`.
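For example, the `elastic` user's password could be changed through the X-Pack change password API. A minimal sketch, assuming the container's port 9200 is published on `localhost` and using an example new password:

["source","sh"]
--------------------------------------------
curl -u elastic:changeme -H 'Content-Type: application/json' \
     -X POST 'http://localhost:9200/_xpack/security/user/elastic/_password' \
     -d '{"password": "mynewpassword"}'
--------------------------------------------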
NOTE: X-Pack includes a trial license for 30 days. After that, you can obtain one of the https://www.elastic.co/subscriptions[available subscriptions] or {xpack}/security-settings.html[disable Security]. The Basic license is free and includes the https://www.elastic.co/products/x-pack/monitoring[Monitoring] extension.
Then configure the `sysctl` setting as you would for Linux:
+
["source","sh"]
--------------------------------------------
sysctl -w vm.max_map_count=262144
--------------------------------------------
+
* OSX with https://docs.docker.com/engine/installation/mac/#docker-toolbox[Docker Toolbox]
+
The `vm.max_map_count` setting must be set via docker-machine:
+
["source","sh"]
--------------------------------------------
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
--------------------------------------------
=========================
The following example brings up a cluster comprising two Elasticsearch nodes.
To bring up the cluster, use the <<docker-prod-cluster-composefile,`docker-compose.yml`>> and just type:
ifeval::["{release-state}"=="unreleased"]
WARNING: Version {version} of the Elasticsearch Docker image has not yet been released, so a `docker-compose.yml` is not available for this version.
endif::[]
ifeval::["{release-state}"!="unreleased"]
["source","sh"]
--------------------------------------------
docker-compose up
--------------------------------------------
endif::[]
[NOTE]
`docker-compose` is not pre-installed with Docker on Linux.
Instructions for installing it can be found on the https://docs.docker.com/compose/install/#install-using-pip[docker-compose webpage].
The node `elasticsearch1` listens on `localhost:9200` while `elasticsearch2` talks to `elasticsearch1` over a Docker network.
This example also uses https://docs.docker.com/engine/tutorials/dockervolumes[Docker named volumes] called `esdata1` and `esdata2`, which will be created if not already present.
[[docker-prod-cluster-composefile]]
`docker-compose.yml`:
ifeval::["{release-state}"=="unreleased"]
WARNING: Version {version} of the Elasticsearch Docker image has not yet been released, so a `docker-compose.yml` is not available for this version.

endif::[]
To stop the cluster, type `docker-compose down`. Data volumes will persist, so it's possible to start the cluster again with the same data using `docker-compose up`.
To destroy the cluster **and the data volumes** just type `docker-compose down -v`.
Elasticsearch loads its configuration from files under `/usr/share/elasticsearch/config/`. These configuration files are documented in <<settings>> and <<jvm-options>>.
The image offers several methods for configuring Elasticsearch settings. The conventional approach is to provide customized configuration files, such as `elasticsearch.yml`, but it is also possible to use environment variables to set options:
===== A. Present the parameters via Docker environment variables
For example, to define the cluster name with `docker run` you can pass `-e "cluster.name=mynewclustername"`. Double quotes are required.
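A minimal illustration with `docker run` (the cluster name and any other options are examples only):

["source","sh",subs="attributes"]
--------------------------------------------
docker run -p 9200:9200 -e "cluster.name=mynewclustername" docker.elastic.co/elasticsearch/elasticsearch:{version}
--------------------------------------------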
===== B. Bind-mounted configuration
Create your custom config file and mount it over the image's corresponding file.
For example, bind-mounting a `custom_elasticsearch.yml` with `docker run` can be accomplished with the parameter:
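["source","sh"]
--------------------------------------------
# "full_path_to" is a placeholder for the absolute path of the file on your host
-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
--------------------------------------------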
IMPORTANT: The container **runs Elasticsearch as user `elasticsearch` using uid:gid `1000:1000`**. Bind-mounted host directories and files, such as `custom_elasticsearch.yml` above, **need to be accessible by this user**. For the https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#path-settings[data and log dirs], such as `/usr/share/elasticsearch/data`, write access is required as well.
In some environments, it may make more sense to prepare a custom image containing your configuration. A `Dockerfile` to achieve this may be as simple as:
["source","sh",subs="attributes"]
--------------------------------------------
FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
COPY custom_elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
--------------------------------------------
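You could then build and try the image with something along these lines (the tag and volume name are illustrative):

["source","sh"]
--------------------------------------------
docker build --tag=elasticsearch-custom .
docker run -ti -v esdata:/usr/share/elasticsearch/data elasticsearch-custom
--------------------------------------------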
. Elasticsearch runs inside the container as user `elasticsearch` using uid:gid `1000:1000`. If you are bind-mounting a local directory or file, ensure it is readable by this user, while the <<path-settings,data and log dirs>> additionally require write access.
+
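For instance, a host directory intended for bind-mounting as the data dir could be prepared along these lines (the path is an example):
+
["source","sh"]
--------------------------------------------
mkdir -p ./esdata
sudo chown -R 1000:1000 ./esdata
--------------------------------------------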
. It is important to ensure increased ulimits for <<setting-system-settings,nofile>> and <<max-number-threads-check,nproc>> are available for the Elasticsearch containers. Verify that the https://github.com/moby/moby/tree/ea4d1243953e6b652082305a9c3cda8656edab26/contrib/init[init system] for the Docker daemon is already setting those to acceptable values and, if needed, adjust them in the daemon, or override them per container, for example using `docker run`:
+
["source","sh"]
--------------------------------------------
--ulimit nofile=65536:65536
--------------------------------------------
+
NOTE: One way of checking the Docker daemon defaults for the aforementioned ulimits is by running:
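+
["source","sh"]
--------------------------------------------
# any throwaway container inherits the daemon's default ulimits; centos:7 is used here only as an example
docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
--------------------------------------------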
. Swapping needs to be disabled for performance and node stability. This can be achieved through any of the methods mentioned in the <<setup-configuration-memory,Elasticsearch docs>>. If you opt for the `bootstrap.memory_lock: true` approach, then in addition to defining it through any of the <<docker-configuration-methods,configuration methods>>, you will also need the `memlock` ulimit set to unlimited, either defined in the https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker Daemon] or specifically set for the container. This has been demonstrated earlier in the <<docker-prod-cluster-composefile,docker-compose.yml>>, or can be done with `docker run`:
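+
["source","sh"]
--------------------------------------------
# fragment of a docker run command line; combine with your other options
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
--------------------------------------------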
. The image https://docs.docker.com/engine/reference/builder/#/expose[exposes] TCP ports 9200 and 9300. For clusters it is recommended to randomize the published ports with `--publish-all`, unless you are pinning one container per host.
+
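A brief sketch (the container name is an example); `docker port` shows which host ports were assigned:
+
["source","sh",subs="attributes"]
--------------------------------------------
docker run -d --publish-all --name es1 docker.elastic.co/elasticsearch/elasticsearch:{version}
docker port es1
--------------------------------------------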
. Use the `ES_JAVA_OPTS` environment variable to set heap size, e.g. to use 16GB use `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`. It is also recommended to set a https://docs.docker.com/engine/reference/run/#user-memory-constraints[memory limit] for the container.
+
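A minimal sketch (heap size and container memory limit are examples; size them for your host):
+
["source","sh",subs="attributes"]
--------------------------------------------
docker run -e ES_JAVA_OPTS="-Xms16g -Xmx16g" --memory=32g docker.elastic.co/elasticsearch/elasticsearch:{version}
--------------------------------------------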
. Pin your deployments to a specific version of the Elasticsearch Docker image, e.g. +docker.elastic.co/elasticsearch/elasticsearch:{version}+.
. Always use a volume bound on `/usr/share/elasticsearch/data`, as shown in the <<docker-cli-run-prod-mode,production example>>, for the following reasons:
.. The data of your Elasticsearch node won't be lost if the container is killed
.. Elasticsearch is I/O sensitive and the Docker storage driver is not ideal for fast I/O
.. It allows the use of advanced https://docs.docker.com/engine/extend/plugins/#volume-plugins[Docker volume plugins]
. If you are using the devicemapper storage driver (default on at least RedHat (rpm) based distributions) make sure you are not using the default `loop-lvm` mode. Configure docker-engine to use https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm] instead.
. Consider centralizing your logs by using a different https://docs.docker.com/engine/admin/logging/overview/[logging driver]. Also note that the default json-file logging driver is not ideally suited for production use.
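+
As an illustration, a container could ship its logs to a remote syslog endpoint (the driver and address below are examples only):
+
["source","sh",subs="attributes"]
--------------------------------------------
docker run --log-driver=syslog --log-opt syslog-address=udp://logs.example.com:514 docker.elastic.co/elasticsearch/elasticsearch:{version}
--------------------------------------------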