[[docker]]
=== Install {es} with Docker

{es} is also available as Docker images. The images use
https://hub.docker.com/_/centos/[centos:7] as the base image.

A list of all published Docker images and tags is available at
https://www.docker.elastic.co[www.docker.elastic.co]. The source files are in
https://github.com/elastic/elasticsearch/blob/{branch}/distribution/docker[GitHub].

These images are free to use under the Elastic license. They contain open
source and free commercial features and access to paid commercial features.
{xpack-ref}/license-management.html[Start a 30-day trial] to try out all of the
paid commercial features. See the
https://www.elastic.co/subscriptions[Subscriptions] page for information about
Elastic license levels.

==== Pulling the image

Obtaining {es} for Docker is as simple as issuing a +docker pull+ command
against the Elastic Docker registry.

ifeval::["{release-state}"=="unreleased"]

WARNING: Version {version} of {es} has not yet been released, so no Docker
image is currently available for this version.

endif::[]

ifeval::["{release-state}"!="unreleased"]

["source","sh",subs="attributes"]
--------------------------------------------
docker pull {docker-repo}:{version}
--------------------------------------------

Alternatively, you can download other Docker images that contain only features
available under the Apache 2.0 license. To download the images, go to
https://www.docker.elastic.co[www.docker.elastic.co].

endif::[]

[[docker-cli-run]]
==== Running {es} from the command line

[[docker-cli-run-dev-mode]]
===== Development mode

ifeval::["{release-state}"=="unreleased"]

WARNING: Version {version} of the {es} Docker image has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

{es} can be quickly started for development or testing use with the following
command:

["source","sh",subs="attributes"]
--------------------------------------------
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" {docker-image}
--------------------------------------------

Note the use of single-node discovery, which allows bypassing the bootstrap
checks in a single-node development cluster.
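If you prefer to keep the container running in the background, a minimal sketch
is shown below. The container name `es-dev` is arbitrary, and the `curl` call
assumes the default port mapping used above:

["source","sh",subs="attributes"]
--------------------------------------------
docker run -d --name es-dev -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" {docker-image}

# Once the node has finished starting up, it answers on the published HTTP port.
curl http://localhost:9200/
--------------------------------------------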
endif::[]

[[docker-cli-run-prod-mode]]
===== Production mode

[[docker-prod-prerequisites]]
[IMPORTANT]
=========================
The `vm.max_map_count` kernel setting needs to be set to at least `262144` for
production use. Depending on your platform:

* Linux
+
--
The `vm.max_map_count` setting should be set permanently in `/etc/sysctl.conf`:

[source,sh]
--------------------------------------------
$ grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
--------------------------------------------

To apply the setting on a live system, type: `sysctl -w vm.max_map_count=262144`
--

* macOS with https://docs.docker.com/engine/installation/mac/#/docker-for-mac[Docker for Mac]
+
--
The `vm.max_map_count` setting must be set within the xhyve virtual machine:

["source","sh"]
--------------------------------------------
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
--------------------------------------------

Just press enter and configure the `sysctl` setting as you would for Linux:

["source","sh"]
--------------------------------------------
sysctl -w vm.max_map_count=262144
--------------------------------------------
--

* Windows and macOS with https://www.docker.com/products/docker-toolbox[Docker Toolbox]
+
--
The `vm.max_map_count` setting must be set via docker-machine:

["source","txt"]
--------------------------------------------
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
--------------------------------------------
--
=========================

The following example brings up a cluster comprising two {es} nodes. To bring
up the cluster, use the <<docker-prod-cluster-composefile,docker-compose.yml>>
below and just type:

ifeval::["{release-state}"=="unreleased"]

WARNING: Version {version} of {es} has not yet been released, so a
`docker-compose.yml` is not available for this version.

endif::[]

ifeval::["{release-state}"!="unreleased"]

["source","sh"]
--------------------------------------------
docker-compose up
--------------------------------------------

endif::[]

[NOTE]
`docker-compose` is not pre-installed with Docker on Linux. Instructions for
installing it can be found on the
https://docs.docker.com/compose/install/#install-using-pip[Docker Compose webpage].

The node `es01` listens on `localhost:9200` while `es02` talks to `es01` over a
Docker network.

This example also uses
https://docs.docker.com/engine/tutorials/dockervolumes[Docker named volumes],
called `esdata01` and `esdata02`, which will be created if not already present.

[[docker-prod-cluster-composefile]]
`docker-compose.yml`:

ifeval::["{release-state}"=="unreleased"]

WARNING: Version {version} of {es} has not yet been released, so a
`docker-compose.yml` is not available for this version.

endif::[]

ifeval::["{release-state}"!="unreleased"]

["source","yaml",subs="attributes"]
--------------------------------------------
version: '2.2'
services:
  es01:
    image: {docker-image}
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  es02:
    image: {docker-image}
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet

volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local

networks:
  esnet:
--------------------------------------------

endif::[]
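If you would rather get your shell back while the nodes start, one option (a
sketch, not part of the example above) is to run Compose in detached mode and
then poll the cluster health API until the cluster reports at least `yellow`
status:

["source","sh"]
--------------------------------------------
docker-compose up -d

# The nodes need a little time to start accepting connections; retry until the
# HTTP port answers. The health API then blocks until the requested status is
# reached or the timeout expires.
until curl -s "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=60s"; do
  sleep 5
done
--------------------------------------------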
To stop the cluster, type `docker-compose down`. Data volumes will persist, so
it's possible to start the cluster again with the same data using
`docker-compose up`. To destroy the cluster **and the data volumes**, just type
`docker-compose down -v`.

===== Inspect the status of the cluster

["source","txt"]
--------------------------------------------
curl http://127.0.0.1:9200/_cat/health
1472225929 15:38:49 docker-cluster green 2 2 4 2 0 0 0 0 - 100.0%
--------------------------------------------
// NOTCONSOLE

Log messages go to the console and are handled by the configured Docker logging
driver. By default you can access logs with `docker logs`.

[[docker-configuration-methods]]
==== Configuring {es} with Docker

{es} loads its configuration from files under
`/usr/share/elasticsearch/config/`. These configuration files are documented in
the configuration settings and JVM options sections of this reference.

The image offers several methods for configuring {es} settings. The
conventional approach is to provide customized configuration files, such as
`elasticsearch.yml`, but it's also possible to use environment variables to set
options:

===== A. Present the parameters via Docker environment variables
For example, to define the cluster name with `docker run` you can pass
`-e "cluster.name=mynewclustername"`. Double quotes are required.

===== B. Bind-mounted configuration
Create your custom config file and mount this over the image's corresponding
file. For example, bind-mounting a `custom_elasticsearch.yml` with `docker run`
can be accomplished with the parameter:

["source","sh"]
--------------------------------------------
-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
--------------------------------------------

IMPORTANT: The container **runs {es} as user `elasticsearch` using uid:gid
`1000:1000`**. Bind mounted host directories and files, such as
`custom_elasticsearch.yml` above, **need to be accessible by this user**. For
the data and log directories, such as `/usr/share/elasticsearch/data`, write
access is required as well. Also see note 1 below.

===== C. Customized image
In some environments, it may make more sense to prepare a custom image
containing your configuration. A `Dockerfile` to achieve this may be as simple
as:

["source","sh",subs="attributes"]
--------------------------------------------
FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
--------------------------------------------

You could then build and run the image with something like:

["source","sh"]
--------------------------------------------
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
--------------------------------------------

Some plugins require additional security permissions. You have to explicitly
accept them either by attaching a `tty` when you run the Docker image and
accepting `yes` at the prompts, or by inspecting the security permissions
separately and, if you are comfortable with them, adding the `--batch` flag to
the plugin install command. See
{plugins}/_other_command_line_parameters.html[Plugin Management documentation]
for more details.
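If you know in advance which plugins you need, a customized image (approach C)
can also bake them in at build time. The sketch below uses the `analysis-icu`
plugin purely as an example; `--batch` accepts the required permissions
non-interactively, as described above:

["source","sh",subs="attributes"]
--------------------------------------------
FROM docker.elastic.co/elasticsearch/elasticsearch:{version}

# Install a plugin at image build time; --batch skips the interactive
# permission prompts.
RUN bin/elasticsearch-plugin install --batch analysis-icu
--------------------------------------------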
[[override-image-default]]
===== D. Override the image's default https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options[CMD]

Options can be passed as command-line options to the {es} process by overriding
the default command for the image. For example:

["source","sh",subs="attributes"]
--------------------------------------------
docker run {docker-image} bin/elasticsearch -Ecluster.name=mynewclustername
--------------------------------------------

[[next-getting-started-tls-docker]]
==== Configuring SSL/TLS with the {es} Docker image

See the steps for encrypting communications in an {es} Docker container in the
security documentation.

==== Notes for production use and defaults

We have collected a number of best practices for production use. Any Docker
parameters mentioned below assume the use of `docker run`.

. By default, {es} runs inside the container as user `elasticsearch` using
uid:gid `1000:1000`.
+
--
CAUTION: One exception is
https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.html#openshift-specific-guidelines[Openshift],
which runs containers using an arbitrarily assigned user ID. Openshift will
present persistent volumes with the gid set to `0`, which will work without any
adjustments.

If you are bind-mounting a local directory or file, ensure it is readable by
this user, while the data and log directories additionally require write
access. A good strategy is to grant group access to gid `1000` or `0` for the
local directory. As an example, to prepare a local directory for storing data
through a bind-mount:

  mkdir esdatadir
  chmod g+rwx esdatadir
  chgrp 1000 esdatadir

As a last resort, you can also force the container to mutate the ownership of
any bind-mounts used for the data and log directories through the environment
variable `TAKE_FILE_OWNERSHIP`. In this case, they will be owned by uid:gid
`1000:0`, providing read/write access to the {es} process as required.
--

. It is important to ensure increased ulimits for `nofile` and `nproc` are
available for the {es} containers. Verify the
https://github.com/moby/moby/tree/ea4d1243953e6b652082305a9c3cda8656edab26/contrib/init[init system]
for the Docker daemon is already setting those to acceptable values and, if
needed, adjust them in the Daemon, or override them per container, for example
using `docker run`:
+
--
  --ulimit nofile=65535:65535

NOTE: One way of checking the Docker daemon defaults for the aforementioned
ulimits is by running:

  docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
--

. Swapping needs to be disabled for performance and node stability. This can be
achieved through any of the methods described in the documentation on disabling
swapping. If you opt for the `bootstrap.memory_lock: true` approach, apart from
defining it through any of the <<docker-configuration-methods,configuration methods>>,
you will additionally need the `memlock` ulimit, either defined in the
https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker Daemon]
or specifically set for the container. This is demonstrated above in the
<<docker-prod-cluster-composefile,docker-compose.yml>>. If using `docker run`:
+
--
  -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
--

. The image https://docs.docker.com/engine/reference/builder/#/expose[exposes]
TCP ports 9200 and 9300. For clusters, it is recommended to randomize the
published ports with `--publish-all`, unless you are pinning one container per
host.

. Use the `ES_JAVA_OPTS` environment variable to set the heap size. For
example, to use 16GB, use `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`.
+
--
NOTE: You still need to configure the heap size even if you are
https://docs.docker.com/config/containers/resource_constraints/#limit-a-containers-access-to-memory[limiting memory access]
to the container.
--

. Pin your deployments to a specific version of the {es} Docker image, for
example +docker.elastic.co/elasticsearch/elasticsearch:{version}+.

. Always use a volume bound on `/usr/share/elasticsearch/data`, as shown in the
<<docker-prod-cluster-composefile,production example>>, for the following
reasons:

.. The data of your {es} node won't be lost if the container is killed

.. {es} is I/O sensitive and the Docker storage driver is not ideal for fast I/O

.. It allows the use of advanced
https://docs.docker.com/engine/extend/plugins/#volume-plugins[Docker volume plugins]

. If you are using the devicemapper storage driver, make sure you are not using
the default `loop-lvm` mode. Configure docker-engine to use
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm]
instead.

. Consider centralizing your logs by using a different
https://docs.docker.com/engine/admin/logging/overview/[logging driver]. Also
note that the default json-file logging driver is not ideally suited for
production use.
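Putting several of these recommendations together, a production-oriented
`docker run` invocation might look like the following sketch. The container
name `es-prod`, the node name `es01`, the host directory `/srv/esdata`, and the
16GB heap are examples only; prepare the data directory permissions as
described in note 1 and adjust the values for your environment:

["source","sh",subs="attributes"]
--------------------------------------------
docker run -d --name es-prod \
  -p 9200:9200 -p 9300:9300 \
  --ulimit nofile=65535:65535 --ulimit memlock=-1:-1 \
  -e "bootstrap.memory_lock=true" \
  -e ES_JAVA_OPTS="-Xms16g -Xmx16g" \
  -e "node.name=es01" \
  -e "cluster.initial_master_nodes=es01" \
  -v /srv/esdata:/usr/share/elasticsearch/data \
  {docker-image}
--------------------------------------------

For a multi-node cluster you would additionally set `discovery.seed_hosts` on
each node, as in the `docker-compose.yml` example above.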
include::next-steps.asciidoc[]