Update Docker docs for 6.0.0-rc2 (#27166)
* Update Docker docs for 6.0.0-rc2
* Update the docs to match the new Docker "image flavours" of "basic", "platinum", and "oss".
* Clarifications for Openshift and bind-mounts
* Bump docker-compose 2.x format to 2.2
* Combine Docker Toolbox instructions for setting vm.max_map_count for both macOS + Windows
* devicemapper is not the default storage driver any more on RHEL
Parent: fd73e5fa41 · Commit: b71f7d3559
@@ -14,7 +14,8 @@ release-state can be: released | prerelease | unreleased
 :issue: https://github.com/elastic/elasticsearch/issues/
 :pull: https://github.com/elastic/elasticsearch/pull/
 
-:docker-image: docker.elastic.co/elasticsearch/elasticsearch:{version}
+:docker-repo: docker.elastic.co/elasticsearch/elasticsearch
+:docker-image: {docker-repo}:{version}
 :plugin_url: https://artifacts.elastic.co/downloads/elasticsearch-plugins
 
 ///////

@@ -37,7 +37,9 @@ Elasticsearch on Windows. MSIs may be downloaded from the Elasticsearch website.
 
 `docker`::
 
-An image is available for running Elasticsearch as a Docker container. It ships with {xpack-ref}/index.html[X-Pack] pre-installed and may be downloaded from the Elastic Docker Registry.
+Images are available for running Elasticsearch as Docker containers. They may be
+downloaded from the Elastic Docker Registry. The default image ships with
+{xpack-ref}/index.html[X-Pack] pre-installed.
 +
 <<docker>>
 

@@ -1,32 +1,54 @@
 [[docker]]
 === Install Elasticsearch with Docker
 
-Elasticsearch is also available as a Docker image.
-The image is built with {xpack-ref}/index.html[X-Pack] and uses https://hub.docker.com/_/centos/[centos:7] as the base image.
-The source code can be found on https://github.com/elastic/elasticsearch-docker/tree/{branch}[GitHub].
+Elasticsearch is also available as Docker images.
+The images use https://hub.docker.com/_/centos/[centos:7] as the base image and
+are available with {xpack-ref}/xpack-introduction.html[X-Pack].
 
-==== Security note
+A list of all published Docker images and tags can be found at https://www.docker.elastic.co[www.docker.elastic.co]. The source code can be found
+on https://github.com/elastic/elasticsearch-docker/tree/{branch}[GitHub].
 
-NOTE: {xpack-ref}/index.html[X-Pack] is preinstalled in this image.
-Please take a few minutes to familiarize yourself with {xpack-ref}/security-getting-started.html[X-Pack Security] and how to change default passwords. The default password for the `elastic` user is `changeme`.
+==== Image types
 
-NOTE: X-Pack includes a trial license for 30 days. After that, you can obtain one of the https://www.elastic.co/subscriptions[available subscriptions] or {ref}/security-settings.html[disable Security]. The Basic license is free and includes the https://www.elastic.co/products/x-pack/monitoring[Monitoring] extension.
+The images are available in three different configurations or "flavors". The
+`basic` flavor, which is the default, ships with X-Pack Basic features
+pre-installed and automatically activated with a free license. The `platinum`
+flavor features all X-Pack functionality under a 30-day trial license. The `oss`
+flavor does not include X-Pack, and contains only open-source Elasticsearch.
+
+NOTE: {xpack-ref}/xpack-security.html[X-Pack Security] is enabled in the `platinum`
+image. To access your cluster, it's necessary to set an initial password for the
+`elastic` user. The initial password can be set at startup time via the
+`ELASTIC_PASSWORD` environment variable:
+
+["source","txt",subs="attributes"]
+--------------------------------------------
+docker run -e ELASTIC_PASSWORD=MagicWord {docker-repo}-platinum:{version}
+--------------------------------------------
+
+NOTE: The `platinum` image includes a trial license for 30 days. After that, you
+can obtain one of the https://www.elastic.co/subscriptions[available
+subscriptions] or revert to a Basic license. The Basic license is free and
+includes a selection of X-Pack features.
 
 Obtaining Elasticsearch for Docker is as simple as issuing a +docker pull+ command against the Elastic Docker registry.
 
 ifeval::["{release-state}"=="unreleased"]
 
-WARNING: Version {version} of Elasticsearch has not yet been released, so no Docker image is currently available for this version.
+WARNING: Version {version} of Elasticsearch has not yet been released, so no
+Docker image is currently available for this version.
 
 endif::[]
 
 ifeval::["{release-state}"!="unreleased"]
 
-The Docker image can be retrieved with the following command:
+Docker images can be retrieved with the following commands:
 
 ["source","sh",subs="attributes"]
 --------------------------------------------
-docker pull {docker-image}
+docker pull {docker-repo}:{version}
+docker pull {docker-repo}-platinum:{version}
+docker pull {docker-repo}-oss:{version}
 --------------------------------------------
 
 endif::[]

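For readers trying out the renamed images, a quick smoke test of the default `basic` flavor might look like the sketch below. The `6.0.0-rc2` tag, the `es-smoke-test` container name, and the single-node discovery setting are illustrative choices, not part of this diff:

["source","sh"]
--------------------------------------------
# Pull the default (basic) flavor and start a throwaway single-node container.
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.0.0-rc2
docker run -d --name es-smoke-test -p 9200:9200 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.0.0-rc2

# After the node finishes starting, confirm it responds (no auth on the basic flavor).
curl http://127.0.0.1:9200/
docker rm -f es-smoke-test
--------------------------------------------
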
@@ -76,7 +98,7 @@ vm.max_map_count=262144
 +
 To apply the setting on a live system type: `sysctl -w vm.max_map_count=262144`
 +
-* OSX with https://docs.docker.com/engine/installation/mac/#/docker-for-mac[Docker for Mac]
+* macOS with https://docs.docker.com/engine/installation/mac/#/docker-for-mac[Docker for Mac]
 +
 The `vm.max_map_count` setting must be set within the xhyve virtual machine:
 +

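To verify that the xhyve VM actually picked up the new value, one option is to read the kernel setting from any container, since containers share the VM's kernel (a minimal sketch; `alpine` is just a convenient small image):

["source","sh"]
--------------------------------------------
# Prints the VM kernel's current value; expect 262144 after applying the setting.
docker run --rm alpine cat /proc/sys/vm/max_map_count
--------------------------------------------
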
@@ -93,11 +115,11 @@ Then configure the `sysctl` setting as you would for Linux:
 sysctl -w vm.max_map_count=262144
 --------------------------------------------
 +
-* OSX with https://docs.docker.com/engine/installation/mac/#docker-toolbox[Docker Toolbox]
+* Windows and macOS with https://www.docker.com/products/docker-toolbox[Docker Toolbox]
 +
 The `vm.max_map_count` setting must be set via docker-machine:
 +
-["source","sh"]
+["source","txt"]
 --------------------------------------------
 docker-machine ssh
 sudo sysctl -w vm.max_map_count=262144

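Because `sysctl -w` inside the Toolbox VM does not survive a VM restart, it may also be worth persisting the setting in the VM's boot script. This is a sketch under the assumption of a default boot2docker-based Toolbox machine named `default`:

["source","sh"]
--------------------------------------------
# Check the current value, then persist it across VM restarts.
docker-machine ssh default "sysctl vm.max_map_count"
docker-machine ssh default \
  "echo 'sysctl -w vm.max_map_count=262144' | sudo tee -a /var/lib/boot2docker/bootlocal.sh"
# bootlocal.sh may also need to be made executable (sudo chmod +x).
--------------------------------------------
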
@@ -109,7 +131,8 @@ To bring up the cluster, use the <<docker-prod-cluster-composefile,`docker-compo
 
 ifeval::["{release-state}"=="unreleased"]
 
-WARNING: Version {version} of the Elasticsearch Docker image has not yet been released, so a `docker-compose.yml` is not available for this version.
+WARNING: Version {version} of Elasticsearch has not yet been released, so a
+`docker-compose.yml` is not available for this version.
 
 endif::[]
 

@@ -124,9 +147,11 @@ endif::[]
 
 [NOTE]
 `docker-compose` is not pre-installed with Docker on Linux.
-Instructions for installing it can be found on the https://docs.docker.com/compose/install/#install-using-pip[docker-compose webpage].
+Instructions for installing it can be found on the
+https://docs.docker.com/compose/install/#install-using-pip[Docker Compose webpage].
 
-The node `elasticsearch1` listens on `localhost:9200` while `elasticsearch2` talks to `elasticsearch1` over a Docker network.
+The node `elasticsearch` listens on `localhost:9200` while `elasticsearch2`
+talks to `elasticsearch` over a Docker network.
 
 This example also uses https://docs.docker.com/engine/tutorials/dockervolumes[Docker named volumes], called `esdata1` and `esdata2` which will be created if not already present.
 

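If `docker-compose` is not yet installed, the pip route that the linked page describes is roughly the following (a sketch; installing with `--user` or inside a virtualenv is a reasonable variation):

["source","sh"]
--------------------------------------------
pip install docker-compose
docker-compose --version
--------------------------------------------
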
@@ -134,18 +159,19 @@ This example also uses https://docs.docker.com/engine/tutorials/dockervolumes[Do
 `docker-compose.yml`:
 ifeval::["{release-state}"=="unreleased"]
 
-WARNING: Version {version} of the Elasticsearch Docker image has not yet been released, so a `docker-compose.yml` is not available for this version.
+WARNING: Version {version} of Elasticsearch has not yet been released, so a
+`docker-compose.yml` is not available for this version.
 
 endif::[]
 
 ifeval::["{release-state}"!="unreleased"]
 ["source","yaml",subs="attributes"]
 --------------------------------------------
-version: '2'
+version: 2.2
 services:
-  elasticsearch1:
-    image: docker.elastic.co/elasticsearch/elasticsearch:{version}
-    container_name: elasticsearch1
+  elasticsearch:
+    image: {docker-image}
+    container_name: elasticsearch
     environment:
       - cluster.name=docker-cluster
       - bootstrap.memory_lock=true

@@ -154,7 +180,6 @@ services:
       memlock:
         soft: -1
         hard: -1
-    mem_limit: 1g
     volumes:
       - esdata1:/usr/share/elasticsearch/data
     ports:

@@ -162,17 +187,17 @@ services:
     networks:
       - esnet
   elasticsearch2:
-    image: docker.elastic.co/elasticsearch/elasticsearch:{version}
+    image: {docker-image}
+    container_name: elasticsearch2
     environment:
       - cluster.name=docker-cluster
       - bootstrap.memory_lock=true
       - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
-      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
+      - "discovery.zen.ping.unicast.hosts=elasticsearch"
     ulimits:
       memlock:
         soft: -1
         hard: -1
-    mem_limit: 1g
     volumes:
       - esdata2:/usr/share/elasticsearch/data
     networks:

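Before starting the cluster defined in the `docker-compose.yml` above, it can be useful to let Compose validate and echo the effective configuration; these are standard Compose subcommands, nothing specific to this diff:

["source","sh"]
--------------------------------------------
docker-compose config   # validate and print the resolved configuration
docker-compose up -d    # start both nodes in the background
docker-compose ps       # confirm both containers are running
--------------------------------------------
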
@@ -190,20 +215,16 @@ networks:
 endif::[]
 
 To stop the cluster, type `docker-compose down`. Data volumes will persist, so it's possible to start the cluster again with the same data using `docker-compose up`.
-To destroy the cluster **and the data volumes** just type `docker-compose down -v`.
+To destroy the cluster **and the data volumes**, just type `docker-compose down -v`.
 
 ===== Inspect status of cluster:
 
-["source","sh"]
+["source","txt"]
 --------------------------------------------
-curl -u elastic http://127.0.0.1:9200/_cat/health
-Enter host password for user 'elastic':
+curl http://127.0.0.1:9200/_cat/health
 1472225929 15:38:49 docker-cluster green 2 2 4 2 0 0 0 0 - 100.0%
 --------------------------------------------
 // NOTCONSOLE
-// This is demonstrating curl. Console will prompt you for a username and
-// password so no need to demonstrate that. Converting this would not show the
-// important `-u elastic` parameters for `curl`.
 
 Log messages go to the console and are handled by the configured Docker logging driver. By default you can access logs with `docker logs`.
 

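Since the example now names the first service `elasticsearch`, per-node logs can be followed with either plain Docker or Compose (container and service names assume the compose file above):

["source","sh"]
--------------------------------------------
docker logs -f elasticsearch
docker-compose logs elasticsearch2
--------------------------------------------
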
@@ -225,7 +246,7 @@ For example, bind-mounting a `custom_elasticsearch.yml` with `docker run` can be
 --------------------------------------------
 -v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
 --------------------------------------------
-IMPORTANT: The container **runs Elasticsearch as user `elasticsearch` using uid:gid `1000:1000`**. Bind mounted host directories and files, such as `custom_elasticsearch.yml` above, **need to be accessible by this user**. For the https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#path-settings[data and log dirs], such as `/usr/share/elasticsearch/data`, write access is required as well.
+IMPORTANT: The container **runs Elasticsearch as user `elasticsearch` using uid:gid `1000:1000`**. Bind mounted host directories and files, such as `custom_elasticsearch.yml` above, **need to be accessible by this user**. For the https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#path-settings[data and log dirs], such as `/usr/share/elasticsearch/data`, write access is required as well. Also see note 1 below.
 
 ===== C. Customized image
 In some environments, it may make more sense to prepare a custom image containing your configuration. A `Dockerfile` to achieve this may be as simple as:

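A minimal end-to-end sketch of the bind-mount approach, assuming the config file lives in the current directory and that world-readable permissions are acceptable (the image tag is illustrative):

["source","sh"]
--------------------------------------------
# Make sure uid/gid 1000 inside the container can read the file, then mount it.
chmod 0644 custom_elasticsearch.yml
docker run -d -p 9200:9200 \
  -v "$PWD/custom_elasticsearch.yml":/usr/share/elasticsearch/config/elasticsearch.yml \
  docker.elastic.co/elasticsearch/elasticsearch:6.0.0-rc2
--------------------------------------------
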
@@ -233,10 +254,7 @@ In some environments, it may make more sense to prepare a custom image containin
 ["source","sh",subs="attributes"]
 --------------------------------------------
 FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
-ADD elasticsearch.yml /usr/share/elasticsearch/config/
-USER root
-RUN chown elasticsearch:elasticsearch config/elasticsearch.yml
-USER elasticsearch
+COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
 --------------------------------------------
 
 You could then build and try the image with something like:

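One way to check that the single `COPY --chown` line produced the expected ownership; note that `--chown` requires a reasonably recent Docker release, and the image tag below is illustrative:

["source","sh"]
--------------------------------------------
docker build --tag=elasticsearch-custom .
docker run --rm elasticsearch-custom \
  ls -l /usr/share/elasticsearch/config/elasticsearch.yml
--------------------------------------------
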
@@ -260,10 +278,19 @@ docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclusternam
 ==== Notes for production use and defaults
 
 We have collected a number of best practices for production use.
+Any Docker parameters mentioned below assume the use of `docker run`.
 
-NOTE: Any Docker parameters mentioned below assume the use of `docker run`.
-
-. Elasticsearch runs inside the container as user `elasticsearch` using uid:gid `1000:1000`. If you are bind-mounting a local directory or file, ensure it is readable by this user, while the <<path-settings,data and log dirs>> additionally require write access.
+. By default, Elasticsearch runs inside the container as user `elasticsearch` using uid:gid `1000:1000`.
++
+CAUTION: One exception is https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.html#openshift-specific-guidelines[Openshift] which runs containers using an arbitrarily assigned user ID. Openshift will present persistent volumes with the gid set to `0`, which will work without any adjustments.
++
+If you are bind-mounting a local directory or file, ensure it is readable by this user, while the <<path-settings,data and log dirs>> additionally require write access. A good strategy is to grant group access to gid `1000` or `0` for the local directory. As an example, to prepare a local directory for storing data through a bind-mount:
++
+  mkdir esdatadir
+  chmod g+rwx esdatadir
+  chgrp 1000 esdatadir
++
+As a last resort, you can also force the container to mutate the ownership of any bind-mounts used for the <<path-settings,data and log dirs>> through the environment variable `TAKE_FILE_OWNERSHIP`; in this case, they will be owned by uid:gid `1000:0`, providing read/write access to the Elasticsearch process as required.
 +
 . It is important to ensure increased ulimits for <<setting-system-settings,nofile>> and <<max-number-threads-check,nproc>> are available for the Elasticsearch containers. Verify the https://github.com/moby/moby/tree/ea4d1243953e6b652082305a9c3cda8656edab26/contrib/init[init system] for the Docker daemon is already setting those to acceptable values and, if needed, adjust them in the Daemon, or override them per container, for example using `docker run`:
 +

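Putting the data-directory preparation and the last-resort `TAKE_FILE_OWNERSHIP` option together, a run might look like this sketch. The tag and directory name are illustrative, and setting the variable to any non-empty value is assumed to be sufficient to trigger the ownership change:

["source","sh"]
--------------------------------------------
mkdir esdatadir
chmod g+rwx esdatadir
chgrp 1000 esdatadir

# Or, as the last resort described above, let the container take ownership itself.
docker run -d \
  -e TAKE_FILE_OWNERSHIP=true \
  -v "$PWD/esdatadir":/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:6.0.0-rc2
--------------------------------------------
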
@@ -273,13 +300,22 @@ NOTE: One way of checking the Docker daemon defaults for the aforementioned ulim
 +
 docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
 +
-. Swapping needs to be disabled for performance and node stability. This can be achieved through any of the methods mentioned in the <<setup-configuration-memory,Elasticsearch docs>>. If you opt for the `bootstrap.memory_lock: true` approach, apart from defining it through any of the <<docker-configuration-methods,configuration methods>>, you will additionally need the `memlock: true` ulimit, either defined in the https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker Daemon] or specifically set for the container. This has been demonstrated earlier in the <<docker-prod-cluster-composefile,docker-compose.yml>>, or using `docker run`:
+. Swapping needs to be disabled for performance and node stability. This can be
+achieved through any of the methods mentioned in the
+<<setup-configuration-memory,Elasticsearch docs>>. If you opt for the
+`bootstrap.memory_lock: true` approach, apart from defining it through any of
+the <<docker-configuration-methods,configuration methods>>, you will
+additionally need the `memlock: true` ulimit, either defined in the
+https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker
+Daemon] or specifically set for the container. This is demonstrated above in the
+<<docker-prod-cluster-composefile,docker-compose.yml>>. If using `docker run`:
 +
 -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
 +
 . The image https://docs.docker.com/engine/reference/builder/#/expose[exposes] TCP ports 9200 and 9300. For clusters it is recommended to randomize the published ports with `--publish-all`, unless you are pinning one container per host.
 +
-. Use the `ES_JAVA_OPTS` environment variable to set heap size, e.g. to use 16GB use `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`. It is also recommended to set a https://docs.docker.com/engine/reference/run/#user-memory-constraints[memory limit] for the container.
+. Use the `ES_JAVA_OPTS` environment variable to set heap size, e.g. to use 16GB
+use `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`.
 +
 . Pin your deployments to a specific version of the Elasticsearch Docker image, e.g. +docker.elastic.co/elasticsearch/elasticsearch:{version}+.
 +

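For reference, several of the `docker run` flags discussed in these notes combined into one invocation (heap size, ulimit values, and the image tag are illustrative):

["source","sh"]
--------------------------------------------
docker run -d --name elasticsearch \
  --ulimit nofile=65536:65536 --ulimit memlock=-1:-1 \
  -e "bootstrap.memory_lock=true" \
  -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  -p 9200:9200 -p 9300:9300 \
  docker.elastic.co/elasticsearch/elasticsearch:6.0.0-rc2
--------------------------------------------
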
@@ -289,7 +325,10 @@ NOTE: One way of checking the Docker daemon defaults for the aforementioned ulim
 .. Elasticsearch is I/O sensitive and the Docker storage driver is not ideal for fast I/O
 .. It allows the use of advanced https://docs.docker.com/engine/extend/plugins/#volume-plugins[Docker volume plugins]
 +
-. If you are using the devicemapper storage driver (default on at least RedHat (rpm) based distributions) make sure you are not using the default `loop-lvm` mode. Configure docker-engine to use https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm] instead.
+. If you are using the devicemapper storage driver, make sure you are not using
+the default `loop-lvm` mode. Configure docker-engine to use
+https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm]
+instead.
 +
 . Consider centralizing your logs by using a different https://docs.docker.com/engine/admin/logging/overview/[logging driver]. Also note that the default json-file logging driver is not ideally suited for production use.
 

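To see which storage and logging drivers the local daemon is actually using before applying the advice above, the daemon can simply be asked (standard Docker CLI, nothing specific to this diff):

["source","sh"]
--------------------------------------------
docker info | grep -E 'Storage Driver|Logging Driver'
--------------------------------------------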