[[repository-hdfs]]
=== Hadoop HDFS Repository Plugin

The HDFS repository plugin adds support for using HDFS File System as a repository for
{ref}/modules-snapshots.html[Snapshot/Restore].

[[repository-hdfs-install]]
[float]
==== Installation

This plugin can be installed through the plugin manager:

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin install repository-hdfs
----------------------------------------------------------------

The plugin must be installed on _every_ node in the cluster, and each node must
be restarted after installation.

This plugin can be downloaded for <<plugin-management-custom-url,offline install>> from
{plugin_url}/repository-hdfs/{version}/repository-hdfs-{version}.zip.
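
A quick sanity check after restarting each node is to list the installed plugins and confirm `repository-hdfs` appears (a minimal sketch, assuming the standard plugin manager):

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin list
----------------------------------------------------------------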

[[repository-hdfs-remove]]
[float]
==== Removal

The plugin can be removed by specifying the _installed_ package:

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin remove repository-hdfs
----------------------------------------------------------------

The node must be stopped before removing the plugin.

[[repository-hdfs-usage]]
==== Getting started with HDFS

The HDFS snapshot/restore plugin is built against the latest Apache Hadoop 2.x (currently 2.7.1). If the distro you are using is not protocol-compatible with Apache Hadoop, consider replacing the Hadoop libraries inside the plugin folder with your own (you might have to adjust the security permissions required).

Even if Hadoop is already installed on the Elasticsearch nodes, for security reasons the required libraries need to be placed under the plugin folder. Note that in most cases, if the distro is compatible, one simply needs to configure the repository with the appropriate Hadoop configuration files (see below).
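
As a rough sketch of that library swap (the paths and jar names below are illustrative and depend on your install layout and distro):

[source,sh]
----------------------------------------------------------------
# Illustrative only: adjust paths for your installation and distro.
cd /usr/share/elasticsearch/plugins/repository-hdfs
ls hadoop-*.jar                          # inspect the bundled Hadoop 2.x client jars
rm hadoop-common-*.jar hadoop-hdfs-*.jar
cp /path/to/your/distro/hadoop-common-*.jar /path/to/your/distro/hadoop-hdfs-*.jar .
----------------------------------------------------------------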

Windows Users::
Using Apache Hadoop on Windows is problematic and thus it is not recommended. For those _really_ wanting to use it, make sure you place the elusive `winutils.exe` under the
plugin folder and point the `HADOOP_HOME` environment variable to it; this should minimize the amount of permissions Hadoop requires (though one would still have to add some more).

[[repository-hdfs-config]]
==== Configuration Properties

Once installed, define the configuration for the `hdfs` repository through the
{ref}/modules-snapshots.html[REST API]:

[source,js]
----
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "conf.dfs.client.read.shortcircuit": "true"
  }
}
----
// CONSOLE
// TEST[skip:we don't have hdfs set up while testing this]
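
Once registered, you can ask Elasticsearch to confirm that all nodes can reach the repository. The verify API is part of core snapshot/restore rather than this plugin, so this is just a usage sketch:

[source,js]
----
POST _snapshot/my_hdfs_repository/_verify
----
// CONSOLE
// TEST[skip:we don't have hdfs set up while testing this]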

The following settings are supported:

[horizontal]
`uri`::

    The uri address for hdfs. ex: "hdfs://<host>:<port>/". (Required)

`path`::

    The file path within the filesystem where data is stored/loaded. ex: "path/to/file". (Required)

`load_defaults`::

    Whether to load the default Hadoop configuration or not. (Enabled by default)

`conf.<key>`::

    Inlined configuration parameter to be added to Hadoop configuration. (Optional)
    Only client-oriented properties from the Hadoop http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml[core] and http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml[hdfs] configuration files will be recognized by the plugin.

`compress`::

    Whether to compress the metadata or not. (Disabled by default)

`chunk_size`::

    Override the chunk size. (Disabled by default)
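
For example, a repository that compresses metadata and caps the chunk size might be registered as follows (the values here are illustrative):

[source,js]
----
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "compress": "true",
    "chunk_size": "10mb"
  }
}
----
// CONSOLE
// TEST[skip:we don't have hdfs set up while testing this]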

Alternatively, you can define the `hdfs` repository and its settings in your `elasticsearch.yml`:

[source,yaml]
----
repositories:
  hdfs:
    uri: "hdfs://<host>:<port>/"    # required - HDFS address only
    path: "some/path"               # required - path within the file-system where data is stored/loaded
    load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
    conf.<key>: "<value>"           # optional - 'inlined' key=value added to the Hadoop configuration
    compress: "false"               # optional - whether to compress the metadata or not (default)
    chunk_size: "10mb"              # optional - chunk size (disabled by default)
----
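
Once the repository is registered (by either method), snapshot and restore work as with any other repository type, e.g.:

[source,js]
----
PUT _snapshot/my_hdfs_repository/my_snapshot?wait_for_completion=true
----
// CONSOLE
// TEST[skip:we don't have hdfs set up while testing this]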