[DOCS] Clarify expected availability of HDFS for the HDFS Repository (#25220)

If a cluster is configured with an HDFS repository and a node is started, that node must be able
to reach HDFS; otherwise, when the node attempts to add the repository from the cluster state at
startup, it fails to connect and the repository is left in an inconsistent state. This adds a blurb
to the docs outlining the expected availability of HDFS when using the repository plugin.
This commit is contained in:
James Baiera 2017-06-16 09:47:44 -04:00 committed by GitHub
parent 39d9c8aa67
commit 9c65073852
1 changed file with 9 additions and 0 deletions


@ -76,6 +76,15 @@ The following settings are supported:
the pattern with the hostname of the node at runtime (see
link:repository-hdfs-security-runtime[Creating the Secure Repository]).
[[repository-hdfs-availability]]
[float]
===== A Note on HDFS Availability
When you initialize a repository, its settings are persisted in the cluster state. When a node comes online, it will
attempt to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, then
all nodes in the cluster must be able to reach HDFS when starting. If they cannot, the node will fail to initialize the
repository at startup and the repository will be unusable. If this happens, you will need to remove and re-add the
repository (as sketched below) or restart the offending node.
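As a minimal sketch of the remove-and-re-add step, assuming a repository registered as `my_hdfs_repository` with placeholder `uri` and `path` settings (substitute your own repository name and settings):

[source,js]
----
DELETE _snapshot/my_hdfs_repository

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository"
  }
}
----

Removing a repository only unregisters it from the cluster; the snapshot data already stored in HDFS is left in place and becomes accessible again once the repository is re-added with the same settings.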
[[repository-hdfs-security]]
==== Hadoop Security