HDFS-10855. Fix typos in HDFS documents. Contributed by Yiqun Lin.

Xiao Chen 2016-09-10 23:23:34 -07:00
parent a99bf26a08
commit cc01ed7026
6 changed files with 9 additions and 9 deletions


@@ -163,7 +163,7 @@
<name>dfs.namenode.http-bind-host</name>
<value></value>
<description>
-The actual adress the HTTP server will bind to. If this optional address
+The actual address the HTTP server will bind to. If this optional address
is set, it overrides only the hostname portion of dfs.namenode.http-address.
It can also be specified per name node or name service for HA/Federation.
This is useful for making the name node HTTP server listen on all
@@ -243,7 +243,7 @@
<name>dfs.namenode.https-bind-host</name>
<value></value>
<description>
-The actual adress the HTTPS server will bind to. If this optional address
+The actual address the HTTPS server will bind to. If this optional address
is set, it overrides only the hostname portion of dfs.namenode.https-address.
It can also be specified per name node or name service for HA/Federation.
This is useful for making the name node HTTPS server listen on all
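
The per-NameNode form mentioned in both descriptions appends the nameservice and NameNode IDs to the key, following Hadoop's usual suffixed-key pattern. A minimal sketch for hdfs-site.xml; the nameservice ID mycluster and NameNode ID nn1 are illustrative values, not from this commit:

<property>
  <name>dfs.namenode.http-bind-host.mycluster.nn1</name>
  <value>0.0.0.0</value>
  <!-- Binds nn1's HTTP server to the wildcard address; only the hostname
       portion of dfs.namenode.http-address.mycluster.nn1 is overridden,
       the port is kept. -->
</property>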


@@ -89,7 +89,7 @@ Note 2: For the erasure coded files with striping layout, the suitable storage p
When a file or directory is created, its storage policy is *unspecified*. The storage policy can be specified using the "[`storagepolicies -setStoragePolicy`](#Set_Storage_Policy)" command. The effective storage policy of a file or directory is resolved by the following rules.
-1. If the file or directory is specificed with a storage policy, return it.
+1. If the file or directory is specified with a storage policy, return it.
2. For an unspecified file or directory, if it is the root directory, return the *default storage policy*. Otherwise, return its parent's effective storage policy.
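
The rules above pair with the command referenced earlier. A short illustration; the path /data/archive and the built-in COLD policy are example values:

hdfs storagepolicies -setStoragePolicy -path /data/archive -policy COLD
hdfs storagepolicies -getStoragePolicy -path /data/archive

A file created under /data/archive with no policy of its own is unspecified by rule 1, so rule 2 resolves it to COLD through its parent.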


@@ -87,7 +87,7 @@ In order to deploy an HA cluster, you should prepare the following:
* **NameNode machines** - the machines on which you run the Active and Standby NameNodes should have equivalent hardware to each other, and equivalent hardware to what would be used in a non-HA cluster.
-* **Shared storage** - you will need to have a shared directory which the NameNode machines have read/write access to. Typically this is a remote filer which supports NFS and is mounted on each of the NameNode machines. Currently only a single shared edits directory is supported. Thus, the availability of the system is limited by the availability of this shared edits directory, and therefore in order to remove all single points of failure there needs to be redundancy for the shared edits directory. Specifically, multiple network paths to the storage, and redundancy in the storage itself (disk, network, and power). Beacuse of this, it is recommended that the shared storage server be a high-quality dedicated NAS appliance rather than a simple Linux server.
+* **Shared storage** - you will need to have a shared directory which the NameNode machines have read/write access to. Typically this is a remote filer which supports NFS and is mounted on each of the NameNode machines. Currently only a single shared edits directory is supported. Thus, the availability of the system is limited by the availability of this shared edits directory, and therefore in order to remove all single points of failure there needs to be redundancy for the shared edits directory. Specifically, multiple network paths to the storage, and redundancy in the storage itself (disk, network, and power). Because of this, it is recommended that the shared storage server be a high-quality dedicated NAS appliance rather than a simple Linux server.
Note that, in an HA cluster, the Standby NameNodes also perform checkpoints of the namespace state, and thus it is not necessary to run a Secondary NameNode, CheckpointNode, or BackupNode in an HA cluster. In fact, to do so would be an error. This also allows one who is reconfiguring a non-HA-enabled HDFS cluster to be HA-enabled to reuse the hardware which they had previously dedicated to the Secondary NameNode.
@@ -137,7 +137,7 @@ The order in which you set these configurations is unimportant, but the values y
* **dfs.namenode.rpc-address.[nameservice ID].[name node ID]** - the fully-qualified RPC address for each NameNode to listen on
For both of the previously-configured NameNode IDs, set the full address and
-IPC port of the NameNode processs. Note that this results in two separate
+IPC port of the NameNode process. Note that this results in two separate
configuration options. For example:
<property>
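
For reference, a complete pair of entries follows the key format given above. A sketch assuming nameservice ID mycluster, NameNode IDs nn1 and nn2, and illustrative hostnames:

<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>machine1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>machine2.example.com:8020</value>
</property>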


@@ -86,7 +86,7 @@ The solution is to have separate setting for server endpoints to force binding t
<name>dfs.namenode.http-bind-host</name>
<value>0.0.0.0</value>
<description>
-The actual adress the HTTP server will bind to. If this optional address
+The actual address the HTTP server will bind to. If this optional address
is set, it overrides only the hostname portion of dfs.namenode.http-address.
It can also be specified per name node or name service for HA/Federation.
This is useful for making the name node HTTP server listen on all
@@ -98,7 +98,7 @@ The solution is to have separate setting for server endpoints to force binding t
<name>dfs.namenode.https-bind-host</name>
<value>0.0.0.0</value>
<description>
-The actual adress the HTTPS server will bind to. If this optional address
+The actual address the HTTPS server will bind to. If this optional address
is set, it overrides only the hostname portion of dfs.namenode.https-address.
It can also be specified per name node or name service for HA/Federation.
This is useful for making the name node HTTPS server listen on all


@@ -148,7 +148,7 @@ It's strongly recommended for the users to update a few configuration properties
characters. The machine name format can be a single host, a "*", a Java regular expression, or an IPv4 address. The access
privilege uses rw or ro to specify read/write or read-only access of the machines to exports. If the access privilege is not provided, the default is read-only. Entries are separated by ";".
For example: "192.168.0.0/22 rw ; \\\\w\*\\\\.example\\\\.com ; host1.test.org ro;". Only the NFS gateway needs to restart after
-this property is updated. Note that, here Java regular expression is differnt with the regrulation expression used in
+this property is updated. Note that, here Java regular expression is different with the regulation expression used in
Linux NFS export table, such as, using "\\\\w\*\\\\.example\\\\.com" instead of "\*.example.com", "192\\\\.168\\\\.0\\\\.(11|22)"
instead of "192.168.0.[11|22]" and so on.


@@ -143,7 +143,7 @@ Hence on Cluster X, where the `core-site.xml` is set to make the default fs to u
### Pathname Usage Best Practices
-When one is within a cluster, it is recommended to use the pathname of type (1) above instead of a fully qualified URI like (2). Futher, applications should not use the knowledge of the mount points and use a path like `hdfs://namenodeContainingUserDirs:port/joe/foo/bar` to refer to a file in a particular namenode. One should use `/user/joe/foo/bar` instead.
+When one is within a cluster, it is recommended to use the pathname of type (1) above instead of a fully qualified URI like (2). Further, applications should not use the knowledge of the mount points and use a path like `hdfs://namenodeContainingUserDirs:port/joe/foo/bar` to refer to a file in a particular namenode. One should use `/user/joe/foo/bar` instead.
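
As a quick illustration using the placeholder names from the paragraph above:

# Preferred: mount-point relative path, resolved through the ViewFs mount table
hdfs dfs -ls /user/joe/foo/bar
# Avoid: hard-wiring the namenode that currently holds the directory
hdfs dfs -ls hdfs://namenodeContainingUserDirs:port/joe/foo/bar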
### Renaming Pathnames Across Namespaces