Fix typos in HDFS documents. (#5665)

This commit is contained in:
liang3zy22 2023-05-18 00:28:01 +08:00 committed by GitHub
parent a90c722143
commit 482897a0f6
7 changed files with 7 additions and 7 deletions

View File

@@ -96,7 +96,7 @@ The effective storage policy can be retrieved by the "[`storagepolicies -getStor
* **dfs.datanode.data.dir** - on each data node, the comma-separated storage locations should be tagged with their storage types. This allows storage policies to place the blocks on different storage types according to policy. For example:
1. A datanode storage location /grid/dn/disk0 on DISK should be configured with `[DISK]file:///grid/dn/disk0`
-2. A datanode storage location /grid/dn/ssd0 on SSD can should configured with `[SSD]file:///grid/dn/ssd0`
+2. A datanode storage location /grid/dn/ssd0 on SSD should be configured with `[SSD]file:///grid/dn/ssd0`
3. A datanode storage location /grid/dn/archive0 on ARCHIVE should be configured with `[ARCHIVE]file:///grid/dn/archive0`
4. A datanode storage location /grid/dn/ram0 on RAM_DISK should be configured with `[RAM_DISK]file:///grid/dn/ram0`
5. A datanode storage location /grid/dn/nvdimm0 on NVDIMM should be configured with `[NVDIMM]file:///grid/dn/nvdimm0`
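For reference, a sketch of how these tagged locations might be combined in `hdfs-site.xml` (the mount points follow the list above; adjust them to your cluster):

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]file:///grid/dn/disk0,[SSD]file:///grid/dn/ssd0,[ARCHIVE]file:///grid/dn/archive0,[RAM_DISK]file:///grid/dn/ram0,[NVDIMM]file:///grid/dn/nvdimm0</value>
</property>
```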

View File

@@ -167,7 +167,7 @@ Perform the following steps:
* Add the new Namenode related config to the configuration file.
-* Propagate the configuration file to the all the nodes in the cluster.
+* Propagate the configuration file to all the nodes in the cluster.
* Start the new Namenode and Secondary/Backup.
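A minimal sketch of these steps, assuming Hadoop 3.x daemon commands and hypothetical hostnames:

```bash
# Push the updated config to every node (hostnames are placeholders).
for host in nn2.example.com dn1.example.com dn2.example.com; do
  scp "$HADOOP_CONF_DIR"/hdfs-site.xml "$host:$HADOOP_CONF_DIR/"
done
# Start the new Namenode and its Secondary on the new host.
ssh nn2.example.com 'hdfs --daemon start namenode'
ssh nn2.example.com 'hdfs --daemon start secondarynamenode'
```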

View File

@@ -205,7 +205,7 @@ The order in which you set these configurations is unimportant, but the values y
* **dfs.client.failover.proxy.provider.[nameservice ID]** - the Java class that HDFS clients use to contact the Active NameNode
-Configure the name of the Java class which will be used by the DFS Client to
+Configure the name of the Java class which will be used by the HDFS Client to
determine which NameNode is the current Active, and therefore which NameNode is
currently serving client requests. The two implementations which currently
ship with Hadoop are the **ConfiguredFailoverProxyProvider** and the
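A hedged `hdfs-site.xml` sketch of this setting (the nameservice ID `mycluster` is a placeholder):

```xml
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```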

View File

@@ -109,7 +109,7 @@ Quotas are managed by a set of commands available only to the administrator.
Reporting Command
-----------------
-An an extension to the count command of the HDFS shell reports quota values and the current count of names and bytes in use.
+An extension to the count command of the HDFS shell reports quota values and the current count of names and bytes in use.
* `hadoop fs -count -q [-h] [-v] [-t [comma-separated list of storagetypes]] <directory>...<directory>`
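An illustrative invocation (the directory and all figures are hypothetical; `-v` prints the header row and `-h` humanizes sizes):

```bash
hadoop fs -count -q -h -v /user/alice
#  QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
#  10000       9876          1 T            900 G         12         112        100 G   /user/alice
```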

View File

@@ -183,7 +183,7 @@ The snapshot path is returned in these methods.
#### Delete Snapshots
-Delete a snapshot of from a snapshottable directory.
+Delete a snapshot from a snapshottable directory.
This operation requires owner privilege of the snapshottable directory.
* Command:
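A usage sketch of the snapshot shell commands (`/data` and `snap1` are hypothetical; the directory must first be made snapshottable by an administrator):

```bash
hdfs dfsadmin -allowSnapshot /data      # administrator: enable snapshots
hdfs dfs -createSnapshot /data snap1    # owner: take a snapshot
hdfs dfs -deleteSnapshot /data snap1    # owner: delete it again
```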

View File

@@ -67,7 +67,7 @@ Wildcard entries in the `CLASSPATH` are now supported by libhdfs.
Thread Safe
-----------
-libdhfs is thread safe.
+libhdfs is thread safe.
* Concurrency and Hadoop FS "handles"
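A minimal C sketch of that guarantee, assuming a configured client and hypothetical paths: two threads share a single `hdfsFS` handle, which is safe because libhdfs is thread safe.

```c
#include <pthread.h>
#include <stdio.h>
#include "hdfs.h"            /* libhdfs C API */

static hdfsFS fs;            /* one handle shared by both threads */

static void *check_path(void *arg) {
    const char *path = arg;
    /* hdfsExists returns 0 when the path exists */
    printf("%s %s\n", path, hdfsExists(fs, path) == 0 ? "exists" : "is missing");
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    fs = hdfsConnect("default", 0);   /* fs.defaultFS from the loaded config */
    if (fs == NULL) return 1;
    pthread_create(&t1, NULL, check_path, "/tmp");
    pthread_create(&t2, NULL, check_path, "/user");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    hdfsDisconnect(fs);
    return 0;
}
```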

View File

@@ -53,7 +53,7 @@ Architecture
For transparent encryption, we introduce a new abstraction to HDFS: the *encryption zone*. An encryption zone is a special directory whose contents will be transparently encrypted upon write and transparently decrypted upon read. Each encryption zone is associated with a single *encryption zone key* which is specified when the zone is created. Each file within an encryption zone has its own unique *data encryption key (DEK)*. DEKs are never handled directly by HDFS. Instead, HDFS only ever handles an *encrypted data encryption key (EDEK)*. Clients decrypt an EDEK, and then use the subsequent DEK to read and write data. HDFS datanodes simply see a stream of encrypted bytes.
-A very important use case of encryption is to "switch it on" and ensure all files across the entire filesystem are encrypted. To support this strong guarantee without losing the flexibility of using different encryption zone keys in different parts of the filesystem, HDFS allows *nested encryption zones*. After an encryption zone is created (e.g. on the root directory `/`), a user can create more encryption zones on its descendant directories (e.g. `/home/alice`) with different keys. The EDEK of a file will generated using the encryption zone key from the closest ancestor encryption zone.
+A very important use case of encryption is to "switch it on" and ensure all files across the entire filesystem are encrypted. To support this strong guarantee without losing the flexibility of using different encryption zone keys in different parts of the filesystem, HDFS allows *nested encryption zones*. After an encryption zone is created (e.g. on the root directory `/`), a user can create more encryption zones on its descendant directories (e.g. `/home/alice`) with different keys. The EDEK of a file will be generated using the encryption zone key from the closest ancestor encryption zone.
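The nested-zone example above can be reproduced with the key and crypto shell commands; a hedged sketch (key names are placeholders, and each target directory must exist and be empty when its zone is created):

```bash
hadoop key create root_key                        # keys are stored in the KMS
hadoop key create alice_key
hdfs crypto -createZone -keyName root_key -path /
hdfs dfs -mkdir -p /home/alice                    # must be empty when the zone is created
hdfs crypto -createZone -keyName alice_key -path /home/alice
hdfs crypto -listZones                            # confirm both zones and their keys
```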
A new cluster service is required to manage encryption keys: the Hadoop Key Management Server (KMS). In the context of HDFS encryption, the KMS performs three basic responsibilities: