HDFS-5297. Merging r1561849 from trunk to branch-2.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2@1561854 13f79535-47bb-0310-9956-ffa450edef68
Arpit Agarwal 2014-01-27 21:19:16 +00:00
parent 63d3c13f4d
commit d1d74049aa
12 changed files with 58 additions and 62 deletions

View File

@@ -862,6 +862,9 @@ Release 2.3.0 - UNRELEASED
 HDFS-5343. When cat command is issued on snapshot files getting unexpected result.
 (Sathish via umamahesh)
 
+HDFS-5297. Fix dead links in HDFS site documents. (Akira Ajisaka via
+Arpit Agarwal)
+
 Release 2.2.0 - 2013-10-13
 
 INCOMPATIBLE CHANGES

View File

@@ -19,8 +19,6 @@
 HDFS Federation
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 This guide provides an overview of the HDFS Federation feature and

View File

@@ -20,8 +20,6 @@
 Offline Edits Viewer Guide
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * Overview

View File

@@ -18,8 +18,6 @@
 Offline Image Viewer Guide
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * Overview

@@ -64,9 +62,9 @@ Offline Image Viewer Guide
 but no data recorded. The default record delimiter is a tab, but
 this may be changed via the -delimiter command line argument. This
 processor is designed to create output that is easily analyzed by
-other tools, such as [36]Apache Pig. See the [37]Analyzing Results
-section for further information on using this processor to analyze
-the contents of fsimage files.
+other tools, such as {{{http://pig.apache.org}Apache Pig}}. See
+the {{Analyzing Results}} section for further information on using
+this processor to analyze the contents of fsimage files.
 
 [[4]] XML creates an XML document of the fsimage and includes all of the
 information within the fsimage, similar to the lsr processor. The
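
For illustration, a minimal Delimited-processor session of the kind described
above (the oiv flag spellings follow this guide; the Pig step is an assumed
usage, not part of the patch):

    bin/hdfs oiv -p Delimited -delimiter ',' -i fsimage -o fsimage.csv
    # analyze the comma-delimited output in Apache Pig, e.g.:
    #   grunt> records = LOAD 'fsimage.csv' USING PigStorage(',');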

View File

@@ -18,8 +18,6 @@
 HDFS Permissions Guide
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * Overview

@@ -55,8 +53,10 @@ HDFS Permissions Guide
 
 * If the user name matches the owner of foo, then the owner
 permissions are tested;
+
 * Else if the group of foo matches any of member of the groups list,
 then the group permissions are tested;
+
 * Otherwise the other permissions of foo are tested.
 
 If a permissions check fails, the client operation fails.
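
A worked example of that check order (file, users, and groups are hypothetical):

    bin/hadoop fs -chown alice:analysts /user/alice/foo
    bin/hadoop fs -chmod 640 /user/alice/foo
    # alice (owner)         -> owner bits rw-: read and write succeed
    # bob (group analysts)  -> group bits r--: read succeeds, write fails
    # carol (neither)       -> other bits ---: all access fails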

View File

@@ -18,8 +18,6 @@
 HDFS Quotas Guide
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * Overview

View File

@@ -108,9 +108,11 @@ HDFS Users Guide
 The following documents describe how to install and set up a Hadoop
 cluster:
 
-* {{Single Node Setup}} for first-time users.
+* {{{../hadoop-common/SingleCluster.html}Single Node Setup}}
+  for first-time users.
 
-* {{Cluster Setup}} for large, distributed clusters.
+* {{{../hadoop-common/ClusterSetup.html}Cluster Setup}}
+  for large, distributed clusters.
 
 The rest of this document assumes the user is able to set up and run a
 HDFS with at least one DataNode. For the purpose of this document, both
@@ -136,7 +138,8 @@ HDFS Users Guide
 for a command. These commands support most of the normal files system
 operations like copying files, changing file permissions, etc. It also
 supports a few HDFS specific operations like changing replication of
-files. For more information see {{{File System Shell Guide}}}.
+files. For more information see {{{../hadoop-common/FileSystemShell.html}
+File System Shell Guide}}.
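
A hedged illustration of the shell operations that paragraph mentions (paths
and values are examples, not part of the patch; see bin/hdfs dfs -help for the
authoritative list):

    bin/hdfs dfs -put localfile /user/alice/          # copy a local file into HDFS
    bin/hdfs dfs -chmod 644 /user/alice/localfile     # change file permissions
    bin/hdfs dfs -setrep -w 3 /user/alice/localfile   # HDFS-specific: change replication
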
 ** DFSAdmin Command
@@ -169,7 +172,7 @@ HDFS Users Guide
 of racks and datanodes attached to the tracks as viewed by the
 NameNode.
 
-For command usage, see {{{dfsadmin}}}.
+For command usage, see {{{../hadoop-common/CommandsManual.html#dfsadmin}dfsadmin}}.
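
For illustration, a few dfsadmin invocations of the kind this section describes
(options are standard dfsadmin usage, not taken from the patch):

    bin/hdfs dfsadmin -report           # basic statistics and cluster health
    bin/hdfs dfsadmin -safemode enter   # put the NameNode in safe mode
    bin/hdfs dfsadmin -printTopology    # racks and the datanodes attached to them
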
 * Secondary NameNode
@@ -203,7 +206,8 @@ HDFS Users Guide
 So that the check pointed image is always ready to be read by the
 primary NameNode if necessary.
 
-For command usage, see {{{secondarynamenode}}}.
+For command usage,
+see {{{../hadoop-common/CommandsManual.html#secondarynamenode}secondarynamenode}}.
 
 * Checkpoint Node
@@ -245,7 +249,7 @@ HDFS Users Guide
 Multiple checkpoint nodes may be specified in the cluster configuration
 file.
 
-For command usage, see {{{namenode}}}.
+For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
 
 * Backup Node
@@ -287,7 +291,7 @@ HDFS Users Guide
 For a complete discussion of the motivation behind the creation of the
 Backup node and Checkpoint node, see {{{https://issues.apache.org/jira/browse/HADOOP-4539}HADOOP-4539}}.
 
-For command usage, see {{{namenode}}}.
+For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
 
 * Import Checkpoint
@@ -310,7 +314,7 @@ HDFS Users Guide
 verifies that the image in <<<dfs.namenode.checkpoint.dir>>> is consistent,
 but does not modify it in any way.
 
-For command usage, see {{{namenode}}}.
+For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
 
 * Rebalancer
@@ -337,7 +341,7 @@ HDFS Users Guide
 A brief administrator's guide for rebalancer as a PDF is attached to
 {{{https://issues.apache.org/jira/browse/HADOOP-1652}HADOOP-1652}}.
 
-For command usage, see {{{balancer}}}.
+For command usage, see {{{../hadoop-common/CommandsManual.html#balancer}balancer}}.
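
An illustrative balancer run (the threshold value is an example only):

    bin/hdfs balancer -threshold 10   # iterate until each DataNode is within
                                      # 10% of average cluster utilization
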
 * Rack Awareness
@@ -379,8 +383,9 @@ HDFS Users Guide
 most of the recoverable failures. By default fsck ignores open files
 but provides an option to select all files during reporting. The HDFS
 fsck command is not a Hadoop shell command. It can be run as
-<<<bin/hadoop fsck>>>. For command usage, see {{{fsck}}}. fsck can be run on the
-whole file system or on a subset of files.
+<<<bin/hadoop fsck>>>. For command usage, see
+{{{../hadoop-common/CommandsManual.html#fsck}fsck}}. fsck can be run on
+the whole file system or on a subset of files.
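
Illustrative fsck runs matching the text above (flag spellings assumed from
standard fsck usage, not from the patch):

    bin/hadoop fsck / -files -blocks            # check the whole file system
    bin/hadoop fsck /user/alice -openforwrite   # a subset only, including open files
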
 * fetchdt
@@ -393,7 +398,8 @@ HDFS Users Guide
 command. It can be run as <<<bin/hadoop fetchdt DTfile>>>. After you got
 the token you can run an HDFS command without having Kerberos tickets,
 by pointing <<<HADOOP_TOKEN_FILE_LOCATION>>> environmental variable to the
-delegation token file. For command usage, see {{{fetchdt}}} command.
+delegation token file. For command usage, see
+{{{../hadoop-common/CommandsManual.html#fetchdt}fetchdt}} command.
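
A sketch of the fetchdt flow described above (the file name is hypothetical):

    bin/hadoop fetchdt DTfile                  # fetch a delegation token into DTfile
    export HADOOP_TOKEN_FILE_LOCATION=DTfile   # point HDFS commands at the token
    bin/hdfs dfs -ls /                         # now runs without a Kerberos ticket
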
 * Recovery Mode
@@ -427,10 +433,11 @@ HDFS Users Guide
 let alone to restart HDFS from scratch. HDFS allows administrators to
 go back to earlier version of Hadoop and rollback the cluster to the
 state it was in before the upgrade. HDFS upgrade is described in more
-detail in {{{Hadoop Upgrade}}} Wiki page. HDFS can have one such backup at a
-time. Before upgrading, administrators need to remove existing backup
-using bin/hadoop dfsadmin <<<-finalizeUpgrade>>> command. The following
-briefly describes the typical upgrade procedure:
+detail in {{{http://wiki.apache.org/hadoop/Hadoop_Upgrade}Hadoop Upgrade}}
+Wiki page. HDFS can have one such backup at a time. Before upgrading,
+administrators need to remove existing backup using bin/hadoop dfsadmin
+<<<-finalizeUpgrade>>> command. The following briefly describes the
+typical upgrade procedure:
 
 * Before upgrading Hadoop software, finalize if there an existing
 backup. <<<dfsadmin -upgradeProgress>>> status can tell if the cluster
@@ -450,7 +457,7 @@ HDFS Users Guide
 * stop the cluster and distribute earlier version of Hadoop.
 
-* start the cluster with rollback option. (<<<bin/start-dfs.h -rollback>>>).
+* start the cluster with rollback option. (<<<bin/start-dfs.sh -rollback>>>).
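
The upgrade and rollback steps above, gathered into one hedged sketch
(<<<-upgrade>>> as the start-dfs.sh counterpart of <<<-rollback>>> is an
assumption, not part of this patch):

    bin/hadoop dfsadmin -upgradeProgress status   # is an old backup still pending?
    bin/hadoop dfsadmin -finalizeUpgrade          # remove the existing backup
    # stop the cluster and distribute the new Hadoop version, then:
    bin/start-dfs.sh -upgrade
    # to roll back instead: stop, restore the earlier version, then:
    bin/start-dfs.sh -rollback
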
 * File Permissions and Security
@@ -465,14 +472,15 @@ HDFS Users Guide
 * Scalability
 
 Hadoop currently runs on clusters with thousands of nodes. The
-{{{PoweredBy}}} Wiki page lists some of the organizations that deploy Hadoop
-on large clusters. HDFS has one NameNode for each cluster. Currently
-the total memory available on NameNode is the primary scalability
-limitation. On very large clusters, increasing average size of files
-stored in HDFS helps with increasing cluster size without increasing
-memory requirements on NameNode. The default configuration may not
-suite very large clustes. The {{{FAQ}}} Wiki page lists suggested
-configuration improvements for large Hadoop clusters.
+{{{http://wiki.apache.org/hadoop/PoweredBy}PoweredBy}} Wiki page lists
+some of the organizations that deploy Hadoop on large clusters.
+HDFS has one NameNode for each cluster. Currently the total memory
+available on NameNode is the primary scalability limitation.
+On very large clusters, increasing average size of files stored in
+HDFS helps with increasing cluster size without increasing memory
+requirements on NameNode. The default configuration may not suite
+very large clusters. The {{{http://wiki.apache.org/hadoop/FAQ}FAQ}}
+Wiki page lists suggested configuration improvements for large Hadoop clusters.
 
 * Related Documentation
@@ -481,19 +489,22 @@ HDFS Users Guide
 documentation about Hadoop and HDFS. The following list is a starting
 point for further exploration:
 
-* {{{Hadoop Site}}}: The home page for the Apache Hadoop site.
+* {{{http://hadoop.apache.org}Hadoop Site}}: The home page for
+  the Apache Hadoop site.
 
-* {{{Hadoop Wiki}}}: The home page (FrontPage) for the Hadoop Wiki. Unlike
+* {{{http://wiki.apache.org/hadoop/FrontPage}Hadoop Wiki}}:
+  The home page (FrontPage) for the Hadoop Wiki. Unlike
 the released documentation, which is part of Hadoop source tree,
 Hadoop Wiki is regularly edited by Hadoop Community.
 
-* {{{FAQ}}}: The FAQ Wiki page.
+* {{{http://wiki.apache.org/hadoop/FAQ}FAQ}}: The FAQ Wiki page.
 
-* {{{Hadoop JavaDoc API}}}.
+* {{{../../api/index.html}Hadoop JavaDoc API}}.
 
-* {{{Hadoop User Mailing List}}}: core-user[at]hadoop.apache.org.
+* Hadoop User Mailing List: user[at]hadoop.apache.org.
 
-* Explore {{{src/hdfs/hdfs-default.xml}}}. It includes brief description of
-most of the configuration variables available.
+* Explore {{{./hdfs-default.xml}hdfs-default.xml}}. It includes
+  brief description of most of the configuration variables available.
 
-* {{{Hadoop Commands Guide}}}: Hadoop commands usage.
+* {{{../hadoop-common/CommandsManual.html}Hadoop Commands Guide}}:
+  Hadoop commands usage.

View File

@@ -18,8 +18,6 @@
 HFTP Guide
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * Introduction

View File

@@ -19,8 +19,6 @@
 HDFS Short-Circuit Local Reads
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * {Background}

View File

@@ -18,8 +18,6 @@
 WebHDFS REST API
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * {Document Conventions}
@@ -54,7 +52,7 @@ WebHDFS REST API
 * {{{Status of a File/Directory}<<<GETFILESTATUS>>>}}
 (see {{{../../api/org/apache/hadoop/fs/FileSystem.html}FileSystem}}.getFileStatus)
 
-* {{<<<LISTSTATUS>>>}}
+* {{{List a Directory}<<<LISTSTATUS>>>}}
 (see {{{../../api/org/apache/hadoop/fs/FileSystem.html}FileSystem}}.listStatus)
 
 * {{{Get Content Summary of a Directory}<<<GETCONTENTSUMMARY>>>}}
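
For reference, the two operations above as WebHDFS requests (host, port, and
path are placeholders):

    curl -i "http://<HOST>:<PORT>/webhdfs/v1/user/alice?op=GETFILESTATUS"
    curl -i "http://<HOST>:<PORT>/webhdfs/v1/user/alice?op=LISTSTATUS"
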
@@ -109,7 +107,7 @@ WebHDFS REST API
 * {{{Append to a File}<<<APPEND>>>}}
 (see {{{../../api/org/apache/hadoop/fs/FileSystem.html}FileSystem}}.append)
 
-* {{{Concatenate Files}<<<CONCAT>>>}}
+* {{{Concat File(s)}<<<CONCAT>>>}}
 (see {{{../../api/org/apache/hadoop/fs/FileSystem.html}FileSystem}}.concat)
 
 * HTTP DELETE
@@ -871,7 +869,7 @@ Content-Length: 0
 * {Error Responses}
 
 When an operation fails, the server may throw an exception.
-The JSON schema of error responses is defined in {{<<<RemoteException>>> JSON schema}}.
+The JSON schema of error responses is defined in {{{RemoteException JSON Schema}}}.
 The table below shows the mapping from exceptions to HTTP response codes.
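
An illustrative failure showing the RemoteException shape that schema defines
(the response values here are examples):

    curl -i "http://<HOST>:<PORT>/webhdfs/v1/no/such/file?op=GETFILESTATUS"
    # HTTP/1.1 404 Not Found
    # {"RemoteException":{"exception":"FileNotFoundException",
    #    "javaClass":"java.io.FileNotFoundException",
    #    "message":"File does not exist: /no/such/file"}}
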
 ** {HTTP Response Codes}
@@ -1119,7 +1117,7 @@ Transfer-Encoding: chunked
 See also:
 {{{FileStatus Properties}<<<FileStatus>>> Properties}},
 {{{Status of a File/Directory}<<<GETFILESTATUS>>>}},
-{{{../../api/org/apache/hadoop/fs/FileStatus}FileStatus}}
+{{{../../api/org/apache/hadoop/fs/FileStatus.html}FileStatus}}
 
 *** {FileStatus Properties}
@@ -1232,7 +1230,7 @@ var fileStatusProperties =
 See also:
 {{{FileStatus Properties}<<<FileStatus>>> Properties}},
 {{{List a Directory}<<<LISTSTATUS>>>}},
-{{{../../api/org/apache/hadoop/fs/FileStatus}FileStatus}}
+{{{../../api/org/apache/hadoop/fs/FileStatus.html}FileStatus}}
 
 ** {Long JSON Schema}
@@ -1275,7 +1273,7 @@ var fileStatusProperties =
 See also:
 {{{Get Home Directory}<<<GETHOMEDIRECTORY>>>}},
-{{{../../api/org/apache/hadoop/fs/Path}Path}}
+{{{../../api/org/apache/hadoop/fs/Path.html}Path}}
 
 ** {RemoteException JSON Schema}

View File

@@ -18,8 +18,6 @@
 HDFS High Availability
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * {Purpose}

View File

@@ -18,8 +18,6 @@
 HDFS High Availability Using the Quorum Journal Manager
 
-\[ {{{./index.html}Go Back}} \]
-
 %{toc|section=1|fromDepth=0}
 
 * {Purpose}