HADOOP-9277. Improve javadoc for FileContext. Contributed by Andrew Wang.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1443710 13f79535-47bb-0310-9956-ffa450edef68
commit c3d09010c7
parent 362da383fd
@@ -149,6 +149,8 @@ Trunk (Unreleased)
 HADOOP-8924. Add maven plugin alternative to shell script to save
 package-info.java. (Chris Nauroth via suresh)
 
+HADOOP-9277. Improve javadoc for FileContext. (Andrew Wang via suresh)
+
 BUG FIXES
 
 HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang)
@@ -57,70 +57,60 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ShutdownHookManager;
 
 /**
- * The FileContext class provides an interface to the application writer for
- * using the Hadoop file system.
- * It provides a set of methods for the usual operation: create, open,
- * list, etc
+ * The FileContext class provides an interface for users of the Hadoop
+ * file system. It exposes a number of file system operations, e.g. create,
+ * open, list.
  *
- * <p>
- * <b> *** Path Names *** </b>
- * <p>
+ * <h2>Path Names</h2>
  *
- * The Hadoop file system supports a URI name space and URI names.
- * It offers a forest of file systems that can be referenced using fully
- * qualified URIs.
- * Two common Hadoop file systems implementations are
+ * The Hadoop file system supports a URI namespace and URI names. This enables
+ * multiple types of file systems to be referenced using fully-qualified URIs.
+ * Two common Hadoop file system implementations are
  * <ul>
- * <li> the local file system: file:///path
- * <li> the hdfs file system hdfs://nnAddress:nnPort/path
+ * <li>the local file system: file:///path
+ * <li>the HDFS file system: hdfs://nnAddress:nnPort/path
  * </ul>
  *
- * While URI names are very flexible, it requires knowing the name or address
- * of the server. For convenience one often wants to access the default system
- * in one's environment without knowing its name/address. This has an
- * additional benefit that it allows one to change one's default fs
- * (e.g. admin moves application from cluster1 to cluster2).
+ * The Hadoop file system also supports additional naming schemes besides URIs.
+ * Hadoop has the concept of a <i>default file system</i>, which implies a
+ * default URI scheme and authority. This enables <i>slash-relative names</i>
+ * relative to the default FS, which are more convenient for users and
+ * application writers. The default FS is typically set by the user's
+ * environment, though it can also be manually specified.
  * <p>
  *
- * To facilitate this, Hadoop supports a notion of a default file system.
- * The user can set his default file system, although this is
- * typically set up for you in your environment via your default config.
- * A default file system implies a default scheme and authority; slash-relative
- * names (such as /for/bar) are resolved relative to that default FS.
- * Similarly a user can also have working-directory-relative names (i.e. names
- * not starting with a slash). While the working directory is generally in the
- * same default FS, the wd can be in a different FS.
+ * Hadoop also supports <i>working-directory-relative</i> names, which are paths
+ * relative to the current working directory (similar to Unix). The working
+ * directory can be in a different file system than the default FS.
  * <p>
- * Hence Hadoop path names can be one of:
+ * Thus, Hadoop path names can be specified as one of the following:
  * <ul>
- * <li> fully qualified URI: scheme://authority/path
- * <li> slash relative names: /path relative to the default file system
- * <li> wd-relative names: path relative to the working dir
- * </ul>
+ * <li>a fully-qualified URI: scheme://authority/path (e.g.
+ * hdfs://nnAddress:nnPort/foo/bar)
+ * <li>a slash-relative name: path relative to the default file system (e.g.
+ * /foo/bar)
+ * <li>a working-directory-relative name: path relative to the working dir (e.g.
+ * foo/bar)
+ * </ul>
  * Relative paths with scheme (scheme:foo/bar) are illegal.
  *
- * <p>
- * <b>****The Role of the FileContext and configuration defaults****</b>
- * <p>
- * The FileContext provides file namespace context for resolving file names;
- * it also contains the umask for permissions, In that sense it is like the
- * per-process file-related state in Unix system.
- * These two properties
- * <ul>
- * <li> default file system i.e your slash)
- * <li> umask
- * </ul>
- * in general, are obtained from the default configuration file
- * in your environment, (@see {@link Configuration}).
- *
- * No other configuration parameters are obtained from the default config as
- * far as the file context layer is concerned. All file system instances
- * (i.e. deployments of file systems) have default properties; we call these
- * server side (SS) defaults. Operation like create allow one to select many
- * properties: either pass them in as explicit parameters or use
- * the SS properties.
- * <p>
- * The file system related SS defaults are
+ * <h2>Role of FileContext and Configuration Defaults</h2>
+ *
+ * The FileContext is the analogue of per-process file-related state in Unix. It
+ * contains two properties:
+ *
+ * <ul>
+ * <li>the default file system (for resolving slash-relative names)
+ * <li>the umask (for file permissions)
+ * </ul>
+ * In general, these properties are obtained from the default configuration file
+ * in the user's environment (see {@link Configuration}).
+ *
+ * Further file system properties are specified on the server-side. File system
+ * operations default to using these server-side defaults unless otherwise
+ * specified.
+ * <p>
+ * The file system related server-side defaults are:
  * <ul>
  * <li> the home directory (default is "/user/userName")
  * <li> the initial wd (only for local fs)
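The three path-name forms in the rewritten javadoc follow ordinary URI resolution rules. Below is a minimal sketch using only `java.net.URI` (not the Hadoop API); the `nnAddress:8020` authority and the `/user/alice` working directory are illustrative assumptions, not values from this patch.

```java
import java.net.URI;

public class PathNameForms {
    public static void main(String[] args) {
        // Assumed default FS and working directory, for illustration only.
        URI defaultFs = URI.create("hdfs://nnAddress:8020/");
        URI workingDir = URI.create("hdfs://nnAddress:8020/user/alice/");

        // 1. A fully-qualified URI is used as-is.
        URI full = URI.create("hdfs://nnAddress:8020/foo/bar");

        // 2. A slash-relative name resolves against the default FS
        //    (the scheme and authority come from the default FS URI).
        URI slashRelative = defaultFs.resolve("/foo/bar");

        // 3. A working-directory-relative name resolves against the wd,
        //    which may live in a different file system than the default FS.
        URI wdRelative = workingDir.resolve("foo/bar");

        System.out.println(full);          // hdfs://nnAddress:8020/foo/bar
        System.out.println(slashRelative); // hdfs://nnAddress:8020/foo/bar
        System.out.println(wdRelative);    // hdfs://nnAddress:8020/user/alice/foo/bar
    }
}
```

A relative path with a scheme, such as `scheme:foo/bar`, fits none of these three forms, which is why the javadoc declares it illegal.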
@@ -131,34 +121,34 @@ import org.apache.hadoop.util.ShutdownHookManager;
  * <li> checksum option. (checksumType and bytesPerChecksum)
  * </ul>
  *
- * <p>
- * <b> *** Usage Model for the FileContext class *** </b>
- * <p>
+ * <h2>Example Usage</h2>
+ *
  * Example 1: use the default config read from the $HADOOP_CONFIG/core.xml.
  * Unspecified values come from core-defaults.xml in the release jar.
  * <ul>
  * <li> myFContext = FileContext.getFileContext(); // uses the default config
  * // which has your default FS
  * <li> myFContext.create(path, ...);
- * <li> myFContext.setWorkingDir(path)
+ * <li> myFContext.setWorkingDir(path);
  * <li> myFContext.open (path, ...);
+ * <li>...
  * </ul>
  * Example 2: Get a FileContext with a specific URI as the default FS
  * <ul>
- * <li> myFContext = FileContext.getFileContext(URI)
+ * <li> myFContext = FileContext.getFileContext(URI);
  * <li> myFContext.create(path, ...);
- * ...
+ * <li>...
  * </ul>
  * Example 3: FileContext with local file system as the default
  * <ul>
- * <li> myFContext = FileContext.getLocalFSFileContext()
+ * <li> myFContext = FileContext.getLocalFSFileContext();
  * <li> myFContext.create(path, ...);
  * <li> ...
  * </ul>
  * Example 4: Use a specific config, ignoring $HADOOP_CONFIG
  * Generally you should not need use a config unless you are doing
  * <ul>
- * <li> configX = someConfigSomeOnePassedToYou.
+ * <li> configX = someConfigSomeOnePassedToYou;
  * <li> myFContext = getFileContext(configX); // configX is not changed,
  * // is passed down
  * <li> myFContext.create(path, ...);