MAPREDUCE-3275. Added documentation for AM WebApp Proxy. Contributed by Robert Evans.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1195579 13f79535-47bb-0310-9956-ffa450edef68
Arun Murthy 2011-10-31 17:33:29 +00:00
parent 9db078212f
commit b3f90c7060
2 changed files with 67 additions and 6 deletions

hadoop-mapreduce-project/CHANGES.txt

@@ -452,6 +452,9 @@ Release 0.23.0 - Unreleased
     MAPREDUCE-3146. Added a MR specific command line to dump logs for a
     given TaskAttemptID. (Siddharth Seth via vinodkv)
 
+    MAPREDUCE-3275. Added documentation for AM WebApp Proxy. (Robert Evans via
+    acmurthy)
+
   OPTIMIZATIONS
 
     MAPREDUCE-2026. Make JobTracker.getJobCounters() and

ClusterSetup.apt.vm

@@ -100,6 +100,8 @@ Hadoop MapReduce Next Generation - Cluster Setup
 | ResourceManager | YARN_RESOURCEMANAGER_OPTS |
 *--------------------------------------+--------------------------------------+
 | NodeManager | YARN_NODEMANAGER_OPTS |
+*--------------------------------------+--------------------------------------+
+| WebAppProxy | YARN_PROXYSERVER_OPTS |
 *--------------------------------------+--------------------------------------+
 
   For example, To configure Namenode to use parallelGC, the following
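
The new YARN_PROXYSERVER_OPTS hook works like the other *_OPTS variables in this table: exporting it in conf/yarn-env.sh passes extra JVM flags to the standalone proxy daemon. A minimal sketch, assuming the usual heap-size flag (the value is illustrative, not taken from this patch):

----
# conf/yarn-env.sh -- illustrative value, not part of this patch.
# Give the standalone WebAppProxy daemon a modest heap.
export YARN_PROXYSERVER_OPTS="-Xmx256m"
----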
@@ -450,7 +452,14 @@ Hadoop MapReduce Next Generation - Cluster Setup
   Run a script to start NodeManagers on all slaves:
 
 ----
-$ $YARN_HOME/bin/hdfs start nodemanager --config $HADOOP_CONF_DIR
+$ $YARN_HOME/bin/yarn start nodemanager --config $HADOOP_CONF_DIR
+----
+
+  Start a standalone WebAppProxy server. If multiple servers
+  are used with load balancing it should be run on each of them:
+
+----
+$ $YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
 ----
 
   Start the MapReduce JobHistory Server with the following command, run on the
@@ -485,9 +494,17 @@ Hadoop MapReduce Next Generation - Cluster Setup
   Run a script to stop NodeManagers on all slaves:
 
 ----
-$ $YARN_HOME/bin/hdfs stop nodemanager --config $HADOOP_CONF_DIR
+$ $YARN_HOME/bin/yarn stop nodemanager --config $HADOOP_CONF_DIR
 ----
 
+  Stop the WebAppProxy server. If multiple servers are used with load
+  balancing it should be run on each of them:
+
+----
+$ $YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
+----
+
   Stop the MapReduce JobHistory Server with the following command, run on the
   designated server:
@@ -502,7 +519,7 @@ Hadoop MapReduce Next Generation - Cluster Setup
   to run Hadoop in <<secure mode>> with strong, Kerberos-based
   authentication.
 
-  * <<<User Acccounts for Hadoop Daemons>>>
+  * <<<User Accounts for Hadoop Daemons>>>
 
   Ensure that HDFS and YARN daemons run as different Unix users, for e.g.
   <<<hdfs>>> and <<<yarn>>>. Also, ensure that the MapReduce JobHistory
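
Since the daemons must not share Unix accounts, the service users are typically created before keytabs are deployed. A minimal sketch, assuming a Linux host; the mapred account for the JobHistory server is an assumption, not named in this hunk:

----
# Illustrative only: create separate system accounts for the daemons.
$ sudo useradd -r hdfs    # HDFS daemons (NameNode, DataNode)
$ sudo useradd -r yarn    # YARN daemons (ResourceManager, NodeManager, proxy)
$ sudo useradd -r mapred  # MapReduce JobHistory server (assumed name)
----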
@@ -751,6 +768,31 @@ KVNO Timestamp Principal
 
 * <<<conf/yarn-site.xml>>>
 
+  * WebAppProxy
+
+  The <<<WebAppProxy>>> provides a proxy between the web applications
+  exported by an application and an end user. If security is enabled
+  it will warn users before accessing a potentially unsafe web application.
+  Authentication and authorization using the proxy is handled just like
+  any other privileged web application.
+
+*-------------------------+-------------------------+------------------------+
+|| Parameter || Value || Notes |
+*-------------------------+-------------------------+------------------------+
+| <<<yarn.web-proxy.address>>> | | |
+| | <<<WebAppProxy>>> host:port for proxy to AM web apps. | |
+| | | <host:port> if this is the same as <<<yarn.resourcemanager.webapp.address>>>|
+| | | or it is not defined then the <<<ResourceManager>>> will run the proxy|
+| | | otherwise a standalone proxy server will need to be launched.|
+*-------------------------+-------------------------+------------------------+
+| <<<yarn.web-proxy.keytab>>> | | |
+| | </etc/security/keytab/web-app.service.keytab> | |
+| | | Kerberos keytab file for the WebAppProxy. |
+*-------------------------+-------------------------+------------------------+
+| <<<yarn.web-proxy.principal>>> | wap/_HOST@REALM.TLD | |
+| | | Kerberos principal name for the WebAppProxy. |
+*-------------------------+-------------------------+------------------------+
+
 * LinuxContainerExecutor
 
   A <<<ContainerExecutor>>> used by YARN framework which define how any
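
Tying the new yarn-site.xml table together: a sketch of what the three properties might look like for a standalone, Kerberized proxy. The host name and port are placeholders, not values from the patch; only the keytab path and principal come from the table above:

----
<!-- Illustrative conf/yarn-site.xml snippet. Because yarn.web-proxy.address
     here differs from yarn.resourcemanager.webapp.address, a standalone
     proxy server must be started (yarn start proxyserver). -->
<property>
  <name>yarn.web-proxy.address</name>
  <value>proxy.example.com:9046</value>  <!-- placeholder host:port -->
</property>
<property>
  <name>yarn.web-proxy.keytab</name>
  <value>/etc/security/keytab/web-app.service.keytab</value>
</property>
<property>
  <name>yarn.web-proxy.principal</name>
  <value>wap/_HOST@REALM.TLD</value>
</property>
----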
@@ -968,7 +1010,15 @@ KVNO Timestamp Principal
   Run a script to start NodeManagers on all slaves as <yarn>:
 
 ----
-[yarn]$ $YARN_HOME/bin/hdfs start nodemanager --config $HADOOP_CONF_DIR
+[yarn]$ $YARN_HOME/bin/yarn start nodemanager --config $HADOOP_CONF_DIR
+----
+
+  Start a standalone WebAppProxy server. Run on the WebAppProxy
+  server as <yarn>. If multiple servers are used with load balancing
+  it should be run on each of them:
+
+----
+[yarn]$ $YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
 ----
 
   Start the MapReduce JobHistory Server with the following command, run on the
@@ -1003,7 +1053,15 @@ KVNO Timestamp Principal
   Run a script to stop NodeManagers on all slaves as <yarn>:
 
 ----
-[yarn]$ $YARN_HOME/bin/hdfs stop nodemanager --config $HADOOP_CONF_DIR
+[yarn]$ $YARN_HOME/bin/yarn stop nodemanager --config $HADOOP_CONF_DIR
+----
+
+  Stop the WebAppProxy server. Run on the WebAppProxy server as
+  <yarn>. If multiple servers are used with load balancing it
+  should be run on each of them:
+
+----
+[yarn]$ $YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
 ----
 
   Stop the MapReduce JobHistory Server with the following command, run on the