MAPREDUCE-3275. Added documentation for AM WebApp Proxy. Contributed by Robert Evans.
git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1195579 13f79535-47bb-0310-9956-ffa450edef68
parent 9db078212f
commit b3f90c7060
@@ -452,6 +452,9 @@ Release 0.23.0 - Unreleased
     MAPREDUCE-3146. Added a MR specific command line to dump logs for a
     given TaskAttemptID. (Siddharth Seth via vinodkv)
 
+    MAPREDUCE-3275. Added documentation for AM WebApp Proxy. (Robert Evans via
+    acmurthy)
+
   OPTIMIZATIONS
 
     MAPREDUCE-2026. Make JobTracker.getJobCounters() and
@@ -100,6 +100,8 @@ Hadoop MapReduce Next Generation - Cluster Setup
 | ResourceManager | YARN_RESOURCEMANAGER_OPTS |
 *--------------------------------------+--------------------------------------+
 | NodeManager | YARN_NODEMANAGER_OPTS |
 *--------------------------------------+--------------------------------------+
+| WebAppProxy | YARN_PROXYSERVER_OPTS |
+*--------------------------------------+--------------------------------------+
 
   For example, To configure Namenode to use parallelGC, the following
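The new <<<YARN_PROXYSERVER_OPTS>>> hook follows the same pattern as the other daemon options rows above. A minimal sketch of how it might be set in conf/yarn-env.sh; the heap size and GC flag are illustrative assumptions, not values from the commit:

```shell
# Illustrative only: give the standalone WebAppProxy daemon a modest heap
# and verbose GC logging, mirroring the parallelGC example the doc gives
# for the Namenode.
export YARN_PROXYSERVER_OPTS="-Xmx256m -verbose:gc"
```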
@@ -450,7 +452,14 @@ Hadoop MapReduce Next Generation - Cluster Setup
   Run a script to start NodeManagers on all slaves:
 
 ----
-$ $YARN_HOME/bin/hdfs start nodemanager --config $HADOOP_CONF_DIR
+$ $YARN_HOME/bin/yarn start nodemanager --config $HADOOP_CONF_DIR
 ----
 
+  Start a standalone WebAppProxy server. If multiple servers
+  are used with load balancing it should be run on each of them:
+
+----
+$ $YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
+----
+
   Start the MapReduce JobHistory Server with the following command, run on the
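When load balancing is used, the start command added above must be repeated on every proxy host. A hedged sketch, assuming a simple newline-separated host list; the helper name and the ssh fan-out are illustrative, not part of the commit:

```shell
# Emit one ssh invocation of the documented start command per proxy host
# read from stdin; a real deployment would pipe these lines to sh or use
# the cluster's usual slaves.sh-style fan-out instead.
proxy_start_cmds() {
  while read -r host; do
    [ -n "$host" ] && printf 'ssh %s %s/bin/yarn start proxyserver --config %s\n' \
      "$host" "$YARN_HOME" "$HADOOP_CONF_DIR"
  done
}
```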
@@ -485,9 +494,17 @@ Hadoop MapReduce Next Generation - Cluster Setup
   Run a script to stop NodeManagers on all slaves:
 
 ----
-$ $YARN_HOME/bin/hdfs stop nodemanager --config $HADOOP_CONF_DIR
+$ $YARN_HOME/bin/yarn stop nodemanager --config $HADOOP_CONF_DIR
 ----
 
+  Stop the WebAppProxy server. If multiple servers are used with load
+  balancing it should be run on each of them:
+
+----
+$ $YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
+----
+
 
   Stop the MapReduce JobHistory Server with the following command, run on the
   designated server:
@@ -502,7 +519,7 @@ Hadoop MapReduce Next Generation - Cluster Setup
   to run Hadoop in <<secure mode>> with strong, Kerberos-based
   authentication.
 
-* <<<User Acccounts for Hadoop Daemons>>>
+* <<<User Accounts for Hadoop Daemons>>>
 
   Ensure that HDFS and YARN daemons run as different Unix users, for e.g.
   <<<hdfs>>> and <<<yarn>>>. Also, ensure that the MapReduce JobHistory
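The separate-accounts requirement in this hunk can be sanity-checked on each node. A minimal sketch, assuming the daemon accounts already exist; the helper name is mine, not Hadoop's:

```shell
# Succeed only when the two account names resolve to the same uid; for the
# hdfs/yarn pair a correctly configured cluster wants this check to FAIL.
same_unix_user() {
  a=$(id -u "$1" 2>/dev/null) || return 1
  b=$(id -u "$2" 2>/dev/null) || return 1
  [ "$a" = "$b" ]
}
# Example (hdfs/yarn assumed to exist on a real cluster):
# same_unix_user hdfs yarn && echo "WARNING: daemons share a uid"
```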
@@ -751,6 +768,31 @@ KVNO Timestamp Principal
 
 * <<<conf/yarn-site.xml>>>
 
+* WebAppProxy
+
+  The <<<WebAppProxy>>> provides a proxy between the web applications
+  exported by an application and an end user. If security is enabled
+  it will warn users before accessing a potentially unsafe web application.
+  Authentication and authorization using the proxy is handled just like
+  any other privileged web application.
+
+*-------------------------+-------------------------+------------------------+
+|| Parameter || Value || Notes |
+*-------------------------+-------------------------+------------------------+
+| <<<yarn.web-proxy.address>>> | | |
+| | <<<WebAppProxy>>> host:port for proxy to AM web apps. | |
+| | | <host:port> if this is the same as <<<yarn.resourcemanager.webapp.address>>>|
+| | | or it is not defined then the <<<ResourceManager>>> will run the proxy|
+| | | otherwise a standalone proxy server will need to be launched.|
+*-------------------------+-------------------------+------------------------+
+| <<<yarn.web-proxy.keytab>>> | | |
+| | </etc/security/keytab/web-app.service.keytab> | |
+| | | Kerberos keytab file for the WebAppProxy. |
+*-------------------------+-------------------------+------------------------+
+| <<<yarn.web-proxy.principal>>> | wap/_HOST@REALM.TLD | |
+| | | Kerberos principal name for the WebAppProxy. |
+*-------------------------+-------------------------+------------------------+
+
 * LinuxContainerExecutor
 
   A <<<ContainerExecutor>>> used by YARN framework which define how any
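Putting the three table entries together, a conf/yarn-site.xml fragment for a standalone, Kerberos-secured proxy might look like the following. The keytab path and principal come from the table's own examples; the proxy host and port are placeholders, not values from the commit:

```xml
<!-- Run the proxy standalone: an address different from
     yarn.resourcemanager.webapp.address forces a separate server;
     leaving it unset (or equal) makes the ResourceManager host the proxy. -->
<property>
  <name>yarn.web-proxy.address</name>
  <value>proxyhost.example.com:9099</value>
</property>
<property>
  <name>yarn.web-proxy.keytab</name>
  <value>/etc/security/keytab/web-app.service.keytab</value>
</property>
<property>
  <name>yarn.web-proxy.principal</name>
  <value>wap/_HOST@REALM.TLD</value>
</property>
```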
@@ -968,7 +1010,15 @@ KVNO Timestamp Principal
   Run a script to start NodeManagers on all slaves as <yarn>:
 
 ----
-[yarn]$ $YARN_HOME/bin/hdfs start nodemanager --config $HADOOP_CONF_DIR
+[yarn]$ $YARN_HOME/bin/yarn start nodemanager --config $HADOOP_CONF_DIR
 ----
 
+  Start a standalone WebAppProxy server. Run on the WebAppProxy
+  server as <yarn>. If multiple servers are used with load balancing
+  it should be run on each of them:
+
+----
+[yarn]$ $YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
+----
+
   Start the MapReduce JobHistory Server with the following command, run on the
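Before the proxy logs in from its keytab in secure mode, the wap/_HOST@REALM.TLD principal configured earlier has its _HOST placeholder expanded to the local host name. Hadoop does this internally (via its security utilities); the shell helper below is only an illustrative sketch of the substitution:

```shell
# Replace the _HOST placeholder in a Kerberos principal with a concrete
# host name, as the security layer does when a daemon logs in per host.
expand_principal() {
  printf '%s\n' "$1" | sed "s/_HOST/$2/"
}
```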
@@ -1003,7 +1053,15 @@ KVNO Timestamp Principal
   Run a script to stop NodeManagers on all slaves as <yarn>:
 
 ----
-[yarn]$ $YARN_HOME/bin/hdfs stop nodemanager --config $HADOOP_CONF_DIR
+[yarn]$ $YARN_HOME/bin/yarn stop nodemanager --config $HADOOP_CONF_DIR
 ----
 
+  Stop the WebAppProxy server. Run on the WebAppProxy server as
+  <yarn>. If multiple servers are used with load balancing it
+  should be run on each of them:
+
+----
+[yarn]$ $YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
+----
+
   Stop the MapReduce JobHistory Server with the following command, run on the