Merge -c 1390218 from trunk to branch-2 to fix YARN-9. Rename YARN_HOME to HADOOP_YARN_HOME. Contributed by Vinod K V.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2@1390219 13f79535-47bb-0310-9956-ffa450edef68
Author: Arun Murthy
Date:   2012-09-25 23:40:14 +00:00
Parent: 1901c501c8
Commit: c51c5c1297

14 changed files with 83 additions and 63 deletions
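Scripts outside the tree that still export the old name silently lose their setting after this rename. A minimal compatibility shim (an illustrative assumption, not part of this commit) can bridge the two names during migration:

```shell
# Compatibility shim (illustrative only, not shipped by this commit):
# honor a legacy YARN_HOME setting until every caller exports the new name.
YARN_HOME=/opt/hadoop-yarn        # example legacy setting
unset HADOOP_YARN_HOME            # new name not yet set anywhere
if [ -z "${HADOOP_YARN_HOME:-}" ] && [ -n "${YARN_HOME:-}" ]; then
  export HADOOP_YARN_HOME="${YARN_HOME}"
fi
echo "HADOOP_YARN_HOME=${HADOOP_YARN_HOME}"
```

Sourcing such a shim early (for example from a site-local env file) keeps old deployments working while the new variable propagates.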


@@ -29,6 +29,8 @@ Release 2.0.3-alpha - Unreleased
 Release 2.0.2-alpha - 2012-09-07
+YARN-9. Rename YARN_HOME to HADOOP_YARN_HOME. (vinodkv via acmurthy)
 NEW FEATURES
 YARN-1. Promote YARN to be a sub-project of Apache Hadoop. (acmurthy)


@@ -22,7 +22,7 @@
 #
 # YARN_SLAVES File naming remote hosts.
 # Default is ${YARN_CONF_DIR}/slaves.
-# YARN_CONF_DIR Alternate conf dir. Default is ${YARN_HOME}/conf.
+# YARN_CONF_DIR Alternate conf dir. Default is ${HADOOP_YARN_HOME}/conf.
 # YARN_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
 # YARN_SSH_OPTS Options passed to ssh when running remote commands.
 ##


@@ -41,7 +41,7 @@
 # more than one command (fs, dfs, fsck,
 # dfsadmin etc)
 #
-# YARN_CONF_DIR Alternate conf dir. Default is ${YARN_HOME}/conf.
+# YARN_CONF_DIR Alternate conf dir. Default is ${HADOOP_YARN_HOME}/conf.
 #
 # YARN_ROOT_LOGGER The root appender. Default is INFO,console
 #
@@ -116,43 +116,43 @@ fi
 CLASSPATH="${HADOOP_CONF_DIR}:${YARN_CONF_DIR}:${CLASSPATH}"
 # for developers, add Hadoop classes to CLASSPATH
-if [ -d "$YARN_HOME/yarn-api/target/classes" ]; then
+if [ -d "$HADOOP_YARN_HOME/yarn-api/target/classes" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/yarn-api/target/classes
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-api/target/classes
 fi
-if [ -d "$YARN_HOME/yarn-common/target/classes" ]; then
+if [ -d "$HADOOP_YARN_HOME/yarn-common/target/classes" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/yarn-common/target/classes
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-common/target/classes
 fi
-if [ -d "$YARN_HOME/yarn-mapreduce/target/classes" ]; then
+if [ -d "$HADOOP_YARN_HOME/yarn-mapreduce/target/classes" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/yarn-mapreduce/target/classes
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-mapreduce/target/classes
 fi
-if [ -d "$YARN_HOME/yarn-master-worker/target/classes" ]; then
+if [ -d "$HADOOP_YARN_HOME/yarn-master-worker/target/classes" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/yarn-master-worker/target/classes
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-master-worker/target/classes
 fi
-if [ -d "$YARN_HOME/yarn-server/yarn-server-nodemanager/target/classes" ]; then
+if [ -d "$HADOOP_YARN_HOME/yarn-server/yarn-server-nodemanager/target/classes" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/yarn-server/yarn-server-nodemanager/target/classes
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-server/yarn-server-nodemanager/target/classes
 fi
-if [ -d "$YARN_HOME/yarn-server/yarn-server-common/target/classes" ]; then
+if [ -d "$HADOOP_YARN_HOME/yarn-server/yarn-server-common/target/classes" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/yarn-server/yarn-server-common/target/classes
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-server/yarn-server-common/target/classes
 fi
-if [ -d "$YARN_HOME/yarn-server/yarn-server-resourcemanager/target/classes" ]; then
+if [ -d "$HADOOP_YARN_HOME/yarn-server/yarn-server-resourcemanager/target/classes" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/yarn-server/yarn-server-resourcemanager/target/classes
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-server/yarn-server-resourcemanager/target/classes
 fi
-if [ -d "$YARN_HOME/build/test/classes" ]; then
+if [ -d "$HADOOP_YARN_HOME/build/test/classes" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/target/test/classes
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/target/test/classes
 fi
-if [ -d "$YARN_HOME/build/tools" ]; then
+if [ -d "$HADOOP_YARN_HOME/build/tools" ]; then
-CLASSPATH=${CLASSPATH}:$YARN_HOME/build/tools
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/build/tools
 fi
-CLASSPATH=${CLASSPATH}:$YARN_HOME/${YARN_DIR}/*
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/${YARN_DIR}/*
-CLASSPATH=${CLASSPATH}:$YARN_HOME/${YARN_LIB_JARS_DIR}/*
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/${YARN_LIB_JARS_DIR}/*
 # so that filenames w/ spaces are handled correctly in loops below
 IFS=
 # default log directory & file
 if [ "$YARN_LOG_DIR" = "" ]; then
-YARN_LOG_DIR="$YARN_HOME/logs"
+YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
 fi
 if [ "$YARN_LOGFILE" = "" ]; then
 YARN_LOGFILE='yarn.log'
@@ -210,7 +210,7 @@ fi
 # cygwin path translation
 if $cygwin; then
 CLASSPATH=`cygpath -p -w "$CLASSPATH"`
-YARN_HOME=`cygpath -w "$YARN_HOME"`
+HADOOP_YARN_HOME=`cygpath -w "$HADOOP_YARN_HOME"`
 YARN_LOG_DIR=`cygpath -w "$YARN_LOG_DIR"`
 TOOL_PATH=`cygpath -p -w "$TOOL_PATH"`
 fi
@@ -224,8 +224,8 @@ YARN_OPTS="$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR"
 YARN_OPTS="$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR"
 YARN_OPTS="$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE"
 YARN_OPTS="$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE"
-YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$YARN_HOME"
+YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$HADOOP_YARN_HOME"
-YARN_OPTS="$YARN_OPTS -Dhadoop.home.dir=$YARN_HOME"
+YARN_OPTS="$YARN_OPTS -Dhadoop.home.dir=$HADOOP_YARN_HOME"
 YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
 YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
 if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
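The developer-classpath block above repeats one idiom: probe a build output directory under $HADOOP_YARN_HOME and append it only if it exists. A compact sketch of that idiom (the temp layout below is an assumption, simulating a tree where only yarn-api was built):

```shell
# Sketch of the conditional classpath idiom from bin/yarn above.
# Only directories that actually exist are appended.
HADOOP_YARN_HOME=$(mktemp -d)
mkdir -p "$HADOOP_YARN_HOME/yarn-api/target/classes"   # pretend yarn-api built
CLASSPATH="base"
for module in yarn-api yarn-common; do
  if [ -d "$HADOOP_YARN_HOME/$module/target/classes" ]; then
    CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/$module/target/classes
  fi
done
echo "$CLASSPATH"   # yarn-common was never built, so it is skipped
```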


@@ -49,7 +49,7 @@ then
 fi
 # Allow alternate conf dir location.
-export YARN_CONF_DIR="${HADOOP_CONF_DIR:-$YARN_HOME/conf}"
+export YARN_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_YARN_HOME/conf}"
 #check to see it is specified whether to use the slaves or the
 # masters file
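The export above relies on the POSIX `${parameter:-word}` expansion: the fallback is used only when the first variable is unset or empty. A small demonstration (paths are example values):

```shell
# ${VAR:-default} idiom as used by the conf-dir export above:
# YARN_CONF_DIR falls back to $HADOOP_YARN_HOME/conf only when
# HADOOP_CONF_DIR is unset or empty.
HADOOP_YARN_HOME=/opt/hadoop-yarn        # example install path
unset HADOOP_CONF_DIR
export YARN_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_YARN_HOME/conf}"
echo "$YARN_CONF_DIR"
```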


@@ -20,7 +20,7 @@
 #
 # Environment Variables
 #
-# YARN_CONF_DIR Alternate conf dir. Default is ${YARN_HOME}/conf.
+# YARN_CONF_DIR Alternate conf dir. Default is ${HADOOP_YARN_HOME}/conf.
 # YARN_LOG_DIR Where log files are stored. PWD by default.
 # YARN_MASTER host:path where hadoop code should be rsync'd from
 # YARN_PID_DIR The pid files are stored. /tmp by default.
@@ -76,7 +76,7 @@ fi
 # get log directory
 if [ "$YARN_LOG_DIR" = "" ]; then
-export YARN_LOG_DIR="$YARN_HOME/logs"
+export YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
 fi
 if [ ! -w "$YARN_LOG_DIR" ] ; then
@@ -115,13 +115,13 @@ case $startStop in
 if [ "$YARN_MASTER" != "" ]; then
 echo rsync from $YARN_MASTER
-rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' $YARN_MASTER/ "$YARN_HOME"
+rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' $YARN_MASTER/ "$HADOOP_YARN_HOME"
 fi
 hadoop_rotate_log $log
 echo starting $command, logging to $log
-cd "$YARN_HOME"
+cd "$HADOOP_YARN_HOME"
-nohup nice -n $YARN_NICENESS "$YARN_HOME"/bin/yarn --config $YARN_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
+nohup nice -n $YARN_NICENESS "$HADOOP_YARN_HOME"/bin/yarn --config $YARN_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
 echo $! > $pid
 sleep 1; head "$log"
 ;;
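The start branch above follows the classic daemon-launch idiom: background the command with nohup, record its PID in a pid file, and capture output in a log. A minimal sketch, where `sleep 30` merely stands in for the real `$HADOOP_YARN_HOME/bin/yarn` invocation:

```shell
# Daemon-launch idiom from yarn-daemon.sh above, reduced to its core.
# "sleep 30" is a placeholder for the real yarn command.
log=$(mktemp)
pidfile=$(mktemp)
nohup nice -n 0 sleep 30 > "$log" 2>&1 < /dev/null &
echo $! > "$pidfile"          # record the daemon's PID for later stop
kill "$(cat "$pidfile")"      # stop the demo daemon again
echo "recorded pid: $(cat "$pidfile")"
```

The pid file is what the matching `stop` branch reads back to terminate the daemon.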


@@ -34,5 +34,5 @@ DEFAULT_LIBEXEC_DIR="$bin"/../libexec
 HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
 . $HADOOP_LIBEXEC_DIR/yarn-config.sh
-exec "$bin/slaves.sh" --config $YARN_CONF_DIR cd "$YARN_HOME" \; "$bin/yarn-daemon.sh" --config $YARN_CONF_DIR "$@"
+exec "$bin/slaves.sh" --config $YARN_CONF_DIR cd "$HADOOP_YARN_HOME" \; "$bin/yarn-daemon.sh" --config $YARN_CONF_DIR "$@"


@@ -17,7 +17,7 @@
 export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}
 # resolve links - $0 may be a softlink
-export YARN_CONF_DIR="${YARN_CONF_DIR:-$YARN_HOME/conf}"
+export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"
 # some Java parameters
 # export JAVA_HOME=/home/y/libexec/jdk1.6.0/
@@ -47,7 +47,7 @@ IFS=
 # default log directory & file
 if [ "$YARN_LOG_DIR" = "" ]; then
-YARN_LOG_DIR="$YARN_HOME/logs"
+YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
 fi
 if [ "$YARN_LOGFILE" = "" ]; then
 YARN_LOGFILE='yarn.log'


@@ -169,9 +169,9 @@ public interface ApplicationConstants {
 MALLOC_ARENA_MAX("MALLOC_ARENA_MAX"),
 /**
- * $YARN_HOME
+ * $HADOOP_YARN_HOME
 */
-YARN_HOME("YARN_HOME");
+HADOOP_YARN_HOME("HADOOP_YARN_HOME");
 private final String variable;
 private Environment(String variable) {


@@ -21,9 +21,12 @@ package org.apache.hadoop.yarn.conf;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.UnknownHostException;
+import java.util.Arrays;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.api.ApplicationConstants;
 import com.google.common.base.Joiner;
 import com.google.common.base.Splitter;
@@ -281,7 +284,12 @@ public class YarnConfiguration extends Configuration {
 /** Environment variables that containers may override rather than use NodeManager's default.*/
 public static final String NM_ENV_WHITELIST = NM_PREFIX + "env-whitelist";
-public static final String DEFAULT_NM_ENV_WHITELIST = "JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME";
+public static final String DEFAULT_NM_ENV_WHITELIST = StringUtils.join(",",
+    Arrays.asList(ApplicationConstants.Environment.JAVA_HOME.key(),
+        ApplicationConstants.Environment.HADOOP_COMMON_HOME.key(),
+        ApplicationConstants.Environment.HADOOP_HDFS_HOME.key(),
+        ApplicationConstants.Environment.HADOOP_CONF_DIR.key(),
+        ApplicationConstants.Environment.HADOOP_YARN_HOME.key()));
 /** address of node manager IPC.*/
 public static final String NM_ADDRESS = NM_PREFIX + "address";
@@ -567,12 +575,19 @@ public class YarnConfiguration extends Configuration {
 * CLASSPATH entries
 */
 public static final String[] DEFAULT_YARN_APPLICATION_CLASSPATH = {
-    "$HADOOP_CONF_DIR", "$HADOOP_COMMON_HOME/share/hadoop/common/*",
-    "$HADOOP_COMMON_HOME/share/hadoop/common/lib/*",
-    "$HADOOP_HDFS_HOME/share/hadoop/hdfs/*",
-    "$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*",
-    "$YARN_HOME/share/hadoop/yarn/*",
-    "$YARN_HOME/share/hadoop/yarn/lib/*"};
+    ApplicationConstants.Environment.HADOOP_CONF_DIR.$(),
+    ApplicationConstants.Environment.HADOOP_COMMON_HOME.$()
+        + "/share/hadoop/common/*",
+    ApplicationConstants.Environment.HADOOP_COMMON_HOME.$()
+        + "/share/hadoop/common/lib/*",
+    ApplicationConstants.Environment.HADOOP_HDFS_HOME.$()
+        + "/share/hadoop/hdfs/*",
+    ApplicationConstants.Environment.HADOOP_HDFS_HOME.$()
+        + "/share/hadoop/hdfs/lib/*",
+    ApplicationConstants.Environment.HADOOP_YARN_HOME.$()
+        + "/share/hadoop/yarn/*",
+    ApplicationConstants.Environment.HADOOP_YARN_HOME.$()
+        + "/share/hadoop/yarn/lib/*" };
 /** Container temp directory */
 public static final String DEFAULT_CONTAINER_TEMP_DIR = "./tmp";


@@ -266,7 +266,7 @@
 <property>
 <description>Environment variables that containers may override rather than use NodeManager's default.</description>
 <name>yarn.nodemanager.env-whitelist</name>
-<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME</value>
+<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME</value>
 </property>
 <property>
@@ -561,7 +561,7 @@
 <description>CLASSPATH for YARN applications. A comma-separated list
 of CLASSPATH entries</description>
 <name>yarn.application.classpath</name>
-<value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*</value>
+<value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
 </property>
 </configuration>
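The `yarn.application.classpath` value above is a comma-separated list whose environment references are substituted on the node. A rough shell illustration of that mapping (not Hadoop code; the two entries and both paths are example values):

```shell
# Rough illustration of how the comma-separated yarn.application.classpath
# entries become a colon-separated Java classpath once the environment
# references are substituted. Paths are made up for the example.
HADOOP_CONF_DIR=/etc/hadoop/conf
HADOOP_YARN_HOME=/opt/hadoop
raw='$HADOOP_CONF_DIR,$HADOOP_YARN_HOME/share/hadoop/yarn/*'
expanded=$(printf '%s' "$raw" \
  | sed -e "s|\$HADOOP_CONF_DIR|$HADOOP_CONF_DIR|g" \
        -e "s|\$HADOOP_YARN_HOME|$HADOOP_YARN_HOME|g" \
  | tr ',' ':')
echo "$expanded"
```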


@@ -32,6 +32,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.util.Shell.ExitCodeException;
 import org.apache.hadoop.util.Shell.ShellCommandExecutor;
 import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.api.ApplicationConstants;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container;
@@ -92,7 +93,9 @@ public class LinuxContainerExecutor extends ContainerExecutor {
 }
 protected String getContainerExecutorExecutablePath(Configuration conf) {
-File hadoopBin = new File(System.getenv("YARN_HOME"), "bin");
+String yarnHomeEnvVar =
+    System.getenv(ApplicationConstants.Environment.HADOOP_YARN_HOME.key());
+File hadoopBin = new File(yarnHomeEnvVar, "bin");
 String defaultPath =
 new File(hadoopBin, "container-executor").getAbsolutePath();
 return null == conf


@@ -318,7 +318,7 @@ Hadoop MapReduce Next Generation - Capacity Scheduler
 ----
 $ vi $HADOOP_CONF_DIR/capacity-scheduler.xml
-$ $YARN_HOME/bin/yarn rmadmin -refreshQueues
+$ $HADOOP_YARN_HOME/bin/yarn rmadmin -refreshQueues
 ----
 <Note:> Queues cannot be <deleted>, only addition of new queues is supported -


@@ -497,20 +497,20 @@ Hadoop MapReduce Next Generation - Cluster Setup
 ResourceManager:
 ----
-$ $YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
+$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
 ----
 Run a script to start NodeManagers on all slaves:
 ----
-$ $YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
+$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
 ----
 Start a standalone WebAppProxy server. If multiple servers
 are used with load balancing it should be run on each of them:
 ----
-$ $YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
+$ $HADOOP_YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
 ----
 Start the MapReduce JobHistory Server with the following command, run on the
@@ -539,20 +539,20 @@ Hadoop MapReduce Next Generation - Cluster Setup
 ResourceManager:
 ----
-$ $YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
+$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
 ----
 Run a script to stop NodeManagers on all slaves:
 ----
-$ $YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
+$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
 ----
 Stop the WebAppProxy server. If multiple servers are used with load
 balancing it should be run on each of them:
 ----
-$ $YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
+$ $HADOOP_YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
 ----
@@ -883,7 +883,7 @@ KVNO Timestamp Principal
 The path passed in <<<-Dcontainer-executor.conf.dir>>> should be the
 path on the cluster nodes where a configuration file for the setuid
 executable should be located. The executable should be installed in
-$YARN_HOME/bin.
+$HADOOP_YARN_HOME/bin.
 The executable must have specific permissions: 6050 or --Sr-s---
 permissions user-owned by <root> (super-user) and group-owned by a
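The 6050 mode named above can be applied and verified as follows. In production this runs as root against the real $HADOOP_YARN_HOME/bin/container-executor; here a scratch file stands in for the binary, and on some systems `ls` may show a trailing ACL/SELinux marker after the mode column:

```shell
# Reproducing the required 6050 mode: setuid + setgid, group read/execute
# only, no access for owner-execute or others. A temp file stands in for
# the real container-executor binary.
f=$(mktemp)
chmod 6050 "$f"
perms=$(ls -l "$f" | cut -c2-10)   # drop the leading file-type column
echo "$perms"
```

With owner execute absent, the setuid bit renders as a capital `S`, giving the `--Sr-s---` string the documentation quotes.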
@@ -1040,13 +1040,13 @@ KVNO Timestamp Principal
 ResourceManager as <yarn>:
 ----
-[yarn]$ $YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
+[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
 ----
 Run a script to start NodeManagers on all slaves as <yarn>:
 ----
-[yarn]$ $YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
+[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
 ----
 Start a standalone WebAppProxy server. Run on the WebAppProxy
@@ -1054,7 +1054,7 @@ KVNO Timestamp Principal
 it should be run on each of them:
 ----
-[yarn]$ $YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
+[yarn]$ $HADOOP_YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
 ----
 Start the MapReduce JobHistory Server with the following command, run on the
@@ -1083,13 +1083,13 @@ KVNO Timestamp Principal
 ResourceManager as <yarn>:
 ----
-[yarn]$ $YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
+[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
 ----
 Run a script to stop NodeManagers on all slaves as <yarn>:
 ----
-[yarn]$ $YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
+[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
 ----
 Stop the WebAppProxy server. Run on the WebAppProxy server as
@@ -1097,7 +1097,7 @@ KVNO Timestamp Principal
 should be run on each of them:
 ----
-[yarn]$ $YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
+[yarn]$ $HADOOP_YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
 ----
 Stop the MapReduce JobHistory Server with the following command, run on the


@@ -43,7 +43,7 @@ $ mvn clean install assembly:assembly -Pnative
 Assuming you have installed hadoop-common/hadoop-hdfs and exported
 <<$HADOOP_COMMON_HOME>>/<<$HADOOP_HDFS_HOME>>, untar hadoop mapreduce
 tarball and set environment variable <<$HADOOP_MAPRED_HOME>> to the
-untarred directory. Set <<$YARN_HOME>> the same as <<$HADOOP_MAPRED_HOME>>.
+untarred directory. Set <<$HADOOP_YARN_HOME>> the same as <<$HADOOP_MAPRED_HOME>>.
 <<NOTE:>> The following instructions assume you have hdfs running.
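Spelled out as a shell fragment, the tarball environment described above might look like this; every install path is an assumption chosen for illustration, not a Hadoop default:

```shell
# Example environment for the tarball setup above. All paths are invented
# for illustration.
export HADOOP_COMMON_HOME=/opt/hadoop-common
export HADOOP_HDFS_HOME=/opt/hadoop-common
export HADOOP_MAPRED_HOME=/opt/hadoop-mapreduce   # the untarred directory
export HADOOP_YARN_HOME="$HADOOP_MAPRED_HOME"     # same as mapred, per the text
echo "$HADOOP_YARN_HOME"
```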
@@ -174,7 +174,7 @@ Add the following configs to your <<<yarn-site.xml>>>
 * Running daemons.
 Assuming that the environment variables <<$HADOOP_COMMON_HOME>>, <<$HADOOP_HDFS_HOME>>, <<$HADOO_MAPRED_HOME>>,
-<<$YARN_HOME>>, <<$JAVA_HOME>> and <<$HADOOP_CONF_DIR>> have been set appropriately.
+<<$HADOOP_YARN_HOME>>, <<$JAVA_HOME>> and <<$HADOOP_CONF_DIR>> have been set appropriately.
 Set $<<$YARN_CONF_DIR>> the same as $<<HADOOP_CONF_DIR>>
 Run ResourceManager and NodeManager as: