Merge branch 'trunk' into HDFS-7240

commit 3f62ba558d
Author: Anu Engineer
Date: 2016-04-07 14:43:39 -07:00
423 changed files with 9537 additions and 4163 deletions


@ -75,6 +75,7 @@ Optional packages:
$ sudo apt-get install snappy libsnappy-dev
* Intel ISA-L library for erasure coding
Please refer to https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
+ (OR https://github.com/01org/isa-l)
* Bzip2
$ sudo apt-get install bzip2 libbz2-dev
* Jansson (C Library for JSON)
@ -188,11 +189,12 @@ Maven build goals:
Intel ISA-L build options:
- Intel ISA-L is a erasure coding library that can be utilized by the native code.
+ Intel ISA-L is an erasure coding library that can be utilized by the native code.
It is currently an optional component, meaning that Hadoop can be built with
or without this dependency. Note the library is used via dynamic module. Please
reference the official site for the library details.
https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
+ (OR https://github.com/01org/isa-l)
* Use -Drequire.isal to fail the build if libisal.so is not found.
If this option is not specified and the isal library is missing,


@ -61,9 +61,9 @@ import java.util.*;
* <li>[#PREFIX#.]type: simple|kerberos|#CLASS#, 'simple' is short for the
* {@link PseudoAuthenticationHandler}, 'kerberos' is short for {@link KerberosAuthenticationHandler}, otherwise
* the full class name of the {@link AuthenticationHandler} must be specified.</li>
- * <li>[#PREFIX#.]signature.secret: when signer.secret.provider is set to
- * "string" or not specified, this is the value for the secret used to sign the
- * HTTP cookie.</li>
+ * <li>[#PREFIX#.]signature.secret.file: when signer.secret.provider is set to
+ * "file" or not specified, this is the location of file including the secret
+ * used to sign the HTTP cookie.</li>
* <li>[#PREFIX#.]token.validity: time -in seconds- that the generated token is
* valid before a new authentication is triggered, default value is
* <code>3600</code> seconds. This is also used for the rollover interval for
@ -79,17 +79,16 @@ import java.util.*;
* </p>
* <p>
* Out of the box it provides 3 signer secret provider implementations:
- * "string", "random", and "zookeeper"
+ * "file", "random" and "zookeeper"
* </p>
* Additional signer secret providers are supported via the
* {@link SignerSecretProvider} class.
* <p>
* For the HTTP cookies mentioned above, the SignerSecretProvider is used to
* determine the secret to use for signing the cookies. Different
- * implementations can have different behaviors. The "string" implementation
- * simply uses the string set in the [#PREFIX#.]signature.secret property
- * mentioned above. The "random" implementation uses a randomly generated
- * secret that rolls over at the interval specified by the
+ * implementations can have different behaviors. The "file" implementation
+ * loads the secret from a specified file. The "random" implementation uses a
+ * randomly generated secret that rolls over at the interval specified by the
* [#PREFIX#.]token.validity mentioned above. The "zookeeper" implementation
* is like the "random" one, except that it synchronizes the random secret
* and rollovers between multiple servers; it's meant for HA services.
@ -97,12 +96,12 @@ import java.util.*;
* The relevant configuration properties are:
* <ul>
* <li>signer.secret.provider: indicates the name of the SignerSecretProvider
- * class to use. Possible values are: "string", "random", "zookeeper", or a
- * classname. If not specified, the "string" implementation will be used with
- * [#PREFIX#.]signature.secret; and if that's not specified, the "random"
+ * class to use. Possible values are: "file", "random", "zookeeper", or a
+ * classname. If not specified, the "file" implementation will be used with
+ * [#PREFIX#.]signature.secret.file; and if that's not specified, the "random"
* implementation will be used.</li>
- * <li>[#PREFIX#.]signature.secret: When the "string" implementation is
- * specified, this value is used as the secret.</li>
+ * <li>[#PREFIX#.]signature.secret.file: When the "file" implementation is
+ * specified, this content of this file is used as the secret.</li>
* <li>[#PREFIX#.]token.validity: When the "random" or "zookeeper"
* implementations are specified, this value is used as the rollover
* interval.</li>
@ -176,10 +175,10 @@ public class AuthenticationFilter implements Filter {
/**
* Constant for the configuration property that indicates the name of the
* SignerSecretProvider class to use.
- * Possible values are: "string", "random", "zookeeper", or a classname.
- * If not specified, the "string" implementation will be used with
- * SIGNATURE_SECRET; and if that's not specified, the "random" implementation
- * will be used.
+ * Possible values are: "file", "random", "zookeeper", or a classname.
+ * If not specified, the "file" implementation will be used with
+ * SIGNATURE_SECRET_FILE; and if that's not specified, the "random"
+ * implementation will be used.
*/
public static final String SIGNER_SECRET_PROVIDER =
"signer.secret.provider";


@ -47,8 +47,8 @@ function hadoop_usage
# This script runs the hadoop core commands.
# let's locate libexec...
- if [[ -n "${HADOOP_PREFIX}" ]]; then
- HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_PREFIX}/libexec"
+ if [[ -n "${HADOOP_HOME}" ]]; then
+ HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
bin=$(cd -P -- "$(dirname -- "${MYNAME}")" >/dev/null && pwd -P)
HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
@ -84,9 +84,9 @@ case ${COMMAND} in
# shellcheck disable=SC2086
exec "${HADOOP_HDFS_HOME}/bin/hdfs" \
--config "${HADOOP_CONF_DIR}" "${COMMAND}" "$@"
- elif [[ -f "${HADOOP_PREFIX}/bin/hdfs" ]]; then
+ elif [[ -f "${HADOOP_HOME}/bin/hdfs" ]]; then
# shellcheck disable=SC2086
- exec "${HADOOP_PREFIX}/bin/hdfs" \
+ exec "${HADOOP_HOME}/bin/hdfs" \
--config "${HADOOP_CONF_DIR}" "${COMMAND}" "$@"
else
hadoop_error "HADOOP_HDFS_HOME not found!"
@ -104,8 +104,8 @@ case ${COMMAND} in
if [[ -f "${HADOOP_MAPRED_HOME}/bin/mapred" ]]; then
exec "${HADOOP_MAPRED_HOME}/bin/mapred" \
--config "${HADOOP_CONF_DIR}" "${COMMAND}" "$@"
- elif [[ -f "${HADOOP_PREFIX}/bin/mapred" ]]; then
- exec "${HADOOP_PREFIX}/bin/mapred" \
+ elif [[ -f "${HADOOP_HOME}/bin/mapred" ]]; then
+ exec "${HADOOP_HOME}/bin/mapred" \
--config "${HADOOP_CONF_DIR}" "${COMMAND}" "$@"
else
hadoop_error "HADOOP_MAPRED_HOME not found!"


@ -63,6 +63,8 @@ else
exit 1
fi
+ hadoop_deprecate_envvar HADOOP_PREFIX HADOOP_HOME
# allow overrides of the above and pre-defines of the below
if [[ -n "${HADOOP_COMMON_HOME}" ]] &&
[[ -e "${HADOOP_COMMON_HOME}/libexec/hadoop-layout.sh" ]]; then
@ -128,8 +130,8 @@ fi
hadoop_shellprofiles_init
# get the native libs in there pretty quick
- hadoop_add_javalibpath "${HADOOP_PREFIX}/build/native"
- hadoop_add_javalibpath "${HADOOP_PREFIX}/${HADOOP_COMMON_LIB_NATIVE_DIR}"
+ hadoop_add_javalibpath "${HADOOP_HOME}/build/native"
+ hadoop_add_javalibpath "${HADOOP_HOME}/${HADOOP_COMMON_LIB_NATIVE_DIR}"
hadoop_shellprofiles_nativelib


@ -21,8 +21,8 @@ function hadoop_usage
}
# let's locate libexec...
- if [[ -n "${HADOOP_PREFIX}" ]]; then
- HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_PREFIX}/libexec"
+ if [[ -n "${HADOOP_HOME}" ]]; then
+ HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
this="${BASH_SOURCE-$0}"
bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)
@ -47,7 +47,7 @@ daemonmode=$1
shift
if [[ -z "${HADOOP_HDFS_HOME}" ]]; then
- hdfsscript="${HADOOP_PREFIX}/bin/hdfs"
+ hdfsscript="${HADOOP_HOME}/bin/hdfs"
else
hdfsscript="${HADOOP_HDFS_HOME}/bin/hdfs"
fi


@ -27,8 +27,8 @@ this="${BASH_SOURCE-$0}"
bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)
# let's locate libexec...
- if [[ -n "${HADOOP_PREFIX}" ]]; then
- HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_PREFIX}/libexec"
+ if [[ -n "${HADOOP_HOME}" ]]; then
+ HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
fi
@ -51,7 +51,7 @@ daemonmode=$1
shift
if [[ -z "${HADOOP_HDFS_HOME}" ]]; then
- hdfsscript="${HADOOP_PREFIX}/bin/hdfs"
+ hdfsscript="${HADOOP_HOME}/bin/hdfs"
else
hdfsscript="${HADOOP_HDFS_HOME}/bin/hdfs"
fi


@ -278,7 +278,7 @@ function hadoop_bootstrap
# By now, HADOOP_LIBEXEC_DIR should have been defined upstream
# We can piggyback off of that to figure out where the default
# HADOOP_FREFIX should be. This allows us to run without
- # HADOOP_PREFIX ever being defined by a human! As a consequence
+ # HADOOP_HOME ever being defined by a human! As a consequence
# HADOOP_LIBEXEC_DIR now becomes perhaps the single most powerful
# env var within Hadoop.
if [[ -z "${HADOOP_LIBEXEC_DIR}" ]]; then
@ -286,8 +286,8 @@ function hadoop_bootstrap
exit 1
fi
HADOOP_DEFAULT_PREFIX=$(cd -P -- "${HADOOP_LIBEXEC_DIR}/.." >/dev/null && pwd -P)
- HADOOP_PREFIX=${HADOOP_PREFIX:-$HADOOP_DEFAULT_PREFIX}
- export HADOOP_PREFIX
+ HADOOP_HOME=${HADOOP_HOME:-$HADOOP_DEFAULT_PREFIX}
+ export HADOOP_HOME
#
# short-cuts. vendors may redefine these as well, preferably
@ -302,7 +302,7 @@ function hadoop_bootstrap
YARN_LIB_JARS_DIR=${YARN_LIB_JARS_DIR:-"share/hadoop/yarn/lib"}
MAPRED_DIR=${MAPRED_DIR:-"share/hadoop/mapreduce"}
MAPRED_LIB_JARS_DIR=${MAPRED_LIB_JARS_DIR:-"share/hadoop/mapreduce/lib"}
- HADOOP_TOOLS_HOME=${HADOOP_TOOLS_HOME:-${HADOOP_PREFIX}}
+ HADOOP_TOOLS_HOME=${HADOOP_TOOLS_HOME:-${HADOOP_HOME}}
HADOOP_TOOLS_DIR=${HADOOP_TOOLS_DIR:-"share/hadoop/tools"}
HADOOP_TOOLS_LIB_JARS_DIR=${HADOOP_TOOLS_LIB_JARS_DIR:-"${HADOOP_TOOLS_DIR}/lib"}
@ -326,12 +326,12 @@ function hadoop_find_confdir
# An attempt at compatibility with some Hadoop 1.x
# installs.
- if [[ -e "${HADOOP_PREFIX}/conf/hadoop-env.sh" ]]; then
+ if [[ -e "${HADOOP_HOME}/conf/hadoop-env.sh" ]]; then
conf_dir="conf"
else
conf_dir="etc/hadoop"
fi
- export HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-${HADOOP_PREFIX}/${conf_dir}}"
+ export HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-${HADOOP_HOME}/${conf_dir}}"
hadoop_debug "HADOOP_CONF_DIR=${HADOOP_CONF_DIR}"
}
@ -524,8 +524,8 @@ function hadoop_basic_init
hadoop_debug "Initialize CLASSPATH"
if [[ -z "${HADOOP_COMMON_HOME}" ]] &&
- [[ -d "${HADOOP_PREFIX}/${HADOOP_COMMON_DIR}" ]]; then
- export HADOOP_COMMON_HOME="${HADOOP_PREFIX}"
+ [[ -d "${HADOOP_HOME}/${HADOOP_COMMON_DIR}" ]]; then
+ export HADOOP_COMMON_HOME="${HADOOP_HOME}"
fi
# default policy file for service-level authorization
@ -533,20 +533,20 @@ function hadoop_basic_init
# define HADOOP_HDFS_HOME
if [[ -z "${HADOOP_HDFS_HOME}" ]] &&
- [[ -d "${HADOOP_PREFIX}/${HDFS_DIR}" ]]; then
- export HADOOP_HDFS_HOME="${HADOOP_PREFIX}"
+ [[ -d "${HADOOP_HOME}/${HDFS_DIR}" ]]; then
+ export HADOOP_HDFS_HOME="${HADOOP_HOME}"
fi
# define HADOOP_YARN_HOME
if [[ -z "${HADOOP_YARN_HOME}" ]] &&
- [[ -d "${HADOOP_PREFIX}/${YARN_DIR}" ]]; then
- export HADOOP_YARN_HOME="${HADOOP_PREFIX}"
+ [[ -d "${HADOOP_HOME}/${YARN_DIR}" ]]; then
+ export HADOOP_YARN_HOME="${HADOOP_HOME}"
fi
# define HADOOP_MAPRED_HOME
if [[ -z "${HADOOP_MAPRED_HOME}" ]] &&
- [[ -d "${HADOOP_PREFIX}/${MAPRED_DIR}" ]]; then
- export HADOOP_MAPRED_HOME="${HADOOP_PREFIX}"
+ [[ -d "${HADOOP_HOME}/${MAPRED_DIR}" ]]; then
+ export HADOOP_MAPRED_HOME="${HADOOP_HOME}"
fi
if [[ ! -d "${HADOOP_COMMON_HOME}" ]]; then
@ -573,7 +573,7 @@ function hadoop_basic_init
# let's define it as 'hadoop'
HADOOP_IDENT_STRING=${HADOOP_IDENT_STRING:-$USER}
HADOOP_IDENT_STRING=${HADOOP_IDENT_STRING:-hadoop}
- HADOOP_LOG_DIR=${HADOOP_LOG_DIR:-"${HADOOP_PREFIX}/logs"}
+ HADOOP_LOG_DIR=${HADOOP_LOG_DIR:-"${HADOOP_HOME}/logs"}
HADOOP_LOGFILE=${HADOOP_LOGFILE:-hadoop.log}
HADOOP_LOGLEVEL=${HADOOP_LOGLEVEL:-INFO}
HADOOP_NICENESS=${HADOOP_NICENESS:-0}
@ -1219,7 +1219,6 @@ function hadoop_finalize_hadoop_opts
hadoop_translate_cygwin_path HADOOP_LOG_DIR
hadoop_add_param HADOOP_OPTS hadoop.log.dir "-Dhadoop.log.dir=${HADOOP_LOG_DIR}"
hadoop_add_param HADOOP_OPTS hadoop.log.file "-Dhadoop.log.file=${HADOOP_LOGFILE}"
- HADOOP_HOME=${HADOOP_PREFIX}
hadoop_translate_cygwin_path HADOOP_HOME
export HADOOP_HOME
hadoop_add_param HADOOP_OPTS hadoop.home.dir "-Dhadoop.home.dir=${HADOOP_HOME}"
@ -1252,11 +1251,11 @@ function hadoop_finalize_catalina_opts
local prefix=${HADOOP_CATALINA_PREFIX}
- hadoop_add_param CATALINA_OPTS hadoop.home.dir "-Dhadoop.home.dir=${HADOOP_PREFIX}"
+ hadoop_add_param CATALINA_OPTS hadoop.home.dir "-Dhadoop.home.dir=${HADOOP_HOME}"
if [[ -n "${JAVA_LIBRARY_PATH}" ]]; then
hadoop_add_param CATALINA_OPTS java.library.path "-Djava.library.path=${JAVA_LIBRARY_PATH}"
fi
- hadoop_add_param CATALINA_OPTS "${prefix}.home.dir" "-D${prefix}.home.dir=${HADOOP_PREFIX}"
+ hadoop_add_param CATALINA_OPTS "${prefix}.home.dir" "-D${prefix}.home.dir=${HADOOP_HOME}"
hadoop_add_param CATALINA_OPTS "${prefix}.config.dir" "-D${prefix}.config.dir=${HADOOP_CATALINA_CONFIG}"
hadoop_add_param CATALINA_OPTS "${prefix}.log.dir" "-D${prefix}.log.dir=${HADOOP_CATALINA_LOG}"
hadoop_add_param CATALINA_OPTS "${prefix}.temp.dir" "-D${prefix}.temp.dir=${HADOOP_CATALINA_TEMP}"
@ -1282,7 +1281,7 @@ function hadoop_finalize
hadoop_finalize_hadoop_heap
hadoop_finalize_hadoop_opts
- hadoop_translate_cygwin_path HADOOP_PREFIX
+ hadoop_translate_cygwin_path HADOOP_HOME
hadoop_translate_cygwin_path HADOOP_CONF_DIR
hadoop_translate_cygwin_path HADOOP_COMMON_HOME
hadoop_translate_cygwin_path HADOOP_HDFS_HOME


@ -26,8 +26,8 @@
##
## If you move HADOOP_LIBEXEC_DIR from some location that
## isn't bin/../libexec, you MUST define either HADOOP_LIBEXEC_DIR
- ## or have HADOOP_PREFIX/libexec/hadoop-config.sh and
- ## HADOOP_PREFIX/libexec/hadoop-layout.sh (this file) exist.
+ ## or have HADOOP_HOME/libexec/hadoop-config.sh and
+ ## HADOOP_HOME/libexec/hadoop-layout.sh (this file) exist.
## NOTE:
##
@ -44,7 +44,7 @@
####
# Default location for the common/core Hadoop project
- # export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
+ # export HADOOP_COMMON_HOME=${HADOOP_HOME}
# Relative locations where components under HADOOP_COMMON_HOME are located
# export HADOOP_COMMON_DIR="share/hadoop/common"
@ -56,7 +56,7 @@
####
# Default location for the HDFS subproject
- # export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
+ # export HADOOP_HDFS_HOME=${HADOOP_HOME}
# Relative locations where components under HADOOP_HDFS_HOME are located
# export HDFS_DIR="share/hadoop/hdfs"
@ -67,7 +67,7 @@
####
# Default location for the YARN subproject
- # export HADOOP_YARN_HOME=${HADOOP_PREFIX}
+ # export HADOOP_YARN_HOME=${HADOOP_HOME}
# Relative locations where components under HADOOP_YARN_HOME are located
# export YARN_DIR="share/hadoop/yarn"
@ -78,7 +78,7 @@
####
# Default location for the MapReduce subproject
- # export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
+ # export HADOOP_MAPRED_HOME=${HADOOP_HOME}
# Relative locations where components under HADOOP_MAPRED_HOME are located
# export MAPRED_DIR="share/hadoop/mapreduce"
@ -92,6 +92,6 @@
# note that this path only gets added for certain commands and not
# part of the general classpath unless HADOOP_OPTIONAL_TOOLS is used
# to configure them in
- # export HADOOP_TOOLS_HOME=${HADOOP_PREFIX}
+ # export HADOOP_TOOLS_HOME=${HADOOP_HOME}
# export HADOOP_TOOLS_DIR=${HADOOP_TOOLS_DIR:-"share/hadoop/tools"}
# export HADOOP_TOOLS_LIB_JARS_DIR=${HADOOP_TOOLS_LIB_JARS_DIR:-"${HADOOP_TOOLS_DIR}/lib"}


@ -22,7 +22,7 @@
#
# HADOOP_SLAVES File naming remote hosts.
# Default is ${HADOOP_CONF_DIR}/slaves.
- # HADOOP_CONF_DIR Alternate conf dir. Default is ${HADOOP_PREFIX}/conf.
+ # HADOOP_CONF_DIR Alternate conf dir. Default is ${HADOOP_HOME}/conf.
# HADOOP_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
# HADOOP_SSH_OPTS Options passed to ssh when running remote commands.
##
@ -33,8 +33,8 @@ function hadoop_usage
}
# let's locate libexec...
- if [[ -n "${HADOOP_PREFIX}" ]]; then
- HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_PREFIX}/libexec"
+ if [[ -n "${HADOOP_HOME}" ]]; then
+ HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
this="${BASH_SOURCE-$0}"
bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)


@ -21,8 +21,8 @@ exit 1
# let's locate libexec...
- if [[ -n "${HADOOP_PREFIX}" ]]; then
- HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_PREFIX}/libexec"
+ if [[ -n "${HADOOP_HOME}" ]]; then
+ HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
this="${BASH_SOURCE-$0}"
bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)


@ -22,8 +22,8 @@ echo "This script is deprecated. Use stop-dfs.sh and stop-yarn.sh instead."
exit 1
# let's locate libexec...
- if [[ -n "${HADOOP_PREFIX}" ]]; then
- HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_PREFIX}/libexec"
+ if [[ -n "${HADOOP_HOME}" ]]; then
+ HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
this="${BASH_SOURCE-$0}"
bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)


@ -55,14 +55,14 @@
# Location of Hadoop. By default, Hadoop will attempt to determine
# this location based upon its execution path.
- # export HADOOP_PREFIX=
+ # export HADOOP_HOME=
# Location of Hadoop's configuration information. i.e., where this
# file is probably living. Many sites will also set this in the
# same location where JAVA_HOME is defined. If this is not defined
# Hadoop will attempt to locate it based upon its execution
# path.
- # export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
+ # export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
# The maximum amount of heap to use (Java -Xmx). If no unit
# is provided, it will be converted to MB. Daemons will
@ -186,10 +186,10 @@ esac
# non-secure)
#
- # Where (primarily) daemon log files are stored. # $HADOOP_PREFIX/logs
- # by default.
+ # Where (primarily) daemon log files are stored.
+ # ${HADOOP_HOME}/logs by default.
# Java property: hadoop.log.dir
- # export HADOOP_LOG_DIR=${HADOOP_PREFIX}/logs
+ # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
# A string representing this instance of hadoop. $USER by default.
# This is used in writing log and pid files, so keep that in mind!


@ -1626,6 +1626,10 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
return defaultValue;
}
vStr = vStr.trim();
+ return getTimeDurationHelper(name, vStr, unit);
+ }
+ private long getTimeDurationHelper(String name, String vStr, TimeUnit unit) {
ParsedTimeDuration vUnit = ParsedTimeDuration.unitFor(vStr);
if (null == vUnit) {
LOG.warn("No unit for " + name + "(" + vStr + ") assuming " + unit);
@ -1636,6 +1640,15 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
return unit.convert(Long.parseLong(vStr), vUnit.unit());
}
+ public long[] getTimeDurations(String name, TimeUnit unit) {
+ String[] strings = getTrimmedStrings(name);
+ long[] durations = new long[strings.length];
+ for (int i = 0; i < strings.length; i++) {
+ durations[i] = getTimeDurationHelper(name, strings[i], unit);
+ }
+ return durations;
+ }
/**
* Get the value of the <code>name</code> property as a <code>Pattern</code>.
* If no such property is specified, or if the specified value is not a valid
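The new getTimeDurations(String, TimeUnit) above applies the same unit parsing as getTimeDuration to each comma-separated token. A hedged usage sketch follows; the property name is invented, and the expected values simply follow the parsing shown in the hunk:

    import java.util.Arrays;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.conf.Configuration;

    // Hypothetical illustration of Configuration.getTimeDurations().
    // "my.example.intervals" is a made-up property name.
    public class TimeDurationsSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        conf.set("my.example.intervals", "10s,5m,1h");
        long[] millis = conf.getTimeDurations("my.example.intervals", TimeUnit.MILLISECONDS);
        // Per the helper above: 10s -> 10000, 5m -> 300000, 1h -> 3600000
        System.out.println(Arrays.toString(millis));
      }
    }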


@ -90,14 +90,22 @@ public class CommonConfigurationKeys extends CommonConfigurationKeysPublic {
/**
* CallQueue related settings. These are not used directly, but rather
* combined with a namespace and port. For instance:
- * IPC_CALLQUEUE_NAMESPACE + ".8020." + IPC_CALLQUEUE_IMPL_KEY
+ * IPC_NAMESPACE + ".8020." + IPC_CALLQUEUE_IMPL_KEY
*/
- public static final String IPC_CALLQUEUE_NAMESPACE = "ipc";
+ public static final String IPC_NAMESPACE = "ipc";
public static final String IPC_CALLQUEUE_IMPL_KEY = "callqueue.impl";
- public static final String IPC_CALLQUEUE_IDENTITY_PROVIDER_KEY = "identity-provider.impl";
+ public static final String IPC_SCHEDULER_IMPL_KEY = "scheduler.impl";
+ public static final String IPC_IDENTITY_PROVIDER_KEY = "identity-provider.impl";
public static final String IPC_BACKOFF_ENABLE = "backoff.enable";
public static final boolean IPC_BACKOFF_ENABLE_DEFAULT = false;
+ /**
+ * IPC scheduler priority levels.
+ */
+ public static final String IPC_SCHEDULER_PRIORITY_LEVELS_KEY =
+ "scheduler.priority.levels";
+ public static final int IPC_SCHEDULER_PRIORITY_LEVELS_DEFAULT_KEY = 4;
/** This is for specifying the implementation for the mappings from
* hostnames to the racks they belong to
*/


@ -23,7 +23,6 @@ import java.net.InetAddress;
import java.net.URI;
import java.net.UnknownHostException;
import java.util.ArrayList;
- import java.util.Arrays;
import java.util.Enumeration;
import java.util.List;
import java.util.Map;
@ -381,46 +380,6 @@ public class FileUtil {
}
- /** Copy all files in a directory to one output file (merge). */
- public static boolean copyMerge(FileSystem srcFS, Path srcDir,
- FileSystem dstFS, Path dstFile,
- boolean deleteSource,
- Configuration conf, String addString) throws IOException {
- dstFile = checkDest(srcDir.getName(), dstFS, dstFile, false);
- if (!srcFS.getFileStatus(srcDir).isDirectory())
- return false;
- OutputStream out = dstFS.create(dstFile);
- try {
- FileStatus contents[] = srcFS.listStatus(srcDir);
- Arrays.sort(contents);
- for (int i = 0; i < contents.length; i++) {
- if (contents[i].isFile()) {
- InputStream in = srcFS.open(contents[i].getPath());
- try {
- IOUtils.copyBytes(in, out, conf, false);
- if (addString!=null)
- out.write(addString.getBytes("UTF-8"));
- } finally {
- in.close();
- }
- }
- }
- } finally {
- out.close();
- }
- if (deleteSource) {
- return srcFS.delete(srcDir, true);
- } else {
- return true;
- }
- }
/** Copy local files to a FileSystem. */
public static boolean copy(File src,
FileSystem dstFS, Path dst,


@ -33,6 +33,7 @@ public class PathIOException extends IOException {
// uris with no authority
private String operation;
private String path;
+ private String fullyQualifiedPath;
private String targetPath;
/**
@ -68,6 +69,11 @@ public class PathIOException extends IOException {
this.path = path;
}
+ public PathIOException withFullyQualifiedPath(String fqPath) {
+ fullyQualifiedPath = fqPath;
+ return this;
+ }
/** Format:
* cmd: {operation} `path' {to `target'}: error string
*/
@ -85,6 +91,9 @@ public class PathIOException extends IOException {
if (getCause() != null) {
message.append(": " + getCause().getMessage());
}
+ if (fullyQualifiedPath != null && !fullyQualifiedPath.equals(path)) {
+ message.append(": ").append(formatPath(fullyQualifiedPath));
+ }
return message.toString();
}


@ -220,7 +220,8 @@ abstract class CommandWithDestination extends FsCommand {
throw new PathExistsException(dst.toString());
}
} else if (!dst.parentExists()) {
- throw new PathNotFoundException(dst.toString());
+ throw new PathNotFoundException(dst.toString())
+ .withFullyQualifiedPath(dst.path.toUri().toString());
}
super.processArguments(args);
}


@ -100,7 +100,11 @@ class MoveCommands {
@Override
protected void processPath(PathData src, PathData target) throws IOException {
- if (!src.fs.getUri().equals(target.fs.getUri())) {
+ String srcUri = src.fs.getUri().getScheme() + "://" +
+ src.fs.getUri().getHost();
+ String dstUri = target.fs.getUri().getScheme() + "://" +
+ target.fs.getUri().getHost();
+ if (!srcUri.equals(dstUri)) {
throw new PathIOException(src.toString(),
"Does not match target filesystem");
}


@ -72,7 +72,8 @@ class Touch extends FsCommand {
@Override
protected void processNonexistentPath(PathData item) throws IOException {
if (!item.parentExists()) {
- throw new PathNotFoundException(item.toString());
+ throw new PathNotFoundException(item.toString())
+ .withFullyQualifiedPath(item.path.toUri().toString());
}
touchz(item);
}


@ -19,6 +19,7 @@
package org.apache.hadoop.ipc;
import java.lang.reflect.Constructor;
+ import java.lang.reflect.InvocationTargetException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
@ -26,6 +27,7 @@ import java.util.concurrent.atomic.AtomicReference;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
+ import org.apache.hadoop.fs.CommonConfigurationKeys;
/**
* Abstracts queue operations for different blocking queues.
@ -42,6 +44,13 @@ public class CallQueueManager<E> {
Class<?> queueClass, Class<E> elementClass) {
return (Class<? extends BlockingQueue<E>>)queueClass;
}
@SuppressWarnings("unchecked")
static Class<? extends RpcScheduler> convertSchedulerClass(
Class<?> schedulerClass) {
return (Class<? extends RpcScheduler>)schedulerClass;
}
private final boolean clientBackOffEnabled; private final boolean clientBackOffEnabled;
// Atomic refs point to active callQueue // Atomic refs point to active callQueue
@ -49,36 +58,47 @@ public class CallQueueManager<E> {
private final AtomicReference<BlockingQueue<E>> putRef; private final AtomicReference<BlockingQueue<E>> putRef;
private final AtomicReference<BlockingQueue<E>> takeRef; private final AtomicReference<BlockingQueue<E>> takeRef;
private RpcScheduler scheduler;
public CallQueueManager(Class<? extends BlockingQueue<E>> backingClass, public CallQueueManager(Class<? extends BlockingQueue<E>> backingClass,
Class<? extends RpcScheduler> schedulerClass,
boolean clientBackOffEnabled, int maxQueueSize, String namespace, boolean clientBackOffEnabled, int maxQueueSize, String namespace,
Configuration conf) { Configuration conf) {
int priorityLevels = parseNumLevels(namespace, conf);
this.scheduler = createScheduler(schedulerClass, priorityLevels,
namespace, conf);
BlockingQueue<E> bq = createCallQueueInstance(backingClass, BlockingQueue<E> bq = createCallQueueInstance(backingClass,
maxQueueSize, namespace, conf); priorityLevels, maxQueueSize, namespace, conf);
this.clientBackOffEnabled = clientBackOffEnabled; this.clientBackOffEnabled = clientBackOffEnabled;
this.putRef = new AtomicReference<BlockingQueue<E>>(bq); this.putRef = new AtomicReference<BlockingQueue<E>>(bq);
this.takeRef = new AtomicReference<BlockingQueue<E>>(bq); this.takeRef = new AtomicReference<BlockingQueue<E>>(bq);
LOG.info("Using callQueue " + backingClass); LOG.info("Using callQueue: " + backingClass + " scheduler: " +
schedulerClass);
} }
private <T extends BlockingQueue<E>> T createCallQueueInstance( private static <T extends RpcScheduler> T createScheduler(
Class<T> theClass, int maxLen, String ns, Configuration conf) { Class<T> theClass, int priorityLevels, String ns, Configuration conf) {
// Used for custom, configurable scheduler
// Used for custom, configurable callqueues
try { try {
Constructor<T> ctor = theClass.getDeclaredConstructor(int.class, String.class, Constructor<T> ctor = theClass.getDeclaredConstructor(int.class,
Configuration.class); String.class, Configuration.class);
return ctor.newInstance(maxLen, ns, conf); return ctor.newInstance(priorityLevels, ns, conf);
} catch (RuntimeException e) { } catch (RuntimeException e) {
throw e; throw e;
} catch (InvocationTargetException e) {
throw new RuntimeException(theClass.getName()
+ " could not be constructed.", e.getCause());
} catch (Exception e) { } catch (Exception e) {
} }
// Used for LinkedBlockingQueue, ArrayBlockingQueue, etc
try { try {
Constructor<T> ctor = theClass.getDeclaredConstructor(int.class); Constructor<T> ctor = theClass.getDeclaredConstructor(int.class);
return ctor.newInstance(maxLen); return ctor.newInstance(priorityLevels);
} catch (RuntimeException e) { } catch (RuntimeException e) {
throw e; throw e;
} catch (InvocationTargetException e) {
throw new RuntimeException(theClass.getName()
+ " could not be constructed.", e.getCause());
} catch (Exception e) { } catch (Exception e) {
} }
@ -88,6 +108,55 @@ public class CallQueueManager<E> {
return ctor.newInstance(); return ctor.newInstance();
} catch (RuntimeException e) { } catch (RuntimeException e) {
throw e; throw e;
} catch (InvocationTargetException e) {
throw new RuntimeException(theClass.getName()
+ " could not be constructed.", e.getCause());
} catch (Exception e) {
}
// Nothing worked
throw new RuntimeException(theClass.getName() +
" could not be constructed.");
}
private <T extends BlockingQueue<E>> T createCallQueueInstance(
Class<T> theClass, int priorityLevels, int maxLen, String ns,
Configuration conf) {
// Used for custom, configurable callqueues
try {
Constructor<T> ctor = theClass.getDeclaredConstructor(int.class,
int.class, String.class, Configuration.class);
return ctor.newInstance(priorityLevels, maxLen, ns, conf);
} catch (RuntimeException e) {
throw e;
} catch (InvocationTargetException e) {
throw new RuntimeException(theClass.getName()
+ " could not be constructed.", e.getCause());
} catch (Exception e) {
}
// Used for LinkedBlockingQueue, ArrayBlockingQueue, etc
try {
Constructor<T> ctor = theClass.getDeclaredConstructor(int.class);
return ctor.newInstance(maxLen);
} catch (RuntimeException e) {
throw e;
} catch (InvocationTargetException e) {
throw new RuntimeException(theClass.getName()
+ " could not be constructed.", e.getCause());
} catch (Exception e) {
}
// Last attempt
try {
Constructor<T> ctor = theClass.getDeclaredConstructor();
return ctor.newInstance();
} catch (RuntimeException e) {
throw e;
} catch (InvocationTargetException e) {
throw new RuntimeException(theClass.getName()
+ " could not be constructed.", e.getCause());
} catch (Exception e) { } catch (Exception e) {
} }
@ -100,6 +169,22 @@ public class CallQueueManager<E> {
return clientBackOffEnabled; return clientBackOffEnabled;
} }
// Based on policy to determine back off current call
boolean shouldBackOff(Schedulable e) {
return scheduler.shouldBackOff(e);
}
void addResponseTime(String name, int priorityLevel, int queueTime,
int processingTime) {
scheduler.addResponseTime(name, priorityLevel, queueTime, processingTime);
}
// This should be only called once per call and cached in the call object
// each getPriorityLevel call will increment the counter for the caller
int getPriorityLevel(Schedulable e) {
return scheduler.getPriorityLevel(e);
}
/** /**
* Insert e into the backing queue or block until we can. * Insert e into the backing queue or block until we can.
* If we block and the queue changes on us, we will insert while the * If we block and the queue changes on us, we will insert while the
@ -136,15 +221,46 @@ public class CallQueueManager<E> {
return takeRef.get().size(); return takeRef.get().size();
} }
/**
* Read the number of levels from the configuration.
* This will affect the FairCallQueue's overall capacity.
* @throws IllegalArgumentException on invalid queue count
*/
@SuppressWarnings("deprecation")
private static int parseNumLevels(String ns, Configuration conf) {
// Fair call queue levels (IPC_CALLQUEUE_PRIORITY_LEVELS_KEY)
// takes priority over the scheduler level key
// (IPC_SCHEDULER_PRIORITY_LEVELS_KEY)
int retval = conf.getInt(ns + "." +
FairCallQueue.IPC_CALLQUEUE_PRIORITY_LEVELS_KEY, 0);
if (retval == 0) { // No FCQ priority level configured
retval = conf.getInt(ns + "." +
CommonConfigurationKeys.IPC_SCHEDULER_PRIORITY_LEVELS_KEY,
CommonConfigurationKeys.IPC_SCHEDULER_PRIORITY_LEVELS_DEFAULT_KEY);
} else {
LOG.warn(ns + "." + FairCallQueue.IPC_CALLQUEUE_PRIORITY_LEVELS_KEY +
" is deprecated. Please use " + ns + "." +
CommonConfigurationKeys.IPC_SCHEDULER_PRIORITY_LEVELS_KEY + ".");
}
if(retval < 1) {
throw new IllegalArgumentException("numLevels must be at least 1");
}
return retval;
}
/** /**
* Replaces active queue with the newly requested one and transfers * Replaces active queue with the newly requested one and transfers
* all calls to the newQ before returning. * all calls to the newQ before returning.
*/ */
public synchronized void swapQueue( public synchronized void swapQueue(
Class<? extends RpcScheduler> schedulerClass,
Class<? extends BlockingQueue<E>> queueClassToUse, int maxSize, Class<? extends BlockingQueue<E>> queueClassToUse, int maxSize,
String ns, Configuration conf) { String ns, Configuration conf) {
BlockingQueue<E> newQ = createCallQueueInstance(queueClassToUse, maxSize, int priorityLevels = parseNumLevels(ns, conf);
ns, conf); RpcScheduler newScheduler = createScheduler(schedulerClass, priorityLevels,
ns, conf);
BlockingQueue<E> newQ = createCallQueueInstance(queueClassToUse,
priorityLevels, maxSize, ns, conf);
// Our current queue becomes the old queue // Our current queue becomes the old queue
BlockingQueue<E> oldQ = putRef.get(); BlockingQueue<E> oldQ = putRef.get();
@ -158,6 +274,8 @@ public class CallQueueManager<E> {
// Swap takeRef to handle new calls
takeRef.set(newQ);
+ this.scheduler = newScheduler;
LOG.info("Old Queue: " + stringRepr(oldQ) + ", " +
"Replacement: " + stringRepr(newQ));
}
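The scheduler and queue classes above are instantiated from per-namespace, per-port configuration. The sketch below is an illustration built on assumptions: the key fragments (callqueue.impl, scheduler.impl, scheduler.priority.levels, backoff.enable) come from the constants in this diff, but the fully spelled "ipc.8020.*" names and the FairCallQueue/DecayRpcScheduler pairing are assembled by analogy with the IPC_NAMESPACE Javadoc rather than copied from shipped defaults.

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical per-port IPC scheduling configuration for a service on port 8020.
    // Key fragments follow the constants in this commit; the full key spellings
    // and the chosen implementations are assumptions.
    public class CallQueueConfigSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        conf.set("ipc.8020.callqueue.impl", "org.apache.hadoop.ipc.FairCallQueue");
        conf.set("ipc.8020.scheduler.impl", "org.apache.hadoop.ipc.DecayRpcScheduler");
        conf.setInt("ipc.8020.scheduler.priority.levels", 4);
        conf.setBoolean("ipc.8020.backoff.enable", true);
        conf.forEach(entry -> System.out.println(entry.getKey() + "=" + entry.getValue()));
      }
    }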


@ -62,6 +62,7 @@ import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
+ import org.apache.hadoop.classification.InterfaceStability.Unstable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
@ -96,6 +97,7 @@ import org.apache.htrace.core.Tracer;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
+ import com.google.common.util.concurrent.AbstractFuture;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import com.google.protobuf.CodedOutputStream;
@ -107,7 +109,7 @@ import com.google.protobuf.CodedOutputStream;
*/
@InterfaceAudience.LimitedPrivate(value = { "Common", "HDFS", "MapReduce", "Yarn" })
@InterfaceStability.Evolving
- public class Client {
+ public class Client implements AutoCloseable {
public static final Log LOG = LogFactory.getLog(Client.class);
@ -116,6 +118,20 @@ public class Client {
private static final ThreadLocal<Integer> callId = new ThreadLocal<Integer>(); private static final ThreadLocal<Integer> callId = new ThreadLocal<Integer>();
private static final ThreadLocal<Integer> retryCount = new ThreadLocal<Integer>(); private static final ThreadLocal<Integer> retryCount = new ThreadLocal<Integer>();
private static final ThreadLocal<Future<?>> returnValue = new ThreadLocal<>();
private static final ThreadLocal<Boolean> asynchronousMode =
new ThreadLocal<Boolean>() {
@Override
protected Boolean initialValue() {
return false;
}
};
@SuppressWarnings("unchecked")
@Unstable
public static <T> Future<T> getReturnValue() {
return (Future<T>) returnValue.get();
}
/** Set call id and retry count for the next call. */ /** Set call id and retry count for the next call. */
public static void setCallIdAndRetryCount(int cid, int rc) { public static void setCallIdAndRetryCount(int cid, int rc) {
@ -239,14 +255,33 @@ public class Client {
* *
* @param conf Configuration * @param conf Configuration
* @return the timeout period in milliseconds. -1 if no timeout value is set * @return the timeout period in milliseconds. -1 if no timeout value is set
* @deprecated use {@link #getRpcTimeout(Configuration)} instead
*/ */
@Deprecated
final public static int getTimeout(Configuration conf) { final public static int getTimeout(Configuration conf) {
int timeout = getRpcTimeout(conf);
if (timeout > 0) {
return timeout;
}
if (!conf.getBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY, if (!conf.getBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY,
CommonConfigurationKeys.IPC_CLIENT_PING_DEFAULT)) { CommonConfigurationKeys.IPC_CLIENT_PING_DEFAULT)) {
return getPingInterval(conf); return getPingInterval(conf);
} }
return -1; return -1;
} }
/**
* The time after which a RPC will timeout.
*
* @param conf Configuration
* @return the timeout period in milliseconds.
*/
public static final int getRpcTimeout(Configuration conf) {
int timeout =
conf.getInt(CommonConfigurationKeys.IPC_CLIENT_RPC_TIMEOUT_KEY,
CommonConfigurationKeys.IPC_CLIENT_RPC_TIMEOUT_DEFAULT);
return (timeout < 0) ? 0 : timeout;
}
/** /**
* set the connection timeout value in configuration * set the connection timeout value in configuration
* *
@ -386,7 +421,7 @@ public class Client {
private Socket socket = null; // connected socket private Socket socket = null; // connected socket
private DataInputStream in; private DataInputStream in;
private DataOutputStream out; private DataOutputStream out;
private int rpcTimeout; private final int rpcTimeout;
private int maxIdleTime; //connections will be culled if it was idle for private int maxIdleTime; //connections will be culled if it was idle for
//maxIdleTime msecs //maxIdleTime msecs
private final RetryPolicy connectionRetryPolicy; private final RetryPolicy connectionRetryPolicy;
@ -394,8 +429,9 @@ public class Client {
private int maxRetriesOnSocketTimeouts; private int maxRetriesOnSocketTimeouts;
private final boolean tcpNoDelay; // if T then disable Nagle's Algorithm private final boolean tcpNoDelay; // if T then disable Nagle's Algorithm
private final boolean tcpLowLatency; // if T then use low-delay QoS private final boolean tcpLowLatency; // if T then use low-delay QoS
private boolean doPing; //do we need to send ping message private final boolean doPing; //do we need to send ping message
private int pingInterval; // how often sends ping to the server in msecs private final int pingInterval; // how often sends ping to the server
private final int soTimeout; // used by ipc ping and rpc timeout
private ByteArrayOutputStream pingRequest; // ping message private ByteArrayOutputStream pingRequest; // ping message
// currently active calls // currently active calls
@ -434,6 +470,14 @@ public class Client {
pingHeader.writeDelimitedTo(pingRequest); pingHeader.writeDelimitedTo(pingRequest);
} }
this.pingInterval = remoteId.getPingInterval(); this.pingInterval = remoteId.getPingInterval();
if (rpcTimeout > 0) {
// effective rpc timeout is rounded up to multiple of pingInterval
// if pingInterval < rpcTimeout.
this.soTimeout = (doPing && pingInterval < rpcTimeout) ?
pingInterval : rpcTimeout;
} else {
this.soTimeout = pingInterval;
}
this.serviceClass = serviceClass; this.serviceClass = serviceClass;
if (LOG.isDebugEnabled()) { if (LOG.isDebugEnabled()) {
LOG.debug("The ping interval is " + this.pingInterval + " ms."); LOG.debug("The ping interval is " + this.pingInterval + " ms.");
@ -484,12 +528,12 @@ public class Client {
/* Process timeout exception
* if the connection is not going to be closed or
- * is not configured to have a RPC timeout, send a ping.
- * (if rpcTimeout is not set to be 0, then RPC should timeout.
- * otherwise, throw the timeout exception.
+ * the RPC is not timed out yet, send a ping.
*/
- private void handleTimeout(SocketTimeoutException e) throws IOException {
- if (shouldCloseConnection.get() || !running.get() || rpcTimeout > 0) {
+ private void handleTimeout(SocketTimeoutException e, int waiting)
+ throws IOException {
+ if (shouldCloseConnection.get() || !running.get() ||
+ (0 < rpcTimeout && rpcTimeout <= waiting)) {
throw e;
} else {
sendPing();
@ -503,11 +547,13 @@ public class Client {
*/ */
@Override @Override
public int read() throws IOException { public int read() throws IOException {
int waiting = 0;
do { do {
try { try {
return super.read(); return super.read();
} catch (SocketTimeoutException e) { } catch (SocketTimeoutException e) {
handleTimeout(e); waiting += soTimeout;
handleTimeout(e, waiting);
} }
} while (true); } while (true);
} }
@ -520,11 +566,13 @@ public class Client {
*/ */
@Override @Override
public int read(byte[] buf, int off, int len) throws IOException { public int read(byte[] buf, int off, int len) throws IOException {
int waiting = 0;
do { do {
try { try {
return super.read(buf, off, len); return super.read(buf, off, len);
} catch (SocketTimeoutException e) { } catch (SocketTimeoutException e) {
handleTimeout(e); waiting += soTimeout;
handleTimeout(e, waiting);
} }
} while (true); } while (true);
} }
@ -632,10 +680,7 @@ public class Client {
} }
NetUtils.connect(this.socket, server, connectionTimeout); NetUtils.connect(this.socket, server, connectionTimeout);
if (rpcTimeout > 0) { this.socket.setSoTimeout(soTimeout);
pingInterval = rpcTimeout; // rpcTimeout overwrites pingInterval
}
this.socket.setSoTimeout(pingInterval);
return; return;
} catch (ConnectTimeoutException toe) { } catch (ConnectTimeoutException toe) {
/* Check for an address change and update the local reference. /* Check for an address change and update the local reference.
@ -1325,8 +1370,8 @@ public class Client {
ConnectionId remoteId, int serviceClass, ConnectionId remoteId, int serviceClass,
AtomicBoolean fallbackToSimpleAuth) throws IOException { AtomicBoolean fallbackToSimpleAuth) throws IOException {
final Call call = createCall(rpcKind, rpcRequest); final Call call = createCall(rpcKind, rpcRequest);
Connection connection = getConnection(remoteId, call, serviceClass, final Connection connection = getConnection(remoteId, call, serviceClass,
fallbackToSimpleAuth); fallbackToSimpleAuth);
try { try {
connection.sendRpcRequest(call); // send the rpc request connection.sendRpcRequest(call); // send the rpc request
} catch (RejectedExecutionException e) { } catch (RejectedExecutionException e) {
@ -1337,6 +1382,51 @@ public class Client {
throw new IOException(e); throw new IOException(e);
} }
if (isAsynchronousMode()) {
Future<Writable> returnFuture = new AbstractFuture<Writable>() {
@Override
public Writable get() throws InterruptedException, ExecutionException {
try {
set(getRpcResponse(call, connection));
} catch (IOException ie) {
setException(ie);
}
return super.get();
}
};
returnValue.set(returnFuture);
return null;
} else {
return getRpcResponse(call, connection);
}
}
/**
* Check if RPC is in asynchronous mode or not.
*
* @returns true, if RPC is in asynchronous mode, otherwise false for
* synchronous mode.
*/
@Unstable
static boolean isAsynchronousMode() {
return asynchronousMode.get();
}
/**
* Set RPC to asynchronous or synchronous mode.
*
* @param async
* true, RPC will be in asynchronous mode, otherwise false for
* synchronous mode
*/
@Unstable
public static void setAsynchronousMode(boolean async) {
asynchronousMode.set(async);
}
private Writable getRpcResponse(final Call call, final Connection connection)
throws IOException {
synchronized (call) { synchronized (call) {
while (!call.done) { while (!call.done) {
try { try {
@ -1611,4 +1701,10 @@ public class Client {
public static int nextCallId() {
return callIdCounter.getAndIncrement() & 0x7FFFFFFF;
}
+ @Override
+ @Unstable
+ public void close() throws Exception {
+ stop();
+ }
}
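The thread-local asynchronous mode and the AutoCloseable hook added above can be exercised without issuing an RPC, which is all the hedged sketch below does; the ConnectionId, request Writable, and the call itself are omitted because they need a live server.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.ipc.Client;

    // Hypothetical illustration of the new async switches; no RPC is made here.
    public class AsyncClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(false);
        try (Client client = new Client(LongWritable.class, conf)) {
          Client.setAsynchronousMode(true);   // thread-local flag
          // In async mode, client.call(...) returns null immediately and the
          // response is retrieved later via Client.getReturnValue() on this thread.
          Client.setAsynchronousMode(false);  // back to synchronous behaviour
        } // try-with-resources calls close(), which delegates to stop()
      }
    }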


@ -27,17 +27,21 @@ import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;
+ import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
+ import java.util.concurrent.atomic.AtomicLongArray;
import java.util.concurrent.atomic.AtomicReference;
- import org.apache.commons.logging.Log;
- import org.apache.commons.logging.LogFactory;
+ import com.google.common.util.concurrent.AtomicDoubleArray;
+ import org.apache.commons.lang.exception.ExceptionUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.metrics2.util.MBeans;
import org.codehaus.jackson.map.ObjectMapper;
import com.google.common.annotations.VisibleForTesting;
+ import org.slf4j.Logger;
+ import org.slf4j.LoggerFactory;
/**
* The decay RPC scheduler counts incoming requests in a map, then
@ -49,22 +53,28 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
/** /**
* Period controls how many milliseconds between each decay sweep. * Period controls how many milliseconds between each decay sweep.
*/ */
public static final String IPC_CALLQUEUE_DECAYSCHEDULER_PERIOD_KEY = public static final String IPC_SCHEDULER_DECAYSCHEDULER_PERIOD_KEY =
"decay-scheduler.period-ms";
public static final long IPC_SCHEDULER_DECAYSCHEDULER_PERIOD_DEFAULT =
5000;
@Deprecated
public static final String IPC_FCQ_DECAYSCHEDULER_PERIOD_KEY =
"faircallqueue.decay-scheduler.period-ms"; "faircallqueue.decay-scheduler.period-ms";
public static final long IPC_CALLQUEUE_DECAYSCHEDULER_PERIOD_DEFAULT =
5000L;
/** /**
* Decay factor controls how much each count is suppressed by on each sweep. * Decay factor controls how much each count is suppressed by on each sweep.
* Valid numbers are > 0 and < 1. Decay factor works in tandem with period * Valid numbers are > 0 and < 1. Decay factor works in tandem with period
* to control how long the scheduler remembers an identity. * to control how long the scheduler remembers an identity.
*/ */
public static final String IPC_CALLQUEUE_DECAYSCHEDULER_FACTOR_KEY = public static final String IPC_SCHEDULER_DECAYSCHEDULER_FACTOR_KEY =
"decay-scheduler.decay-factor";
public static final double IPC_SCHEDULER_DECAYSCHEDULER_FACTOR_DEFAULT =
0.5;
@Deprecated
public static final String IPC_FCQ_DECAYSCHEDULER_FACTOR_KEY =
"faircallqueue.decay-scheduler.decay-factor"; "faircallqueue.decay-scheduler.decay-factor";
public static final double IPC_CALLQUEUE_DECAYSCHEDULER_FACTOR_DEFAULT =
0.5;
/** /**
* Thresholds are specified as integer percentages, and specify which usage * Thresholds are specified as integer percentages, and specify which usage
* range each queue will be allocated to. For instance, specifying the list * range each queue will be allocated to. For instance, specifying the list
* 10, 40, 80 * 10, 40, 80
@ -74,15 +84,31 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
* - q1 from 10 up to 40 * - q1 from 10 up to 40
* - q0 otherwise. * - q0 otherwise.
*/ */
public static final String IPC_CALLQUEUE_DECAYSCHEDULER_THRESHOLDS_KEY = public static final String IPC_DECAYSCHEDULER_THRESHOLDS_KEY =
"faircallqueue.decay-scheduler.thresholds"; "decay-scheduler.thresholds";
@Deprecated
public static final String IPC_FCQ_DECAYSCHEDULER_THRESHOLDS_KEY =
"faircallqueue.decay-scheduler.thresholds";
// Specifies the identity to use when the IdentityProvider cannot handle // Specifies the identity to use when the IdentityProvider cannot handle
// a schedulable. // a schedulable.
public static final String DECAYSCHEDULER_UNKNOWN_IDENTITY = public static final String DECAYSCHEDULER_UNKNOWN_IDENTITY =
"IdentityProvider.Unknown"; "IdentityProvider.Unknown";
public static final Log LOG = LogFactory.getLog(DecayRpcScheduler.class); public static final String
IPC_DECAYSCHEDULER_BACKOFF_RESPONSETIME_ENABLE_KEY =
"decay-scheduler.backoff.responsetime.enable";
public static final Boolean
IPC_DECAYSCHEDULER_BACKOFF_RESPONSETIME_ENABLE_DEFAULT = false;
// Specifies the average response time (ms) thresholds of each
// level to trigger backoff
public static final String
IPC_DECAYSCHEDULER_BACKOFF_RESPONSETIME_THRESHOLDS_KEY =
"decay-scheduler.backoff.responsetime.thresholds";
public static final Logger LOG =
LoggerFactory.getLogger(DecayRpcScheduler.class);
// Track the number of calls for each schedulable identity // Track the number of calls for each schedulable identity
private final ConcurrentHashMap<Object, AtomicLong> callCounts = private final ConcurrentHashMap<Object, AtomicLong> callCounts =
@ -91,6 +117,14 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
// Should be the sum of all AtomicLongs in callCounts // Should be the sum of all AtomicLongs in callCounts
private final AtomicLong totalCalls = new AtomicLong(); private final AtomicLong totalCalls = new AtomicLong();
// Track total call count and response time in current decay window
private final AtomicLongArray responseTimeCountInCurrWindow;
private final AtomicLongArray responseTimeTotalInCurrWindow;
// Track average response time in previous decay window
private final AtomicDoubleArray responseTimeAvgInLastWindow;
private final AtomicLongArray responseTimeCountInLastWindow;
// Pre-computed scheduling decisions during the decay sweep are // Pre-computed scheduling decisions during the decay sweep are
// atomically swapped in as a read-only map // atomically swapped in as a read-only map
private final AtomicReference<Map<Object, Integer>> scheduleCacheRef = private final AtomicReference<Map<Object, Integer>> scheduleCacheRef =
@ -98,10 +132,12 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
// Tune the behavior of the scheduler // Tune the behavior of the scheduler
private final long decayPeriodMillis; // How long between each tick private final long decayPeriodMillis; // How long between each tick
private final double decayFactor; // nextCount = currentCount / decayFactor private final double decayFactor; // nextCount = currentCount * decayFactor
private final int numQueues; // affects scheduling decisions, from 0 to numQueues - 1 private final int numLevels;
private final double[] thresholds; private final double[] thresholds;
private final IdentityProvider identityProvider; private final IdentityProvider identityProvider;
private final boolean backOffByResponseTimeEnabled;
private final long[] backOffResponseTimeThresholds;
/** /**
* This TimerTask will call decayCurrentCounts until * This TimerTask will call decayCurrentCounts until
@ -132,35 +168,46 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
/** /**
* Create a decay scheduler. * Create a decay scheduler.
* @param numQueues number of queues to schedule for * @param numLevels number of priority levels
* @param ns config prefix, so that we can configure multiple schedulers * @param ns config prefix, so that we can configure multiple schedulers
* in a single instance. * in a single instance.
* @param conf configuration to use. * @param conf configuration to use.
*/ */
public DecayRpcScheduler(int numQueues, String ns, Configuration conf) { public DecayRpcScheduler(int numLevels, String ns, Configuration conf) {
if (numQueues < 1) { if(numLevels < 1) {
throw new IllegalArgumentException("number of queues must be > 0"); throw new IllegalArgumentException("Number of Priority Levels must be " +
"at least 1");
} }
this.numLevels = numLevels;
this.numQueues = numQueues;
this.decayFactor = parseDecayFactor(ns, conf); this.decayFactor = parseDecayFactor(ns, conf);
this.decayPeriodMillis = parseDecayPeriodMillis(ns, conf); this.decayPeriodMillis = parseDecayPeriodMillis(ns, conf);
this.identityProvider = this.parseIdentityProvider(ns, conf); this.identityProvider = this.parseIdentityProvider(ns, conf);
this.thresholds = parseThresholds(ns, conf, numQueues); this.thresholds = parseThresholds(ns, conf, numLevels);
this.backOffByResponseTimeEnabled = parseBackOffByResponseTimeEnabled(ns,
conf);
this.backOffResponseTimeThresholds =
parseBackOffResponseTimeThreshold(ns, conf, numLevels);
// Setup delay timer // Setup delay timer
Timer timer = new Timer(); Timer timer = new Timer();
DecayTask task = new DecayTask(this, timer); DecayTask task = new DecayTask(this, timer);
timer.scheduleAtFixedRate(task, decayPeriodMillis, decayPeriodMillis); timer.scheduleAtFixedRate(task, decayPeriodMillis, decayPeriodMillis);
MetricsProxy prox = MetricsProxy.getInstance(ns); // Setup response time metrics
responseTimeTotalInCurrWindow = new AtomicLongArray(numLevels);
responseTimeCountInCurrWindow = new AtomicLongArray(numLevels);
responseTimeAvgInLastWindow = new AtomicDoubleArray(numLevels);
responseTimeCountInLastWindow = new AtomicLongArray(numLevels);
MetricsProxy prox = MetricsProxy.getInstance(ns, numLevels);
prox.setDelegate(this); prox.setDelegate(this);
} }
// Load configs // Load configs
private IdentityProvider parseIdentityProvider(String ns, Configuration conf) { private IdentityProvider parseIdentityProvider(String ns,
Configuration conf) {
List<IdentityProvider> providers = conf.getInstances( List<IdentityProvider> providers = conf.getInstances(
ns + "." + CommonConfigurationKeys.IPC_CALLQUEUE_IDENTITY_PROVIDER_KEY, ns + "." + CommonConfigurationKeys.IPC_IDENTITY_PROVIDER_KEY,
IdentityProvider.class); IdentityProvider.class);
if (providers.size() < 1) { if (providers.size() < 1) {
@ -174,10 +221,16 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
private static double parseDecayFactor(String ns, Configuration conf) { private static double parseDecayFactor(String ns, Configuration conf) {
double factor = conf.getDouble(ns + "." + double factor = conf.getDouble(ns + "." +
IPC_CALLQUEUE_DECAYSCHEDULER_FACTOR_KEY, IPC_FCQ_DECAYSCHEDULER_FACTOR_KEY, 0.0);
IPC_CALLQUEUE_DECAYSCHEDULER_FACTOR_DEFAULT if (factor == 0.0) {
); factor = conf.getDouble(ns + "." +
IPC_SCHEDULER_DECAYSCHEDULER_FACTOR_KEY,
IPC_SCHEDULER_DECAYSCHEDULER_FACTOR_DEFAULT);
} else if ((factor > 0.0) && (factor < 1)) {
LOG.warn(IPC_FCQ_DECAYSCHEDULER_FACTOR_KEY +
" is deprecated. Please use " +
IPC_SCHEDULER_DECAYSCHEDULER_FACTOR_KEY + ".");
}
if (factor <= 0 || factor >= 1) { if (factor <= 0 || factor >= 1) {
throw new IllegalArgumentException("Decay Factor " + throw new IllegalArgumentException("Decay Factor " +
"must be between 0 and 1"); "must be between 0 and 1");
@ -188,10 +241,17 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
private static long parseDecayPeriodMillis(String ns, Configuration conf) { private static long parseDecayPeriodMillis(String ns, Configuration conf) {
long period = conf.getLong(ns + "." + long period = conf.getLong(ns + "." +
IPC_CALLQUEUE_DECAYSCHEDULER_PERIOD_KEY, IPC_FCQ_DECAYSCHEDULER_PERIOD_KEY,
IPC_CALLQUEUE_DECAYSCHEDULER_PERIOD_DEFAULT 0);
); if (period == 0) {
period = conf.getLong(ns + "." +
IPC_SCHEDULER_DECAYSCHEDULER_PERIOD_KEY,
IPC_SCHEDULER_DECAYSCHEDULER_PERIOD_DEFAULT);
} else if (period > 0) {
LOG.warn((IPC_FCQ_DECAYSCHEDULER_PERIOD_KEY +
" is deprecated. Please use " +
IPC_SCHEDULER_DECAYSCHEDULER_PERIOD_KEY));
}
if (period <= 0) { if (period <= 0) {
throw new IllegalArgumentException("Period millis must be >= 0"); throw new IllegalArgumentException("Period millis must be >= 0");
} }
@ -200,15 +260,24 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
} }
private static double[] parseThresholds(String ns, Configuration conf, private static double[] parseThresholds(String ns, Configuration conf,
int numQueues) { int numLevels) {
int[] percentages = conf.getInts(ns + "." + int[] percentages = conf.getInts(ns + "." +
IPC_CALLQUEUE_DECAYSCHEDULER_THRESHOLDS_KEY); IPC_FCQ_DECAYSCHEDULER_THRESHOLDS_KEY);
if (percentages.length == 0) { if (percentages.length == 0) {
return getDefaultThresholds(numQueues); percentages = conf.getInts(ns + "." + IPC_DECAYSCHEDULER_THRESHOLDS_KEY);
} else if (percentages.length != numQueues-1) { if (percentages.length == 0) {
return getDefaultThresholds(numLevels);
}
} else {
LOG.warn(IPC_FCQ_DECAYSCHEDULER_THRESHOLDS_KEY +
" is deprecated. Please use " +
IPC_DECAYSCHEDULER_THRESHOLDS_KEY);
}
if (percentages.length != numLevels-1) {
throw new IllegalArgumentException("Number of thresholds should be " + throw new IllegalArgumentException("Number of thresholds should be " +
(numQueues-1) + ". Was: " + percentages.length); (numLevels-1) + ". Was: " + percentages.length);
} }
// Convert integer percentages to decimals // Convert integer percentages to decimals
@ -223,14 +292,14 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
/** /**
* Generate default thresholds if user did not specify. Strategy is * Generate default thresholds if user did not specify. Strategy is
* to halve each time, since queue usage tends to be exponential. * to halve each time, since queue usage tends to be exponential.
* So if numQueues is 4, we would generate: double[]{0.125, 0.25, 0.5} * So if numLevels is 4, we would generate: double[]{0.125, 0.25, 0.5}
* which specifies the boundaries between each queue's usage. * which specifies the boundaries between each queue's usage.
* @param numQueues number of queues to compute for * @param numLevels number of levels to compute for
* @return array of boundaries of length numQueues - 1 * @return array of boundaries of length numLevels - 1
*/ */
private static double[] getDefaultThresholds(int numQueues) { private static double[] getDefaultThresholds(int numLevels) {
double[] ret = new double[numQueues - 1]; double[] ret = new double[numLevels - 1];
double div = Math.pow(2, numQueues - 1); double div = Math.pow(2, numLevels - 1);
for (int i = 0; i < ret.length; i++) { for (int i = 0; i < ret.length; i++) {
ret[i] = Math.pow(2, i)/div; ret[i] = Math.pow(2, i)/div;
@ -238,39 +307,89 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
return ret; return ret;
} }
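A self-contained sketch of the halving strategy described above (just the arithmetic, not the class itself), which for four levels yields {0.125, 0.25, 0.5}:

import java.util.Arrays;

public class DefaultThresholdsSketch {
  // Mirror of the strategy above: boundaries at 2^i / 2^(numLevels-1).
  static double[] defaultThresholds(int numLevels) {
    double[] ret = new double[numLevels - 1];
    double div = Math.pow(2, numLevels - 1);
    for (int i = 0; i < ret.length; i++) {
      ret[i] = Math.pow(2, i) / div;
    }
    return ret;
  }

  public static void main(String[] args) {
    System.out.println(Arrays.toString(defaultThresholds(4))); // [0.125, 0.25, 0.5]
  }
}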
private static long[] parseBackOffResponseTimeThreshold(String ns,
Configuration conf, int numLevels) {
long[] responseTimeThresholds = conf.getTimeDurations(ns + "." +
IPC_DECAYSCHEDULER_BACKOFF_RESPONSETIME_THRESHOLDS_KEY,
TimeUnit.MILLISECONDS);
// backoff thresholds not specified
if (responseTimeThresholds.length == 0) {
return getDefaultBackOffResponseTimeThresholds(numLevels);
}
// backoff thresholds specified but not match with the levels
if (responseTimeThresholds.length != numLevels) {
throw new IllegalArgumentException(
"responseTimeThresholds must match with the number of priority " +
"levels");
}
// invalid thresholds
for (long responseTimeThreshold: responseTimeThresholds) {
if (responseTimeThreshold <= 0) {
throw new IllegalArgumentException(
"responseTimeThreshold millis must be > 0");
}
}
return responseTimeThresholds;
}
// 10s for level 0, 20s for level 1, 30s for level 2, ...
private static long[] getDefaultBackOffResponseTimeThresholds(int numLevels) {
long[] ret = new long[numLevels];
for (int i = 0; i < ret.length; i++) {
ret[i] = 10000*(i+1);
}
return ret;
}
private static Boolean parseBackOffByResponseTimeEnabled(String ns,
Configuration conf) {
return conf.getBoolean(ns + "." +
IPC_DECAYSCHEDULER_BACKOFF_RESPONSETIME_ENABLE_KEY,
IPC_DECAYSCHEDULER_BACKOFF_RESPONSETIME_ENABLE_DEFAULT);
}
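A hedged configuration sketch for the two back-off keys above, again assuming an example namespace of ipc.8020. The thresholds are read with Configuration.getTimeDurations, so duration suffixes such as "s" should be accepted.

import org.apache.hadoop.conf.Configuration;

public class BackoffConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("ipc.8020.decay-scheduler.backoff.responsetime.enable", true);
    // One threshold per priority level, highest priority first.
    conf.set("ipc.8020.decay-scheduler.backoff.responsetime.thresholds",
        "10s,20s,30s,40s");
    System.out.println(conf.get(
        "ipc.8020.decay-scheduler.backoff.responsetime.thresholds"));
  }
}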
/** /**
* Decay the stored counts for each user and clean as necessary. * Decay the stored counts for each user and clean as necessary.
* This method should be called periodically in order to keep * This method should be called periodically in order to keep
* counts current. * counts current.
*/ */
private void decayCurrentCounts() { private void decayCurrentCounts() {
long total = 0; try {
Iterator<Map.Entry<Object, AtomicLong>> it = long total = 0;
callCounts.entrySet().iterator(); Iterator<Map.Entry<Object, AtomicLong>> it =
callCounts.entrySet().iterator();
while (it.hasNext()) { while (it.hasNext()) {
Map.Entry<Object, AtomicLong> entry = it.next(); Map.Entry<Object, AtomicLong> entry = it.next();
AtomicLong count = entry.getValue(); AtomicLong count = entry.getValue();
// Compute the next value by reducing it by the decayFactor // Compute the next value by reducing it by the decayFactor
long currentValue = count.get(); long currentValue = count.get();
long nextValue = (long)(currentValue * decayFactor); long nextValue = (long) (currentValue * decayFactor);
total += nextValue; total += nextValue;
count.set(nextValue); count.set(nextValue);
if (nextValue == 0) { if (nextValue == 0) {
// We will clean up unused keys here. An interesting optimization might // We will clean up unused keys here. An interesting optimization
// be to have an upper bound on keyspace in callCounts and only // might be to have an upper bound on keyspace in callCounts and only
// clean once we pass it. // clean once we pass it.
it.remove(); it.remove();
}
} }
// Update the total so that we remain in sync
totalCalls.set(total);
// Now refresh the cache of scheduling decisions
recomputeScheduleCache();
// Update average response time with decay
updateAverageResponseTime(true);
} catch (Exception ex) {
LOG.error("decayCurrentCounts exception: " +
ExceptionUtils.getFullStackTrace(ex));
throw ex;
} }
// Update the total so that we remain in sync
totalCalls.set(total);
// Now refresh the cache of scheduling decisions
recomputeScheduleCache();
} }
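The sweep above can be illustrated with plain Java collections. A minimal sketch, assuming a decay factor of 0.5, showing how heavy callers fade gradually while idle callers are dropped once their count reaches zero:

import java.util.HashMap;
import java.util.Map;

public class DecaySweepSketch {
  public static void main(String[] args) {
    final double decayFactor = 0.5;
    Map<String, Long> counts = new HashMap<>();
    counts.put("alice", 100L);  // heavy caller
    counts.put("bob", 1L);      // occasional caller
    for (int sweep = 1; sweep <= 3; sweep++) {
      counts.replaceAll((user, c) -> (long) (c * decayFactor));
      counts.values().removeIf(c -> c == 0);  // forget idle identities
      System.out.println("after sweep " + sweep + ": " + counts);
    }
    // alice decays 100 -> 50 -> 25 -> 12; bob is forgotten after the first sweep.
  }
}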
/** /**
@ -324,7 +443,7 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
/** /**
* Given the number of occurrences, compute a scheduling decision. * Given the number of occurrences, compute a scheduling decision.
* @param occurrences how many occurrences * @param occurrences how many occurrences
* @return scheduling decision from 0 to numQueues - 1 * @return scheduling decision from 0 to numLevels - 1
*/ */
private int computePriorityLevel(long occurrences) { private int computePriorityLevel(long occurrences) {
long totalCallSnapshot = totalCalls.get(); long totalCallSnapshot = totalCalls.get();
@ -334,14 +453,14 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
proportion = (double) occurrences / totalCallSnapshot; proportion = (double) occurrences / totalCallSnapshot;
} }
// Start with low priority queues, since they will be most common // Start with low priority levels, since they will be most common
for(int i = (numQueues - 1); i > 0; i--) { for(int i = (numLevels - 1); i > 0; i--) {
if (proportion >= this.thresholds[i - 1]) { if (proportion >= this.thresholds[i - 1]) {
return i; // We've found our queue number return i; // We've found our level number
} }
} }
// If we get this far, we're at queue 0 // If we get this far, we're at level 0
return 0; return 0;
} }
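A standalone sketch of the decision above: walk the thresholds from the most penalized level down and return the first level whose boundary the caller's traffic proportion reaches.

public class PriorityLevelSketch {
  static int computePriorityLevel(double proportion, double[] thresholds) {
    int numLevels = thresholds.length + 1;
    for (int i = numLevels - 1; i > 0; i--) {
      if (proportion >= thresholds[i - 1]) {
        return i;  // heavier callers land in higher (lower-priority) levels
      }
    }
    return 0;
  }

  public static void main(String[] args) {
    double[] thresholds = {0.125, 0.25, 0.5};  // defaults for four levels
    System.out.println(computePriorityLevel(0.60, thresholds)); // 3
    System.out.println(computePriorityLevel(0.30, thresholds)); // 2
    System.out.println(computePriorityLevel(0.05, thresholds)); // 0
  }
}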
@ -349,7 +468,7 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
* Returns the priority level for a given identity by first trying the cache, * Returns the priority level for a given identity by first trying the cache,
* then computing it. * then computing it.
* @param identity an object responding to toString and hashCode * @param identity an object responding to toString and hashCode
* @return integer scheduling decision from 0 to numQueues - 1 * @return integer scheduling decision from 0 to numLevels - 1
*/ */
private int cachedOrComputedPriorityLevel(Object identity) { private int cachedOrComputedPriorityLevel(Object identity) {
try { try {
@ -360,22 +479,29 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
if (scheduleCache != null) { if (scheduleCache != null) {
Integer priority = scheduleCache.get(identity); Integer priority = scheduleCache.get(identity);
if (priority != null) { if (priority != null) {
LOG.debug("Cache priority for: {} with priority: {}", identity,
priority);
return priority; return priority;
} }
} }
// Cache was no good, compute it // Cache was no good, compute it
return computePriorityLevel(occurrences); int priority = computePriorityLevel(occurrences);
LOG.debug("compute priority for " + identity + " priority " + priority);
return priority;
} catch (InterruptedException ie) { } catch (InterruptedException ie) {
LOG.warn("Caught InterruptedException, returning low priority queue"); LOG.warn("Caught InterruptedException, returning low priority level");
return numQueues - 1; LOG.debug("Fallback priority for: {} with priority: {}", identity,
numLevels - 1);
return numLevels - 1;
} }
} }
/** /**
* Compute the appropriate priority for a schedulable based on past requests. * Compute the appropriate priority for a schedulable based on past requests.
* @param obj the schedulable obj to query and remember * @param obj the schedulable obj to query and remember
* @return the queue index which we recommend scheduling in * @return the level index which we recommend scheduling in
*/ */
@Override @Override
public int getPriorityLevel(Schedulable obj) { public int getPriorityLevel(Schedulable obj) {
@ -389,6 +515,73 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
return cachedOrComputedPriorityLevel(identity); return cachedOrComputedPriorityLevel(identity);
} }
@Override
public boolean shouldBackOff(Schedulable obj) {
Boolean backOff = false;
if (backOffByResponseTimeEnabled) {
int priorityLevel = obj.getPriorityLevel();
if (LOG.isDebugEnabled()) {
double[] responseTimes = getAverageResponseTime();
LOG.debug("Current Caller: {} Priority: {} ",
obj.getUserGroupInformation().getUserName(),
obj.getPriorityLevel());
for (int i = 0; i < numLevels; i++) {
LOG.debug("Queue: {} responseTime: {} backoffThreshold: {}", i,
responseTimes[i], backOffResponseTimeThresholds[i]);
}
}
// High priority rpc over threshold triggers back off of low priority rpc
for (int i = 0; i < priorityLevel + 1; i++) {
if (responseTimeAvgInLastWindow.get(i) >
backOffResponseTimeThresholds[i]) {
backOff = true;
break;
}
}
}
return backOff;
}
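A reduced sketch of the rule above: a call at priority p is asked to back off if any level at or above its own priority (index <= p) is already averaging above its threshold, so degradation of high-priority traffic pushes back low-priority traffic first.

public class BackoffRuleSketch {
  static boolean shouldBackOff(int priorityLevel, double[] avgResponseTimeMs,
      long[] thresholdsMs) {
    for (int i = 0; i <= priorityLevel; i++) {
      if (avgResponseTimeMs[i] > thresholdsMs[i]) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    long[] thresholds = {10000L, 20000L, 30000L, 40000L}; // defaults: 10s, 20s, ...
    double[] observed = {1000.0, 2000.0, 35000.0, 500.0}; // level 2 is slow
    System.out.println(shouldBackOff(1, observed, thresholds)); // false
    System.out.println(shouldBackOff(3, observed, thresholds)); // true
  }
}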
@Override
public void addResponseTime(String name, int priorityLevel, int queueTime,
int processingTime) {
responseTimeCountInCurrWindow.getAndIncrement(priorityLevel);
responseTimeTotalInCurrWindow.getAndAdd(priorityLevel,
queueTime+processingTime);
if (LOG.isDebugEnabled()) {
LOG.debug("addResponseTime for call: {} priority: {} queueTime: {} " +
"processingTime: {} ", name, priorityLevel, queueTime,
processingTime);
}
}
// Update the cached average response time at the end of decay window
void updateAverageResponseTime(boolean enableDecay) {
for (int i = 0; i < numLevels; i++) {
double averageResponseTime = 0;
long totalResponseTime = responseTimeTotalInCurrWindow.get(i);
long responseTimeCount = responseTimeCountInCurrWindow.get(i);
if (responseTimeCount > 0) {
averageResponseTime = (double) totalResponseTime / responseTimeCount;
}
final double lastAvg = responseTimeAvgInLastWindow.get(i);
if (enableDecay && lastAvg > 0.0) {
final double decayed = decayFactor * lastAvg + averageResponseTime;
responseTimeAvgInLastWindow.set(i, decayed);
} else {
responseTimeAvgInLastWindow.set(i, averageResponseTime);
}
responseTimeCountInLastWindow.set(i, responseTimeCount);
if (LOG.isDebugEnabled()) {
LOG.debug("updateAverageResponseTime queue: {} Average: {} Count: {}",
i, averageResponseTime, responseTimeCount);
}
// Reset for next decay window
responseTimeTotalInCurrWindow.set(i, 0);
responseTimeCountInCurrWindow.set(i, 0);
}
}
// For testing // For testing
@VisibleForTesting @VisibleForTesting
public double getDecayFactor() { return decayFactor; } public double getDecayFactor() { return decayFactor; }
@ -429,16 +622,21 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
// Weakref for delegate, so we don't retain it forever if it can be GC'd // Weakref for delegate, so we don't retain it forever if it can be GC'd
private WeakReference<DecayRpcScheduler> delegate; private WeakReference<DecayRpcScheduler> delegate;
private double[] averageResponseTimeDefault;
private long[] callCountInLastWindowDefault;
private MetricsProxy(String namespace) { private MetricsProxy(String namespace, int numLevels) {
averageResponseTimeDefault = new double[numLevels];
callCountInLastWindowDefault = new long[numLevels];
MBeans.register(namespace, "DecayRpcScheduler", this); MBeans.register(namespace, "DecayRpcScheduler", this);
} }
public static synchronized MetricsProxy getInstance(String namespace) { public static synchronized MetricsProxy getInstance(String namespace,
int numLevels) {
MetricsProxy mp = INSTANCES.get(namespace); MetricsProxy mp = INSTANCES.get(namespace);
if (mp == null) { if (mp == null) {
// We must create one // We must create one
mp = new MetricsProxy(namespace); mp = new MetricsProxy(namespace, numLevels);
INSTANCES.put(namespace, mp); INSTANCES.put(namespace, mp);
} }
return mp; return mp;
@ -487,6 +685,25 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
return scheduler.getTotalCallVolume(); return scheduler.getTotalCallVolume();
} }
} }
@Override
public double[] getAverageResponseTime() {
DecayRpcScheduler scheduler = delegate.get();
if (scheduler == null) {
return averageResponseTimeDefault;
} else {
return scheduler.getAverageResponseTime();
}
}
public long[] getResponseTimeCountInLastWindow() {
DecayRpcScheduler scheduler = delegate.get();
if (scheduler == null) {
return callCountInLastWindowDefault;
} else {
return scheduler.getResponseTimeCountInLastWindow();
}
}
} }
public int getUniqueIdentityCount() { public int getUniqueIdentityCount() {
@ -497,6 +714,23 @@ public class DecayRpcScheduler implements RpcScheduler, DecayRpcSchedulerMXBean
return totalCalls.get(); return totalCalls.get();
} }
public long[] getResponseTimeCountInLastWindow() {
long[] ret = new long[responseTimeCountInLastWindow.length()];
for (int i = 0; i < responseTimeCountInLastWindow.length(); i++) {
ret[i] = responseTimeCountInLastWindow.get(i);
}
return ret;
}
@Override
public double[] getAverageResponseTime() {
double[] ret = new double[responseTimeAvgInLastWindow.length()];
for (int i = 0; i < responseTimeAvgInLastWindow.length(); i++) {
ret[i] = responseTimeAvgInLastWindow.get(i);
}
return ret;
}
public String getSchedulingDecisionSummary() { public String getSchedulingDecisionSummary() {
Map<Object, Integer> decisions = scheduleCacheRef.get(); Map<Object, Integer> decisions = scheduleCacheRef.get();
if (decisions == null) { if (decisions == null) {
View File
@ -27,4 +27,6 @@ public interface DecayRpcSchedulerMXBean {
String getCallVolumeSummary(); String getCallVolumeSummary();
int getUniqueIdentityCount(); int getUniqueIdentityCount();
long getTotalCallVolume(); long getTotalCallVolume();
double[] getAverageResponseTime();
long[] getResponseTimeCountInLastWindow();
} }
View File
@ -0,0 +1,45 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.ipc;
import org.apache.hadoop.conf.Configuration;
/**
* No op default RPC scheduler.
*/
public class DefaultRpcScheduler implements RpcScheduler {
@Override
public int getPriorityLevel(Schedulable obj) {
return 0;
}
@Override
public boolean shouldBackOff(Schedulable obj) {
return false;
}
@Override
public void addResponseTime(String name, int priorityLevel, int queueTime,
int processingTime) {
}
public DefaultRpcScheduler(int priorityLevels, String namespace,
Configuration conf) {
}
}
View File
@ -44,8 +44,9 @@ import org.apache.hadoop.metrics2.util.MBeans;
*/ */
public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E> public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
implements BlockingQueue<E> { implements BlockingQueue<E> {
// Configuration Keys @Deprecated
public static final int IPC_CALLQUEUE_PRIORITY_LEVELS_DEFAULT = 4; public static final int IPC_CALLQUEUE_PRIORITY_LEVELS_DEFAULT = 4;
@Deprecated
public static final String IPC_CALLQUEUE_PRIORITY_LEVELS_KEY = public static final String IPC_CALLQUEUE_PRIORITY_LEVELS_KEY =
"faircallqueue.priority-levels"; "faircallqueue.priority-levels";
@ -66,9 +67,6 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
} }
} }
/* Scheduler picks which queue to place in */
private RpcScheduler scheduler;
/* Multiplexer picks which queue to draw from */ /* Multiplexer picks which queue to draw from */
private RpcMultiplexer multiplexer; private RpcMultiplexer multiplexer;
@ -83,8 +81,13 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
* Notes: the FairCallQueue has no fixed capacity. Rather, it has a minimum * Notes: the FairCallQueue has no fixed capacity. Rather, it has a minimum
* capacity of `capacity` and a maximum capacity of `capacity * number_queues` * capacity of `capacity` and a maximum capacity of `capacity * number_queues`
*/ */
public FairCallQueue(int capacity, String ns, Configuration conf) { public FairCallQueue(int priorityLevels, int capacity, String ns,
int numQueues = parseNumQueues(ns, conf); Configuration conf) {
if(priorityLevels < 1) {
throw new IllegalArgumentException("Number of Priority Levels must be " +
"at least 1");
}
int numQueues = priorityLevels;
LOG.info("FairCallQueue is in use with " + numQueues + " queues."); LOG.info("FairCallQueue is in use with " + numQueues + " queues.");
this.queues = new ArrayList<BlockingQueue<E>>(numQueues); this.queues = new ArrayList<BlockingQueue<E>>(numQueues);
@ -95,28 +98,12 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
this.overflowedCalls.add(new AtomicLong(0)); this.overflowedCalls.add(new AtomicLong(0));
} }
this.scheduler = new DecayRpcScheduler(numQueues, ns, conf);
this.multiplexer = new WeightedRoundRobinMultiplexer(numQueues, ns, conf); this.multiplexer = new WeightedRoundRobinMultiplexer(numQueues, ns, conf);
// Make this the active source of metrics // Make this the active source of metrics
MetricsProxy mp = MetricsProxy.getInstance(ns); MetricsProxy mp = MetricsProxy.getInstance(ns);
mp.setDelegate(this); mp.setDelegate(this);
} }
/**
* Read the number of queues from the configuration.
* This will affect the FairCallQueue's overall capacity.
* @throws IllegalArgumentException on invalid queue count
*/
private static int parseNumQueues(String ns, Configuration conf) {
int retval = conf.getInt(ns + "." + IPC_CALLQUEUE_PRIORITY_LEVELS_KEY,
IPC_CALLQUEUE_PRIORITY_LEVELS_DEFAULT);
if(retval < 1) {
throw new IllegalArgumentException("numQueues must be at least 1");
}
return retval;
}
/** /**
* Returns the first non-empty queue with equal or lesser priority * Returns the first non-empty queue with equal or lesser priority
* than <i>startIdx</i>. Wraps around, searching a maximum of N * than <i>startIdx</i>. Wraps around, searching a maximum of N
@ -144,7 +131,7 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
/** /**
* Put and offer follow the same pattern: * Put and offer follow the same pattern:
* 1. Get a priorityLevel from the scheduler * 1. Get the assigned priorityLevel from the call by scheduler
* 2. Get the nth sub-queue matching this priorityLevel * 2. Get the nth sub-queue matching this priorityLevel
* 3. delegate the call to this sub-queue. * 3. delegate the call to this sub-queue.
* *
@ -154,7 +141,7 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
*/ */
@Override @Override
public void put(E e) throws InterruptedException { public void put(E e) throws InterruptedException {
int priorityLevel = scheduler.getPriorityLevel(e); int priorityLevel = e.getPriorityLevel();
final int numLevels = this.queues.size(); final int numLevels = this.queues.size();
while (true) { while (true) {
@ -185,7 +172,7 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
@Override @Override
public boolean offer(E e, long timeout, TimeUnit unit) public boolean offer(E e, long timeout, TimeUnit unit)
throws InterruptedException { throws InterruptedException {
int priorityLevel = scheduler.getPriorityLevel(e); int priorityLevel = e.getPriorityLevel();
BlockingQueue<E> q = this.queues.get(priorityLevel); BlockingQueue<E> q = this.queues.get(priorityLevel);
boolean ret = q.offer(e, timeout, unit); boolean ret = q.offer(e, timeout, unit);
@ -196,7 +183,7 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
@Override @Override
public boolean offer(E e) { public boolean offer(E e) {
int priorityLevel = scheduler.getPriorityLevel(e); int priorityLevel = e.getPriorityLevel();
BlockingQueue<E> q = this.queues.get(priorityLevel); BlockingQueue<E> q = this.queues.get(priorityLevel);
boolean ret = q.offer(e); boolean ret = q.offer(e);
@ -436,12 +423,6 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
return calls; return calls;
} }
// For testing
@VisibleForTesting
public void setScheduler(RpcScheduler newScheduler) {
this.scheduler = newScheduler;
}
@VisibleForTesting @VisibleForTesting
public void setMultiplexer(RpcMultiplexer newMux) { public void setMultiplexer(RpcMultiplexer newMux) {
this.multiplexer = newMux; this.multiplexer = newMux;
View File
@ -654,13 +654,7 @@ public class ProtobufRpcEngine implements RpcEngine {
String detailedMetricsName = (exception == null) ? String detailedMetricsName = (exception == null) ?
methodName : methodName :
exception.getClass().getSimpleName(); exception.getClass().getSimpleName();
server.rpcMetrics.addRpcQueueTime(qTime); server.updateMetrics(detailedMetricsName, qTime, processingTime);
server.rpcMetrics.addRpcProcessingTime(processingTime);
server.rpcDetailedMetrics.addProcessingTime(detailedMetricsName,
processingTime);
if (server.isLogSlowRPC()) {
server.logSlowRpcCalls(methodName, processingTime);
}
} }
return new RpcResponseWrapper(result); return new RpcResponseWrapper(result);
} }
View File
@ -19,11 +19,17 @@
package org.apache.hadoop.ipc; package org.apache.hadoop.ipc;
/** /**
* Implement this interface to be used for RPC scheduling in the fair call queues. * Implement this interface to be used for RPC scheduling and backoff.
*
*/ */
public interface RpcScheduler { public interface RpcScheduler {
/** /**
* Returns priority level greater than zero as a hint for scheduling. * Returns priority level greater than zero as a hint for scheduling.
*/ */
int getPriorityLevel(Schedulable obj); int getPriorityLevel(Schedulable obj);
boolean shouldBackOff(Schedulable obj);
void addResponseTime(String name, int priorityLevel, int queueTime,
int processingTime);
} }
View File
@ -18,11 +18,8 @@
package org.apache.hadoop.ipc; package org.apache.hadoop.ipc;
import java.nio.ByteBuffer;
import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.io.Writable;
/** /**
* Interface which allows extracting information necessary to * Interface which allows extracting information necessary to
@ -31,4 +28,6 @@ import org.apache.hadoop.io.Writable;
@InterfaceAudience.Private @InterfaceAudience.Private
public interface Schedulable { public interface Schedulable {
public UserGroupInformation getUserGroupInformation(); public UserGroupInformation getUserGroupInformation();
int getPriorityLevel();
} }
View File
@ -396,6 +396,15 @@ public abstract class Server {
return CurCall.get() != null; return CurCall.get() != null;
} }
/**
* Return the priority level assigned by the call queue to an RPC.
* Returns 0 if no priority is assigned.
*/
public static int getPriorityLevel() {
Call call = CurCall.get();
return call != null? call.getPriorityLevel() : 0;
}
private String bindAddress; private String bindAddress;
private int port; // port we listen on private int port; // port we listen on
private int handlerCount; // number of handler threads private int handlerCount; // number of handler threads
@ -482,6 +491,18 @@ public abstract class Server {
} }
} }
void updateMetrics(String name, int queueTime, int processingTime) {
rpcMetrics.addRpcQueueTime(queueTime);
rpcMetrics.addRpcProcessingTime(processingTime);
rpcDetailedMetrics.addProcessingTime(name, processingTime);
callQueue.addResponseTime(name, getPriorityLevel(), queueTime,
processingTime);
if (isLogSlowRPC()) {
logSlowRpcCalls(name, processingTime);
}
}
/** /**
* A convenience method to bind to a given address and report * A convenience method to bind to a given address and report
* better exceptions if the address is not a valid host. * better exceptions if the address is not a valid host.
@ -578,6 +599,10 @@ public abstract class Server {
return serviceAuthorizationManager; return serviceAuthorizationManager;
} }
private String getQueueClassPrefix() {
return CommonConfigurationKeys.IPC_NAMESPACE + "." + port;
}
static Class<? extends BlockingQueue<Call>> getQueueClass( static Class<? extends BlockingQueue<Call>> getQueueClass(
String prefix, Configuration conf) { String prefix, Configuration conf) {
String name = prefix + "." + CommonConfigurationKeys.IPC_CALLQUEUE_IMPL_KEY; String name = prefix + "." + CommonConfigurationKeys.IPC_CALLQUEUE_IMPL_KEY;
@ -585,8 +610,29 @@ public abstract class Server {
return CallQueueManager.convertQueueClass(queueClass, Call.class); return CallQueueManager.convertQueueClass(queueClass, Call.class);
} }
private String getQueueClassPrefix() { static Class<? extends RpcScheduler> getSchedulerClass(
return CommonConfigurationKeys.IPC_CALLQUEUE_NAMESPACE + "." + port; String prefix, Configuration conf) {
String schedulerKeyname = prefix + "." + CommonConfigurationKeys
.IPC_SCHEDULER_IMPL_KEY;
Class<?> schedulerClass = conf.getClass(schedulerKeyname, null);
// Patch the configuration for legacy fcq configuration that does not have
// a separate scheduler setting
if (schedulerClass == null) {
String queueKeyName = prefix + "." + CommonConfigurationKeys
.IPC_CALLQUEUE_IMPL_KEY;
Class<?> queueClass = conf.getClass(queueKeyName, null);
if (queueClass != null) {
if (queueClass.getCanonicalName().equals(
FairCallQueue.class.getCanonicalName())) {
conf.setClass(schedulerKeyname, DecayRpcScheduler.class,
RpcScheduler.class);
}
}
}
schedulerClass = conf.getClass(schedulerKeyname,
DefaultRpcScheduler.class);
return CallQueueManager.convertSchedulerClass(schedulerClass);
} }
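As a hedged illustration of the lookup above (the ipc.8020 prefix is an assumed example), the new-style configuration names the queue and the scheduler separately, while a legacy configuration that only names FairCallQueue is patched to DecayRpcScheduler:

import org.apache.hadoop.conf.Configuration;

public class SchedulerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String prefix = "ipc.8020";  // "ipc." + listener port, example only
    conf.set(prefix + ".callqueue.impl", "org.apache.hadoop.ipc.FairCallQueue");
    conf.set(prefix + ".scheduler.impl", "org.apache.hadoop.ipc.DecayRpcScheduler");
    // With only callqueue.impl set to FairCallQueue, getSchedulerClass above
    // would fill in DecayRpcScheduler; otherwise DefaultRpcScheduler is used.
    System.out.println(conf.get(prefix + ".scheduler.impl"));
  }
}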
/* /*
@ -595,7 +641,8 @@ public abstract class Server {
public synchronized void refreshCallQueue(Configuration conf) { public synchronized void refreshCallQueue(Configuration conf) {
// Create the next queue // Create the next queue
String prefix = getQueueClassPrefix(); String prefix = getQueueClassPrefix();
callQueue.swapQueue(getQueueClass(prefix, conf), maxQueueSize, prefix, conf); callQueue.swapQueue(getSchedulerClass(prefix, conf),
getQueueClass(prefix, conf), maxQueueSize, prefix, conf);
} }
/** /**
@ -623,6 +670,8 @@ public abstract class Server {
private final byte[] clientId; private final byte[] clientId;
private final TraceScope traceScope; // the HTrace scope on the server side private final TraceScope traceScope; // the HTrace scope on the server side
private final CallerContext callerContext; // the call context private final CallerContext callerContext; // the call context
private int priorityLevel;
// the priority level assigned by scheduler, 0 by default
private Call(Call call) { private Call(Call call) {
this(call.callId, call.retryCount, call.rpcRequest, call.connection, this(call.callId, call.retryCount, call.rpcRequest, call.connection,
@ -710,6 +759,15 @@ public abstract class Server {
public UserGroupInformation getUserGroupInformation() { public UserGroupInformation getUserGroupInformation() {
return connection.user; return connection.user;
} }
@Override
public int getPriorityLevel() {
return this.priorityLevel;
}
public void setPriorityLevel(int priorityLevel) {
this.priorityLevel = priorityLevel;
}
} }
/** Listens on the socket. Creates jobs for the handler threads*/ /** Listens on the socket. Creates jobs for the handler threads*/
@ -2151,6 +2209,9 @@ public abstract class Server {
rpcRequest, this, ProtoUtil.convert(header.getRpcKind()), rpcRequest, this, ProtoUtil.convert(header.getRpcKind()),
header.getClientId().toByteArray(), traceScope, callerContext); header.getClientId().toByteArray(), traceScope, callerContext);
// Save the priority level assignment by the scheduler
call.setPriorityLevel(callQueue.getPriorityLevel(call));
if (callQueue.isClientBackoffEnabled()) { if (callQueue.isClientBackoffEnabled()) {
// if RPC queue is full, we will ask the RPC client to back off by // if RPC queue is full, we will ask the RPC client to back off by
// throwing RetriableException. Whether RPC client will honor // throwing RetriableException. Whether RPC client will honor
@ -2166,9 +2227,10 @@ public abstract class Server {
private void queueRequestOrAskClientToBackOff(Call call) private void queueRequestOrAskClientToBackOff(Call call)
throws WrappedRpcServerException, InterruptedException { throws WrappedRpcServerException, InterruptedException {
// If rpc queue is full, we will ask the client to back off. // If rpc scheduler indicates back off based on performance
boolean isCallQueued = callQueue.offer(call); // degradation such as response time or rpc queue is full,
if (!isCallQueued) { // we will ask the client to back off.
if (callQueue.shouldBackOff(call) || !callQueue.offer(call)) {
rpcMetrics.incrClientBackoff(); rpcMetrics.incrClientBackoff();
RetriableException retriableException = RetriableException retriableException =
new RetriableException("Server is too busy."); new RetriableException("Server is too busy.");
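For completeness, a hedged sketch of turning client back-off on for a server namespace, assuming the backoff.enable key sits under the same ipc.<port> prefix used elsewhere in this change:

import org.apache.hadoop.conf.Configuration;

public class ClientBackoffConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // When enabled, a full queue (or a scheduler back-off decision) results in
    // a RetriableException("Server is too busy.") instead of blocking the reader.
    conf.setBoolean("ipc.8020.backoff.enable", true);
    System.out.println(conf.getBoolean("ipc.8020.backoff.enable", false));
  }
}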
@ -2513,6 +2575,7 @@ public abstract class Server {
// Setup appropriate callqueue // Setup appropriate callqueue
final String prefix = getQueueClassPrefix(); final String prefix = getQueueClassPrefix();
this.callQueue = new CallQueueManager<Call>(getQueueClass(prefix, conf), this.callQueue = new CallQueueManager<Call>(getQueueClass(prefix, conf),
getSchedulerClass(prefix, conf),
getClientBackoffEnable(prefix, conf), maxQueueSize, prefix, conf); getClientBackoffEnable(prefix, conf), maxQueueSize, prefix, conf);
this.secretManager = (SecretManager<TokenIdentifier>) secretManager; this.secretManager = (SecretManager<TokenIdentifier>) secretManager;
View File
@ -34,7 +34,6 @@ import org.apache.hadoop.io.*;
import org.apache.hadoop.io.retry.RetryPolicy; import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.ipc.Client.ConnectionId; import org.apache.hadoop.ipc.Client.ConnectionId;
import org.apache.hadoop.ipc.RPC.RpcInvoker; import org.apache.hadoop.ipc.RPC.RpcInvoker;
import org.apache.hadoop.ipc.VersionedProtocol;
import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.SecretManager; import org.apache.hadoop.security.token.SecretManager;
import org.apache.hadoop.security.token.TokenIdentifier; import org.apache.hadoop.security.token.TokenIdentifier;
@ -503,12 +502,11 @@ public class WritableRpcEngine implements RpcEngine {
} }
} }
// Invoke the protocol method
// Invoke the protocol method long startTime = Time.now();
long startTime = Time.now(); int qTime = (int) (startTime-receivedTime);
int qTime = (int) (startTime-receivedTime); Exception exception = null;
Exception exception = null; try {
try {
Method method = Method method =
protocolImpl.protocolClass.getMethod(call.getMethodName(), protocolImpl.protocolClass.getMethod(call.getMethodName(),
call.getParameterClasses()); call.getParameterClasses());
@ -539,27 +537,20 @@ public class WritableRpcEngine implements RpcEngine {
exception = ioe; exception = ioe;
throw ioe; throw ioe;
} finally { } finally {
int processingTime = (int) (Time.now() - startTime); int processingTime = (int) (Time.now() - startTime);
if (LOG.isDebugEnabled()) { if (LOG.isDebugEnabled()) {
String msg = "Served: " + call.getMethodName() + String msg = "Served: " + call.getMethodName() +
" queueTime= " + qTime + " queueTime= " + qTime + " procesingTime= " + processingTime;
" procesingTime= " + processingTime; if (exception != null) {
if (exception != null) { msg += " exception= " + exception.getClass().getSimpleName();
msg += " exception= " + exception.getClass().getSimpleName(); }
} LOG.debug(msg);
LOG.debug(msg);
}
String detailedMetricsName = (exception == null) ?
call.getMethodName() :
exception.getClass().getSimpleName();
server.rpcMetrics.addRpcQueueTime(qTime);
server.rpcMetrics.addRpcProcessingTime(processingTime);
server.rpcDetailedMetrics.addProcessingTime(detailedMetricsName,
processingTime);
if (server.isLogSlowRPC()) {
server.logSlowRpcCalls(call.getMethodName(), processingTime);
} }
} String detailedMetricsName = (exception == null) ?
call.getMethodName() :
exception.getClass().getSimpleName();
server.updateMetrics(detailedMetricsName, qTime, processingTime);
}
} }
} }
} }
View File
@ -31,6 +31,7 @@ import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.metrics2.MetricsInfo; import org.apache.hadoop.metrics2.MetricsInfo;
import org.apache.hadoop.metrics2.MetricsRecordBuilder; import org.apache.hadoop.metrics2.MetricsRecordBuilder;
import org.apache.hadoop.metrics2.util.Quantile; import org.apache.hadoop.metrics2.util.Quantile;
import org.apache.hadoop.metrics2.util.QuantileEstimator;
import org.apache.hadoop.metrics2.util.SampleQuantiles; import org.apache.hadoop.metrics2.util.SampleQuantiles;
import com.google.common.annotations.VisibleForTesting; import com.google.common.annotations.VisibleForTesting;
@ -54,7 +55,7 @@ public class MutableQuantiles extends MutableMetric {
private final MetricsInfo[] quantileInfos; private final MetricsInfo[] quantileInfos;
private final int interval; private final int interval;
private SampleQuantiles estimator; private QuantileEstimator estimator;
private long previousCount = 0; private long previousCount = 0;
@VisibleForTesting @VisibleForTesting
@ -134,6 +135,10 @@ public class MutableQuantiles extends MutableMetric {
return interval; return interval;
} }
public synchronized void setEstimator(QuantileEstimator quantileEstimator) {
this.estimator = quantileEstimator;
}
/** /**
* Runnable used to periodically roll over the internal * Runnable used to periodically roll over the internal
* {@link SampleQuantiles} every interval. * {@link SampleQuantiles} every interval.
View File
@ -0,0 +1,32 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.metrics2.util;
import java.util.Map;
public interface QuantileEstimator {
void insert(long value);
Map<Quantile, Long> snapshot();
long getCount();
void clear();
}
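A minimal sketch of a custom estimator a test might plug in via MutableQuantiles#setEstimator; the fixed median value is illustrative only:

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.metrics2.util.Quantile;
import org.apache.hadoop.metrics2.util.QuantileEstimator;

public class FixedQuantileEstimator implements QuantileEstimator {
  private long count = 0;

  @Override
  public void insert(long value) {
    count++;  // a real estimator would also retain a sample of values
  }

  @Override
  public Map<Quantile, Long> snapshot() {
    Map<Quantile, Long> snapshot = new HashMap<>();
    snapshot.put(new Quantile(0.50, 0.05), 42L);  // always report 42 as the median
    return snapshot;
  }

  @Override
  public long getCount() {
    return count;
  }

  @Override
  public void clear() {
    count = 0;
  }
}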
View File
@ -47,7 +47,7 @@ import com.google.common.base.Preconditions;
* *
*/ */
@InterfaceAudience.Private @InterfaceAudience.Private
public class SampleQuantiles { public class SampleQuantiles implements QuantileEstimator {
/** /**
* Total number of items in stream * Total number of items in stream
View File
@ -782,12 +782,21 @@ public class NetUtils {
+ ": " + exception + ": " + exception
+ ";" + ";"
+ see("EOFException")); + see("EOFException"));
} else if (exception instanceof SocketException) {
// Many of the predecessor exceptions are subclasses of SocketException,
// so must be handled before this
return wrapWithMessage(exception,
"Call From "
+ localHost + " to " + destHost + ":" + destPort
+ " failed on socket exception: " + exception
+ ";"
+ see("SocketException"));
} }
else { else {
return (IOException) new IOException("Failed on local exception: " return (IOException) new IOException("Failed on local exception: "
+ exception + exception
+ "; Host Details : " + "; Host Details : "
+ getHostDetailsAsString(destHost, destPort, localHost)) + getHostDetailsAsString(destHost, destPort, localHost))
.initCause(exception); .initCause(exception);
} }
View File
@ -73,16 +73,38 @@ public class SecurityUtil {
@VisibleForTesting @VisibleForTesting
static HostResolver hostResolver; static HostResolver hostResolver;
private static boolean logSlowLookups;
private static int slowLookupThresholdMs;
static { static {
Configuration conf = new Configuration(); setConfigurationInternal(new Configuration());
}
@InterfaceAudience.Public
@InterfaceStability.Evolving
public static void setConfiguration(Configuration conf) {
LOG.info("Updating Configuration");
setConfigurationInternal(conf);
}
private static void setConfigurationInternal(Configuration conf) {
boolean useIp = conf.getBoolean( boolean useIp = conf.getBoolean(
CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP, CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP,
CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT); CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP_DEFAULT);
setTokenServiceUseIp(useIp); setTokenServiceUseIp(useIp);
}
private static boolean logSlowLookups = getLogSlowLookupsEnabled(); logSlowLookups = conf.getBoolean(
private static int slowLookupThresholdMs = getSlowLookupThresholdMs(); CommonConfigurationKeys
.HADOOP_SECURITY_DNS_LOG_SLOW_LOOKUPS_ENABLED_KEY,
CommonConfigurationKeys
.HADOOP_SECURITY_DNS_LOG_SLOW_LOOKUPS_ENABLED_DEFAULT);
slowLookupThresholdMs = conf.getInt(
CommonConfigurationKeys
.HADOOP_SECURITY_DNS_LOG_SLOW_LOOKUPS_THRESHOLD_MS_KEY,
CommonConfigurationKeys
.HADOOP_SECURITY_DNS_LOG_SLOW_LOOKUPS_THRESHOLD_MS_DEFAULT);
}
/** /**
* For use only by tests and initialization * For use only by tests and initialization
@ -90,6 +112,11 @@ public class SecurityUtil {
@InterfaceAudience.Private @InterfaceAudience.Private
@VisibleForTesting @VisibleForTesting
public static void setTokenServiceUseIp(boolean flag) { public static void setTokenServiceUseIp(boolean flag) {
if (LOG.isDebugEnabled()) {
LOG.debug("Setting "
+ CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP
+ " to " + flag);
}
useIpForTokenService = flag; useIpForTokenService = flag;
hostResolver = !useIpForTokenService hostResolver = !useIpForTokenService
? new QualifiedHostResolver() ? new QualifiedHostResolver()
@ -485,24 +512,6 @@ public class SecurityUtil {
} }
} }
private static boolean getLogSlowLookupsEnabled() {
Configuration conf = new Configuration();
return conf.getBoolean(CommonConfigurationKeys
.HADOOP_SECURITY_DNS_LOG_SLOW_LOOKUPS_ENABLED_KEY,
CommonConfigurationKeys
.HADOOP_SECURITY_DNS_LOG_SLOW_LOOKUPS_ENABLED_DEFAULT);
}
private static int getSlowLookupThresholdMs() {
Configuration conf = new Configuration();
return conf.getInt(CommonConfigurationKeys
.HADOOP_SECURITY_DNS_LOG_SLOW_LOOKUPS_THRESHOLD_MS_KEY,
CommonConfigurationKeys
.HADOOP_SECURITY_DNS_LOG_SLOW_LOOKUPS_THRESHOLD_MS_DEFAULT);
}
/** /**
* Resolves a host subject to the security requirements determined by * Resolves a host subject to the security requirements determined by
* hadoop.security.token.service.use_ip. Optionally logs slow resolutions. * hadoop.security.token.service.use_ip. Optionally logs slow resolutions.
View File
@ -62,7 +62,7 @@ public class RestCsrfPreventionFilter implements Filter {
public static final String CUSTOM_METHODS_TO_IGNORE_PARAM = public static final String CUSTOM_METHODS_TO_IGNORE_PARAM =
"methods-to-ignore"; "methods-to-ignore";
static final String BROWSER_USER_AGENTS_DEFAULT = "^Mozilla.*,^Opera.*"; static final String BROWSER_USER_AGENTS_DEFAULT = "^Mozilla.*,^Opera.*";
static final String HEADER_DEFAULT = "X-XSRF-HEADER"; public static final String HEADER_DEFAULT = "X-XSRF-HEADER";
static final String METHODS_TO_IGNORE_DEFAULT = "GET,OPTIONS,HEAD,TRACE"; static final String METHODS_TO_IGNORE_DEFAULT = "GET,OPTIONS,HEAD,TRACE";
private String headerName = HEADER_DEFAULT; private String headerName = HEADER_DEFAULT;
private Set<String> methodsToIgnore = null; private Set<String> methodsToIgnore = null;
View File
@ -23,6 +23,8 @@ import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.authentication.client.ConnectionConfigurator; import org.apache.hadoop.security.authentication.client.ConnectionConfigurator;
import org.apache.hadoop.util.ReflectionUtils; import org.apache.hadoop.util.ReflectionUtils;
import org.apache.hadoop.util.StringUtils; import org.apache.hadoop.util.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static org.apache.hadoop.util.PlatformName.IBM_JAVA; import static org.apache.hadoop.util.PlatformName.IBM_JAVA;
import javax.net.ssl.HostnameVerifier; import javax.net.ssl.HostnameVerifier;
@ -34,6 +36,11 @@ import javax.net.ssl.SSLSocketFactory;
import java.io.IOException; import java.io.IOException;
import java.net.HttpURLConnection; import java.net.HttpURLConnection;
import java.security.GeneralSecurityException; import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
/** /**
* Factory that creates SSLEngine and SSLSocketFactory instances using * Factory that creates SSLEngine and SSLSocketFactory instances using
@ -48,6 +55,7 @@ import java.security.GeneralSecurityException;
@InterfaceAudience.Private @InterfaceAudience.Private
@InterfaceStability.Evolving @InterfaceStability.Evolving
public class SSLFactory implements ConnectionConfigurator { public class SSLFactory implements ConnectionConfigurator {
static final Logger LOG = LoggerFactory.getLogger(SSLFactory.class);
@InterfaceAudience.Private @InterfaceAudience.Private
public static enum Mode { CLIENT, SERVER } public static enum Mode { CLIENT, SERVER }
@ -71,6 +79,8 @@ public class SSLFactory implements ConnectionConfigurator {
"hadoop.ssl.enabled.protocols"; "hadoop.ssl.enabled.protocols";
public static final String DEFAULT_SSL_ENABLED_PROTOCOLS = public static final String DEFAULT_SSL_ENABLED_PROTOCOLS =
"TLSv1,SSLv2Hello,TLSv1.1,TLSv1.2"; "TLSv1,SSLv2Hello,TLSv1.1,TLSv1.2";
public static final String SSL_SERVER_EXCLUDE_CIPHER_LIST =
"ssl.server.exclude.cipher.list";
private Configuration conf; private Configuration conf;
private Mode mode; private Mode mode;
@ -80,6 +90,7 @@ public class SSLFactory implements ConnectionConfigurator {
private KeyStoresFactory keystoresFactory; private KeyStoresFactory keystoresFactory;
private String[] enabledProtocols = null; private String[] enabledProtocols = null;
private List<String> excludeCiphers;
/** /**
* Creates an SSLFactory. * Creates an SSLFactory.
@ -105,6 +116,14 @@ public class SSLFactory implements ConnectionConfigurator {
enabledProtocols = conf.getStrings(SSL_ENABLED_PROTOCOLS, enabledProtocols = conf.getStrings(SSL_ENABLED_PROTOCOLS,
DEFAULT_SSL_ENABLED_PROTOCOLS); DEFAULT_SSL_ENABLED_PROTOCOLS);
String excludeCiphersConf =
sslConf.get(SSL_SERVER_EXCLUDE_CIPHER_LIST, "");
if (excludeCiphersConf.isEmpty()) {
excludeCiphers = new LinkedList<String>();
} else {
LOG.debug("will exclude cipher suites: {}", excludeCiphersConf);
excludeCiphers = Arrays.asList(excludeCiphersConf.split(","));
}
} }
private Configuration readSSLConfiguration(Mode mode) { private Configuration readSSLConfiguration(Mode mode) {
@ -195,11 +214,32 @@ public class SSLFactory implements ConnectionConfigurator {
} else { } else {
sslEngine.setUseClientMode(false); sslEngine.setUseClientMode(false);
sslEngine.setNeedClientAuth(requireClientCert); sslEngine.setNeedClientAuth(requireClientCert);
disableExcludedCiphers(sslEngine);
} }
sslEngine.setEnabledProtocols(enabledProtocols); sslEngine.setEnabledProtocols(enabledProtocols);
return sslEngine; return sslEngine;
} }
private void disableExcludedCiphers(SSLEngine sslEngine) {
String[] cipherSuites = sslEngine.getEnabledCipherSuites();
ArrayList<String> defaultEnabledCipherSuites =
new ArrayList<String>(Arrays.asList(cipherSuites));
Iterator iterator = excludeCiphers.iterator();
while(iterator.hasNext()) {
String cipherName = (String)iterator.next();
if(defaultEnabledCipherSuites.contains(cipherName)) {
defaultEnabledCipherSuites.remove(cipherName);
LOG.debug("Disabling cipher suite {}.", cipherName);
}
}
cipherSuites = defaultEnabledCipherSuites.toArray(
new String[defaultEnabledCipherSuites.size()]);
sslEngine.setEnabledCipherSuites(cipherSuites);
}
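A small sketch of the comma-separated format the factory splits on; in practice the key lives in ssl-server.xml rather than being set programmatically, and the cipher names below are examples only:

import org.apache.hadoop.conf.Configuration;

public class ExcludeCipherSketch {
  public static void main(String[] args) {
    Configuration sslConf = new Configuration(false);
    sslConf.set("ssl.server.exclude.cipher.list",
        "TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA");
    for (String cipher : sslConf.get("ssl.server.exclude.cipher.list").split(",")) {
      System.out.println("would disable: " + cipher.trim());
    }
  }
}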
/** /**
* Returns a configured SSLServerSocketFactory. * Returns a configured SSLServerSocketFactory.
* *
View File
@ -32,7 +32,7 @@ import org.apache.htrace.core.HTraceConfiguration;
@InterfaceAudience.Private @InterfaceAudience.Private
public class TraceUtils { public class TraceUtils {
private static List<ConfigurationPair> EMPTY = Collections.emptyList(); private static List<ConfigurationPair> EMPTY = Collections.emptyList();
static final String DEFAULT_HADOOP_PREFIX = "hadoop.htrace."; static final String DEFAULT_HADOOP_TRACE_PREFIX = "hadoop.htrace.";
public static HTraceConfiguration wrapHadoopConf(final String prefix, public static HTraceConfiguration wrapHadoopConf(final String prefix,
final Configuration conf) { final Configuration conf) {
@ -52,7 +52,7 @@ public class TraceUtils {
if (ret != null) { if (ret != null) {
return ret; return ret;
} }
return getInternal(DEFAULT_HADOOP_PREFIX + key); return getInternal(DEFAULT_HADOOP_TRACE_PREFIX + key);
} }
@Override @Override
View File
@ -95,12 +95,12 @@ public class NativeLibraryChecker {
snappyLibraryName = SnappyCodec.getLibraryName(); snappyLibraryName = SnappyCodec.getLibraryName();
} }
try { isalDetail = ErasureCodeNative.getLoadingFailureReason();
isalDetail = ErasureCodeNative.getLoadingFailureReason(); if (isalDetail != null) {
isalLoaded = false;
} else {
isalDetail = ErasureCodeNative.getLibraryName(); isalDetail = ErasureCodeNative.getLibraryName();
isalLoaded = true; isalLoaded = true;
} catch (UnsatisfiedLinkError e) {
isalLoaded = false;
} }
openSslDetail = OpensslCipher.getLoadingFailureReason(); openSslDetail = OpensslCipher.getLoadingFailureReason();
View File
@ -17,8 +17,10 @@
*/ */
package org.apache.hadoop.util; package org.apache.hadoop.util;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import org.apache.commons.logging.Log; import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory; import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.util.concurrent.HadoopExecutors;
import java.util.ArrayList; import java.util.ArrayList;
import java.util.Collections; import java.util.Collections;
@ -26,6 +28,10 @@ import java.util.Comparator;
import java.util.HashSet; import java.util.HashSet;
import java.util.List; import java.util.List;
import java.util.Set; import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicBoolean;
/** /**
@ -42,7 +48,12 @@ public class ShutdownHookManager {
private static final ShutdownHookManager MGR = new ShutdownHookManager(); private static final ShutdownHookManager MGR = new ShutdownHookManager();
private static final Log LOG = LogFactory.getLog(ShutdownHookManager.class); private static final Log LOG = LogFactory.getLog(ShutdownHookManager.class);
private static final long TIMEOUT_DEFAULT = 10;
private static final TimeUnit TIME_UNIT_DEFAULT = TimeUnit.SECONDS;
private static final ExecutorService EXECUTOR =
HadoopExecutors.newSingleThreadExecutor(new ThreadFactoryBuilder()
.setDaemon(true).build());
static { static {
try { try {
Runtime.getRuntime().addShutdownHook( Runtime.getRuntime().addShutdownHook(
@ -50,14 +61,33 @@ public class ShutdownHookManager {
@Override @Override
public void run() { public void run() {
MGR.shutdownInProgress.set(true); MGR.shutdownInProgress.set(true);
for (Runnable hook: MGR.getShutdownHooksInOrder()) { for (HookEntry entry: MGR.getShutdownHooksInOrder()) {
Future<?> future = EXECUTOR.submit(entry.getHook());
try { try {
hook.run(); future.get(entry.getTimeout(), entry.getTimeUnit());
} catch (TimeoutException ex) {
future.cancel(true);
LOG.warn("ShutdownHook '" + entry.getHook().getClass().
getSimpleName() + "' timed out, " + ex.toString(), ex);
} catch (Throwable ex) { } catch (Throwable ex) {
LOG.warn("ShutdownHook '" + hook.getClass().getSimpleName() + LOG.warn("ShutdownHook '" + entry.getHook().getClass().
"' failed, " + ex.toString(), ex); getSimpleName() + "' failed, " + ex.toString(), ex);
} }
} }
try {
EXECUTOR.shutdown();
if (!EXECUTOR.awaitTermination(TIMEOUT_DEFAULT,
TIME_UNIT_DEFAULT)) {
LOG.error("ShutdownHookManager shutdown forcefully.");
EXECUTOR.shutdownNow();
}
LOG.info("ShutdownHookManager completed shutdown.");
} catch (InterruptedException ex) {
LOG.error("ShutdownHookManager interrupted while waiting for " +
"termination.", ex);
EXECUTOR.shutdownNow();
Thread.currentThread().interrupt();
}
} }
} }
); );
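
The rewritten shutdown thread above runs every hook through a single-thread executor so a hung hook can be timed out and cancelled instead of blocking JVM exit. A minimal standalone sketch of that run-with-timeout idiom (the 10-second bound mirrors TIMEOUT_DEFAULT; this is not the ShutdownHookManager code itself):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class RunWithTimeout {
      public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(() -> System.out.println("cleaning up"));
        try {
          future.get(10, TimeUnit.SECONDS);   // bound how long the hook may run
        } catch (TimeoutException e) {
          future.cancel(true);                // interrupt the hung hook
          System.err.println("hook timed out: " + e);
        } finally {
          executor.shutdownNow();
        }
      }
    }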
@ -77,15 +107,24 @@ public class ShutdownHookManager {
} }
/** /**
* Private structure to store ShutdownHook and its priority. * Private structure to store ShutdownHook, its priority and timeout
* settings.
*/ */
private static class HookEntry { static class HookEntry {
Runnable hook; private final Runnable hook;
int priority; private final int priority;
private final long timeout;
private final TimeUnit unit;
public HookEntry(Runnable hook, int priority) { HookEntry(Runnable hook, int priority) {
this(hook, priority, TIMEOUT_DEFAULT, TIME_UNIT_DEFAULT);
}
HookEntry(Runnable hook, int priority, long timeout, TimeUnit unit) {
this.hook = hook; this.hook = hook;
this.priority = priority; this.priority = priority;
this.timeout = timeout;
this.unit = unit;
} }
@Override @Override
@ -104,10 +143,25 @@ public class ShutdownHookManager {
return eq; return eq;
} }
Runnable getHook() {
return hook;
}
int getPriority() {
return priority;
}
long getTimeout() {
return timeout;
}
TimeUnit getTimeUnit() {
return unit;
}
} }
private Set<HookEntry> hooks = private final Set<HookEntry> hooks =
Collections.synchronizedSet(new HashSet<HookEntry>()); Collections.synchronizedSet(new HashSet<HookEntry>());
private AtomicBoolean shutdownInProgress = new AtomicBoolean(false); private AtomicBoolean shutdownInProgress = new AtomicBoolean(false);
@ -121,7 +175,7 @@ public class ShutdownHookManager {
* *
* @return the list of shutdownHooks in order of execution. * @return the list of shutdownHooks in order of execution.
*/ */
List<Runnable> getShutdownHooksInOrder() { List<HookEntry> getShutdownHooksInOrder() {
List<HookEntry> list; List<HookEntry> list;
synchronized (MGR.hooks) { synchronized (MGR.hooks) {
list = new ArrayList<HookEntry>(MGR.hooks); list = new ArrayList<HookEntry>(MGR.hooks);
@ -134,11 +188,7 @@ public class ShutdownHookManager {
return o2.priority - o1.priority; return o2.priority - o1.priority;
} }
}); });
List<Runnable> ordered = new ArrayList<Runnable>(); return list;
for (HookEntry entry: list) {
ordered.add(entry.hook);
}
return ordered;
} }
/** /**
@ -154,11 +204,36 @@ public class ShutdownHookManager {
throw new IllegalArgumentException("shutdownHook cannot be NULL"); throw new IllegalArgumentException("shutdownHook cannot be NULL");
} }
if (shutdownInProgress.get()) { if (shutdownInProgress.get()) {
throw new IllegalStateException("Shutdown in progress, cannot add a shutdownHook"); throw new IllegalStateException("Shutdown in progress, cannot add a " +
"shutdownHook");
} }
hooks.add(new HookEntry(shutdownHook, priority)); hooks.add(new HookEntry(shutdownHook, priority));
} }
/**
*
* Adds a shutdownHook with a priority and timeout; the higher the priority,
* the earlier it will run. ShutdownHooks with the same priority run
* in a non-deterministic order. The shutdown hook will be terminated if it
* has not finished within the specified period of time.
*
* @param shutdownHook shutdownHook <code>Runnable</code>
* @param priority priority of the shutdownHook
* @param timeout timeout of the shutdownHook
* @param unit unit of the timeout <code>TimeUnit</code>
*/
public void addShutdownHook(Runnable shutdownHook, int priority, long timeout,
TimeUnit unit) {
if (shutdownHook == null) {
throw new IllegalArgumentException("shutdownHook cannot be NULL");
}
if (shutdownInProgress.get()) {
throw new IllegalStateException("Shutdown in progress, cannot add a " +
"shutdownHook");
}
hooks.add(new HookEntry(shutdownHook, priority, timeout, unit));
}
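
With this overload in place a caller can bound how long its hook may run. A usage sketch; the priority and 30-second bound are illustrative, and get() is assumed to be the class's existing singleton accessor:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.util.ShutdownHookManager;

    public class HookRegistration {
      public static void register() {
        // Priority 10; the hook is cancelled if it has not finished in 30s.
        ShutdownHookManager.get().addShutdownHook(
            () -> System.out.println("flushing state before shutdown"),
            10, 30, TimeUnit.SECONDS);
      }
    }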
/** /**
* Removes a shutdownHook. * Removes a shutdownHook.
* *
@ -168,7 +243,8 @@ public class ShutdownHookManager {
*/ */
public boolean removeShutdownHook(Runnable shutdownHook) { public boolean removeShutdownHook(Runnable shutdownHook) {
if (shutdownInProgress.get()) { if (shutdownInProgress.get()) {
throw new IllegalStateException("Shutdown in progress, cannot remove a shutdownHook"); throw new IllegalStateException("Shutdown in progress, cannot remove a " +
"shutdownHook");
} }
return hooks.remove(new HookEntry(shutdownHook, 0)); return hooks.remove(new HookEntry(shutdownHook, 0));
} }
View File
@ -87,7 +87,9 @@ JNIEXPORT jstring JNICALL
Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_getLibraryName( Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_getLibraryName(
JNIEnv *env, jclass class JNIEnv *env, jclass class
) { ) {
return (*env)->NewStringUTF(env, "revision:99"); char version_buf[128];
snprintf(version_buf, sizeof(version_buf), "revision:%d", LZ4_versionNumber());
return (*env)->NewStringUTF(env, version_buf);
} }
JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_compressBytesDirectHC JNIEXPORT jint JNICALL Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_compressBytesDirectHC
View File
@ -19,6 +19,7 @@
#include "erasure_code.h" #include "erasure_code.h"
#include "gf_util.h" #include "gf_util.h"
#include "erasure_coder.h" #include "erasure_coder.h"
#include "dump.h"
#include <stdio.h> #include <stdio.h>
#include <stdlib.h> #include <stdlib.h>
View File
@ -78,6 +78,12 @@ static const char* load_functions() {
void load_erasurecode_lib(char* err, size_t err_len) { void load_erasurecode_lib(char* err, size_t err_len) {
const char* errMsg; const char* errMsg;
const char* library = NULL;
#ifdef UNIX
Dl_info dl_info;
#else
LPTSTR filename = NULL;
#endif
err[0] = '\0'; err[0] = '\0';
@ -111,6 +117,22 @@ void load_erasurecode_lib(char* err, size_t err_len) {
if (errMsg != NULL) { if (errMsg != NULL) {
snprintf(err, err_len, "Loading functions from ISA-L failed: %s", errMsg); snprintf(err, err_len, "Loading functions from ISA-L failed: %s", errMsg);
} }
#ifdef UNIX
if(dladdr(isaLoader->ec_encode_data, &dl_info)) {
library = dl_info.dli_fname;
}
#else
if (GetModuleFileName(isaLoader->libec, filename, 256) > 0) {
library = filename;
}
#endif
if (library == NULL) {
library = HADOOP_ISAL_LIBRARY;
}
isaLoader->libname = strdup(library);
} }
int build_support_erasurecode() { int build_support_erasurecode() {
@ -120,29 +142,3 @@ int build_support_erasurecode() {
return 0; return 0;
#endif #endif
} }
const char* get_library_name() {
#ifdef UNIX
Dl_info dl_info;
if (isaLoader->ec_encode_data == NULL) {
return HADOOP_ISAL_LIBRARY;
}
if(dladdr(isaLoader->ec_encode_data, &dl_info)) {
return dl_info.dli_fname;
}
#else
LPTSTR filename = NULL;
if (isaLoader->libec == NULL) {
return HADOOP_ISAL_LIBRARY;
}
if (GetModuleFileName(isaLoader->libec, filename, 256) > 0) {
return filename;
}
#endif
return NULL;
}
View File
@ -78,6 +78,7 @@ typedef void (__cdecl *__d_ec_encode_data_update)(int, int, int, int, unsigned c
typedef struct __IsaLibLoader { typedef struct __IsaLibLoader {
// The loaded library handle // The loaded library handle
void* libec; void* libec;
char* libname;
__d_gf_mul gf_mul; __d_gf_mul gf_mul;
__d_gf_inv gf_inv; __d_gf_inv gf_inv;
@ -133,11 +134,6 @@ static FARPROC WINAPI myDlsym(HMODULE handle, LPCSTR symbol) {
*/ */
int build_support_erasurecode(); int build_support_erasurecode();
/**
* Get the library name possibly of full path.
*/
const char* get_library_name();
/** /**
* Initialize and load erasure code library, returning error message if any. * Initialize and load erasure code library, returning error message if any.
* *
View File
@ -22,6 +22,7 @@
#include "org_apache_hadoop.h" #include "org_apache_hadoop.h"
#include "jni_common.h" #include "jni_common.h"
#include "isal_load.h"
#include "org_apache_hadoop_io_erasurecode_ErasureCodeNative.h" #include "org_apache_hadoop_io_erasurecode_ErasureCodeNative.h"
#ifdef UNIX #ifdef UNIX
@ -37,9 +38,11 @@ Java_org_apache_hadoop_io_erasurecode_ErasureCodeNative_loadLibrary
JNIEXPORT jstring JNICALL JNIEXPORT jstring JNICALL
Java_org_apache_hadoop_io_erasurecode_ErasureCodeNative_getLibraryName Java_org_apache_hadoop_io_erasurecode_ErasureCodeNative_getLibraryName
(JNIEnv *env, jclass myclass) { (JNIEnv *env, jclass myclass) {
char* libName = get_library_name(); if (isaLoader == NULL) {
if (libName == NULL) { THROW(env, "java/lang/UnsatisfiedLinkError",
libName = "Unavailable"; "Unavailable: library not loaded yet");
return (jstring)NULL;
} }
return (*env)->NewStringUTF(env, libName);
return (*env)->NewStringUTF(env, isaLoader->libname);
} }
View File
@ -1054,7 +1054,7 @@
<value>true</value> <value>true</value>
<description>Send a ping to the server when timeout on reading the response, <description>Send a ping to the server when timeout on reading the response,
if set to true. If no failure is detected, the client retries until at least if set to true. If no failure is detected, the client retries until at least
a byte is read. a byte is read or the time given by ipc.client.rpc-timeout.ms is passed.
</description> </description>
</property> </property>
@ -1071,10 +1071,9 @@
<name>ipc.client.rpc-timeout.ms</name> <name>ipc.client.rpc-timeout.ms</name>
<value>0</value> <value>0</value>
<description>Timeout on waiting response from server, in milliseconds. <description>Timeout on waiting response from server, in milliseconds.
Currently this timeout works only when ipc.client.ping is set to true If ipc.client.ping is set to true and this rpc-timeout is greater than
because it uses the same facilities with IPC ping. the value of ipc.ping.interval, the effective value of the rpc-timeout is
The timeout overrides the ipc.ping.interval and client will throw exception rounded up to multiple of ipc.ping.interval.
instead of sending ping when the interval is passed.
</description> </description>
</property> </property>
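
The revised descriptions above tie ipc.client.rpc-timeout.ms to ipc.client.ping and ipc.ping.interval. A hedged client-side example of setting the three keys on a Configuration; the values are illustrative, and with ping enabled a 90s rpc-timeout is effectively rounded up to 120s, the next multiple of the 60s ping interval:

    import org.apache.hadoop.conf.Configuration;

    public class RpcTimeoutConfig {
      public static Configuration clientConf() {
        Configuration conf = new Configuration();
        conf.setBoolean("ipc.client.ping", true);        // keep pinging on read timeout
        conf.setInt("ipc.ping.interval", 60000);         // ping every 60 seconds
        conf.setInt("ipc.client.rpc-timeout.ms", 90000); // effective timeout: 120000 ms
        return conf;
      }
    }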
View File
@ -86,10 +86,10 @@ Other useful configuration parameters that you can customize include:
In most cases, you should specify the `HADOOP_PID_DIR` and `HADOOP_LOG_DIR` directories such that they can only be written to by the users that are going to run the hadoop daemons. Otherwise there is the potential for a symlink attack. In most cases, you should specify the `HADOOP_PID_DIR` and `HADOOP_LOG_DIR` directories such that they can only be written to by the users that are going to run the hadoop daemons. Otherwise there is the potential for a symlink attack.
It is also traditional to configure `HADOOP_PREFIX` in the system-wide shell environment configuration. For example, a simple script inside `/etc/profile.d`: It is also traditional to configure `HADOOP_HOME` in the system-wide shell environment configuration. For example, a simple script inside `/etc/profile.d`:
HADOOP_PREFIX=/path/to/hadoop HADOOP_HOME=/path/to/hadoop
export HADOOP_PREFIX export HADOOP_HOME
| Daemon | Environment Variable | | Daemon | Environment Variable |
|:---- |:---- | |:---- |:---- |
@ -243,73 +243,73 @@ To start a Hadoop cluster you will need to start both the HDFS and YARN cluster.
The first time you bring up HDFS, it must be formatted. Format a new distributed filesystem as *hdfs*: The first time you bring up HDFS, it must be formatted. Format a new distributed filesystem as *hdfs*:
[hdfs]$ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name> [hdfs]$ $HADOOP_HOME/bin/hdfs namenode -format <cluster_name>
Start the HDFS NameNode with the following command on the designated node as *hdfs*: Start the HDFS NameNode with the following command on the designated node as *hdfs*:
[hdfs]$ $HADOOP_PREFIX/bin/hdfs --daemon start namenode [hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start namenode
Start a HDFS DataNode with the following command on each designated node as *hdfs*: Start a HDFS DataNode with the following command on each designated node as *hdfs*:
[hdfs]$ $HADOOP_PREFIX/bin/hdfs --daemon start datanode [hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start datanode
If `etc/hadoop/slaves` and ssh trusted access is configured (see [Single Node Setup](./SingleCluster.html)), all of the HDFS processes can be started with a utility script. As *hdfs*: If `etc/hadoop/slaves` and ssh trusted access is configured (see [Single Node Setup](./SingleCluster.html)), all of the HDFS processes can be started with a utility script. As *hdfs*:
[hdfs]$ $HADOOP_PREFIX/sbin/start-dfs.sh [hdfs]$ $HADOOP_HOME/sbin/start-dfs.sh
Start the YARN with the following command, run on the designated ResourceManager as *yarn*: Start the YARN with the following command, run on the designated ResourceManager as *yarn*:
[yarn]$ $HADOOP_PREFIX/bin/yarn --daemon start resourcemanager [yarn]$ $HADOOP_HOME/bin/yarn --daemon start resourcemanager
Run a script to start a NodeManager on each designated host as *yarn*: Run a script to start a NodeManager on each designated host as *yarn*:
[yarn]$ $HADOOP_PREFIX/bin/yarn --daemon start nodemanager [yarn]$ $HADOOP_HOME/bin/yarn --daemon start nodemanager
Start a standalone WebAppProxy server. Run on the WebAppProxy server as *yarn*. If multiple servers are used with load balancing it should be run on each of them: Start a standalone WebAppProxy server. Run on the WebAppProxy server as *yarn*. If multiple servers are used with load balancing it should be run on each of them:
[yarn]$ $HADOOP_PREFIX/bin/yarn --daemon start proxyserver [yarn]$ $HADOOP_HOME/bin/yarn --daemon start proxyserver
If `etc/hadoop/slaves` and ssh trusted access is configured (see [Single Node Setup](./SingleCluster.html)), all of the YARN processes can be started with a utility script. As *yarn*: If `etc/hadoop/slaves` and ssh trusted access is configured (see [Single Node Setup](./SingleCluster.html)), all of the YARN processes can be started with a utility script. As *yarn*:
[yarn]$ $HADOOP_PREFIX/sbin/start-yarn.sh [yarn]$ $HADOOP_HOME/sbin/start-yarn.sh
Start the MapReduce JobHistory Server with the following command, run on the designated server as *mapred*: Start the MapReduce JobHistory Server with the following command, run on the designated server as *mapred*:
[mapred]$ $HADOOP_PREFIX/bin/mapred --daemon start historyserver [mapred]$ $HADOOP_HOME/bin/mapred --daemon start historyserver
### Hadoop Shutdown ### Hadoop Shutdown
Stop the NameNode with the following command, run on the designated NameNode as *hdfs*: Stop the NameNode with the following command, run on the designated NameNode as *hdfs*:
[hdfs]$ $HADOOP_PREFIX/bin/hdfs --daemon stop namenode [hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop namenode
Run a script to stop a DataNode as *hdfs*: Run a script to stop a DataNode as *hdfs*:
[hdfs]$ $HADOOP_PREFIX/bin/hdfs --daemon stop datanode [hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop datanode
If `etc/hadoop/slaves` and ssh trusted access is configured (see [Single Node Setup](./SingleCluster.html)), all of the HDFS processes may be stopped with a utility script. As *hdfs*: If `etc/hadoop/slaves` and ssh trusted access is configured (see [Single Node Setup](./SingleCluster.html)), all of the HDFS processes may be stopped with a utility script. As *hdfs*:
[hdfs]$ $HADOOP_PREFIX/sbin/stop-dfs.sh [hdfs]$ $HADOOP_HOME/sbin/stop-dfs.sh
Stop the ResourceManager with the following command, run on the designated ResourceManager as *yarn*: Stop the ResourceManager with the following command, run on the designated ResourceManager as *yarn*:
[yarn]$ $HADOOP_PREFIX/bin/yarn --daemon stop resourcemanager [yarn]$ $HADOOP_HOME/bin/yarn --daemon stop resourcemanager
Run a script to stop a NodeManager on a slave as *yarn*: Run a script to stop a NodeManager on a slave as *yarn*:
[yarn]$ $HADOOP_PREFIX/bin/yarn --daemon stop nodemanager [yarn]$ $HADOOP_HOME/bin/yarn --daemon stop nodemanager
If `etc/hadoop/slaves` and ssh trusted access is configured (see [Single Node Setup](./SingleCluster.html)), all of the YARN processes can be stopped with a utility script. As *yarn*: If `etc/hadoop/slaves` and ssh trusted access is configured (see [Single Node Setup](./SingleCluster.html)), all of the YARN processes can be stopped with a utility script. As *yarn*:
[yarn]$ $HADOOP_PREFIX/sbin/stop-yarn.sh [yarn]$ $HADOOP_HOME/sbin/stop-yarn.sh
Stop the WebAppProxy server. Run on the WebAppProxy server as *yarn*. If multiple servers are used with load balancing it should be run on each of them: Stop the WebAppProxy server. Run on the WebAppProxy server as *yarn*. If multiple servers are used with load balancing it should be run on each of them:
[yarn]$ $HADOOP_PREFIX/bin/yarn stop proxyserver [yarn]$ $HADOOP_HOME/bin/yarn stop proxyserver
Stop the MapReduce JobHistory Server with the following command, run on the designated server as *mapred*: Stop the MapReduce JobHistory Server with the following command, run on the designated server as *mapred*:
[mapred]$ $HADOOP_PREFIX/bin/mapred --daemon stop historyserver [mapred]$ $HADOOP_HOME/bin/mapred --daemon stop historyserver
Web Interfaces Web Interfaces
-------------- --------------
View File
@ -39,7 +39,7 @@ All of the shell commands will accept a common set of options. For some commands
| SHELL\_OPTION | Description | | SHELL\_OPTION | Description |
|:---- |:---- | |:---- |:---- |
| `--buildpaths` | Enables developer versions of jars. | | `--buildpaths` | Enables developer versions of jars. |
| `--config confdir` | Overwrites the default Configuration directory. Default is `$HADOOP_PREFIX/etc/hadoop`. | | `--config confdir` | Overwrites the default Configuration directory. Default is `$HADOOP_HOME/etc/hadoop`. |
| `--daemon mode` | If the command supports daemonization (e.g., `hdfs namenode`), execute in the appropriate mode. Supported modes are `start` to start the process in daemon mode, `stop` to stop the process, and `status` to determine the active status of the process. `status` will return an [LSB-compliant](http://refspecs.linuxbase.org/LSB_3.0.0/LSB-generic/LSB-generic/iniscrptact.html) result code. If no option is provided, commands that support daemonization will run in the foreground. For commands that do not support daemonization, this option is ignored. | | `--daemon mode` | If the command supports daemonization (e.g., `hdfs namenode`), execute in the appropriate mode. Supported modes are `start` to start the process in daemon mode, `stop` to stop the process, and `status` to determine the active status of the process. `status` will return an [LSB-compliant](http://refspecs.linuxbase.org/LSB_3.0.0/LSB-generic/LSB-generic/iniscrptact.html) result code. If no option is provided, commands that support daemonization will run in the foreground. For commands that do not support daemonization, this option is ignored. |
| `--debug` | Enables shell level configuration debugging information | | `--debug` | Enables shell level configuration debugging information |
| `--help` | Shell script usage information. | | `--help` | Shell script usage information. |
View File
@ -83,7 +83,7 @@ Apache Hadoop allows for third parties to easily add new features through a vari
Core to this functionality is the concept of a shell profile. Shell profiles are shell snippets that can do things such as add jars to the classpath, configure Java system properties and more. Core to this functionality is the concept of a shell profile. Shell profiles are shell snippets that can do things such as add jars to the classpath, configure Java system properties and more.
Shell profiles may be installed in either `${HADOOP_CONF_DIR}/shellprofile.d` or `${HADOOP_PREFIX}/libexec/shellprofile.d`. Shell profiles in the `libexec` directory are part of the base installation and cannot be overriden by the user. Shell profiles in the configuration directory may be ignored if the end user changes the configuration directory at runtime. Shell profiles may be installed in either `${HADOOP_CONF_DIR}/shellprofile.d` or `${HADOOP_HOME}/libexec/shellprofile.d`. Shell profiles in the `libexec` directory are part of the base installation and cannot be overriden by the user. Shell profiles in the configuration directory may be ignored if the end user changes the configuration directory at runtime.
An example of a shell profile is in the libexec directory. An example of a shell profile is in the libexec directory.
View File
@ -49,6 +49,8 @@ import org.apache.hadoop.conf.Configuration.IntegerRanges;
import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.test.GenericTestUtils;
import static org.apache.hadoop.util.PlatformName.IBM_JAVA; import static org.apache.hadoop.util.PlatformName.IBM_JAVA;
import org.codehaus.jackson.map.ObjectMapper; import org.codehaus.jackson.map.ObjectMapper;
@ -427,8 +429,7 @@ public class TestConfiguration extends TestCase {
Configuration conf = new Configuration(); Configuration conf = new Configuration();
String[] dirs = new String[]{"a", "b", "c"}; String[] dirs = new String[]{"a", "b", "c"};
for (int i = 0; i < dirs.length; i++) { for (int i = 0; i < dirs.length; i++) {
dirs[i] = new Path(System.getProperty("test.build.data"), dirs[i]) dirs[i] = new Path(GenericTestUtils.getTempPath(dirs[i])).toString();
.toString();
} }
conf.set("dirs", StringUtils.join(dirs, ",")); conf.set("dirs", StringUtils.join(dirs, ","));
for (int i = 0; i < 1000; i++) { for (int i = 0; i < 1000; i++) {
@ -444,8 +445,7 @@ public class TestConfiguration extends TestCase {
Configuration conf = new Configuration(); Configuration conf = new Configuration();
String[] dirs = new String[]{"a", "b", "c"}; String[] dirs = new String[]{"a", "b", "c"};
for (int i = 0; i < dirs.length; i++) { for (int i = 0; i < dirs.length; i++) {
dirs[i] = new Path(System.getProperty("test.build.data"), dirs[i]) dirs[i] = new Path(GenericTestUtils.getTempPath(dirs[i])).toString();
.toString();
} }
conf.set("dirs", StringUtils.join(dirs, ",")); conf.set("dirs", StringUtils.join(dirs, ","));
for (int i = 0; i < 1000; i++) { for (int i = 0; i < 1000; i++) {
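
This hunk is the first of many in the commit that replace System.getProperty("test.build.data", ...) with GenericTestUtils helpers so test data lands in a managed temp location. A short sketch of the pattern as it appears across the touched tests (getTestDir and getTempPath are the helpers visible in the hunks; the "mydir" name is illustrative):

    import java.io.File;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.test.GenericTestUtils;

    public class TestPathExamples {
      // Before: new File(System.getProperty("test.build.data", "/tmp"), "mydir")
      static final File TEST_DIR = GenericTestUtils.getTestDir("mydir");

      // Before: System.getProperty("test.build.data", "/tmp") + "/mydir"
      static final String TEST_ROOT = GenericTestUtils.getTempPath("mydir");
      static final Path TEST_PATH = new Path(TEST_ROOT, "test-file");
    }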
View File
@ -30,6 +30,7 @@ import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.LocalFileSystem; import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.Path;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.After; import org.junit.After;
import org.junit.AfterClass; import org.junit.AfterClass;
import org.junit.Before; import org.junit.Before;
@ -38,8 +39,8 @@ import org.junit.Ignore;
import org.junit.Test; import org.junit.Test;
public class TestCryptoStreamsForLocalFS extends CryptoStreamsTestBase { public class TestCryptoStreamsForLocalFS extends CryptoStreamsTestBase {
private static final String TEST_ROOT_DIR private static final String TEST_ROOT_DIR =
= System.getProperty("test.build.data","build/test/data") + "/work-dir/localfs"; GenericTestUtils.getTempPath("work-dir/testcryptostreamsforlocalfs");
private final File base = new File(TEST_ROOT_DIR); private final File base = new File(TEST_ROOT_DIR);
private final Path file = new Path(TEST_ROOT_DIR, "test-file"); private final Path file = new Path(TEST_ROOT_DIR, "test-file");
View File
@ -25,6 +25,7 @@ import java.util.UUID;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.Path;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.After; import org.junit.After;
import org.junit.Before; import org.junit.Before;
import org.junit.Test; import org.junit.Test;
@ -47,8 +48,8 @@ public class TestKeyShell {
public void setup() throws Exception { public void setup() throws Exception {
outContent.reset(); outContent.reset();
errContent.reset(); errContent.reset();
final File tmpDir = new File(System.getProperty("test.build.data", "target"), final File tmpDir = GenericTestUtils.getTestDir(UUID.randomUUID()
UUID.randomUUID().toString()); .toString());
if (!tmpDir.mkdirs()) { if (!tmpDir.mkdirs()) {
throw new IOException("Unable to create " + tmpDir); throw new IOException("Unable to create " + tmpDir);
} }
View File
@ -21,6 +21,7 @@ import java.io.IOException;
import org.apache.commons.lang.RandomStringUtils; import org.apache.commons.lang.RandomStringUtils;
import org.apache.hadoop.fs.Options.CreateOpts; import org.apache.hadoop.fs.Options.CreateOpts;
import org.apache.hadoop.test.GenericTestUtils;
/** /**
* Abstraction of filesystem functionality with additional helper methods * Abstraction of filesystem functionality with additional helper methods
@ -43,7 +44,7 @@ public abstract class FSTestWrapper implements FSWrapper {
public FSTestWrapper(String testRootDir) { public FSTestWrapper(String testRootDir) {
// Use default test dir if not provided // Use default test dir if not provided
if (testRootDir == null || testRootDir.isEmpty()) { if (testRootDir == null || testRootDir.isEmpty()) {
testRootDir = System.getProperty("test.build.data", "build/test/data"); testRootDir = GenericTestUtils.getTestDir().getAbsolutePath();
} }
// salt test dir with some random digits for safe parallel runs // salt test dir with some random digits for safe parallel runs
this.testRootDir = testRootDir + "/" this.testRootDir = testRootDir + "/"
View File
@ -29,6 +29,7 @@ import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.fs.Options.CreateOpts; import org.apache.hadoop.fs.Options.CreateOpts;
import org.apache.hadoop.fs.Options.Rename; import org.apache.hadoop.fs.Options.Rename;
import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.After; import org.junit.After;
import org.junit.Assert; import org.junit.Assert;
import static org.junit.Assert.*; import static org.junit.Assert.*;
@ -99,8 +100,7 @@ public abstract class FileContextMainOperationsBaseTest {
@Before @Before
public void setUp() throws Exception { public void setUp() throws Exception {
File testBuildData = new File(System.getProperty("test.build.data", File testBuildData = GenericTestUtils.getRandomizedTestDir();
"build/test/data"), RandomStringUtils.randomAlphanumeric(10));
Path rootPath = new Path(testBuildData.getAbsolutePath(), Path rootPath = new Path(testBuildData.getAbsolutePath(),
"root-uri"); "root-uri");
localFsRootPath = rootPath.makeQualified(LocalFileSystem.NAME, null); localFsRootPath = rootPath.makeQualified(LocalFileSystem.NAME, null);
View File
@ -26,6 +26,7 @@ import org.apache.commons.lang.RandomStringUtils;
import org.apache.hadoop.fs.Options.CreateOpts; import org.apache.hadoop.fs.Options.CreateOpts;
import org.apache.hadoop.fs.Options.CreateOpts.BlockSize; import org.apache.hadoop.fs.Options.CreateOpts.BlockSize;
import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Assert; import org.junit.Assert;
/** /**
@ -39,11 +40,10 @@ public final class FileContextTestHelper {
private String absTestRootDir = null; private String absTestRootDir = null;
/** /**
* Create a context with test root relative to the <wd>/build/test/data * Create a context with test root relative to the test directory
*/ */
public FileContextTestHelper() { public FileContextTestHelper() {
this(System.getProperty("test.build.data", "target/test/data") + "/" + this(GenericTestUtils.getRandomizedTestDir().getAbsolutePath());
RandomStringUtils.randomAlphanumeric(10));
} }
/** /**
View File
@ -21,6 +21,8 @@ package org.apache.hadoop.fs;
import java.io.*; import java.io.*;
import java.util.ArrayList; import java.util.ArrayList;
import java.util.regex.Pattern; import java.util.regex.Pattern;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Assert; import org.junit.Assert;
import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.fs.permission.FsPermission;
@ -51,8 +53,8 @@ import static org.apache.hadoop.fs.FileContextTestHelper.*;
* </p> * </p>
*/ */
public abstract class FileContextURIBase { public abstract class FileContextURIBase {
private static final String basePath = System.getProperty("test.build.data", private static final String basePath =
"build/test/data") + "/testContextURI"; GenericTestUtils.getTempPath("testContextURI");
private static final Path BASE = new Path(basePath); private static final Path BASE = new Path(basePath);
// Matches anything containing <, >, :, ", |, ?, *, or anything that ends with // Matches anything containing <, >, :, ", |, ?, *, or anything that ends with
View File
@ -22,9 +22,9 @@ import java.io.FileNotFoundException;
import java.net.URI; import java.net.URI;
import java.util.Random; import java.util.Random;
import org.apache.commons.lang.RandomStringUtils;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.token.Token; import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Assert; import org.junit.Assert;
import static org.junit.Assert.*; import static org.junit.Assert.*;
@ -45,7 +45,7 @@ public class FileSystemTestHelper {
* Create helper with test root located at <wd>/build/test/data * Create helper with test root located at <wd>/build/test/data
*/ */
public FileSystemTestHelper() { public FileSystemTestHelper() {
this(System.getProperty("test.build.data", "target/test/data") + "/" + RandomStringUtils.randomAlphanumeric(10)); this(GenericTestUtils.getRandomizedTempPath());
} }
/** /**
View File
@ -22,6 +22,7 @@ import java.io.BufferedWriter;
import java.io.OutputStreamWriter; import java.io.OutputStreamWriter;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.test.GenericTestUtils;
import junit.framework.TestCase; import junit.framework.TestCase;
@ -30,15 +31,9 @@ public class TestAvroFSInput extends TestCase {
private static final String INPUT_DIR = "AvroFSInput"; private static final String INPUT_DIR = "AvroFSInput";
private Path getInputPath() { private Path getInputPath() {
String dataDir = System.getProperty("test.build.data"); return new Path(GenericTestUtils.getTempPath(INPUT_DIR));
if (null == dataDir) {
return new Path(INPUT_DIR);
} else {
return new Path(new Path(dataDir), INPUT_DIR);
}
} }
public void testAFSInput() throws Exception { public void testAFSInput() throws Exception {
Configuration conf = new Configuration(); Configuration conf = new Configuration();
FileSystem fs = FileSystem.getLocal(conf); FileSystem fs = FileSystem.getLocal(conf);
View File
@ -22,12 +22,13 @@ import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FSDataOutputStream;
import static org.apache.hadoop.fs.FileSystemTestHelper.*; import static org.apache.hadoop.fs.FileSystemTestHelper.*;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.*; import org.junit.*;
import static org.junit.Assert.*; import static org.junit.Assert.*;
public class TestChecksumFileSystem { public class TestChecksumFileSystem {
static final String TEST_ROOT_DIR static final String TEST_ROOT_DIR =
= System.getProperty("test.build.data","build/test/data/work-dir/localfs"); GenericTestUtils.getTempPath("work-dir/localfs");
static LocalFileSystem localFs; static LocalFileSystem localFs;
View File
@ -37,7 +37,7 @@ import static org.junit.Assert.*;
public class TestDFVariations { public class TestDFVariations {
private static final String TEST_ROOT_DIR = private static final String TEST_ROOT_DIR =
System.getProperty("test.build.data","build/test/data") + "/TestDFVariations"; GenericTestUtils.getTestDir("testdfvariations").getAbsolutePath();
private static File test_root = null; private static File test_root = null;
@Before @Before
View File
@ -26,11 +26,11 @@ import java.util.Random;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.test.GenericTestUtils;
/** This test makes sure that "DU" does not get to run on each call to getUsed */ /** This test makes sure that "DU" does not get to run on each call to getUsed */
public class TestDU extends TestCase { public class TestDU extends TestCase {
final static private File DU_DIR = new File( final static private File DU_DIR = GenericTestUtils.getTestDir("dutmp");
System.getProperty("test.build.data","/tmp"), "dutmp");
@Override @Override
public void setUp() { public void setUp() {
View File
@ -23,6 +23,7 @@ import java.io.IOException;
import java.util.Set; import java.util.Set;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Assert; import org.junit.Assert;
import org.junit.Before; import org.junit.Before;
import org.junit.Test; import org.junit.Test;
@ -34,8 +35,9 @@ public class TestFileContextResolveAfs {
static { static {
FileSystem.enableSymlinks(); FileSystem.enableSymlinks();
} }
private static String TEST_ROOT_DIR_LOCAL
= System.getProperty("test.build.data","/tmp"); private static String TEST_ROOT_DIR_LOCAL =
GenericTestUtils.getTestDir().getAbsolutePath();
private FileContext fc; private FileContext fc;
private FileSystem localFs; private FileSystem localFs;
View File
@ -17,13 +17,12 @@
*/ */
package org.apache.hadoop.fs; package org.apache.hadoop.fs;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.*; import org.junit.*;
import java.io.BufferedReader;
import java.io.File; import java.io.File;
import java.io.FileInputStream; import java.io.FileInputStream;
import java.io.FileOutputStream; import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.IOException; import java.io.IOException;
import java.io.OutputStream; import java.io.OutputStream;
import java.net.InetAddress; import java.net.InetAddress;
@ -49,7 +48,6 @@ import org.apache.hadoop.util.StringUtils;
import org.apache.tools.tar.TarEntry; import org.apache.tools.tar.TarEntry;
import org.apache.tools.tar.TarOutputStream; import org.apache.tools.tar.TarOutputStream;
import javax.print.attribute.URISyntax;
import static org.junit.Assert.*; import static org.junit.Assert.*;
import static org.mockito.Mockito.mock; import static org.mockito.Mockito.mock;
@ -58,9 +56,7 @@ import static org.mockito.Mockito.when;
public class TestFileUtil { public class TestFileUtil {
private static final Log LOG = LogFactory.getLog(TestFileUtil.class); private static final Log LOG = LogFactory.getLog(TestFileUtil.class);
private static final String TEST_ROOT_DIR = System.getProperty( private static final File TEST_DIR = GenericTestUtils.getTestDir("fu");
"test.build.data", "/tmp") + "/fu";
private static final File TEST_DIR = new File(TEST_ROOT_DIR);
private static final String FILE = "x"; private static final String FILE = "x";
private static final String LINK = "y"; private static final String LINK = "y";
private static final String DIR = "dir"; private static final String DIR = "dir";
@ -526,60 +522,6 @@ public class TestFileUtil {
validateAndSetWritablePermissions(false, ret); validateAndSetWritablePermissions(false, ret);
} }
@Test (timeout = 30000)
public void testCopyMergeSingleDirectory() throws IOException {
setupDirs();
boolean copyMergeResult = copyMerge("partitioned", "tmp/merged");
Assert.assertTrue("Expected successful copyMerge result.", copyMergeResult);
File merged = new File(TEST_DIR, "tmp/merged");
Assert.assertTrue("File tmp/merged must exist after copyMerge.",
merged.exists());
BufferedReader rdr = new BufferedReader(new FileReader(merged));
try {
Assert.assertEquals("Line 1 of merged file must contain \"foo\".",
"foo", rdr.readLine());
Assert.assertEquals("Line 2 of merged file must contain \"bar\".",
"bar", rdr.readLine());
Assert.assertNull("Expected end of file reading merged file.",
rdr.readLine());
}
finally {
rdr.close();
}
}
/**
* Calls FileUtil.copyMerge using the specified source and destination paths.
* Both source and destination are assumed to be on the local file system.
* The call will not delete source on completion and will not add an
* additional string between files.
* @param src String non-null source path.
* @param dst String non-null destination path.
* @return boolean true if the call to FileUtil.copyMerge was successful.
* @throws IOException if an I/O error occurs.
*/
private boolean copyMerge(String src, String dst)
throws IOException {
Configuration conf = new Configuration();
FileSystem fs = FileSystem.getLocal(conf);
final boolean result;
try {
Path srcPath = new Path(TEST_ROOT_DIR, src);
Path dstPath = new Path(TEST_ROOT_DIR, dst);
boolean deleteSource = false;
String addString = null;
result = FileUtil.copyMerge(fs, srcPath, fs, dstPath, deleteSource, conf,
addString);
}
finally {
fs.close();
}
return result;
}
/** /**
* Test that getDU is able to handle cycles caused due to symbolic links * Test that getDU is able to handle cycles caused due to symbolic links
* and that directory sizes are not added to the final calculated size * and that directory sizes are not added to the final calculated size
@ -1019,10 +961,10 @@ public class TestFileUtil {
@Test (timeout = 30000) @Test (timeout = 30000)
public void testUntar() throws IOException { public void testUntar() throws IOException {
String tarGzFileName = System.getProperty("test.cache.data", String tarGzFileName = System.getProperty("test.cache.data",
"build/test/cache") + "/test-untar.tgz"; "target/test/cache") + "/test-untar.tgz";
String tarFileName = System.getProperty("test.cache.data", String tarFileName = System.getProperty("test.cache.data",
"build/test/cache") + "/test-untar.tar"; "build/test/cache") + "/test-untar.tar";
String dataDir = System.getProperty("test.build.data", "build/test/data"); File dataDir = GenericTestUtils.getTestDir();
File untarDir = new File(dataDir, "untarDir"); File untarDir = new File(dataDir, "untarDir");
doUntarAndVerify(new File(tarGzFileName), untarDir); doUntarAndVerify(new File(tarGzFileName), untarDir);
View File
@ -18,18 +18,29 @@
package org.apache.hadoop.fs; package org.apache.hadoop.fs;
import static org.junit.Assert.*; import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.not;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.assertTrue;
import static org.junit.Assume.assumeTrue; import static org.junit.Assume.assumeTrue;
import java.io.File; import java.io.File;
import java.io.IOException; import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.StringUtils;
import org.junit.Before; import org.junit.Before;
import org.junit.BeforeClass; import org.junit.BeforeClass;
import org.junit.Test; import org.junit.Test;
public class TestFsShellCopy { public class TestFsShellCopy {
static final Log LOG = LogFactory.getLog(TestFsShellCopy.class);
static Configuration conf; static Configuration conf;
static FsShell shell; static FsShell shell;
static LocalFileSystem lfs; static LocalFileSystem lfs;
@ -40,11 +51,11 @@ public class TestFsShellCopy {
conf = new Configuration(); conf = new Configuration();
shell = new FsShell(conf); shell = new FsShell(conf);
lfs = FileSystem.getLocal(conf); lfs = FileSystem.getLocal(conf);
testRootDir = lfs.makeQualified(new Path( testRootDir = lfs.makeQualified(new Path(GenericTestUtils.getTempPath(
System.getProperty("test.build.data","test/build/data"), "testFsShellCopy")));
"testShellCopy"));
lfs.mkdirs(testRootDir); lfs.mkdirs(testRootDir);
lfs.setWorkingDirectory(testRootDir);
srcPath = new Path(testRootDir, "srcFile"); srcPath = new Path(testRootDir, "srcFile");
dstPath = new Path(testRootDir, "dstFile"); dstPath = new Path(testRootDir, "dstFile");
} }
@ -62,6 +73,16 @@ public class TestFsShellCopy {
assertTrue(lfs.exists(lfs.getChecksumFile(srcPath))); assertTrue(lfs.exists(lfs.getChecksumFile(srcPath)));
} }
private void shellRun(int n, String ... args) throws Exception {
assertEquals(n, shell.run(args));
}
private int shellRun(String... args) throws Exception {
int exitCode = shell.run(args);
LOG.info("exit " + exitCode + " - " + StringUtils.join(" ", args));
return exitCode;
}
@Test @Test
public void testCopyNoCrc() throws Exception { public void testCopyNoCrc() throws Exception {
shellRun(0, "-get", srcPath.toString(), dstPath.toString()); shellRun(0, "-get", srcPath.toString(), dstPath.toString());
@ -95,10 +116,6 @@ public class TestFsShellCopy {
assertEquals(expectChecksum, hasChecksum); assertEquals(expectChecksum, hasChecksum);
} }
private void shellRun(int n, String ... args) throws Exception {
assertEquals(n, shell.run(args));
}
@Test @Test
public void testCopyFileFromLocal() throws Exception { public void testCopyFileFromLocal() throws Exception {
Path testRoot = new Path(testRootDir, "testPutFile"); Path testRoot = new Path(testRootDir, "testPutFile");
@ -571,4 +588,23 @@ public class TestFsShellCopy {
String s = (p == null) ? Path.CUR_DIR : p.toString(); String s = (p == null) ? Path.CUR_DIR : p.toString();
return s.isEmpty() ? Path.CUR_DIR : s; return s.isEmpty() ? Path.CUR_DIR : s;
} }
/**
* Test copy to a path with non-existent parent directory.
*/
@Test
public void testCopyNoParent() throws Exception {
final String noDirName = "noDir";
final Path noDir = new Path(noDirName);
lfs.delete(noDir, true);
assertThat(lfs.exists(noDir), is(false));
assertThat("Expected failed put to a path without parent directory",
shellRun("-put", srcPath.toString(), noDirName + "/foo"), is(not(0)));
// Note the trailing '/' in the target path.
assertThat("Expected failed copyFromLocal to a non-existent directory",
shellRun("-copyFromLocal", srcPath.toString(), noDirName + "/"),
is(not(0)));
}
} }
View File
@ -41,6 +41,8 @@ import org.apache.hadoop.fs.shell.FsCommand;
import org.apache.hadoop.fs.shell.PathData; import org.apache.hadoop.fs.shell.PathData;
import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.io.IOUtils;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY; import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.Shell; import org.apache.hadoop.util.Shell;
import org.junit.BeforeClass; import org.junit.BeforeClass;
import org.junit.Test; import org.junit.Test;
@ -64,8 +66,8 @@ public class TestFsShellReturnCode {
fsShell = new FsShell(conf); fsShell = new FsShell(conf);
} }
private static String TEST_ROOT_DIR = System.getProperty("test.build.data", private static String TEST_ROOT_DIR =
"build/test/data/testCHReturnCode"); GenericTestUtils.getTempPath("testCHReturnCode");
static void writeFile(FileSystem fs, Path name) throws Exception { static void writeFile(FileSystem fs, Path name) throws Exception {
FSDataOutputStream stm = fs.create(name); FSDataOutputStream stm = fs.create(name);
View File
@ -0,0 +1,88 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.not;
import static org.junit.Assert.assertThat;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.StringUtils;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
public class TestFsShellTouch {
static final Log LOG = LogFactory.getLog(TestFsShellTouch.class);
static FsShell shell;
static LocalFileSystem lfs;
static Path testRootDir;
@BeforeClass
public static void setup() throws Exception {
Configuration conf = new Configuration();
shell = new FsShell(conf);
lfs = FileSystem.getLocal(conf);
testRootDir = lfs.makeQualified(
new Path(GenericTestUtils.getTempPath("testFsShell")));
lfs.mkdirs(testRootDir);
lfs.setWorkingDirectory(testRootDir);
}
@Before
public void prepFiles() throws Exception {
lfs.setVerifyChecksum(true);
lfs.setWriteChecksum(true);
}
private int shellRun(String... args) throws Exception {
int exitCode = shell.run(args);
LOG.info("exit " + exitCode + " - " + StringUtils.join(" ", args));
return exitCode;
}
@Test
public void testTouchz() throws Exception {
// Ensure newFile does not exist
final String newFileName = "newFile";
final Path newFile = new Path(newFileName);
lfs.delete(newFile, true);
assertThat(lfs.exists(newFile), is(false));
assertThat("Expected successful touchz on a new file",
shellRun("-touchz", newFileName), is(0));
shellRun("-ls", newFileName);
assertThat("Expected successful touchz on an existing zero-length file",
shellRun("-touchz", newFileName), is(0));
// Ensure noDir does not exist
final String noDirName = "noDir";
final Path noDir = new Path(noDirName);
lfs.delete(noDir, true);
assertThat(lfs.exists(noDir), is(false));
assertThat("Expected failed touchz in a non-existent directory",
shellRun("-touchz", noDirName + "/foo"), is(not(0)));
}
}
View File
@ -25,13 +25,14 @@ import java.util.Random;
import junit.framework.TestCase; import junit.framework.TestCase;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.test.GenericTestUtils;
/** /**
* Testing the correctness of FileSystem.getFileBlockLocations. * Testing the correctness of FileSystem.getFileBlockLocations.
*/ */
public class TestGetFileBlockLocations extends TestCase { public class TestGetFileBlockLocations extends TestCase {
private static String TEST_ROOT_DIR = private static String TEST_ROOT_DIR = GenericTestUtils.getTempPath(
System.getProperty("test.build.data", "/tmp/testGetFileBlockLocations"); "testGetFileBlockLocations");
private static final int FileLength = 4 * 1024 * 1024; // 4MB private static final int FileLength = 4 * 1024 * 1024; // 4MB
private Configuration conf; private Configuration conf;
View File

@ -20,6 +20,7 @@ package org.apache.hadoop.fs;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.Shell; import org.apache.hadoop.util.Shell;
import org.junit.After; import org.junit.After;
import org.junit.Assert; import org.junit.Assert;
@ -45,8 +46,8 @@ import static org.junit.Assert.*;
*/ */
public class TestHarFileSystemBasics { public class TestHarFileSystemBasics {
private static final String ROOT_PATH = System.getProperty("test.build.data", private static final String ROOT_PATH =
"build/test/data"); GenericTestUtils.getTempPath("testharfilesystembasics");
private static final Path rootPath; private static final Path rootPath;
static { static {
String root = new Path(new File(ROOT_PATH).getAbsolutePath(), "localfs") String root = new Path(new File(ROOT_PATH).getAbsolutePath(), "localfs")
View File
@ -24,6 +24,7 @@ import java.io.FileWriter;
import java.io.IOException; import java.io.IOException;
import java.util.Arrays; import java.util.Arrays;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.After; import org.junit.After;
import static org.junit.Assert.*; import static org.junit.Assert.*;
import org.junit.Before; import org.junit.Before;
@ -57,9 +58,7 @@ import static org.apache.hadoop.fs.HardLink.*;
*/ */
public class TestHardLink { public class TestHardLink {
public static final String TEST_ROOT_DIR = final static private File TEST_DIR = GenericTestUtils.getTestDir("test/hl");
System.getProperty("test.build.data", "build/test/data") + "/test";
final static private File TEST_DIR = new File(TEST_ROOT_DIR, "hl");
private static String DIR = "dir_"; private static String DIR = "dir_";
//define source and target directories //define source and target directories
private static File src = new File(TEST_DIR, DIR + "src"); private static File src = new File(TEST_DIR, DIR + "src");
View File
@ -22,11 +22,8 @@ import java.util.HashSet;
import java.util.Random; import java.util.Random;
import java.util.Set; import java.util.Set;
import org.apache.commons.logging.impl.Log4JLogger;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.log4j.Level; import org.apache.log4j.Level;
import static org.junit.Assert.*; import static org.junit.Assert.*;
@ -37,8 +34,8 @@ import org.junit.BeforeClass;
* This class tests the FileStatus API. * This class tests the FileStatus API.
*/ */
public class TestListFiles { public class TestListFiles {
{ static {
((Log4JLogger)FileSystem.LOG).getLogger().setLevel(Level.ALL); GenericTestUtils.setLogLevel(FileSystem.LOG, Level.ALL);
} }
static final long seed = 0xDEADBEEFL; static final long seed = 0xDEADBEEFL;
@ -53,9 +50,8 @@ public class TestListFiles {
private static Path FILE3; private static Path FILE3;
static { static {
setTestPaths(new Path( setTestPaths(new Path(GenericTestUtils.getTempPath("testlistfiles"),
System.getProperty("test.build.data", "build/test/data/work-dir/localfs"), "main_"));
"main_"));
} }
protected static Path getTestDir() { protected static Path getTestDir() {
View File
@ -20,6 +20,7 @@ package org.apache.hadoop.fs;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem.Statistics; import org.apache.hadoop.fs.FileSystem.Statistics;
import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.Shell; import org.apache.hadoop.util.Shell;
import org.apache.hadoop.util.StringUtils; import org.apache.hadoop.util.StringUtils;
@ -45,10 +46,10 @@ import org.mockito.internal.util.reflection.Whitebox;
* This class tests the local file system via the FileSystem abstraction. * This class tests the local file system via the FileSystem abstraction.
*/ */
public class TestLocalFileSystem { public class TestLocalFileSystem {
private static final String TEST_ROOT_DIR private static final File base =
= System.getProperty("test.build.data","build/test/data") + "/work-dir/localfs"; GenericTestUtils.getTestDir("work-dir/localfs");
private final File base = new File(TEST_ROOT_DIR); private static final String TEST_ROOT_DIR = base.getAbsolutePath();
private final Path TEST_PATH = new Path(TEST_ROOT_DIR, "test-file"); private final Path TEST_PATH = new Path(TEST_ROOT_DIR, "test-file");
private Configuration conf; private Configuration conf;
private LocalFileSystem fileSys; private LocalFileSystem fileSys;
View File
@ -19,7 +19,9 @@ package org.apache.hadoop.fs;
import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.*; import org.apache.hadoop.fs.permission.*;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.StringUtils; import org.apache.hadoop.util.StringUtils;
import org.apache.log4j.Level;
import org.apache.hadoop.util.Shell; import org.apache.hadoop.util.Shell;
import java.io.*; import java.io.*;
@ -31,19 +33,11 @@ import junit.framework.*;
* This class tests the local file system via the FileSystem abstraction. * This class tests the local file system via the FileSystem abstraction.
*/ */
public class TestLocalFileSystemPermission extends TestCase { public class TestLocalFileSystemPermission extends TestCase {
static final String TEST_PATH_PREFIX = new Path(System.getProperty( static final String TEST_PATH_PREFIX = GenericTestUtils.getTempPath(
"test.build.data", "/tmp")).toString().replace(' ', '_') TestLocalFileSystemPermission.class.getSimpleName());
+ "/" + TestLocalFileSystemPermission.class.getSimpleName() + "_";
{ static {
try { GenericTestUtils.setLogLevel(FileSystem.LOG, Level.DEBUG);
((org.apache.commons.logging.impl.Log4JLogger)FileSystem.LOG).getLogger()
.setLevel(org.apache.log4j.Level.DEBUG);
}
catch(Exception e) {
System.out.println("Cannot change log level\n"
+ StringUtils.stringifyException(e));
}
} }
private Path writeFile(FileSystem fs, String name) throws IOException { private Path writeFile(FileSystem fs, String name) throws IOException {
View File
@@ -26,6 +26,7 @@ import java.util.Arrays;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.AvroTestUtil;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Shell;
 import com.google.common.base.Joiner;
@@ -402,9 +403,8 @@ public class TestPath extends TestCase {
     // This test is not meaningful on Windows where * is disallowed in file name.
     if (Shell.WINDOWS) return;
     FileSystem lfs = FileSystem.getLocal(new Configuration());
-    Path testRoot = lfs.makeQualified(new Path(
-        System.getProperty("test.build.data","test/build/data"),
-        "testPathGlob"));
+    Path testRoot = lfs.makeQualified(
+        new Path(GenericTestUtils.getTempPath("testPathGlob")));
     lfs.delete(testRoot, true);
     lfs.mkdirs(testRoot);
     assertTrue(lfs.isDirectory(testRoot));
@@ -34,6 +34,7 @@ import java.util.Set;
 import junit.framework.TestCase;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Time;
 /**
@@ -41,9 +42,8 @@ import org.apache.hadoop.util.Time;
  */
 public class TestTrash extends TestCase {
-  private final static Path TEST_DIR =
-      new Path(new File(System.getProperty("test.build.data","/tmp")
-      ).toURI().toString().replace(' ', '+'), "testTrash");
+  private final static Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+      "testTrash"));
   protected static Path mkdir(FileSystem fs, Path p) throws IOException {
     assertTrue(fs.mkdirs(p));
@@ -23,6 +23,7 @@ import java.io.IOException;
 import junit.framework.TestCase;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.GenericTestUtils;
 /**
  * test for the input truncation bug when mark/reset is used.
@@ -30,8 +31,7 @@ import org.apache.hadoop.conf.Configuration;
  */
 public class TestTruncatedInputBug extends TestCase {
-  private static String TEST_ROOT_DIR =
-      new Path(System.getProperty("test.build.data","/tmp"))
-      .toString().replace(' ', '+');
+  private static String TEST_ROOT_DIR =
+      GenericTestUtils.getTestDir().getAbsolutePath();
   private void writeFile(FileSystem fileSys,
                          Path name, int nBytesToWrite)
@@ -19,10 +19,14 @@
 package org.apache.hadoop.fs.contract;
 import java.io.FileNotFoundException;
+import java.io.IOException;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.junit.Test;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -58,4 +62,23 @@ public abstract class AbstractContractGetFileStatusTest extends
       handleExpectedException(e);
     }
   }
+
+  @Test
+  public void testListStatusEmptyDirectory() throws IOException {
+    // remove the test directory
+    FileSystem fs = getFileSystem();
+    assertTrue(fs.delete(getContract().getTestPath(), true));
+
+    // create a - non-qualified - Path for a subdir
+    Path subfolder = getContract().getTestPath().suffix("/"+testPath.getName());
+    assertTrue(fs.mkdirs(subfolder));
+
+    // assert empty ls on the empty dir
+    assertEquals("ls on an empty directory not of length 0", 0,
+        fs.listStatus(subfolder).length);
+
+    // assert non-empty ls on parent dir
+    assertTrue("ls on a non-empty directory of length 0",
+        fs.listStatus(getContract().getTestPath()).length > 0);
+  }
 }
@@ -29,6 +29,7 @@ import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Shell;
 import org.apache.sshd.SshServer;
@@ -54,7 +55,7 @@ public class TestSFTPFileSystem {
   private static final String TEST_SFTP_DIR = "testsftp";
   private static final String TEST_ROOT_DIR =
-      System.getProperty("test.build.data", "build/test/data");
+      GenericTestUtils.getTestDir().getAbsolutePath();
   @Rule public TestName name = new TestName();
@@ -29,6 +29,7 @@ import java.util.Arrays;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Shell;
 import org.junit.After;
 import org.junit.Before;
@@ -36,7 +37,7 @@ import org.junit.Test;
 public class TestPathData {
   private static final String TEST_ROOT_DIR =
-      System.getProperty("test.build.data","build/test/data") + "/testPD";
+      GenericTestUtils.getTestDir("testPD").getAbsolutePath();
   protected Configuration conf;
   protected FileSystem fs;
   protected Path testDir;
@@ -33,6 +33,7 @@ import java.nio.file.Paths;
 import org.apache.commons.io.IOUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Test;
 /**
@@ -41,8 +42,7 @@ import org.junit.Test;
  */
 public class TestTextCommand {
   private static final File TEST_ROOT_DIR =
-      Paths.get(System.getProperty("test.build.data", "build/test/data"),
-          "testText").toFile();
+      GenericTestUtils.getTestDir("testText");
   private static final String AVRO_FILENAME =
       new File(TEST_ROOT_DIR, "weather.avro").toURI().getPath();
   private static final String TEXT_FILENAME =
@@ -30,7 +30,7 @@ import org.apache.hadoop.fs.FsConstants;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.DataInputBuffer;
 import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.AfterClass;
 import org.junit.Test;
 import org.mockito.Mockito;
@@ -44,9 +44,8 @@ import static org.junit.Assert.*;
  */
 public class TestViewfsFileStatus {
-  private static final File TEST_DIR =
-      new File(System.getProperty("test.build.data", "/tmp"),
-          TestViewfsFileStatus.class.getSimpleName());
+  private static final File TEST_DIR = GenericTestUtils.getTestDir(
+      TestViewfsFileStatus.class.getSimpleName());
   @Test
   public void testFileStatusSerialziation()
@@ -32,6 +32,7 @@ import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import org.apache.hadoop.net.ServerSocketUtil;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Time;
 import org.apache.zookeeper.TestableZooKeeper;
 import org.apache.zookeeper.WatchedEvent;
@@ -62,8 +63,7 @@ public abstract class ClientBaseWithFixes extends ZKTestCase {
   protected static final Logger LOG = LoggerFactory.getLogger(ClientBaseWithFixes.class);
   public static int CONNECTION_TIMEOUT = 30000;
-  static final File BASETEST =
-      new File(System.getProperty("test.build.data", "build"));
+  static final File BASETEST = GenericTestUtils.getTestDir();
   protected final String hostPort = initHostPort();
   protected int maxCnxns = 0;
@@ -19,6 +19,7 @@ import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
 import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Test;
 import org.mortbay.log.Log;
@@ -35,8 +36,8 @@ import java.net.HttpCookie;
 import java.util.List;
 public class TestAuthenticationSessionCookie {
-  private static final String BASEDIR = System.getProperty("test.build.dir",
-      "target/test-dir") + "/" + TestHttpCookieFlag.class.getSimpleName();
+  private static final String BASEDIR =
+      GenericTestUtils.getTempPath(TestHttpCookieFlag.class.getSimpleName());
   private static boolean isCookiePersistent;
   private static final long TOKEN_VALIDITY_SEC = 1000;
   private static long expires;
@@ -20,6 +20,7 @@ import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
 import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
 import org.apache.hadoop.security.ssl.SSLFactory;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -37,8 +38,8 @@ import java.net.HttpCookie;
 import java.util.List;
 public class TestHttpCookieFlag {
-  private static final String BASEDIR = System.getProperty("test.build.dir",
-      "target/test-dir") + "/" + TestHttpCookieFlag.class.getSimpleName();
+  private static final String BASEDIR =
+      GenericTestUtils.getTempPath(TestHttpCookieFlag.class.getSimpleName());
   private static String keystoresDir;
   private static String sslConfDir;
   private static SSLFactory clientSslFactory;
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.http;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.log4j.Logger;
 import org.junit.Test;
@@ -76,8 +77,8 @@ public class TestHttpServerLifecycle extends HttpServerFunctionalTest {
   public void testStartedServerWithRequestLog() throws Throwable {
     HttpRequestLogAppender requestLogAppender = new HttpRequestLogAppender();
     requestLogAppender.setName("httprequestlog");
-    requestLogAppender.setFilename(System.getProperty("test.build.data", "/tmp/")
-        + "jetty-name-yyyy_mm_dd.log");
+    requestLogAppender.setFilename(
+        GenericTestUtils.getTempPath("jetty-name-yyyy_mm_dd.log"));
     Logger.getLogger(HttpServer2.class.getName() + ".test").addAppender(requestLogAppender);
     HttpServer2 server = null;
     server = createTestServer();
@@ -40,6 +40,7 @@ import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
 import org.apache.hadoop.security.ssl.SSLFactory;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -51,8 +52,8 @@ import org.junit.Test;
  */
 public class TestSSLHttpServer extends HttpServerFunctionalTest {
-  private static final String BASEDIR = System.getProperty("test.build.dir",
-      "target/test-dir") + "/" + TestSSLHttpServer.class.getSimpleName();
+  private static final String BASEDIR =
+      GenericTestUtils.getTempPath(TestSSLHttpServer.class.getSimpleName());
   private static final Log LOG = LogFactory.getLog(TestSSLHttpServer.class);
   private static Configuration conf;
@@ -24,6 +24,7 @@ import java.io.*;
 import org.apache.commons.logging.*;
 import org.apache.hadoop.fs.*;
 import org.apache.hadoop.io.SequenceFile.CompressionType;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Progressable;
 import org.apache.hadoop.conf.*;
 import org.junit.Test;
@@ -38,9 +39,8 @@ import static org.junit.Assert.fail;
 public class TestArrayFile {
   private static final Log LOG = LogFactory.getLog(TestArrayFile.class);
-  private static final Path TEST_DIR = new Path(
-      System.getProperty("test.build.data", "/tmp"),
-      TestMapFile.class.getSimpleName());
+  private static final Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+      TestMapFile.class.getSimpleName()));
   private static String TEST_FILE = new Path(TEST_DIR, "test.array").toString();
   @Test
@@ -38,6 +38,7 @@ import org.apache.hadoop.io.compress.CompressionInputStream;
 import org.apache.hadoop.io.compress.CompressionOutputStream;
 import org.apache.hadoop.io.compress.Compressor;
 import org.apache.hadoop.io.compress.Decompressor;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Progressable;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -49,9 +50,8 @@ import org.junit.Test;
 public class TestBloomMapFile {
   private static Configuration conf = new Configuration();
-  private static final Path TEST_ROOT = new Path(
-      System.getProperty("test.build.data", "/tmp"),
-      TestMapFile.class.getSimpleName());
+  private static final Path TEST_ROOT = new Path(GenericTestUtils.getTempPath(
+      TestMapFile.class.getSimpleName()));
   private static final Path TEST_DIR = new Path(TEST_ROOT, "testfile");
   private static final Path TEST_FILE = new Path(TEST_ROOT, "testfile");
@@ -37,6 +37,7 @@ import org.apache.hadoop.io.compress.CompressionInputStream;
 import org.apache.hadoop.io.compress.CompressionOutputStream;
 import org.apache.hadoop.io.compress.Compressor;
 import org.apache.hadoop.io.compress.Decompressor;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Progressable;
 import org.junit.Assert;
 import org.junit.Before;
@@ -48,9 +49,8 @@ import static org.mockito.Mockito.*;
 public class TestMapFile {
-  private static final Path TEST_DIR = new Path(
-      System.getProperty("test.build.data", "/tmp"),
-      TestMapFile.class.getSimpleName());
+  private static final Path TEST_DIR = new Path(GenericTestUtils.getTempPath(
+      TestMapFile.class.getSimpleName()));
   private static Configuration conf = new Configuration();
@@ -29,6 +29,7 @@ import org.apache.hadoop.io.SequenceFile.Metadata;
 import org.apache.hadoop.io.compress.CompressionCodec;
 import org.apache.hadoop.io.compress.DefaultCodec;
 import org.apache.hadoop.io.serializer.avro.AvroReflectSerialization;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.conf.*;
 import org.junit.Test;
@@ -58,11 +59,11 @@ public class TestSequenceFile {
     int count = 1024 * 10;
     int megabytes = 1;
     int factor = 5;
-    Path file = new Path(System.getProperty("test.build.data",".")+"/test.seq");
-    Path recordCompressedFile =
-        new Path(System.getProperty("test.build.data",".")+"/test.rc.seq");
-    Path blockCompressedFile =
-        new Path(System.getProperty("test.build.data",".")+"/test.bc.seq");
+    Path file = new Path(GenericTestUtils.getTempPath("test.seq"));
+    Path recordCompressedFile = new Path(GenericTestUtils.getTempPath(
+        "test.rc.seq"));
+    Path blockCompressedFile = new Path(GenericTestUtils.getTempPath(
+        "test.bc.seq"));
     int seed = new Random().nextInt();
     LOG.info("Seed = " + seed);
@@ -320,13 +321,13 @@ public class TestSequenceFile {
     LOG.info("Testing SequenceFile with metadata");
     int count = 1024 * 10;
     CompressionCodec codec = new DefaultCodec();
-    Path file = new Path(System.getProperty("test.build.data",".")+"/test.seq.metadata");
-    Path sortedFile =
-        new Path(System.getProperty("test.build.data",".")+"/test.sorted.seq.metadata");
-    Path recordCompressedFile =
-        new Path(System.getProperty("test.build.data",".")+"/test.rc.seq.metadata");
-    Path blockCompressedFile =
-        new Path(System.getProperty("test.build.data",".")+"/test.bc.seq.metadata");
+    Path file = new Path(GenericTestUtils.getTempPath("test.seq.metadata"));
+    Path sortedFile = new Path(GenericTestUtils.getTempPath(
+        "test.sorted.seq.metadata"));
+    Path recordCompressedFile = new Path(GenericTestUtils.getTempPath(
+        "test.rc.seq.metadata"));
+    Path blockCompressedFile = new Path(GenericTestUtils.getTempPath(
+        "test.bc.seq.metadata"));
     FileSystem fs = FileSystem.getLocal(conf);
     SequenceFile.Metadata theMetadata = new SequenceFile.Metadata();
@@ -426,14 +427,14 @@ public class TestSequenceFile {
     LocalFileSystem fs = FileSystem.getLocal(conf);
     // create a sequence file 1
-    Path path1 = new Path(System.getProperty("test.build.data",".")+"/test1.seq");
+    Path path1 = new Path(GenericTestUtils.getTempPath("test1.seq"));
     SequenceFile.Writer writer = SequenceFile.createWriter(fs, conf, path1,
         Text.class, NullWritable.class, CompressionType.BLOCK);
     writer.append(new Text("file1-1"), NullWritable.get());
     writer.append(new Text("file1-2"), NullWritable.get());
     writer.close();
-    Path path2 = new Path(System.getProperty("test.build.data",".")+"/test2.seq");
+    Path path2 = new Path(GenericTestUtils.getTempPath("test2.seq"));
     writer = SequenceFile.createWriter(fs, conf, path2, Text.class,
         NullWritable.class, CompressionType.BLOCK);
     writer.append(new Text("file2-1"), NullWritable.get());
@@ -482,7 +483,7 @@ public class TestSequenceFile {
   public void testCreateUsesFsArg() throws Exception {
     FileSystem fs = FileSystem.getLocal(conf);
     FileSystem spyFs = Mockito.spy(fs);
-    Path p = new Path(System.getProperty("test.build.data", ".")+"/testCreateUsesFSArg.seq");
+    Path p = new Path(GenericTestUtils.getTempPath("testCreateUsesFSArg.seq"));
     SequenceFile.Writer writer = SequenceFile.createWriter(
         spyFs, conf, p, NullWritable.class, NullWritable.class);
     writer.close();
@@ -515,7 +516,7 @@ public class TestSequenceFile {
     LocalFileSystem fs = FileSystem.getLocal(conf);
     // create an empty file (which is not a valid sequence file)
-    Path path = new Path(System.getProperty("test.build.data",".")+"/broken.seq");
+    Path path = new Path(GenericTestUtils.getTempPath("broken.seq"));
     fs.create(path).close();
     // try to create SequenceFile.Reader
@@ -547,8 +548,7 @@ public class TestSequenceFile {
     LocalFileSystem fs = FileSystem.getLocal(conf);
     // create an empty file (which is not a valid sequence file)
-    Path path = new Path(System.getProperty("test.build.data", ".") +
-        "/zerolength.seq");
+    Path path = new Path(GenericTestUtils.getTempPath("zerolength.seq"));
     fs.create(path).close();
     try {
@@ -569,8 +569,8 @@ public class TestSequenceFile {
   public void testCreateWriterOnExistingFile() throws IOException {
     Configuration conf = new Configuration();
     FileSystem fs = FileSystem.getLocal(conf);
-    Path name = new Path(new Path(System.getProperty("test.build.data","."),
-        "createWriterOnExistingFile") , "file");
+    Path name = new Path(new Path(GenericTestUtils.getTempPath(
+        "createWriterOnExistingFile")), "file");
     fs.create(name);
     SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
@@ -582,8 +582,8 @@ public class TestSequenceFile {
   @Test
   public void testRecursiveSeqFileCreate() throws IOException {
     FileSystem fs = FileSystem.getLocal(conf);
-    Path name = new Path(new Path(System.getProperty("test.build.data","."),
-        "recursiveCreateDir") , "file");
+    Path name = new Path(new Path(GenericTestUtils.getTempPath(
+        "recursiveCreateDir")), "file");
     boolean createParent = false;
     try {
@@ -605,8 +605,8 @@ public class TestSequenceFile {
   @Test
   public void testSerializationAvailability() throws IOException {
     Configuration conf = new Configuration();
-    Path path = new Path(System.getProperty("test.build.data", "."),
-        "serializationAvailability");
+    Path path = new Path(GenericTestUtils.getTempPath(
+        "serializationAvailability"));
     // Check if any serializers aren't found.
     try {
       SequenceFile.createWriter(
@@ -43,8 +43,8 @@ public class TestSequenceFileAppend {
   private static Configuration conf;
   private static FileSystem fs;
-  private static Path ROOT_PATH = new Path(System.getProperty(
-      "test.build.data", "build/test/data"));
+  private static Path ROOT_PATH =
+      new Path(GenericTestUtils.getTestDir().getAbsolutePath());
   @BeforeClass
   public static void setUp() throws Exception {
@@ -24,6 +24,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.SequenceFile.Reader;
 import org.apache.hadoop.io.SequenceFile.Writer;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -50,8 +51,7 @@ public class TestSequenceFileSerialization {
   @Test
   public void testJavaSerialization() throws Exception {
-    Path file = new Path(System.getProperty("test.build.data",".") +
-        "/testseqser.seq");
+    Path file = new Path(GenericTestUtils.getTempPath("testseqser.seq"));
     fs.delete(file, true);
     Writer writer = SequenceFile.createWriter(fs, conf, file, Long.class,
@@ -27,6 +27,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Test;
 public class TestSequenceFileSync {
@@ -52,8 +53,8 @@ public class TestSequenceFileSync {
   public void testLowSyncpoint() throws IOException {
     final Configuration conf = new Configuration();
     final FileSystem fs = FileSystem.getLocal(conf);
-    final Path path = new Path(System.getProperty("test.build.data", "/tmp"),
-        "sequencefile.sync.test");
+    final Path path = new Path(GenericTestUtils.getTempPath(
+        "sequencefile.sync.test"));
     final IntWritable input = new IntWritable();
     final Text val = new Text();
     SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path,
@@ -20,7 +20,6 @@ package org.apache.hadoop.io;
 import java.io.*;
 import java.util.*;
-import java.util.concurrent.atomic.AtomicReference;
 import org.apache.commons.logging.*;
@@ -28,6 +27,7 @@ import org.apache.commons.logging.*;
 import org.apache.hadoop.fs.*;
 import org.apache.hadoop.conf.*;
 import org.apache.hadoop.io.SequenceFile.CompressionType;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Test;
 import static org.junit.Assert.assertTrue;
@@ -38,8 +38,7 @@ import static org.junit.Assert.fail;
 /** Support for flat files of binary key/value pairs. */
 public class TestSetFile {
   private static final Log LOG = LogFactory.getLog(TestSetFile.class);
-  private static String FILE =
-      System.getProperty("test.build.data",".") + "/test.set";
+  private static String FILE = GenericTestUtils.getTempPath("test.set");
   private static Configuration conf = new Configuration();
@@ -72,6 +72,7 @@ import org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater;
 import org.apache.hadoop.io.compress.zlib.ZlibCompressor;
 import org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel;
 import org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.io.compress.zlib.ZlibFactory;
 import org.apache.hadoop.util.LineReader;
 import org.apache.hadoop.util.NativeCodeLoader;
@@ -338,9 +339,9 @@ public class TestCodec {
   private static Path writeSplitTestFile(FileSystem fs, Random rand,
       CompressionCodec codec, long infLen) throws IOException {
     final int REC_SIZE = 1024;
-    final Path wd = new Path(new Path(
-        System.getProperty("test.build.data", "/tmp")).makeQualified(fs),
-        codec.getClass().getSimpleName());
+    final Path wd = new Path(GenericTestUtils.getTempPath(
+        codec.getClass().getSimpleName())).makeQualified(
+            fs.getUri(), fs.getWorkingDirectory());
     final Path file = new Path(wd, "test" + codec.getDefaultExtension());
     final byte[] b = new byte[REC_SIZE];
     final Base64 b64 = new Base64(0, null);
@@ -596,9 +597,8 @@ public class TestCodec {
     FileSystem fs = FileSystem.get(conf);
     LOG.info("Creating MapFiles with " + records +
         " records using codec " + clazz.getSimpleName());
-    Path path = new Path(new Path(
-        System.getProperty("test.build.data", "/tmp")),
-        clazz.getSimpleName() + "-" + type + "-" + records);
+    Path path = new Path(GenericTestUtils.getTempPath(
+        clazz.getSimpleName() + "-" + type + "-" + records));
     LOG.info("Writing " + path);
     createMapFile(conf, fs, path, clazz.newInstance(), type, records);
@@ -750,8 +750,7 @@ public class TestCodec {
     CodecPool.returnDecompressor(zlibDecompressor);
     // Now create a GZip text file.
-    String tmpDir = System.getProperty("test.build.data", "/tmp/");
-    Path f = new Path(new Path(tmpDir), "testGzipCodecRead.txt.gz");
+    Path f = new Path(GenericTestUtils.getTempPath("testGzipCodecRead.txt.gz"));
     BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(
         new GZIPOutputStream(new FileOutputStream(f.toString()))));
     final String msg = "This is the message in the file!";
@@ -802,8 +801,7 @@ public class TestCodec {
     CodecPool.returnDecompressor(zlibDecompressor);
     // Now create a GZip text file.
-    String tmpDir = System.getProperty("test.build.data", "/tmp/");
-    Path f = new Path(new Path(tmpDir), "testGzipLongOverflow.bin.gz");
+    Path f = new Path(GenericTestUtils.getTempPath("testGzipLongOverflow.bin.gz"));
     BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(
         new GZIPOutputStream(new FileOutputStream(f.toString()))));
@@ -862,9 +860,8 @@ public class TestCodec {
         codec instanceof GzipCodec);
     final String msg = "This is the message we are going to compress.";
-    final String tmpDir = System.getProperty("test.build.data", "/tmp/");
-    final String fileName = new Path(new Path(tmpDir),
-        "testGzipCodecWrite.txt.gz").toString();
+    final String fileName = new Path(GenericTestUtils.getTempPath(
+        "testGzipCodecWrite.txt.gz")).toString();
     BufferedWriter w = null;
     Compressor gzipCompressor = CodecPool.getCompressor(codec);
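Besides switching to GenericTestUtils.getTempPath, the first TestCodec hunk above also moves off the deprecated Path.makeQualified(FileSystem) overload to makeQualified(URI, Path). Below is a rough sketch of that qualification call against the local file system; the directory name is hypothetical and not taken from the diff.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: qualify a relative Path against a FileSystem's URI and working directory.
public class QualifySketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path wd = new Path("codec-scratch")                      // hypothetical scratch dir name
        .makeQualified(fs.getUri(), fs.getWorkingDirectory());
    System.out.println(wd);                                  // e.g. file:/home/user/codec-scratch
  }
}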
@@ -30,6 +30,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.file.tfile.TFile.Reader;
 import org.apache.hadoop.io.file.tfile.TFile.Writer;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner;
 import org.junit.After;
 import org.junit.Before;
@@ -43,8 +44,7 @@ import static org.junit.Assert.assertFalse;
  *
  */
 public class TestTFile {
-  private static String ROOT =
-      System.getProperty("test.build.data", "/tmp/tfile-test");
+  private static String ROOT = GenericTestUtils.getTempPath("tfile-test");
   private FileSystem fs;
   private Configuration conf;
   private static final int minBlockSize = 512;
@@ -35,6 +35,7 @@ import org.apache.hadoop.io.file.tfile.TFile.Reader;
 import org.apache.hadoop.io.file.tfile.TFile.Writer;
 import org.apache.hadoop.io.file.tfile.TFile.Reader.Location;
 import org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -46,8 +47,7 @@ import org.junit.Test;
  *
  */
 public class TestTFileByteArrays {
-  private static String ROOT =
-      System.getProperty("test.build.data", "/tmp/tfile-test");
+  private static String ROOT = GenericTestUtils.getTestDir().getAbsolutePath();
   private final static int BLOCK_SIZE = 512;
   private final static int BUF_SIZE = 64;
   private final static int K = 1024;
@@ -27,13 +27,13 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.file.tfile.TFile.Writer;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Test;
 import static org.junit.Assert.*;
 public class TestTFileComparator2 {
-  private static final String ROOT = System.getProperty("test.build.data",
-      "/tmp/tfile-test");
+  private static String ROOT = GenericTestUtils.getTestDir().getAbsolutePath();
   private static final String name = "test-tfile-comparator2";
   private final static int BLOCK_SIZE = 512;
   private static final String VALUE = "value";
@@ -30,6 +30,7 @@ import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.file.tfile.TFile.Writer;
+import org.apache.hadoop.test.GenericTestUtils;
 /**
  *
@@ -38,9 +39,7 @@ import org.apache.hadoop.io.file.tfile.TFile.Writer;
  *
  */
 public class TestTFileComparators {
-  private static String ROOT =
-      System.getProperty("test.build.data", "/tmp/tfile-test");
+  private static String ROOT = GenericTestUtils.getTestDir().getAbsolutePath();
   private final static int BLOCK_SIZE = 512;
   private FileSystem fs;
   private Configuration conf;
@@ -45,6 +45,7 @@ import org.apache.hadoop.io.file.tfile.RandomDistribution.DiscreteRNG;
 import org.apache.hadoop.io.file.tfile.TFile.Reader;
 import org.apache.hadoop.io.file.tfile.TFile.Writer;
 import org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner;
+import org.apache.hadoop.test.GenericTestUtils;
 /**
  * test the performance for seek.
@@ -246,8 +247,7 @@ public class TestTFileSeek {
     int fsOutputBufferSizeLzo = 1;
     int fsOutputBufferSizeGz = 1;
-    String rootDir =
-        System.getProperty("test.build.data", "/tmp/tfile-test");
+    String rootDir = GenericTestUtils.getTestDir().getAbsolutePath();
     String file = "TestTFileSeek";
     String compress = "gz";
     int minKeyLen = 10;
@@ -45,6 +45,7 @@ import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.SequenceFile;
 import org.apache.hadoop.io.compress.CompressionCodec;
 import org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Time;
 public class TestTFileSeqFileComparison {
@@ -515,9 +516,7 @@ public class TestTFileSeqFileComparison {
   }
   private static class MyOptions {
-    String rootDir =
-        System
-            .getProperty("test.build.data", "/tmp/tfile-test");
+    String rootDir = GenericTestUtils.getTestDir().getAbsolutePath();
     String compress = "gz";
     String format = "tfile";
     int dictSize = 1000;