HADOOP-11554. Expose HadoopKerberosName as a hadoop subcommand (aw)

Allen Wittenauer 2015-02-11 07:40:39 -08:00
parent c3da2db48f
commit cfd8a2174a
5 changed files with 23 additions and 1 deletion


@@ -27,6 +27,8 @@ Trunk (Unreleased)
     HADOOP-8934. Shell command ls should include sort options (Jonathan Allen
     via aw)

+    HADOOP-11554. Expose HadoopKerberosName as a hadoop subcommand (aw)
+
   IMPROVEMENTS

     HADOOP-8017. Configure hadoop-main pom to get rid of M2E plugin execution


@@ -38,6 +38,7 @@ function hadoop_usage()
   echo "  note: please use \"yarn jar\" to launch"
   echo "        YARN applications, not this command."
   echo "  jnipath     prints the java.library.path"
+  echo "  kerbname    show auth_to_local principal conversion"
   echo "  key         manage keys via the KeyProvider"
   echo "  trace       view and modify Hadoop tracing settings"
   echo "  version     print the version"
@@ -156,6 +157,9 @@ case ${COMMAND} in
       echo "${JAVA_LIBRARY_PATH}"
       exit 0
     ;;
+    kerbname)
+      CLASS=org.apache.hadoop.security.HadoopKerberosName
+    ;;
     key)
       CLASS=org.apache.hadoop.crypto.key.KeyShell
     ;;
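The hunk above wires the new subcommand into the same name-to-class dispatch that every other `hadoop` subcommand uses. A standalone sketch of that pattern (simplified; `COMMAND` stands in for the script's parsed first argument):

```shell
# Minimal sketch of bin/hadoop's dispatch pattern: map a subcommand name
# to a Java class name, then hand off to a single Java launcher.
COMMAND="kerbname"   # assumption: stands in for "$1" in the real script

case ${COMMAND} in
  kerbname)
    CLASS=org.apache.hadoop.security.HadoopKerberosName
  ;;
  key)
    CLASS=org.apache.hadoop.crypto.key.KeyShell
  ;;
esac

# The real script would now exec java with ${CLASS} and the remaining args.
echo "${CLASS}"   # prints org.apache.hadoop.security.HadoopKerberosName
```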


@@ -146,7 +146,7 @@ call :updatepath %HADOOP_BIN_PATH%
   )
 )

-set corecommands=fs version jar checknative distcp daemonlog archive classpath credential key
+set corecommands=fs version jar checknative distcp daemonlog archive classpath credential kerbname key
 for %%i in ( %corecommands% ) do (
   if %hadoop-command% == %%i set corecommand=true
 )
@@ -215,6 +215,10 @@ call :updatepath %HADOOP_BIN_PATH%
   set CLASS=org.apache.hadoop.security.alias.CredentialShell
   goto :eof

+:kerbname
+  set CLASS=org.apache.hadoop.security.HadoopKerberosName
+  goto :eof
+
 :key
   set CLASS=org.apache.hadoop.crypto.key.KeyShell
   goto :eof


@@ -27,6 +27,7 @@
 * [fs](#fs)
 * [jar](#jar)
 * [jnipath](#jnipath)
+* [kerbname](#kerbname)
 * [key](#key)
 * [trace](#trace)
 * [version](#version)
@@ -175,6 +176,15 @@ Usage: `hadoop jnipath`

 Print the computed java.library.path.

+### `kerbname`
+
+Usage: `hadoop kerbname principal`
+
+Convert the named principal via the auth_to_local rules to the Hadoop
+user name.
+
+Example: `hadoop kerbname user@EXAMPLE.COM`
+
 ### `key`

 Manage keys via the KeyProvider.


@@ -162,6 +162,8 @@ Hadoop maps Kerberos principal to OS user account using the rule specified by `h
 By default, it picks the first component of principal name as a user name if the realms matches to the `default_realm` (usually defined in /etc/krb5.conf). For example, `host/full.qualified.domain.name@REALM.TLD` is mapped to `host` by default rule.

+Custom rules can be tested using the `hadoop kerbname` command. This command allows one to specify a principal and apply Hadoop's current auth_to_local ruleset. The output will be what identity Hadoop will use for its usage.
+
 ### Mapping from user to group

 Though files on HDFS are associated to owner and group, Hadoop does not have the definition of group by itself. Mapping from user to group is done by OS or LDAP.
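The default mapping described in the SecureMode change above (realm must match `default_realm`, then the first component of the principal becomes the user name) can be illustrated with a standalone sketch. This is a simplified illustration in plain shell, not Hadoop's actual implementation, and `default_realm` here is a hard-coded stand-in for the value from /etc/krb5.conf:

```shell
# Simplified illustration of the default auth_to_local rule:
# host/full.qualified.domain.name@REALM.TLD -> host
default_realm="REALM.TLD"   # assumption: stands in for /etc/krb5.conf
principal="host/full.qualified.domain.name@REALM.TLD"

realm="${principal#*@}"     # text after "@"  -> REALM.TLD
name="${principal%@*}"      # text before "@" -> host/full.qualified.domain.name

if [ "${realm}" = "${default_realm}" ]; then
  user="${name%%/*}"        # first component -> host
fi

echo "${user}"   # prints host
```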